Creates an LMStudio ModelProvider with the OpenAiChatApi

Parameters
Optional client?: HttpClient<THttpClientOptions>
HTTP client to use for requests. If not supplied, the built-in fetch-based implementation will be used.
Optional endpoint?: string
The endpoint URL for the LM Studio model. Defaults to "http://localhost:1234/v1/chat/completions" (see the sketch after this list for overriding it).
modelId: string
The model ID as defined by LMStudio. You must have downloaded this model within LMStudio. If no matching model exists, LMStudio will silently use the first loaded model.
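A minimal sketch of overriding the optional endpoint described above. It assumes the same createLmStudioModelProvider import used in the example that follows; the host address is hypothetical, and the client option is omitted so the built-in fetch-based implementation is used:

import { createLmStudioModelProvider } from "generative-ts";

// Point the provider at a non-default LMStudio server address.
// The host/port below is hypothetical; the default is http://localhost:1234/v1/chat/completions.
const llama3 = createLmStudioModelProvider({
  modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF",
  endpoint: "http://192.168.1.50:1234/v1/chat/completions",
});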
Example: The LM Studio Model Provider with the OpenAiChatApi
import { createLmStudioModelProvider } from "generative-ts";
const llama3 = createLmStudioModelProvider({
modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF", // a ID of a model you have downloaded in LMStudio
});
const response = await llama3.sendRequest({
$prompt: "Brief History of NY Mets:"
// all other OpenAI ChatCompletion options available here (LMStudio uses the OpenAI ChatCompletion API for all the models it hosts)
});
console.log(response.choices[0]?.message.content);
Provider Setup and Notes
Follow LMStudio's instructions to set up the LMStudio local server.
Model Parameters
LMStudio uses the OpenAI ChatCompletion API for all the models it hosts, so the available request parameters are those of the OpenAI ChatCompletion API.
Model IDs
Model IDs are defined by LMStudio, for example "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF"; the model must already be downloaded within LMStudio.
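Because the hosted models speak the OpenAI ChatCompletion API, standard ChatCompletion parameters can be passed alongside $prompt. A minimal sketch using the llama3 provider from the example above; temperature and max_tokens are standard OpenAI ChatCompletion options assumed here to pass through unchanged:

const response = await llama3.sendRequest({
  $prompt: "Brief History of NY Mets:",
  temperature: 0.2, // standard OpenAI ChatCompletion sampling option (assumed passthrough)
  max_tokens: 256, // standard OpenAI ChatCompletion response length cap (assumed passthrough)
});

console.log(response.choices[0]?.message.content);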