generative-ts

Function createLmStudioModelProvider

  • Creates an LMStudio ModelProvider with the OpenAiChatApi

    import { createLmStudioModelProvider } from "generative-ts";

    const llama3 = createLmStudioModelProvider({
      modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF", // an ID of a model you have downloaded in LMStudio
    });

    const response = await llama3.sendRequest({
      $prompt: "Brief History of NY Mets:",
      // all other OpenAI ChatCompletion options are available here (LMStudio uses the OpenAI ChatCompletion API for all the models it hosts)
    });

    console.log(response.choices[0]?.message.content);

    Provider Setup and Notes

    Follow LMStudio's instructions to set up the LMStudio local server.

    LMStudio uses the OpenAI ChatCompletion API for all the models it hosts.
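    Because LMStudio serves the OpenAI ChatCompletion API, standard ChatCompletion options can be passed alongside `$prompt`. A minimal sketch, assuming the OpenAI option names `temperature` and `max_tokens` are forwarded as-is by this generative-ts version (sending the request requires a running LMStudio server):

    ```typescript
    import { createLmStudioModelProvider } from "generative-ts";

    const llama3 = createLmStudioModelProvider({
      modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF",
    });

    // Standard OpenAI ChatCompletion options can be sent alongside $prompt.
    // This helper is hypothetical, for illustration only.
    async function ask(prompt: string) {
      const response = await llama3.sendRequest({
        $prompt: prompt,
        temperature: 0.2, // OpenAI ChatCompletion sampling option
        max_tokens: 256,  // OpenAI ChatCompletion cap on generated tokens
      });
      return response.choices[0]?.message.content;
    }
    ```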

    Model Parameters

    Model IDs

    • The model ID can be found in LMStudio, listed as the "name" of the model.

    Type Parameters

    • THttpClientOptions = HttpClientOptions

    Parameters

    • params: {
          client?: HttpClient<THttpClientOptions>;
          endpoint?: string;
          modelId: string;
      }
      • Optional client?: HttpClient<THttpClientOptions>

        HTTP client to use for requests. If not supplied, the built-in fetch-based implementation will be used.

      • Optional endpoint?: string

        The endpoint URL for the LM Studio model. Defaults to "http://localhost:1234/v1/chat/completions".

      • modelId: string

        The model ID as defined by LMStudio. You must have downloaded this within LMStudio. If no matching model exists, LMStudio will silently use the first loaded model.

    Returns HttpModelProvider<OpenAiChatOptions, OpenAiChatResponse, THttpClientOptions, {
        modelId: string;
    }>

    The LM Studio Model Provider with the OpenAiChatApi
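    If the LMStudio server runs on a non-default host or port, the optional endpoint parameter overrides the default URL. A sketch (the host and port here are hypothetical; the path mirrors the documented default endpoint):

    ```typescript
    import { createLmStudioModelProvider } from "generative-ts";

    // Hypothetical LMStudio server on another machine; same path as the
    // default endpoint ("http://localhost:1234/v1/chat/completions").
    const endpoint = "http://192.168.1.50:1234/v1/chat/completions";

    const llama3 = createLmStudioModelProvider({
      modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF",
      endpoint,
    });
    ```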
