generative-ts

Function createHuggingfaceInferenceModelProvider

  • Creates a Huggingface Inference ModelProvider with the specified ModelApi

    import {
      createHuggingfaceInferenceModelProvider,
      HfTextGenerationTaskApi,
    } from "generative-ts";

    // Huggingface Inference supports many different APIs and models. See below for the full list.
    const gpt2 = createHuggingfaceInferenceModelProvider({
      api: HfTextGenerationTaskApi,
      modelId: "gpt2",
      // You can pass auth explicitly here; by default it is read from process.env.
    });

    const response = await gpt2.sendRequest({
      $prompt: "Hello,",
      // All other options for the specified `api` are available here.
    });

    console.log(response[0]?.generated_text);

    Compatible APIs

    Provider Setup and Notes

    Create a Huggingface Inference API account at Huggingface

    Obtain a Huggingface User Access Token and either pass it explicitly in auth or set it in the environment as HUGGINGFACE_API_TOKEN
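    The precedence described above can be sketched as follows. The `HuggingfaceAuthConfig` shape shown here is an assumption (verify against the library's exported type), but the order mirrors the documented behavior: explicit `auth` first, then `process.env`.

    ```typescript
    // Assumed shape of HuggingfaceAuthConfig -- verify against the library's types.
    type HuggingfaceAuthConfig = { HUGGINGFACE_API_TOKEN: string };

    // Sketch of the documented precedence: an explicitly passed auth config wins,
    // otherwise the token is read from the HUGGINGFACE_API_TOKEN environment variable.
    function resolveHuggingfaceToken(auth?: HuggingfaceAuthConfig): string {
      const token =
        auth?.HUGGINGFACE_API_TOKEN ?? process.env.HUGGINGFACE_API_TOKEN;
      if (!token) {
        throw new Error(
          "No auth passed and HUGGINGFACE_API_TOKEN not found in process.env",
        );
      }
      return token;
    }
    ```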

    The Huggingface Inference API supports thousands of different models, grouped into "tasks". See the official documentation.

    Currently we ship API classes only for the Conversational Task and the Text Generation Task. If you need another task, you can create a new API and pass it as the api parameter.
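    A custom API for another task would need, at minimum, a way to validate that task's response shape at runtime. This is a hypothetical sketch only (the actual ModelApi contract in generative-ts is not reproduced here), using the real response shape of the Huggingface Summarization task as an illustration:

    ```typescript
    // The Huggingface Summarization task returns an array of objects carrying
    // summary_text (per the Huggingface Inference API docs).
    interface SummarizationResult {
      summary_text: string;
    }

    // Runtime type guard for the summarization response; a custom API class
    // would use something like this to narrow the untyped HTTP response.
    function isSummarizationResponse(x: unknown): x is SummarizationResult[] {
      return (
        Array.isArray(x) &&
        x.every(
          (item) =>
            typeof item === "object" &&
            item !== null &&
            typeof (item as SummarizationResult).summary_text === "string",
        )
      );
    }
    ```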

    Type Parameters

    • THfApi extends HfApi
    • THttpClientOptions = HttpClientOptions

    Parameters

    • params: {
          api: THfApi;
          auth?: HuggingfaceAuthConfig;
          client?: HttpClient<THttpClientOptions>;
          modelId: string;
      }
      • api: THfApi

        The API instance to use for making requests.

      • Optional auth?: HuggingfaceAuthConfig

        Authentication configuration for Huggingface. If not supplied, it will be loaded from the environment.

      • Optional client?: HttpClient<THttpClientOptions>

        HTTP client to use for requests. If not supplied, the built-in fetch-based implementation will be used.

      • modelId: string

        The model ID, as defined by Huggingface.

    Returns HttpModelProvider<InferRequestOptions<THfApi>, InferResponse<THfApi>, THttpClientOptions, {
        modelId: string;
    }>

    The Huggingface Model Provider with the specified ModelApi.
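    For the Text Generation Task used in the example, the inferred response is an array of generations, matching the `response[0]?.generated_text` access shown above. A small helper over that assumed shape:

    ```typescript
    // Response shape for the Huggingface Text Generation task: an array of
    // generations, each carrying generated_text.
    type HfTextGenerationResponse = Array<{ generated_text: string }>;

    // Safely pull the first generation, if any.
    function firstGeneratedText(
      response: HfTextGenerationResponse,
    ): string | undefined {
      return response[0]?.generated_text;
    }
    ```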

    Throws

    If no auth is passed and HUGGINGFACE_API_TOKEN is not found in process.env
