# huggingface.js

## Docs

- [Hugging Face JS libraries](https://huggingface.co/docs/huggingface.js/index.md)
- [🤗 Hugging Face Space Header](https://huggingface.co/docs/huggingface.js/space-header/README.md)
- [🤗 Hugging Face Inference](https://huggingface.co/docs/huggingface.js/inference/README.md)
- [@huggingface/inference](https://huggingface.co/docs/huggingface.js/inference/modules.md)
- [Class: InferenceClientProviderOutputError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientProviderOutputError.md)
- [Class: InferenceClientHubApiError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientHubApiError.md)
- [Class: InferenceClientEndpoint](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientEndpoint.md)
- [Class: InferenceClientRoutingError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientRoutingError.md)
- [Class: InferenceClientInputError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientInputError.md)
- [Class: InferenceClient](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient.md)
- [Class: HfInference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference.md)
- [Class: InferenceClientProviderApiError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientProviderApiError.md)
- [Class: InferenceClientError](https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientError.md)
- [Namespace: snippets](https://huggingface.co/docs/huggingface.js/inference/modules/snippets.md)
- [Interface: TextGenerationInput](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationInput.md)
- [Interface: Options](https://huggingface.co/docs/huggingface.js/inference/interfaces/Options.md)
- [Interface: TextGenerationStreamToken](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamToken.md)
- [Interface: AudioToAudioOutput](https://huggingface.co/docs/huggingface.js/inference/interfaces/AudioToAudioOutput.md)
- [Interface: HeaderParams](https://huggingface.co/docs/huggingface.js/inference/interfaces/HeaderParams.md)
- [Interface: TextGenerationStreamDetails](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamDetails.md)
- [Interface: BaseArgs](https://huggingface.co/docs/huggingface.js/inference/interfaces/BaseArgs.md)
- [Interface: AudioToAudioOutputElem](https://huggingface.co/docs/huggingface.js/inference/interfaces/AudioToAudioOutputElem.md)
- [Interface: TextGenerationOutput](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationOutput.md)
- [Interface: InferenceProviderMappingEntry](https://huggingface.co/docs/huggingface.js/inference/interfaces/InferenceProviderMappingEntry.md)
- [Interface: TextGenerationStreamPrefillToken](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamPrefillToken.md)
- [Interface: Logger](https://huggingface.co/docs/huggingface.js/inference/interfaces/Logger.md)
- [Interface: UrlParams](https://huggingface.co/docs/huggingface.js/inference/interfaces/UrlParams.md)
- [Interface: BodyParams\<T\>](https://huggingface.co/docs/huggingface.js/inference/interfaces/BodyParams.md)
- [Interface: TextGenerationStreamOutput](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamOutput.md)
- [Interface: TextGenerationStreamBestOfSequence](https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamBestOfSequence.md)
- [`@huggingface/gguf`](https://huggingface.co/docs/huggingface.js/gguf/README.md)
- [@huggingface/mcp-client](https://huggingface.co/docs/huggingface.js/mcp-client/README.md)
- [🤗 Hugging Face Hub API](https://huggingface.co/docs/huggingface.js/hub/README.md)
- [@huggingface/hub](https://huggingface.co/docs/huggingface.js/hub/modules.md)
- [Class: InvalidApiResponseFormatError](https://huggingface.co/docs/huggingface.js/hub/classes/InvalidApiResponseFormatError.md)
- [Class: \_\_internal\_XetBlob](https://huggingface.co/docs/huggingface.js/hub/classes/_internal_XetBlob.md)
- [Class: HubApiError](https://huggingface.co/docs/huggingface.js/hub/classes/HubApiError.md)
- [Interface: SpaceRuntime](https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceRuntime.md)
- [Interface: XetFileInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/XetFileInfo.md)
- [Interface: SecurityFileStatus](https://huggingface.co/docs/huggingface.js/hub/interfaces/SecurityFileStatus.md)
- [Interface: WhoAmIUser](https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIUser.md)
- [Interface: AuthInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/AuthInfo.md)
- [Interface: ModelConfig](https://huggingface.co/docs/huggingface.js/hub/interfaces/ModelConfig.md)
- [Interface: Credentials](https://huggingface.co/docs/huggingface.js/hub/interfaces/Credentials.md)
- [Interface: CommitData](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitData.md)
- [Interface: PathInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/PathInfo.md)
- [Interface: CommitOutput](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitOutput.md)
- [Interface: TensorInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/TensorInfo.md)
- [Interface: ModelEntry](https://huggingface.co/docs/huggingface.js/hub/interfaces/ModelEntry.md)
- [Interface: CachedRepoInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedRepoInfo.md)
- [Interface: RepoId](https://huggingface.co/docs/huggingface.js/hub/interfaces/RepoId.md)
- [Interface: SpaceResourceRequirement](https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceResourceRequirement.md)
- [Interface: CachedRevisionInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedRevisionInfo.md)
- [Interface: CachedFileInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedFileInfo.md)
- [Interface: LfsPathInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/LfsPathInfo.md)
- [Interface: SafetensorsIndexJson](https://huggingface.co/docs/huggingface.js/hub/interfaces/SafetensorsIndexJson.md)
- [Interface: SafetensorsShardFileInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/SafetensorsShardFileInfo.md)
- [Interface: DatasetEntry](https://huggingface.co/docs/huggingface.js/hub/interfaces/DatasetEntry.md)
- [Interface: WhoAmIApp](https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIApp.md)
- [Interface: CommitDeletedEntry](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitDeletedEntry.md)
- [Interface: CommitInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitInfo.md)
- [Interface: CommitEditFile](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitEditFile.md)
- [Interface: CommitFile](https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitFile.md)
- [Interface: OAuthResult](https://huggingface.co/docs/huggingface.js/hub/interfaces/OAuthResult.md)
- [Interface: SpaceResourceConfig](https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceResourceConfig.md)
- [Interface: SpaceEntry](https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceEntry.md)
- [Interface: FileDownloadInfoOutput](https://huggingface.co/docs/huggingface.js/hub/interfaces/FileDownloadInfoOutput.md)
- [Interface: QuantizationConfig](https://huggingface.co/docs/huggingface.js/hub/interfaces/QuantizationConfig.md)
- [Interface: WhoAmIOrg](https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIOrg.md)
- [Interface: HFCacheInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/HFCacheInfo.md)
- [Interface: ListFileEntry](https://huggingface.co/docs/huggingface.js/hub/interfaces/ListFileEntry.md)
- [Interface: UserInfo](https://huggingface.co/docs/huggingface.js/hub/interfaces/UserInfo.md)
- [@huggingface/tiny-agents](https://huggingface.co/docs/huggingface.js/tiny-agents/README.md)

### Hugging Face JS libraries
https://huggingface.co/docs/huggingface.js/index.md

# Hugging Face JS libraries

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

- [@huggingface/inference](inference/README): Use all supported (serverless) Inference Providers or switch to Inference Endpoints (dedicated) to make calls to 100,000+ Machine Learning models
- [@huggingface/hub](hub/README): Interact with huggingface.co to create or delete repos and commit / download files
- [@huggingface/mcp-client](mcp-client/README): A Model Context Protocol (MCP) client, and a tiny Agent library, built on top of InferenceClient.
- [@huggingface/gguf](gguf/README): A GGUF parser that works on remotely hosted files.
- [@huggingface/dduf](dduf/README): Similar package for DDUF (DDUF Diffusers Unified Format)
- [@huggingface/tasks](tasks/README): The definition files and source-of-truth for the Hub's main primitives like pipeline tasks, model libraries, etc.
- [@huggingface/jinja](jinja/README): A minimalistic JS implementation of the Jinja templating engine, to be used for ML chat templates.
- [@huggingface/space-header](space-header/README): Use the Space `mini_header` outside Hugging Face
- [@huggingface/ollama-utils](ollama-utils/README): Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.
- [@huggingface/tiny-agents](tiny-agents/README): A tiny, model-agnostic library for building AI agents that can use tools.


We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.
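Because the libraries rely on modern built-ins such as the global `fetch` (shipped in Node.js 18), a quick feature check can fail fast on older runtimes. A minimal sketch, not part of any package; `hasModernRuntime` is a hypothetical helper:

```typescript
// Sketch: detect the modern built-ins these libraries depend on.
// Global fetch and ReadableStream shipped in Node.js 18; older
// runtimes would need polyfills, which these packages do not bundle.
function hasModernRuntime(): boolean {
  return typeof fetch === "function" && typeof ReadableStream === "function";
}

if (!hasModernRuntime()) {
  throw new Error("Please use Node.js >= 18, a modern browser, Bun, or Deno.");
}
```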

The libraries are still very young; please help us by opening issues!

## Installation

### From NPM

To install via NPM, add the libraries you need:

```bash
npm install @huggingface/inference
npm install @huggingface/hub
npm install @huggingface/mcp-client
```

Then import the libraries in your code:

```ts
import { InferenceClient } from "@huggingface/inference";
import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
import { McpClient } from "@huggingface/mcp-client";
import type { RepoId } from "@huggingface/hub";
```

### From CDN or Static hosting

You can run our packages with vanilla JS, without any bundler, by using a CDN or static hosting. Using [ES modules](https://hacks.mozilla.org/2018/03/es-modules-a-cartoon-deep-dive/), i.e. `<script type="module">`, you can import the libraries in your code:

```html
<script type="module">
    import { InferenceClient } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@4.13.2/+esm';
    import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/hub@2.6.12/+esm";
</script>
```

### Deno

```ts
// esm.sh
import { InferenceClient } from "https://esm.sh/@huggingface/inference"

import { createRepo, commit, deleteRepo, listFiles } from "https://esm.sh/@huggingface/hub"
// or npm:
import { InferenceClient } from "npm:@huggingface/inference"

import { createRepo, commit, deleteRepo, listFiles } from "npm:@huggingface/hub"
```

## Usage examples

Get your HF access token in your [account settings](https://huggingface.co/settings/tokens).

### @huggingface/inference examples

```ts
import { InferenceClient } from "@huggingface/inference";

const HF_TOKEN = "hf_...";

const client = new InferenceClient(HF_TOKEN);

// Chat completion API
const out = await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
});
console.log(out.choices[0].message);

// Streaming chat completion API
for await (const chunk of client.chatCompletionStream({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
})) {
  console.log(chunk.choices[0].delta.content);
}

// Using a third-party provider:
await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
  provider: "sambanova", // or together, fal-ai, replicate, cohere …
})

await client.textToImage({
  model: "black-forest-labs/FLUX.1-dev",
  inputs: "a picture of a green bird",
  provider: "fal-ai",
})



// You can also omit "model" to use the recommended model for the task
await client.translation({
  inputs: "My name is Wolfgang and I live in Amsterdam",
  parameters: {
    src_lang: "en",
    tgt_lang: "fr",
  },
});

// pass multimodal files or URLs as inputs
await client.imageToText({
  model: 'nlpconnect/vit-gpt2-image-captioning',
  data: await (await fetch('https://picsum.photos/300/300')).blob(),
})

// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2Client = client.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2Client.textGeneration({ inputs: 'The answer to the universe is' });

// Chat Completion
const llamaEndpoint = client.endpoint(
  "https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.1-8B-Instruct"
);
const res = await llamaEndpoint.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
});
console.log(res.choices[0].message);
```

### @huggingface/hub examples

```ts
import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";

const HF_TOKEN = "hf_...";

await createRepo({
  repo: "my-user/nlp-model", // or { type: "model", name: "my-user/nlp-model" },
  accessToken: HF_TOKEN
});

await uploadFile({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN,
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob([/* file bytes */])
  }
});

await deleteFiles({
  repo: { type: "space", name: "my-user/my-space" }, // or "spaces/my-user/my-space"
  accessToken: HF_TOKEN,
  paths: ["README.md", ".gitattributes"]
});
```

### @huggingface/mcp-client example

```ts
import { Agent } from '@huggingface/mcp-client';

const HF_TOKEN = "hf_...";

const agent = new Agent({
  provider: "auto",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: HF_TOKEN,
  servers: [
    {
      // Playwright MCP
      command: "npx",
      args: ["@playwright/mcp@latest"],
    },
  ],
});

await agent.loadTools();

for await (const chunk of agent.run("What are the top 5 trending models on Hugging Face?")) {
    if ("choices" in chunk) {
        const delta = chunk.choices[0]?.delta;
        if (delta.content) {
            console.log(delta.content);
        }
    }
}
```

There are more features of course, check each library's README!

## Formatting & testing

```console
sudo corepack enable
pnpm install

pnpm -r format:check
pnpm -r lint:check
pnpm -r test
```

## Building

```console
pnpm -r build
```

This will generate ESM and CJS JavaScript files in `packages/*/dist`, e.g. `packages/inference/dist/index.mjs`.



### 🤗 Hugging Face Space Header
https://huggingface.co/docs/huggingface.js/space-header/README.md

# 🤗 Hugging Face Space Header

A TypeScript-powered wrapper for the Space `mini_header` feature.

![space header preview](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/space-header-package/thumbnail.png)

## Install

```console
pnpm add @huggingface/space-header

npm add @huggingface/space-header

yarn add @huggingface/space-header
```

### Deno

```ts
// esm.sh
import { init } from "https://esm.sh/@huggingface/space-header"
// or npm:
import { init } from "npm:@huggingface/space-header"
```

### Initialize
```ts
import { init } from "@huggingface/space-header";

// ...

init(":user/:spaceId");
// init("enzostvs/lora-studio") for example
```
❗Important note: The `init` method must be called on the client side.
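In server-side-rendered apps (Next.js and friends), a small guard keeps `init` from running during SSR. A hedged sketch; `canUseSpaceHeader` is a hypothetical helper, not exported by the package:

```typescript
// Hypothetical guard: the space header touches the DOM, so only
// initialize it when window and document actually exist (i.e. in a browser).
function canUseSpaceHeader(): boolean {
  return typeof window !== "undefined" && typeof document !== "undefined";
}

// In a browser: if (canUseSpaceHeader()) init("enzostvs/lora-studio");
```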

## Usage

Use the `target` option to inject the space header into another DOM element:

```ts
const app = document.getElementById("app");

// ...

init(":user/:spaceId", {
  target: app
});
```

If you already have the space data, you can also pass it as a parameter to avoid a fetch:

```ts
init(space);

// space = {
//  id: string;
//  likes: number;
//  author: string;
// }
```



### 🤗 Hugging Face Inference
https://huggingface.co/docs/huggingface.js/inference/README.md

# 🤗 Hugging Face Inference

A TypeScript-powered wrapper that provides a unified interface to run inference across multiple services for models hosted on the Hugging Face Hub:

1.  [Inference Providers](https://huggingface.co/docs/inference-providers/index): a streamlined, unified access to hundreds of machine learning models, powered by our serverless inference partners. This new approach builds on our previous Serverless Inference API, offering more models, improved performance, and greater reliability thanks to world-class providers. Refer to the [documentation](https://huggingface.co/docs/inference-providers/index#partners) for a list of supported providers.
2.  [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice.
3.  Local endpoints: you can also run inference with local inference servers like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://ollama.com/), [vLLM](https://github.com/vllm-project/vllm), [LiteLLM](https://docs.litellm.ai/docs/simple_proxy), or [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) by connecting the client to these local endpoints.

## Getting Started

### Install

#### Node

```console
npm install @huggingface/inference

pnpm add @huggingface/inference

yarn add @huggingface/inference
```

#### Deno

```ts
// esm.sh
import { InferenceClient } from "https://esm.sh/@huggingface/inference";
// or npm:
import { InferenceClient } from "npm:@huggingface/inference";
```

### Initialize

```typescript
import { InferenceClient } from '@huggingface/inference';

const hf = new InferenceClient('your access token');
```

❗**Important note:** Always pass an access token. Join [Hugging Face](https://huggingface.co/join) and then visit [access tokens](https://huggingface.co/settings/tokens) to generate your access token for **free**.

Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
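One way to keep the token off the client is a thin server route that injects the `Authorization` header before forwarding requests. This is a sketch under assumptions: `buildUpstreamHeaders` is a hypothetical helper, not part of `@huggingface/inference`.

```typescript
// Server-side only: the token is read from an environment variable and is
// never shipped to the browser. Hypothetical helper for a proxy route.
const HF_TOKEN = process.env.HF_TOKEN ?? "hf_...";

// Drop any auth header the client sent, then inject the server's token.
function buildUpstreamHeaders(
  incoming: Record<string, string>
): Record<string, string> {
  const headers: Record<string, string> = { ...incoming };
  delete headers["authorization"];
  headers["authorization"] = `Bearer ${HF_TOKEN}`;
  return headers;
}
```

Your front-end code then calls the proxy route instead of the inference backend directly, and the proxy forwards the request with these headers.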

## Using Inference Providers

You can send inference requests to third-party providers with the inference client.

Currently, we support the following providers:
- [Fal.ai](https://fal.ai)
- [Featherless AI](https://featherless.ai)
- [Fireworks AI](https://fireworks.ai)
- [HF Inference](https://huggingface.co/docs/inference-providers/providers/hf-inference)
- [Hyperbolic](https://hyperbolic.xyz)
- [Nebius](https://studio.nebius.ai)
- [Novita](https://novita.ai)
- [Nscale](https://nscale.com)
- [OVHcloud](https://endpoints.ai.cloud.ovh.net/)
- [Public AI](https://publicai.co)
- [Replicate](https://replicate.com)
- [Sambanova](https://sambanova.ai)
- [Scaleway](https://www.scaleway.com/en/generative-apis/)
- [Clarifai](https://clarifai.com)
- [Together](https://together.xyz)
- [Baseten](https://baseten.co)
- [Blackforestlabs](https://blackforestlabs.ai)
- [Cohere](https://cohere.com)
- [Cerebras](https://cerebras.ai/)
- [Groq](https://groq.com)
- [Wavespeed.ai](https://wavespeed.ai/)
- [Z.ai](https://z.ai/)

To send requests to a third-party provider, you have to pass the `provider` parameter to the inference function. The default value of the `provider` parameter is "auto", which will select the first of the providers available for the model, sorted by your preferred order in https://hf.co/settings/inference-providers.

```ts
const accessToken = "hf_..."; // Either a HF access token, or an API key from the third-party provider (Replicate in this example)

const client = new InferenceClient(accessToken);
await client.textToImage({
  provider: "replicate",
  model: "black-forest-labs/FLUX.1-dev",
  inputs: "A black forest cake"
})
```

You also have to make sure your request is authenticated with an access token.
When authenticated with a Hugging Face access token, the request is routed through https://huggingface.co.
When authenticated with a third-party provider key, the request is made directly against that provider's inference API.

Only a subset of models are supported when requesting third-party providers. You can check the list of supported models per pipeline tasks here:
- [Fal.ai supported models](https://huggingface.co/api/partners/fal-ai/models)
- [Featherless AI supported models](https://huggingface.co/api/partners/featherless-ai/models)
- [Fireworks AI supported models](https://huggingface.co/api/partners/fireworks-ai/models)
- [HF Inference supported models](https://huggingface.co/api/partners/hf-inference/models)
- [Hyperbolic supported models](https://huggingface.co/api/partners/hyperbolic/models)
- [Nebius supported models](https://huggingface.co/api/partners/nebius/models)
- [Nscale supported models](https://huggingface.co/api/partners/nscale/models)
- [OVHcloud supported models](https://huggingface.co/api/partners/ovhcloud/models)
- [Replicate supported models](https://huggingface.co/api/partners/replicate/models)
- [Sambanova supported models](https://huggingface.co/api/partners/sambanova/models)
- [Scaleway supported models](https://huggingface.co/api/partners/scaleway/models)
- [Together supported models](https://huggingface.co/api/partners/together/models)
- [Baseten supported models](https://huggingface.co/api/partners/baseten/models)
- [Clarifai supported models](https://huggingface.co/api/partners/clarifai/models)
- [Cohere supported models](https://huggingface.co/api/partners/cohere/models)
- [Cerebras supported models](https://huggingface.co/api/partners/cerebras/models)
- [Groq supported models](https://console.groq.com/docs/models)
- [Novita AI supported models](https://huggingface.co/api/partners/novita/models)
- [Wavespeed.ai supported models](https://huggingface.co/api/partners/wavespeed/models)
- [Z.ai supported models](https://huggingface.co/api/partners/zai-org/models)

❗**Important note:** To be compatible, the third-party API must adhere to the "standard" API shape we expect on HF model pages for each pipeline task type.
This is not an issue for LLMs as everyone converged on the OpenAI API anyways, but can be more tricky for other tasks like "text-to-image" or "automatic-speech-recognition" where there exists no standard API. Let us know if any help is needed or if we can make things easier for you!

👋**Want to add another provider?** Get in touch if you'd like to add support for another Inference provider, and/or request it on https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/49

### Tree-shaking

You can import the functions you need directly from the module instead of using the `InferenceClient` class.

```ts
import { textGeneration } from "@huggingface/inference";

await textGeneration({
  accessToken: "hf_...",
  model: "model_or_endpoint",
  inputs: ...,
  parameters: ...
})
```

This will enable tree-shaking by your bundler.

### Error handling

The inference package provides specific error types to help you handle different error scenarios effectively.

#### Error Types

The package defines several error types that extend the base `Error` class:

- `InferenceClientError`: Base error class for all Hugging Face Inference errors
- `InferenceClientInputError`: Thrown when there are issues with input parameters
- `InferenceClientProviderApiError`: Thrown when there are API-level errors from providers
- `InferenceClientHubApiError`: Thrown when there are API-level errors from the Hugging Face Hub
- `InferenceClientProviderOutputError`: Thrown when there are issues with providers' API responses format

### Example Usage

```typescript
import { InferenceClient } from "@huggingface/inference";
import {
  InferenceClientError,
  InferenceClientProviderApiError,
  InferenceClientProviderOutputError,
  InferenceClientHubApiError,
} from "@huggingface/inference";

const client = new InferenceClient();

try {
  const result = await client.textGeneration({
    model: "gpt2",
    inputs: "Hello, I'm a language model",
  });
} catch (error) {
  if (error instanceof InferenceClientProviderApiError) {
    // Handle API errors (e.g., rate limits, authentication issues)
    console.error("Provider API Error:", error.message);
    console.error("HTTP Request details:", error.request);
    console.error("HTTP Response details:", error.response);
  } else if (error instanceof InferenceClientHubApiError) {
    // Handle Hub API errors (e.g., rate limits, authentication issues)
    console.error("Hub API Error:", error.message);
    console.error("HTTP Request details:", error.request);
    console.error("HTTP Response details:", error.response);
  } else if (error instanceof InferenceClientProviderOutputError) {
    // Handle malformed responses from providers
    console.error("Provider Output Error:", error.message);
  } else if (error instanceof InferenceClientInputError) {
    // Handle invalid input parameters
    console.error("Input Error:", error.message);
  } else {
    // Handle unexpected errors
    console.error("Unexpected error:", error);
  }
}

// Catch all errors from @huggingface/inference
try {
  const result = await client.textGeneration({
    model: "gpt2",
    inputs: "Hello, I'm a language model",
  });
} catch (error) {
  if (error instanceof InferenceClientError) {
    // Handle errors from @huggingface/inference
    console.error("Error from InferenceClient:", error);
  } else {
    // Handle unexpected errors
    console.error("Unexpected error:", error);
  }
}
```

### Error Details

#### InferenceClientProviderApiError

This error occurs when there are issues with the API request when performing inference at the selected provider.

It has several properties:
- `message`: A descriptive error message
- `request`: Details about the failed request (URL, method, headers)
- `response`: Response details including status code and body

#### InferenceClientHubApiError

This error occurs when there are issues with the API request when requesting the Hugging Face Hub API.

It has several properties:
- `message`: A descriptive error message
- `request`: Details about the failed request (URL, method, headers)
- `response`: Response details including status code and body

#### InferenceClientProviderOutputError

This error occurs when a provider returns a response in an unexpected format.

#### InferenceClientInputError

This error occurs when input parameters are invalid or missing. The error message describes what's wrong with the input.
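Since `InferenceClientProviderApiError` exposes the HTTP response, a common pattern is retrying transient failures such as a provider rate limit. A generic sketch: `withRetry` is a hypothetical helper, not part of the package, and the `response.status` field is assumed from the property list above.

```typescript
// Hypothetical retry helper: re-run an async call while the caught error
// passes `shouldRetry`, up to `attempts` tries, then rethrow the error.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (err: unknown) => boolean,
  attempts = 3
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts - 1 || !shouldRetry(err)) throw err;
    }
  }
}

// e.g. withRetry(
//   () => client.textGeneration({ model: "gpt2", inputs: "Hello" }),
//   (e) => e instanceof InferenceClientProviderApiError && e.response.status === 429
// );
```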

### Natural Language Processing

#### Text Generation

Generates text from an input prompt.

```typescript
await hf.textGeneration({
  model: 'mistralai/Mixtral-8x7B-v0.1',
  provider: "together",
  inputs: 'The answer to the universe is'
})

for await (const output of hf.textGenerationStream({
  model: "mistralai/Mixtral-8x7B-v0.1",
  provider: "together",
  inputs: 'repeat "one two three four"',
  parameters: { max_new_tokens: 250 }
})) {
  console.log(output.token.text, output.generated_text);
}
```

#### Chat Completion

Generates a model response from a list of messages comprising a conversation.

```typescript
// Non-streaming API
const out = await hf.chatCompletion({
  model: "Qwen/Qwen3-32B",
  provider: "cerebras",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
  temperature: 0.1,
});

// Streaming API
let text = "";
for await (const chunk of hf.chatCompletionStream({
  model: "Qwen/Qwen3-32B",
  provider: "cerebras",
  messages: [
    { role: "user", content: "Can you help me solve an equation?" },
  ],
  max_tokens: 512,
  temperature: 0.1,
})) {
  if (chunk.choices && chunk.choices.length > 0) {
    text += chunk.choices[0].delta.content;
  }
}
```
#### Feature Extraction

This task reads some text and outputs raw float values (an embedding) that are usually consumed as part of a semantic database or semantic search.

```typescript
await hf.featureExtraction({
  model: "sentence-transformers/distilbert-base-nli-mean-tokens",
  inputs: "That is a happy person",
});
```
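The returned vectors are typically compared with cosine similarity when building a semantic search index. A minimal sketch, not part of the library; `cosineSimilarity` and `rank` are hypothetical helpers:

```typescript
// Cosine similarity between two embedding vectors: 1 means the same
// direction (semantically close), 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate embeddings against a query embedding, most similar first.
function rank(query: number[], candidates: number[][]): number[] {
  return candidates
    .map((c, i) => ({ i, score: cosineSimilarity(query, c) }))
    .sort((x, y) => y.score - x.score)
    .map((x) => x.i);
}
```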

#### Fill Mask

Tries to fill in the blank with a missing word (a token, to be precise).

```typescript
await hf.fillMask({
  model: 'bert-base-uncased',
  inputs: '[MASK] world!'
})
```

#### Summarization

Summarizes longer text into shorter text. Note that some models have a maximum input length.

```typescript
await hf.summarization({
  model: 'facebook/bart-large-cnn',
  inputs:
    'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930.',
  parameters: {
    max_length: 100
  }
})
```

#### Question Answering

Answers questions based on the context you provide.

```typescript
await hf.questionAnswering({
  model: 'deepset/roberta-base-squad2',
  inputs: {
    question: 'What is the capital of France?',
    context: 'The capital of France is Paris.'
  }
})
```

#### Table Question Answering

```typescript
await hf.tableQuestionAnswering({
  model: 'google/tapas-base-finetuned-wtq',
  inputs: {
    query: 'How many stars does the transformers repository have?',
    table: {
      Repository: ['Transformers', 'Datasets', 'Tokenizers'],
      Stars: ['36542', '4512', '3934'],
      Contributors: ['651', '77', '34'],
      'Programming language': ['Python', 'Python', 'Rust, Python and NodeJS']
    }
  }
})
```

#### Text Classification

Often used for sentiment analysis, this method will assign labels to the given text along with a probability score of that label.

```typescript
await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'I like you. I love you.'
})
```

#### Token Classification

Used for sentence parsing, either grammatical, or Named Entity Recognition (NER) to understand keywords contained within text.

```typescript
await hf.tokenClassification({
  model: 'dbmdz/bert-large-cased-finetuned-conll03-english',
  inputs: 'My name is Sarah Jessica Parker but you can call me Jessica'
})
```

#### Translation

Converts text from one language to another.

```typescript
await hf.translation({
  model: 't5-base',
  inputs: 'My name is Wolfgang and I live in Berlin'
})

await hf.translation({
  model: 'facebook/mbart-large-50-many-to-many-mmt',
  inputs: 'My name is Wolfgang and I live in Berlin',
  parameters: {
    src_lang: 'en_XX',
    tgt_lang: 'fr_XX'
  }
})
```

#### Zero-Shot Classification

Checks how well an input text fits into a set of labels you provide.

```typescript
await hf.zeroShotClassification({
  model: 'facebook/bart-large-mnli',
  inputs: [
    'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!'
  ],
  parameters: { candidate_labels: ['refund', 'legal', 'faq'] }
})
```

#### Sentence Similarity

Calculate the semantic similarity between one text and a list of other sentences.

```typescript
await hf.sentenceSimilarity({
  model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
  inputs: {
    source_sentence: 'That is a happy person',
    sentences: [
      'That is a happy dog',
      'That is a very happy person',
      'Today is a sunny day'
    ]
  }
})
```
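The call resolves to an array of similarity scores, one per sentence in the same order as the `sentences` input. A small sketch (under that assumption) for picking the closest match:

```typescript
// Sketch: find the sentence most similar to the source sentence.
// Assumes the call resolves to an array of scores aligned with the
// `sentences` array that was sent in the request.
function mostSimilar(sentences: string[], scores: number[]): string {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return sentences[best];
}

const sentences = [
  'That is a happy dog',
  'That is a very happy person',
  'Today is a sunny day'
];
// Hypothetical scores for the example above:
const scores = [0.69, 0.94, 0.26];
console.log(mostSimilar(sentences, scores));
```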

### Audio

#### Automatic Speech Recognition

Transcribes speech from an audio file.

[Demo](https://huggingface.co/spaces/huggingfacejs/speech-recognition-vue)

```typescript
await hf.automaticSpeechRecognition({
  model: 'facebook/wav2vec2-large-960h-lv60-self',
  data: readFileSync('test/sample1.flac')
})
```

#### Audio Classification

Assigns labels to the given audio along with a probability score for each label.

[Demo](https://huggingface.co/spaces/huggingfacejs/audio-classification-vue)

```typescript
await hf.audioClassification({
  model: 'superb/hubert-large-superb-er',
  data: readFileSync('test/sample1.flac')
})
```

#### Text To Speech

Generates natural-sounding speech from text input.

[Interactive tutorial](https://scrimba.com/scrim/co8da4d23b49b648f77f4848a?pl=pkVnrP7uP)

```typescript
await hf.textToSpeech({
  model: 'espnet/kan-bayashi_ljspeech_vits',
  inputs: 'Hello world!'
})
```
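The call resolves to an audio `Blob`. In Node.js you might persist it to disk like this (a sketch, not part of the library):

```typescript
import { writeFile } from 'node:fs/promises';

// Sketch: persist a generated audio Blob to disk in Node.js.
// Converting the Blob through an ArrayBuffer yields a Buffer that
// fs can write directly.
async function saveBlob(blob: Blob, path: string): Promise<void> {
  const buffer = Buffer.from(await blob.arrayBuffer());
  await writeFile(path, buffer);
}

// Usage (assuming `speech` is the Blob returned by hf.textToSpeech):
// await saveBlob(speech, 'hello.flac');
```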

#### Audio To Audio

Outputs one or more generated audio files from an input audio, commonly used for speech enhancement and source separation.

```typescript
await hf.audioToAudio({
  model: 'speechbrain/sepformer-wham',
  data: readFileSync('test/sample1.flac')
})
```

### Computer Vision

#### Image Classification

Assigns labels to a given image along with a probability score for each label.

[Demo](https://huggingface.co/spaces/huggingfacejs/image-classification-vue)

```typescript
await hf.imageClassification({
  data: readFileSync('test/cheetah.png'),
  model: 'google/vit-base-patch16-224'
})
```

#### Object Detection

Detects objects within an image and returns labels with corresponding bounding boxes and probability scores.

[Demo](https://huggingface.co/spaces/huggingfacejs/object-detection-vue)

```typescript
await hf.objectDetection({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50'
})
```

#### Image Segmentation

Detects segments within an image and returns labels with corresponding masks and probability scores.

```typescript
await hf.imageSegmentation({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50-panoptic'
})
```

#### Image To Text

Outputs text from a given image, commonly used for captioning or optical character recognition.

```typescript
await hf.imageToText({
  data: readFileSync('test/cats.png'),
  model: 'nlpconnect/vit-gpt2-image-captioning'
})
```

#### Text To Image

Creates an image from a text prompt.

[Demo](https://huggingface.co/spaces/huggingfacejs/image-to-text)

```typescript
await hf.textToImage({
  model: 'black-forest-labs/FLUX.1-dev',
  inputs: 'a picture of a green bird'
})
```

#### Image To Image

Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain.

[Interactive tutorial](https://scrimba.com/scrim/co4834bf9a91cc81cfab07969?pl=pkVnrP7uP)

```typescript
await hf.imageToImage({
  inputs: new Blob([readFileSync("test/stormtrooper_depth.png")]),
  parameters: {
    prompt: "elmo's lecture",
  },
  model: "lllyasviel/sd-controlnet-depth",
});
```

#### Zero Shot Image Classification

Checks how well an input image fits into a set of labels you provide.

```typescript
await hf.zeroShotImageClassification({
  model: 'openai/clip-vit-large-patch14-336',
  inputs: {
    image: await (await fetch('https://placekitten.com/300/300')).blob()
  },
  parameters: {
    candidate_labels: ['cat', 'dog']
  }
})
```

### Multimodal

#### Visual Question Answering

Visual Question Answering is the task of answering open-ended questions based on an image. These models output natural language responses to natural language questions.

[Demo](https://huggingface.co/spaces/huggingfacejs/doc-vis-qa)

```typescript
await hf.visualQuestionAnswering({
  model: 'dandelin/vilt-b32-finetuned-vqa',
  inputs: {
    question: 'How many cats are lying down?',
    image: await (await fetch('https://placekitten.com/300/300')).blob()
  }
})
```

#### Document Question Answering

Document question answering models take a (document, question) pair as input and return an answer in natural language.

[Demo](https://huggingface.co/spaces/huggingfacejs/doc-vis-qa)

```typescript
await hf.documentQuestionAnswering({
  model: 'impira/layoutlm-document-qa',
  inputs: {
    question: 'Invoice number?',
    image: await (await fetch('https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png')).blob(),
  }
})
```

### Tabular

#### Tabular Regression

Tabular regression is the task of predicting a numerical value given a set of attributes.

```typescript
await hf.tabularRegression({
  model: "scikit-learn/Fish-Weight",
  inputs: {
    data: {
      "Height": ["11.52", "12.48", "12.3778"],
      "Length1": ["23.2", "24", "23.9"],
      "Length2": ["25.4", "26.3", "26.5"],
      "Length3": ["30", "31.2", "31.1"],
      "Species": ["Bream", "Bream", "Bream"],
      "Width": ["4.02", "4.3056", "4.6961"]
    },
  },
})
```

#### Tabular Classification

Tabular classification is the task of classifying a target category (a group) based on a set of attributes.

```typescript
await hf.tabularClassification({
  model: "vvmnnnkv/wine-quality",
  inputs: {
    data: {
      "fixed_acidity": ["7.4", "7.8", "10.3"],
      "volatile_acidity": ["0.7", "0.88", "0.32"],
      "citric_acid": ["0", "0", "0.45"],
      "residual_sugar": ["1.9", "2.6", "6.4"],
      "chlorides": ["0.076", "0.098", "0.073"],
      "free_sulfur_dioxide": ["11", "25", "5"],
      "total_sulfur_dioxide": ["34", "67", "13"],
      "density": ["0.9978", "0.9968", "0.9976"],
      "pH": ["3.51", "3.2", "3.23"],
      "sulphates": ["0.56", "0.68", "0.82"],
      "alcohol": ["9.4", "9.8", "12.6"]
    },
  },
})
```

You can use any Chat Completion API-compatible provider with the `chatCompletion` method.

```typescript
// Chat Completion Example
const MISTRAL_KEY = process.env.MISTRAL_KEY;
const hf = new InferenceClient(MISTRAL_KEY, {
  endpointUrl: "https://api.mistral.ai",
});
const stream = hf.chatCompletionStream({
  model: "mistral-tiny",
  messages: [{ role: "user", content: "Complete the equation one + one = , just the answer" }],
});
let out = "";
for await (const chunk of stream) {
  if (chunk.choices && chunk.choices.length > 0) {
    out += chunk.choices[0].delta.content;
    console.log(out);
  }
}
```

## Using Inference Endpoints

The examples above use inference providers, which are very useful for prototyping and testing things quickly. Once you're ready to deploy your model to production, you'll need dedicated infrastructure. That's where [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) comes into play. It allows you to deploy any model and expose it as a private API. Once deployed, you'll get a URL that you can connect to:

```typescript
import { InferenceClient } from '@huggingface/inference';

const hf = new InferenceClient("hf_xxxxxxxxxxxxxx", {
	endpointUrl: "https://j3z5luu0ooo76jnl.us-east-1.aws.endpoints.huggingface.cloud/v1/",
});

const response = await hf.chatCompletion({
	messages: [
		{
			role: "user",
			content: "What is the capital of France?",
		},
	],
});

console.log(response.choices[0].message.content);
```

By default, all calls to the inference endpoint will wait until the model is loaded. When [scaling to 0](https://huggingface.co/docs/inference-endpoints/en/autoscaling#scaling-to-0)
is enabled on the endpoint, this can result in non-trivial waiting time. If you'd rather disable this behavior and handle the endpoint's 500 HTTP errors yourself, you can do so as follows:

```typescript
const hf = new InferenceClient("hf_xxxxxxxxxxxxxx", {
	endpointUrl: "https://j3z5luu0ooo76jnl.us-east-1.aws.endpoints.huggingface.cloud/v1/",
});

const response = await hf.chatCompletion(
	{
		messages: [
			{
				role: "user",
				content: "What is the capital of France?",
			},
		],
	},
	{
		retry_on_error: false,
	}
);
```

## Using local endpoints

You can use `InferenceClient` to run chat completion with local inference servers (llama.cpp, vllm, litellm server, TGI, mlx, etc.) running on your own machine. The API should be OpenAI API-compatible.

```typescript
import { InferenceClient } from '@huggingface/inference';

const hf = new InferenceClient(undefined, {
	endpointUrl: "http://localhost:8080",
});

const response = await hf.chatCompletion({
	messages: [
		{
			role: "user",
			content: "What is the capital of France?",
		},
	],
});

console.log(response.choices[0].message.content);
```

<Tip>

Similarly to the OpenAI JS client, `InferenceClient` can be used to run Chat Completion inference with any OpenAI REST API-compatible endpoint.

</Tip>

## Running tests

```console
HF_TOKEN="your access token" pnpm run test
```

## Finding appropriate models

We have an informative documentation project called [Tasks](https://huggingface.co/tasks) to list available models for each task and explain how each task works in detail.

It also contains demos, example outputs, and other resources should you want to dig deeper into the ML side of things.

## Dependencies

- `@huggingface/tasks` : Typings only


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/README.md" />

### @huggingface/inference
https://huggingface.co/docs/huggingface.js/inference/modules.md

# @huggingface/inference

## Namespaces

- [snippets](modules/snippets)

## Classes

- [HfInference](classes/HfInference)
- [InferenceClient](classes/InferenceClient)
- [InferenceClientEndpoint](classes/InferenceClientEndpoint)
- [InferenceClientError](classes/InferenceClientError)
- [InferenceClientHubApiError](classes/InferenceClientHubApiError)
- [InferenceClientInputError](classes/InferenceClientInputError)
- [InferenceClientProviderApiError](classes/InferenceClientProviderApiError)
- [InferenceClientProviderOutputError](classes/InferenceClientProviderOutputError)
- [InferenceClientRoutingError](classes/InferenceClientRoutingError)

## Interfaces

- [AudioToAudioOutput](interfaces/AudioToAudioOutput)
- [AudioToAudioOutputElem](interfaces/AudioToAudioOutputElem)
- [BaseArgs](interfaces/BaseArgs)
- [BodyParams](interfaces/BodyParams)
- [HeaderParams](interfaces/HeaderParams)
- [InferenceProviderMappingEntry](interfaces/InferenceProviderMappingEntry)
- [Logger](interfaces/Logger)
- [Options](interfaces/Options)
- [TextGenerationInput](interfaces/TextGenerationInput)
- [TextGenerationOutput](interfaces/TextGenerationOutput)
- [TextGenerationStreamBestOfSequence](interfaces/TextGenerationStreamBestOfSequence)
- [TextGenerationStreamDetails](interfaces/TextGenerationStreamDetails)
- [TextGenerationStreamOutput](interfaces/TextGenerationStreamOutput)
- [TextGenerationStreamPrefillToken](interfaces/TextGenerationStreamPrefillToken)
- [TextGenerationStreamToken](interfaces/TextGenerationStreamToken)
- [UrlParams](interfaces/UrlParams)

## Type Aliases

### AudioClassificationArgs

Ƭ **AudioClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `AudioClassificationInput` \| `LegacyAudioInput`

#### Defined in[[audioclassificationargs.defined-in]]

[inference/src/tasks/audio/audioClassification.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioClassification.ts#L9)

___

### AudioToAudioArgs

Ƭ **AudioToAudioArgs**: [`BaseArgs`](interfaces/BaseArgs) & \{ `inputs`: `Blob`  } \| `LegacyAudioInput`

#### Defined in[[audiotoaudioargs.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L8)

___

### AuthMethod

Ƭ **AuthMethod**: ``"none"`` \| ``"hf-token"`` \| ``"credentials-include"`` \| ``"provider-key"``

#### Defined in[[authmethod.defined-in]]

[inference/src/types.ts:168](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L168)

___

### AutomaticSpeechRecognitionArgs

Ƭ **AutomaticSpeechRecognitionArgs**: [`BaseArgs`](interfaces/BaseArgs) & `AutomaticSpeechRecognitionInput` \| `LegacyAudioInput`

#### Defined in[[automaticspeechrecognitionargs.defined-in]]

[inference/src/tasks/audio/automaticSpeechRecognition.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/automaticSpeechRecognition.ts#L8)

___

### DocumentQuestionAnsweringArgs

Ƭ **DocumentQuestionAnsweringArgs**: [`BaseArgs`](interfaces/BaseArgs) & `DocumentQuestionAnsweringInput` & \{ `inputs`: `DocumentQuestionAnsweringInputData` & \{ `image`: `Blob`  }  }

#### Defined in[[documentquestionansweringargs.defined-in]]

[inference/src/tasks/multimodal/documentQuestionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/documentQuestionAnswering.ts#L13)

___

### FeatureExtractionArgs

Ƭ **FeatureExtractionArgs**: [`BaseArgs`](interfaces/BaseArgs) & `FeatureExtractionInput` & `FeatureExtractionOAICompatInput`

#### Defined in[[featureextractionargs.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L12)

___

### FeatureExtractionOutput

Ƭ **FeatureExtractionOutput**: (`number` \| `number`[] \| `number`[][])[]

Returned values are a multidimensional array of floats (the dimensions depend on whether you sent a string or a list of strings, and on whether an automatic reduction, usually mean pooling, was applied for you or not; this should be explained in the model's README).

#### Defined in[[featureextractionoutput.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:17](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L17)

___

### FillMaskArgs

Ƭ **FillMaskArgs**: [`BaseArgs`](interfaces/BaseArgs) & `FillMaskInput`

#### Defined in[[fillmaskargs.defined-in]]

[inference/src/tasks/nlp/fillMask.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/fillMask.ts#L7)

___

### ImageClassificationArgs

Ƭ **ImageClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ImageClassificationInput` \| `LegacyImageInput`

#### Defined in[[imageclassificationargs.defined-in]]

[inference/src/tasks/cv/imageClassification.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageClassification.ts#L8)

___

### ImageSegmentationArgs

Ƭ **ImageSegmentationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ImageSegmentationInput`

#### Defined in[[imagesegmentationargs.defined-in]]

[inference/src/tasks/cv/imageSegmentation.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageSegmentation.ts#L8)

___

### ImageToImageArgs

Ƭ **ImageToImageArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ImageToImageInput`

#### Defined in[[imagetoimageargs.defined-in]]

[inference/src/tasks/cv/imageToImage.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToImage.ts#L8)

___

### ImageToTextArgs

Ƭ **ImageToTextArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ImageToTextInput` \| `LegacyImageInput`

#### Defined in[[imagetotextargs.defined-in]]

[inference/src/tasks/cv/imageToText.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToText.ts#L9)

___

### ImageToVideoArgs

Ƭ **ImageToVideoArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ImageToVideoInput`

#### Defined in[[imagetovideoargs.defined-in]]

[inference/src/tasks/cv/imageToVideo.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToVideo.ts#L8)

___

### InferenceProvider

Ƭ **InferenceProvider**: typeof [`INFERENCE_PROVIDERS`](modules#inference_providers)[`number`]

#### Defined in[[inferenceprovider.defined-in]]

[inference/src/types.ts:75](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L75)

___

### InferenceProviderOrPolicy

Ƭ **InferenceProviderOrPolicy**: typeof [`PROVIDERS_OR_POLICIES`](modules#providers_or_policies)[`number`]

#### Defined in[[inferenceproviderorpolicy.defined-in]]

[inference/src/types.ts:77](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L77)

___

### InferenceTask

Ƭ **InferenceTask**: `Exclude`\<`PipelineType`, ``"other"``\> \| ``"conversational"``

#### Defined in[[inferencetask.defined-in]]

[inference/src/types.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L45)

___

### ModelId

Ƭ **ModelId**: `string`

HF model id, like "meta-llama/Llama-3.3-70B-Instruct"

#### Defined in[[modelid.defined-in]]

[inference/src/types.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L6)

___

### ObjectDetectionArgs

Ƭ **ObjectDetectionArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ObjectDetectionInput` \| `LegacyImageInput`

#### Defined in[[objectdetectionargs.defined-in]]

[inference/src/tasks/cv/objectDetection.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/objectDetection.ts#L8)

___

### QuestionAnsweringArgs

Ƭ **QuestionAnsweringArgs**: [`BaseArgs`](interfaces/BaseArgs) & `QuestionAnsweringInput`

#### Defined in[[questionansweringargs.defined-in]]

[inference/src/tasks/nlp/questionAnswering.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/questionAnswering.ts#L8)

___

### RequestArgs

Ƭ **RequestArgs**: [`BaseArgs`](interfaces/BaseArgs) & \{ `data`: `Blob` \| `ArrayBuffer`  } \| \{ `inputs`: `unknown`  } \| \{ `prompt`: `string`  } \| \{ `text`: `string`  } \| \{ `audio_url`: `string`  } \| `ChatCompletionInput` & \{ `parameters?`: `Record`\<`string`, `unknown`\>  }

#### Defined in[[requestargs.defined-in]]

[inference/src/types.ts:156](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L156)

___

### SentenceSimilarityArgs

Ƭ **SentenceSimilarityArgs**: [`BaseArgs`](interfaces/BaseArgs) & `SentenceSimilarityInput`

#### Defined in[[sentencesimilarityargs.defined-in]]

[inference/src/tasks/nlp/sentenceSimilarity.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/sentenceSimilarity.ts#L7)

___

### SummarizationArgs

Ƭ **SummarizationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `SummarizationInput`

#### Defined in[[summarizationargs.defined-in]]

[inference/src/tasks/nlp/summarization.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/summarization.ts#L7)

___

### TableQuestionAnsweringArgs

Ƭ **TableQuestionAnsweringArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TableQuestionAnsweringInput`

#### Defined in[[tablequestionansweringargs.defined-in]]

[inference/src/tasks/nlp/tableQuestionAnswering.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tableQuestionAnswering.ts#L7)

___

### TabularClassificationArgs

Ƭ **TabularClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & \{ `inputs`: \{ `data`: `Record`\<`string`, `string`[]\>  }  }

#### Defined in[[tabularclassificationargs.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L6)

___

### TabularClassificationOutput

Ƭ **TabularClassificationOutput**: `number`[]

A list of predicted labels for each row

#### Defined in[[tabularclassificationoutput.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L18)

___

### TabularRegressionArgs

Ƭ **TabularRegressionArgs**: [`BaseArgs`](interfaces/BaseArgs) & \{ `inputs`: \{ `data`: `Record`\<`string`, `string`[]\>  }  }

#### Defined in[[tabularregressionargs.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L6)

___

### TabularRegressionOutput

Ƭ **TabularRegressionOutput**: `number`[]

a list of predicted values for each row

#### Defined in[[tabularregressionoutput.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L18)

___

### TextClassificationArgs

Ƭ **TextClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TextClassificationInput`

#### Defined in[[textclassificationargs.defined-in]]

[inference/src/tasks/nlp/textClassification.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textClassification.ts#L7)

___

### TextGenerationStreamFinishReason

Ƭ **TextGenerationStreamFinishReason**: ``"length"`` \| ``"eos_token"`` \| ``"stop_sequence"``

#### Defined in[[textgenerationstreamfinishreason.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:48](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L48)

___

### TextToImageArgs

Ƭ **TextToImageArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TextToImageInput`

#### Defined in[[texttoimageargs.defined-in]]

[inference/src/tasks/cv/textToImage.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L8)

___

### TextToVideoArgs

Ƭ **TextToVideoArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TextToVideoInput`

#### Defined in[[texttovideoargs.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L11)

___

### TextToVideoOutput

Ƭ **TextToVideoOutput**: `Blob`

#### Defined in[[texttovideooutput.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L13)

___

### TokenClassificationArgs

Ƭ **TokenClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TokenClassificationInput`

#### Defined in[[tokenclassificationargs.defined-in]]

[inference/src/tasks/nlp/tokenClassification.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tokenClassification.ts#L7)

___

### TranslationArgs

Ƭ **TranslationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `TranslationInput`

#### Defined in[[translationargs.defined-in]]

[inference/src/tasks/nlp/translation.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/translation.ts#L7)

___

### VisualQuestionAnsweringArgs

Ƭ **VisualQuestionAnsweringArgs**: [`BaseArgs`](interfaces/BaseArgs) & `VisualQuestionAnsweringInput` & \{ `inputs`: `VisualQuestionAnsweringInputData` & \{ `image`: `Blob`  }  }

#### Defined in[[visualquestionansweringargs.defined-in]]

[inference/src/tasks/multimodal/visualQuestionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/visualQuestionAnswering.ts#L13)

___

### ZeroShotClassificationArgs

Ƭ **ZeroShotClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ZeroShotClassificationInput`

#### Defined in[[zeroshotclassificationargs.defined-in]]

[inference/src/tasks/nlp/zeroShotClassification.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/zeroShotClassification.ts#L7)

___

### ZeroShotImageClassificationArgs

Ƭ **ZeroShotImageClassificationArgs**: [`BaseArgs`](interfaces/BaseArgs) & `ZeroShotImageClassificationInput` \| `LegacyZeroShotImageClassificationInput`

#### Defined in[[zeroshotimageclassificationargs.defined-in]]

[inference/src/tasks/cv/zeroShotImageClassification.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/zeroShotImageClassification.ts#L15)

## Variables

### INFERENCE\_PROVIDERS

• `Const` **INFERENCE\_PROVIDERS**: readonly [``"baseten"``, ``"black-forest-labs"``, ``"cerebras"``, ``"clarifai"``, ``"cohere"``, ``"fal-ai"``, ``"featherless-ai"``, ``"fireworks-ai"``, ``"groq"``, ``"hf-inference"``, ``"hyperbolic"``, ``"nebius"``, ``"novita"``, ``"nscale"``, ``"openai"``, ``"ovhcloud"``, ``"publicai"``, ``"replicate"``, ``"sambanova"``, ``"scaleway"``, ``"together"``, ``"wavespeed"``, ``"zai-org"``]

#### Defined in[[inferenceproviders.defined-in]]

[inference/src/types.ts:47](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L47)

___

### PROVIDERS

• `Const` **PROVIDERS**: `Record`\<[`InferenceProvider`](modules#inferenceprovider), `Partial`\<`Record`\<[`InferenceTask`](modules#inferencetask), `TaskProviderHelper`\>\>\>

#### Defined in[[providers.defined-in]]

[inference/src/lib/getProviderHelper.ts:60](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L60)

___

### PROVIDERS\_HUB\_ORGS

• `Const` **PROVIDERS\_HUB\_ORGS**: `Record`\<[`InferenceProvider`](modules#inferenceprovider), `string`\>

The org namespace on the HF Hub i.e. hf.co/…

Whenever possible, InferenceProvider should == org namespace

#### Defined in[[providershuborgs.defined-in]]

[inference/src/types.ts:84](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L84)

___

### PROVIDERS\_OR\_POLICIES

• `Const` **PROVIDERS\_OR\_POLICIES**: readonly [``"baseten"``, ``"black-forest-labs"``, ``"cerebras"``, ``"clarifai"``, ``"cohere"``, ``"fal-ai"``, ``"featherless-ai"``, ``"fireworks-ai"``, ``"groq"``, ``"hf-inference"``, ``"hyperbolic"``, ``"nebius"``, ``"novita"``, ``"nscale"``, ``"openai"``, ``"ovhcloud"``, ``"publicai"``, ``"replicate"``, ``"sambanova"``, ``"scaleway"``, ``"together"``, ``"wavespeed"``, ``"zai-org"``, ``"auto"``]

#### Defined in[[providersorpolicies.defined-in]]

[inference/src/types.ts:73](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L73)

## Functions

### audioClassification

▸ **audioClassification**(`args`, `options?`): `Promise`\<`AudioClassificationOutput`\>

This task reads some audio input and outputs the likelihood of classes.
Recommended model:  superb/hubert-large-superb-er

#### Parameters[[audioclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioClassificationArgs`](modules#audioclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[audioclassification.returns]]

`Promise`\<`AudioClassificationOutput`\>

#### Defined in[[audioclassification.defined-in]]

[inference/src/tasks/audio/audioClassification.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioClassification.ts#L15)

___

### audioToAudio

▸ **audioToAudio**(`args`, `options?`): `Promise`\<[`AudioToAudioOutput`](interfaces/AudioToAudioOutput)[]\>

This task reads some audio input and outputs one or multiple audio files.
Example model: speechbrain/sepformer-wham does audio source separation.

#### Parameters[[audiotoaudio.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioToAudioArgs`](modules#audiotoaudioargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[audiotoaudio.returns]]

`Promise`\<[`AudioToAudioOutput`](interfaces/AudioToAudioOutput)[]\>

#### Defined in[[audiotoaudio.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L39)

___

### automaticSpeechRecognition

▸ **automaticSpeechRecognition**(`args`, `options?`): `Promise`\<`AutomaticSpeechRecognitionOutput`\>

This task reads some audio input and outputs the words spoken in the audio file.
Recommended model (english language): facebook/wav2vec2-large-960h-lv60-self

#### Parameters[[automaticspeechrecognition.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AutomaticSpeechRecognitionArgs`](modules#automaticspeechrecognitionargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[automaticspeechrecognition.returns]]

`Promise`\<`AutomaticSpeechRecognitionOutput`\>

#### Defined in[[automaticspeechrecognition.defined-in]]

[inference/src/tasks/audio/automaticSpeechRecognition.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/automaticSpeechRecognition.ts#L13)

___

### chatCompletion

▸ **chatCompletion**(`args`, `options?`): `Promise`\<`ChatCompletionOutput`\>

Use the chat completion endpoint to generate a response to a prompt, using the OpenAI-compatible chat completion API (non-streaming).

#### Parameters[[chatcompletion.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[chatcompletion.returns]]

`Promise`\<`ChatCompletionOutput`\>

#### Defined in[[chatcompletion.defined-in]]

[inference/src/tasks/nlp/chatCompletion.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletion.ts#L12)

___

### chatCompletionStream

▸ **chatCompletionStream**(`args`, `options?`): `AsyncGenerator`\<`ChatCompletionStreamOutput`\>

Use the chat completion endpoint to generate a streamed response to a prompt. Same as `chatCompletion` but returns a generator that can be read one token at a time.

#### Parameters[[chatcompletionstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[chatcompletionstream.returns]]

`AsyncGenerator`\<`ChatCompletionStreamOutput`\>

#### Defined in[[chatcompletionstream.defined-in]]

[inference/src/tasks/nlp/chatCompletionStream.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletionStream.ts#L12)

___

### documentQuestionAnswering

▸ **documentQuestionAnswering**(`args`, `options?`): `Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

Answers a question on a document image. Recommended model: impira/layoutlm-document-qa.

#### Parameters[[documentquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`DocumentQuestionAnsweringArgs`](modules#documentquestionansweringargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[documentquestionanswering.returns]]

`Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

#### Defined in[[documentquestionanswering.defined-in]]

[inference/src/tasks/multimodal/documentQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/documentQuestionAnswering.ts#L19)

___

### featureExtraction

▸ **featureExtraction**(`args`, `options?`): `Promise`\<[`FeatureExtractionOutput`](modules#featureextractionoutput)\>

This task reads some text and outputs raw float values that are usually consumed as part of a semantic database/semantic search.

#### Parameters[[featureextraction.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FeatureExtractionArgs`](modules#featureextractionargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[featureextraction.returns]]

`Promise`\<[`FeatureExtractionOutput`](modules#featureextractionoutput)\>

#### Defined in[[featureextraction.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L22)

___

### fillMask

▸ **fillMask**(`args`, `options?`): `Promise`\<`FillMaskOutput`\>

Tries to fill in a hole with a missing word (token to be precise). That’s the base task for BERT models.

#### Parameters[[fillmask.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FillMaskArgs`](modules#fillmaskargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[fillmask.returns]]

`Promise`\<`FillMaskOutput`\>

#### Defined in[[fillmask.defined-in]]

[inference/src/tasks/nlp/fillMask.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/fillMask.ts#L12)

___

### getProviderHelper

▸ **getProviderHelper**(`provider`, `task`): `TextToImageTaskHelper` & `TaskProviderHelper`

Gets the provider helper instance for a given provider name and task.

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-to-image"`` |

#### Returns[[getproviderhelper.returns]]

`TextToImageTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:191](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L191)

▸ **getProviderHelper**(`provider`, `task`): `ConversationalTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"conversational"`` |

#### Returns[[getproviderhelper.returns]]

`ConversationalTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:195](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L195)

▸ **getProviderHelper**(`provider`, `task`): `TextGenerationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-generation"`` |

#### Returns[[getproviderhelper.returns]]

`TextGenerationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:199](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L199)

▸ **getProviderHelper**(`provider`, `task`): `TextToSpeechTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-to-speech"`` |

#### Returns[[getproviderhelper.returns]]

`TextToSpeechTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:203](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L203)

▸ **getProviderHelper**(`provider`, `task`): `TextToAudioTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-to-audio"`` |

#### Returns[[getproviderhelper.returns]]

`TextToAudioTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:207](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L207)

▸ **getProviderHelper**(`provider`, `task`): `AutomaticSpeechRecognitionTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"automatic-speech-recognition"`` |

#### Returns[[getproviderhelper.returns]]

`AutomaticSpeechRecognitionTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:211](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L211)

▸ **getProviderHelper**(`provider`, `task`): `TextToVideoTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-to-video"`` |

#### Returns[[getproviderhelper.returns]]

`TextToVideoTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:215](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L215)

▸ **getProviderHelper**(`provider`, `task`): `TextClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"text-classification"`` |

#### Returns[[getproviderhelper.returns]]

`TextClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:219](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L219)

▸ **getProviderHelper**(`provider`, `task`): `QuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"question-answering"`` |

#### Returns[[getproviderhelper.returns]]

`QuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:223](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L223)

▸ **getProviderHelper**(`provider`, `task`): `AudioClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"audio-classification"`` |

#### Returns[[getproviderhelper.returns]]

`AudioClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:227](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L227)

▸ **getProviderHelper**(`provider`, `task`): `AudioToAudioTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"audio-to-audio"`` |

#### Returns[[getproviderhelper.returns]]

`AudioToAudioTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:231](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L231)

▸ **getProviderHelper**(`provider`, `task`): `FillMaskTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"fill-mask"`` |

#### Returns[[getproviderhelper.returns]]

`FillMaskTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:235](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L235)

▸ **getProviderHelper**(`provider`, `task`): `FeatureExtractionTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"feature-extraction"`` |

#### Returns[[getproviderhelper.returns]]

`FeatureExtractionTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:239](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L239)

▸ **getProviderHelper**(`provider`, `task`): `ImageClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"image-classification"`` |

#### Returns[[getproviderhelper.returns]]

`ImageClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:243](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L243)

▸ **getProviderHelper**(`provider`, `task`): `ImageSegmentationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"image-segmentation"`` |

#### Returns[[getproviderhelper.returns]]

`ImageSegmentationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:247](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L247)

▸ **getProviderHelper**(`provider`, `task`): `DocumentQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"document-question-answering"`` |

#### Returns[[getproviderhelper.returns]]

`DocumentQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:251](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L251)

▸ **getProviderHelper**(`provider`, `task`): `ImageToTextTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"image-to-text"`` |

#### Returns[[getproviderhelper.returns]]

`ImageToTextTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:255](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L255)

▸ **getProviderHelper**(`provider`, `task`): `ObjectDetectionTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"object-detection"`` |

#### Returns[[getproviderhelper.returns]]

`ObjectDetectionTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:259](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L259)

▸ **getProviderHelper**(`provider`, `task`): `ZeroShotImageClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"zero-shot-image-classification"`` |

#### Returns[[getproviderhelper.returns]]

`ZeroShotImageClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:263](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L263)

▸ **getProviderHelper**(`provider`, `task`): `ZeroShotClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"zero-shot-classification"`` |

#### Returns[[getproviderhelper.returns]]

`ZeroShotClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:267](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L267)

▸ **getProviderHelper**(`provider`, `task`): `ImageToImageTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"image-to-image"`` |

#### Returns[[getproviderhelper.returns]]

`ImageToImageTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:271](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L271)

▸ **getProviderHelper**(`provider`, `task`): `ImageToVideoTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"image-to-video"`` |

#### Returns[[getproviderhelper.returns]]

`ImageToVideoTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:275](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L275)

▸ **getProviderHelper**(`provider`, `task`): `SentenceSimilarityTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"sentence-similarity"`` |

#### Returns[[getproviderhelper.returns]]

`SentenceSimilarityTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:279](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L279)

▸ **getProviderHelper**(`provider`, `task`): `TableQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"table-question-answering"`` |

#### Returns[[getproviderhelper.returns]]

`TableQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:283](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L283)

▸ **getProviderHelper**(`provider`, `task`): `TabularClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"tabular-classification"`` |

#### Returns[[getproviderhelper.returns]]

`TabularClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:287](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L287)

▸ **getProviderHelper**(`provider`, `task`): `TabularRegressionTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"tabular-regression"`` |

#### Returns[[getproviderhelper.returns]]

`TabularRegressionTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:291](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L291)

▸ **getProviderHelper**(`provider`, `task`): `TokenClassificationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"token-classification"`` |

#### Returns[[getproviderhelper.returns]]

`TokenClassificationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:295](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L295)

▸ **getProviderHelper**(`provider`, `task`): `TranslationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"translation"`` |

#### Returns[[getproviderhelper.returns]]

`TranslationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:299](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L299)

▸ **getProviderHelper**(`provider`, `task`): `SummarizationTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"summarization"`` |

#### Returns[[getproviderhelper.returns]]

`SummarizationTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:303](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L303)

▸ **getProviderHelper**(`provider`, `task`): `VisualQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | ``"visual-question-answering"`` |

#### Returns[[getproviderhelper.returns]]

`VisualQuestionAnsweringTaskHelper` & `TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:307](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L307)

▸ **getProviderHelper**(`provider`, `task`): `TaskProviderHelper`

#### Parameters[[getproviderhelper.parameters]]

| Name | Type |
| :------ | :------ |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `task` | `undefined` \| [`InferenceTask`](modules#inferencetask) |

#### Returns[[getproviderhelper.returns]]

`TaskProviderHelper`

#### Defined in[[getproviderhelper.defined-in]]

[inference/src/lib/getProviderHelper.ts:311](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/getProviderHelper.ts#L311)

___

### imageClassification

▸ **imageClassification**(`args`, `options?`): `Promise`\<`ImageClassificationOutput`\>

This task reads an image input and outputs the likelihood of classes.
Recommended model: google/vit-base-patch16-224

#### Parameters[[imageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageClassificationArgs`](modules#imageclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[imageclassification.returns]]

`Promise`\<`ImageClassificationOutput`\>

#### Defined in[[imageclassification.defined-in]]

[inference/src/tasks/cv/imageClassification.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageClassification.ts#L14)

___

### imageSegmentation

▸ **imageSegmentation**(`args`, `options?`): `Promise`\<`ImageSegmentationOutput`\>

This task reads an image input and outputs segmentation masks along with class labels and confidence scores.
Recommended model: facebook/detr-resnet-50-panoptic

#### Parameters[[imagesegmentation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageSegmentationArgs`](modules#imagesegmentationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[imagesegmentation.returns]]

`Promise`\<`ImageSegmentationOutput`\>

#### Defined in[[imagesegmentation.defined-in]]

[inference/src/tasks/cv/imageSegmentation.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageSegmentation.ts#L14)

___

### imageToImage

▸ **imageToImage**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads some image input and outputs a new image.
Recommended model: lllyasviel/sd-controlnet-depth

#### Parameters[[imagetoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToImageArgs`](modules#imagetoimageargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[imagetoimage.returns]]

`Promise`\<`Blob`\>

#### Defined in[[imagetoimage.defined-in]]

[inference/src/tasks/cv/imageToImage.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToImage.ts#L14)

___

### imageToText

▸ **imageToText**(`args`, `options?`): `Promise`\<`ImageToTextOutput`\>

This task reads some image input and outputs the text caption.

#### Parameters[[imagetotext.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToTextArgs`](modules#imagetotextargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[imagetotext.returns]]

`Promise`\<`ImageToTextOutput`\>

#### Defined in[[imagetotext.defined-in]]

[inference/src/tasks/cv/imageToText.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToText.ts#L13)

___

### imageToVideo

▸ **imageToVideo**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads some image input and outputs a video.
Recommended model: Wan-AI/Wan2.1-I2V-14B-720P

#### Parameters[[imagetovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToVideoArgs`](modules#imagetovideoargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[imagetovideo.returns]]

`Promise`\<`Blob`\>

#### Defined in[[imagetovideo.defined-in]]

[inference/src/tasks/cv/imageToVideo.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToVideo.ts#L14)

___

### makeRequestOptions

▸ **makeRequestOptions**(`args`, `providerHelper`, `options?`): `Promise`\<\{ `info`: `RequestInit` ; `url`: `string`  }\>

Helper that prepares request arguments.
This async version handles the model ID resolution step.

#### Parameters[[makerequestoptions.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](modules#requestargs) & \{ `data?`: `Blob` \| `ArrayBuffer` ; `stream?`: `boolean`  } |
| `providerHelper` | `TaskProviderHelper` |
| `options?` | [`Options`](interfaces/Options) & \{ `task?`: [`InferenceTask`](modules#inferencetask)  } |

#### Returns[[makerequestoptions.returns]]

`Promise`\<\{ `info`: `RequestInit` ; `url`: `string`  }\>

#### Defined in[[makerequestoptions.defined-in]]

[inference/src/lib/makeRequestOptions.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/makeRequestOptions.ts#L19)

___

### makeRequestOptionsFromResolvedModel

▸ **makeRequestOptionsFromResolvedModel**(`resolvedModel`, `providerHelper`, `args`, `mapping`, `options?`): `Object`

Helper that prepares request arguments (for internal use only).
This sync version skips the model ID resolution step.

#### Parameters[[makerequestoptionsfromresolvedmodel.parameters]]

| Name | Type |
| :------ | :------ |
| `resolvedModel` | `string` |
| `providerHelper` | `TaskProviderHelper` |
| `args` | [`RequestArgs`](modules#requestargs) & \{ `data?`: `Blob` \| `ArrayBuffer` ; `stream?`: `boolean`  } |
| `mapping` | `undefined` \| [`InferenceProviderMappingEntry`](interfaces/InferenceProviderMappingEntry) |
| `options?` | [`Options`](interfaces/Options) & \{ `task?`: [`InferenceTask`](modules#inferencetask)  } |

#### Returns[[makerequestoptionsfromresolvedmodel.returns]]

`Object`

| Name | Type |
| :------ | :------ |
| `info` | `RequestInit` |
| `url` | `string` |

#### Defined in[[makerequestoptionsfromresolvedmodel.defined-in]]

[inference/src/lib/makeRequestOptions.ts:105](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/makeRequestOptions.ts#L105)

___

### objectDetection

▸ **objectDetection**(`args`, `options?`): `Promise`\<`ObjectDetectionOutput`\>

This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects.
Recommended model: facebook/detr-resnet-50

#### Parameters[[objectdetection.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ObjectDetectionArgs`](modules#objectdetectionargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[objectdetection.returns]]

`Promise`\<`ObjectDetectionOutput`\>

#### Defined in[[objectdetection.defined-in]]

[inference/src/tasks/cv/objectDetection.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/objectDetection.ts#L14)
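
A sketch of filtering detections by confidence. The structural `ObjectDetector` type is a hypothetical stand-in for `InferenceClient`; the `box` fields (`xmin`/`ymin`/`xmax`/`ymax`) follow the task's standard output shape.

```typescript
// Structural stand-in for the objectDetection method.
type ObjectDetector = {
	objectDetection(args: { data: Blob; model?: string }): Promise<
		Array<{
			label: string;
			score: number;
			box: { xmin: number; ymin: number; xmax: number; ymax: number };
		}>
	>;
};

// Keep only confident detections and compute each bounding box's area.
async function confidentBoxes(client: ObjectDetector, image: Blob, minScore = 0.9) {
	const detections = await client.objectDetection({
		data: image,
		model: "facebook/detr-resnet-50", // recommended model above
	});
	return detections
		.filter((d) => d.score >= minScore)
		.map((d) => ({
			label: d.label,
			area: (d.box.xmax - d.box.xmin) * (d.box.ymax - d.box.ymin),
		}));
}
```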

___

### questionAnswering

▸ **questionAnswering**(`args`, `options?`): `Promise`\<`QuestionAnsweringOutput`[`number`]\>

Want to have a nice know-it-all bot that can answer any question? Recommended model: deepset/roberta-base-squad2

#### Parameters[[questionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`QuestionAnsweringArgs`](modules#questionansweringargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[questionanswering.returns]]

`Promise`\<`QuestionAnsweringOutput`[`number`]\>

#### Defined in[[questionanswering.defined-in]]

[inference/src/tasks/nlp/questionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/questionAnswering.ts#L13)
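
A sketch of extractive QA. The structural `QaClient` type is a hypothetical stand-in for `InferenceClient`; the `inputs: { question, context }` argument shape and the `answer`/`score`/`start`/`end` output follow the task's documented schema.

```typescript
// Structural stand-in for the questionAnswering method.
type QaClient = {
	questionAnswering(args: {
		inputs: { question: string; context: string };
		model?: string;
	}): Promise<{ answer: string; score: number; start: number; end: number }>;
};

// Ask a question against a context passage and return the extracted answer span.
async function answer(client: QaClient, question: string, context: string): Promise<string> {
	const out = await client.questionAnswering({
		inputs: { question, context },
		model: "deepset/roberta-base-squad2", // recommended model above
	});
	return out.answer;
}
```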

___

### request

▸ **request**\<`T`\>(`args`, `options?`): `Promise`\<`T`\>

Primitive to make custom calls to the inference provider

#### Type parameters[[request.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[request.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](modules#requestargs) |
| `options?` | [`Options`](interfaces/Options) & \{ `task?`: [`InferenceTask`](modules#inferencetask)  } |

#### Returns[[request.returns]]

`Promise`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Defined in[[request.defined-in]]

[inference/src/tasks/custom/request.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/request.ts#L11)

___

### sentenceSimilarity

▸ **sentenceSimilarity**(`args`, `options?`): `Promise`\<`SentenceSimilarityOutput`\>

Calculate the semantic similarity between one text and a list of other sentences by comparing their embeddings.

#### Parameters[[sentencesimilarity.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SentenceSimilarityArgs`](modules#sentencesimilarityargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[sentencesimilarity.returns]]

`Promise`\<`SentenceSimilarityOutput`\>

#### Defined in[[sentencesimilarity.defined-in]]

[inference/src/tasks/nlp/sentenceSimilarity.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/sentenceSimilarity.ts#L12)
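
A sketch of picking the closest candidate sentence. The structural `SimilarityClient` type is a hypothetical stand-in for `InferenceClient`; the `inputs: { source_sentence, sentences }` shape follows the task's documented schema, and the output is one similarity score per candidate.

```typescript
// Structural stand-in for the sentenceSimilarity method.
type SimilarityClient = {
	sentenceSimilarity(args: {
		inputs: { source_sentence: string; sentences: string[] };
		model?: string;
	}): Promise<number[]>;
};

// Return the candidate sentence most similar to the source sentence.
async function mostSimilar(client: SimilarityClient, source: string, candidates: string[]) {
	const scores = await client.sentenceSimilarity({
		inputs: { source_sentence: source, sentences: candidates },
	});
	// Scores are index-aligned with the candidates array.
	let best = 0;
	for (let i = 1; i < scores.length; i++) if (scores[i] > scores[best]) best = i;
	return candidates[best];
}
```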

___

### setLogger

▸ **setLogger**(`logger`): `void`

#### Parameters[[setlogger.parameters]]

| Name | Type |
| :------ | :------ |
| `logger` | [`Logger`](interfaces/Logger) |

#### Returns[[setlogger.returns]]

`void`

#### Defined in[[setlogger.defined-in]]

[inference/src/lib/logger.ts:5](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/lib/logger.ts#L5)
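
A sketch of a custom logger you could pass to `setLogger`, assuming the `Logger` interface exposes console-like methods (the local `Logger` type below is a stand-in for the library's interface). This variant collects messages into an array instead of printing them, which is handy in tests or for structured logging.

```typescript
// Console-like logger shape (assumed to mirror the library's Logger interface).
type Logger = {
	debug: (...args: unknown[]) => void;
	info: (...args: unknown[]) => void;
	warn: (...args: unknown[]) => void;
	error: (...args: unknown[]) => void;
};

// Build a logger that appends formatted lines to a sink array.
function makeCollectingLogger(sink: string[]): Logger {
	const record =
		(level: string) =>
		(...args: unknown[]) => {
			sink.push(`[${level}] ${args.map(String).join(" ")}`);
		};
	return { debug: record("debug"), info: record("info"), warn: record("warn"), error: record("error") };
}

// Usage sketch: setLogger(makeCollectingLogger(messages));
```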

___

### streamingRequest

▸ **streamingRequest**\<`T`\>(`args`, `options?`): `AsyncGenerator`\<`T`\>

Primitive to make custom inference calls that expect server-sent events; returns the response through a generator

#### Type parameters[[streamingrequest.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[streamingrequest.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](modules#requestargs) |
| `options?` | [`Options`](interfaces/Options) & \{ `task?`: [`InferenceTask`](modules#inferencetask)  } |

#### Returns[[streamingrequest.returns]]

`AsyncGenerator`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Defined in[[streamingrequest.defined-in]]

[inference/src/tasks/custom/streamingRequest.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/streamingRequest.ts#L11)

___

### summarization

▸ **summarization**(`args`, `options?`): `Promise`\<`SummarizationOutput`\>

This task summarizes longer text into shorter text. Be careful: some models have a maximum input length, which means they cannot summarize a full book in one call, so choose your model accordingly.

#### Parameters[[summarization.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SummarizationArgs`](modules#summarizationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[summarization.returns]]

`Promise`\<`SummarizationOutput`\>

#### Defined in[[summarization.defined-in]]

[inference/src/tasks/nlp/summarization.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/summarization.ts#L12)
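
Because of the input-length limits noted above, one common workaround is to chunk long text before summarizing. A sketch (the structural `Summarizer` type and the character-based chunking are hypothetical; the `summary_text` output field follows the task's documented schema):

```typescript
// Structural stand-in for the summarization method.
type Summarizer = {
	summarization(args: {
		inputs: string;
		model?: string;
	}): Promise<{ summary_text: string }>;
};

// Naive character-based chunking; a real implementation might split on tokens.
function chunkText(text: string, maxChars: number): string[] {
	const chunks: string[] = [];
	for (let i = 0; i < text.length; i += maxChars) chunks.push(text.slice(i, i + maxChars));
	return chunks;
}

// Summarize each chunk independently and join the partial summaries.
async function summarizeLong(client: Summarizer, text: string, maxChars = 4000): Promise<string> {
	const parts = await Promise.all(chunkText(text, maxChars).map((c) => client.summarization({ inputs: c })));
	return parts.map((p) => p.summary_text).join(" ");
}
```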

___

### tableQuestionAnswering

▸ **tableQuestionAnswering**(`args`, `options?`): `Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

Don’t know SQL? Don’t want to dive into a large spreadsheet? Ask questions in plain English! Recommended model: google/tapas-base-finetuned-wtq.

#### Parameters[[tablequestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TableQuestionAnsweringArgs`](modules#tablequestionansweringargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[tablequestionanswering.returns]]

`Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

#### Defined in[[tablequestionanswering.defined-in]]

[inference/src/tasks/nlp/tableQuestionAnswering.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tableQuestionAnswering.ts#L12)
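
A sketch of querying a table, where the table is a mapping of column names to equal-length string arrays. The structural `TableQaClient` type is a hypothetical stand-in for `InferenceClient`; the argument and output shapes follow the task's documented schema.

```typescript
// Structural stand-in for the tableQuestionAnswering method.
type TableQaClient = {
	tableQuestionAnswering(args: {
		inputs: { query: string; table: Record<string, string[]> };
		model?: string;
	}): Promise<{ answer: string; cells: string[]; coordinates: number[][]; aggregator: string }>;
};

// Ask a natural-language question against a column-oriented table.
async function askTable(client: TableQaClient, query: string, table: Record<string, string[]>) {
	const out = await client.tableQuestionAnswering({
		inputs: { query, table },
		model: "google/tapas-base-finetuned-wtq", // recommended model above
	});
	return out.answer;
}
```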

___

### tabularClassification

▸ **tabularClassification**(`args`, `options?`): `Promise`\<[`TabularClassificationOutput`](modules#tabularclassificationoutput)\>

Predicts target label for a given set of features in tabular form.
Typically, you will want to train a classification model on your training data and use it with your new data of the same format.
Example model: vvmnnnkv/wine-quality

#### Parameters[[tabularclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularClassificationArgs`](modules#tabularclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[tabularclassification.returns]]

`Promise`\<[`TabularClassificationOutput`](modules#tabularclassificationoutput)\>

#### Defined in[[tabularclassification.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L25)

___

### tabularRegression

▸ **tabularRegression**(`args`, `options?`): `Promise`\<[`TabularRegressionOutput`](modules#tabularregressionoutput)\>

Predicts target value for a given set of features in tabular form.
Typically, you will want to train a regression model on your training data and use it with your new data of the same format.
Example model: scikit-learn/Fish-Weight

#### Parameters[[tabularregression.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularRegressionArgs`](modules#tabularregressionargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[tabularregression.returns]]

`Promise`\<[`TabularRegressionOutput`](modules#tabularregressionoutput)\>

#### Defined in[[tabularregression.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L25)

___

### textClassification

▸ **textClassification**(`args`, `options?`): `Promise`\<`TextClassificationOutput`\>

Usually used for sentiment analysis, this task outputs the likelihood of classes for an input. Recommended model: distilbert-base-uncased-finetuned-sst-2-english

#### Parameters[[textclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextClassificationArgs`](modules#textclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[textclassification.returns]]

`Promise`\<`TextClassificationOutput`\>

#### Defined in[[textclassification.defined-in]]

[inference/src/tasks/nlp/textClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textClassification.ts#L12)

___

### textGeneration

▸ **textGeneration**(`args`, `options?`): `Promise`\<[`TextGenerationOutput`](interfaces/TextGenerationOutput)\>

Use to continue text from a prompt. This is a very generic task. Recommended model: gpt2 (it’s a simple model, but fun to play with).

#### Parameters[[textgeneration.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](interfaces/BaseArgs) & [`TextGenerationInput`](interfaces/TextGenerationInput) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[textgeneration.returns]]

`Promise`\<[`TextGenerationOutput`](interfaces/TextGenerationOutput)\>

#### Defined in[[textgeneration.defined-in]]

[inference/src/tasks/nlp/textGeneration.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGeneration.ts#L13)
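
A sketch of continuing text from a prompt. The structural `TextGenerator` type is a hypothetical stand-in for `InferenceClient`; `max_new_tokens` is one of the standard generation parameters, and `generated_text` is the documented output field.

```typescript
// Structural stand-in for the textGeneration method.
type TextGenerator = {
	textGeneration(args: {
		inputs: string;
		parameters?: { max_new_tokens?: number; temperature?: number };
		model?: string;
	}): Promise<{ generated_text: string }>;
};

// Continue a prompt, capping the number of newly generated tokens.
async function continueText(client: TextGenerator, prompt: string): Promise<string> {
	const out = await client.textGeneration({
		inputs: prompt,
		parameters: { max_new_tokens: 50 },
	});
	return out.generated_text;
}
```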

___

### textGenerationStream

▸ **textGenerationStream**(`args`, `options?`): `AsyncGenerator`\<[`TextGenerationStreamOutput`](interfaces/TextGenerationStreamOutput)\>

Use to continue text from a prompt. Same as `textGeneration` but returns a generator that can be read one token at a time

#### Parameters[[textgenerationstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](interfaces/BaseArgs) & [`TextGenerationInput`](interfaces/TextGenerationInput) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[textgenerationstream.returns]]

`AsyncGenerator`\<[`TextGenerationStreamOutput`](interfaces/TextGenerationStreamOutput)\>

#### Defined in[[textgenerationstream.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L90)
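
A sketch of consuming the stream with `for await...of`, accumulating token text and skipping special tokens. The structural `StreamingGenerator` type is a hypothetical stand-in for `InferenceClient`; the `token.text`/`token.special` fields follow `TextGenerationStreamToken` as documented.

```typescript
// One streamed chunk: a token, plus the complete text on the final chunk.
type StreamChunk = { token: { text: string; special: boolean }; generated_text?: string | null };

// Structural stand-in for the textGenerationStream method.
type StreamingGenerator = {
	textGenerationStream(args: { inputs: string }): AsyncGenerator<StreamChunk>;
};

// Accumulate streamed tokens into the final text, skipping special tokens (e.g. </s>).
async function collectStream(client: StreamingGenerator, prompt: string): Promise<string> {
	let text = "";
	for await (const chunk of client.textGenerationStream({ inputs: prompt })) {
		if (!chunk.token.special) text += chunk.token.text;
	}
	return text;
}
```

In an interactive UI you would render each `chunk.token.text` as it arrives instead of accumulating.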

___

### textToImage

▸ **textToImage**(`args`, `options?`): `Promise`\<`string`\>

This task reads some text input and outputs an image.
Recommended model: stabilityai/stable-diffusion-2

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType`: ``"url"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`string`\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L18)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Blob`\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"blob"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Blob`\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L22)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Record`\<`string`, `unknown`\>\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"json"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Record`\<`string`, `unknown`\>\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L26)
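
The three overloads above select the return type via `options.outputType`: `"url"` resolves to a `string`, `"blob"` (the default) to a `Blob`, and `"json"` to the raw provider payload. A sketch of requesting a URL (the structural `TextToImage` type is a hypothetical stand-in for `InferenceClient`, collapsing the overloads into one union-returning signature):

```typescript
// Structural stand-in; the real client narrows the return type per overload.
type TextToImage = {
	textToImage(
		args: { inputs: string; model?: string },
		options?: { outputType?: "url" | "blob" | "json" }
	): Promise<string | Blob | Record<string, unknown>>;
};

// Generate an image and return its URL rather than downloading the bytes.
async function imageUrl(client: TextToImage, prompt: string): Promise<string> {
	const out = await client.textToImage(
		{ inputs: prompt, model: "stabilityai/stable-diffusion-2" }, // recommended model above
		{ outputType: "url" }
	);
	return out as string; // narrowed by the "url" overload in the real client
}
```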

___

### textToSpeech

▸ **textToSpeech**(`args`, `options?`): `Promise`\<`Blob`\>

This task synthesizes audio of a voice pronouncing a given text.
Recommended model: espnet/kan-bayashi_ljspeech_vits

#### Parameters[[texttospeech.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | `TextToSpeechArgs` |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[texttospeech.returns]]

`Promise`\<`Blob`\>

#### Defined in[[texttospeech.defined-in]]

[inference/src/tasks/audio/textToSpeech.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/textToSpeech.ts#L15)

___

### textToVideo

▸ **textToVideo**(`args`, `options?`): `Promise`\<[`TextToVideoOutput`](modules#texttovideooutput)\>

#### Parameters[[texttovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToVideoArgs`](modules#texttovideoargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[texttovideo.returns]]

`Promise`\<[`TextToVideoOutput`](modules#texttovideooutput)\>

#### Defined in[[texttovideo.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L15)

___

### tokenClassification

▸ **tokenClassification**(`args`, `options?`): `Promise`\<`TokenClassificationOutput`\>

Usually used for sentence parsing, either grammatical, or Named Entity Recognition (NER) to understand keywords contained within text. Recommended model: dbmdz/bert-large-cased-finetuned-conll03-english

#### Parameters[[tokenclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TokenClassificationArgs`](modules#tokenclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[tokenclassification.returns]]

`Promise`\<`TokenClassificationOutput`\>

#### Defined in[[tokenclassification.defined-in]]

[inference/src/tasks/nlp/tokenClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tokenClassification.ts#L12)

___

### translation

▸ **translation**(`args`, `options?`): `Promise`\<`TranslationOutput`\>

This task translates text from one language to another. Recommended model: Helsinki-NLP/opus-mt-ru-en.

#### Parameters[[translation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TranslationArgs`](modules#translationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[translation.returns]]

`Promise`\<`TranslationOutput`\>

#### Defined in[[translation.defined-in]]

[inference/src/tasks/nlp/translation.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/translation.ts#L11)

___

### visualQuestionAnswering

▸ **visualQuestionAnswering**(`args`, `options?`): `Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

Answers a question on an image. Recommended model: dandelin/vilt-b32-finetuned-vqa.

#### Parameters[[visualquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`VisualQuestionAnsweringArgs`](modules#visualquestionansweringargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[visualquestionanswering.returns]]

`Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

#### Defined in[[visualquestionanswering.defined-in]]

[inference/src/tasks/multimodal/visualQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/visualQuestionAnswering.ts#L19)

___

### zeroShotClassification

▸ **zeroShotClassification**(`args`, `options?`): `Promise`\<`ZeroShotClassificationOutput`\>

This task is super useful for trying out classification with zero code: you simply pass a sentence/paragraph and the possible labels for that sentence, and you get a result. Recommended model: facebook/bart-large-mnli.

#### Parameters[[zeroshotclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotClassificationArgs`](modules#zeroshotclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[zeroshotclassification.returns]]

`Promise`\<`ZeroShotClassificationOutput`\>

#### Defined in[[zeroshotclassification.defined-in]]

[inference/src/tasks/nlp/zeroShotClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/zeroShotClassification.ts#L12)
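
A sketch of picking the best candidate label. The structural `ZeroShotClient` type is a hypothetical stand-in for `InferenceClient`; `candidate_labels` is the documented parameter, and the example assumes labels come back sorted by descending score, as the zero-shot pipeline conventionally returns them.

```typescript
// Structural stand-in for the zeroShotClassification method.
type ZeroShotClient = {
	zeroShotClassification(args: {
		inputs: string;
		parameters: { candidate_labels: string[] };
		model?: string;
	}): Promise<Array<{ sequence: string; labels: string[]; scores: number[] }>>;
};

// Classify text against ad-hoc labels; the first returned label is the best match.
async function bestLabel(client: ZeroShotClient, text: string, labels: string[]): Promise<string> {
	const [result] = await client.zeroShotClassification({
		inputs: text,
		parameters: { candidate_labels: labels },
		model: "facebook/bart-large-mnli", // recommended model above
	});
	return result.labels[0];
}
```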

___

### zeroShotImageClassification

▸ **zeroShotImageClassification**(`args`, `options?`): `Promise`\<`ZeroShotImageClassificationOutput`\>

Classify an image to specified classes.
Recommended model: openai/clip-vit-large-patch14-336

#### Parameters[[zeroshotimageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotImageClassificationArgs`](modules#zeroshotimageclassificationargs) |
| `options?` | [`Options`](interfaces/Options) |

#### Returns[[zeroshotimageclassification.returns]]

`Promise`\<`ZeroShotImageClassificationOutput`\>

#### Defined in[[zeroshotimageclassification.defined-in]]

[inference/src/tasks/cv/zeroShotImageClassification.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/zeroShotImageClassification.ts#L44)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/modules.md" />

### Class: InferenceClientProviderOutputError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientProviderOutputError.md

# Class: InferenceClientProviderOutputError

Thrown when the inference output returned by the provider is invalid or does not match expectations

## Hierarchy

- [`InferenceClientError`](InferenceClientError)

  ↳ **`InferenceClientProviderOutputError`**

## Constructors

### constructor

• **new InferenceClientProviderOutputError**(`message`): [`InferenceClientProviderOutputError`](InferenceClientProviderOutputError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |

#### Returns[[constructor.returns]]

[`InferenceClientProviderOutputError`](InferenceClientProviderOutputError)

#### Overrides[[constructor.overrides]]

[InferenceClientError](InferenceClientError).[constructor](InferenceClientError#constructor)

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:85](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L85)
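
A sketch of narrowing on this error class in a `catch` block. In real code you would import the class from `@huggingface/inference`; the local re-declarations below are stand-ins so the snippet is self-contained, mirroring the hierarchy documented here.

```typescript
// Local stand-ins mirroring the documented hierarchy
// (import the real classes from @huggingface/inference).
class InferenceClientError extends Error {}
class InferenceClientProviderOutputError extends InferenceClientError {
	constructor(message: string) {
		super(message);
		this.name = "InferenceClientProviderOutputError";
	}
}

// Distinguish malformed provider output from other failures via instanceof.
function describeFailure(err: unknown): string {
	if (err instanceof InferenceClientProviderOutputError) {
		return `provider returned malformed output: ${err.message}`;
	}
	return "unexpected error";
}
```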

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

[InferenceClientError](InferenceClientError).[cause](InferenceClientError#cause)

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

[InferenceClientError](InferenceClientError).[message](InferenceClientError#message)

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

[InferenceClientError](InferenceClientError).[name](InferenceClientError#name)

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

[InferenceClientError](InferenceClientError).[stack](InferenceClientError#stack)

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[prepareStackTrace](InferenceClientError#preparestacktrace)

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

[InferenceClientError](InferenceClientError).[stackTraceLimit](InferenceClientError#stacktracelimit)

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[captureStackTrace](InferenceClientError#capturestacktrace)

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientProviderOutputError.md" />

### Class: InferenceClientHubApiError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientHubApiError.md

# Class: InferenceClientHubApiError

Thrown when the HTTP request to the hub fails, e.g. due to API issues or server errors.

## Hierarchy

- `InferenceClientHttpRequestError`

  ↳ **`InferenceClientHubApiError`**

## Constructors

### constructor

• **new InferenceClientHubApiError**(`message`, `httpRequest`, `httpResponse`): [`InferenceClientHubApiError`](InferenceClientHubApiError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `httpRequest` | `HttpRequest` |
| `httpResponse` | `HttpResponse` |

#### Returns[[constructor.returns]]

[`InferenceClientHubApiError`](InferenceClientHubApiError)

#### Overrides[[constructor.overrides]]

InferenceClientHttpRequestError.constructor

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:75](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L75)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

InferenceClientHttpRequestError.cause

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### httpRequest

• **httpRequest**: `HttpRequest`

#### Inherited from[[httprequest.inherited-from]]

InferenceClientHttpRequestError.httpRequest

#### Defined in[[httprequest.defined-in]]

[inference/src/errors.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L41)

___

### httpResponse

• **httpResponse**: `HttpResponse`

#### Inherited from[[httpresponse.inherited-from]]

InferenceClientHttpRequestError.httpResponse

#### Defined in[[httpresponse.defined-in]]

[inference/src/errors.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L42)

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

InferenceClientHttpRequestError.message

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

InferenceClientHttpRequestError.name

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

InferenceClientHttpRequestError.stack

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

InferenceClientHttpRequestError.prepareStackTrace

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

InferenceClientHttpRequestError.stackTraceLimit

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

InferenceClientHttpRequestError.captureStackTrace

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientHubApiError.md" />

### Class: InferenceClientEndpoint
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientEndpoint.md

# Class: InferenceClientEndpoint

For backward compatibility only; will be removed soon.

**`Deprecated`**

Use [`InferenceClient`](InferenceClient) instead.

## Hierarchy

- [`InferenceClient`](InferenceClient)

  ↳ **`InferenceClientEndpoint`**

## Constructors

### constructor

• **new InferenceClientEndpoint**(`accessToken?`, `defaultOptions?`): [`InferenceClientEndpoint`](InferenceClientEndpoint)

#### Parameters[[constructor.parameters]]

| Name | Type | Default value |
| :------ | :------ | :------ |
| `accessToken` | `string` | `""` |
| `defaultOptions` | [`Options`](../interfaces/Options) & \{ `endpointUrl?`: `string`  } | `{}` |

#### Returns[[constructor.returns]]

[`InferenceClientEndpoint`](InferenceClientEndpoint)

#### Inherited from[[constructor.inherited-from]]

[InferenceClient](InferenceClient).[constructor](InferenceClient#constructor)

#### Defined in[[constructor.defined-in]]

[inference/src/InferenceClient.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L15)

## Methods

### audioClassification

▸ **audioClassification**(`args`, `options?`): `Promise`\<`AudioClassificationOutput`\>

This task reads some audio input and outputs the likelihood of classes.
Recommended model: superb/hubert-large-superb-er

#### Parameters[[audioclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioClassificationArgs`](../modules#audioclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audioclassification.returns]]

`Promise`\<`AudioClassificationOutput`\>

#### Inherited from[[audioclassification.inherited-from]]

[InferenceClient](InferenceClient).[audioClassification](InferenceClient#audioclassification)

#### Defined in[[audioclassification.defined-in]]

[inference/src/tasks/audio/audioClassification.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioClassification.ts#L15)

___

### audioToAudio

▸ **audioToAudio**(`args`, `options?`): `Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

This task reads some audio input and outputs one or multiple audio files.
Example model: speechbrain/sepformer-wham does audio source separation.

#### Parameters[[audiotoaudio.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioToAudioArgs`](../modules#audiotoaudioargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audiotoaudio.returns]]

`Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

#### Inherited from[[audiotoaudio.inherited-from]]

[InferenceClient](InferenceClient).[audioToAudio](InferenceClient#audiotoaudio)

#### Defined in[[audiotoaudio.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L39)

___

### automaticSpeechRecognition

▸ **automaticSpeechRecognition**(`args`, `options?`): `Promise`\<`AutomaticSpeechRecognitionOutput`\>

This task transcribes the spoken words in an audio file into text.
Recommended model (English): facebook/wav2vec2-large-960h-lv60-self

#### Parameters[[automaticspeechrecognition.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AutomaticSpeechRecognitionArgs`](../modules#automaticspeechrecognitionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[automaticspeechrecognition.returns]]

`Promise`\<`AutomaticSpeechRecognitionOutput`\>

#### Inherited from[[automaticspeechrecognition.inherited-from]]

[InferenceClient](InferenceClient).[automaticSpeechRecognition](InferenceClient#automaticspeechrecognition)

#### Defined in[[automaticspeechrecognition.defined-in]]

[inference/src/tasks/audio/automaticSpeechRecognition.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/automaticSpeechRecognition.ts#L13)

___

### chatCompletion

▸ **chatCompletion**(`args`, `options?`): `Promise`\<`ChatCompletionOutput`\>

Use the chat completion endpoint to generate a response to a prompt, using the OpenAI-compatible chat completion API (non-streaming).

#### Parameters[[chatcompletion.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletion.returns]]

`Promise`\<`ChatCompletionOutput`\>

#### Inherited from[[chatcompletion.inherited-from]]

[InferenceClient](InferenceClient).[chatCompletion](InferenceClient#chatcompletion)

#### Defined in[[chatcompletion.defined-in]]

[inference/src/tasks/nlp/chatCompletion.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletion.ts#L12)
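
As a sketch of consuming the non-streaming result: `fakeOutput` below is a hypothetical stand-in for a `ChatCompletionOutput`; the real value would come from `client.chatCompletion({ model, messages })` and follows the same OpenAI-style shape:

```typescript
// Assumed OpenAI-style output shape (choices -> message -> content).
interface ChatMessage { role: string; content: string }
interface ChatOutput { choices: { message: ChatMessage }[] }

function firstReply(out: ChatOutput): string {
  // The first choice's message content is the model's reply.
  return out.choices[0]?.message.content ?? "";
}

const fakeOutput: ChatOutput = {
  choices: [{ message: { role: "assistant", content: "Hi there!" } }],
};
console.log(firstReply(fakeOutput)); // "Hi there!"
```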

___

### chatCompletionStream

▸ **chatCompletionStream**(`args`, `options?`): `AsyncGenerator`\<`ChatCompletionStreamOutput`\>

Use the chat completion endpoint to generate a response to a prompt. Same as `chatCompletion` but returns a generator that can be read one token at a time.

#### Parameters[[chatcompletionstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletionstream.returns]]

`AsyncGenerator`\<`ChatCompletionStreamOutput`\>

#### Inherited from[[chatcompletionstream.inherited-from]]

[InferenceClient](InferenceClient).[chatCompletionStream](InferenceClient#chatcompletionstream)

#### Defined in[[chatcompletionstream.defined-in]]

[inference/src/tasks/nlp/chatCompletionStream.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletionStream.ts#L12)
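
A sketch of reading the stream chunk by chunk: `fakeStream` is a hypothetical stand-in for the `AsyncGenerator` returned by `client.chatCompletionStream(...)`, with an assumed `ChatCompletionStreamOutput`-like shape mirroring the OpenAI streaming API:

```typescript
// Each chunk carries a delta with the next fragment of the reply.
async function* fakeStream() {
  for (const piece of ["Hel", "lo", "!"]) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate the fragments with for-await, exactly as you would for the real stream.
async function collect(
  stream: AsyncIterable<{ choices: { delta: { content?: string } }[] }>
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}

collect(fakeStream()).then((t) => console.log(t)); // logs "Hello!"
```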

___

### documentQuestionAnswering

▸ **documentQuestionAnswering**(`args`, `options?`): `Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

Answers a question on a document image. Recommended model: impira/layoutlm-document-qa.

#### Parameters[[documentquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`DocumentQuestionAnsweringArgs`](../modules#documentquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[documentquestionanswering.returns]]

`Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[documentquestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[documentQuestionAnswering](InferenceClient#documentquestionanswering)

#### Defined in[[documentquestionanswering.defined-in]]

[inference/src/tasks/multimodal/documentQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/documentQuestionAnswering.ts#L19)

___

### endpoint

▸ **endpoint**(`endpointUrl`): [`InferenceClient`](InferenceClient)

Returns a new instance of InferenceClient tied to a specified endpoint.

Kept mostly for backward compatibility.

#### Parameters[[endpoint.parameters]]

| Name | Type |
| :------ | :------ |
| `endpointUrl` | `string` |

#### Returns[[endpoint.returns]]

[`InferenceClient`](InferenceClient)

#### Inherited from[[endpoint.inherited-from]]

[InferenceClient](InferenceClient).[endpoint](InferenceClient#endpoint)

#### Defined in[[endpoint.defined-in]]

[inference/src/InferenceClient.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L46)
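
Conceptually, `endpoint()` rebinds the client's default options to a fixed URL. A stub illustration of that documented behavior (not the library source; the URL is a placeholder):

```typescript
// Stub: endpoint() returns a NEW client whose defaultOptions carry the URL,
// leaving the original client untouched.
class Client {
  constructor(
    readonly accessToken: string = "",
    readonly defaultOptions: { endpointUrl?: string } = {}
  ) {}
  endpoint(endpointUrl: string): Client {
    return new Client(this.accessToken, { ...this.defaultOptions, endpointUrl });
  }
}

const base = new Client("hf_xxx");
const bound = base.endpoint("https://my-endpoint.example");
console.log(bound.defaultOptions.endpointUrl); // "https://my-endpoint.example"
console.log(base.defaultOptions.endpointUrl);  // undefined — original unchanged
```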

___

### featureExtraction

▸ **featureExtraction**(`args`, `options?`): `Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

This task reads some text and outputs raw float values that are typically consumed as part of a semantic database or semantic search.

#### Parameters[[featureextraction.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FeatureExtractionArgs`](../modules#featureextractionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[featureextraction.returns]]

`Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

#### Inherited from[[featureextraction.inherited-from]]

[InferenceClient](InferenceClient).[featureExtraction](InferenceClient#featureextraction)

#### Defined in[[featureextraction.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L22)

___

### fillMask

▸ **fillMask**(`args`, `options?`): `Promise`\<`FillMaskOutput`\>

Tries to fill in a masked hole with the missing word (token, to be precise). This is the base task for BERT models.

#### Parameters[[fillmask.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FillMaskArgs`](../modules#fillmaskargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[fillmask.returns]]

`Promise`\<`FillMaskOutput`\>

#### Inherited from[[fillmask.inherited-from]]

[InferenceClient](InferenceClient).[fillMask](InferenceClient#fillmask)

#### Defined in[[fillmask.defined-in]]

[inference/src/tasks/nlp/fillMask.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/fillMask.ts#L12)

___

### imageClassification

▸ **imageClassification**(`args`, `options?`): `Promise`\<`ImageClassificationOutput`\>

This task reads some image input and outputs the likelihood of classes.
Recommended model: google/vit-base-patch16-224

#### Parameters[[imageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageClassificationArgs`](../modules#imageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imageclassification.returns]]

`Promise`\<`ImageClassificationOutput`\>

#### Inherited from[[imageclassification.inherited-from]]

[InferenceClient](InferenceClient).[imageClassification](InferenceClient#imageclassification)

#### Defined in[[imageclassification.defined-in]]

[inference/src/tasks/cv/imageClassification.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageClassification.ts#L14)

___

### imageSegmentation

▸ **imageSegmentation**(`args`, `options?`): `Promise`\<`ImageSegmentationOutput`\>

This task reads some image input and outputs segmentation masks along with the likelihood of classes for each detected segment.
Recommended model: facebook/detr-resnet-50-panoptic

#### Parameters[[imagesegmentation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageSegmentationArgs`](../modules#imagesegmentationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagesegmentation.returns]]

`Promise`\<`ImageSegmentationOutput`\>

#### Inherited from[[imagesegmentation.inherited-from]]

[InferenceClient](InferenceClient).[imageSegmentation](InferenceClient#imagesegmentation)

#### Defined in[[imagesegmentation.defined-in]]

[inference/src/tasks/cv/imageSegmentation.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageSegmentation.ts#L14)

___

### imageToImage

▸ **imageToImage**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input (optionally guided by a text prompt) and outputs a new image.
Recommended model: lllyasviel/sd-controlnet-depth

#### Parameters[[imagetoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToImageArgs`](../modules#imagetoimageargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetoimage.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[imagetoimage.inherited-from]]

[InferenceClient](InferenceClient).[imageToImage](InferenceClient#imagetoimage)

#### Defined in[[imagetoimage.defined-in]]

[inference/src/tasks/cv/imageToImage.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToImage.ts#L14)

___

### imageToText

▸ **imageToText**(`args`, `options?`): `Promise`\<`ImageToTextOutput`\>

This task reads some image input and outputs the text caption.

#### Parameters[[imagetotext.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToTextArgs`](../modules#imagetotextargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetotext.returns]]

`Promise`\<`ImageToTextOutput`\>

#### Inherited from[[imagetotext.inherited-from]]

[InferenceClient](InferenceClient).[imageToText](InferenceClient#imagetotext)

#### Defined in[[imagetotext.defined-in]]

[inference/src/tasks/cv/imageToText.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToText.ts#L13)

___

### imageToVideo

▸ **imageToVideo**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input and outputs a video.
Recommended model: Wan-AI/Wan2.1-I2V-14B-720P

#### Parameters[[imagetovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToVideoArgs`](../modules#imagetovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetovideo.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[imagetovideo.inherited-from]]

[InferenceClient](InferenceClient).[imageToVideo](InferenceClient#imagetovideo)

#### Defined in[[imagetovideo.defined-in]]

[inference/src/tasks/cv/imageToVideo.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToVideo.ts#L14)

___

### objectDetection

▸ **objectDetection**(`args`, `options?`): `Promise`\<`ObjectDetectionOutput`\>

This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects.
Recommended model: facebook/detr-resnet-50

#### Parameters[[objectdetection.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ObjectDetectionArgs`](../modules#objectdetectionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[objectdetection.returns]]

`Promise`\<`ObjectDetectionOutput`\>

#### Inherited from[[objectdetection.inherited-from]]

[InferenceClient](InferenceClient).[objectDetection](InferenceClient#objectdetection)

#### Defined in[[objectdetection.defined-in]]

[inference/src/tasks/cv/objectDetection.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/objectDetection.ts#L14)

___

### questionAnswering

▸ **questionAnswering**(`args`, `options?`): `Promise`\<`QuestionAnsweringOutput`[`number`]\>

Want to have a nice know-it-all bot that can answer any question? Recommended model: deepset/roberta-base-squad2

#### Parameters[[questionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`QuestionAnsweringArgs`](../modules#questionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[questionanswering.returns]]

`Promise`\<`QuestionAnsweringOutput`[`number`]\>

#### Inherited from[[questionanswering.inherited-from]]

[InferenceClient](InferenceClient).[questionAnswering](InferenceClient#questionanswering)

#### Defined in[[questionanswering.defined-in]]

[inference/src/tasks/nlp/questionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/questionAnswering.ts#L13)
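
The `QuestionAnsweringOutput`[`number`] notation in the return type is TypeScript's indexed-access type: for an array type `T`, `T[number]` is its element type, so the promise resolves to a single answer rather than an array. A minimal illustration (the `answer`/`score` fields are an assumed shape for the sketch):

```typescript
// For an array type T, T[number] is the element type.
type QuestionAnsweringOutput = { answer: string; score: number }[]; // assumed shape
type SingleAnswer = QuestionAnsweringOutput[number]; // { answer: string; score: number }

const best: SingleAnswer = { answer: "Paris", score: 0.98 };
console.log(best.answer); // "Paris"
```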

___

### request

▸ **request**\<`T`\>(`args`, `options?`): `Promise`\<`T`\>

Primitive to make custom calls to the inference provider

#### Type parameters[[request.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[request.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[request.returns]]

`Promise`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Inherited from[[request.inherited-from]]

[InferenceClient](InferenceClient).[request](InferenceClient#request)

#### Defined in[[request.defined-in]]

[inference/src/tasks/custom/request.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/request.ts#L11)

___

### sentenceSimilarity

▸ **sentenceSimilarity**(`args`, `options?`): `Promise`\<`SentenceSimilarityOutput`\>

Calculate the semantic similarity between one text and a list of other sentences by comparing their embeddings.

#### Parameters[[sentencesimilarity.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SentenceSimilarityArgs`](../modules#sentencesimilarityargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[sentencesimilarity.returns]]

`Promise`\<`SentenceSimilarityOutput`\>

#### Inherited from[[sentencesimilarity.inherited-from]]

[InferenceClient](InferenceClient).[sentenceSimilarity](InferenceClient#sentencesimilarity)

#### Defined in[[sentencesimilarity.defined-in]]

[inference/src/tasks/nlp/sentenceSimilarity.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/sentenceSimilarity.ts#L12)
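
Under the hood, sentence similarity scores are typically cosine similarities between sentence embeddings. A minimal local illustration of that metric (not the API call itself):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|) — 1 for identical directions,
// 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```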

___

### streamingRequest

▸ **streamingRequest**\<`T`\>(`args`, `options?`): `AsyncGenerator`\<`T`\>

Primitive to make custom inference calls that expect server-sent events, and returns the response through a generator

#### Type parameters[[streamingrequest.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[streamingrequest.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[streamingrequest.returns]]

`AsyncGenerator`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Inherited from[[streamingrequest.inherited-from]]

[InferenceClient](InferenceClient).[streamingRequest](InferenceClient#streamingrequest)

#### Defined in[[streamingrequest.defined-in]]

[inference/src/tasks/custom/streamingRequest.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/streamingRequest.ts#L11)

___

### summarization

▸ **summarization**(`args`, `options?`): `Promise`\<`SummarizationOutput`\>

This task summarizes longer text into shorter text. Be careful: some models have a maximum input length, so they cannot summarize full books, for instance. Choose your model accordingly.

#### Parameters[[summarization.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SummarizationArgs`](../modules#summarizationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[summarization.returns]]

`Promise`\<`SummarizationOutput`\>

#### Inherited from[[summarization.inherited-from]]

[InferenceClient](InferenceClient).[summarization](InferenceClient#summarization)

#### Defined in[[summarization.defined-in]]

[inference/src/tasks/nlp/summarization.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/summarization.ts#L12)

___

### tableQuestionAnswering

▸ **tableQuestionAnswering**(`args`, `options?`): `Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

Don’t know SQL? Don’t want to dive into a large spreadsheet? Ask questions in plain English! Recommended model: google/tapas-base-finetuned-wtq.

#### Parameters[[tablequestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TableQuestionAnsweringArgs`](../modules#tablequestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tablequestionanswering.returns]]

`Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[tablequestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[tableQuestionAnswering](InferenceClient#tablequestionanswering)

#### Defined in[[tablequestionanswering.defined-in]]

[inference/src/tasks/nlp/tableQuestionAnswering.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tableQuestionAnswering.ts#L12)

___

### tabularClassification

▸ **tabularClassification**(`args`, `options?`): `Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

Predicts target label for a given set of features in tabular form.
Typically, you will want to train a classification model on your training data and use it with your new data of the same format.
Example model: vvmnnnkv/wine-quality

#### Parameters[[tabularclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularClassificationArgs`](../modules#tabularclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularclassification.returns]]

`Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

#### Inherited from[[tabularclassification.inherited-from]]

[InferenceClient](InferenceClient).[tabularClassification](InferenceClient#tabularclassification)

#### Defined in[[tabularclassification.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L25)

___

### tabularRegression

▸ **tabularRegression**(`args`, `options?`): `Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

Predicts target value for a given set of features in tabular form.
Typically, you will want to train a regression model on your training data and use it with your new data of the same format.
Example model: scikit-learn/Fish-Weight

#### Parameters[[tabularregression.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularRegressionArgs`](../modules#tabularregressionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularregression.returns]]

`Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

#### Inherited from[[tabularregression.inherited-from]]

[InferenceClient](InferenceClient).[tabularRegression](InferenceClient#tabularregression)

#### Defined in[[tabularregression.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L25)

___

### textClassification

▸ **textClassification**(`args`, `options?`): `Promise`\<`TextClassificationOutput`\>

Usually used for sentiment analysis, this outputs the likelihood of classes for an input. Recommended model: distilbert-base-uncased-finetuned-sst-2-english

#### Parameters[[textclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextClassificationArgs`](../modules#textclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textclassification.returns]]

`Promise`\<`TextClassificationOutput`\>

#### Inherited from[[textclassification.inherited-from]]

[InferenceClient](InferenceClient).[textClassification](InferenceClient#textclassification)

#### Defined in[[textclassification.defined-in]]

[inference/src/tasks/nlp/textClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textClassification.ts#L12)

___

### textGeneration

▸ **textGeneration**(`args`, `options?`): `Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

Use to continue text from a prompt. This is a very generic task. Recommended model: gpt2 (it’s a simple model, but fun to play with).

#### Parameters[[textgeneration.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgeneration.returns]]

`Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

#### Inherited from[[textgeneration.inherited-from]]

[InferenceClient](InferenceClient).[textGeneration](InferenceClient#textgeneration)

#### Defined in[[textgeneration.defined-in]]

[inference/src/tasks/nlp/textGeneration.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGeneration.ts#L13)

___

### textGenerationStream

▸ **textGenerationStream**(`args`, `options?`): `AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

Use to continue text from a prompt. Same as `textGeneration` but returns a generator that can be read one token at a time.

#### Parameters[[textgenerationstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgenerationstream.returns]]

`AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

#### Inherited from[[textgenerationstream.inherited-from]]

[InferenceClient](InferenceClient).[textGenerationStream](InferenceClient#textgenerationstream)

#### Defined in[[textgenerationstream.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L90)
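
A sketch of consuming the generator: `fakeTokens` is a hypothetical stand-in for the `AsyncGenerator` of `TextGenerationStreamOutput` values, where each output carries the newly generated token's text (matching the `text` field of `TextGenerationStreamToken`):

```typescript
// Stand-in stream yielding one token per chunk.
async function* fakeTokens() {
  for (const text of ["Once", " upon", " a", " time"]) {
    yield { token: { text } };
  }
}

// Concatenate token texts as they arrive — same pattern as for the real stream.
async function readAll(stream: AsyncIterable<{ token: { text: string } }>): Promise<string> {
  let out = "";
  for await (const chunk of stream) out += chunk.token.text;
  return out;
}

readAll(fakeTokens()).then((t) => console.log(t)); // "Once upon a time"
```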

___

### textToImage

▸ **textToImage**(`args`, `options?`): `Promise`\<`string`\>

This task reads some text input and outputs an image.
Recommended model: stabilityai/stable-diffusion-2

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType`: ``"url"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`string`\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L18)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Blob`\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"blob"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L22)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Record`\<`string`, `unknown`\>\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"json"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Record`\<`string`, `unknown`\>\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L26)
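
The three signatures above form a TypeScript overload set: the `outputType` option selects which type the promise resolves to. A small stub mapping each option to its documented result representation (descriptive strings stand in for the real return values; `"blob"` is the default):

```typescript
// Mapping from the outputType option to the documented resolved type.
type OutputType = "url" | "blob" | "json";

function resolvedType(outputType: OutputType = "blob"): string {
  switch (outputType) {
    case "url":
      return "string (a URL pointing at the generated image)";
    case "json":
      return "Record<string, unknown> (raw provider JSON)";
    case "blob":
      return "Blob (binary image data)";
  }
}

console.log(resolvedType());      // "Blob (binary image data)"
console.log(resolvedType("url")); // "string (a URL pointing at the generated image)"
```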

___

### textToSpeech

▸ **textToSpeech**(`args`, `options?`): `Promise`\<`Blob`\>

This task synthesizes audio of a voice pronouncing a given text.
Recommended model: espnet/kan-bayashi_ljspeech_vits

#### Parameters[[texttospeech.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | `TextToSpeechArgs` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttospeech.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[texttospeech.inherited-from]]

[InferenceClient](InferenceClient).[textToSpeech](InferenceClient#texttospeech)

#### Defined in[[texttospeech.defined-in]]

[inference/src/tasks/audio/textToSpeech.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/textToSpeech.ts#L15)

___

### textToVideo

▸ **textToVideo**(`args`, `options?`): `Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Parameters[[texttovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToVideoArgs`](../modules#texttovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttovideo.returns]]

`Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Inherited from[[texttovideo.inherited-from]]

[InferenceClient](InferenceClient).[textToVideo](InferenceClient#texttovideo)

#### Defined in[[texttovideo.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L15)

___

### tokenClassification

▸ **tokenClassification**(`args`, `options?`): `Promise`\<`TokenClassificationOutput`\>

Usually used for sentence parsing, either grammatical or Named Entity Recognition (NER), to understand keywords contained within text. Recommended model: dbmdz/bert-large-cased-finetuned-conll03-english

#### Parameters[[tokenclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TokenClassificationArgs`](../modules#tokenclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tokenclassification.returns]]

`Promise`\<`TokenClassificationOutput`\>

#### Inherited from[[tokenclassification.inherited-from]]

[InferenceClient](InferenceClient).[tokenClassification](InferenceClient#tokenclassification)

#### Defined in[[tokenclassification.defined-in]]

[inference/src/tasks/nlp/tokenClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tokenClassification.ts#L12)

___

### translation

▸ **translation**(`args`, `options?`): `Promise`\<`TranslationOutput`\>

This task is well known to translate text from one language to another. Recommended model: Helsinki-NLP/opus-mt-ru-en.

#### Parameters[[translation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TranslationArgs`](../modules#translationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[translation.returns]]

`Promise`\<`TranslationOutput`\>

#### Inherited from[[translation.inherited-from]]

[InferenceClient](InferenceClient).[translation](InferenceClient#translation)

#### Defined in[[translation.defined-in]]

[inference/src/tasks/nlp/translation.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/translation.ts#L11)

___

### visualQuestionAnswering

▸ **visualQuestionAnswering**(`args`, `options?`): `Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

Answers a question on an image. Recommended model: dandelin/vilt-b32-finetuned-vqa.

#### Parameters[[visualquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`VisualQuestionAnsweringArgs`](../modules#visualquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[visualquestionanswering.returns]]

`Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[visualquestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[visualQuestionAnswering](InferenceClient#visualquestionanswering)

#### Defined in[[visualquestionanswering.defined-in]]

[inference/src/tasks/multimodal/visualQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/visualQuestionAnswering.ts#L19)

___

### zeroShotClassification

▸ **zeroShotClassification**(`args`, `options?`): `Promise`\<`ZeroShotClassificationOutput`\>

This task is super useful for trying out classification with zero code: you simply pass a sentence/paragraph and the possible labels for that sentence, and you get a result. Recommended model: facebook/bart-large-mnli.

#### Parameters[[zeroshotclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotClassificationArgs`](../modules#zeroshotclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotclassification.returns]]

`Promise`\<`ZeroShotClassificationOutput`\>

#### Inherited from[[zeroshotclassification.inherited-from]]

[InferenceClient](InferenceClient).[zeroShotClassification](InferenceClient#zeroshotclassification)

#### Defined in[[zeroshotclassification.defined-in]]

[inference/src/tasks/nlp/zeroShotClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/zeroShotClassification.ts#L12)

___

### zeroShotImageClassification

▸ **zeroShotImageClassification**(`args`, `options?`): `Promise`\<`ZeroShotImageClassificationOutput`\>

Classify an image to specified classes.
Recommended model: openai/clip-vit-large-patch14-336

#### Parameters[[zeroshotimageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotImageClassificationArgs`](../modules#zeroshotimageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotimageclassification.returns]]

`Promise`\<`ZeroShotImageClassificationOutput`\>

#### Inherited from[[zeroshotimageclassification.inherited-from]]

[InferenceClient](InferenceClient).[zeroShotImageClassification](InferenceClient#zeroshotimageclassification)

#### Defined in[[zeroshotimageclassification.defined-in]]

[inference/src/tasks/cv/zeroShotImageClassification.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/zeroShotImageClassification.ts#L44)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientEndpoint.md" />

### Class: InferenceClientRoutingError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientRoutingError.md

# Class: InferenceClientRoutingError

Error raised when a request cannot be routed to an inference provider. Inherits from [`InferenceClientError`](InferenceClientError), the base class for all inference-related errors.

## Hierarchy

- [`InferenceClientError`](InferenceClientError)

  ↳ **`InferenceClientRoutingError`**

## Constructors

### constructor

• **new InferenceClientRoutingError**(`message`): [`InferenceClientRoutingError`](InferenceClientRoutingError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |

#### Returns[[constructor.returns]]

[`InferenceClientRoutingError`](InferenceClientRoutingError)

#### Overrides[[constructor.overrides]]

[InferenceClientError](InferenceClientError).[constructor](InferenceClientError#constructor)

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:21](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L21)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

[InferenceClientError](InferenceClientError).[cause](InferenceClientError#cause)

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

[InferenceClientError](InferenceClientError).[message](InferenceClientError#message)

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

[InferenceClientError](InferenceClientError).[name](InferenceClientError#name)

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

[InferenceClientError](InferenceClientError).[stack](InferenceClientError#stack)

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[prepareStackTrace](InferenceClientError#preparestacktrace)

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

[InferenceClientError](InferenceClientError).[stackTraceLimit](InferenceClientError#stacktracelimit)

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[captureStackTrace](InferenceClientError#capturestacktrace)

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientRoutingError.md" />

### Class: InferenceClientInputError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientInputError.md

# Class: InferenceClientInputError

Error raised when the input provided to a task is invalid. Inherits from [`InferenceClientError`](InferenceClientError), the base class for all inference-related errors.

## Hierarchy

- [`InferenceClientError`](InferenceClientError)

  ↳ **`InferenceClientInputError`**

## Constructors

### constructor

• **new InferenceClientInputError**(`message`): [`InferenceClientInputError`](InferenceClientInputError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |

#### Returns[[constructor.returns]]

[`InferenceClientInputError`](InferenceClientInputError)

#### Overrides[[constructor.overrides]]

[InferenceClientError](InferenceClientError).[constructor](InferenceClientError#constructor)

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L14)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

[InferenceClientError](InferenceClientError).[cause](InferenceClientError#cause)

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

[InferenceClientError](InferenceClientError).[message](InferenceClientError#message)

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

[InferenceClientError](InferenceClientError).[name](InferenceClientError#name)

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

[InferenceClientError](InferenceClientError).[stack](InferenceClientError#stack)

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[prepareStackTrace](InferenceClientError#preparestacktrace)

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

[InferenceClientError](InferenceClientError).[stackTraceLimit](InferenceClientError#stacktracelimit)

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

[InferenceClientError](InferenceClientError).[captureStackTrace](InferenceClientError#capturestacktrace)

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientInputError.md" />

### Class: InferenceClient
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClient.md

# Class: InferenceClient

## Hierarchy

- `Task`

  ↳ **`InferenceClient`**

  ↳↳ [`InferenceClientEndpoint`](InferenceClientEndpoint)

  ↳↳ [`HfInference`](HfInference)

## Constructors

### constructor

• **new InferenceClient**(`accessToken?`, `defaultOptions?`): [`InferenceClient`](InferenceClient)

#### Parameters[[constructor.parameters]]

| Name | Type | Default value |
| :------ | :------ | :------ |
| `accessToken` | `string` | `""` |
| `defaultOptions` | [`Options`](../interfaces/Options) & \{ `endpointUrl?`: `string`  } | `{}` |

#### Returns[[constructor.returns]]

[`InferenceClient`](InferenceClient)

#### Defined in[[constructor.defined-in]]

[inference/src/InferenceClient.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L15)

## Properties

### accessToken

• `Private` `Readonly` **accessToken**: `string`

#### Defined in[[accesstoken.defined-in]]

[inference/src/InferenceClient.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L12)

___

### defaultOptions

• `Private` `Readonly` **defaultOptions**: [`Options`](../interfaces/Options)

#### Defined in[[defaultoptions.defined-in]]

[inference/src/InferenceClient.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L13)

## Methods

### audioClassification

▸ **audioClassification**(`args`, `options?`): `Promise`\<`AudioClassificationOutput`\>

This task reads some audio input and outputs the likelihood of classes.
Recommended model: superb/hubert-large-superb-er

#### Parameters[[audioclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioClassificationArgs`](../modules#audioclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audioclassification.returns]]

`Promise`\<`AudioClassificationOutput`\>

#### Defined in[[audioclassification.defined-in]]

[inference/src/tasks/audio/audioClassification.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioClassification.ts#L15)

___

### audioToAudio

▸ **audioToAudio**(`args`, `options?`): `Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

This task reads some audio input and outputs one or multiple audio files.
Example model: speechbrain/sepformer-wham does audio source separation.

#### Parameters[[audiotoaudio.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioToAudioArgs`](../modules#audiotoaudioargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audiotoaudio.returns]]

`Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

#### Defined in[[audiotoaudio.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L39)

___

### automaticSpeechRecognition

▸ **automaticSpeechRecognition**(`args`, `options?`): `Promise`\<`AutomaticSpeechRecognitionOutput`\>

This task reads some audio input and transcribes the spoken words it contains.
Recommended model (English): facebook/wav2vec2-large-960h-lv60-self

#### Parameters[[automaticspeechrecognition.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AutomaticSpeechRecognitionArgs`](../modules#automaticspeechrecognitionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[automaticspeechrecognition.returns]]

`Promise`\<`AutomaticSpeechRecognitionOutput`\>

#### Defined in[[automaticspeechrecognition.defined-in]]

[inference/src/tasks/audio/automaticSpeechRecognition.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/automaticSpeechRecognition.ts#L13)

___

### chatCompletion

▸ **chatCompletion**(`args`, `options?`): `Promise`\<`ChatCompletionOutput`\>

Use the chat completion endpoint to generate a response to a list of messages, following the OpenAI-compatible chat completion API (non-streaming).

#### Parameters[[chatcompletion.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletion.returns]]

`Promise`\<`ChatCompletionOutput`\>

#### Defined in[[chatcompletion.defined-in]]

[inference/src/tasks/nlp/chatCompletion.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletion.ts#L12)

___

### chatCompletionStream

▸ **chatCompletionStream**(`args`, `options?`): `AsyncGenerator`\<`ChatCompletionStreamOutput`\>

Same as `chatCompletion`, but returns an async generator that can be read one chunk at a time as the response streams in.

#### Parameters[[chatcompletionstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletionstream.returns]]

`AsyncGenerator`\<`ChatCompletionStreamOutput`\>

#### Defined in[[chatcompletionstream.defined-in]]

[inference/src/tasks/nlp/chatCompletionStream.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletionStream.ts#L12)

___

### documentQuestionAnswering

▸ **documentQuestionAnswering**(`args`, `options?`): `Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

Answers a question on a document image. Recommended model: impira/layoutlm-document-qa.

#### Parameters[[documentquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`DocumentQuestionAnsweringArgs`](../modules#documentquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[documentquestionanswering.returns]]

`Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

#### Defined in[[documentquestionanswering.defined-in]]

[inference/src/tasks/multimodal/documentQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/documentQuestionAnswering.ts#L19)

___

### endpoint

▸ **endpoint**(`endpointUrl`): [`InferenceClient`](InferenceClient)

Returns a new instance of InferenceClient tied to a specified endpoint.

Provided mostly for backward compatibility.

#### Parameters[[endpoint.parameters]]

| Name | Type |
| :------ | :------ |
| `endpointUrl` | `string` |

#### Returns[[endpoint.returns]]

[`InferenceClient`](InferenceClient)

#### Defined in[[endpoint.defined-in]]

[inference/src/InferenceClient.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L46)

___

### featureExtraction

▸ **featureExtraction**(`args`, `options?`): `Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

This task reads some text and outputs raw float values (an embedding), usually consumed by a semantic database or semantic search.

#### Parameters[[featureextraction.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FeatureExtractionArgs`](../modules#featureextractionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[featureextraction.returns]]

`Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

#### Defined in[[featureextraction.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L22)

___

### fillMask

▸ **fillMask**(`args`, `options?`): `Promise`\<`FillMaskOutput`\>

Fills in a hole in a sentence with a missing word (a token, to be precise). That’s the base task for BERT-style models.

#### Parameters[[fillmask.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FillMaskArgs`](../modules#fillmaskargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[fillmask.returns]]

`Promise`\<`FillMaskOutput`\>

#### Defined in[[fillmask.defined-in]]

[inference/src/tasks/nlp/fillMask.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/fillMask.ts#L12)

___

### imageClassification

▸ **imageClassification**(`args`, `options?`): `Promise`\<`ImageClassificationOutput`\>

This task reads some image input and outputs the likelihood of classes.
Recommended model: google/vit-base-patch16-224

#### Parameters[[imageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageClassificationArgs`](../modules#imageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imageclassification.returns]]

`Promise`\<`ImageClassificationOutput`\>

#### Defined in[[imageclassification.defined-in]]

[inference/src/tasks/cv/imageClassification.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageClassification.ts#L14)

___

### imageSegmentation

▸ **imageSegmentation**(`args`, `options?`): `Promise`\<`ImageSegmentationOutput`\>

This task reads some image input and outputs labeled segmentation masks for the detected regions.
Recommended model: facebook/detr-resnet-50-panoptic

#### Parameters[[imagesegmentation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageSegmentationArgs`](../modules#imagesegmentationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagesegmentation.returns]]

`Promise`\<`ImageSegmentationOutput`\>

#### Defined in[[imagesegmentation.defined-in]]

[inference/src/tasks/cv/imageSegmentation.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageSegmentation.ts#L14)

___

### imageToImage

▸ **imageToImage**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input (optionally guided by a text prompt) and outputs a transformed image.
Recommended model: lllyasviel/sd-controlnet-depth

#### Parameters[[imagetoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToImageArgs`](../modules#imagetoimageargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetoimage.returns]]

`Promise`\<`Blob`\>

#### Defined in[[imagetoimage.defined-in]]

[inference/src/tasks/cv/imageToImage.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToImage.ts#L14)

___

### imageToText

▸ **imageToText**(`args`, `options?`): `Promise`\<`ImageToTextOutput`\>

This task reads some image input and outputs the text caption.

#### Parameters[[imagetotext.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToTextArgs`](../modules#imagetotextargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetotext.returns]]

`Promise`\<`ImageToTextOutput`\>

#### Defined in[[imagetotext.defined-in]]

[inference/src/tasks/cv/imageToText.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToText.ts#L13)

___

### imageToVideo

▸ **imageToVideo**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input and outputs a generated video.
Recommended model: Wan-AI/Wan2.1-I2V-14B-720P

#### Parameters[[imagetovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToVideoArgs`](../modules#imagetovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetovideo.returns]]

`Promise`\<`Blob`\>

#### Defined in[[imagetovideo.defined-in]]

[inference/src/tasks/cv/imageToVideo.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToVideo.ts#L14)

___

### objectDetection

▸ **objectDetection**(`args`, `options?`): `Promise`\<`ObjectDetectionOutput`\>

This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects.
Recommended model: facebook/detr-resnet-50

#### Parameters[[objectdetection.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ObjectDetectionArgs`](../modules#objectdetectionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[objectdetection.returns]]

`Promise`\<`ObjectDetectionOutput`\>

#### Defined in[[objectdetection.defined-in]]

[inference/src/tasks/cv/objectDetection.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/objectDetection.ts#L14)

___

### questionAnswering

▸ **questionAnswering**(`args`, `options?`): `Promise`\<`QuestionAnsweringOutput`[`number`]\>

Want a know-it-all bot that can answer any question? This task extracts the answer to a question from a given context. Recommended model: deepset/roberta-base-squad2

#### Parameters[[questionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`QuestionAnsweringArgs`](../modules#questionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[questionanswering.returns]]

`Promise`\<`QuestionAnsweringOutput`[`number`]\>

#### Defined in[[questionanswering.defined-in]]

[inference/src/tasks/nlp/questionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/questionAnswering.ts#L13)

___

### request

▸ **request**\<`T`\>(`args`, `options?`): `Promise`\<`T`\>

Primitive to make custom calls to the inference provider

#### Type parameters[[request.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[request.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[request.returns]]

`Promise`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Defined in[[request.defined-in]]

[inference/src/tasks/custom/request.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/request.ts#L11)

___

### sentenceSimilarity

▸ **sentenceSimilarity**(`args`, `options?`): `Promise`\<`SentenceSimilarityOutput`\>

Calculate the semantic similarity between one text and a list of other sentences by comparing their embeddings.

#### Parameters[[sentencesimilarity.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SentenceSimilarityArgs`](../modules#sentencesimilarityargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[sentencesimilarity.returns]]

`Promise`\<`SentenceSimilarityOutput`\>

#### Defined in[[sentencesimilarity.defined-in]]

[inference/src/tasks/nlp/sentenceSimilarity.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/sentenceSimilarity.ts#L12)

___

### streamingRequest

▸ **streamingRequest**\<`T`\>(`args`, `options?`): `AsyncGenerator`\<`T`\>

Primitive to make custom inference calls that expect server-sent events, and returns the response through a generator

#### Type parameters[[streamingrequest.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[streamingrequest.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[streamingrequest.returns]]

`AsyncGenerator`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Defined in[[streamingrequest.defined-in]]

[inference/src/tasks/custom/streamingRequest.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/streamingRequest.ts#L11)

___

### summarization

▸ **summarization**(`args`, `options?`): `Promise`\<`SummarizationOutput`\>

This task summarizes longer text into shorter text. Note that some models have a maximum input length, so they cannot summarize very long documents (a full book, for instance); choose your model accordingly.

#### Parameters[[summarization.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SummarizationArgs`](../modules#summarizationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[summarization.returns]]

`Promise`\<`SummarizationOutput`\>

#### Defined in[[summarization.defined-in]]

[inference/src/tasks/nlp/summarization.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/summarization.ts#L12)

___

### tableQuestionAnswering

▸ **tableQuestionAnswering**(`args`, `options?`): `Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

Don’t know SQL? Don’t want to dive into a large spreadsheet? Ask questions about tabular data in plain English. Recommended model: google/tapas-base-finetuned-wtq.

#### Parameters[[tablequestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TableQuestionAnsweringArgs`](../modules#tablequestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tablequestionanswering.returns]]

`Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

#### Defined in[[tablequestionanswering.defined-in]]

[inference/src/tasks/nlp/tableQuestionAnswering.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tableQuestionAnswering.ts#L12)

___

### tabularClassification

▸ **tabularClassification**(`args`, `options?`): `Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

Predicts target label for a given set of features in tabular form.
Typically, you will want to train a classification model on your training data and use it with your new data of the same format.
Example model: vvmnnnkv/wine-quality

#### Parameters[[tabularclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularClassificationArgs`](../modules#tabularclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularclassification.returns]]

`Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

#### Defined in[[tabularclassification.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L25)

___

### tabularRegression

▸ **tabularRegression**(`args`, `options?`): `Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

Predicts target value for a given set of features in tabular form.
Typically, you will want to train a regression model on your training data and use it with your new data of the same format.
Example model: scikit-learn/Fish-Weight

#### Parameters[[tabularregression.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularRegressionArgs`](../modules#tabularregressionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularregression.returns]]

`Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

#### Defined in[[tabularregression.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L25)

___

### textClassification

▸ **textClassification**(`args`, `options?`): `Promise`\<`TextClassificationOutput`\>

Usually used for sentiment analysis, this outputs the likelihood of classes for an input text. Recommended model: distilbert-base-uncased-finetuned-sst-2-english

#### Parameters[[textclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextClassificationArgs`](../modules#textclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textclassification.returns]]

`Promise`\<`TextClassificationOutput`\>

#### Defined in[[textclassification.defined-in]]

[inference/src/tasks/nlp/textClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textClassification.ts#L12)

___

### textGeneration

▸ **textGeneration**(`args`, `options?`): `Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

Use to continue text from a prompt. This is a very generic task. Recommended model: gpt2 (it’s a simple model, but fun to play with).

#### Parameters[[textgeneration.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgeneration.returns]]

`Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

#### Defined in[[textgeneration.defined-in]]

[inference/src/tasks/nlp/textGeneration.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGeneration.ts#L13)

___

### textGenerationStream

▸ **textGenerationStream**(`args`, `options?`): `AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

Use to continue text from a prompt. Same as `textGeneration`, but returns an async generator that can be read one token at a time.

#### Parameters[[textgenerationstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgenerationstream.returns]]

`AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

#### Defined in[[textgenerationstream.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L90)

___

### textToImage

▸ **textToImage**(`args`, `options?`): `Promise`\<`string`\>

This task reads some text input and outputs an image.
Recommended model: stabilityai/stable-diffusion-2

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType`: ``"url"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`string`\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L18)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Blob`\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"blob"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Blob`\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L22)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Record`\<`string`, `unknown`\>\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"json"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Record`\<`string`, `unknown`\>\>

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L26)

___

### textToSpeech

▸ **textToSpeech**(`args`, `options?`): `Promise`\<`Blob`\>

This task synthesizes audio of a voice pronouncing a given text.
Recommended model: espnet/kan-bayashi_ljspeech_vits

#### Parameters[[texttospeech.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | `TextToSpeechArgs` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttospeech.returns]]

`Promise`\<`Blob`\>

#### Defined in[[texttospeech.defined-in]]

[inference/src/tasks/audio/textToSpeech.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/textToSpeech.ts#L15)

___

### textToVideo

▸ **textToVideo**(`args`, `options?`): `Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Parameters[[texttovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToVideoArgs`](../modules#texttovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttovideo.returns]]

`Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Defined in[[texttovideo.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L15)

___

### tokenClassification

▸ **tokenClassification**(`args`, `options?`): `Promise`\<`TokenClassificationOutput`\>

Usually used for sentence parsing, either grammatical or Named Entity Recognition (NER), to understand keywords contained within text. Recommended model: dbmdz/bert-large-cased-finetuned-conll03-english

#### Parameters[[tokenclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TokenClassificationArgs`](../modules#tokenclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tokenclassification.returns]]

`Promise`\<`TokenClassificationOutput`\>

#### Defined in[[tokenclassification.defined-in]]

[inference/src/tasks/nlp/tokenClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tokenClassification.ts#L12)

___

### translation

▸ **translation**(`args`, `options?`): `Promise`\<`TranslationOutput`\>

This task translates text from one language to another. Recommended model: Helsinki-NLP/opus-mt-ru-en.

#### Parameters[[translation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TranslationArgs`](../modules#translationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[translation.returns]]

`Promise`\<`TranslationOutput`\>

#### Defined in[[translation.defined-in]]

[inference/src/tasks/nlp/translation.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/translation.ts#L11)

___

### visualQuestionAnswering

▸ **visualQuestionAnswering**(`args`, `options?`): `Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

Answers a question on an image. Recommended model: dandelin/vilt-b32-finetuned-vqa.

#### Parameters[[visualquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`VisualQuestionAnsweringArgs`](../modules#visualquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[visualquestionanswering.returns]]

`Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

#### Defined in[[visualquestionanswering.defined-in]]

[inference/src/tasks/multimodal/visualQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/visualQuestionAnswering.ts#L19)

___

### zeroShotClassification

▸ **zeroShotClassification**(`args`, `options?`): `Promise`\<`ZeroShotClassificationOutput`\>

This task lets you try out classification with zero code: you simply pass a sentence or paragraph and the possible labels for it, and you get a result. Recommended model: facebook/bart-large-mnli.

#### Parameters[[zeroshotclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotClassificationArgs`](../modules#zeroshotclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotclassification.returns]]

`Promise`\<`ZeroShotClassificationOutput`\>

#### Defined in[[zeroshotclassification.defined-in]]

[inference/src/tasks/nlp/zeroShotClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/zeroShotClassification.ts#L12)

___

### zeroShotImageClassification

▸ **zeroShotImageClassification**(`args`, `options?`): `Promise`\<`ZeroShotImageClassificationOutput`\>

Classify an image to specified classes.
Recommended model: openai/clip-vit-large-patch14-336

#### Parameters[[zeroshotimageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotImageClassificationArgs`](../modules#zeroshotimageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotimageclassification.returns]]

`Promise`\<`ZeroShotImageClassificationOutput`\>

#### Defined in[[zeroshotimageclassification.defined-in]]

[inference/src/tasks/cv/zeroShotImageClassification.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/zeroShotImageClassification.ts#L44)



### Class: HfInference
https://huggingface.co/docs/huggingface.js/inference/classes/HfInference.md

# Class: HfInference

For backward compatibility only; it will be removed soon.

**`Deprecated`**

Use [`InferenceClient`](InferenceClient) instead.

## Hierarchy

- [`InferenceClient`](InferenceClient)

  ↳ **`HfInference`**

## Constructors

### constructor

• **new HfInference**(`accessToken?`, `defaultOptions?`): [`HfInference`](HfInference)

#### Parameters[[constructor.parameters]]

| Name | Type | Default value |
| :------ | :------ | :------ |
| `accessToken` | `string` | `""` |
| `defaultOptions` | [`Options`](../interfaces/Options) & \{ `endpointUrl?`: `string`  } | `{}` |

#### Returns[[constructor.returns]]

[`HfInference`](HfInference)

#### Inherited from[[constructor.inherited-from]]

[InferenceClient](InferenceClient).[constructor](InferenceClient#constructor)

#### Defined in[[constructor.defined-in]]

[inference/src/InferenceClient.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L15)

## Methods

### audioClassification

▸ **audioClassification**(`args`, `options?`): `Promise`\<`AudioClassificationOutput`\>

This task reads some audio input and outputs the likelihood of classes.
Recommended model: superb/hubert-large-superb-er

#### Parameters[[audioclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioClassificationArgs`](../modules#audioclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audioclassification.returns]]

`Promise`\<`AudioClassificationOutput`\>

#### Inherited from[[audioclassification.inherited-from]]

[InferenceClient](InferenceClient).[audioClassification](InferenceClient#audioclassification)

#### Defined in[[audioclassification.defined-in]]

[inference/src/tasks/audio/audioClassification.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioClassification.ts#L15)

___

### audioToAudio

▸ **audioToAudio**(`args`, `options?`): `Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

This task reads some audio input and outputs one or multiple audio files.
Example model: speechbrain/sepformer-wham does audio source separation.

#### Parameters[[audiotoaudio.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AudioToAudioArgs`](../modules#audiotoaudioargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[audiotoaudio.returns]]

`Promise`\<[`AudioToAudioOutput`](../interfaces/AudioToAudioOutput)[]\>

#### Inherited from[[audiotoaudio.inherited-from]]

[InferenceClient](InferenceClient).[audioToAudio](InferenceClient#audiotoaudio)

#### Defined in[[audiotoaudio.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L39)

___

### automaticSpeechRecognition

▸ **automaticSpeechRecognition**(`args`, `options?`): `Promise`\<`AutomaticSpeechRecognitionOutput`\>

This task reads some audio input and outputs the words spoken in the audio file.
Recommended model (English): facebook/wav2vec2-large-960h-lv60-self

#### Parameters[[automaticspeechrecognition.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`AutomaticSpeechRecognitionArgs`](../modules#automaticspeechrecognitionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[automaticspeechrecognition.returns]]

`Promise`\<`AutomaticSpeechRecognitionOutput`\>

#### Inherited from[[automaticspeechrecognition.inherited-from]]

[InferenceClient](InferenceClient).[automaticSpeechRecognition](InferenceClient#automaticspeechrecognition)

#### Defined in[[automaticspeechrecognition.defined-in]]

[inference/src/tasks/audio/automaticSpeechRecognition.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/automaticSpeechRecognition.ts#L13)

___

### chatCompletion

▸ **chatCompletion**(`args`, `options?`): `Promise`\<`ChatCompletionOutput`\>

Use the chat completion endpoint to generate a response to a prompt, using the OpenAI-compatible chat completion API (non-streaming).

#### Parameters[[chatcompletion.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletion.returns]]

`Promise`\<`ChatCompletionOutput`\>

#### Inherited from[[chatcompletion.inherited-from]]

[InferenceClient](InferenceClient).[chatCompletion](InferenceClient#chatcompletion)

#### Defined in[[chatcompletion.defined-in]]

[inference/src/tasks/nlp/chatCompletion.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletion.ts#L12)

___

### chatCompletionStream

▸ **chatCompletionStream**(`args`, `options?`): `AsyncGenerator`\<`ChatCompletionStreamOutput`\>

Same as `chatCompletion`, but returns a generator that yields the response one token at a time.

#### Parameters[[chatcompletionstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & `ChatCompletionInput` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[chatcompletionstream.returns]]

`AsyncGenerator`\<`ChatCompletionStreamOutput`\>

#### Inherited from[[chatcompletionstream.inherited-from]]

[InferenceClient](InferenceClient).[chatCompletionStream](InferenceClient#chatcompletionstream)

#### Defined in[[chatcompletionstream.defined-in]]

[inference/src/tasks/nlp/chatCompletionStream.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/chatCompletionStream.ts#L12)

___

### documentQuestionAnswering

▸ **documentQuestionAnswering**(`args`, `options?`): `Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

Answers a question on a document image. Recommended model: impira/layoutlm-document-qa.

#### Parameters[[documentquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`DocumentQuestionAnsweringArgs`](../modules#documentquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[documentquestionanswering.returns]]

`Promise`\<`DocumentQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[documentquestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[documentQuestionAnswering](InferenceClient#documentquestionanswering)

#### Defined in[[documentquestionanswering.defined-in]]

[inference/src/tasks/multimodal/documentQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/documentQuestionAnswering.ts#L19)

___

### endpoint

▸ **endpoint**(`endpointUrl`): [`InferenceClient`](InferenceClient)

Returns a new instance of InferenceClient tied to a specified endpoint.

Kept mostly for backward compatibility.

#### Parameters[[endpoint.parameters]]

| Name | Type |
| :------ | :------ |
| `endpointUrl` | `string` |

#### Returns[[endpoint.returns]]

[`InferenceClient`](InferenceClient)

#### Inherited from[[endpoint.inherited-from]]

[InferenceClient](InferenceClient).[endpoint](InferenceClient#endpoint)

#### Defined in[[endpoint.defined-in]]

[inference/src/InferenceClient.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/InferenceClient.ts#L46)

___

### featureExtraction

▸ **featureExtraction**(`args`, `options?`): `Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

This task reads some text and outputs raw float values that are usually consumed as part of a semantic database or semantic search.

#### Parameters[[featureextraction.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FeatureExtractionArgs`](../modules#featureextractionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[featureextraction.returns]]

`Promise`\<[`FeatureExtractionOutput`](../modules#featureextractionoutput)\>

#### Inherited from[[featureextraction.inherited-from]]

[InferenceClient](InferenceClient).[featureExtraction](InferenceClient#featureextraction)

#### Defined in[[featureextraction.defined-in]]

[inference/src/tasks/nlp/featureExtraction.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/featureExtraction.ts#L22)

___

### fillMask

▸ **fillMask**(`args`, `options?`): `Promise`\<`FillMaskOutput`\>

Fills in a hole with a missing word (a token, to be precise). This is the base task for BERT models.

#### Parameters[[fillmask.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`FillMaskArgs`](../modules#fillmaskargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[fillmask.returns]]

`Promise`\<`FillMaskOutput`\>

#### Inherited from[[fillmask.inherited-from]]

[InferenceClient](InferenceClient).[fillMask](InferenceClient#fillmask)

#### Defined in[[fillmask.defined-in]]

[inference/src/tasks/nlp/fillMask.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/fillMask.ts#L12)

___

### imageClassification

▸ **imageClassification**(`args`, `options?`): `Promise`\<`ImageClassificationOutput`\>

This task reads some image input and outputs the likelihood of classes.
Recommended model: google/vit-base-patch16-224

#### Parameters[[imageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageClassificationArgs`](../modules#imageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imageclassification.returns]]

`Promise`\<`ImageClassificationOutput`\>

#### Inherited from[[imageclassification.inherited-from]]

[InferenceClient](InferenceClient).[imageClassification](InferenceClient#imageclassification)

#### Defined in[[imageclassification.defined-in]]

[inference/src/tasks/cv/imageClassification.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageClassification.ts#L14)

___

### imageSegmentation

▸ **imageSegmentation**(`args`, `options?`): `Promise`\<`ImageSegmentationOutput`\>

This task reads some image input and outputs segmentation masks along with the likelihood of classes for the detected segments.
Recommended model: facebook/detr-resnet-50-panoptic

#### Parameters[[imagesegmentation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageSegmentationArgs`](../modules#imagesegmentationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagesegmentation.returns]]

`Promise`\<`ImageSegmentationOutput`\>

#### Inherited from[[imagesegmentation.inherited-from]]

[InferenceClient](InferenceClient).[imageSegmentation](InferenceClient#imagesegmentation)

#### Defined in[[imagesegmentation.defined-in]]

[inference/src/tasks/cv/imageSegmentation.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageSegmentation.ts#L14)

___

### imageToImage

▸ **imageToImage**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input and outputs a new image.
Recommended model: lllyasviel/sd-controlnet-depth

#### Parameters[[imagetoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToImageArgs`](../modules#imagetoimageargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetoimage.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[imagetoimage.inherited-from]]

[InferenceClient](InferenceClient).[imageToImage](InferenceClient#imagetoimage)

#### Defined in[[imagetoimage.defined-in]]

[inference/src/tasks/cv/imageToImage.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToImage.ts#L14)

___

### imageToText

▸ **imageToText**(`args`, `options?`): `Promise`\<`ImageToTextOutput`\>

This task reads some image input and outputs the text caption.

#### Parameters[[imagetotext.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToTextArgs`](../modules#imagetotextargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetotext.returns]]

`Promise`\<`ImageToTextOutput`\>

#### Inherited from[[imagetotext.inherited-from]]

[InferenceClient](InferenceClient).[imageToText](InferenceClient#imagetotext)

#### Defined in[[imagetotext.defined-in]]

[inference/src/tasks/cv/imageToText.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToText.ts#L13)

___

### imageToVideo

▸ **imageToVideo**(`args`, `options?`): `Promise`\<`Blob`\>

This task reads an image input and outputs a video.
Recommended model: Wan-AI/Wan2.1-I2V-14B-720P

#### Parameters[[imagetovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ImageToVideoArgs`](../modules#imagetovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[imagetovideo.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[imagetovideo.inherited-from]]

[InferenceClient](InferenceClient).[imageToVideo](InferenceClient#imagetovideo)

#### Defined in[[imagetovideo.defined-in]]

[inference/src/tasks/cv/imageToVideo.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/imageToVideo.ts#L14)

___

### objectDetection

▸ **objectDetection**(`args`, `options?`): `Promise`\<`ObjectDetectionOutput`\>

This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects.
Recommended model: facebook/detr-resnet-50

#### Parameters[[objectdetection.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ObjectDetectionArgs`](../modules#objectdetectionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[objectdetection.returns]]

`Promise`\<`ObjectDetectionOutput`\>

#### Inherited from[[objectdetection.inherited-from]]

[InferenceClient](InferenceClient).[objectDetection](InferenceClient#objectdetection)

#### Defined in[[objectdetection.defined-in]]

[inference/src/tasks/cv/objectDetection.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/objectDetection.ts#L14)

___

### questionAnswering

▸ **questionAnswering**(`args`, `options?`): `Promise`\<`QuestionAnsweringOutput`[`number`]\>

Want to have a nice know-it-all bot that can answer any question? Recommended model: deepset/roberta-base-squad2

#### Parameters[[questionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`QuestionAnsweringArgs`](../modules#questionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[questionanswering.returns]]

`Promise`\<`QuestionAnsweringOutput`[`number`]\>

#### Inherited from[[questionanswering.inherited-from]]

[InferenceClient](InferenceClient).[questionAnswering](InferenceClient#questionanswering)

#### Defined in[[questionanswering.defined-in]]

[inference/src/tasks/nlp/questionAnswering.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/questionAnswering.ts#L13)

___

### request

▸ **request**\<`T`\>(`args`, `options?`): `Promise`\<`T`\>

Primitive to make custom calls to the inference provider

#### Type parameters[[request.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[request.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[request.returns]]

`Promise`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Inherited from[[request.inherited-from]]

[InferenceClient](InferenceClient).[request](InferenceClient#request)

#### Defined in[[request.defined-in]]

[inference/src/tasks/custom/request.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/request.ts#L11)

___

### sentenceSimilarity

▸ **sentenceSimilarity**(`args`, `options?`): `Promise`\<`SentenceSimilarityOutput`\>

Calculate the semantic similarity between one text and a list of other sentences by comparing their embeddings.

#### Parameters[[sentencesimilarity.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SentenceSimilarityArgs`](../modules#sentencesimilarityargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[sentencesimilarity.returns]]

`Promise`\<`SentenceSimilarityOutput`\>

#### Inherited from[[sentencesimilarity.inherited-from]]

[InferenceClient](InferenceClient).[sentenceSimilarity](InferenceClient#sentencesimilarity)

#### Defined in[[sentencesimilarity.defined-in]]

[inference/src/tasks/nlp/sentenceSimilarity.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/sentenceSimilarity.ts#L12)

___

### streamingRequest

▸ **streamingRequest**\<`T`\>(`args`, `options?`): `AsyncGenerator`\<`T`\>

Primitive to make custom inference calls that expect server-sent events; the response is returned through a generator.

#### Type parameters[[streamingrequest.type-parameters]]

| Name |
| :------ |
| `T` |

#### Parameters[[streamingrequest.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`RequestArgs`](../modules#requestargs) |
| `options?` | [`Options`](../interfaces/Options) & \{ `task?`: [`InferenceTask`](../modules#inferencetask)  } |

#### Returns[[streamingrequest.returns]]

`AsyncGenerator`\<`T`\>

**`Deprecated`**

Use specific task functions instead. This function will be removed in a future version.

#### Inherited from[[streamingrequest.inherited-from]]

[InferenceClient](InferenceClient).[streamingRequest](InferenceClient#streamingrequest)

#### Defined in[[streamingrequest.defined-in]]

[inference/src/tasks/custom/streamingRequest.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/streamingRequest.ts#L11)

___

### summarization

▸ **summarization**(`args`, `options?`): `Promise`\<`SummarizationOutput`\>

This task summarizes longer text into shorter text. Be careful: some models have a maximum input length, so they cannot summarize a full book, for instance. Choose your model accordingly.

#### Parameters[[summarization.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`SummarizationArgs`](../modules#summarizationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[summarization.returns]]

`Promise`\<`SummarizationOutput`\>

#### Inherited from[[summarization.inherited-from]]

[InferenceClient](InferenceClient).[summarization](InferenceClient#summarization)

#### Defined in[[summarization.defined-in]]

[inference/src/tasks/nlp/summarization.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/summarization.ts#L12)

___

### tableQuestionAnswering

▸ **tableQuestionAnswering**(`args`, `options?`): `Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

Don’t know SQL? Don’t want to dive into a large spreadsheet? Ask questions in plain English! Recommended model: google/tapas-base-finetuned-wtq.

#### Parameters[[tablequestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TableQuestionAnsweringArgs`](../modules#tablequestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tablequestionanswering.returns]]

`Promise`\<`TableQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[tablequestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[tableQuestionAnswering](InferenceClient#tablequestionanswering)

#### Defined in[[tablequestionanswering.defined-in]]

[inference/src/tasks/nlp/tableQuestionAnswering.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tableQuestionAnswering.ts#L12)

___

### tabularClassification

▸ **tabularClassification**(`args`, `options?`): `Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

Predicts target label for a given set of features in tabular form.
Typically, you will want to train a classification model on your training data and use it with your new data of the same format.
Example model: vvmnnnkv/wine-quality

#### Parameters[[tabularclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularClassificationArgs`](../modules#tabularclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularclassification.returns]]

`Promise`\<[`TabularClassificationOutput`](../modules#tabularclassificationoutput)\>

#### Inherited from[[tabularclassification.inherited-from]]

[InferenceClient](InferenceClient).[tabularClassification](InferenceClient#tabularclassification)

#### Defined in[[tabularclassification.defined-in]]

[inference/src/tasks/tabular/tabularClassification.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularClassification.ts#L25)

___

### tabularRegression

▸ **tabularRegression**(`args`, `options?`): `Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

Predicts target value for a given set of features in tabular form.
Typically, you will want to train a regression model on your training data and use it with your new data of the same format.
Example model: scikit-learn/Fish-Weight

#### Parameters[[tabularregression.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TabularRegressionArgs`](../modules#tabularregressionargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tabularregression.returns]]

`Promise`\<[`TabularRegressionOutput`](../modules#tabularregressionoutput)\>

#### Inherited from[[tabularregression.inherited-from]]

[InferenceClient](InferenceClient).[tabularRegression](InferenceClient#tabularregression)

#### Defined in[[tabularregression.defined-in]]

[inference/src/tasks/tabular/tabularRegression.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/tabular/tabularRegression.ts#L25)

___

### textClassification

▸ **textClassification**(`args`, `options?`): `Promise`\<`TextClassificationOutput`\>

Usually used for sentiment analysis, this task outputs the likelihood of classes for an input. Recommended model: distilbert-base-uncased-finetuned-sst-2-english

#### Parameters[[textclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextClassificationArgs`](../modules#textclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textclassification.returns]]

`Promise`\<`TextClassificationOutput`\>

#### Inherited from[[textclassification.inherited-from]]

[InferenceClient](InferenceClient).[textClassification](InferenceClient#textclassification)

#### Defined in[[textclassification.defined-in]]

[inference/src/tasks/nlp/textClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textClassification.ts#L12)

___

### textGeneration

▸ **textGeneration**(`args`, `options?`): `Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

Use to continue text from a prompt. This is a very generic task. Recommended model: gpt2 (it’s a simple model, but fun to play with).

#### Parameters[[textgeneration.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgeneration.returns]]

`Promise`\<[`TextGenerationOutput`](../interfaces/TextGenerationOutput)\>

#### Inherited from[[textgeneration.inherited-from]]

[InferenceClient](InferenceClient).[textGeneration](InferenceClient#textgeneration)

#### Defined in[[textgeneration.defined-in]]

[inference/src/tasks/nlp/textGeneration.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGeneration.ts#L13)

___

### textGenerationStream

▸ **textGenerationStream**(`args`, `options?`): `AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

Use to continue text from a prompt. Same as `textGeneration`, but returns a generator that can be read one token at a time.

#### Parameters[[textgenerationstream.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`BaseArgs`](../interfaces/BaseArgs) & [`TextGenerationInput`](../interfaces/TextGenerationInput) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[textgenerationstream.returns]]

`AsyncGenerator`\<[`TextGenerationStreamOutput`](../interfaces/TextGenerationStreamOutput)\>

#### Inherited from[[textgenerationstream.inherited-from]]

[InferenceClient](InferenceClient).[textGenerationStream](InferenceClient#textgenerationstream)

#### Defined in[[textgenerationstream.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L90)

___

### textToImage

▸ **textToImage**(`args`, `options?`): `Promise`\<`string`\>

This task reads some text input and outputs an image.
Recommended model: stabilityai/stable-diffusion-2

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType`: ``"url"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`string`\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L18)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Blob`\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"blob"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L22)

▸ **textToImage**(`args`, `options?`): `Promise`\<`Record`\<`string`, `unknown`\>\>

#### Parameters[[texttoimage.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToImageArgs`](../modules#texttoimageargs) |
| `options?` | `TextToImageOptions` & \{ `outputType?`: ``"json"``  } |

#### Returns[[texttoimage.returns]]

`Promise`\<`Record`\<`string`, `unknown`\>\>

#### Inherited from[[texttoimage.inherited-from]]

[InferenceClient](InferenceClient).[textToImage](InferenceClient#texttoimage)

#### Defined in[[texttoimage.defined-in]]

[inference/src/tasks/cv/textToImage.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToImage.ts#L26)

___

### textToSpeech

▸ **textToSpeech**(`args`, `options?`): `Promise`\<`Blob`\>

This task synthesizes audio of a voice pronouncing a given text.
Recommended model: espnet/kan-bayashi_ljspeech_vits

#### Parameters[[texttospeech.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | `TextToSpeechArgs` |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttospeech.returns]]

`Promise`\<`Blob`\>

#### Inherited from[[texttospeech.inherited-from]]

[InferenceClient](InferenceClient).[textToSpeech](InferenceClient#texttospeech)

#### Defined in[[texttospeech.defined-in]]

[inference/src/tasks/audio/textToSpeech.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/textToSpeech.ts#L15)

___

### textToVideo

▸ **textToVideo**(`args`, `options?`): `Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Parameters[[texttovideo.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TextToVideoArgs`](../modules#texttovideoargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[texttovideo.returns]]

`Promise`\<[`TextToVideoOutput`](../modules#texttovideooutput)\>

#### Inherited from[[texttovideo.inherited-from]]

[InferenceClient](InferenceClient).[textToVideo](InferenceClient#texttovideo)

#### Defined in[[texttovideo.defined-in]]

[inference/src/tasks/cv/textToVideo.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/textToVideo.ts#L15)

___

### tokenClassification

▸ **tokenClassification**(`args`, `options?`): `Promise`\<`TokenClassificationOutput`\>

Usually used for sentence parsing, either grammatical or Named Entity Recognition (NER), to understand keywords contained within text. Recommended model: dbmdz/bert-large-cased-finetuned-conll03-english

#### Parameters[[tokenclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TokenClassificationArgs`](../modules#tokenclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[tokenclassification.returns]]

`Promise`\<`TokenClassificationOutput`\>

#### Inherited from[[tokenclassification.inherited-from]]

[InferenceClient](InferenceClient).[tokenClassification](InferenceClient#tokenclassification)

#### Defined in[[tokenclassification.defined-in]]

[inference/src/tasks/nlp/tokenClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/tokenClassification.ts#L12)

___

### translation

▸ **translation**(`args`, `options?`): `Promise`\<`TranslationOutput`\>

This task translates text from one language to another. Recommended model: Helsinki-NLP/opus-mt-ru-en.

#### Parameters[[translation.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`TranslationArgs`](../modules#translationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[translation.returns]]

`Promise`\<`TranslationOutput`\>

#### Inherited from[[translation.inherited-from]]

[InferenceClient](InferenceClient).[translation](InferenceClient#translation)

#### Defined in[[translation.defined-in]]

[inference/src/tasks/nlp/translation.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/translation.ts#L11)

___

### visualQuestionAnswering

▸ **visualQuestionAnswering**(`args`, `options?`): `Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

Answers a question on an image. Recommended model: dandelin/vilt-b32-finetuned-vqa.

#### Parameters[[visualquestionanswering.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`VisualQuestionAnsweringArgs`](../modules#visualquestionansweringargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[visualquestionanswering.returns]]

`Promise`\<`VisualQuestionAnsweringOutput`[`number`]\>

#### Inherited from[[visualquestionanswering.inherited-from]]

[InferenceClient](InferenceClient).[visualQuestionAnswering](InferenceClient#visualquestionanswering)

#### Defined in[[visualquestionanswering.defined-in]]

[inference/src/tasks/multimodal/visualQuestionAnswering.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/multimodal/visualQuestionAnswering.ts#L19)

___

### zeroShotClassification

▸ **zeroShotClassification**(`args`, `options?`): `Promise`\<`ZeroShotClassificationOutput`\>

This task is useful for trying out classification with zero code: you simply pass a sentence or paragraph and the possible labels for it, and you get a result. Recommended model: facebook/bart-large-mnli.

#### Parameters[[zeroshotclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotClassificationArgs`](../modules#zeroshotclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotclassification.returns]]

`Promise`\<`ZeroShotClassificationOutput`\>

#### Inherited from[[zeroshotclassification.inherited-from]]

[InferenceClient](InferenceClient).[zeroShotClassification](InferenceClient#zeroshotclassification)

#### Defined in[[zeroshotclassification.defined-in]]

[inference/src/tasks/nlp/zeroShotClassification.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/zeroShotClassification.ts#L12)

___

### zeroShotImageClassification

▸ **zeroShotImageClassification**(`args`, `options?`): `Promise`\<`ZeroShotImageClassificationOutput`\>

Classify an image to specified classes.
Recommended model: openai/clip-vit-large-patch14-336

#### Parameters[[zeroshotimageclassification.parameters]]

| Name | Type |
| :------ | :------ |
| `args` | [`ZeroShotImageClassificationArgs`](../modules#zeroshotimageclassificationargs) |
| `options?` | [`Options`](../interfaces/Options) |

#### Returns[[zeroshotimageclassification.returns]]

`Promise`\<`ZeroShotImageClassificationOutput`\>

#### Inherited from[[zeroshotimageclassification.inherited-from]]

[InferenceClient](InferenceClient).[zeroShotImageClassification](InferenceClient#zeroshotimageclassification)

#### Defined in[[zeroshotimageclassification.defined-in]]

[inference/src/tasks/cv/zeroShotImageClassification.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/cv/zeroShotImageClassification.ts#L44)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/HfInference.md" />

### Class: InferenceClientProviderApiError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientProviderApiError.md

# Class: InferenceClientProviderApiError

Thrown when the HTTP request to the provider fails, e.g. due to API issues or server errors.

## Hierarchy

- `InferenceClientHttpRequestError`

  ↳ **`InferenceClientProviderApiError`**

## Constructors

### constructor

• **new InferenceClientProviderApiError**(`message`, `httpRequest`, `httpResponse`): [`InferenceClientProviderApiError`](InferenceClientProviderApiError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `httpRequest` | `HttpRequest` |
| `httpResponse` | `HttpResponse` |

#### Returns[[constructor.returns]]

[`InferenceClientProviderApiError`](InferenceClientProviderApiError)

#### Overrides[[constructor.overrides]]

InferenceClientHttpRequestError.constructor

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:65](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L65)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

InferenceClientHttpRequestError.cause

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### httpRequest

• **httpRequest**: `HttpRequest`

#### Inherited from[[httprequest.inherited-from]]

InferenceClientHttpRequestError.httpRequest

#### Defined in[[httprequest.defined-in]]

[inference/src/errors.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L41)

___

### httpResponse

• **httpResponse**: `HttpResponse`

#### Inherited from[[httpresponse.inherited-from]]

InferenceClientHttpRequestError.httpResponse

#### Defined in[[httpresponse.defined-in]]

[inference/src/errors.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L42)

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

InferenceClientHttpRequestError.message

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

InferenceClientHttpRequestError.name

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

InferenceClientHttpRequestError.stack

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

InferenceClientHttpRequestError.prepareStackTrace

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

InferenceClientHttpRequestError.stackTraceLimit

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

InferenceClientHttpRequestError.captureStackTrace

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientProviderApiError.md" />

### Class: InferenceClientError
https://huggingface.co/docs/huggingface.js/inference/classes/InferenceClientError.md

# Class: InferenceClientError

Base class for all inference-related errors.

## Hierarchy

- `Error`

  ↳ **`InferenceClientError`**

  ↳↳ [`InferenceClientInputError`](InferenceClientInputError)

  ↳↳ [`InferenceClientRoutingError`](InferenceClientRoutingError)

  ↳↳ [`InferenceClientProviderOutputError`](InferenceClientProviderOutputError)

## Constructors

### constructor

• **new InferenceClientError**(`message`): [`InferenceClientError`](InferenceClientError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |

#### Returns[[constructor.returns]]

[`InferenceClientError`](InferenceClientError)

#### Overrides[[constructor.overrides]]

Error.constructor

#### Defined in[[constructor.defined-in]]

[inference/src/errors.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/errors.ts#L7)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

Error.cause

#### Defined in[[cause.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

Error.message

#### Defined in[[message.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

Error.name

#### Defined in[[name.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

Error.stack

#### Defined in[[stack.defined-in]]

doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

Error.prepareStackTrace

#### Defined in[[preparestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:11

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

Error.stackTraceLimit

#### Defined in[[stacktracelimit.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:13

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

Error.captureStackTrace

#### Defined in[[capturestacktrace.defined-in]]

inference/node_modules/.pnpm/@types+node@18.13.0/node_modules/@types/node/globals.d.ts:4


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/classes/InferenceClientError.md" />

### Namespace: snippets
https://huggingface.co/docs/huggingface.js/inference/modules/snippets.md

# Namespace: snippets

## Type Aliases

### InferenceSnippetOptions

Ƭ **InferenceSnippetOptions**: \{ `accessToken?`: `string` ; `billTo?`: `string` ; `directRequest?`: `boolean` ; `endpointUrl?`: `string` ; `inputs?`: `Record`\<`string`, `unknown`\> ; `streaming?`: `boolean`  } & `Record`\<`string`, `unknown`\>

#### Defined in[[inferencesnippetoptions.defined-in]]

[inference/src/snippets/getInferenceSnippets.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/snippets/getInferenceSnippets.ts#L18)

## Functions

### getInferenceSnippets

▸ **getInferenceSnippets**(`model`, `provider`, `inferenceProviderMapping?`, `opts?`): `InferenceSnippet`[]

#### Parameters[[getinferencesnippets.parameters]]

| Name | Type |
| :------ | :------ |
| `model` | `ModelDataMinimal` |
| `provider` | ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"`` |
| `inferenceProviderMapping?` | [`InferenceProviderMappingEntry`](../interfaces/InferenceProviderMappingEntry) |
| `opts?` | `Record`\<`string`, `unknown`\> |

#### Returns[[getinferencesnippets.returns]]

`InferenceSnippet`[]

#### Defined in[[getinferencesnippets.defined-in]]

[inference/src/snippets/getInferenceSnippets.ts:416](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/snippets/getInferenceSnippets.ts#L416)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/modules/snippets.md" />

### Interface: TextGenerationInput
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationInput.md

# Interface: TextGenerationInput

Text Generation Input.

Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

## Indexable

▪ [property: `string`]: `unknown`

## Properties

### inputs

• **inputs**: `string`

#### Defined in[[inputs.defined-in]]

tasks/dist/esm/tasks/text-generation/inference.d.ts:14

___

### parameters

• `Optional` **parameters**: `TextGenerationInputGenerateParameters`

#### Defined in[[parameters.defined-in]]

tasks/dist/esm/tasks/text-generation/inference.d.ts:15

___

### stream

• `Optional` **stream**: `boolean`

#### Defined in[[stream.defined-in]]

tasks/dist/esm/tasks/text-generation/inference.d.ts:16


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationInput.md" />

### Interface: Options
https://huggingface.co/docs/huggingface.js/inference/interfaces/Options.md

# Interface: Options

## Properties

### billTo

• `Optional` **billTo**: `string`

The billing account to use for the requests.

By default the requests are billed on the user's account.
Requests can only be billed to an organization the user is a member of, and which has subscribed to Enterprise Hub.

#### Defined in[[billto.defined-in]]

[inference/src/types.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L42)

___

### fetch

• `Optional` **fetch**: (`input`: `RequestInfo` \| `URL`, `init?`: `RequestInit`) => `Promise`\<`Response`\>

Custom fetch function to use instead of the default one, for example to use a proxy or edit headers.

#### Type declaration[[fetch.type-declaration]]

▸ (`input`, `init?`): `Promise`\<`Response`\>

##### Parameters[[fetch.parameters]]

| Name | Type |
| :------ | :------ |
| `input` | `RequestInfo` \| `URL` |
| `init?` | `RequestInit` |

##### Returns[[fetch.returns]]

`Promise`\<`Response`\>

#### Defined in[[fetch.defined-in]]

[inference/src/types.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L25)

___

### includeCredentials

• `Optional` **includeCredentials**: `string` \| `boolean`

(Default: "same-origin"). String | Boolean. Credentials to use for the request. If this is a string, it will be passed straight on. If it's a boolean, true will be "include" and false will not send credentials at all.

#### Defined in[[includecredentials.defined-in]]

[inference/src/types.ts:34](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L34)

___

### retry\_on\_error

• `Optional` **retry\_on\_error**: `boolean`

(Default: true) Boolean. If a request 503s, the request will be retried with the same parameters.

#### Defined in[[retryonerror.defined-in]]

[inference/src/types.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L20)

___

### signal

• `Optional` **signal**: `AbortSignal`

Abort Controller signal to use for request interruption.

#### Defined in[[signal.defined-in]]

[inference/src/types.ts:29](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L29)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/Options.md" />

### Interface: TextGenerationStreamToken
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamToken.md

# Interface: TextGenerationStreamToken

## Properties

### id

• **id**: `number`

Token ID from the model tokenizer

#### Defined in[[id.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L9)

___

### logprob

• **logprob**: `number`

Logprob

#### Defined in[[logprob.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L13)

___

### special

• **special**: `boolean`

Is the token a special token
Can be used to ignore tokens when concatenating

#### Defined in[[special.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L18)

___

### text

• **text**: `string`

Token text

#### Defined in[[text.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L11)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationStreamToken.md" />

### Interface: AudioToAudioOutput
https://huggingface.co/docs/huggingface.js/inference/interfaces/AudioToAudioOutput.md

# Interface: AudioToAudioOutput

## Properties

### blob

• **blob**: `string`

#### Defined in[[blob.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:30](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L30)

___

### content-type

• **content-type**: `string`

#### Defined in[[content-type.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:31](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L31)

___

### label

• **label**: `string`

#### Defined in[[label.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:32](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L32)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/AudioToAudioOutput.md" />

### Interface: HeaderParams
https://huggingface.co/docs/huggingface.js/inference/interfaces/HeaderParams.md

# Interface: HeaderParams

## Properties

### accessToken

• `Optional` **accessToken**: `string`

#### Defined in[[accesstoken.defined-in]]

[inference/src/types.ts:171](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L171)

___

### authMethod

• **authMethod**: [`AuthMethod`](../modules#authmethod)

#### Defined in[[authmethod.defined-in]]

[inference/src/types.ts:172](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L172)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/HeaderParams.md" />

### Interface: TextGenerationStreamDetails
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamDetails.md

# Interface: TextGenerationStreamDetails

## Properties

### best\_of\_sequences

• `Optional` **best\_of\_sequences**: [`TextGenerationStreamBestOfSequence`](TextGenerationStreamBestOfSequence)[]

Additional sequences when using the `best_of` parameter

#### Defined in[[bestofsequences.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:68](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L68)

___

### finish\_reason

• **finish\_reason**: [`TextGenerationStreamFinishReason`](../modules#textgenerationstreamfinishreason)

Generation finish reason

#### Defined in[[finishreason.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:58](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L58)

___

### generated\_tokens

• **generated\_tokens**: `number`

Number of generated tokens

#### Defined in[[generatedtokens.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:60](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L60)

___

### prefill

• **prefill**: [`TextGenerationStreamPrefillToken`](TextGenerationStreamPrefillToken)[]

Prompt tokens

#### Defined in[[prefill.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:64](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L64)

___

### seed

• `Optional` **seed**: `number`

Sampling seed if sampling was activated

#### Defined in[[seed.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:62](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L62)

___

### tokens

• **tokens**: [`TextGenerationStreamToken`](TextGenerationStreamToken)[]

#### Defined in[[tokens.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:66](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L66)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationStreamDetails.md" />

### Interface: BaseArgs
https://huggingface.co/docs/huggingface.js/inference/interfaces/BaseArgs.md

# Interface: BaseArgs

## Properties

### accessToken

• `Optional` **accessToken**: `string`

The access token to use. Without it, you'll get rate-limited quickly.

Can be created for free at hf.co/settings/token

You can also pass an external Inference provider's key if you intend to call a compatible provider like Sambanova, Together, Replicate...

#### Defined in[[accesstoken.defined-in]]

[inference/src/types.ts:129](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L129)

___

### endpointUrl

• `Optional` **endpointUrl**: `string`

The URL of the endpoint to use.

If not specified, will call the default router.huggingface.co Inference Providers endpoint.

#### Defined in[[endpointurl.defined-in]]

[inference/src/types.ts:146](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L146)

___

### model

• `Optional` **model**: `string`

The HF model to use.

If not specified, will call huggingface.co/api/tasks to get the default model for the task.

/!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future.
Use the `endpointUrl` parameter instead.

#### Defined in[[model.defined-in]]

[inference/src/types.ts:139](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L139)

___

### provider

• `Optional` **provider**: ``"baseten"`` \| ``"black-forest-labs"`` \| ``"cerebras"`` \| ``"clarifai"`` \| ``"cohere"`` \| ``"fal-ai"`` \| ``"featherless-ai"`` \| ``"fireworks-ai"`` \| ``"groq"`` \| ``"hf-inference"`` \| ``"hyperbolic"`` \| ``"nebius"`` \| ``"novita"`` \| ``"nscale"`` \| ``"openai"`` \| ``"ovhcloud"`` \| ``"publicai"`` \| ``"replicate"`` \| ``"sambanova"`` \| ``"scaleway"`` \| ``"together"`` \| ``"wavespeed"`` \| ``"zai-org"`` \| ``"auto"``

Set an Inference provider to run this model on.

Defaults to "auto" i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.

#### Defined in[[provider.defined-in]]

[inference/src/types.ts:153](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L153)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/BaseArgs.md" />

### Interface: AudioToAudioOutputElem
https://huggingface.co/docs/huggingface.js/inference/interfaces/AudioToAudioOutputElem.md

# Interface: AudioToAudioOutputElem

## Properties

### audio

• **audio**: `Blob`

Base64 encoded audio output.

#### Defined in[[audio.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L26)

___

### label

• **label**: `string`

The label for the audio output (model specific)

#### Defined in[[label.defined-in]]

[inference/src/tasks/audio/audioToAudio.ts:21](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/audio/audioToAudio.ts#L21)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/AudioToAudioOutputElem.md" />

### Interface: TextGenerationOutput
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationOutput.md

# Interface: TextGenerationOutput

Text Generation Output.

Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

## Indexable

▪ [property: `string`]: `unknown`

## Properties

### details

• `Optional` **details**: `TextGenerationOutputDetails`

#### Defined in[[details.defined-in]]

tasks/dist/esm/tasks/text-generation/inference.d.ts:121

___

### generated\_text

• **generated\_text**: `string`

#### Defined in[[generatedtext.defined-in]]

tasks/dist/esm/tasks/text-generation/inference.d.ts:122


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationOutput.md" />

### Interface: InferenceProviderMappingEntry
https://huggingface.co/docs/huggingface.js/inference/interfaces/InferenceProviderMappingEntry.md

# Interface: InferenceProviderMappingEntry

## Properties

### adapter

• `Optional` **adapter**: `string`

#### Defined in[[adapter.defined-in]]

[inference/src/types.ts:111](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L111)

___

### adapterWeightsPath

• `Optional` **adapterWeightsPath**: `string`

#### Defined in[[adapterweightspath.defined-in]]

[inference/src/types.ts:112](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L112)

___

### hfModelId

• **hfModelId**: `string`

#### Defined in[[hfmodelid.defined-in]]

[inference/src/types.ts:113](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L113)

___

### provider

• **provider**: `string`

#### Defined in[[provider.defined-in]]

[inference/src/types.ts:114](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L114)

___

### providerId

• **providerId**: `string`

#### Defined in[[providerid.defined-in]]

[inference/src/types.ts:115](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L115)

___

### status

• **status**: ``"live"`` \| ``"staging"``

#### Defined in[[status.defined-in]]

[inference/src/types.ts:116](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L116)

___

### task

• **task**: `WidgetType`

#### Defined in[[task.defined-in]]

[inference/src/types.ts:117](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L117)

___

### type

• `Optional` **type**: ``"single-model"`` \| ``"tag-filter"``

#### Defined in[[type.defined-in]]

[inference/src/types.ts:118](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L118)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/InferenceProviderMappingEntry.md" />

### Interface: TextGenerationStreamPrefillToken
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamPrefillToken.md

# Interface: TextGenerationStreamPrefillToken

## Properties

### id

• **id**: `number`

Token ID from the model tokenizer

#### Defined in[[id.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:23](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L23)

___

### logprob

• `Optional` **logprob**: `number`

Logprob
Optional since the logprob of the first token cannot be computed

#### Defined in[[logprob.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:30](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L30)

___

### text

• **text**: `string`

Token text

#### Defined in[[text.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L25)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationStreamPrefillToken.md" />

### Interface: Logger
https://huggingface.co/docs/huggingface.js/inference/interfaces/Logger.md

# Interface: Logger

## Properties

### debug

• **debug**: (`message`: `string`, ...`args`: `unknown`[]) => `void`

#### Type declaration[[debug.type-declaration]]

▸ (`message`, `...args`): `void`

##### Parameters[[debug.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `...args` | `unknown`[] |

##### Returns[[debug.returns]]

`void`

#### Defined in[[debug.defined-in]]

[inference/src/types.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L9)

___

### error

• **error**: (`message`: `string`, ...`args`: `unknown`[]) => `void`

#### Type declaration[[error.type-declaration]]

▸ (`message`, `...args`): `void`

##### Parameters[[error.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `...args` | `unknown`[] |

##### Returns[[error.returns]]

`void`

#### Defined in[[error.defined-in]]

[inference/src/types.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L12)

___

### info

• **info**: (`message`: `string`, ...`args`: `unknown`[]) => `void`

#### Type declaration[[info.type-declaration]]

▸ (`message`, `...args`): `void`

##### Parameters[[info.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `...args` | `unknown`[] |

##### Returns[[info.returns]]

`void`

#### Defined in[[info.defined-in]]

[inference/src/types.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L10)

___

### log

• **log**: (`message`: `string`, ...`args`: `unknown`[]) => `void`

#### Type declaration[[log.type-declaration]]

▸ (`message`, `...args`): `void`

##### Parameters[[log.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `...args` | `unknown`[] |

##### Returns[[log.returns]]

`void`

#### Defined in[[log.defined-in]]

[inference/src/types.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L13)

___

### warn

• **warn**: (`message`: `string`, ...`args`: `unknown`[]) => `void`

#### Type declaration[[warn.type-declaration]]

▸ (`message`, `...args`): `void`

##### Parameters[[warn.parameters]]

| Name | Type |
| :------ | :------ |
| `message` | `string` |
| `...args` | `unknown`[] |

##### Returns[[warn.returns]]

`void`

#### Defined in[[warn.defined-in]]

[inference/src/types.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L11)
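
All five methods share one call signature, so an implementation can be generated from a single factory. A minimal sketch, with the interface restated locally and output collected into an array (handy for tests, or for silencing levels selectively):

```ts
// Locally restated shape of the Logger interface.
interface Logger {
	debug: (message: string, ...args: unknown[]) => void;
	info: (message: string, ...args: unknown[]) => void;
	warn: (message: string, ...args: unknown[]) => void;
	error: (message: string, ...args: unknown[]) => void;
	log: (message: string, ...args: unknown[]) => void;
}

// Build a Logger whose output is captured in an array instead of the console.
function makeCollectingLogger(sink: string[]): Logger {
	const emit =
		(level: string) =>
		(message: string, ...args: unknown[]) =>
			sink.push(`[${level}] ${message}${args.length ? " " + args.join(" ") : ""}`);
	return {
		debug: emit("debug"),
		info: emit("info"),
		warn: emit("warn"),
		error: emit("error"),
		log: emit("log"),
	};
}

const lines: string[] = [];
const logger = makeCollectingLogger(lines);
logger.info("model loaded", "llama");
logger.error("request failed");
console.log(lines); // ["[info] model loaded llama", "[error] request failed"]
```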


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/Logger.md" />

### Interface: UrlParams
https://huggingface.co/docs/huggingface.js/inference/interfaces/UrlParams.md

# Interface: UrlParams

## Properties

### authMethod

• **authMethod**: [`AuthMethod`](../modules#authmethod)

#### Defined in[[authmethod.defined-in]]

[inference/src/types.ts:176](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L176)

___

### model

• **model**: `string`

#### Defined in[[model.defined-in]]

[inference/src/types.ts:177](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L177)

___

### task

• `Optional` **task**: [`InferenceTask`](../modules#inferencetask)

#### Defined in[[task.defined-in]]

[inference/src/types.ts:178](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L178)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/UrlParams.md" />

### Interface: BodyParams\<T\>
https://huggingface.co/docs/huggingface.js/inference/interfaces/BodyParams.md

# Interface: BodyParams\<T\>

## Type parameters[[interface-bodyparamst.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends `Record`\<`string`, `unknown`\> = `Record`\<`string`, `unknown`\> |

## Properties

### args

• **args**: `T`

#### Defined in[[args.defined-in]]

[inference/src/types.ts:182](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L182)

___

### mapping

• `Optional` **mapping**: [`InferenceProviderMappingEntry`](InferenceProviderMappingEntry)

#### Defined in[[mapping.defined-in]]

[inference/src/types.ts:184](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L184)

___

### model

• **model**: `string`

#### Defined in[[model.defined-in]]

[inference/src/types.ts:183](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L183)

___

### task

• `Optional` **task**: [`InferenceTask`](../modules#inferencetask)

#### Defined in[[task.defined-in]]

[inference/src/types.ts:185](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/types.ts#L185)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/BodyParams.md" />

### Interface: TextGenerationStreamOutput
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamOutput.md

# Interface: TextGenerationStreamOutput

## Properties

### details

• **details**: ``null`` \| [`TextGenerationStreamDetails`](TextGenerationStreamDetails)

Generation details
Only available when the generation is finished

#### Defined in[[details.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:84](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L84)

___

### generated\_text

• **generated\_text**: ``null`` \| `string`

Complete generated text
Only available when the generation is finished

#### Defined in[[generatedtext.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:79](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L79)

___

### index

• `Optional` **index**: `number`

#### Defined in[[index.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:72](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L72)

___

### token

• **token**: [`TextGenerationStreamToken`](TextGenerationStreamToken)

Generated token, one at a time

#### Defined in[[token.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:74](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L74)
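
The fields above imply a simple consumption pattern: accumulate `token.text` chunk by chunk, and treat a non-null `generated_text` as the end-of-stream signal. A minimal sketch, using a locally defined chunk shape and a hypothetical mock stream rather than a live `textGenerationStream` call, so the example is self-contained:

```ts
// Locally restated shape of one streamed chunk.
interface StreamChunk {
	index?: number;
	token: { id: number; text: string; logprob: number; special: boolean };
	generated_text: string | null; // only non-null on the final chunk
	details: unknown; // only populated on the final chunk
}

// Accumulate streamed tokens, skipping special tokens; prefer the
// server-provided final generated_text when it arrives.
async function collectText(stream: AsyncIterable<StreamChunk>): Promise<string> {
	let text = "";
	for await (const chunk of stream) {
		if (!chunk.token.special) text += chunk.token.text;
		if (chunk.generated_text !== null) return chunk.generated_text;
	}
	return text;
}

// Hypothetical mock standing in for a real streaming response.
async function* mockStream(): AsyncGenerator<StreamChunk> {
	yield { token: { id: 1, text: "Hello", logprob: -0.1, special: false }, generated_text: null, details: null };
	yield { token: { id: 2, text: " world", logprob: -0.2, special: false }, generated_text: "Hello world", details: null };
}

collectText(mockStream()).then((t) => console.log(t)); // Hello world
```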


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationStreamOutput.md" />

### Interface: TextGenerationStreamBestOfSequence
https://huggingface.co/docs/huggingface.js/inference/interfaces/TextGenerationStreamBestOfSequence.md

# Interface: TextGenerationStreamBestOfSequence

## Properties

### finish\_reason

• **finish\_reason**: [`TextGenerationStreamFinishReason`](../modules#textgenerationstreamfinishreason)

Generation finish reason

#### Defined in[[finishreason.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L37)

___

### generated\_text

• **generated\_text**: `string`

Generated text

#### Defined in[[generatedtext.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:35](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L35)

___

### generated\_tokens

• **generated\_tokens**: `number`

Number of generated tokens

#### Defined in[[generatedtokens.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L39)

___

### prefill

• **prefill**: [`TextGenerationStreamPrefillToken`](TextGenerationStreamPrefillToken)[]

Prompt tokens

#### Defined in[[prefill.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:43](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L43)

___

### seed

• `Optional` **seed**: `number`

Sampling seed if sampling was activated

#### Defined in[[seed.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L41)

___

### tokens

• **tokens**: [`TextGenerationStreamToken`](TextGenerationStreamToken)[]

Generated tokens

#### Defined in[[tokens.defined-in]]

[inference/src/tasks/nlp/textGenerationStream.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/nlp/textGenerationStream.ts#L45)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/inference/interfaces/TextGenerationStreamBestOfSequence.md" />

### `@huggingface/gguf`
https://huggingface.co/docs/huggingface.js/gguf/README.md

# `@huggingface/gguf`

A GGUF parser that works on remotely hosted files.

## Spec

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/gguf-spec.png"/>

Spec: https://github.com/ggerganov/ggml/blob/master/docs/gguf.md

Reference implementation (Python): https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/gguf_reader.py

## Install

```bash
npm install @huggingface/gguf
```

## Usage

### Basic usage

```ts
import { GGMLQuantizationType, gguf } from "@huggingface/gguf";

// remote GGUF file from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF
const URL_LLAMA = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/191239b/llama-2-7b-chat.Q2_K.gguf";

const { metadata, tensorInfos } = await gguf(URL_LLAMA);

console.log(metadata);
// {
//     version: 2,
//     tensor_count: 291n,
//     kv_count: 19n,
//     "general.architecture": "llama",
//     "general.file_type": 10,
//     "general.name": "LLaMA v2",
//     ...
// }

console.log(tensorInfos);
// [
//     {
//         name: "token_embd.weight",
//         shape: [4096n, 32000n],
//         dtype: GGMLQuantizationType.Q2_K,
//     },

//     ... ,

//     {
//         name: "output_norm.weight",
//         shape: [4096n],
//         dtype: GGMLQuantizationType.F32,
//     }
// ]

```

### Reading a local file

```ts
// Reading a local file (not supported in the browser)
const { metadata, tensorInfos } = await gguf(
  './my_model.gguf',
  { allowLocalFile: true },
);
```

### Typed metadata

You can get metadata with type information by setting `typedMetadata: true`. This provides both the original value and its GGUF data type:

```ts
import { GGMLQuantizationType, GGUFValueType, gguf } from "@huggingface/gguf";

const URL_LLAMA = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/191239b/llama-2-7b-chat.Q2_K.gguf";

const { metadata, typedMetadata } = await gguf(URL_LLAMA, { typedMetadata: true });

console.log(typedMetadata);
// {
//     version: { value: 2, type: GGUFValueType.UINT32 },
//     tensor_count: { value: 291n, type: GGUFValueType.UINT64 },
//     kv_count: { value: 19n, type: GGUFValueType.UINT64 },
//     "general.architecture": { value: "llama", type: GGUFValueType.STRING },
//     "general.file_type": { value: 10, type: GGUFValueType.UINT32 },
//     "general.name": { value: "LLaMA v2", type: GGUFValueType.STRING },
//     "llama.attention.head_count": { value: 32, type: GGUFValueType.UINT32 },
//     "llama.attention.layer_norm_rms_epsilon": { value: 9.999999974752427e-7, type: GGUFValueType.FLOAT32 },
//     "tokenizer.ggml.tokens": { value: ["<unk>", "<s>", "</s>", ...], type: GGUFValueType.ARRAY, subType: GGUFValueType.STRING },
//     "tokenizer.ggml.scores": { value: [0.0, -1000.0, -1000.0, ...], type: GGUFValueType.ARRAY, subType: GGUFValueType.FLOAT32 },
//     ...
// }

// Access both value and type information
console.log(typedMetadata["general.architecture"].value); // "llama"
console.log(typedMetadata["general.architecture"].type);  // GGUFValueType.STRING (8)

// For arrays, subType indicates the type of array elements
console.log(typedMetadata["tokenizer.ggml.tokens"].type);    // GGUFValueType.ARRAY (9)  
console.log(typedMetadata["tokenizer.ggml.tokens"].subType); // GGUFValueType.STRING (8)
```

### Strictly typed

By default, known fields in `metadata` are typed. This includes various fields found in [llama.cpp](https://github.com/ggerganov/llama.cpp), [whisper.cpp](https://github.com/ggerganov/whisper.cpp) and [ggml](https://github.com/ggerganov/ggml).

```ts
const { metadata, tensorInfos } = await gguf(URL_MODEL);

// Type check for model architecture at runtime
if (metadata["general.architecture"] === "llama") {

  // "llama.attention.head_count" is a valid key for the llama architecture; it is typed as a number
  console.log(metadata["llama.attention.head_count"]);

  // "mamba.ssm.conv_kernel" is an invalid key, because it requires the model architecture to be mamba
  console.log(metadata["mamba.ssm.conv_kernel"]); // error
}
```

### Disable strictly typed

Because the GGUF format can store arbitrary tensors, it can technically be used for other purposes, for example storing [control vectors](https://github.com/ggerganov/llama.cpp/pull/5970), [LoRA weights](https://github.com/ggerganov/llama.cpp/pull/2632), etc.

In case you want to use your own GGUF metadata structure, you can disable strict typing by casting the parse output to `GGUFParseOutput<{ strict: false }>`:

```ts
const { metadata, tensorInfos }: GGUFParseOutput<{ strict: false }> = await gguf(URL_LLAMA);
```

## Command line interface

This package provides a CLI equivalent to the [`gguf_dump.py`](https://github.com/ggml-org/llama.cpp/blob/7a2c913e66353362d7f28d612fd3c9d51a831eda/gguf-py/gguf/scripts/gguf_dump.py) script. You can dump the GGUF metadata and the list of tensors with this command:

```bash
npx @huggingface/gguf my_model.gguf

# or, with a remote GGUF file:
# npx @huggingface/gguf https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf
```

Example output:

```
* Dumping 36 key/value pair(s)
  Idx | Count  | Value                                                                            
  ----|--------|----------------------------------------------------------------------------------
    1 |      1 | version = 3                                                                      
    2 |      1 | tensor_count = 292                                                               
    3 |      1 | kv_count = 33                                                                    
    4 |      1 | general.architecture = "llama"                                                   
    5 |      1 | general.type = "model"                                                           
    6 |      1 | general.name = "Meta Llama 3.1 8B Instruct"                                      
    7 |      1 | general.finetune = "Instruct"                                                    
    8 |      1 | general.basename = "Meta-Llama-3.1"                                                   

[truncated]

* Dumping 292 tensor(s)
  Idx | Num Elements | Shape                          | Data Type | Name                     
  ----|--------------|--------------------------------|-----------|--------------------------
    1 |           64 |     64,      1,      1,      1 | F32       | rope_freqs.weight        
    2 |    525336576 |   4096, 128256,      1,      1 | Q4_K      | token_embd.weight        
    3 |         4096 |   4096,      1,      1,      1 | F32       | blk.0.attn_norm.weight   
    4 |     58720256 |  14336,   4096,      1,      1 | Q6_K      | blk.0.ffn_down.weight

[truncated]
```

Alternatively, you can install this package globally, which will provide the `gguf-view` command:

```bash
npm i -g @huggingface/gguf
gguf-view my_model.gguf
```

## Hugging Face Hub

The Hub supports all file formats and has built-in features for the GGUF format.

Find more information at: http://hf.co/docs/hub/gguf.

## Acknowledgements & Inspirations

- https://github.com/hyparam/hyllama by @platypii (MIT license)
- https://github.com/ahoylabs/gguf.js by @biw @dkogut1996 @spencekim (MIT license)

🔥❤️



<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/gguf/README.md" />

### @huggingface/mcp-client
https://huggingface.co/docs/huggingface.js/mcp-client/README.md

# @huggingface/mcp-client

Client for the Model Context Protocol (MCP).

This package provides a client implementation for interacting with MCP servers, built on top of our InferenceClient, `@huggingface/inference`.

It includes an example CLI smol Agent that can leverage MCP tools.

## Installation

This package is part of the Hugging Face JS monorepo. To install dependencies for all packages, run from the root of the repository:

```bash
pnpm install
```

## Usage (CLI Agent)

The package includes a command-line interface (CLI) agent that demonstrates how to use the MCP client.

### Prerequisites

*   **Hugging Face API Token:** You need a Hugging Face API token with appropriate permissions. Set it as an environment variable:
    ```bash
    export HF_TOKEN="hf_..."
    ```

### Running the Agent

Navigate to the package directory and run the agent script:

```bash
cd packages/mcp-client
pnpm agent
```

Alternatively, run from the root of the monorepo:

```bash
pnpm --filter @huggingface/mcp-client agent
```

The agent will load available MCP tools (by default, connecting to a filesystem server for your Desktop and a Playwright server) and prompt you for input (`>`).

### Configuration (Environment Variables)

*   `HF_TOKEN` (Optional): Your Hugging Face API token. Required if you use an Inference Provider on HF.
*   `MODEL_ID` (Optional): The model ID to use for the agent's inference. Defaults to `Qwen/Qwen2.5-72B-Instruct`.
*   `PROVIDER` (Optional): The inference provider. Defaults to `together`. See `@huggingface/inference` for available providers.
*   `ENDPOINT_URL` or `BASE_URL` (Optional): A custom base URL (local for instance) to call.

Example with custom model:

```bash
export HF_TOKEN="hf_..."
export MODEL_ID="Qwen/Qwen2.5-72B-Instruct"
pnpm agent
```

## Development

Common development tasks can be run using pnpm scripts:

*   `pnpm build`: Build the package.
*   `pnpm lint`: Lint and fix code style.
*   `pnpm format`: Format code using Prettier.
*   `pnpm test`: Run tests using Vitest.
*   `pnpm check`: Type-check the code using TypeScript.

## License

MIT


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/mcp-client/README.md" />

### 🤗 Hugging Face Hub API
https://huggingface.co/docs/huggingface.js/hub/README.md

# 🤗 Hugging Face Hub API

Official utilities to use the Hugging Face Hub API.

## Install

```console
pnpm add @huggingface/hub

npm add @huggingface/hub

yarn add @huggingface/hub
```

### Deno

```ts
// esm.sh
import { uploadFiles, listModels } from "https://esm.sh/@huggingface/hub"
// or npm:
import { uploadFiles, listModels } from "npm:@huggingface/hub"
```

## Usage

For some of the calls, you need to create an account and generate an [access token](https://huggingface.co/settings/tokens).

Learn how to find free models using the hub package in this [interactive tutorial](https://scrimba.com/scrim/c7BbVPcd?pl=pkVnrP7uP).

```ts
import * as hub from "@huggingface/hub";
import type { RepoDesignation } from "@huggingface/hub";

const repo: RepoDesignation = { type: "model", name: "myname/some-model" };

const {name: username} = await hub.whoAmI({accessToken: "hf_..."});

for await (const model of hub.listModels({search: {owner: username}, accessToken: "hf_..."})) {
  console.log("My model:", model);
}

const specificModel = await hub.modelInfo({name: "openai-community/gpt2"});
await hub.checkRepoAccess({repo, accessToken: "hf_..."});

await hub.createRepo({ repo, accessToken: "hf_...", license: "mit" });

await hub.uploadFiles({
  repo,
  accessToken: "hf_...",
  files: [
    // path + blob content
    {
      path: "file.txt",
      content: new Blob(["Hello World"]),
    },
    // Local file URL
    pathToFileURL("./pytorch-model.bin"),
    // Local folder URL
    pathToFileURL("./models"),
    // Web URL
    new URL("https://huggingface.co/xlm-roberta-base/resolve/main/tokenizer.json"),
    // Path + Web URL
    {
      path: "myfile.bin",
      content: new URL("https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin")
    }
    // Can also work with native File in browsers
  ],
});

// or

for await (const progressEvent of await hub.uploadFilesWithProgress({
  repo,
  accessToken: "hf_...",
  files: [
    ...
  ],
})) {
  console.log(progressEvent);
}

// Edit a file by adding prefix & suffix
await hub.commit({
  repo,
  accessToken: "hf_...",
  operations: [{
    type: "edit",
    originalContent: originalFile,
    edits: [{
      start: 0,
      end: 0,
      content: new Blob(["prefix"])
    }, {
      start: originalFile.length,
      end: originalFile.length,
      content: new Blob(["suffix"])
    }]
  }]
})

await hub.deleteFile({repo, accessToken: "hf_...", path: "myfile.bin"});

await (await hub.downloadFile({ repo, path: "README.md" })).text();

for await (const fileInfo of hub.listFiles({repo})) {
  console.log(fileInfo);
}

await hub.deleteRepo({ repo, accessToken: "hf_..." });
```

## CLI usage

You can use `@huggingface/hub` in CLI mode to upload files and folders to your repo. 

```console
npx @huggingface/hub upload coyotte508/test-model .
npx @huggingface/hub upload datasets/coyotte508/test-dataset .
# Same thing
npx @huggingface/hub upload --repo-type dataset coyotte508/test-dataset .
# Upload new data with 0 history in a separate branch
npx @huggingface/hub branch create coyotte508/test-model release --empty
npx @huggingface/hub upload coyotte508/test-model . --revision release

npx @huggingface/hub --help
npx @huggingface/hub upload --help
```

You can also install globally with `npm install -g @huggingface/hub`. Then you can do:

```console
hfjs upload coyotte508/test-model .

hfjs branch create --repo-type dataset coyotte508/test-dataset release --empty
hfjs upload --repo-type dataset coyotte508/test-dataset . --revision release

hfjs --help
hfjs upload --help
```

## OAuth Login

It's possible to login using OAuth (["Sign in with HF"](https://huggingface.co/docs/hub/oauth)).

This will allow you to get an access token to use some of the API, depending on the scopes set inside the Space or the OAuth App.

```ts
import { oauthLoginUrl, oauthHandleRedirectIfPresent } from "@huggingface/hub";

const oauthResult = await oauthHandleRedirectIfPresent();

if (!oauthResult) {
  // If the user is not logged in, redirect to the login page
  window.location.href = await oauthLoginUrl();
}

// You can use oauthResult.accessToken, oauthResult.accessTokenExpiresAt and oauthResult.userInfo
console.log(oauthResult);
```

Check out the demo: https://huggingface.co/spaces/huggingfacejs/client-side-oauth

## Hugging Face cache

The `@huggingface/hub` package provides basic capabilities to scan the cache directory. Learn more about [managing the huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache).

### `scanCacheDir`

You can get the list of cached repositories using the `scanCacheDir` function.

```ts
import { scanCacheDir } from "@huggingface/hub";

const result = await scanCacheDir();

console.log(result);
```
Note: this does not work in the browser.

### `downloadFileToCacheDir`

You can cache a file of a repository using the `downloadFileToCacheDir` function.

```ts
import { downloadFileToCacheDir } from "@huggingface/hub";

const file = await downloadFileToCacheDir({
  repo: 'foo/bar',
  path: 'README.md'
});

console.log(file);
```
Note: this does not work in the browser.

### `snapshotDownload`

You can download an entire repository at a given revision in the cache directory using the `snapshotDownload` function.

```ts
import { snapshotDownload } from "@huggingface/hub";

const directory = await snapshotDownload({
  repo: 'foo/bar',
});

console.log(directory);
```
Internally, this function uses `downloadFileToCacheDir`.

Note: this does not work in the browser.

## Performance considerations

When uploading large files, you may want to run the `commit` calls inside a worker, to offload the SHA-256 computations.

Remote resources and local files should be passed as `URL` whenever possible so they can be lazy-loaded in chunks to reduce RAM usage. Passing a `File` inside the browser's context is fine, because it natively behaves as a `Blob`.

Under the hood, `@huggingface/hub` uses a lazy blob implementation to load the file.

## Dependencies

- `@huggingface/tasks` : Typings only


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/README.md" />

### @huggingface/hub
https://huggingface.co/docs/huggingface.js/hub/modules.md

# @huggingface/hub

## Classes

- [HubApiError](classes/HubApiError)
- [InvalidApiResponseFormatError](classes/InvalidApiResponseFormatError)
- [\_\_internal\_XetBlob](classes/_internal_XetBlob)

## Interfaces

- [AuthInfo](interfaces/AuthInfo)
- [CachedFileInfo](interfaces/CachedFileInfo)
- [CachedRepoInfo](interfaces/CachedRepoInfo)
- [CachedRevisionInfo](interfaces/CachedRevisionInfo)
- [CommitData](interfaces/CommitData)
- [CommitDeletedEntry](interfaces/CommitDeletedEntry)
- [CommitEditFile](interfaces/CommitEditFile)
- [CommitFile](interfaces/CommitFile)
- [CommitInfo](interfaces/CommitInfo)
- [CommitOutput](interfaces/CommitOutput)
- [Credentials](interfaces/Credentials)
- [DatasetEntry](interfaces/DatasetEntry)
- [FileDownloadInfoOutput](interfaces/FileDownloadInfoOutput)
- [HFCacheInfo](interfaces/HFCacheInfo)
- [LfsPathInfo](interfaces/LfsPathInfo)
- [ListFileEntry](interfaces/ListFileEntry)
- [ModelConfig](interfaces/ModelConfig)
- [ModelEntry](interfaces/ModelEntry)
- [OAuthResult](interfaces/OAuthResult)
- [PathInfo](interfaces/PathInfo)
- [QuantizationConfig](interfaces/QuantizationConfig)
- [RepoId](interfaces/RepoId)
- [SafetensorsIndexJson](interfaces/SafetensorsIndexJson)
- [SafetensorsShardFileInfo](interfaces/SafetensorsShardFileInfo)
- [SecurityFileStatus](interfaces/SecurityFileStatus)
- [SpaceEntry](interfaces/SpaceEntry)
- [SpaceResourceConfig](interfaces/SpaceResourceConfig)
- [SpaceResourceRequirement](interfaces/SpaceResourceRequirement)
- [SpaceRuntime](interfaces/SpaceRuntime)
- [TensorInfo](interfaces/TensorInfo)
- [UserInfo](interfaces/UserInfo)
- [WhoAmIApp](interfaces/WhoAmIApp)
- [WhoAmIOrg](interfaces/WhoAmIOrg)
- [WhoAmIUser](interfaces/WhoAmIUser)
- [XetFileInfo](interfaces/XetFileInfo)

## Type Aliases

### AccessToken

Ƭ **AccessToken**: `string`

Actually `hf_${string}`, but typed as `string` for convenience

#### Defined in[[accesstoken.defined-in]]

[packages/hub/src/types/public.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L15)

___

### AccessTokenRole

Ƭ **AccessTokenRole**: ``"admin"`` \| ``"write"`` \| ``"contributor"`` \| ``"read"``

#### Defined in[[accesstokenrole.defined-in]]

[packages/hub/src/types/public.ts:70](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L70)

___

### AuthType

Ƭ **AuthType**: ``"access_token"`` \| ``"app_token"`` \| ``"app_token_as_user"``

#### Defined in[[authtype.defined-in]]

[packages/hub/src/types/public.ts:72](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L72)

___

### CommitOperation

Ƭ **CommitOperation**: [`CommitDeletedEntry`](interfaces/CommitDeletedEntry) \| [`CommitFile`](interfaces/CommitFile) \| [`CommitEditFile`](interfaces/CommitEditFile)

#### Defined in[[commitoperation.defined-in]]

[packages/hub/src/lib/commit.ts:87](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L87)

___

### CommitParams

Ƭ **CommitParams**: \{ `abortSignal?`: `AbortSignal` ; `branch?`: `string` ; `description?`: `string` ; `fetch?`: typeof [`__type`](classes/_internal_XetBlob#__type) ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `maxFolderDepth?`: `number` ; `operations`: [`CommitOperation`](modules#commitoperation)[] ; `parentCommit?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `title`: `string` ; `useWebWorkers?`: `boolean` \| \{ `minSize?`: `number` ; `poolSize?`: `number`  } ; `useXet?`: `boolean`  } & `Partial`\<`CredentialsParams`\>

#### Defined in[[commitparams.defined-in]]

[packages/hub/src/lib/commit.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L90)

___

### CommitProgressEvent

Ƭ **CommitProgressEvent**: \{ `event`: ``"phase"`` ; `phase`: ``"preuploading"`` \| ``"uploadingLargeFiles"`` \| ``"committing"``  } \| \{ `event`: ``"fileProgress"`` ; `path`: `string` ; `progress`: `number` ; `state`: ``"hashing"`` \| ``"uploading"``  }

#### Defined in[[commitprogressevent.defined-in]]

[packages/hub/src/lib/commit.ts:146](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L146)
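
For instance, a progress handler can narrow this union on its `event` discriminant. A minimal sketch, using a local copy of the documented union:

```ts
// Local mirror of the documented CommitProgressEvent union (illustrative only).
type CommitProgressEvent =
  | { event: "phase"; phase: "preuploading" | "uploadingLargeFiles" | "committing" }
  | { event: "fileProgress"; path: string; progress: number; state: "hashing" | "uploading" };

// Format an event for display, narrowing on the `event` discriminant.
function formatProgress(e: CommitProgressEvent): string {
  switch (e.event) {
    case "phase":
      return `phase: ${e.phase}`;
    case "fileProgress":
      return `${e.state} ${e.path}: ${Math.round(e.progress * 100)}%`;
  }
}
```

Such a handler can be applied to each event yielded by [`commitIter`](modules#commititer).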

___

### ContentSource

Ƭ **ContentSource**: `Blob` \| `URL`

#### Defined in[[contentsource.defined-in]]

[packages/hub/src/lib/commit.ts:38](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L38)

___

### Dtype

Ƭ **Dtype**: ``"F64"`` \| ``"F32"`` \| ``"F16"`` \| ``"F8_E4M3"`` \| ``"F8_E5M2"`` \| ``"E8M0"`` \| ``"F6_E3M2"`` \| ``"F6_E2M3"`` \| ``"F4"`` \| ``"FP4"`` \| ``"BF16"`` \| ``"I64"`` \| ``"I32"`` \| ``"I16"`` \| ``"I8"`` \| ``"U16"`` \| ``"U8"`` \| ``"UE8"`` \| ``"BOOL"``

#### Defined in[[dtype.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L46)
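
A per-dtype parameter count (such as the `parameterCount` field of [`SafetensorsParseFromRepo`](modules#safetensorsparsefromrepo)) can be turned into a rough on-disk size estimate. The byte widths below are illustrative assumptions; sub-byte formats like `F4`/`FP4` are stored packed and are omitted here:

```ts
// Approximate bytes per element for common safetensors dtypes
// (illustrative values; sub-byte formats are stored packed and skipped).
const DTYPE_BYTES: Partial<Record<string, number>> = {
  F64: 8, I64: 8,
  F32: 4, I32: 4,
  F16: 2, BF16: 2, I16: 2, U16: 2,
  F8_E4M3: 1, F8_E5M2: 1, I8: 1, U8: 1, BOOL: 1,
};

// Estimate tensor bytes from a per-dtype parameter count;
// unknown dtypes contribute 0 to keep the estimate conservative.
function estimateBytes(parameterCount: Partial<Record<string, number>>): number {
  let total = 0;
  for (const [dtype, count] of Object.entries(parameterCount)) {
    total += (DTYPE_BYTES[dtype] ?? 0) * (count ?? 0);
  }
  return total;
}
```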

___

### PipelineType

Ƭ **PipelineType**: keyof typeof `PIPELINE_DATA`

#### Defined in[[pipelinetype.defined-in]]

packages/tasks/dist/commonjs/pipelines.d.ts:372

___

### RepoDesignation

Ƭ **RepoDesignation**: [`RepoId`](interfaces/RepoId) \| [`RepoFullName`](modules#repofullname)

#### Defined in[[repodesignation.defined-in]]

[packages/hub/src/types/public.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L12)

___

### RepoFullName

Ƭ **RepoFullName**: `string` \| \`spaces/$\{string}\` \| \`datasets/$\{string}\`

#### Defined in[[repofullname.defined-in]]

[packages/hub/src/types/public.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L10)
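
A `RepoDesignation` is either an explicit [`RepoId`](interfaces/RepoId) or one of these full-name strings. A minimal sketch of the naming convention, assuming the documented rule that `spaces/` and `datasets/` prefixes select the repo type and anything else is a model:

```ts
type RepoType = "space" | "dataset" | "model";

// Resolve a RepoFullName string to an explicit { type, name } pair
// (illustrative; the hub accepts either form as a RepoDesignation).
function resolveFullName(fullName: string): { type: RepoType; name: string } {
  if (fullName.startsWith("spaces/")) {
    return { type: "space", name: fullName.slice("spaces/".length) };
  }
  if (fullName.startsWith("datasets/")) {
    return { type: "dataset", name: fullName.slice("datasets/".length) };
  }
  return { type: "model", name: fullName };
}
```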

___

### RepoType

Ƭ **RepoType**: ``"space"`` \| ``"dataset"`` \| ``"model"``

#### Defined in[[repotype.defined-in]]

[packages/hub/src/types/public.ts:3](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L3)

___

### SafetensorsFileHeader

Ƭ **SafetensorsFileHeader**: `Record`\<[`TensorName`](modules#tensorname), [`TensorInfo`](interfaces/TensorInfo)\> & \{ `__metadata__`: \{ `total_parameters?`: `string` \| `number`  } & `Record`\<`string`, `string`\>  }

#### Defined in[[safetensorsfileheader.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:73](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L73)

___

### SafetensorsParseFromRepo

Ƭ **SafetensorsParseFromRepo**: \{ `header`: [`SafetensorsFileHeader`](modules#safetensorsfileheader) ; `parameterCount?`: `Partial`\<`Record`\<[`Dtype`](modules#dtype), `number`\>\> ; `parameterTotal?`: `number` ; `sharded`: ``false``  } \| \{ `headers`: [`SafetensorsShardedHeaders`](modules#safetensorsshardedheaders) ; `index`: [`SafetensorsIndexJson`](interfaces/SafetensorsIndexJson) ; `parameterCount?`: `Partial`\<`Record`\<[`Dtype`](modules#dtype), `number`\>\> ; `parameterTotal?`: `number` ; `sharded`: ``true``  }

#### Defined in[[safetensorsparsefromrepo.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:87](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L87)
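
The two branches are discriminated by the `sharded` flag, so consumers can narrow before touching `header` or `headers`. A minimal local sketch (only the fields needed here):

```ts
// Local sketch of the two branches of SafetensorsParseFromRepo (illustrative).
type ParseResult =
  | { sharded: false; header: Record<string, unknown> }
  | { sharded: true; headers: Record<string, Record<string, unknown>> };

// Count the file headers to inspect, narrowing on the `sharded` discriminant.
function headerCount(r: ParseResult): number {
  return r.sharded ? Object.keys(r.headers).length : 1;
}
```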

___

### SafetensorsShardedHeaders

Ƭ **SafetensorsShardedHeaders**: `Record`\<`FileName`, [`SafetensorsFileHeader`](modules#safetensorsfileheader)\>

#### Defined in[[safetensorsshardedheaders.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:85](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L85)

___

### SpaceHardwareFlavor

Ƭ **SpaceHardwareFlavor**: ``"cpu-basic"`` \| ``"cpu-upgrade"`` \| ``"t4-small"`` \| ``"t4-medium"`` \| ``"l4x1"`` \| ``"l4x4"`` \| ``"a10g-small"`` \| ``"a10g-large"`` \| ``"a10g-largex2"`` \| ``"a10g-largex4"`` \| ``"a100-large"`` \| ``"v5e-1x1"`` \| ``"v5e-2x2"`` \| ``"v5e-2x4"``

#### Defined in[[spacehardwareflavor.defined-in]]

[packages/hub/src/types/public.ts:40](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L40)

___

### SpaceSdk

Ƭ **SpaceSdk**: ``"streamlit"`` \| ``"gradio"`` \| ``"docker"`` \| ``"static"``

#### Defined in[[spacesdk.defined-in]]

[packages/hub/src/types/public.ts:56](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L56)

___

### SpaceStage

Ƭ **SpaceStage**: ``"NO_APP_FILE"`` \| ``"CONFIG_ERROR"`` \| ``"BUILDING"`` \| ``"BUILD_ERROR"`` \| ``"RUNNING"`` \| ``"RUNNING_BUILDING"`` \| ``"RUNTIME_ERROR"`` \| ``"DELETING"`` \| ``"PAUSED"`` \| ``"SLEEPING"``

#### Defined in[[spacestage.defined-in]]

[packages/hub/src/types/public.ts:58](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L58)

___

### TensorName

Ƭ **TensorName**: `string`

#### Defined in[[tensorname.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L45)

___

### WhoAmI

Ƭ **WhoAmI**: [`WhoAmIApp`](interfaces/WhoAmIApp) \| [`WhoAmIOrg`](interfaces/WhoAmIOrg) \| [`WhoAmIUser`](interfaces/WhoAmIUser)

#### Defined in[[whoami.defined-in]]

[packages/hub/src/lib/who-am-i.ts:50](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L50)
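
A sketch of narrowing this union, assuming the variants carry a `type` discriminant (`"user"` / `"org"` / `"app"`) as in the [`WhoAmIUser`](interfaces/WhoAmIUser), [`WhoAmIOrg`](interfaces/WhoAmIOrg) and [`WhoAmIApp`](interfaces/WhoAmIApp) interfaces:

```ts
// Local mirror of the documented union, reduced to the fields used here.
type WhoAmI =
  | { type: "user"; name: string }
  | { type: "org"; name: string }
  | { type: "app"; name: string };

// Guard for flows that require a user token rather than an org or app token.
function requireUser(me: WhoAmI): string {
  if (me.type !== "user") {
    throw new Error(`expected a user token, got ${me.type}`);
  }
  return me.name;
}
```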

## Variables

### DATASET\_EXPANDABLE\_KEYS

• `Const` **DATASET\_EXPANDABLE\_KEYS**: readonly [``"author"``, ``"cardData"``, ``"citation"``, ``"createdAt"``, ``"disabled"``, ``"description"``, ``"downloads"``, ``"downloadsAllTime"``, ``"gated"``, ``"gitalyUid"``, ``"lastModified"``, ``"likes"``, ``"paperswithcode_id"``, ``"private"``, ``"sha"``, ``"tags"``]

#### Defined in[[datasetexpandablekeys.defined-in]]

[packages/hub/src/lib/list-datasets.ts:17](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L17)

___

### DATASET\_EXPAND\_KEYS

• `Const` **DATASET\_EXPAND\_KEYS**: readonly [``"private"``, ``"downloads"``, ``"gated"``, ``"likes"``, ``"lastModified"``]

#### Defined in[[datasetexpandkeys.defined-in]]

[packages/hub/src/lib/list-datasets.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L9)

___

### DEFAULT\_REVISION

• `Const` **DEFAULT\_REVISION**: ``"main"``

#### Defined in[[defaultrevision.defined-in]]

[packages/hub/src/lib/snapshot-download.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/snapshot-download.ts#L12)

___

### HUB\_URL

• `Const` **HUB\_URL**: ``"https://huggingface.co"``

#### Defined in[[huburl.defined-in]]

[packages/hub/src/consts.ts:1](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/consts.ts#L1)

___

### MODEL\_EXPANDABLE\_KEYS

• `Const` **MODEL\_EXPANDABLE\_KEYS**: readonly [``"author"``, ``"cardData"``, ``"config"``, ``"createdAt"``, ``"disabled"``, ``"downloads"``, ``"downloadsAllTime"``, ``"gated"``, ``"gitalyUid"``, ``"inferenceProviderMapping"``, ``"lastModified"``, ``"library_name"``, ``"likes"``, ``"model-index"``, ``"pipeline_tag"``, ``"private"``, ``"safetensors"``, ``"sha"``, ``"spaces"``, ``"tags"``, ``"transformersInfo"``]

#### Defined in[[modelexpandablekeys.defined-in]]

[packages/hub/src/lib/list-models.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L19)

___

### MODEL\_EXPAND\_KEYS

• `Const` **MODEL\_EXPAND\_KEYS**: readonly [``"pipeline_tag"``, ``"private"``, ``"gated"``, ``"downloads"``, ``"likes"``, ``"lastModified"``]

#### Defined in[[modelexpandkeys.defined-in]]

[packages/hub/src/lib/list-models.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L10)

___

### REGEX\_COMMIT\_HASH

• `Const` **REGEX\_COMMIT\_HASH**: `RegExp`

#### Defined in[[regexcommithash.defined-in]]

[packages/hub/src/lib/download-file-to-cache-dir.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/download-file-to-cache-dir.ts#L15)

___

### REPO\_ID\_SEPARATOR

• `Const` **REPO\_ID\_SEPARATOR**: `string` = `"--"`

#### Defined in[[repoidseparator.defined-in]]

[packages/hub/src/lib/cache-management.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L25)
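
A sketch of the cache folder naming this separator supports, assuming the `"<type>s--<owner>--<name>"` convention used by the Python `huggingface_hub` cache; [`getRepoFolderName`](modules#getrepofoldername) is the real helper:

```ts
const REPO_ID_SEPARATOR = "--";

// Illustrative folder name for a repo in the local cache,
// e.g. "models--google--flan-t5-base" (assumed convention).
function repoFolderName(type: "model" | "dataset" | "space", name: string): string {
  return [`${type}s`, ...name.split("/")].join(REPO_ID_SEPARATOR);
}
```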

___

### RE\_SAFETENSORS\_FILE

• `Const` **RE\_SAFETENSORS\_FILE**: `RegExp`

#### Defined in[[resafetensorsfile.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L14)

___

### RE\_SAFETENSORS\_INDEX\_FILE

• `Const` **RE\_SAFETENSORS\_INDEX\_FILE**: `RegExp`

#### Defined in[[resafetensorsindexfile.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L15)

___

### RE\_SAFETENSORS\_SHARD\_FILE

• `Const` **RE\_SAFETENSORS\_SHARD\_FILE**: `RegExp`

#### Defined in[[resafetensorsshardfile.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:16](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L16)

___

### SAFETENSORS\_FILE

• `Const` **SAFETENSORS\_FILE**: ``"model.safetensors"``

#### Defined in[[safetensorsfile.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L10)

___

### SAFETENSORS\_INDEX\_FILE

• `Const` **SAFETENSORS\_INDEX\_FILE**: ``"model.safetensors.index.json"``

#### Defined in[[safetensorsindexfile.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L11)

___

### SPACE\_EXPANDABLE\_KEYS

• `Const` **SPACE\_EXPANDABLE\_KEYS**: readonly [``"author"``, ``"cardData"``, ``"datasets"``, ``"disabled"``, ``"gitalyUid"``, ``"lastModified"``, ``"createdAt"``, ``"likes"``, ``"private"``, ``"runtime"``, ``"sdk"``, ``"sha"``, ``"subdomain"``, ``"tags"``, ``"models"``]

#### Defined in[[spaceexpandablekeys.defined-in]]

[packages/hub/src/lib/list-spaces.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L15)

___

### SPACE\_EXPAND\_KEYS

• `Const` **SPACE\_EXPAND\_KEYS**: readonly [``"sdk"``, ``"likes"``, ``"private"``, ``"lastModified"``]

#### Defined in[[spaceexpandkeys.defined-in]]

[packages/hub/src/lib/list-spaces.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L9)

## Functions

### \_\_internal\_sha256

▸ **__internal_sha256**(`buffer`, `opts?`): `AsyncGenerator`\<`number`, `string`\>

#### Parameters[[internalsha256.parameters]]

| Name | Type |
| :------ | :------ |
| `buffer` | `Blob` |
| `opts?` | `Object` |
| `opts.abortSignal?` | `AbortSignal` |
| `opts.useWebWorker?` | `boolean` \| \{ `minSize?`: `number` ; `poolSize?`: `number`  } |

#### Returns[[internalsha256.returns]]

`AsyncGenerator`\<`number`, `string`\>

the hex-encoded SHA-256 digest, as the generator's return value

**`Yields`**

progress (0-1)

#### Defined in[[internalsha256.defined-in]]

[packages/hub/src/utils/sha256.ts:72](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/sha256.ts#L72)

___

### checkRepoAccess

▸ **checkRepoAccess**(`params`): `Promise`\<`void`\>

Check if we have read access to a repository.

Throws a [HubApiError](classes/HubApiError) if we don't have access; `HubApiError.statusCode` will be 401, 403 or 404.

#### Parameters[[checkrepoaccess.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation)  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[checkrepoaccess.returns]]

`Promise`\<`void`\>

#### Defined in[[checkrepoaccess.defined-in]]

[packages/hub/src/lib/check-repo-access.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/check-repo-access.ts#L13)
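
For example, a caller can map the documented `statusCode` values to a readable reason (a minimal sketch; the error class itself lives in `@huggingface/hub`):

```ts
// Map the documented HubApiError.statusCode values to a reason string.
function accessErrorReason(statusCode: number): string {
  switch (statusCode) {
    case 401: return "missing or invalid credentials";
    case 403: return "credentials lack access to this repo";
    case 404: return "repo does not exist (or is hidden from you)";
    default: return `unexpected status ${statusCode}`;
  }
}
```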

___

### commit

▸ **commit**(`params`): `Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Parameters[[commit.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | [`CommitParams`](modules#commitparams) |

#### Returns[[commit.returns]]

`Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[commit.defined-in]]

[packages/hub/src/lib/commit.ts:707](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L707)
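
A hedged sketch of building a [`CommitParams`](modules#commitparams) value; the operation shapes (`operation: "addOrUpdate"` with `content`, `operation: "delete"` with `path`) are assumptions based on the [`CommitFile`](interfaces/CommitFile) / [`CommitDeletedEntry`](interfaces/CommitDeletedEntry) interfaces:

```ts
// Build the params locally; the actual call needs credentials and
// network access (see the commented line at the end).
const params = {
  repo: { type: "model", name: "my-user/my-model" } as const,
  title: "Add weights",
  operations: [
    { operation: "addOrUpdate" as const, path: "model.safetensors", content: new Blob(["weights"]) },
    { operation: "delete" as const, path: "old.bin" },
  ],
};

// With credentials this would then be:
// const output = await commit({ ...params, accessToken: "hf_..." });
```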

___

### commitIter

▸ **commitIter**(`params`): `AsyncGenerator`\<[`CommitProgressEvent`](modules#commitprogressevent), [`CommitOutput`](interfaces/CommitOutput)\>

Internal function for now, used by `commit`.

It may be exposed later to offer fine-grained progress info.

#### Parameters[[commititer.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | [`CommitParams`](modules#commitparams) |

#### Returns[[commititer.returns]]

`AsyncGenerator`\<[`CommitProgressEvent`](modules#commitprogressevent), [`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[commititer.defined-in]]

[packages/hub/src/lib/commit.ts:163](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L163)

___

### countCommits

▸ **countCommits**(`params`): `Promise`\<`number`\>

#### Parameters[[countcommits.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[countcommits.returns]]

`Promise`\<`number`\>

#### Defined in[[countcommits.defined-in]]

[packages/hub/src/lib/count-commits.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/count-commits.ts#L7)

___

### createBranch

▸ **createBranch**(`params`): `Promise`\<`void`\>

#### Parameters[[createbranch.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `params` | `Object` | - |
| `params.accessToken?` | `string` | - |
| `params.branch` | `string` | The name of the branch to create |
| `params.empty?` | `boolean` | Use this to create an empty branch, with no commits. |
| `params.fetch?` | (`input`: `URL` \| `RequestInfo`, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> | [MDN Reference](https://developer.mozilla.org/docs/Web/API/Window/fetch) |
| `params.hubUrl?` | `string` | - |
| `params.overwrite?` | `boolean` | Use this to overwrite the branch if it already exists. If you only specify `overwrite` and no `revision`/`empty`, and the branch already exists, it will be a no-op. |
| `params.repo` | [`RepoDesignation`](modules#repodesignation) | - |
| `params.revision?` | `string` | Revision to create the branch from. Defaults to the default branch. Use empty: true to create an empty branch. |

#### Returns[[createbranch.returns]]

`Promise`\<`void`\>

#### Defined in[[createbranch.defined-in]]

[packages/hub/src/lib/create-branch.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/create-branch.ts#L6)

___

### createCollection

▸ **createCollection**(`params`): `Promise`\<\{ `slug`: `string`  }\>

#### Parameters[[createcollection.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `collection`: `ApiCreateCollectionPayload` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[createcollection.returns]]

`Promise`\<\{ `slug`: `string`  }\>

#### Defined in[[createcollection.defined-in]]

[packages/hub/src/lib/create-collection.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/create-collection.ts#L7)

___

### createRepo

▸ **createRepo**(`params`): `Promise`\<\{ `repoUrl`: `string`  }\>

#### Parameters[[createrepo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `files?`: \{ `content`: `ArrayBuffer` \| `Blob` ; `path`: `string`  }[] ; `hubUrl?`: `string` ; `license?`: `string` ; `private?`: `boolean` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `sdk?`: SpaceSdk \| undefined  } & `CredentialsParams` |

#### Returns[[createrepo.returns]]

`Promise`\<\{ `repoUrl`: `string`  }\>

#### Defined in[[createrepo.defined-in]]

[packages/hub/src/lib/create-repo.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/create-repo.ts#L9)

___

### datasetInfo

▸ **datasetInfo**\<`T`\>(`params`): `Promise`\<[`DatasetEntry`](interfaces/DatasetEntry) & `Pick`\<`ApiDatasetInfo`, `T`\>\>

#### Type parameters[[datasetinfo.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"paperswithcode_id"`` \| ``"sha"`` \| ``"citation"`` \| ``"description"`` \| ``"downloadsAllTime"`` = `never` |

#### Parameters[[datasetinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `additionalFields?`: `T`[] ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `name`: `string` ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[datasetinfo.returns]]

`Promise`\<[`DatasetEntry`](interfaces/DatasetEntry) & `Pick`\<`ApiDatasetInfo`, `T`\>\>

#### Defined in[[datasetinfo.defined-in]]

[packages/hub/src/lib/dataset-info.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/dataset-info.ts#L9)

___

### deleteBranch

▸ **deleteBranch**(`params`): `Promise`\<`void`\>

#### Parameters[[deletebranch.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `params` | `Object` | - |
| `params.accessToken?` | `string` | - |
| `params.branch` | `string` | The name of the branch to delete |
| `params.fetch?` | (`input`: `URL` \| `RequestInfo`, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> | [MDN Reference](https://developer.mozilla.org/docs/Web/API/Window/fetch) |
| `params.hubUrl?` | `string` | - |
| `params.repo` | [`RepoDesignation`](modules#repodesignation) | - |

#### Returns[[deletebranch.returns]]

`Promise`\<`void`\>

#### Defined in[[deletebranch.defined-in]]

[packages/hub/src/lib/delete-branch.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/delete-branch.ts#L6)

___

### deleteCollection

▸ **deleteCollection**(`params`): `Promise`\<`void`\>

#### Parameters[[deletecollection.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `slug`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[deletecollection.returns]]

`Promise`\<`void`\>

#### Defined in[[deletecollection.defined-in]]

[packages/hub/src/lib/delete-collection.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/delete-collection.ts#L6)

___

### deleteFile

▸ **deleteFile**(`params`): `Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Parameters[[deletefile.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `branch?`: `string` ; `commitDescription?`: `string` ; `commitTitle?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `parentCommit?`: `string` ; `path`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation)  } & `CredentialsParams` |

#### Returns[[deletefile.returns]]

`Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[deletefile.defined-in]]

[packages/hub/src/lib/delete-file.ts:5](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/delete-file.ts#L5)

___

### deleteFiles

▸ **deleteFiles**(`params`): `Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Parameters[[deletefiles.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `branch?`: `string` ; `commitDescription?`: `string` ; `commitTitle?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `parentCommit?`: `string` ; `paths`: `string`[] ; `repo`: [`RepoDesignation`](modules#repodesignation)  } & `CredentialsParams` |

#### Returns[[deletefiles.returns]]

`Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[deletefiles.defined-in]]

[packages/hub/src/lib/delete-files.ts:5](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/delete-files.ts#L5)

___

### deleteRepo

▸ **deleteRepo**(`params`): `Promise`\<`void`\>

#### Parameters[[deleterepo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation)  } & `CredentialsParams` |

#### Returns[[deleterepo.returns]]

`Promise`\<`void`\>

#### Defined in[[deleterepo.defined-in]]

[packages/hub/src/lib/delete-repo.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/delete-repo.ts#L7)

___

### downloadFile

▸ **downloadFile**(`params`): `Promise`\<`Blob` \| ``null``\>

#### Parameters[[downloadfile.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `downloadInfo?`: [`FileDownloadInfoOutput`](interfaces/FileDownloadInfoOutput) ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path`: `string` ; `raw?`: `boolean` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string` ; `xet?`: `boolean`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[downloadfile.returns]]

`Promise`\<`Blob` \| ``null``\>

null when the file doesn't exist

#### Defined in[[downloadfile.defined-in]]

[packages/hub/src/lib/download-file.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/download-file.ts#L11)

___

### downloadFileToCacheDir

▸ **downloadFileToCacheDir**(`params`): `Promise`\<`string`\>

Download a given file if it's not already present in the local cache.

#### Parameters[[downloadfiletocachedir.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `cacheDir?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path`: `string` ; `raw?`: `boolean` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[downloadfiletocachedir.returns]]

`Promise`\<`string`\>

the path of the symlink pointing to the blob object

#### Defined in[[downloadfiletocachedir.defined-in]]

[packages/hub/src/lib/download-file-to-cache-dir.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/download-file-to-cache-dir.ts#L45)

___

### fileDownloadInfo

▸ **fileDownloadInfo**(`params`): `Promise`\<[`FileDownloadInfoOutput`](interfaces/FileDownloadInfoOutput) \| ``null``\>

#### Parameters[[filedownloadinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `noContentDisposition?`: `boolean` ; `path`: `string` ; `raw?`: `boolean` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[filedownloadinfo.returns]]

`Promise`\<[`FileDownloadInfoOutput`](interfaces/FileDownloadInfoOutput) \| ``null``\>

null when the file doesn't exist

#### Defined in[[filedownloadinfo.defined-in]]

[packages/hub/src/lib/file-download-info.ts:27](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L27)

___

### fileExists

▸ **fileExists**(`params`): `Promise`\<`boolean`\>

#### Parameters[[fileexists.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[fileexists.returns]]

`Promise`\<`boolean`\>

#### Defined in[[fileexists.defined-in]]

[packages/hub/src/lib/file-exists.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-exists.ts#L7)

___

### getBlobStat

▸ **getBlobStat**(`blobPath`, `blobStats`): `Promise`\<`Stats`\>

#### Parameters[[getblobstat.parameters]]

| Name | Type |
| :------ | :------ |
| `blobPath` | `string` |
| `blobStats` | `Map`\<`string`, `Stats`\> |

#### Returns[[getblobstat.returns]]

`Promise`\<`Stats`\>

#### Defined in[[getblobstat.defined-in]]

[packages/hub/src/lib/cache-management.ts:244](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L244)

___

### getHFHubCachePath

▸ **getHFHubCachePath**(): `string`

#### Returns[[gethfhubcachepath.returns]]

`string`

#### Defined in[[gethfhubcachepath.defined-in]]

[packages/hub/src/lib/cache-management.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L19)

___

### getRepoFolderName

▸ **getRepoFolderName**(`«destructured»`): `string`

#### Parameters[[getrepofoldername.parameters]]

| Name | Type |
| :------ | :------ |
| `«destructured»` | [`RepoId`](interfaces/RepoId) |

#### Returns[[getrepofoldername.returns]]

`string`

#### Defined in[[getrepofoldername.defined-in]]

[packages/hub/src/lib/cache-management.ts:27](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L27)

___

### listCollections

▸ **listCollections**(`params?`): `AsyncGenerator`\<`ApiCollectionInfo`\>

#### Parameters[[listcollections.parameters]]

| Name | Type |
| :------ | :------ |
| `params?` | \{ search?: \{ owner?: string[] \| undefined; item?: string[] \| undefined; q?: string \| undefined; } \| undefined; sort?: "lastModified" \| "trending" \| "upvotes" \| undefined; limit?: number \| undefined; hubUrl?: string \| undefined; fetch?: \{ ...; } \| undefined; } & Partial\<...\> |

#### Returns[[listcollections.returns]]

`AsyncGenerator`\<`ApiCollectionInfo`\>

#### Defined in[[listcollections.defined-in]]

[packages/hub/src/lib/list-collections.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-collections.ts#L12)

___

### listCommits

▸ **listCommits**(`params`): `AsyncGenerator`\<[`CommitData`](interfaces/CommitData)\>

#### Parameters[[listcommits.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `batchSize?`: `number` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[listcommits.returns]]

`AsyncGenerator`\<[`CommitData`](interfaces/CommitData)\>

#### Defined in[[listcommits.defined-in]]

[packages/hub/src/lib/list-commits.ts:17](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L17)

___

### listDatasets

▸ **listDatasets**\<`T`\>(`params?`): `AsyncGenerator`\<[`DatasetEntry`](interfaces/DatasetEntry) & `Pick`\<`ApiDatasetInfo`, `T`\>\>

#### Type parameters[[listdatasets.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"paperswithcode_id"`` \| ``"sha"`` \| ``"citation"`` \| ``"description"`` \| ``"downloadsAllTime"`` = `never` |

#### Parameters[[listdatasets.parameters]]

| Name | Type |
| :------ | :------ |
| `params?` | \{ search?: \{ query?: string \| undefined; owner?: string \| undefined; tags?: string[] \| undefined; } \| undefined; hubUrl?: string \| undefined; additionalFields?: T[] \| undefined; limit?: number \| undefined; fetch?: \{ ...; } \| undefined; } & Partial\<...\> |

#### Returns[[listdatasets.returns]]

`AsyncGenerator`\<[`DatasetEntry`](interfaces/DatasetEntry) & `Pick`\<`ApiDatasetInfo`, `T`\>\>

#### Defined in[[listdatasets.defined-in]]

[packages/hub/src/lib/list-datasets.ts:47](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L47)

___

### listFiles

▸ **listFiles**(`params`): `AsyncGenerator`\<[`ListFileEntry`](interfaces/ListFileEntry)\>

List files in a folder. To list all files in the folder recursively, set
`params.recursive` to `true`.

#### Parameters[[listfiles.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `expand?`: `boolean` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path?`: `string` ; `recursive?`: `boolean` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[listfiles.returns]]

`AsyncGenerator`\<[`ListFileEntry`](interfaces/ListFileEntry)\>

#### Defined in[[listfiles.defined-in]]

[packages/hub/src/lib/list-files.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L42)
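
The result is an `AsyncGenerator`, so entries are consumed with `for await`. A sketch against a stand-in generator (no network access; the entry shape is reduced to the fields used here):

```ts
// Stand-in for listFiles, yielding a couple of entries.
async function* fakeListFiles(): AsyncGenerator<{ path: string; type: string }> {
  yield { path: "README.md", type: "file" };
  yield { path: "model.safetensors", type: "file" };
}

// Collect the paths the same way a real listFiles generator is consumed.
async function paths(gen: AsyncGenerator<{ path: string; type: string }>): Promise<string[]> {
  const out: string[] = [];
  for await (const entry of gen) out.push(entry.path);
  return out;
}
```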

___

### listModels

▸ **listModels**\<`T`\>(`params?`): `AsyncGenerator`\<[`ModelEntry`](interfaces/ModelEntry) & `Pick`\<`ApiModelInfo`, `T`\>\>

#### Type parameters[[listmodels.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"spaces"`` \| ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"sha"`` \| ``"downloadsAllTime"`` \| ``"config"`` \| ``"inferenceProviderMapping"`` \| ``"library_name"`` \| ``"model-index"`` \| ``"safetensors"`` \| ``"transformersInfo"`` = `never` |

#### Parameters[[listmodels.parameters]]

| Name | Type |
| :------ | :------ |
| `params?` | \{ search?: \{ query?: string \| undefined; owner?: string \| undefined; task?: "other" \| "text-classification" \| "token-classification" \| "table-question-answering" \| "question-answering" \| ... 50 more ... \| undefined; tags?: string[] \| undefined; inferenceProviders?: string[] \| undefined; } \| undefined; hubUrl?: strin... |

#### Returns[[listmodels.returns]]

`AsyncGenerator`\<[`ModelEntry`](interfaces/ModelEntry) & `Pick`\<`ApiModelInfo`, `T`\>\>

#### Defined in[[listmodels.defined-in]]

[packages/hub/src/lib/list-models.ts:55](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L55)

___

### listSpaces

▸ **listSpaces**\<`T`\>(`params?`): `AsyncGenerator`\<[`SpaceEntry`](interfaces/SpaceEntry) & `Pick`\<`ApiSpaceInfo`, `T`\>\>

#### Type parameters[[listspaces.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"models"`` \| ``"datasets"`` \| ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"sha"`` \| ``"subdomain"`` \| ``"runtime"`` = `never` |

#### Parameters[[listspaces.parameters]]

| Name | Type |
| :------ | :------ |
| `params?` | \{ search?: \{ query?: string \| undefined; owner?: string \| undefined; tags?: string[] \| undefined; } \| undefined; hubUrl?: string \| undefined; fetch?: \{ (input: URL \| RequestInfo, init?: RequestInit \| undefined): Promise\<...\>; (input: string \| ... 1 more ... \| Request, init?: RequestInit \| undefined): Promise\<...\>; }... |

#### Returns[[listspaces.returns]]

`AsyncGenerator`\<[`SpaceEntry`](interfaces/SpaceEntry) & `Pick`\<`ApiSpaceInfo`, `T`\>\>

#### Defined in[[listspaces.defined-in]]

[packages/hub/src/lib/list-spaces.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L44)

___

### modelInfo

▸ **modelInfo**\<`T`\>(`params`): `Promise`\<[`ModelEntry`](interfaces/ModelEntry) & `Pick`\<`ApiModelInfo`, `T`\>\>

#### Type parameters[[modelinfo.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"spaces"`` \| ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"sha"`` \| ``"downloadsAllTime"`` \| ``"config"`` \| ``"inferenceProviderMapping"`` \| ``"library_name"`` \| ``"model-index"`` \| ``"safetensors"`` \| ``"transformersInfo"`` = `never` |

#### Parameters[[modelinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `additionalFields?`: `T`[] ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `name`: `string` ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[modelinfo.returns]]

`Promise`\<[`ModelEntry`](interfaces/ModelEntry) & `Pick`\<`ApiModelInfo`, `T`\>\>

#### Defined in[[modelinfo.defined-in]]

[packages/hub/src/lib/model-info.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/model-info.ts#L10)

___

### oauthHandleRedirect

▸ **oauthHandleRedirect**(`opts?`): `Promise`\<[`OAuthResult`](interfaces/OAuthResult)\>

Call this after the OAuth provider redirects back to the app.

There is also a helper function [oauthHandleRedirectIfPresent](modules#oauthhandleredirectifpresent), which calls `oauthHandleRedirect` if the URL contains an OAuth code
in the query parameters and returns `false` otherwise.

#### Parameters[[oauthhandleredirect.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `opts?` | `Object` | - |
| `opts.codeVerifier?` | `string` | codeVerifier generated by oauthLoginUrl **`Default`** ```ts localStorage.getItem("huggingface.co:oauth:code_verifier") ``` |
| `opts.hubUrl?` | `string` | The URL of the hub. Defaults to [HUB_URL](modules#hub_url). |
| `opts.nonce?` | `string` | nonce generated by oauthLoginUrl **`Default`** ```ts localStorage.getItem("huggingface.co:oauth:nonce") ``` |
| `opts.redirectedUrl?` | `string` | The URL to analyze. **`Default`** ```ts window.location.href ``` |

#### Returns[[oauthhandleredirect.returns]]

`Promise`\<[`OAuthResult`](interfaces/OAuthResult)\>

#### Defined in[[oauthhandleredirect.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:123](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L123)

___

### oauthHandleRedirectIfPresent

▸ **oauthHandleRedirectIfPresent**(`opts?`): `Promise`\<[`OAuthResult`](interfaces/OAuthResult) \| ``false``\>

Call this after the OAuth provider redirects back to the app.

It returns `false` if the URL does not contain an OAuth code in the query parameters; otherwise
it calls [oauthHandleRedirect](modules#oauthhandleredirect).

Depending on your app, you may want to call [oauthHandleRedirect](modules#oauthhandleredirect) directly instead.

#### Parameters[[oauthhandleredirectifpresent.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `opts?` | `Object` | - |
| `opts.codeVerifier?` | `string` | codeVerifier generated by oauthLoginUrl **`Default`** ```ts localStorage.getItem("huggingface.co:oauth:code_verifier") ``` |
| `opts.hubUrl?` | `string` | The URL of the hub. Defaults to [HUB_URL](modules#hub_url). |
| `opts.nonce?` | `string` | nonce generated by oauthLoginUrl **`Default`** ```ts localStorage.getItem("huggingface.co:oauth:nonce") ``` |
| `opts.redirectedUrl?` | `string` | The URL to analyze. **`Default`** ```ts window.location.href ``` |

#### Returns[[oauthhandleredirectifpresent.returns]]

`Promise`\<[`OAuthResult`](interfaces/OAuthResult) \| ``false``\>

#### Defined in[[oauthhandleredirectifpresent.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:293](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L293)

___

### oauthLoginUrl

▸ **oauthLoginUrl**(`opts?`): `Promise`\<`string`\>

Use "Sign in with Hub" to authenticate a user and get OAuth user info / an access token.

Returns a URL to redirect to. After the user is redirected back to your app, call `oauthHandleRedirect` to get the OAuth user info / access token.

When called from inside a static Space with OAuth enabled, it will load the config from the Space; otherwise you need to specify at least
the client ID of your OAuth App.

#### Parameters[[oauthloginurl.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `opts?` | `Object` | - |
| `opts.clientId?` | `string` | OAuth client ID. For static Spaces, you can omit this and it will be loaded from the Space config, as long as `hf_oauth: true` is present in the README.md's metadata. For other Spaces, it is available to the backend in the OAUTH_CLIENT_ID environment variable, as long as `hf_oauth: true` is present in the README.md's metadata. You can also create a Developer Application at https://huggingface.co/settings/connected-applications and use its client ID. |
| `opts.hubUrl?` | `string` | - |
| `opts.localStorage?` | `Object` | If provided, will be filled with the code verifier and nonce used for the OAuth flow, instead of using localStorage. When calling `oauthHandleRedirectIfPresent` or `oauthHandleRedirect` you will need to provide the same values. |
| `opts.localStorage.codeVerifier?` | `string` | - |
| `opts.localStorage.nonce?` | `string` | - |
| `opts.redirectUrl?` | `string` | Redirect URI, defaults to the current URL. For Spaces, any URL within the Space is allowed. For Developer Applications, you can add any URL you want to the list of allowed redirect URIs at https://huggingface.co/settings/connected-applications. |
| `opts.scopes?` | `string` | OAuth scope, a list of space-separated scopes. For static Spaces, you can omit this and it will be loaded from the Space config, as long as `hf_oauth: true` is present in the README.md's metadata. For other Spaces, it is available to the backend in the OAUTH_SCOPES environment variable, as long as `hf_oauth: true` is present in the README.md's metadata. Defaults to "openid profile". You can also create a Developer Application at https://huggingface.co/settings/connected-applications and use its scopes. See https://huggingface.co/docs/hub/oauth for a list of available scopes. |
| `opts.state?` | `string` | State to pass to the OAuth provider, which will be returned in the call to `oauthLogin` after the redirect. |

#### Returns[[oauthloginurl.returns]]

`Promise`\<`string`\>

**`Example`**

```ts
import { oauthLoginUrl, oauthHandleRedirectIfPresent } from "@huggingface/hub";

const oauthResult = await oauthHandleRedirectIfPresent();

if (!oauthResult) {
  // If the user is not logged in, redirect to the login page
  window.location.href = await oauthLoginUrl();
}

// You can use oauthResult.accessToken, oauthResult.accessTokenExpiresAt and oauthResult.userInfo
console.log(oauthResult);
```

(Theoretically, this function could be used to authenticate a user for any OAuth provider supporting PKCE and OpenID Connect by changing `hubUrl`,
but it is currently only tested with the Hugging Face Hub.)

#### Defined in[[oauthloginurl.defined-in]]

[packages/hub/src/lib/oauth-login-url.ts:31](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-login-url.ts#L31)

___

### parseRepoType

▸ **parseRepoType**(`type`): [`RepoType`](modules#repotype)

#### Parameters[[parserepotype.parameters]]

| Name | Type |
| :------ | :------ |
| `type` | `string` |

#### Returns[[parserepotype.returns]]

[`RepoType`](modules#repotype)

#### Defined in[[parserepotype.defined-in]]

[packages/hub/src/lib/cache-management.ts:254](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L254)

___

### parseSafetensorsMetadata

▸ **parseSafetensorsMetadata**(`params`): `Promise`\<`SetRequired`\<[`SafetensorsParseFromRepo`](modules#safetensorsparsefromrepo), ``"parameterCount"``\>\>

Analyzes `model.safetensors.index.json` or `model.safetensors` from a model hosted
on Hugging Face, using smart range requests to extract its metadata without downloading the weights.

#### Parameters[[parsesafetensorsmetadata.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `computeParametersCount`: ``true`` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[parsesafetensorsmetadata.returns]]

`Promise`\<`SetRequired`\<[`SafetensorsParseFromRepo`](modules#safetensorsparsefromrepo), ``"parameterCount"``\>\>

#### Defined in[[parsesafetensorsmetadata.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:231](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L231)

▸ **parseSafetensorsMetadata**(`params`): `Promise`\<[`SafetensorsParseFromRepo`](modules#safetensorsparsefromrepo)\>

#### Parameters[[parsesafetensorsmetadata.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `computeParametersCount?`: `boolean` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `path?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[parsesafetensorsmetadata.returns]]

`Promise`\<[`SafetensorsParseFromRepo`](modules#safetensorsparsefromrepo)\>

#### Defined in[[parsesafetensorsmetadata.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:253](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L253)

___

### parseSafetensorsShardFilename

▸ **parseSafetensorsShardFilename**(`filename`): [`SafetensorsShardFileInfo`](interfaces/SafetensorsShardFileInfo) \| ``null``

#### Parameters[[parsesafetensorsshardfilename.parameters]]

| Name | Type |
| :------ | :------ |
| `filename` | `string` |

#### Returns[[parsesafetensorsshardfilename.returns]]

[`SafetensorsShardFileInfo`](interfaces/SafetensorsShardFileInfo) \| ``null``

#### Defined in[[parsesafetensorsshardfilename.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:24](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L24)

___

### pathsInfo

▸ **pathsInfo**(`params`): `Promise`\<[`PathInfo`](interfaces/PathInfo) & \{ `lastCommit`: [`CommitInfo`](interfaces/CommitInfo) ; `securityFileStatus`: [`SecurityFileStatus`](interfaces/SecurityFileStatus)  }[]\>

#### Parameters[[pathsinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `expand`: ``true`` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `paths`: `string`[] ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[pathsinfo.returns]]

`Promise`\<[`PathInfo`](interfaces/PathInfo) & \{ `lastCommit`: [`CommitInfo`](interfaces/CommitInfo) ; `securityFileStatus`: [`SecurityFileStatus`](interfaces/SecurityFileStatus)  }[]\>

#### Defined in[[pathsinfo.defined-in]]

[packages/hub/src/lib/paths-info.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L37)

▸ **pathsInfo**(`params`): `Promise`\<[`PathInfo`](interfaces/PathInfo)[]\>

#### Parameters[[pathsinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `expand?`: `boolean` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `paths`: `string`[] ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[pathsinfo.returns]]

`Promise`\<[`PathInfo`](interfaces/PathInfo)[]\>

#### Defined in[[pathsinfo.defined-in]]

[packages/hub/src/lib/paths-info.ts:50](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L50)

___

### repoExists

▸ **repoExists**(`params`): `Promise`\<`boolean`\>

#### Parameters[[repoexists.parameters]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `params` | `Object` | - |
| `params.accessToken?` | `string` | - |
| `params.fetch?` | (`input`: `URL` \| `RequestInfo`, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> | Custom fetch function to use instead of the default one, for example to use a proxy or edit headers. |
| `params.hubUrl?` | `string` | - |
| `params.repo` | [`RepoDesignation`](modules#repodesignation) | - |
| `params.revision?` | `string` | An optional Git revision id which can be a branch name, a tag, or a commit hash. |

#### Returns[[repoexists.returns]]

`Promise`\<`boolean`\>

#### Defined in[[repoexists.defined-in]]

[packages/hub/src/lib/repo-exists.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/repo-exists.ts#L6)

___

### scanCacheDir

▸ **scanCacheDir**(`cacheDir?`): `Promise`\<[`HFCacheInfo`](interfaces/HFCacheInfo)\>

#### Parameters[[scancachedir.parameters]]

| Name | Type | Default value |
| :------ | :------ | :------ |
| `cacheDir` | `undefined` \| `string` | `undefined` |

#### Returns[[scancachedir.returns]]

`Promise`\<[`HFCacheInfo`](interfaces/HFCacheInfo)\>

#### Defined in[[scancachedir.defined-in]]

[packages/hub/src/lib/cache-management.ts:72](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L72)

___

### scanCachedRepo

▸ **scanCachedRepo**(`repoPath`): `Promise`\<[`CachedRepoInfo`](interfaces/CachedRepoInfo)\>

#### Parameters[[scancachedrepo.parameters]]

| Name | Type |
| :------ | :------ |
| `repoPath` | `string` |

#### Returns[[scancachedrepo.returns]]

`Promise`\<[`CachedRepoInfo`](interfaces/CachedRepoInfo)\>

#### Defined in[[scancachedrepo.defined-in]]

[packages/hub/src/lib/cache-management.ts:114](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L114)

___

### scanRefsDir

▸ **scanRefsDir**(`refsPath`, `refsByHash`): `Promise`\<`void`\>

#### Parameters[[scanrefsdir.parameters]]

| Name | Type |
| :------ | :------ |
| `refsPath` | `string` |
| `refsByHash` | `Map`\<`string`, `string`[]\> |

#### Returns[[scanrefsdir.returns]]

`Promise`\<`void`\>

#### Defined in[[scanrefsdir.defined-in]]

[packages/hub/src/lib/cache-management.ts:204](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L204)

___

### scanSnapshotDir

▸ **scanSnapshotDir**(`revisionPath`, `cachedFiles`, `blobStats`): `Promise`\<`void`\>

#### Parameters[[scansnapshotdir.parameters]]

| Name | Type |
| :------ | :------ |
| `revisionPath` | `string` |
| `cachedFiles` | [`CachedFileInfo`](interfaces/CachedFileInfo)[] |
| `blobStats` | `Map`\<`string`, `Stats`\> |

#### Returns[[scansnapshotdir.returns]]

`Promise`\<`void`\>

#### Defined in[[scansnapshotdir.defined-in]]

[packages/hub/src/lib/cache-management.ts:219](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L219)

___

### snapshotDownload

▸ **snapshotDownload**(`params`): `Promise`\<`string`\>

Downloads an entire repository at a given revision into the cache directory [getHFHubCachePath](modules#gethfhubcachepath).
You can list all cached repositories using [scanCachedRepo](modules#scancachedrepo).

#### Parameters[[snapshotdownload.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `cacheDir?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[snapshotdownload.returns]]

`Promise`\<`string`\>

**`Remarks`**

It uses internally [downloadFileToCacheDir](modules#downloadfiletocachedir).

#### Defined in[[snapshotdownload.defined-in]]

[packages/hub/src/lib/snapshot-download.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/snapshot-download.ts#L19)

___

### spaceInfo

▸ **spaceInfo**\<`T`\>(`params`): `Promise`\<[`SpaceEntry`](interfaces/SpaceEntry) & `Pick`\<`ApiSpaceInfo`, `T`\>\>

#### Type parameters[[spaceinfo.type-parameters]]

| Name | Type |
| :------ | :------ |
| `T` | extends ``"models"`` \| ``"datasets"`` \| ``"author"`` \| ``"cardData"`` \| ``"disabled"`` \| ``"gitalyUid"`` \| ``"createdAt"`` \| ``"tags"`` \| ``"sha"`` \| ``"subdomain"`` \| ``"runtime"`` = `never` |

#### Parameters[[spaceinfo.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `additionalFields?`: `T`[] ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string` ; `name`: `string` ; `revision?`: `string`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[spaceinfo.returns]]

`Promise`\<[`SpaceEntry`](interfaces/SpaceEntry) & `Pick`\<`ApiSpaceInfo`, `T`\>\>

#### Defined in[[spaceinfo.defined-in]]

[packages/hub/src/lib/space-info.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/space-info.ts#L10)

___

### uploadFile

▸ **uploadFile**(`params`): `Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Parameters[[uploadfile.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `abortSignal?`: `AbortSignal` ; `branch?`: `string` ; `commitDescription?`: `string` ; `commitTitle?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `file`: `URL` \| `File` \| \{ `content`: [`ContentSource`](modules#contentsource) ; `path`: `string`  } ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `parentCommit?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `useWebWorkers?`: `boolean` \| \{ `minSize?`: `number` ; `poolSize?`: `number`  } ; `useXet?`: `boolean`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[uploadfile.returns]]

`Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[uploadfile.defined-in]]

[packages/hub/src/lib/upload-file.ts:5](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/upload-file.ts#L5)

___

### uploadFiles

▸ **uploadFiles**(`params`): `Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Parameters[[uploadfiles.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `abortSignal?`: `AbortSignal` ; `branch?`: `string` ; `commitDescription?`: `string` ; `commitTitle?`: `string` ; `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `files`: (`URL` \| `File` \| \{ `content`: [`ContentSource`](modules#contentsource) ; `path`: `string`  })[] ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `maxFolderDepth?`: `number` ; `parentCommit?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `useWebWorkers?`: `boolean` \| \{ `minSize?`: `number` ; `poolSize?`: `number`  } ; `useXet?`: `boolean`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[uploadfiles.returns]]

`Promise`\<[`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[uploadfiles.defined-in]]

[packages/hub/src/lib/upload-files.ts:5](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/upload-files.ts#L5)

___

### uploadFilesWithProgress

▸ **uploadFilesWithProgress**(`params`): `AsyncGenerator`\<[`CommitProgressEvent`](modules#commitprogressevent), [`CommitOutput`](interfaces/CommitOutput)\>

Uploads files with progress events.

XMLHttpRequest must be available for upload progress events.
Set `useWebWorkers` to `true` to also get progress events for hashing.

#### Parameters[[uploadfileswithprogress.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `abortSignal?`: `AbortSignal` ; `branch?`: `string` ; `commitDescription?`: `string` ; `commitTitle?`: `string` ; `files`: (`URL` \| `File` \| \{ `content`: [`ContentSource`](modules#contentsource) ; `path`: `string`  })[] ; `hubUrl?`: `string` ; `isPullRequest?`: `boolean` ; `maxFolderDepth?`: `number` ; `parentCommit?`: `string` ; `repo`: [`RepoDesignation`](modules#repodesignation) ; `useWebWorkers?`: `boolean` \| \{ `minSize?`: `number` ; `poolSize?`: `number`  } ; `useXet?`: `boolean`  } & `Partial`\<`CredentialsParams`\> |

#### Returns[[uploadfileswithprogress.returns]]

`AsyncGenerator`\<[`CommitProgressEvent`](modules#commitprogressevent), [`CommitOutput`](interfaces/CommitOutput)\>

#### Defined in[[uploadfileswithprogress.defined-in]]

[packages/hub/src/lib/upload-files-with-progress.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/upload-files-with-progress.ts#L20)

___

### whoAmI

▸ **whoAmI**(`params`): `Promise`\<[`WhoAmI`](modules#whoami) & \{ `auth`: [`AuthInfo`](interfaces/AuthInfo)  }\>

#### Parameters[[whoami.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | \{ `fetch?`: (`input`: URL \| RequestInfo, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\> ; `hubUrl?`: `string`  } & `CredentialsParams` |

#### Returns[[whoami.returns]]

`Promise`\<[`WhoAmI`](modules#whoami) & \{ `auth`: [`AuthInfo`](interfaces/AuthInfo)  }\>

#### Defined in[[whoami.defined-in]]

[packages/hub/src/lib/who-am-i.ts:61](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L61)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/modules.md" />

### Class: InvalidApiResponseFormatError
https://huggingface.co/docs/huggingface.js/hub/classes/InvalidApiResponseFormatError.md

# Class: InvalidApiResponseFormatError

## Hierarchy

- `Error`

  ↳ **`InvalidApiResponseFormatError`**

## Constructors

### constructor

• **new InvalidApiResponseFormatError**(`message?`): [`InvalidApiResponseFormatError`](InvalidApiResponseFormatError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message?` | `string` |

#### Returns[[constructor.returns]]

[`InvalidApiResponseFormatError`](InvalidApiResponseFormatError)

#### Inherited from[[constructor.inherited-from]]

Error.constructor

#### Defined in[[constructor.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1082

• **new InvalidApiResponseFormatError**(`message?`, `options?`): [`InvalidApiResponseFormatError`](InvalidApiResponseFormatError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `message?` | `string` |
| `options?` | `ErrorOptions` |

#### Returns[[constructor.returns]]

[`InvalidApiResponseFormatError`](InvalidApiResponseFormatError)

#### Inherited from[[constructor.inherited-from]]

Error.constructor

#### Defined in[[constructor.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1082

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

Error.cause

#### Defined in[[cause.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

Error.message

#### Defined in[[message.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

Error.name

#### Defined in[[name.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

Error.stack

#### Defined in[[stack.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

Error.prepareStackTrace

#### Defined in[[preparestacktrace.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:143

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

Error.stackTraceLimit

#### Defined in[[stacktracelimit.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:145

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

Error.captureStackTrace

#### Defined in[[capturestacktrace.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:136


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/classes/InvalidApiResponseFormatError.md" />

### Class: \_\_internal\_XetBlob
https://huggingface.co/docs/huggingface.js/hub/classes/_internal_XetBlob.md

# Class: \_\_internal\_XetBlob

XetBlob is a blob implementation that fetches data directly from Xet storage.

## Hierarchy

- `Blob`

  ↳ **`__internal_XetBlob`**

## Constructors

### constructor

• **new __internal_XetBlob**(`params`): [`__internal_XetBlob`](_internal_XetBlob)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `params` | `XetBlobCreateOptions` |

#### Returns[[constructor.returns]]

[`__internal_XetBlob`](_internal_XetBlob)

#### Overrides[[constructor.overrides]]

Blob.constructor

#### Defined in[[constructor.defined-in]]

[packages/hub/src/utils/XetBlob.ts:96](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L96)

## Properties

### #reconstructionInfoPromise

• `Private` `Optional` **#reconstructionInfoPromise**: `Promise`\<`ReconstructionInfo`\>

#### Defined in[[reconstructioninfopromise.defined-in]]

[packages/hub/src/utils/XetBlob.ts:151](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L151)

___

### accessToken

• `Optional` **accessToken**: `string`

#### Defined in[[accesstoken.defined-in]]

[packages/hub/src/utils/XetBlob.ts:86](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L86)

___

### end

• **end**: `number` = `0`

#### Defined in[[end.defined-in]]

[packages/hub/src/utils/XetBlob.ts:91](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L91)

___

### fetch

• **fetch**: (`input`: `URL` \| `RequestInfo`, `init?`: `RequestInit`) => `Promise`\<`Response`\>(`input`: `string` \| `URL` \| `Request`, `init?`: `RequestInit`) => `Promise`\<`Response`\>

#### Type declaration[[fetch.type-declaration]]

▸ (`input`, `init?`): `Promise`\<`Response`\>

[MDN Reference](https://developer.mozilla.org/docs/Web/API/Window/fetch)

##### Parameters[[fetch.parameters]]

| Name | Type |
| :------ | :------ |
| `input` | `URL` \| `RequestInfo` |
| `init?` | `RequestInit` |

##### Returns[[fetch.returns]]

`Promise`\<`Response`\>

▸ (`input`, `init?`): `Promise`\<`Response`\>

##### Parameters[[fetch.parameters]]

| Name | Type |
| :------ | :------ |
| `input` | `string` \| `URL` \| `Request` |
| `init?` | `RequestInit` |

##### Returns[[fetch.returns]]

`Promise`\<`Response`\>

#### Defined in[[fetch.defined-in]]

[packages/hub/src/utils/XetBlob.ts:85](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L85)

___

### hash

• `Optional` **hash**: `string`

#### Defined in[[hash.defined-in]]

[packages/hub/src/utils/XetBlob.ts:89](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L89)

___

### internalLogging

• **internalLogging**: `boolean` = `false`

#### Defined in[[internallogging.defined-in]]

[packages/hub/src/utils/XetBlob.ts:92](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L92)

___

### listener

• **listener**: `undefined` \| (`arg`: \{ `event`: ``"read"``  } \| \{ `event`: ``"progress"`` ; `progress`: \{ `read`: `number` ; `total`: `number`  }  }) => `void`

#### Defined in[[listener.defined-in]]

[packages/hub/src/utils/XetBlob.ts:94](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L94)

___

### reconstructionInfo

• **reconstructionInfo**: `undefined` \| `ReconstructionInfo`

#### Defined in[[reconstructioninfo.defined-in]]

[packages/hub/src/utils/XetBlob.ts:93](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L93)

___

### reconstructionUrl

• `Optional` **reconstructionUrl**: `string`

#### Defined in[[reconstructionurl.defined-in]]

[packages/hub/src/utils/XetBlob.ts:88](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L88)

___

### refreshUrl

• **refreshUrl**: `string`

#### Defined in[[refreshurl.defined-in]]

[packages/hub/src/utils/XetBlob.ts:87](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L87)

___

### start

• **start**: `number` = `0`

#### Defined in[[start.defined-in]]

[packages/hub/src/utils/XetBlob.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L90)

___

### type

• `Readonly` **type**: `string`

[MDN Reference](https://developer.mozilla.org/docs/Web/API/Blob/type)

#### Inherited from[[type.inherited-from]]

Blob.type

#### Defined in[[type.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.dom.d.ts:3501

## Accessors

### size

• `get` **size**(): `number`

#### Returns[[size.returns]]

`number`

#### Overrides[[size.overrides]]

Blob.size

#### Defined in[[size.defined-in]]

[packages/hub/src/utils/XetBlob.ts:110](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L110)

## Methods

### #clone

▸ **#clone**(): [`__internal_XetBlob`](_internal_XetBlob)

#### Returns[[clone.returns]]

[`__internal_XetBlob`](_internal_XetBlob)

#### Defined in[[clone.defined-in]]

[packages/hub/src/utils/XetBlob.ts:114](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L114)

___

### #fetch

▸ **#fetch**(): `Promise`\<`ReadableStream`\<`Uint8Array`\<`ArrayBufferLike`\>\>\>

#### Returns[[fetch.returns]]

`Promise`\<`ReadableStream`\<`Uint8Array`\<`ArrayBufferLike`\>\>\>

#### Defined in[[fetch.defined-in]]

[packages/hub/src/utils/XetBlob.ts:184](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L184)

___

### #loadReconstructionInfo

▸ **#loadReconstructionInfo**(): `Promise`\<`ReconstructionInfo`\>

#### Returns[[loadreconstructioninfo.returns]]

`Promise`\<`ReconstructionInfo`\>

#### Defined in[[loadreconstructioninfo.defined-in]]

[packages/hub/src/utils/XetBlob.ts:153](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L153)

___

### arrayBuffer

▸ **arrayBuffer**(): `Promise`\<`ArrayBuffer`\>

#### Returns[[arraybuffer.returns]]

`Promise`\<`ArrayBuffer`\>

#### Overrides[[arraybuffer.overrides]]

Blob.arrayBuffer

#### Defined in[[arraybuffer.defined-in]]

[packages/hub/src/utils/XetBlob.ts:486](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L486)

___

### bytes

▸ **bytes**(): `Promise`\<`Uint8Array`\<`ArrayBufferLike`\>\>

[MDN Reference](https://developer.mozilla.org/docs/Web/API/Blob/bytes)

#### Returns[[bytes.returns]]

`Promise`\<`Uint8Array`\<`ArrayBufferLike`\>\>

#### Inherited from[[bytes.inherited-from]]

Blob.bytes

#### Defined in[[bytes.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.dom.d.ts:3505

___

### response

▸ **response**(): `Promise`\<`Response`\>

#### Returns[[response.returns]]

`Promise`\<`Response`\>

#### Defined in[[response.defined-in]]

[packages/hub/src/utils/XetBlob.ts:498](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L498)

___

### slice

▸ **slice**(`start?`, `end?`): [`__internal_XetBlob`](_internal_XetBlob)

#### Parameters[[slice.parameters]]

| Name | Type | Default value |
| :------ | :------ | :------ |
| `start` | `number` | `0` |
| `end` | `number` | `undefined` |

#### Returns[[slice.returns]]

[`__internal_XetBlob`](_internal_XetBlob)

#### Overrides[[slice.overrides]]

Blob.slice

#### Defined in[[slice.defined-in]]

[packages/hub/src/utils/XetBlob.ts:134](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L134)
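`slice` overrides `Blob.slice` and returns a new blob over the requested byte range without fetching data. The range semantics mirror the standard `Blob.slice`, shown here with a plain `Blob` (Node 18+):

```typescript
// Standard Blob.slice semantics: the [start, end) byte range.
const blob = new Blob(["hello world"]);
const part = blob.slice(0, 5);
console.log(await part.text()); // "hello"
```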

___

### stream

▸ **stream**(): `ReadableStream`\<`Uint8Array`\<`ArrayBufferLike`\>\>

#### Returns[[stream.returns]]

`ReadableStream`\<`Uint8Array`\<`ArrayBufferLike`\>\>

#### Overrides[[stream.overrides]]

Blob.stream

#### Defined in[[stream.defined-in]]

[packages/hub/src/utils/XetBlob.ts:504](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L504)

___

### text

▸ **text**(): `Promise`\<`string`\>

#### Returns[[text.returns]]

`Promise`\<`string`\>

#### Overrides[[text.overrides]]

Blob.text

#### Defined in[[text.defined-in]]

[packages/hub/src/utils/XetBlob.ts:492](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/utils/XetBlob.ts#L492)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/classes/_internal_XetBlob.md" />

### Class: HubApiError
https://huggingface.co/docs/huggingface.js/hub/classes/HubApiError.md

# Class: HubApiError

Error thrown when an API call to the Hugging Face Hub fails.

## Hierarchy

- `Error`

  ↳ **`HubApiError`**

## Constructors

### constructor

• **new HubApiError**(`url`, `statusCode`, `requestId?`, `message?`): [`HubApiError`](HubApiError)

#### Parameters[[constructor.parameters]]

| Name | Type |
| :------ | :------ |
| `url` | `string` |
| `statusCode` | `number` |
| `requestId?` | `string` |
| `message?` | `string` |

#### Returns[[constructor.returns]]

[`HubApiError`](HubApiError)

#### Overrides[[constructor.overrides]]

Error.constructor

#### Defined in[[constructor.defined-in]]

[packages/hub/src/error.ts:40](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/error.ts#L40)

## Properties

### cause

• `Optional` **cause**: `unknown`

#### Inherited from[[cause.inherited-from]]

Error.cause

#### Defined in[[cause.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es2022.error.d.ts:26

___

### data

• `Optional` **data**: `JsonObject`

#### Defined in[[data.defined-in]]

[packages/hub/src/error.ts:38](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/error.ts#L38)

___

### message

• **message**: `string`

#### Inherited from[[message.inherited-from]]

Error.message

#### Defined in[[message.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1077

___

### name

• **name**: `string`

#### Inherited from[[name.inherited-from]]

Error.name

#### Defined in[[name.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1076

___

### requestId

• `Optional` **requestId**: `string`

#### Defined in[[requestid.defined-in]]

[packages/hub/src/error.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/error.ts#L37)

___

### stack

• `Optional` **stack**: `string`

#### Inherited from[[stack.inherited-from]]

Error.stack

#### Defined in[[stack.defined-in]]

packages/doc-internal/node_modules/.pnpm/typescript@5.8.3/node_modules/typescript/lib/lib.es5.d.ts:1078

___

### statusCode

• **statusCode**: `number`

#### Defined in[[statuscode.defined-in]]

[packages/hub/src/error.ts:35](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/error.ts#L35)

___

### url

• **url**: `string`

#### Defined in[[url.defined-in]]

[packages/hub/src/error.ts:36](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/error.ts#L36)

___

### prepareStackTrace

▪ `Static` `Optional` **prepareStackTrace**: (`err`: `Error`, `stackTraces`: `CallSite`[]) => `any`

Optional override for formatting stack traces

**`See`**

https://v8.dev/docs/stack-trace-api#customizing-stack-traces

#### Type declaration[[preparestacktrace.type-declaration]]

▸ (`err`, `stackTraces`): `any`

##### Parameters[[preparestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `err` | `Error` |
| `stackTraces` | `CallSite`[] |

##### Returns[[preparestacktrace.returns]]

`any`

#### Inherited from[[preparestacktrace.inherited-from]]

Error.prepareStackTrace

#### Defined in[[preparestacktrace.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:143

___

### stackTraceLimit

▪ `Static` **stackTraceLimit**: `number`

#### Inherited from[[stacktracelimit.inherited-from]]

Error.stackTraceLimit

#### Defined in[[stacktracelimit.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:145

## Methods

### captureStackTrace

▸ **captureStackTrace**(`targetObject`, `constructorOpt?`): `void`

Create .stack property on a target object

#### Parameters[[capturestacktrace.parameters]]

| Name | Type |
| :------ | :------ |
| `targetObject` | `object` |
| `constructorOpt?` | `Function` |

#### Returns[[capturestacktrace.returns]]

`void`

#### Inherited from[[capturestacktrace.inherited-from]]

Error.captureStackTrace

#### Defined in[[capturestacktrace.defined-in]]

node_modules/.pnpm/@types+node@22.14.1/node_modules/@types/node/globals.d.ts:136
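A sketch of the typical catch-and-inspect pattern. The class below is a local stand-in mirroring the documented constructor and fields (`url`, `statusCode`, `requestId`, `message`), not an import from `@huggingface/hub`:

```typescript
// Hypothetical local mirror of the documented HubApiError shape.
class HubApiErrorLike extends Error {
	constructor(
		public url: string,
		public statusCode: number,
		public requestId?: string,
		message?: string
	) {
		super(message ?? `HTTP ${statusCode} on ${url}`);
		this.name = "HubApiError";
	}
}

// Typical handling: narrow with instanceof, then read the structured fields.
function describeFailure(err: unknown): string {
	if (err instanceof HubApiErrorLike) {
		return `${err.statusCode} ${err.url} (request ${err.requestId ?? "n/a"})`;
	}
	throw err;
}
```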


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/classes/HubApiError.md" />

### Interface: SpaceRuntime
https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceRuntime.md

# Interface: SpaceRuntime

## Properties

### errorMessage

• `Optional` **errorMessage**: `string`

#### Defined in[[errormessage.defined-in]]

[packages/hub/src/types/public.ts:80](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L80)

___

### gcTimeout

• `Optional` **gcTimeout**: ``null`` \| `number`

in seconds

#### Defined in[[gctimeout.defined-in]]

[packages/hub/src/types/public.ts:90](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L90)

___

### hardware

• `Optional` **hardware**: `Object`

#### Type declaration[[hardware.type-declaration]]

| Name | Type |
| :------ | :------ |
| `current` | ``null`` \| [`SpaceHardwareFlavor`](../modules#spacehardwareflavor) |
| `currentPrettyName?` | `string` |
| `requested` | ``null`` \| [`SpaceHardwareFlavor`](../modules#spacehardwareflavor) |
| `requestedPrettyName?` | `string` |

#### Defined in[[hardware.defined-in]]

[packages/hub/src/types/public.ts:81](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L81)

___

### resources

• `Optional` **resources**: [`SpaceResourceConfig`](SpaceResourceConfig)

When calling `/spaces`, these props are only fetched if `?full=true`.

#### Defined in[[resources.defined-in]]

[packages/hub/src/types/public.ts:88](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L88)

___

### sdk

• `Optional` **sdk**: [`SpaceSdk`](../modules#spacesdk)

#### Defined in[[sdk.defined-in]]

[packages/hub/src/types/public.ts:78](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L78)

___

### sdkVersion

• `Optional` **sdkVersion**: `string`

#### Defined in[[sdkversion.defined-in]]

[packages/hub/src/types/public.ts:79](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L79)

___

### stage

• **stage**: [`SpaceStage`](../modules#spacestage)

#### Defined in[[stage.defined-in]]

[packages/hub/src/types/public.ts:77](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L77)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SpaceRuntime.md" />

### Interface: XetFileInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/XetFileInfo.md

# Interface: XetFileInfo

## Properties

### hash

• **hash**: `string`

#### Defined in[[hash.defined-in]]

[packages/hub/src/lib/file-download-info.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L9)

___

### reconstructionUrl

• **reconstructionUrl**: `URL`

Can be directly used instead of the hash.

#### Defined in[[reconstructionurl.defined-in]]

[packages/hub/src/lib/file-download-info.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L14)

___

### refreshUrl

• **refreshUrl**: `URL`

#### Defined in[[refreshurl.defined-in]]

[packages/hub/src/lib/file-download-info.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L10)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/XetFileInfo.md" />

### Interface: SecurityFileStatus
https://huggingface.co/docs/huggingface.js/hub/interfaces/SecurityFileStatus.md

# Interface: SecurityFileStatus

## Properties

### status

• **status**: `string`

#### Defined in[[status.defined-in]]

[packages/hub/src/lib/paths-info.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L20)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SecurityFileStatus.md" />

### Interface: WhoAmIUser
https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIUser.md

# Interface: WhoAmIUser

## Properties

### avatarUrl

• **avatarUrl**: `string`

#### Defined in[[avatarurl.defined-in]]

[packages/hub/src/lib/who-am-i.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L18)

___

### canPay

• **canPay**: `boolean`

#### Defined in[[canpay.defined-in]]

[packages/hub/src/lib/who-am-i.ts:17](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L17)

___

### email

• **email**: `string`

#### Defined in[[email.defined-in]]

[packages/hub/src/lib/who-am-i.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L11)

___

### emailVerified

• **emailVerified**: `boolean`

#### Defined in[[emailverified.defined-in]]

[packages/hub/src/lib/who-am-i.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L12)

___

### fullname

• **fullname**: `string`

#### Defined in[[fullname.defined-in]]

[packages/hub/src/lib/who-am-i.ts:16](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L16)

___

### id

• **id**: `string`

Unique ID persistent across renames

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/who-am-i.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L9)

___

### isPro

• **isPro**: `boolean`

#### Defined in[[ispro.defined-in]]

[packages/hub/src/lib/who-am-i.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L13)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/who-am-i.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L15)

___

### orgs

• **orgs**: [`WhoAmIOrg`](WhoAmIOrg)[]

#### Defined in[[orgs.defined-in]]

[packages/hub/src/lib/who-am-i.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L14)

___

### periodEnd

• **periodEnd**: ``null`` \| `number`

Unix timestamp in seconds

#### Defined in[[periodend.defined-in]]

[packages/hub/src/lib/who-am-i.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L22)

___

### type

• **type**: ``"user"``

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/who-am-i.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L10)
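Since `periodEnd` is a Unix timestamp in seconds (or `null`), it needs a `* 1000` before constructing a JS `Date`. A small sketch over a local subset of the documented shape:

```typescript
// Local subset of the documented WhoAmIUser shape.
interface BillingPeriod {
	periodEnd: null | number; // Unix timestamp in seconds
}

// Convert seconds to milliseconds for the Date constructor.
function periodEndDate(user: BillingPeriod): Date | null {
	return user.periodEnd === null ? null : new Date(user.periodEnd * 1000);
}
```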


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/WhoAmIUser.md" />

### Interface: AuthInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/AuthInfo.md

# Interface: AuthInfo

## Properties

### accessToken

• `Optional` **accessToken**: `Object`

#### Type declaration[[accesstoken.type-declaration]]

| Name | Type |
| :------ | :------ |
| `createdAt` | `Date` |
| `displayName` | `string` |
| `role` | [`AccessTokenRole`](../modules#accesstokenrole) |

#### Defined in[[accesstoken.defined-in]]

[packages/hub/src/lib/who-am-i.ts:53](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L53)

___

### expiresAt

• `Optional` **expiresAt**: `Date`

#### Defined in[[expiresat.defined-in]]

[packages/hub/src/lib/who-am-i.ts:58](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L58)

___

### type

• **type**: [`AuthType`](../modules#authtype)

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/who-am-i.ts:52](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L52)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/AuthInfo.md" />

### Interface: ModelConfig
https://huggingface.co/docs/huggingface.js/hub/interfaces/ModelConfig.md

# Interface: ModelConfig

## Properties

### quantization\_config

• `Optional` **quantization\_config**: [`QuantizationConfig`](QuantizationConfig)

#### Defined in[[quantizationconfig.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:359](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L359)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/ModelConfig.md" />

### Interface: Credentials
https://huggingface.co/docs/huggingface.js/hub/interfaces/Credentials.md

# Interface: Credentials

**`Deprecated`**

Use `AccessToken` instead. Pass `{ accessToken: "hf_..." }` instead of `{ credentials: { accessToken: "hf_..." } }`.

## Properties

### accessToken

• **accessToken**: `string`

#### Defined in[[accesstoken.defined-in]]

[packages/hub/src/types/public.ts:21](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L21)
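Code migrating off the deprecated shape may want to accept both forms during the transition. The helper and parameter names below are illustrative, not part of the package:

```typescript
// Accept both the deprecated `credentials` wrapper and the new flat field.
interface AuthParams {
	accessToken?: string;
	/** @deprecated */
	credentials?: { accessToken: string };
}

// Prefer the new flat field; fall back to the deprecated wrapper.
function resolveAccessToken(params: AuthParams): string | undefined {
	return params.accessToken ?? params.credentials?.accessToken;
}
```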


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/Credentials.md" />

### Interface: CommitData
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitData.md

# Interface: CommitData

## Properties

### authors

• **authors**: \{ `avatarUrl`: `string` ; `username`: `string`  }[]

#### Defined in[[authors.defined-in]]

[packages/hub/src/lib/list-commits.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L13)

___

### date

• **date**: `Date`

#### Defined in[[date.defined-in]]

[packages/hub/src/lib/list-commits.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L14)

___

### message

• **message**: `string`

#### Defined in[[message.defined-in]]

[packages/hub/src/lib/list-commits.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L12)

___

### oid

• **oid**: `string`

#### Defined in[[oid.defined-in]]

[packages/hub/src/lib/list-commits.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L10)

___

### title

• **title**: `string`

#### Defined in[[title.defined-in]]

[packages/hub/src/lib/list-commits.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-commits.ts#L11)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitData.md" />

### Interface: PathInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/PathInfo.md

# Interface: PathInfo

## Properties

### lastCommit

• `Optional` **lastCommit**: [`CommitInfo`](CommitInfo)

#### Defined in[[lastcommit.defined-in]]

[packages/hub/src/lib/paths-info.ts:32](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L32)

___

### lfs

• `Optional` **lfs**: [`LfsPathInfo`](LfsPathInfo)

Only defined when the path is an LFS pointer

#### Defined in[[lfs.defined-in]]

[packages/hub/src/lib/paths-info.ts:31](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L31)

___

### oid

• **oid**: `string`

#### Defined in[[oid.defined-in]]

[packages/hub/src/lib/paths-info.ts:26](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L26)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/paths-info.ts:24](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L24)

___

### securityFileStatus

• `Optional` **securityFileStatus**: [`SecurityFileStatus`](SecurityFileStatus)

#### Defined in[[securityfilestatus.defined-in]]

[packages/hub/src/lib/paths-info.ts:33](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L33)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/paths-info.ts:27](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L27)

___

### type

• **type**: `string`

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/paths-info.ts:25](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L25)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/PathInfo.md" />

### Interface: CommitOutput
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitOutput.md

# Interface: CommitOutput

## Properties

### commit

• **commit**: `Object`

#### Type declaration[[commit.type-declaration]]

| Name | Type |
| :------ | :------ |
| `oid` | `string` |
| `url` | `string` |

#### Defined in[[commit.defined-in]]

[packages/hub/src/lib/commit.ts:129](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L129)

___

### hookOutput

• **hookOutput**: `string`

#### Defined in[[hookoutput.defined-in]]

[packages/hub/src/lib/commit.ts:133](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L133)

___

### pullRequestUrl

• `Optional` **pullRequestUrl**: `string`

#### Defined in[[pullrequesturl.defined-in]]

[packages/hub/src/lib/commit.ts:128](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L128)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitOutput.md" />

### Interface: TensorInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/TensorInfo.md

# Interface: TensorInfo

## Properties

### data\_offsets

• **data\_offsets**: [`number`, `number`]

#### Defined in[[dataoffsets.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:70](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L70)

___

### dtype

• **dtype**: [`Dtype`](../modules#dtype)

#### Defined in[[dtype.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:68](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L68)

___

### shape

• **shape**: `number`[]

#### Defined in[[shape.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:69](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L69)
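`data_offsets` are begin/end byte offsets into the safetensors data section, so their difference is the tensor's byte length, and the product over `shape` is its element count. A sketch over a local mirror of the documented shape (the `[begin, end)` reading is an assumption consistent with the safetensors format):

```typescript
// Local mirror of the documented TensorInfo shape.
interface TensorInfoLike {
	dtype: string;
	shape: number[];
	data_offsets: [number, number]; // assumed [begin, end) byte offsets
}

// Byte length spanned by the tensor in the data section.
function byteLength(t: TensorInfoLike): number {
	return t.data_offsets[1] - t.data_offsets[0];
}

// Total element count: product of all shape dimensions.
function elementCount(t: TensorInfoLike): number {
	return t.shape.reduce((acc, dim) => acc * dim, 1);
}
```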


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/TensorInfo.md" />

### Interface: ModelEntry
https://huggingface.co/docs/huggingface.js/hub/interfaces/ModelEntry.md

# Interface: ModelEntry

## Properties

### downloads

• **downloads**: `number`

#### Defined in[[downloads.defined-in]]

[packages/hub/src/lib/list-models.ts:51](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L51)

___

### gated

• **gated**: ``false`` \| ``"auto"`` \| ``"manual"``

#### Defined in[[gated.defined-in]]

[packages/hub/src/lib/list-models.ts:48](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L48)

___

### id

• **id**: `string`

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/list-models.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L45)

___

### likes

• **likes**: `number`

#### Defined in[[likes.defined-in]]

[packages/hub/src/lib/list-models.ts:50](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L50)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/list-models.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L46)

___

### private

• **private**: `boolean`

#### Defined in[[private.defined-in]]

[packages/hub/src/lib/list-models.ts:47](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L47)

___

### task

• `Optional` **task**: ``"other"`` \| ``"text-classification"`` \| ``"token-classification"`` \| ``"table-question-answering"`` \| ``"question-answering"`` \| ``"zero-shot-classification"`` \| ``"translation"`` \| ``"summarization"`` \| ``"feature-extraction"`` \| ``"text-generation"`` \| ``"fill-mask"`` \| ``"sentence-similarity"`` \| ``"text-to-speech"`` \| ``"text-to-audio"`` \| ``"automatic-speech-recognition"`` \| ``"audio-to-audio"`` \| ``"audio-classification"`` \| ``"audio-text-to-text"`` \| ``"voice-activity-detection"`` \| ``"depth-estimation"`` \| ``"image-classification"`` \| ``"object-detection"`` \| ``"image-segmentation"`` \| ``"text-to-image"`` \| ``"image-to-text"`` \| ``"image-to-image"`` \| ``"image-to-video"`` \| ``"unconditional-image-generation"`` \| ``"video-classification"`` \| ``"reinforcement-learning"`` \| ``"robotics"`` \| ``"tabular-classification"`` \| ``"tabular-regression"`` \| ``"tabular-to-text"`` \| ``"table-to-text"`` \| ``"multiple-choice"`` \| ``"text-ranking"`` \| ``"text-retrieval"`` \| ``"time-series-forecasting"`` \| ``"text-to-video"`` \| ``"image-text-to-text"`` \| ``"visual-question-answering"`` \| ``"document-question-answering"`` \| ``"zero-shot-image-classification"`` \| ``"graph-ml"`` \| ``"mask-generation"`` \| ``"zero-shot-object-detection"`` \| ``"text-to-3d"`` \| ``"image-to-3d"`` \| ``"image-feature-extraction"`` \| ``"video-text-to-text"`` \| ``"keypoint-detection"`` \| ``"visual-document-retrieval"`` \| ``"any-to-any"`` \| ``"video-to-video"``

#### Defined in[[task.defined-in]]

[packages/hub/src/lib/list-models.ts:49](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L49)

___

### updatedAt

• **updatedAt**: `Date`

#### Defined in[[updatedat.defined-in]]

[packages/hub/src/lib/list-models.ts:52](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-models.ts#L52)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/ModelEntry.md" />

### Interface: CachedRepoInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedRepoInfo.md

# Interface: CachedRepoInfo

## Properties

### filesCount

• **filesCount**: `number`

#### Defined in[[filescount.defined-in]]

[packages/hub/src/lib/cache-management.ts:59](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L59)

___

### id

• **id**: [`RepoId`](RepoId)

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/cache-management.ts:56](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L56)

___

### lastAccessedAt

• **lastAccessedAt**: `Date`

#### Defined in[[lastaccessedat.defined-in]]

[packages/hub/src/lib/cache-management.ts:62](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L62)

___

### lastModifiedAt

• **lastModifiedAt**: `Date`

#### Defined in[[lastmodifiedat.defined-in]]

[packages/hub/src/lib/cache-management.ts:63](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L63)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/cache-management.ts:57](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L57)

___

### revisions

• **revisions**: [`CachedRevisionInfo`](CachedRevisionInfo)[]

#### Defined in[[revisions.defined-in]]

[packages/hub/src/lib/cache-management.ts:60](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L60)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/cache-management.ts:58](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L58)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CachedRepoInfo.md" />

### Interface: RepoId
https://huggingface.co/docs/huggingface.js/hub/interfaces/RepoId.md

# Interface: RepoId

## Properties

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/types/public.ts:6](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L6)

___

### type

• **type**: [`RepoType`](../modules#repotype)

#### Defined in[[type.defined-in]]

[packages/hub/src/types/public.ts:7](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L7)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/RepoId.md" />

### Interface: SpaceResourceRequirement
https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceResourceRequirement.md

# Interface: SpaceResourceRequirement

## Properties

### cpu

• `Optional` **cpu**: `string`

#### Defined in[[cpu.defined-in]]

[packages/hub/src/types/public.ts:94](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L94)

___

### ephemeral

• `Optional` **ephemeral**: `string`

#### Defined in[[ephemeral.defined-in]]

[packages/hub/src/types/public.ts:98](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L98)

___

### gpu

• `Optional` **gpu**: `string`

#### Defined in[[gpu.defined-in]]

[packages/hub/src/types/public.ts:96](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L96)

___

### gpuModel

• `Optional` **gpuModel**: `string`

#### Defined in[[gpumodel.defined-in]]

[packages/hub/src/types/public.ts:97](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L97)

___

### memory

• `Optional` **memory**: `string`

#### Defined in[[memory.defined-in]]

[packages/hub/src/types/public.ts:95](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L95)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SpaceResourceRequirement.md" />

### Interface: CachedRevisionInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedRevisionInfo.md

# Interface: CachedRevisionInfo

## Properties

### commitOid

• **commitOid**: `string`

#### Defined in[[commitoid.defined-in]]

[packages/hub/src/lib/cache-management.ts:46](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L46)

___

### files

• **files**: [`CachedFileInfo`](CachedFileInfo)[]

#### Defined in[[files.defined-in]]

[packages/hub/src/lib/cache-management.ts:49](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L49)

___

### lastModifiedAt

• **lastModifiedAt**: `Date`

#### Defined in[[lastmodifiedat.defined-in]]

[packages/hub/src/lib/cache-management.ts:52](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L52)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/cache-management.ts:47](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L47)

___

### refs

• **refs**: `string`[]

#### Defined in[[refs.defined-in]]

[packages/hub/src/lib/cache-management.ts:50](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L50)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/cache-management.ts:48](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L48)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CachedRevisionInfo.md" />

### Interface: CachedFileInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/CachedFileInfo.md

# Interface: CachedFileInfo

## Properties

### blob

• **blob**: `Object`

Underlying file - which `path` is symlinked to

#### Type declaration[[blob.type-declaration]]

| Name | Type |
| :------ | :------ |
| `lastAccessedAt` | `Date` |
| `lastModifiedAt` | `Date` |
| `path` | `string` |
| `size` | `number` |

#### Defined in[[blob.defined-in]]

[packages/hub/src/lib/cache-management.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L37)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/cache-management.ts:33](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L33)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CachedFileInfo.md" />

### Interface: LfsPathInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/LfsPathInfo.md

# Interface: LfsPathInfo

## Properties

### oid

• **oid**: `string`

#### Defined in[[oid.defined-in]]

[packages/hub/src/lib/paths-info.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L8)

___

### pointerSize

• **pointerSize**: `number`

#### Defined in[[pointersize.defined-in]]

[packages/hub/src/lib/paths-info.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L10)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/paths-info.ts:9](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L9)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/LfsPathInfo.md" />

### Interface: SafetensorsIndexJson
https://huggingface.co/docs/huggingface.js/hub/interfaces/SafetensorsIndexJson.md

# Interface: SafetensorsIndexJson

## Properties

### dtype

• `Optional` **dtype**: `string`

#### Defined in[[dtype.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:78](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L78)

___

### metadata

• `Optional` **metadata**: \{ `total_parameters?`: `string` \| `number`  } & `Record`\<`string`, `string`\>

#### Defined in[[metadata.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:80](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L80)

___

### weight\_map

• **weight\_map**: `Record`\<`string`, `string`\>

#### Defined in[[weightmap.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:82](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L82)
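
The `weight_map` above maps each tensor name to the shard file that stores it, so recovering the list of shard files to fetch is a matter of deduplicating its values. A minimal TypeScript sketch, with a locally redeclared interface and illustrative tensor names (not taken from a real checkpoint):

```typescript
// Local redeclaration of the SafetensorsIndexJson shape, for this sketch only.
interface SafetensorsIndexJson {
	dtype?: string;
	metadata?: { total_parameters?: string | number } & Record<string, string>;
	weight_map: Record<string, string>;
}

// Illustrative index fragment: three tensors spread over two shards.
const index: SafetensorsIndexJson = {
	weight_map: {
		"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
		"model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
		"lm_head.weight": "model-00002-of-00002.safetensors",
	},
};

// Deduplicate shard filenames, preserving first-seen order.
const shardFiles = [...new Set(Object.values(index.weight_map))];
console.log(shardFiles);
// ["model-00001-of-00002.safetensors", "model-00002-of-00002.safetensors"]
```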


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SafetensorsIndexJson.md" />

### Interface: SafetensorsShardFileInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/SafetensorsShardFileInfo.md

# Interface: SafetensorsShardFileInfo

## Properties

### basePrefix

• **basePrefix**: `string`

#### Defined in[[baseprefix.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L20)

___

### prefix

• **prefix**: `string`

#### Defined in[[prefix.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L19)

___

### shard

• **shard**: `string`

#### Defined in[[shard.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:21](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L21)

___

### total

• **total**: `string`

#### Defined in[[total.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L22)
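
These fields correspond to the parts of a sharded safetensors filename such as `model-00001-of-00002.safetensors`. A sketch of a parser producing this shape; the regex below is an assumption for illustration, not the library's exact filename grammar:

```typescript
interface SafetensorsShardFileInfo {
	prefix: string;
	basePrefix: string;
	shard: string;
	total: string;
}

// Hypothetical parser: splits "<prefix><NNNNN>-of-<MMMMM>.safetensors".
function parseShardFilename(filename: string): SafetensorsShardFileInfo | null {
	const m = /^(.*?)(\d{5})-of-(\d{5})\.safetensors$/.exec(filename);
	if (!m) return null;
	return {
		prefix: m[1],                       // e.g. "model-"
		basePrefix: m[1].replace(/-$/, ""), // e.g. "model"
		shard: m[2],                        // e.g. "00001"
		total: m[3],                        // e.g. "00002"
	};
}

const info = parseShardFilename("model-00001-of-00002.safetensors");
console.log(info);
```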


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SafetensorsShardFileInfo.md" />

### Interface: DatasetEntry
https://huggingface.co/docs/huggingface.js/hub/interfaces/DatasetEntry.md

# Interface: DatasetEntry

## Properties

### downloads

• **downloads**: `number`

#### Defined in[[downloads.defined-in]]

[packages/hub/src/lib/list-datasets.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L41)

___

### gated

• **gated**: ``false`` \| ``"auto"`` \| ``"manual"``

#### Defined in[[gated.defined-in]]

[packages/hub/src/lib/list-datasets.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L42)

___

### id

• **id**: `string`

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/list-datasets.ts:38](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L38)

___

### likes

• **likes**: `number`

#### Defined in[[likes.defined-in]]

[packages/hub/src/lib/list-datasets.ts:43](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L43)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/list-datasets.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L39)

___

### private

• **private**: `boolean`

#### Defined in[[private.defined-in]]

[packages/hub/src/lib/list-datasets.ts:40](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L40)

___

### updatedAt

• **updatedAt**: `Date`

#### Defined in[[updatedat.defined-in]]

[packages/hub/src/lib/list-datasets.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-datasets.ts#L44)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/DatasetEntry.md" />

### Interface: WhoAmIApp
https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIApp.md

# Interface: WhoAmIApp

## Properties

### id

• **id**: `string`

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/who-am-i.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L41)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/who-am-i.ts:43](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L43)

___

### scope

• `Optional` **scope**: `Object`

#### Type declaration[[scope.type-declaration]]

| Name | Type |
| :------ | :------ |
| `entities` | `string`[] |
| `role` | ``"admin"`` \| ``"write"`` \| ``"contributor"`` \| ``"read"`` |

#### Defined in[[scope.defined-in]]

[packages/hub/src/lib/who-am-i.ts:44](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L44)

___

### type

• **type**: ``"app"``

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/who-am-i.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L42)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/WhoAmIApp.md" />

### Interface: CommitDeletedEntry
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitDeletedEntry.md

# Interface: CommitDeletedEntry

## Properties

### operation

• **operation**: ``"delete"``

#### Defined in[[operation.defined-in]]

[packages/hub/src/lib/commit.ts:34](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L34)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/commit.ts:35](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L35)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitDeletedEntry.md" />

### Interface: CommitInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitInfo.md

# Interface: CommitInfo

## Properties

### date

• **date**: `Date`

#### Defined in[[date.defined-in]]

[packages/hub/src/lib/paths-info.ts:16](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L16)

___

### id

• **id**: `string`

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/paths-info.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L14)

___

### title

• **title**: `string`

#### Defined in[[title.defined-in]]

[packages/hub/src/lib/paths-info.ts:15](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/paths-info.ts#L15)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitInfo.md" />

### Interface: CommitEditFile
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitEditFile.md

# Interface: CommitEditFile

Optimized for the case where only the beginning or the end of the file is replaced

todo: handle other cases

## Properties

### edits

• **edits**: \{ `content`: `Blob` ; `end`: `number` ; `start`: `number`  }[]

#### Defined in[[edits.defined-in]]

[packages/hub/src/lib/commit.ts:57](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L57)

___

### operation

• **operation**: ``"edit"``

#### Defined in[[operation.defined-in]]

[packages/hub/src/lib/commit.ts:53](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L53)

___

### originalContent

• **originalContent**: `Blob`

Will later be a `ContentSource`; for now it is simpler to just handle blobs

#### Defined in[[originalcontent.defined-in]]

[packages/hub/src/lib/commit.ts:56](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L56)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/commit.ts:54](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L54)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitEditFile.md" />

### Interface: CommitFile
https://huggingface.co/docs/huggingface.js/hub/interfaces/CommitFile.md

# Interface: CommitFile

## Properties

### content

• **content**: [`ContentSource`](../modules#contentsource)

#### Defined in[[content.defined-in]]

[packages/hub/src/lib/commit.ts:43](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L43)

___

### operation

• **operation**: ``"addOrUpdate"``

#### Defined in[[operation.defined-in]]

[packages/hub/src/lib/commit.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L41)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/commit.ts:42](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/commit.ts#L42)
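
`CommitFile` is one of the operation shapes accepted by the hub package's commit API, alongside `CommitDeletedEntry`. Building operations is plain object construction; the sketch below uses a simplified `ContentSource` (just a `Blob`) and stops short of the actual network call:

```typescript
type ContentSource = Blob; // simplified for this sketch

interface CommitFile {
	operation: "addOrUpdate";
	path: string;
	content: ContentSource;
}

interface CommitDeletedEntry {
	operation: "delete";
	path: string;
}

// A hypothetical pair of operations: update one file, delete another.
const operations: (CommitFile | CommitDeletedEntry)[] = [
	{
		operation: "addOrUpdate",
		path: "README.md",
		content: new Blob(["# Hello"]),
	},
	{ operation: "delete", path: "old-weights.bin" },
];

console.log(operations.map((op) => op.operation)); // ["addOrUpdate", "delete"]
```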


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/CommitFile.md" />

### Interface: OAuthResult
https://huggingface.co/docs/huggingface.js/hub/interfaces/OAuthResult.md

# Interface: OAuthResult

## Properties

### accessToken

• **accessToken**: `string`

#### Defined in[[accesstoken.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:104](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L104)

___

### accessTokenExpiresAt

• **accessTokenExpiresAt**: `Date`

#### Defined in[[accesstokenexpiresat.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:105](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L105)

___

### scope

• **scope**: `string`

Granted scope

#### Defined in[[scope.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:114](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L114)

___

### state

• `Optional` **state**: `string`

State passed in the original request to the OAuth provider.

#### Defined in[[state.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:110](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L110)

___

### userInfo

• **userInfo**: [`UserInfo`](UserInfo)

#### Defined in[[userinfo.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:106](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L106)
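
Since `accessTokenExpiresAt` is a `Date`, deciding when to re-run the login flow is a simple comparison. A small sketch using only the two token fields; the `OAuthResultLike` name and the omission of `userInfo`, `scope`, and `state` are simplifications for this example:

```typescript
interface OAuthResultLike {
	accessToken: string;
	accessTokenExpiresAt: Date;
}

// Returns true once the access token's expiry has passed.
function isExpired(oauth: OAuthResultLike, now: Date = new Date()): boolean {
	return oauth.accessTokenExpiresAt.getTime() <= now.getTime();
}

const result: OAuthResultLike = {
	accessToken: "hf_oauth_...", // illustrative placeholder
	accessTokenExpiresAt: new Date(Date.now() + 3600_000), // one hour from now
};

console.log(isExpired(result)); // false
```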


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/OAuthResult.md" />

### Interface: SpaceResourceConfig
https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceResourceConfig.md

# Interface: SpaceResourceConfig

## Properties

### is\_custom

• `Optional` **is\_custom**: `boolean`

#### Defined in[[iscustom.defined-in]]

[packages/hub/src/types/public.ts:106](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L106)

___

### limits

• **limits**: [`SpaceResourceRequirement`](SpaceResourceRequirement)

#### Defined in[[limits.defined-in]]

[packages/hub/src/types/public.ts:103](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L103)

___

### replicas

• `Optional` **replicas**: `number`

#### Defined in[[replicas.defined-in]]

[packages/hub/src/types/public.ts:104](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L104)

___

### requests

• **requests**: [`SpaceResourceRequirement`](SpaceResourceRequirement)

#### Defined in[[requests.defined-in]]

[packages/hub/src/types/public.ts:102](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L102)

___

### throttled

• `Optional` **throttled**: `boolean`

#### Defined in[[throttled.defined-in]]

[packages/hub/src/types/public.ts:105](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/types/public.ts#L105)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SpaceResourceConfig.md" />

### Interface: SpaceEntry
https://huggingface.co/docs/huggingface.js/hub/interfaces/SpaceEntry.md

# Interface: SpaceEntry

## Properties

### id

• **id**: `string`

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/list-spaces.ts:35](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L35)

___

### likes

• **likes**: `number`

#### Defined in[[likes.defined-in]]

[packages/hub/src/lib/list-spaces.ts:38](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L38)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/list-spaces.ts:36](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L36)

___

### private

• **private**: `boolean`

#### Defined in[[private.defined-in]]

[packages/hub/src/lib/list-spaces.ts:39](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L39)

___

### sdk

• `Optional` **sdk**: [`SpaceSdk`](../modules#spacesdk)

#### Defined in[[sdk.defined-in]]

[packages/hub/src/lib/list-spaces.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L37)

___

### updatedAt

• **updatedAt**: `Date`

#### Defined in[[updatedat.defined-in]]

[packages/hub/src/lib/list-spaces.ts:40](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-spaces.ts#L40)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/SpaceEntry.md" />

### Interface: FileDownloadInfoOutput
https://huggingface.co/docs/huggingface.js/hub/interfaces/FileDownloadInfoOutput.md

# Interface: FileDownloadInfoOutput

## Properties

### etag

• **etag**: `string`

#### Defined in[[etag.defined-in]]

[packages/hub/src/lib/file-download-info.ts:19](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L19)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/file-download-info.ts:18](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L18)

___

### url

• **url**: `string`

#### Defined in[[url.defined-in]]

[packages/hub/src/lib/file-download-info.ts:22](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L22)

___

### xet

• `Optional` **xet**: [`XetFileInfo`](XetFileInfo)

#### Defined in[[xet.defined-in]]

[packages/hub/src/lib/file-download-info.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/file-download-info.ts#L20)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/FileDownloadInfoOutput.md" />

### Interface: QuantizationConfig
https://huggingface.co/docs/huggingface.js/hub/interfaces/QuantizationConfig.md

# Interface: QuantizationConfig

## Properties

### bits

• `Optional` **bits**: `number`

#### Defined in[[bits.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:353](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L353)

___

### load\_in\_4bit

• `Optional` **load\_in\_4bit**: `boolean`

#### Defined in[[loadin4bit.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:354](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L354)

___

### load\_in\_8bit

• `Optional` **load\_in\_8bit**: `boolean`

#### Defined in[[loadin8bit.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:355](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L355)

___

### modules\_to\_not\_convert

• `Optional` **modules\_to\_not\_convert**: `string`[]

#### Defined in[[modulestonotconvert.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:352](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L352)

___

### quant\_method

• `Optional` **quant\_method**: `string`

#### Defined in[[quantmethod.defined-in]]

[packages/hub/src/lib/parse-safetensors-metadata.ts:351](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/parse-safetensors-metadata.ts#L351)
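
This interface mirrors the `quantization_config` block found in a model's `config.json`. An illustrative TypeScript object literal matching the shape; the specific values below are hypothetical, not taken from a real checkpoint:

```typescript
interface QuantizationConfig {
	quant_method?: string;
	modules_to_not_convert?: string[];
	bits?: number;
	load_in_4bit?: boolean;
	load_in_8bit?: boolean;
}

// Hypothetical 4-bit quantization config, keeping the output head unquantized.
const quantizationConfig: QuantizationConfig = {
	quant_method: "awq",
	bits: 4,
	modules_to_not_convert: ["lm_head"],
};

console.log(quantizationConfig.bits); // 4
```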


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/QuantizationConfig.md" />

### Interface: WhoAmIOrg
https://huggingface.co/docs/huggingface.js/hub/interfaces/WhoAmIOrg.md

# Interface: WhoAmIOrg

## Properties

### avatarUrl

• **avatarUrl**: `string`

#### Defined in[[avatarurl.defined-in]]

[packages/hub/src/lib/who-am-i.ts:33](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L33)

___

### canPay

• **canPay**: `boolean`

#### Defined in[[canpay.defined-in]]

[packages/hub/src/lib/who-am-i.ts:32](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L32)

___

### email

• **email**: ``null`` \| `string`

#### Defined in[[email.defined-in]]

[packages/hub/src/lib/who-am-i.ts:31](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L31)

___

### fullname

• **fullname**: `string`

#### Defined in[[fullname.defined-in]]

[packages/hub/src/lib/who-am-i.ts:30](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L30)

___

### id

• **id**: `string`

Unique ID persistent across renames

#### Defined in[[id.defined-in]]

[packages/hub/src/lib/who-am-i.ts:27](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L27)

___

### name

• **name**: `string`

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/who-am-i.ts:29](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L29)

___

### periodEnd

• **periodEnd**: ``null`` \| `number`

Unix timestamp in seconds

#### Defined in[[periodend.defined-in]]

[packages/hub/src/lib/who-am-i.ts:37](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L37)

___

### type

• **type**: ``"org"``

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/who-am-i.ts:28](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/who-am-i.ts#L28)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/WhoAmIOrg.md" />

### Interface: HFCacheInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/HFCacheInfo.md

# Interface: HFCacheInfo

## Properties

### repos

• **repos**: [`CachedRepoInfo`](CachedRepoInfo)[]

#### Defined in[[repos.defined-in]]

[packages/hub/src/lib/cache-management.ts:68](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L68)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/cache-management.ts:67](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L67)

___

### warnings

• **warnings**: `Error`[]

#### Defined in[[warnings.defined-in]]

[packages/hub/src/lib/cache-management.ts:69](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/cache-management.ts#L69)
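
`HFCacheInfo` summarizes a scan of the local Hugging Face cache, with the aggregate `size` in bytes. A small sketch that renders it human-readable; the `repos` field is omitted from the local declaration to keep the example self-contained:

```typescript
interface HFCacheInfoLike {
	size: number;
	warnings: Error[];
}

// Convert a byte count to a short human-readable string.
function formatSize(bytes: number): string {
	const units = ["B", "KB", "MB", "GB", "TB"];
	let value = bytes;
	let unit = 0;
	while (value >= 1024 && unit < units.length - 1) {
		value /= 1024;
		unit++;
	}
	return `${value.toFixed(1)} ${units[unit]}`;
}

// Illustrative scan result: a 5 GiB cache with no warnings.
const cache: HFCacheInfoLike = { size: 5 * 1024 * 1024 * 1024, warnings: [] };
console.log(formatSize(cache.size)); // "5.0 GB"
```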


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/HFCacheInfo.md" />

### Interface: ListFileEntry
https://huggingface.co/docs/huggingface.js/hub/interfaces/ListFileEntry.md

# Interface: ListFileEntry

## Properties

### lastCommit

• `Optional` **lastCommit**: `Object`

Only fetched if `expand` is set to `true` in the `listFiles` call.

#### Type declaration[[lastcommit.type-declaration]]

| Name | Type |
| :------ | :------ |
| `date` | `string` |
| `id` | `string` |
| `title` | `string` |

#### Defined in[[lastcommit.defined-in]]

[packages/hub/src/lib/list-files.ts:27](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L27)

___

### lfs

• `Optional` **lfs**: `Object`

#### Type declaration[[lfs.type-declaration]]

| Name | Type | Description |
| :------ | :------ | :------ |
| `oid` | `string` | - |
| `pointerSize` | `number` | Size of the raw pointer file, 100~200 bytes |
| `size` | `number` | - |

#### Defined in[[lfs.defined-in]]

[packages/hub/src/lib/list-files.ts:14](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L14)

___

### oid

• **oid**: `string`

#### Defined in[[oid.defined-in]]

[packages/hub/src/lib/list-files.ts:13](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L13)

___

### path

• **path**: `string`

#### Defined in[[path.defined-in]]

[packages/hub/src/lib/list-files.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L12)

___

### securityFileStatus

• `Optional` **securityFileStatus**: `unknown`

Only fetched if `expand` is set to `true` in the `listFiles` call.

#### Defined in[[securityfilestatus.defined-in]]

[packages/hub/src/lib/list-files.ts:35](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L35)

___

### size

• **size**: `number`

#### Defined in[[size.defined-in]]

[packages/hub/src/lib/list-files.ts:11](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L11)

___

### type

• **type**: ``"unknown"`` \| ``"file"`` \| ``"directory"``

#### Defined in[[type.defined-in]]

[packages/hub/src/lib/list-files.ts:10](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L10)

___

### xetHash

• `Optional` **xetHash**: `string`

Xet-backed hash; Xet is a newer protocol replacing LFS for big files.

#### Defined in[[xethash.defined-in]]

[packages/hub/src/lib/list-files.ts:23](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/list-files.ts#L23)
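
Entries returned by `listFiles` mix regular files, directories, and LFS-backed files, so a common pass is partitioning them and summing payload sizes via `lfs.size` when present. A sketch over illustrative sample entries; only the fields used here are declared locally:

```typescript
interface ListFileEntryLike {
	type: "unknown" | "file" | "directory";
	path: string;
	size: number;
	lfs?: { oid: string; size: number; pointerSize: number };
}

// Illustrative listing: a small JSON file, an LFS-backed weights file,
// and a subdirectory.
const entries: ListFileEntryLike[] = [
	{ type: "file", path: "config.json", size: 652 },
	{
		type: "file",
		path: "model.safetensors",
		size: 548105171,
		lfs: { oid: "abc123", size: 548105171, pointerSize: 134 },
	},
	{ type: "directory", path: "onnx", size: 0 },
];

// LFS-backed files carry their payload size in `lfs.size`.
const lfsFiles = entries.filter((e) => e.type === "file" && e.lfs);
const totalPayload = entries.reduce(
	(sum, e) => (e.type === "file" ? sum + (e.lfs?.size ?? e.size) : sum),
	0
);

console.log(lfsFiles.map((e) => e.path)); // ["model.safetensors"]
console.log(totalPayload); // 548105823
```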


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/ListFileEntry.md" />

### Interface: UserInfo
https://huggingface.co/docs/huggingface.js/hub/interfaces/UserInfo.md

# Interface: UserInfo

## Properties

### canPay

• `Optional` **canPay**: `boolean`

Hugging Face field. Whether the user has a payment method set up. Needs "read-billing" scope.

#### Defined in[[canpay.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:45](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L45)

___

### email

• `Optional` **email**: `string`

OpenID Connect field, available if scope "email" was granted.

#### Defined in[[email.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:24](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L24)

___

### email\_verified

• `Optional` **email\_verified**: `boolean`

OpenID Connect field, available if scope "email" was granted.

#### Defined in[[emailverified.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:20](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L20)

___

### isPro

• **isPro**: `boolean`

Hugging Face field. Whether the user is a pro user.

#### Defined in[[ispro.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:41](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L41)

___

### name

• **name**: `string`

OpenID Connect field. The user's full name.

#### Defined in[[name.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:12](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L12)

___

### orgs

• `Optional` **orgs**: \{ `canPay?`: `boolean` ; `isEnterprise`: `boolean` ; `missingMFA?`: `boolean` ; `name`: `string` ; `pendingSSO?`: `boolean` ; `picture`: `string` ; `preferred_username`: `string` ; `roleInOrg?`: `string` ; `securityRestrictions?`: (``"mfa"`` \| ``"sso"`` \| ``"ip"`` \| ``"token-policy"``)[] ; `sub`: `string`  }[]

Hugging Face field. The user's orgs

#### Defined in[[orgs.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:49](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L49)

___

### picture

• **picture**: `string`

OpenID Connect field. The user's profile picture URL.

#### Defined in[[picture.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:28](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L28)

___

### preferred\_username

• **preferred\_username**: `string`

OpenID Connect field. The user's username.

#### Defined in[[preferredusername.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:16](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L16)

___

### profile

• **profile**: `string`

OpenID Connect field. The user's profile URL.

#### Defined in[[profile.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:32](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L32)

___

### sub

• **sub**: `string`

OpenID Connect field. Unique identifier for the user, even in case of rename.

#### Defined in[[sub.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:8](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L8)

___

### website

• `Optional` **website**: `string`

OpenID Connect field. The user's website URL.

#### Defined in[[website.defined-in]]

[packages/hub/src/lib/oauth-handle-redirect.ts:36](https://github.com/huggingface/huggingface.js/blob/main/packages/hub/src/lib/oauth-handle-redirect.ts#L36)


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/hub/interfaces/UserInfo.md" />

### @huggingface/tiny-agents
https://huggingface.co/docs/huggingface.js/tiny-agents/README.md

# @huggingface/tiny-agents

![meme](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/legos.png)

A squad of lightweight composable AI applications built on Hugging Face's Inference Client and MCP stack.

## Installation

```bash
npm install @huggingface/tiny-agents
# or
pnpm add @huggingface/tiny-agents
```

## CLI Usage

```bash
npx @huggingface/tiny-agents [command] "agent/id"
```

```
Usage:
  tiny-agents [flags]
  tiny-agents run   "agent/id"
  tiny-agents serve "agent/id"

Available Commands:
  run         Run the Agent in command-line
  serve       Run the Agent as an OpenAI-compatible HTTP server
```

You can load agents directly from the Hugging Face Hub [tiny-agents](https://huggingface.co/datasets/tiny-agents/tiny-agents) Dataset, or specify a path to your own local agent configuration.

## Define your own agent

The simplest way to create your own agent is to create a folder containing an `agent.json` file:

```bash
mkdir my-agent
touch my-agent/agent.json
```

```json
{
	"model": "Qwen/Qwen2.5-72B-Instruct",
	"provider": "nebius",
	"servers": [
		{
			"type": "stdio",
			"command": "npx",
			"args": ["@playwright/mcp@latest"]
		}
	]
}
```

Or using a local or remote endpoint URL:

```json
{
	"model": "Qwen/Qwen3-32B",
	"endpointUrl": "http://localhost:1234/v1",
	"servers": [
		{
			"type": "stdio",
			"command": "npx",
			"args": ["@playwright/mcp@latest"]
		}
	]
}
```

Here, `servers` is a list of MCP servers (we support Stdio, SSE, and HTTP servers).

Optionally, you can add an [`AGENTS.md`](https://agents.md/) (or `PROMPT.md`) file to override the default Agent prompt.

Then just point tiny-agents to your local folder:

```bash
npx @huggingface/tiny-agents run ./my-agent
```

Voilà! 🔥


## Tiny Agents collection

Browse our curated collection of Tiny Agents at https://huggingface.co/datasets/tiny-agents/tiny-agents. Each agent is stored in its own subdirectory, following the structure outlined above. Running an agent from the Hub is as simple as using its `agent_id`. For example, to run the [`julien-c/flux-schnell-generator`](https://huggingface.co/datasets/tiny-agents/tiny-agents/tree/main/julien-c/flux-schnell-generator) agent:

```bash
npx @huggingface/tiny-agents run "julien-c/flux-schnell-generator"
```

> [!NOTE]
> Want to share your own agent with the community? Submit a PR to the [Tiny Agents](https://huggingface.co/datasets/tiny-agents/tiny-agents/discussions) repository on the Hub. Your submission must include an `agent.json` file, and you can optionally add a `PROMPT.md` or [`AGENTS.md`](https://agents.md/) file. To help others understand your agent's capabilities, consider including an `EXAMPLES.md` file with sample prompts and use cases.

## Advanced: Programmatic Usage

```typescript
import { Agent } from '@huggingface/tiny-agents';

const HF_TOKEN = "hf_...";

// Create an Agent
const agent = new Agent({
  provider: "auto",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: HF_TOKEN,
  servers: [
    {
      // Playwright MCP
      command: "npx",
      args: ["@playwright/mcp@latest"],
    },
  ],
});

await agent.loadTools();

// Use the Agent
for await (const chunk of agent.run("What are the top 5 trending models on Hugging Face?")) {
    if ("choices" in chunk) {
        const delta = chunk.choices[0]?.delta;
        if (delta.content) {
            console.log(delta.content);
        }
    }
}
```


## License

MIT


<EditOnGithub source="https://github.com/huggingface/huggingface.js/blob/main/docs/tiny-agents/README.md" />
