Hugging Face provides a wide range of state-of-the-art language models tailored to diverse NLP tasks, including text generation, summarization, translation, and question answering. These models are available through the Hugging Face Transformers library and are widely adopted for their ease of use, flexibility, and comprehensive documentation. Explore HuggingFace's language models here.
Authentication
Set your HF_TOKEN environment variable. You can get one from HuggingFace here.
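For example, in a POSIX shell (the token value below is a placeholder, not a real key):

```shell
# Export your Hugging Face access token so the library can pick it up.
# Replace the placeholder with your actual token from huggingface.co/settings/tokens.
export HF_TOKEN="hf_xxxxxxxxxxxxxxxx"
```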
Example
Use HuggingFace with your Agent:
View more examples here.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| id | str | "microsoft/DialoGPT-medium" | The id of the Hugging Face model to use |
| name | str | "HuggingFace" | The name of the model |
| provider | str | "HuggingFace" | The provider of the model |
| api_key | Optional[str] | None | The API key for Hugging Face (defaults to the HF_TOKEN env var) |
| base_url | str | "https://api-inference.huggingface.co/models" | The base URL for the Hugging Face Inference API |
| wait_for_model | bool | True | Whether to wait for the model to load if it's cold |
| use_cache | bool | True | Whether to use caching for faster inference |
| max_tokens | Optional[int] | None | Maximum number of tokens to generate |
| temperature | Optional[float] | None | Controls randomness in the model's output |
| top_p | Optional[float] | None | Controls diversity via nucleus sampling |
| repetition_penalty | Optional[float] | None | Penalty for repeating tokens (higher values reduce repetition) |
HuggingFace is a subclass of the Model class and has access to the same params.