# 05.1 Built-In Providers
This page documents all 28+ built-in LLM providers supported by ZeroClaw, including their configuration names, aliases, authentication methods, and special features. For information about creating custom providers or extending the provider system, see Custom Providers. For details on provider resilience features like retries and fallbacks, see Provider Resilience.
ZeroClaw providers are organized into three implementation categories:
| Category | Implementation | Count | Examples |
|---|---|---|---|
| Primary Providers | Custom implementation per API | 6 | `openai`, `anthropic`, `openrouter`, `gemini`, `ollama`, `copilot` |
| OpenAI-Compatible | Single `OpenAiCompatibleProvider` | 20+ | `groq`, `mistral`, `deepseek`, `venice`, `qwen` |
| Custom Endpoints | User-defined URLs | ∞ | `custom:https://...`, `anthropic-custom:https://...` |
Sources: src/providers/mod.rs:593-778
### OpenRouter

Aggregates 200+ models from multiple providers through a unified API.

```toml
[agent.provider]
name = "openrouter"
api_key = "sk-or-v1-..."
model = "anthropic/claude-sonnet-4"
```

Authentication:
- Environment: `OPENROUTER_API_KEY` → `ZEROCLAW_API_KEY` → `API_KEY`
- Config: `agent.provider.api_key`

Features:
- Native tool calling support
- Model routing (prefix models with a provider namespace)
- Automatic connection pool warmup via the `/api/v1/auth/key` endpoint
Sources: src/providers/openrouter.rs:1-486, src/providers/mod.rs:603
### Anthropic

Direct integration with Claude models, including setup tokens for OAuth.

```toml
[agent.provider]
name = "anthropic"
api_key = "sk-ant-api03-..." # or sk-ant-oat01-... (setup token)
model = "claude-3-5-sonnet-20241022"
```

Authentication:
- Environment: `ANTHROPIC_OAUTH_TOKEN` → `ANTHROPIC_API_KEY` → generic fallbacks
- Setup tokens (`sk-ant-oat01-*`) use `Authorization: Bearer` plus `anthropic-beta: oauth-2025-04-20`
- Regular API keys (`sk-ant-api03-*`) use the `x-api-key` header
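The prefix-based header selection above can be sketched as follows. This is an illustrative std-only sketch, not the actual source; the function name is an assumption, while the header names and key prefixes come from the documentation above.

```rust
// Sketch: choose Anthropic auth headers by key prefix, as described above.
fn anthropic_auth_headers(key: &str) -> Vec<(&'static str, String)> {
    if key.starts_with("sk-ant-oat01-") {
        // Setup tokens authenticate as OAuth bearer tokens with a beta flag.
        vec![
            ("Authorization", format!("Bearer {key}")),
            ("anthropic-beta", "oauth-2025-04-20".to_string()),
        ]
    } else {
        // Regular API keys use the x-api-key header.
        vec![("x-api-key", key.to_string())]
    }
}

fn main() {
    assert_eq!(anthropic_auth_headers("sk-ant-oat01-xyz")[0].0, "Authorization");
    assert_eq!(anthropic_auth_headers("sk-ant-api03-xyz")[0].0, "x-api-key");
    println!("ok");
}
```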
Features:
- Native tool calling with `input_schema`
- Automatic prompt caching (system prompts >3KB, conversations >4 messages, tools)
- Tool results sent as `user`-role messages with `tool_result` content blocks
Sources: src/providers/anthropic.rs:1-689, src/providers/mod.rs:604
### OpenAI

Direct integration with OpenAI models, including custom base URL support.

```toml
[agent.provider]
name = "openai"
api_key = "sk-proj-..."
api_url = "https://api.openai.com/v1" # optional
model = "gpt-4o"
```

Authentication:
- Environment: `OPENAI_API_KEY` → generic fallbacks
- Config: `agent.provider.api_key`

Features:
- Native tool calling with function definitions
- Reasoning content fallback (`reasoning_content` field for thinking models)
- Connection warmup via the `/v1/models` endpoint
Sources: src/providers/openai.rs:1-541, src/providers/mod.rs:605
### Google Gemini

Google Gemini with multi-method authentication.

```toml
[agent.provider]
name = "gemini" # aliases: google, google-gemini
api_key = "AIzaSy..."
model = "gemini-2.0-flash-exp"
```

Authentication priority:
1. Explicit `api_key` in config
2. `GEMINI_API_KEY` environment variable
3. `GOOGLE_API_KEY` environment variable
4. Gemini CLI OAuth tokens (`~/.gemini/oauth_creds.json`)

API endpoint selection:
- API key users: `https://generativelanguage.googleapis.com/v1beta` (public API)
- OAuth users: `https://cloudcode-pa.googleapis.com/v1internal` (internal Code Assist API)
Sources: src/providers/gemini.rs:1-689, src/providers/mod.rs:608-610
### Ollama

Local and remote Ollama instances with cloud routing support.

```toml
[agent.provider]
name = "ollama"
api_url = "http://localhost:11434" # default
api_key = "" # optional for remote instances
model = "qwen2.5-coder:32b"
```

Cloud routing: append the `:cloud` suffix to a model name to target hosted Ollama instances. This requires a non-local `api_url` and an `api_key`:

```toml
model = "qwen2.5-coder:32b:cloud"
```
Quirk handling: Ollama models sometimes wrap tool calls in nested structures:

```json
{"name": "tool_call", "arguments": {"name": "shell", ...}}
{"name": "tool.shell", "arguments": {...}}
```

ZeroClaw unwraps these automatically via `extract_tool_name_and_args()`.
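The unwrapping rules can be illustrated with a simplified sketch. The real `extract_tool_name_and_args()` operates on JSON; this std-only version models only the name normalization (the function signature is an assumption):

```rust
// Sketch: normalize quirky Ollama tool-call names as described above.
// `nested_name` stands in for a "name" field found inside `arguments`.
fn normalize_tool_name(name: &str, nested_name: Option<&str>) -> String {
    // Quirk 1: a literal "tool_call" wrapper; the real name is nested in arguments.
    if name == "tool_call" {
        if let Some(inner) = nested_name {
            return normalize_tool_name(inner, None);
        }
    }
    // Quirk 2: a "tool." prefix fused onto the name itself.
    name.strip_prefix("tool.").unwrap_or(name).to_string()
}

fn main() {
    assert_eq!(normalize_tool_name("tool.shell", None), "shell");
    assert_eq!(normalize_tool_name("tool_call", Some("shell")), "shell");
    assert_eq!(normalize_tool_name("shell", None), "shell");
    println!("ok");
}
```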
Sources: src/providers/ollama.rs:1-561, src/providers/mod.rs:607
### GitHub Copilot

Experimental OAuth-based provider using Copilot tokens.

```toml
[agent.provider]
name = "copilot" # aliases: github-copilot
api_key = "..." # optional override
model = "gpt-4o"
```

Sources: src/providers/mod.rs:707-709, src/providers/copilot.rs
### OpenAI-Compatible Providers

All providers below use `OpenAiCompatibleProvider` with different base URLs and auth styles.
```mermaid
graph TB
    Factory["create_provider()"]
    Compatible["OpenAiCompatibleProvider"]
    Factory -->|"groq"| Compatible
    Factory -->|"mistral"| Compatible
    Factory -->|"deepseek"| Compatible
    Factory -->|"venice"| Compatible
    Factory -->|"qwen"| Compatible
    Factory -->|"moonshot"| Compatible
    Factory -->|"glm"| Compatible
    Factory -->|"minimax"| Compatible
    Factory -->|"bedrock"| Compatible
    Factory -->|"xai"| Compatible
    Compatible -->|"/v1/chat/completions"| API["Provider API Endpoint"]
    style Factory fill:#f9f9f9
    style Compatible fill:#e8e8e8
```
Sources: src/providers/mod.rs:612-741, src/providers/compatible.rs:1-1432
| Provider | Config Name | Base URL | Auth Header |
|---|---|---|---|
| Groq | `groq` | https://api.groq.com/openai | `Authorization: Bearer` |
| Together AI | `together`, `together-ai` | https://api.together.xyz | `Authorization: Bearer` |
| Fireworks AI | `fireworks`, `fireworks-ai` | https://api.fireworks.ai/inference/v1 | `Authorization: Bearer` |
| Perplexity | `perplexity` | https://api.perplexity.ai | `Authorization: Bearer` |

Environment variables: `GROQ_API_KEY`, `TOGETHER_API_KEY`, `FIREWORKS_API_KEY`, `PERPLEXITY_API_KEY`
Sources: src/providers/mod.rs:683-703
| Provider | Config Name | Base URL | Auth Header |
|---|---|---|---|
| Mistral AI | `mistral` | https://api.mistral.ai/v1 | `Authorization: Bearer` |
| DeepSeek | `deepseek` | https://api.deepseek.com | `Authorization: Bearer` |
| xAI (Grok) | `xai`, `grok` | https://api.x.ai | `Authorization: Bearer` |
| Cohere | `cohere` | https://api.cohere.com/compatibility | `Authorization: Bearer` |

Environment variables: `MISTRAL_API_KEY`, `DEEPSEEK_API_KEY`, `XAI_API_KEY`, `COHERE_API_KEY`
Sources: src/providers/mod.rs:686-706
### Qwen (DashScope)

Multiple regional endpoints with alias support.

| Alias | Base URL | Region |
|---|---|---|
| `qwen`, `dashscope`, `qwen-cn` | https://dashscope.aliyuncs.com/compatible-mode/v1 | China |
| `qwen-intl`, `dashscope-intl` | https://dashscope-intl.aliyuncs.com/compatible-mode/v1 | International |
| `qwen-us`, `dashscope-us` | https://dashscope-us.aliyuncs.com/compatible-mode/v1 | United States |

Environment: `DASHSCOPE_API_KEY`
Sources: src/providers/mod.rs:41-43,99-116,328-338,675-680
### GLM (Zhipu)

| Alias | Base URL | Region |
|---|---|---|
| `glm`, `zhipu`, `glm-global` | https://api.z.ai/api/paas/v4 | Global |
| `glm-cn`, `zhipu-cn`, `bigmodel` | https://open.bigmodel.cn/api/paas/v4 | China |

Environment: `GLM_API_KEY`

Features:
- Disables the `/v1/responses` fallback (uses `new_no_responses_fallback()`)
- Reasoning content fallback support
Sources: src/providers/mod.rs:37-38,72-82,308-316,652-659
### MiniMax

OAuth and API key authentication with regional endpoints.

| Alias | Base URL | Region |
|---|---|---|
| `minimax`, `minimax-intl`, `minimax-oauth` | https://api.minimax.io/v1 | International |
| `minimax-cn`, `minimaxi`, `minimax-oauth-cn` | https://api.minimaxi.com/v1 | China |

OAuth authentication: set `api_key = "minimax-oauth"` to enable automatic OAuth refresh:

```toml
[agent.provider]
name = "minimax"
api_key = "minimax-oauth" # triggers OAuth flow
```

Environment variables:
- `MINIMAX_OAUTH_TOKEN` (static access token)
- `MINIMAX_API_KEY` (regular API key)
- `MINIMAX_OAUTH_REFRESH_TOKEN` (auto-refresh)
- `MINIMAX_OAUTH_CLIENT_ID` (default: `78257093-7e40-4613-99e0-527b14b39113`)
- `MINIMAX_OAUTH_REGION` (override: `cn`, `global`, `intl`)

OAuth refresh flow:
1. Check `MINIMAX_OAUTH_TOKEN` or `MINIMAX_API_KEY`
2. If `api_key = "minimax-oauth"` and no static token is found, use `MINIMAX_OAUTH_REFRESH_TOKEN`
3. POST to `https://api.minimax.io/oauth/token` or the `api.minimaxi.com` equivalent (region-aware)
4. Parse `access_token` from the response and use it for subsequent requests
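The credential-selection part of this flow (steps 1-2) can be sketched as a pure function. The enum and function names here are illustrative, and the HTTP exchange in step 3 is deliberately left out:

```rust
// Sketch of the MiniMax credential-selection order described above.
#[derive(Debug, PartialEq)]
enum MiniMaxCredential {
    Static(String),  // use the token directly
    Refresh(String), // exchange the refresh token at /oauth/token
    Missing,         // error on first request
}

fn select_credential(
    oauth_token: Option<String>,   // MINIMAX_OAUTH_TOKEN
    api_key: Option<String>,       // MINIMAX_API_KEY
    refresh_token: Option<String>, // MINIMAX_OAUTH_REFRESH_TOKEN
    config_key: &str,              // value of agent.provider.api_key
) -> MiniMaxCredential {
    // Step 1: prefer a static token if one exists.
    if let Some(t) = oauth_token.or(api_key) {
        return MiniMaxCredential::Static(t);
    }
    // Step 2: only "minimax-oauth" opts into the refresh flow.
    if config_key == "minimax-oauth" {
        if let Some(r) = refresh_token {
            return MiniMaxCredential::Refresh(r);
        }
    }
    MiniMaxCredential::Missing
}

fn main() {
    assert_eq!(
        select_credential(Some("t".into()), None, None, "minimax-oauth"),
        MiniMaxCredential::Static("t".into())
    );
    assert_eq!(
        select_credential(None, None, Some("r".into()), "minimax-oauth"),
        MiniMaxCredential::Refresh("r".into())
    );
    assert_eq!(
        select_credential(None, None, Some("r".into()), "sk-abc"),
        MiniMaxCredential::Missing
    );
    println!("ok");
}
```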
Sources: src/providers/mod.rs:25-36,47-70,172-278,298-306,660-665
### Moonshot (Kimi)

| Alias | Base URL | Region |
|---|---|---|
| `moonshot`, `kimi`, `moonshot-cn` | https://api.moonshot.cn/v1 | China |
| `moonshot-intl`, `kimi-intl` | https://api.moonshot.ai/v1 | International |

Special: the Kimi Code variant uses a custom user agent:

```toml
name = "kimi-code" # aliases: kimi_coding, kimi_for_coding
```

It uses https://api.kimi.com/coding/v1 with `User-Agent: KimiCLI/0.77`.

Environment: `MOONSHOT_API_KEY`, `KIMI_CODE_API_KEY`
Sources: src/providers/mod.rs:39-40,84-96,318-326,625-639
| Provider | Config Name | Base URL | Env Var |
|---|---|---|---|
| Z.AI | `zai`, `z.ai`, `zai-global` | https://api.z.ai/api/coding/paas/v4 | `ZAI_API_KEY` |
| Z.AI China | `zai-cn` | https://open.bigmodel.cn/api/coding/paas/v4 | `ZAI_API_KEY` |
| Qianfan (Baidu) | `qianfan`, `baidu` | https://aip.baidubce.com | `QIANFAN_API_KEY` |
Sources: src/providers/mod.rs:44-45,118-128,340-348,646-651,672-674
| Provider | Config Name | Base URL | Env Var |
|---|---|---|---|
| Amazon Bedrock | `bedrock`, `aws-bedrock` | https://bedrock-runtime.us-east-1.amazonaws.com | Generic fallback |
| OVH Cloud | `ovhcloud`, `ovh` | https://oai.endpoints.kepler.ai.cloud.ovh.net/v1 | `OVH_AI_ENDPOINTS_ACCESS_TOKEN` |
| NVIDIA NIM | `nvidia`, `nvidia-nim` | https://integrate.api.nvidia.com/v1 | `NVIDIA_API_KEY` |
Sources: src/providers/mod.rs:666-671,722-729,737-740
| Provider | Config Name | Base URL | Env Var |
|---|---|---|---|
| Venice | `venice` | https://api.venice.ai | `VENICE_API_KEY` |
| Vercel AI | `vercel`, `vercel-ai` | https://api.vercel.ai | `VERCEL_API_KEY` |
| Cloudflare | `cloudflare`, `cloudflare-ai` | https://gateway.ai.cloudflare.com/v1 | `CLOUDFLARE_API_KEY` |
| Astrai | `astrai` | https://as-trai.com/v1 | `ASTRAI_API_KEY` |
Sources: src/providers/mod.rs:613-624,732-734
| Provider | Config Name | Base URL | Env Var |
|---|---|---|---|
| Synthetic | `synthetic` | https://api.synthetic.com | `SYNTHETIC_API_KEY` |
| OpenCode Zen | `opencode`, `opencode-zen` | https://opencode.ai/zen/v1 | `OPENCODE_API_KEY` |
Sources: src/providers/mod.rs:640-645
| Provider | Config Name | Base URL | Default API Key |
|---|---|---|---|
| LM Studio | `lmstudio`, `lm-studio` | http://localhost:1234/v1 | `"lm-studio"` (placeholder) |
Sources: src/providers/mod.rs:710-721
### Custom Endpoints

For any OpenAI API-compatible endpoint:

```toml
[agent.provider]
name = "custom:https://your-llm-api.com/v1"
api_key = "your-key"
model = "your-model"
```

URL validation:
- Must be a valid HTTP/HTTPS URL
- Automatically appends `/chat/completions` unless already present
- Error message: `Custom provider requires a valid URL. Format: custom:https://your-api.com`
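The validation rules above can be sketched as a small normalizer. This is a std-only illustration with an assumed function name, not the actual `parse_custom_provider_url()`:

```rust
// Sketch: validate a "custom:" provider spec and normalize the endpoint URL.
fn parse_custom_url(spec: &str) -> Result<String, String> {
    let url = spec.strip_prefix("custom:").unwrap_or(spec);
    // Must be a valid HTTP/HTTPS URL (scheme check only in this sketch).
    if !(url.starts_with("http://") || url.starts_with("https://")) {
        return Err(
            "Custom provider requires a valid URL. Format: custom:https://your-api.com".into(),
        );
    }
    // Append /chat/completions unless already present.
    if url.ends_with("/chat/completions") {
        Ok(url.to_string())
    } else {
        Ok(format!("{}/chat/completions", url.trim_end_matches('/')))
    }
}

fn main() {
    assert_eq!(
        parse_custom_url("custom:https://your-llm-api.com/v1").unwrap(),
        "https://your-llm-api.com/v1/chat/completions"
    );
    assert!(parse_custom_url("custom:not-a-url").is_err());
    println!("ok");
}
```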
Sources: src/providers/mod.rs:744-756
For Anthropic Messages API-compatible endpoints:

```toml
[agent.provider]
name = "anthropic-custom:https://your-claude-api.com"
api_key = "your-key"
model = "custom-claude-model"
```

Sources: src/providers/mod.rs:759-770
### API Key Resolution

```mermaid
graph TB
    Start["Provider Factory"]
    Start --> CheckOverride{"Explicit api_key<br/>in config?"}
    CheckOverride -->|Yes| CheckMinimax{"Is MiniMax and<br/>value='minimax-oauth'?"}
    CheckMinimax -->|Yes| MinimaxOAuth["MiniMax OAuth Flow"]
    CheckMinimax -->|No| UseOverride["Use config value"]
    CheckOverride -->|No| CheckProviderEnv{"Provider-specific<br/>env var exists?"}
    CheckProviderEnv -->|Yes| UseProviderEnv["Use provider env var<br/>(e.g., ANTHROPIC_API_KEY)"]
    CheckProviderEnv -->|No| CheckGeneric{"ZEROCLAW_API_KEY<br/>or API_KEY?"}
    CheckGeneric -->|Yes| UseGeneric["Use generic fallback"]
    CheckGeneric -->|No| NoKey["No credential<br/>(error on first request)"]
    MinimaxOAuth --> CheckStatic{"MINIMAX_OAUTH_TOKEN<br/>or MINIMAX_API_KEY?"}
    CheckStatic -->|Yes| UseStatic["Use static token"]
    CheckStatic -->|No| CheckRefresh{"MINIMAX_OAUTH_REFRESH_TOKEN?"}
    CheckRefresh -->|Yes| RefreshToken["POST /oauth/token<br/>Get access_token"]
    CheckRefresh -->|No| NoKey
    style Start fill:#f9f9f9
    style UseOverride fill:#d4edda
    style UseProviderEnv fill:#d4edda
    style UseGeneric fill:#d4edda
    style UseStatic fill:#d4edda
    style RefreshToken fill:#fff3cd
    style NoKey fill:#f8d7da
```
Sources: src/providers/mod.rs:465-547
| Provider | Primary Env Var | Fallback Env Vars | OAuth Support |
|---|---|---|---|
| `anthropic` | `ANTHROPIC_OAUTH_TOKEN` | `ANTHROPIC_API_KEY` | ✓ (setup tokens) |
| `openrouter` | `OPENROUTER_API_KEY` | Generic fallbacks | ✗ |
| `openai` | `OPENAI_API_KEY` | Generic fallbacks | ✗ |
| `ollama` | `OLLAMA_API_KEY` | None | ✗ |
| `gemini` | `GEMINI_API_KEY` | `GOOGLE_API_KEY` → CLI OAuth | ✓ (CLI tokens) |
| `venice` | `VENICE_API_KEY` | Generic fallbacks | ✗ |
| `groq` | `GROQ_API_KEY` | Generic fallbacks | ✗ |
| `mistral` | `MISTRAL_API_KEY` | Generic fallbacks | ✗ |
| `deepseek` | `DEEPSEEK_API_KEY` | Generic fallbacks | ✗ |
| `xai` | `XAI_API_KEY` | Generic fallbacks | ✗ |
| `together` | `TOGETHER_API_KEY` | Generic fallbacks | ✗ |
| `fireworks` | `FIREWORKS_API_KEY` | Generic fallbacks | ✗ |
| `perplexity` | `PERPLEXITY_API_KEY` | Generic fallbacks | ✗ |
| `cohere` | `COHERE_API_KEY` | Generic fallbacks | ✗ |
| `moonshot` | `MOONSHOT_API_KEY` | Generic fallbacks | ✗ |
| `kimi-code` | `KIMI_CODE_API_KEY` | `MOONSHOT_API_KEY` | ✗ |
| `glm` | `GLM_API_KEY` | Generic fallbacks | ✗ |
| `minimax` | `MINIMAX_OAUTH_TOKEN` | `MINIMAX_API_KEY` → `MINIMAX_OAUTH_REFRESH_TOKEN` | ✓ (OAuth) |
| `qianfan` | `QIANFAN_API_KEY` | Generic fallbacks | ✗ |
| `qwen` | `DASHSCOPE_API_KEY` | Generic fallbacks | ✗ |
| `zai` | `ZAI_API_KEY` | Generic fallbacks | ✗ |
| `nvidia` | `NVIDIA_API_KEY` | Generic fallbacks | ✗ |
| `vercel` | `VERCEL_API_KEY` | Generic fallbacks | ✗ |
| `cloudflare` | `CLOUDFLARE_API_KEY` | Generic fallbacks | ✗ |
| `ovhcloud` | `OVH_AI_ENDPOINTS_ACCESS_TOKEN` | Generic fallbacks | ✗ |
| `astrai` | `ASTRAI_API_KEY` | Generic fallbacks | ✗ |
| `synthetic` | `SYNTHETIC_API_KEY` | Generic fallbacks | ✗ |
| `opencode` | `OPENCODE_API_KEY` | Generic fallbacks | ✗ |
Generic fallbacks: `ZEROCLAW_API_KEY`, `API_KEY`
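The lookup order (config override, then provider-specific variable, then the generic fallbacks) can be sketched as a small helper. The function name is illustrative; the real chain lives in src/providers/mod.rs:

```rust
// Sketch: resolve an API key following the order described above.
use std::env;

fn resolve_api_key(config_key: Option<&str>, provider_var: &str) -> Option<String> {
    // 1. An explicit, non-empty config value always wins.
    if let Some(k) = config_key {
        if !k.is_empty() {
            return Some(k.to_string());
        }
    }
    // 2. Provider-specific env var, then the generic fallbacks.
    for var in [provider_var, "ZEROCLAW_API_KEY", "API_KEY"] {
        if let Ok(v) = env::var(var) {
            if !v.is_empty() {
                return Some(v);
            }
        }
    }
    None // no credential: the provider errors on the first request
}

fn main() {
    // Config override takes precedence regardless of the environment.
    assert_eq!(
        resolve_api_key(Some("sk-from-config"), "OPENROUTER_API_KEY"),
        Some("sk-from-config".to_string())
    );
    println!("ok");
}
```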
Sources: src/providers/mod.rs:485-516
### Listing Providers

Display all supported providers with their aliases:

```
zeroclaw providers list
```

Output format:

```
openrouter       OpenRouter
anthropic        Anthropic
openai           OpenAI
openai-codex     OpenAI Codex (OAuth) [openai_codex, codex]
ollama           Ollama [local]
gemini           Google Gemini [google, google-gemini]
venice           Venice
groq             Groq
...
```

Implementation: `list_providers()` returns `Vec<ProviderInfo>` with:
- `name`: canonical config name
- `display_name`: human-readable label
- `aliases`: alternative config names
- `local`: whether the provider runs locally (no API key required)
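An illustrative shape for `ProviderInfo` and the listing output, based on the fields above (the exact types, derives, and formatting helper are assumptions):

```rust
// Sketch of ProviderInfo and one row of `zeroclaw providers list` output.
struct ProviderInfo {
    name: &'static str,
    display_name: &'static str,
    aliases: &'static [&'static str],
    local: bool,
}

fn format_row(p: &ProviderInfo) -> String {
    let mut line = format!("{:<16} {}", p.name, p.display_name);
    if p.local {
        line.push_str(" [local]");
    } else if !p.aliases.is_empty() {
        line.push_str(&format!(" [{}]", p.aliases.join(", ")));
    }
    line
}

fn main() {
    let gemini = ProviderInfo {
        name: "gemini",
        display_name: "Google Gemini",
        aliases: &["google", "google-gemini"],
        local: false,
    };
    assert!(format_row(&gemini).contains("[google, google-gemini]"));
    println!("ok");
}
```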
Sources: src/providers/mod.rs:915-1131
### Tool Calling Capabilities

```mermaid
graph LR
    Provider["Provider Trait"]
    Provider -->|"capabilities()"| Caps["ProviderCapabilities"]
    Caps --> NativeTools["native_tool_calling: bool"]
    NativeTools -->|true| Native["Providers with native tools:<br/>• openai<br/>• anthropic<br/>• openrouter<br/>• gemini<br/>• compatible providers"]
    NativeTools -->|false| PromptGuided["Providers with prompt-guided tools:<br/>• ollama<br/>• custom providers<br/>without tool schema support"]
    style Provider fill:#f9f9f9
    style Caps fill:#e8e8e8
    style Native fill:#d4edda
    style PromptGuided fill:#fff3cd
```
Native tool calling: the provider converts `ToolSpec` into an API-native format (OpenAI function definitions, Anthropic `input_schema`, Gemini `functionDeclarations`).

Prompt-guided: tools are injected into the system prompt as XML-tagged documentation. The LLM responds with `<tool_call>` tags that are parsed by the agent loop.
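A minimal sketch of the prompt-guided path: extracting the body of a `<tool_call>` tag from a model response. The tag format is taken from the description above; the function name and exact parsing strategy are assumptions:

```rust
// Sketch: pull the payload out of a <tool_call>...</tool_call> block.
fn extract_tool_call(response: &str) -> Option<&str> {
    let open = "<tool_call>";
    let close = "</tool_call>";
    let start = response.find(open)? + open.len();
    let end = response[start..].find(close)? + start;
    Some(response[start..end].trim())
}

fn main() {
    let resp = r#"Sure. <tool_call>{"name": "shell", "arguments": {}}</tool_call>"#;
    assert_eq!(
        extract_tool_call(resp),
        Some(r#"{"name": "shell", "arguments": {}}"#)
    );
    assert_eq!(extract_tool_call("no tools here"), None);
    println!("ok");
}
```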
Sources: src/providers/traits.rs:195-262, src/providers/compatible.rs:767-771, src/providers/anthropic.rs:496-498
### Error Sanitization

All provider error messages are automatically scrubbed to prevent credential leakage.

Scrubbed patterns:
- `sk-*` (OpenAI-style API keys)
- `xoxb-*`, `xoxp-*` (Slack tokens)
- `ghp_*`, `gho_*`, `ghu_*`, `github_pat_*` (GitHub tokens)

Function: `scrub_secret_patterns(input: &str) -> String`

Example:

```rust
// Input:  "Invalid API key: sk-proj-abcdef123456"
// Output: "Invalid API key: [REDACTED]"
```

Length limiting: errors are truncated to 200 characters via `sanitize_api_error()`.
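A std-only sketch of prefix-based scrubbing. The real `scrub_secret_patterns()` presumably matches patterns more precisely; this version simply redacts any whitespace-separated token that starts with a known secret prefix (and therefore collapses runs of whitespace):

```rust
// Sketch: redact tokens that look like credentials, per the patterns above.
const SECRET_PREFIXES: &[&str] =
    &["sk-", "xoxb-", "xoxp-", "ghp_", "gho_", "ghu_", "github_pat_"];

fn scrub(input: &str) -> String {
    input
        .split_whitespace()
        .map(|tok| {
            if SECRET_PREFIXES.iter().any(|p| tok.starts_with(p)) {
                "[REDACTED]"
            } else {
                tok
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    assert_eq!(
        scrub("Invalid API key: sk-proj-abcdef123456"),
        "Invalid API key: [REDACTED]"
    );
    assert_eq!(scrub("token ghp_abc123 rejected"), "token [REDACTED] rejected");
    println!("ok");
}
```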
Sources: src/providers/mod.rs:367-439
### Provider Factory Flow

```mermaid
graph TB
    Config["Config::load()"]
    Config --> Factory["create_provider_with_url()"]
    Factory --> CheckName{"Provider name?"}
    CheckName -->|"openrouter"| OpenRouter["OpenRouterProvider::new()"]
    CheckName -->|"anthropic"| Anthropic["AnthropicProvider::new()"]
    CheckName -->|"openai"| OpenAI["OpenAiProvider::with_base_url()"]
    CheckName -->|"ollama"| Ollama["OllamaProvider::new()"]
    CheckName -->|"gemini"| Gemini["GeminiProvider::new()"]
    CheckName -->|"groq"| Groq["OpenAiCompatibleProvider::new()<br/>api.groq.com/openai"]
    CheckName -->|"qwen"| Qwen["OpenAiCompatibleProvider::new()<br/>dashscope.aliyuncs.com"]
    CheckName -->|"custom:*"| Custom["parse_custom_provider_url()<br/>OpenAiCompatibleProvider"]
    CheckName -->|"anthropic-custom:*"| AnthropicCustom["parse_custom_provider_url()<br/>AnthropicProvider"]
    CheckName -->|unknown| Error["anyhow::bail!()<br/>'Unknown provider'"]
    OpenRouter --> Wrap["Box<dyn Provider>"]
    Anthropic --> Wrap
    OpenAI --> Wrap
    Ollama --> Wrap
    Gemini --> Wrap
    Groq --> Wrap
    Qwen --> Wrap
    Custom --> Wrap
    AnthropicCustom --> Wrap
    Wrap --> Resilient["create_resilient_provider()<br/>ReliableProvider wrapper"]
    style Factory fill:#f9f9f9
    style Wrap fill:#e8e8e8
    style Resilient fill:#d4edda
```
Sources: src/providers/mod.rs:572-913