---
summary: Model providers (LLMs) supported by OpenClaw
title: Model Providers
---

# Model Providers
OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See Channels.
## Highlight: Venice (Venice AI)

Venice is our recommended setup for privacy-first inference, with the option to use Opus for hard tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See Venice AI.
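
To make Venice the default, reuse the config shape from Quick start below. A minimal sketch (the key path comes from that example; the model refs are the ones listed above):

```json5
{
  agents: {
    defaults: {
      // Everyday default: privacy-first inference on Venice.
      // Swap in "venice/claude-opus-45" when a hard task calls for Opus.
      model: { primary: "venice/llama-3.3-70b" },
    },
  },
}
```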
## Quick start

- Authenticate with the provider (usually via `openclaw onboard`).
- Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
## Provider docs
- OpenAI (API + Codex)
- Anthropic (API + Claude Code CLI)
- Qwen (OAuth)
- OpenRouter
- LiteLLM (unified gateway; see the config sketch after this list)
- Vercel AI Gateway
- Together AI
- Cloudflare AI Gateway
- Moonshot AI (Kimi + Kimi Coding)
- OpenCode Zen
- Amazon Bedrock
- Z.AI
- Xiaomi
- GLM models
- MiniMax
- Venice (Venice AI, privacy-focused)
- Ollama (local models)
- Qianfan
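
Gateway providers such as LiteLLM are registered under `models.providers.<id>` with a base URL, API type, and model definitions. A minimal sketch, assuming an OpenAI-compatible gateway running locally; the field names and values shown here are illustrative, so treat the LiteLLM provider page as authoritative:

```json5
{
  models: {
    providers: {
      // Hypothetical gateway entry: the URL, api value, and model ids
      // below are examples, not a definitive schema.
      litellm: {
        baseUrl: "http://localhost:4000",
        api: "openai-completions",
        models: [{ id: "claude-opus-4-6", name: "Claude Opus via LiteLLM" }],
      },
    },
  },
}
```

Once an entry like this exists, its models should be addressable in the usual `provider/model` form, e.g. `litellm/claude-opus-4-6`.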
## Transcription providers
## Community tools

- Claude Max API Proxy - use a Claude Max/Pro subscription as an OpenAI-compatible API endpoint
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.