
---
summary: Model providers (LLMs) supported by OpenClaw
read_when:
  - You want to choose a model provider
  - You need a quick overview of supported LLM backends
title: Model Providers
---

# Model Providers

OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the default model in `provider/model` form.

Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See Channels.

## Highlight: Venice (Venice AI)

Venice AI is our recommended setup for privacy-first inference, with the option to switch to Opus for hard tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See Venice AI.
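To make Venice the default, point the primary model at the Venice default listed above. A minimal config sketch, using the same `agents.defaults.model` shape as the Quick start below:

```json5
// Hedged sketch: use the Venice default model from this page
// as the primary model in your OpenClaw config.
{
  agents: {
    defaults: {
      model: { primary: "venice/llama-3.3-70b" },
    },
  },
}
```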

## Quick start

1. Authenticate with the provider (usually via `openclaw onboard`).
2. Set the default model:

   ```json5
   {
     agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
   }
   ```
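Providers that sit behind a proxy (for example LiteLLM) can also be registered as their own provider entry with a base URL, API type, and model definitions. A minimal sketch — the exact field names (`baseUrl`, `api`, `models`) and values here are assumptions, so check the provider's own docs page for your version's schema:

```json5
// Hedged sketch: registering a LiteLLM proxy as a provider.
// Field names and values below are illustrative, not authoritative.
{
  models: {
    providers: {
      litellm: {
        baseUrl: "http://localhost:4000", // your LiteLLM proxy endpoint (assumed)
        api: "openai-completions",        // assumed API type
        models: [{ id: "gpt-4o" }],       // assumed model-definition shape
      },
    },
  },
  // Then reference it in provider/model form:
  agents: { defaults: { model: { primary: "litellm/gpt-4o" } } },
}
```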

## Provider docs

## Transcription providers

## Community tools

For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.