refactor: rename to openclaw

This commit is contained in:
Peter Steinberger
2026-01-30 03:15:10 +01:00
parent 4583f88626
commit 9a7160786a
2357 changed files with 16688 additions and 16788 deletions

View File

@@ -1,13 +1,13 @@
---
summary: "Use Anthropic Claude via API keys or setup-token in Moltbot"
summary: "Use Anthropic Claude via API keys or setup-token in OpenClaw"
read_when:
- You want to use Anthropic models in Moltbot
- You want to use Anthropic models in OpenClaw
- You want setup-token instead of API keys
---
# Anthropic (Claude)
Anthropic builds the **Claude** model family and provides access via an API.
In Moltbot you can authenticate with an API key or a **setup-token**.
In OpenClaw you can authenticate with an API key or a **setup-token**.
## Option A: Anthropic API key
@@ -17,11 +17,11 @@ Create your API key in the Anthropic Console.
### CLI setup
```bash
moltbot onboard
openclaw onboard
# choose: Anthropic API key
# or non-interactive
moltbot onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```
### Config snippet
@@ -35,7 +35,7 @@ moltbot onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
## Prompt caching (Anthropic API)
Moltbot does **not** override Anthropic's default cache TTL unless you set it.
OpenClaw does **not** override Anthropic's default cache TTL unless you set it.
This is **API-only**; subscription auth does not honor TTL settings.
To set the TTL per model, use `cacheControlTtl` in the model `params`:
@@ -54,7 +54,7 @@ To set the TTL per model, use `cacheControlTtl` in the model `params`:
}
```
Moltbot includes the `extended-cache-ttl-2025-04-11` beta flag for Anthropic API
OpenClaw includes the `extended-cache-ttl-2025-04-11` beta flag for Anthropic API
requests; keep it if you override provider headers (see [/gateway/configuration](/gateway/configuration)).
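Based on the fields named above, a per-model TTL override might look like the sketch below. Only `params` and `cacheControlTtl` come from the text; the surrounding provider shape, the model ID, and the TTL value are illustrative assumptions:

```json5
{
  models: {
    providers: {
      anthropic: {
        models: [
          {
            // hypothetical model ID for illustration
            id: "claude-sonnet-4",
            params: {
              // per-model cache TTL, as described above (API-only)
              cacheControlTtl: "1h"
            }
          }
        ]
      }
    }
  }
}
```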
## Option B: Claude setup-token
@@ -69,23 +69,23 @@ Setup-tokens are created by the **Claude Code CLI**, not the Anthropic Console.
claude setup-token
```
Paste the token into Moltbot (wizard: **Anthropic token (paste setup-token)**), or run it on the gateway host:
Paste the token into OpenClaw (wizard: **Anthropic token (paste setup-token)**), or run it on the gateway host:
```bash
moltbot models auth setup-token --provider anthropic
openclaw models auth setup-token --provider anthropic
```
If you generated the token on a different machine, paste it:
```bash
moltbot models auth paste-token --provider anthropic
openclaw models auth paste-token --provider anthropic
```
### CLI setup
```bash
# Paste a setup-token during onboarding
moltbot onboard --auth-choice setup-token
openclaw onboard --auth-choice setup-token
```
### Config snippet
@@ -98,7 +98,7 @@ moltbot onboard --auth-choice setup-token
## Notes
- Generate the setup-token with `claude setup-token` and paste it, or run `moltbot models auth setup-token` on the gateway host.
- Generate the setup-token with `claude setup-token` and paste it, or run `openclaw models auth setup-token` on the gateway host.
- If you see “OAuth token refresh failed …” on a Claude subscription, re-auth with a setup-token. See [/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription](/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription).
- Auth details + reuse rules are in [/concepts/oauth](/concepts/oauth).
@@ -108,19 +108,19 @@ moltbot onboard --auth-choice setup-token
- Claude subscription auth can expire or be revoked. Re-run `claude setup-token`
and paste it into the **gateway host**.
- If the Claude CLI login lives on a different machine, use
`moltbot models auth paste-token --provider anthropic` on the gateway host.
`openclaw models auth paste-token --provider anthropic` on the gateway host.
**No API key found for provider "anthropic"**
- Auth is **per agent**. New agents don't inherit the main agent's keys.
- Re-run onboarding for that agent, or paste a setup-token / API key on the
gateway host, then verify with `moltbot models status`.
gateway host, then verify with `openclaw models status`.
**No credentials found for profile `anthropic:default`**
- Run `moltbot models status` to see which auth profile is active.
- Run `openclaw models status` to see which auth profile is active.
- Re-run onboarding, or paste a setup-token / API key for that profile.
**No available auth profile (all in cooldown/unavailable)**
- Check `moltbot models status --json` for `auth.unusableProfiles`.
- Check `openclaw models status --json` for `auth.unusableProfiles`.
- Add another Anthropic profile or wait for cooldown.
More: [/gateway/troubleshooting](/gateway/troubleshooting) and [/help/faq](/help/faq).

View File

@@ -67,9 +67,9 @@ curl http://localhost:3456/v1/chat/completions \
}'
```
### With Moltbot
### With OpenClaw
You can point Moltbot at the proxy as a custom OpenAI-compatible endpoint:
You can point OpenClaw at the proxy as a custom OpenAI-compatible endpoint:
```json5
{
@@ -134,12 +134,12 @@ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
## Notes
- This is a **community tool**, not officially supported by Anthropic or Moltbot
- This is a **community tool**, not officially supported by Anthropic or OpenClaw
- Requires an active Claude Max/Pro subscription with Claude Code CLI authenticated
- The proxy runs locally and does not send data to any third-party servers
- Streaming responses are fully supported
## See Also
- [Anthropic provider](/providers/anthropic) - Native Moltbot integration with Claude setup-token or API keys
- [Anthropic provider](/providers/anthropic) - Native OpenClaw integration with Claude setup-token or API keys
- [OpenAI provider](/providers/openai) - For OpenAI/Codex subscriptions

View File

@@ -6,10 +6,10 @@ read_when:
---
# Deepgram (Audio Transcription)
Deepgram is a speech-to-text API. In Moltbot it is used for **inbound audio/voice note
Deepgram is a speech-to-text API. In OpenClaw it is used for **inbound audio/voice note
transcription** via `tools.media.audio`.
When enabled, Moltbot uploads the audio file to Deepgram and injects the transcript
When enabled, OpenClaw uploads the audio file to Deepgram and injects the transcript
into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**;
it uses the pre-recorded transcription endpoint.
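A minimal sketch of what enabling this might look like in config. Only the `tools.media.audio` path is taken from the text above; the field names inside it are assumptions, so check the actual schema before copying:

```json5
{
  tools: {
    media: {
      audio: {
        // hypothetical fields; verify exact names against the config reference
        enabled: true,
        provider: "deepgram"
      }
    }
  }
}
```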

View File

@@ -1,28 +1,28 @@
---
summary: "Sign in to GitHub Copilot from Moltbot using the device flow"
summary: "Sign in to GitHub Copilot from OpenClaw using the device flow"
read_when:
- You want to use GitHub Copilot as a model provider
- You need the `moltbot models auth login-github-copilot` flow
- You need the `openclaw models auth login-github-copilot` flow
---
# GitHub Copilot
## What is GitHub Copilot?
GitHub Copilot is GitHub's AI coding assistant. It provides access to Copilot
models for your GitHub account and plan. Moltbot can use Copilot as a model
models for your GitHub account and plan. OpenClaw can use Copilot as a model
provider in two different ways.
## Two ways to use Copilot in Moltbot
## Two ways to use Copilot in OpenClaw
### 1) Built-in GitHub Copilot provider (`github-copilot`)
Use the native device-login flow to obtain a GitHub token, then exchange it for
Copilot API tokens when Moltbot runs. This is the **default** and simplest path
Copilot API tokens when OpenClaw runs. This is the **default** and simplest path
because it does not require VS Code.
### 2) Copilot Proxy plugin (`copilot-proxy`)
Use the **Copilot Proxy** VS Code extension as a local bridge. Moltbot talks to
Use the **Copilot Proxy** VS Code extension as a local bridge. OpenClaw talks to
the proxy's `/v1` endpoint and uses the model list you configure there. Choose
this when you already run Copilot Proxy in VS Code or need to route through it.
You must enable the plugin and keep the VS Code extension running.
@@ -34,7 +34,7 @@ profile.
## CLI setup
```bash
moltbot models auth login-github-copilot
openclaw models auth login-github-copilot
```
You'll be prompted to visit a URL and enter a one-time code. Keep the terminal
@@ -43,14 +43,14 @@ open until it completes.
### Optional flags
```bash
moltbot models auth login-github-copilot --profile-id github-copilot:work
moltbot models auth login-github-copilot --yes
openclaw models auth login-github-copilot --profile-id github-copilot:work
openclaw models auth login-github-copilot --yes
```
## Set a default model
```bash
moltbot models set github-copilot/gpt-4o
openclaw models set github-copilot/gpt-4o
```
### Config snippet
@@ -67,4 +67,4 @@ moltbot models set github-copilot/gpt-4o
- Copilot model availability depends on your plan; if a model is rejected, try
another ID (for example `github-copilot/gpt-4.1`).
- The login stores a GitHub token in the auth profile store and exchanges it for a
Copilot API token when Moltbot runs.
Copilot API token when OpenClaw runs.

View File

@@ -1,18 +1,18 @@
---
summary: "GLM model family overview + how to use it in Moltbot"
summary: "GLM model family overview + how to use it in OpenClaw"
read_when:
- You want GLM models in Moltbot
- You want GLM models in OpenClaw
- You need the model naming convention and setup
---
# GLM models
GLM is a **model family** (not a company) available through the Z.AI platform. In Moltbot, GLM
GLM is a **model family** (not a company) available through the Z.AI platform. In OpenClaw, GLM
models are accessed via the `zai` provider and model IDs like `zai/glm-4.7`.
## CLI setup
```bash
moltbot onboard --auth-choice zai-api-key
openclaw onboard --auth-choice zai-api-key
```
## Config snippet

View File

@@ -1,12 +1,12 @@
---
summary: "Model providers (LLMs) supported by Moltbot"
summary: "Model providers (LLMs) supported by OpenClaw"
read_when:
- You want to choose a model provider
- You need a quick overview of supported LLM backends
---
# Model Providers
Moltbot can use many LLM providers. Pick a provider, authenticate, then set the
OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the
default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
@@ -22,7 +22,7 @@ See [Venice AI](/providers/venice).
## Quick start
1) Authenticate with the provider (usually via `moltbot onboard`).
1) Authenticate with the provider (usually via `openclaw onboard`).
2) Set the default model:
```json5

View File

@@ -1,7 +1,7 @@
---
summary: "Use MiniMax M2.1 in Moltbot"
summary: "Use MiniMax M2.1 in OpenClaw"
read_when:
- You want MiniMax models in Moltbot
- You want MiniMax models in OpenClaw
- You need MiniMax setup guidance
---
# MiniMax
@@ -40,7 +40,7 @@ MiniMax highlights these improvements in M2.1:
**Best for:** hosted MiniMax with Anthropic-compatible API.
Configure via CLI:
- Run `moltbot configure`
- Run `openclaw configure`
- Select **Model/auth**
- Choose **MiniMax M2.1**
@@ -100,7 +100,7 @@ Configure via CLI:
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
desktop/server) using LM Studio's local server.
Configure manually via `moltbot.json`:
Configure manually via `openclaw.json`:
```json5
{
@@ -134,11 +134,11 @@ Configure manually via `moltbot.json`:
}
```
## Configure via `moltbot configure`
## Configure via `openclaw configure`
Use the interactive config wizard to set MiniMax without editing JSON:
1) Run `moltbot configure`.
1) Run `openclaw configure`.
2) Select **Model/auth**.
3) Choose **MiniMax M2.1**.
4) Pick your default model when prompted.
@@ -159,7 +159,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `moltbot models list` and `moltbot models set minimax/MiniMax-M2.1` to switch.
- Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.1` to switch.
## Troubleshooting
@@ -169,7 +169,7 @@ This usually means the **MiniMax provider isn't configured** (no provider entry
and no MiniMax auth profile/env key found). A fix for this detection is in
**2026.1.12** (unreleased at the time of writing). Fix by:
- Upgrading to **2026.1.12** (or run from source `main`), then restarting the gateway.
- Running `moltbot configure` and selecting **MiniMax M2.1**, or
- Running `openclaw configure` and selecting **MiniMax M2.1**, or
- Adding the `models.providers.minimax` block manually, or
- Setting `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.
@@ -179,5 +179,5 @@ Make sure the model id is **case-sensitive**:
Then recheck with:
```bash
moltbot models list
openclaw models list
```

View File

@@ -1,12 +1,12 @@
---
summary: "Model providers (LLMs) supported by Moltbot"
summary: "Model providers (LLMs) supported by OpenClaw"
read_when:
- You want to choose a model provider
- You want quick setup examples for LLM auth + model selection
---
# Model Providers
Moltbot can use many LLM providers. Pick one, authenticate, then set the default
OpenClaw can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.
## Highlight: Venice AI
@@ -20,7 +20,7 @@ See [Venice AI](/providers/venice).
## Quick start (two steps)
1) Authenticate with the provider (usually via `moltbot onboard`).
1) Authenticate with the provider (usually via `openclaw onboard`).
2) Set the default model:
```json5

View File

@@ -22,13 +22,13 @@ Current Kimi K2 model IDs:
{/* moonshot-kimi-k2-ids:end */}
```bash
moltbot onboard --auth-choice moonshot-api-key
openclaw onboard --auth-choice moonshot-api-key
```
Kimi Code:
```bash
moltbot onboard --auth-choice kimi-code-api-key
openclaw onboard --auth-choice kimi-code-api-key
```
Note: Moonshot and Kimi Code are separate providers. Keys are not interchangeable, endpoints differ, and model refs differ (Moonshot uses `moonshot/...`, Kimi Code uses `kimi-code/...`).

View File

@@ -1,12 +1,12 @@
---
summary: "Run Moltbot with Ollama (local LLM runtime)"
summary: "Run OpenClaw with Ollama (local LLM runtime)"
read_when:
- You want to run Moltbot with local models via Ollama
- You want to run OpenClaw with local models via Ollama
- You need Ollama setup and configuration guidance
---
# Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Moltbot integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
## Quick start
@@ -22,14 +22,14 @@ ollama pull qwen2.5-coder:32b
ollama pull deepseek-r1:32b
```
3) Enable Ollama for Moltbot (any value works; Ollama doesn't require a real key):
3) Enable Ollama for OpenClaw (any value works; Ollama doesn't require a real key):
```bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
moltbot config set models.providers.ollama.apiKey "ollama-local"
openclaw config set models.providers.ollama.apiKey "ollama-local"
```
4) Use Ollama models:
@@ -46,7 +46,7 @@ moltbot config set models.providers.ollama.apiKey "ollama-local"
## Model discovery (implicit provider)
When you set `OLLAMA_API_KEY` (or an auth profile) and **do not** define `models.providers.ollama`, Moltbot discovers models from the local Ollama instance at `http://127.0.0.1:11434`:
When you set `OLLAMA_API_KEY` (or an auth profile) and **do not** define `models.providers.ollama`, OpenClaw discovers models from the local Ollama instance at `http://127.0.0.1:11434`:
- Queries `/api/tags` and `/api/show`
- Keeps only models that report `tools` capability
@@ -61,7 +61,7 @@ To see what models are available:
```bash
ollama list
moltbot models list
openclaw models list
```
To add a new model, simply pull it with Ollama:
@@ -117,7 +117,7 @@ Use explicit config when:
}
```
If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and Moltbot will fill it for availability checks.
If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and OpenClaw will fill it for availability checks.
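Pulling the pieces above together, an explicit provider entry could be sketched as follows. The `apiKey` value and model ID come from earlier in this guide; the nested shape and the `contextWindow`/`maxTokens` numbers are illustrative assumptions:

```json5
{
  models: {
    providers: {
      ollama: {
        // omit apiKey if OLLAMA_API_KEY is exported in the environment
        apiKey: "ollama-local",
        models: [
          {
            id: "qwen2.5-coder:32b", // a model pulled in the quick start
            contextWindow: 32768,    // override; illustrative value
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```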
### Custom base URL (explicit config)
@@ -157,7 +157,7 @@ Once configured, all your Ollama models are available:
### Reasoning models
Moltbot marks models as reasoning-capable when Ollama reports `thinking` in `/api/show`:
OpenClaw marks models as reasoning-capable when Ollama reports `thinking` in `/api/show`:
```bash
ollama pull deepseek-r1:32b
@@ -169,7 +169,7 @@ Ollama is free and runs locally, so all model costs are set to $0.
### Context windows
For auto-discovered models, Moltbot uses the context window reported by Ollama when available, otherwise it defaults to `8192`. You can override `contextWindow` and `maxTokens` in explicit provider config.
For auto-discovered models, OpenClaw uses the context window reported by Ollama when available, otherwise it defaults to `8192`. You can override `contextWindow` and `maxTokens` in explicit provider config.
## Troubleshooting
@@ -189,7 +189,7 @@ curl http://localhost:11434/api/tags
### No models available
Moltbot only auto-discovers models that report tool support. If your model isn't listed, either:
OpenClaw only auto-discovers models that report tool support. If your model isn't listed, either:
- Pull a tool-capable model, or
- Define the model explicitly in `models.providers.ollama`.

View File

@@ -1,7 +1,7 @@
---
summary: "Use OpenAI via API keys or Codex subscription in Moltbot"
summary: "Use OpenAI via API keys or Codex subscription in OpenClaw"
read_when:
- You want to use OpenAI models in Moltbot
- You want to use OpenAI models in OpenClaw
- You want Codex subscription auth instead of API keys
---
# OpenAI
@@ -17,9 +17,9 @@ Get your API key from the OpenAI dashboard.
### CLI setup
```bash
moltbot onboard --auth-choice openai-api-key
openclaw onboard --auth-choice openai-api-key
# or non-interactive
moltbot onboard --openai-api-key "$OPENAI_API_KEY"
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```
### Config snippet
@@ -40,10 +40,10 @@ Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or AP
```bash
# Run Codex OAuth in the wizard
moltbot onboard --auth-choice openai-codex
openclaw onboard --auth-choice openai-codex
# Or run OAuth directly
moltbot models auth login --provider openai-codex
openclaw models auth login --provider openai-codex
```
### Config snippet

View File

@@ -1,5 +1,5 @@
---
summary: "Use OpenCode Zen (curated models) with Moltbot"
summary: "Use OpenCode Zen (curated models) with OpenClaw"
read_when:
- You want OpenCode Zen for model access
- You want a curated list of coding-friendly models
@@ -13,9 +13,9 @@ Zen is currently in beta.
## CLI setup
```bash
moltbot onboard --auth-choice opencode-zen
openclaw onboard --auth-choice opencode-zen
# or non-interactive
moltbot onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```
## Config snippet

View File

@@ -1,8 +1,8 @@
---
summary: "Use OpenRouter's unified API to access many models in Moltbot"
summary: "Use OpenRouter's unified API to access many models in OpenClaw"
read_when:
- You want a single API key for many LLMs
- You want to run models via OpenRouter in Moltbot
- You want to run models via OpenRouter in OpenClaw
---
# OpenRouter
@@ -12,7 +12,7 @@ endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switc
## CLI setup
```bash
moltbot onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
```
## Config snippet

View File

@@ -1,7 +1,7 @@
---
summary: "Use Qwen OAuth (free tier) in Moltbot"
summary: "Use Qwen OAuth (free tier) in OpenClaw"
read_when:
- You want to use Qwen with Moltbot
- You want to use Qwen with OpenClaw
- You want free-tier OAuth access to Qwen Coder
---
# Qwen
@@ -12,7 +12,7 @@ Qwen provides a free-tier OAuth flow for Qwen Coder and Qwen Vision models
## Enable the plugin
```bash
moltbot plugins enable qwen-portal-auth
openclaw plugins enable qwen-portal-auth
```
Restart the Gateway after enabling.
@@ -20,7 +20,7 @@ Restart the Gateway after enabling.
## Authenticate
```bash
moltbot models auth login --provider qwen-portal --set-default
openclaw models auth login --provider qwen-portal --set-default
```
This runs the Qwen device-code OAuth flow and writes a provider entry to your
@@ -34,12 +34,12 @@ This runs the Qwen device-code OAuth flow and writes a provider entry to your
Switch models with:
```bash
moltbot models set qwen-portal/coder-model
openclaw models set qwen-portal/coder-model
```
## Reuse Qwen Code CLI login
If you already logged in with the Qwen Code CLI, Moltbot will sync credentials
If you already logged in with the Qwen Code CLI, OpenClaw will sync credentials
from `~/.qwen/oauth_creds.json` when it loads the auth store. You still need a
`models.providers.qwen-portal` entry (use the login command above to create one).

View File

@@ -1,12 +1,12 @@
---
summary: "Use Synthetic's Anthropic-compatible API in Moltbot"
summary: "Use Synthetic's Anthropic-compatible API in OpenClaw"
read_when:
- You want to use Synthetic as a model provider
- You need a Synthetic API key or base URL setup
---
# Synthetic
Synthetic exposes Anthropic-compatible endpoints. Moltbot registers it as the
Synthetic exposes Anthropic-compatible endpoints. OpenClaw registers it as the
`synthetic` provider and uses the Anthropic Messages API.
## Quick setup
@@ -15,7 +15,7 @@ Synthetic exposes Anthropic-compatible endpoints. Moltbot registers it as the
2) Run onboarding:
```bash
moltbot onboard --auth-choice synthetic-api-key
openclaw onboard --auth-choice synthetic-api-key
```
The default model is set to:
@@ -59,7 +59,7 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.1
}
```
Note: Moltbot's Anthropic client appends `/v1` to the base URL, so use
Note: OpenClaw's Anthropic client appends `/v1` to the base URL, so use
`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic changes
its base URL, override `models.providers.synthetic.baseUrl`.
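Putting that note into config form: only `models.providers.synthetic.baseUrl` and the URL are stated in the text; the surrounding nesting is assumed from the config paths used elsewhere in these docs:

```json5
{
  models: {
    providers: {
      synthetic: {
        // no trailing /v1 — the Anthropic client appends it
        baseUrl: "https://api.synthetic.new/anthropic"
      }
    }
  }
}
```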

View File

@@ -1,7 +1,7 @@
---
summary: "Use Venice AI privacy-focused models in Moltbot"
summary: "Use Venice AI privacy-focused models in OpenClaw"
read_when:
- You want privacy-focused inference in Moltbot
- You want privacy-focused inference in OpenClaw
- You want Venice AI setup guidance
---
# Venice AI (Venice highlight)
@@ -10,7 +10,7 @@ read_when:
Venice AI provides privacy-focused AI inference with support for uncensored models and access to major proprietary models through their anonymized proxy. All inference is private by default—no training on your data, no logging.
## Why Venice in Moltbot
## Why Venice in OpenClaw
- **Private inference** for open-source models (no logging).
- **Uncensored models** when you need them.
@@ -45,7 +45,7 @@ Venice offers two privacy levels — understanding this is key to choosing your
2. Go to **Settings → API Keys → Create new key**
3. Copy your API key (format: `vapi_xxxxxxxxxxxx`)
### 2. Configure Moltbot
### 2. Configure OpenClaw
**Option A: Environment Variable**
@@ -56,7 +56,7 @@ export VENICE_API_KEY="vapi_xxxxxxxxxxxx"
**Option B: Interactive Setup (Recommended)**
```bash
moltbot onboard --auth-choice venice-api-key
openclaw onboard --auth-choice venice-api-key
```
This will:
@@ -68,7 +68,7 @@ This will:
**Option C: Non-interactive**
```bash
moltbot onboard --non-interactive \
openclaw onboard --non-interactive \
--auth-choice venice-api-key \
--venice-api-key "vapi_xxxxxxxxxxxx"
```
@@ -76,12 +76,12 @@ moltbot onboard --non-interactive \
### 3. Verify Setup
```bash
moltbot chat --model venice/llama-3.3-70b "Hello, are you working?"
openclaw chat --model venice/llama-3.3-70b "Hello, are you working?"
```
## Model Selection
After setup, Moltbot shows all available Venice models. Pick based on your needs:
After setup, OpenClaw shows all available Venice models. Pick based on your needs:
- **Default (our pick)**: `venice/llama-3.3-70b` for private, balanced performance.
- **Best overall quality**: `venice/claude-opus-45` for hard jobs (Opus remains the strongest).
@@ -91,19 +91,19 @@ After setup, Moltbot shows all available Venice models. Pick based on your needs
Change your default model anytime:
```bash
moltbot models set venice/claude-opus-45
moltbot models set venice/llama-3.3-70b
openclaw models set venice/claude-opus-45
openclaw models set venice/llama-3.3-70b
```
List all available models:
```bash
moltbot models list | grep venice
openclaw models list | grep venice
```
## Configure via `moltbot configure`
## Configure via `openclaw configure`
1. Run `moltbot configure`
1. Run `openclaw configure`
2. Select **Model/auth**
3. Choose **Venice AI**
@@ -159,7 +159,7 @@ moltbot models list | grep venice
## Model Discovery
Moltbot automatically discovers models from the Venice API when `VENICE_API_KEY` is set. If the API is unreachable, it falls back to a static catalog.
OpenClaw automatically discovers models from the Venice API when `VENICE_API_KEY` is set. If the API is unreachable, it falls back to a static catalog.
The `/models` endpoint is public (no auth needed for listing), but inference requires a valid API key.
@@ -192,19 +192,19 @@ Venice uses a credit-based system. Check [venice.ai/pricing](https://venice.ai/p
```bash
# Use default private model
moltbot chat --model venice/llama-3.3-70b
openclaw chat --model venice/llama-3.3-70b
# Use Claude via Venice (anonymized)
moltbot chat --model venice/claude-opus-45
openclaw chat --model venice/claude-opus-45
# Use uncensored model
moltbot chat --model venice/venice-uncensored
openclaw chat --model venice/venice-uncensored
# Use vision model with image
moltbot chat --model venice/qwen3-vl-235b-a22b
openclaw chat --model venice/qwen3-vl-235b-a22b
# Use coding model
moltbot chat --model venice/qwen3-coder-480b-a35b-instruct
openclaw chat --model venice/qwen3-coder-480b-a35b-instruct
```
## Troubleshooting
@@ -213,14 +213,14 @@ moltbot chat --model venice/qwen3-coder-480b-a35b-instruct
```bash
echo $VENICE_API_KEY
moltbot models list | grep venice
openclaw models list | grep venice
```
Ensure the key starts with `vapi_`.
### Model not available
The Venice model catalog updates dynamically. Run `moltbot models list` to see currently available models. Some models may be temporarily offline.
The Venice model catalog updates dynamically. Run `openclaw models list` to see currently available models. Some models may be temporarily offline.
### Connection issues

View File

@@ -2,7 +2,7 @@
title: "Vercel AI Gateway"
summary: "Vercel AI Gateway setup (auth + model selection)"
read_when:
- You want to use Vercel AI Gateway with Moltbot
- You want to use Vercel AI Gateway with OpenClaw
- You need the API key env var or CLI auth choice
---
# Vercel AI Gateway
@@ -19,7 +19,7 @@ The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to
1) Set the API key (recommended: store it for the Gateway):
```bash
moltbot onboard --auth-choice ai-gateway-api-key
openclaw onboard --auth-choice ai-gateway-api-key
```
2) Set a default model:
@@ -37,7 +37,7 @@ moltbot onboard --auth-choice ai-gateway-api-key
## Non-interactive example
```bash
moltbot onboard --non-interactive \
openclaw onboard --non-interactive \
--mode local \
--auth-choice ai-gateway-api-key \
--ai-gateway-api-key "$AI_GATEWAY_API_KEY"
@@ -46,5 +46,5 @@ moltbot onboard --non-interactive \
## Environment note
If the Gateway runs as a daemon (launchd/systemd), make sure `AI_GATEWAY_API_KEY`
is available to that process (for example, in `~/.clawdbot/.env` or via
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
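For a daemonized Gateway, the env file mentioned above could contain just the key. The variable name and path are from the text; the value is a placeholder:

```
# ~/.openclaw/.env — read by the Gateway daemon
AI_GATEWAY_API_KEY=your-key-here
```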

View File

@@ -1,21 +1,21 @@
---
summary: "Use Z.AI (GLM models) with Moltbot"
summary: "Use Z.AI (GLM models) with OpenClaw"
read_when:
- You want Z.AI / GLM models in Moltbot
- You want Z.AI / GLM models in OpenClaw
- You need a simple ZAI_API_KEY setup
---
# Z.AI
Z.AI is the API platform for **GLM** models. It provides REST APIs for GLM and uses API keys
for authentication. Create your API key in the Z.AI console. Moltbot uses the `zai` provider
for authentication. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider
with a Z.AI API key.
## CLI setup
```bash
moltbot onboard --auth-choice zai-api-key
openclaw onboard --auth-choice zai-api-key
# or non-interactive
moltbot onboard --zai-api-key "$ZAI_API_KEY"
openclaw onboard --zai-api-key "$ZAI_API_KEY"
```
## Config snippet