feat(agents): first-class Hugging Face Inference provider support, Together API fix, and direct-injection auth refactor [AI-assisted] (#13472)

* initial commit

* removes assessment from docs

* resolves automated review comments

* resolves lint, type, and test issues; refactors and submits

* solves: why do we have to lint the tests xD

* adds greptile fixes

* solves a type error

* solves a CI error

* refactors auths

* solves a failing test after pulling from main lol

* resolves token naming to follow best practices when using hf/huggingface

* fixes curly lints!

* fixes failing tests for google api from main

* solves merge conflicts

* solves failing tests with a defensive check for an 'undefined' OpenRouter API key

* fix: preserve Hugging Face auth-choice intent and token behavior (#13472) (thanks @Josephrp)

* test: resolve auth-choice cherry-pick conflict cleanup (#13472)

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Author: Tonic
Date: 2026-02-13 16:18:16 +01:00
Committed by: GitHub
Parent: e50ce897b0
Commit: 08b7932df0
27 changed files with 1617 additions and 355 deletions


@@ -19,6 +19,7 @@ Docs: https://docs.openclaw.ai
- Sandbox: pass configured `sandbox.docker.env` variables to sandbox containers at `docker create` time. (#15138) Thanks @stevebot-alive.
- Onboarding/CLI: restore terminal state without resuming paused `stdin`, so onboarding exits cleanly after choosing Web UI and the installer returns instead of appearing stuck.
- Onboarding/Providers: add vLLM as an onboarding provider with model discovery, auth profile wiring, and non-interactive auth-choice validation. (#12577) Thanks @gejifeng.
- Onboarding/Providers: preserve Hugging Face auth intent in auth-choice remapping (`tokenProvider=huggingface` with `authChoice=apiKey`) and skip env-override prompts when an explicit token is provided. (#13472) Thanks @Josephrp.
- macOS Voice Wake: fix a crash in trigger trimming for CJK/Unicode transcripts by matching and slicing on original-string ranges instead of transformed-string indices. (#11052) Thanks @Flash-LHR.
- Heartbeat: prevent scheduler silent-death races during runner reloads, preserve retry cooldown backoff under wake bursts, and prioritize user/action wake causes over interval/retry reasons when coalescing. (#15108) Thanks @joeykrug.
- Outbound targets: fail closed for WhatsApp/Twitch/Google Chat fallback paths so invalid or missing targets are dropped instead of rerouted, and align resolver hints with strict target requirements. (#13578) Thanks @mcaxtr.


@@ -120,6 +120,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
- Mistral: `mistral` (`MISTRAL_API_KEY`)
- GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
- Hugging Face Inference: `huggingface` (`HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN`) — OpenAI-compatible router; example model: `huggingface/deepseek-ai/DeepSeek-R1`; CLI: `openclaw onboard --auth-choice huggingface-api-key`. See [Hugging Face (Inference)](/providers/huggingface).
## Providers via `models.providers` (custom/base URL)


@@ -0,0 +1,209 @@
---
summary: "Hugging Face Inference setup (auth + model selection)"
read_when:
- You want to use Hugging Face Inference with OpenClaw
- You need the HF token env var or CLI auth choice
title: "Hugging Face (Inference)"
---
# Hugging Face (Inference)
[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) offer OpenAI-compatible chat completions through a single router API. You get access to many models (DeepSeek, Llama, and more) with one token. OpenClaw uses the **OpenAI-compatible endpoint** (chat completions only); for text-to-image, embeddings, or speech use the [HF inference clients](https://huggingface.co/docs/api-inference/quicktour) directly.
- Provider: `huggingface`
- Auth: `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` (fine-grained token with **Make calls to Inference Providers**)
- API: OpenAI-compatible (`https://router.huggingface.co/v1`)
- Billing: Single HF token; [pricing](https://huggingface.co/docs/inference-providers/pricing) follows provider rates with a free tier.
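To see what the router speaks, here is a minimal sketch (plain `fetch`, not OpenClaw code) that sends one chat completion through the router; the model id and token env vars are the ones used throughout this page:
```ts
// Minimal sketch: OpenAI-compatible chat completion via the HF router.
// Assumes HUGGINGFACE_HUB_TOKEN (or HF_TOKEN) holds a valid fine-grained token.
const token = process.env.HUGGINGFACE_HUB_TOKEN ?? process.env.HF_TOKEN;
const res = await fetch("https://router.huggingface.co/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "deepseek-ai/DeepSeek-R1", // Hub-style id; suffixes like :cheapest also work
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```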
## Quick start
1. Create a fine-grained token at [Hugging Face → Settings → Tokens](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) with the **Make calls to Inference Providers** permission.
2. Run onboarding and choose **Hugging Face** in the provider dropdown, then enter your API key when prompted:
```bash
openclaw onboard --auth-choice huggingface-api-key
```
3. In the **Default Hugging Face model** dropdown, pick the model you want (the list is loaded from the Inference API when you have a valid token; otherwise a built-in list is shown). Your choice is saved as the default model.
4. You can also set or change the default model later in config:
```json5
{
agents: {
defaults: {
model: { primary: "huggingface/deepseek-ai/DeepSeek-R1" },
},
},
}
```
## Non-interactive example
```bash
openclaw onboard --non-interactive \
--mode local \
--auth-choice huggingface-api-key \
--huggingface-api-key "$HF_TOKEN"
```
This will set `huggingface/deepseek-ai/DeepSeek-R1` as the default model.
## Environment note
If the Gateway runs as a daemon (launchd/systemd), make sure `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
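As a quick self-check, the lookup order matches this sketch (`HUGGINGFACE_HUB_TOKEN` wins when both are set):
```ts
// Sketch: the token lookup order OpenClaw's resolver uses.
const hfToken =
  process.env.HUGGINGFACE_HUB_TOKEN?.trim() || process.env.HF_TOKEN?.trim() || "";
if (!hfToken) {
  console.warn("No Hugging Face token visible to this process (daemon env?).");
}
```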
## Model discovery and onboarding dropdown
OpenClaw discovers models by calling the **Inference endpoint directly**:
```bash
GET https://router.huggingface.co/v1/models
```
(Optional: send `Authorization: Bearer $HUGGINGFACE_HUB_TOKEN` or `$HF_TOKEN` for the full list; some endpoints return a subset without auth.) The response is OpenAI-style `{ "object": "list", "data": [ { "id": "Qwen/Qwen3-8B", "owned_by": "Qwen", ... }, ... ] }`.
When you configure a Hugging Face API key (via onboarding, `HUGGINGFACE_HUB_TOKEN`, or `HF_TOKEN`), OpenClaw uses this GET to discover available chat-completion models. During **interactive onboarding**, after you enter your token you see a **Default Hugging Face model** dropdown populated from that list (or the built-in catalog if the request fails). At runtime (e.g. Gateway startup), when a key is present, OpenClaw again calls **GET** `https://router.huggingface.co/v1/models` to refresh the catalog. The list is merged with a built-in catalog (for metadata like context window and cost). If the request fails or no key is set, only the built-in catalog is used.
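A minimal sketch of that discovery call (plain `fetch`; the token header is optional but recommended):
```ts
// Sketch: list chat-completion model ids from the router. Some endpoints
// return only a subset without auth, so send the token when you have one.
const token = process.env.HUGGINGFACE_HUB_TOKEN ?? process.env.HF_TOKEN;
const res = await fetch("https://router.huggingface.co/v1/models", {
  headers: token ? { Authorization: `Bearer ${token}` } : undefined,
});
const body = (await res.json()) as { data?: Array<{ id: string }> };
console.log((body.data ?? []).map((m) => m.id));
```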
## Model names and editable options
- **Name from API:** The model display name is **hydrated from GET /v1/models** when the API returns `name`, `title`, or `display_name`; otherwise it is derived from the model id (e.g. `deepseek-ai/DeepSeek-R1` → “DeepSeek R1”). A condensed sketch of this derivation appears after this list.
- **Override display name:** You can set a custom label per model in config so it appears the way you want in the CLI and UI:
```json5
{
agents: {
defaults: {
models: {
"huggingface/deepseek-ai/DeepSeek-R1": { alias: "DeepSeek R1 (fast)" },
"huggingface/deepseek-ai/DeepSeek-R1:cheapest": { alias: "DeepSeek R1 (cheap)" },
},
},
},
}
```
- **Provider / policy selection:** Append a suffix to the **model id** to choose how the router picks the backend:
- **`:fastest`** — highest throughput (router picks; provider choice is **locked** — no interactive backend picker).
- **`:cheapest`** — lowest cost per output token (router picks; provider choice is **locked**).
- **`:provider`** — force a specific backend (e.g. `:sambanova`, `:together`).
When you select **:cheapest** or **:fastest** (e.g. in the onboarding model dropdown), the provider is locked: the router decides by cost or speed and no optional “prefer specific backend” step is shown. You can add these as separate entries in `models.providers.huggingface.models` or set `model.primary` with the suffix. You can also set your default order in [Inference Provider settings](https://hf.co/settings/inference-providers) (no suffix = use that order).
- **Config merge:** Existing entries in `models.providers.huggingface.models` (e.g. in `models.json`) are kept when config is merged. So any custom `name`, `alias`, or model options you set there are preserved.
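A condensed sketch of the name derivation and policy-lock behavior described above, mirroring the helpers this feature ships (function names here are illustrative, not the exported API):
```ts
// Derive a display name from a Hub-style id: "deepseek-ai/DeepSeek-R1" -> "DeepSeek R1".
function inferName(id: string): string {
  const base = id.split("/").pop() ?? id;
  return base.replace(/-/g, " ").replace(/\b(\w)/g, (c) => c.toUpperCase());
}

// :cheapest and :fastest lock the provider choice to the router's policy.
function isPolicyLocked(ref: string): boolean {
  return ref.endsWith(":cheapest") || ref.endsWith(":fastest");
}

console.log(inferName("deepseek-ai/DeepSeek-R1")); // "DeepSeek R1"
console.log(isPolicyLocked("huggingface/Qwen/Qwen3-8B:cheapest")); // true
console.log(isPolicyLocked("huggingface/deepseek-ai/DeepSeek-R1:together")); // false
```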
## Model IDs and configuration examples
Model refs use the form `huggingface/<org>/<model>` (Hub-style IDs). The list below is from **GET** `https://router.huggingface.co/v1/models`; your catalog may include more.
**Example IDs (from the inference endpoint):**
| Model | Ref (prefix with `huggingface/`) |
| ---------------------- | ----------------------------------- |
| DeepSeek R1 | `deepseek-ai/DeepSeek-R1` |
| DeepSeek V3.2 | `deepseek-ai/DeepSeek-V3.2` |
| Qwen3 8B | `Qwen/Qwen3-8B` |
| Qwen2.5 7B Instruct | `Qwen/Qwen2.5-7B-Instruct` |
| Qwen3 32B | `Qwen/Qwen3-32B` |
| Llama 3.3 70B Instruct | `meta-llama/Llama-3.3-70B-Instruct` |
| Llama 3.1 8B Instruct | `meta-llama/Llama-3.1-8B-Instruct` |
| GPT-OSS 120B | `openai/gpt-oss-120b` |
| GLM 4.7 | `zai-org/GLM-4.7` |
| Kimi K2.5 | `moonshotai/Kimi-K2.5` |
You can append `:fastest`, `:cheapest`, or `:provider` (e.g. `:together`, `:sambanova`) to the model id. Set your default order in [Inference Provider settings](https://hf.co/settings/inference-providers); see [Inference Providers](https://huggingface.co/docs/inference-providers) and **GET** `https://router.huggingface.co/v1/models` for the full list.
### Complete configuration examples
**Primary DeepSeek R1 with Qwen fallback:**
```json5
{
agents: {
defaults: {
model: {
primary: "huggingface/deepseek-ai/DeepSeek-R1",
fallbacks: ["huggingface/Qwen/Qwen3-8B"],
},
models: {
"huggingface/deepseek-ai/DeepSeek-R1": { alias: "DeepSeek R1" },
"huggingface/Qwen/Qwen3-8B": { alias: "Qwen3 8B" },
},
},
},
}
```
**Qwen as default, with :cheapest and :fastest variants:**
```json5
{
agents: {
defaults: {
model: { primary: "huggingface/Qwen/Qwen3-8B" },
models: {
"huggingface/Qwen/Qwen3-8B": { alias: "Qwen3 8B" },
"huggingface/Qwen/Qwen3-8B:cheapest": { alias: "Qwen3 8B (cheapest)" },
"huggingface/Qwen/Qwen3-8B:fastest": { alias: "Qwen3 8B (fastest)" },
},
},
},
}
```
**DeepSeek + Llama + GPT-OSS with aliases:**
```json5
{
agents: {
defaults: {
model: {
primary: "huggingface/deepseek-ai/DeepSeek-V3.2",
fallbacks: [
"huggingface/meta-llama/Llama-3.3-70B-Instruct",
"huggingface/openai/gpt-oss-120b",
],
},
models: {
"huggingface/deepseek-ai/DeepSeek-V3.2": { alias: "DeepSeek V3.2" },
"huggingface/meta-llama/Llama-3.3-70B-Instruct": { alias: "Llama 3.3 70B" },
"huggingface/openai/gpt-oss-120b": { alias: "GPT-OSS 120B" },
},
},
},
}
```
**Force a specific backend with :provider:**
```json5
{
agents: {
defaults: {
model: { primary: "huggingface/deepseek-ai/DeepSeek-R1:together" },
models: {
"huggingface/deepseek-ai/DeepSeek-R1:together": { alias: "DeepSeek R1 (Together)" },
},
},
},
}
```
**Multiple Qwen and DeepSeek models with policy suffixes:**
```json5
{
agents: {
defaults: {
model: { primary: "huggingface/Qwen/Qwen2.5-7B-Instruct:cheapest" },
models: {
"huggingface/Qwen/Qwen2.5-7B-Instruct": { alias: "Qwen2.5 7B" },
"huggingface/Qwen/Qwen2.5-7B-Instruct:cheapest": { alias: "Qwen2.5 7B (cheap)" },
"huggingface/deepseek-ai/DeepSeek-R1:fastest": { alias: "DeepSeek R1 (fast)" },
"huggingface/meta-llama/Llama-3.1-8B-Instruct": { alias: "Llama 3.1 8B" },
},
},
},
}
```


@@ -51,6 +51,7 @@ See [Venice AI](/providers/venice).
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice (Venice AI, privacy-focused)](/providers/venice)
- [Hugging Face (Inference)](/providers/huggingface)
- [Ollama (local models)](/providers/ollama)
- [vLLM (local models)](/providers/vllm)
- [Qianfan](/providers/qianfan)


@@ -0,0 +1,44 @@
import { describe, expect, it } from "vitest";
import {
discoverHuggingfaceModels,
HUGGINGFACE_MODEL_CATALOG,
buildHuggingfaceModelDefinition,
isHuggingfacePolicyLocked,
} from "./huggingface-models.js";
describe("huggingface-models", () => {
it("buildHuggingfaceModelDefinition returns config with required fields", () => {
const entry = HUGGINGFACE_MODEL_CATALOG[0];
const def = buildHuggingfaceModelDefinition(entry);
expect(def.id).toBe(entry.id);
expect(def.name).toBe(entry.name);
expect(def.reasoning).toBe(entry.reasoning);
expect(def.input).toEqual(entry.input);
expect(def.cost).toEqual(entry.cost);
expect(def.contextWindow).toBe(entry.contextWindow);
expect(def.maxTokens).toBe(entry.maxTokens);
});
it("discoverHuggingfaceModels returns static catalog when apiKey is empty", async () => {
const models = await discoverHuggingfaceModels("");
expect(models).toHaveLength(HUGGINGFACE_MODEL_CATALOG.length);
expect(models.map((m) => m.id)).toEqual(HUGGINGFACE_MODEL_CATALOG.map((m) => m.id));
});
it("discoverHuggingfaceModels returns static catalog in test env (VITEST)", async () => {
const models = await discoverHuggingfaceModels("hf_test_token");
expect(models).toHaveLength(HUGGINGFACE_MODEL_CATALOG.length);
expect(models[0].id).toBe("deepseek-ai/DeepSeek-R1");
});
describe("isHuggingfacePolicyLocked", () => {
it("returns true for :cheapest and :fastest refs", () => {
expect(isHuggingfacePolicyLocked("huggingface/deepseek-ai/DeepSeek-R1:cheapest")).toBe(true);
expect(isHuggingfacePolicyLocked("huggingface/deepseek-ai/DeepSeek-R1:fastest")).toBe(true);
});
it("returns false for base ref and :provider refs", () => {
expect(isHuggingfacePolicyLocked("huggingface/deepseek-ai/DeepSeek-R1")).toBe(false);
expect(isHuggingfacePolicyLocked("huggingface/foo:together")).toBe(false);
});
});
});


@@ -0,0 +1,229 @@
import type { ModelDefinitionConfig } from "../config/types.models.js";
/** Hugging Face Inference Providers (router) — OpenAI-compatible chat completions. */
export const HUGGINGFACE_BASE_URL = "https://router.huggingface.co/v1";
/** Router policy suffixes: router picks backend by cost or speed; no specific provider selection. */
export const HUGGINGFACE_POLICY_SUFFIXES = ["cheapest", "fastest"] as const;
/**
* True when the model ref uses :cheapest or :fastest. When true, provider choice is locked
* (router decides); do not show an interactive "prefer specific backend" option.
*/
export function isHuggingfacePolicyLocked(modelRef: string): boolean {
const ref = String(modelRef).trim();
return HUGGINGFACE_POLICY_SUFFIXES.some((s) => ref.endsWith(`:${s}`) || ref === s);
}
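// Example: isHuggingfacePolicyLocked("huggingface/deepseek-ai/DeepSeek-R1:cheapest") === true,
// while a :provider suffix such as ":together" is not policy-locked.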
/** Default cost when not in static catalog (HF pricing varies by provider). */
const HUGGINGFACE_DEFAULT_COST = {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
};
/** Defaults for models discovered from GET /v1/models. */
const HUGGINGFACE_DEFAULT_CONTEXT_WINDOW = 131072;
const HUGGINGFACE_DEFAULT_MAX_TOKENS = 8192;
/**
* Shape of a single model entry from GET https://router.huggingface.co/v1/models.
* Aligned with the Inference Providers API response (object, data[].id, owned_by, architecture, providers).
*/
interface HFModelEntry {
id: string;
object?: string;
created?: number;
/** Organisation that owns the model (e.g. "Qwen", "deepseek-ai"). Used for display when name/title absent. */
owned_by?: string;
/** Display name from API when present (not all responses include this). */
name?: string;
title?: string;
display_name?: string;
/** Input/output modalities; we use input_modalities for ModelDefinitionConfig.input. */
architecture?: {
input_modalities?: string[];
output_modalities?: string[];
[key: string]: unknown;
};
/** Backend providers; we use the first provider with context_length when available. */
providers?: Array<{
provider?: string;
context_length?: number;
status?: string;
pricing?: { input?: number; output?: number; [key: string]: unknown };
[key: string]: unknown;
}>;
[key: string]: unknown;
}
/** Response shape from GET https://router.huggingface.co/v1/models (OpenAI-style list). */
interface OpenAIListModelsResponse {
object?: string;
data?: HFModelEntry[];
}
export const HUGGINGFACE_MODEL_CATALOG: ModelDefinitionConfig[] = [
{
id: "deepseek-ai/DeepSeek-R1",
name: "DeepSeek R1",
reasoning: true,
input: ["text"],
contextWindow: 131072,
maxTokens: 8192,
cost: { input: 3.0, output: 7.0, cacheRead: 3.0, cacheWrite: 3.0 },
},
{
id: "deepseek-ai/DeepSeek-V3.1",
name: "DeepSeek V3.1",
reasoning: false,
input: ["text"],
contextWindow: 131072,
maxTokens: 8192,
cost: { input: 0.6, output: 1.25, cacheRead: 0.6, cacheWrite: 0.6 },
},
{
id: "meta-llama/Llama-3.3-70B-Instruct-Turbo",
name: "Llama 3.3 70B Instruct Turbo",
reasoning: false,
input: ["text"],
contextWindow: 131072,
maxTokens: 8192,
cost: { input: 0.88, output: 0.88, cacheRead: 0.88, cacheWrite: 0.88 },
},
{
id: "openai/gpt-oss-120b",
name: "GPT-OSS 120B",
reasoning: false,
input: ["text"],
contextWindow: 131072,
maxTokens: 8192,
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
},
];
export function buildHuggingfaceModelDefinition(
model: (typeof HUGGINGFACE_MODEL_CATALOG)[number],
): ModelDefinitionConfig {
return {
id: model.id,
name: model.name,
reasoning: model.reasoning,
input: model.input,
cost: model.cost,
contextWindow: model.contextWindow,
maxTokens: model.maxTokens,
};
}
/**
* Infer reasoning and display name from Hub-style model id (e.g. "deepseek-ai/DeepSeek-R1").
*/
function inferredMetaFromModelId(id: string): { name: string; reasoning: boolean } {
const base = id.split("/").pop() ?? id;
const reasoning = /r1|reasoning|thinking|reason/i.test(id) || /-\d+[tb]?-thinking/i.test(base);
const name = base.replace(/-/g, " ").replace(/\b(\w)/g, (c) => c.toUpperCase());
return { name, reasoning };
}
/** Prefer API-supplied display name, then owned_by/id, then inferred from id. */
function displayNameFromApiEntry(entry: HFModelEntry, inferredName: string): string {
const fromApi =
(typeof entry.name === "string" && entry.name.trim()) ||
(typeof entry.title === "string" && entry.title.trim()) ||
(typeof entry.display_name === "string" && entry.display_name.trim());
if (fromApi) {
return fromApi;
}
if (typeof entry.owned_by === "string" && entry.owned_by.trim()) {
const base = entry.id.split("/").pop() ?? entry.id;
return `${entry.owned_by.trim()}/${base}`;
}
return inferredName;
}
/**
* Discover chat-completion models from Hugging Face Inference Providers (GET /v1/models).
* Requires a valid HF token. Falls back to static catalog on failure or in test env.
*/
export async function discoverHuggingfaceModels(apiKey: string): Promise<ModelDefinitionConfig[]> {
if (process.env.VITEST === "true" || process.env.NODE_ENV === "test") {
return HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
}
const trimmedKey = apiKey?.trim();
if (!trimmedKey) {
return HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
}
try {
// GET https://router.huggingface.co/v1/models — response: { object, data: [{ id, owned_by, architecture: { input_modalities }, providers: [{ provider, context_length?, pricing? }] }] }. POST /v1/chat/completions requires Authorization.
const response = await fetch(`${HUGGINGFACE_BASE_URL}/models`, {
signal: AbortSignal.timeout(10_000),
headers: {
Authorization: `Bearer ${trimmedKey}`,
"Content-Type": "application/json",
},
});
if (!response.ok) {
console.warn(
`[huggingface-models] GET /v1/models failed: HTTP ${response.status}, using static catalog`,
);
return HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
}
const body = (await response.json()) as OpenAIListModelsResponse;
const data = body?.data;
if (!Array.isArray(data) || data.length === 0) {
console.warn("[huggingface-models] No models in response, using static catalog");
return HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
}
const catalogById = new Map(HUGGINGFACE_MODEL_CATALOG.map((m) => [m.id, m] as const));
const seen = new Set<string>();
const models: ModelDefinitionConfig[] = [];
for (const entry of data) {
const id = typeof entry?.id === "string" ? entry.id.trim() : "";
if (!id || seen.has(id)) {
continue;
}
seen.add(id);
const catalogEntry = catalogById.get(id);
if (catalogEntry) {
models.push(buildHuggingfaceModelDefinition(catalogEntry));
} else {
const inferred = inferredMetaFromModelId(id);
const name = displayNameFromApiEntry(entry, inferred.name);
const modalities = entry.architecture?.input_modalities;
const input: Array<"text" | "image"> =
Array.isArray(modalities) && modalities.includes("image") ? ["text", "image"] : ["text"];
const providers = Array.isArray(entry.providers) ? entry.providers : [];
const providerWithContext = providers.find(
(p) => typeof p?.context_length === "number" && p.context_length > 0,
);
const contextLength =
providerWithContext?.context_length ?? HUGGINGFACE_DEFAULT_CONTEXT_WINDOW;
models.push({
id,
name,
reasoning: inferred.reasoning,
input,
cost: HUGGINGFACE_DEFAULT_COST,
contextWindow: contextLength,
maxTokens: HUGGINGFACE_DEFAULT_MAX_TOKENS,
});
}
}
return models.length > 0
? models
: HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
} catch (error) {
console.warn(`[huggingface-models] Discovery failed: ${String(error)}, using static catalog`);
return HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
}
}
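// Usage sketch (illustrative): refresh the catalog at startup when a token is configured.
//   const token = process.env.HUGGINGFACE_HUB_TOKEN ?? process.env.HF_TOKEN ?? "";
//   const models = await discoverHuggingfaceModels(token);
//   // -> discovered list, or the static catalog on failure / missing key.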


@@ -532,4 +532,79 @@ describe("getApiKeyForModel", () => {
}
}
});
it("resolveEnvApiKey('huggingface') returns HUGGINGFACE_HUB_TOKEN when set", async () => {
const prevHub = process.env.HUGGINGFACE_HUB_TOKEN;
const prevHf = process.env.HF_TOKEN;
try {
delete process.env.HF_TOKEN;
process.env.HUGGINGFACE_HUB_TOKEN = "hf_hub_xyz";
vi.resetModules();
const { resolveEnvApiKey } = await import("./model-auth.js");
const resolved = resolveEnvApiKey("huggingface");
expect(resolved?.apiKey).toBe("hf_hub_xyz");
expect(resolved?.source).toContain("HUGGINGFACE_HUB_TOKEN");
} finally {
if (prevHub === undefined) {
delete process.env.HUGGINGFACE_HUB_TOKEN;
} else {
process.env.HUGGINGFACE_HUB_TOKEN = prevHub;
}
if (prevHf === undefined) {
delete process.env.HF_TOKEN;
} else {
process.env.HF_TOKEN = prevHf;
}
}
});
it("resolveEnvApiKey('huggingface') prefers HUGGINGFACE_HUB_TOKEN over HF_TOKEN when both set", async () => {
const prevHub = process.env.HUGGINGFACE_HUB_TOKEN;
const prevHf = process.env.HF_TOKEN;
try {
process.env.HUGGINGFACE_HUB_TOKEN = "hf_hub_first";
process.env.HF_TOKEN = "hf_second";
vi.resetModules();
const { resolveEnvApiKey } = await import("./model-auth.js");
const resolved = resolveEnvApiKey("huggingface");
expect(resolved?.apiKey).toBe("hf_hub_first");
expect(resolved?.source).toContain("HUGGINGFACE_HUB_TOKEN");
} finally {
if (prevHub === undefined) {
delete process.env.HUGGINGFACE_HUB_TOKEN;
} else {
process.env.HUGGINGFACE_HUB_TOKEN = prevHub;
}
if (prevHf === undefined) {
delete process.env.HF_TOKEN;
} else {
process.env.HF_TOKEN = prevHf;
}
}
});
it("resolveEnvApiKey('huggingface') returns HF_TOKEN when only HF_TOKEN set", async () => {
const prevHub = process.env.HUGGINGFACE_HUB_TOKEN;
const prevHf = process.env.HF_TOKEN;
try {
delete process.env.HUGGINGFACE_HUB_TOKEN;
process.env.HF_TOKEN = "hf_abc123";
vi.resetModules();
const { resolveEnvApiKey } = await import("./model-auth.js");
const resolved = resolveEnvApiKey("huggingface");
expect(resolved?.apiKey).toBe("hf_abc123");
expect(resolved?.source).toContain("HF_TOKEN");
} finally {
if (prevHub === undefined) {
delete process.env.HUGGINGFACE_HUB_TOKEN;
} else {
process.env.HUGGINGFACE_HUB_TOKEN = prevHub;
}
if (prevHf === undefined) {
delete process.env.HF_TOKEN;
} else {
process.env.HF_TOKEN = prevHf;
}
}
});
});


@@ -287,6 +287,10 @@ export function resolveEnvApiKey(provider: string): EnvApiKeyResult | null {
return pick("KIMI_API_KEY") ?? pick("KIMICODE_API_KEY");
}
if (normalized === "huggingface") {
return pick("HUGGINGFACE_HUB_TOKEN") ?? pick("HF_TOKEN");
}
const envMap: Record<string, string> = {
openai: "OPENAI_API_KEY",
google: "GEMINI_API_KEY",


@@ -10,6 +10,12 @@ import {
buildCloudflareAiGatewayModelDefinition,
resolveCloudflareAiGatewayBaseUrl,
} from "./cloudflare-ai-gateway.js";
import {
discoverHuggingfaceModels,
HUGGINGFACE_BASE_URL,
HUGGINGFACE_MODEL_CATALOG,
buildHuggingfaceModelDefinition,
} from "./huggingface-models.js";
import { resolveAwsSdkEnvVarName, resolveEnvApiKey } from "./model-auth.js";
import {
buildSyntheticModelDefinition,
@@ -542,6 +548,25 @@ async function buildOllamaProvider(configuredBaseUrl?: string): Promise<Provider
};
}
async function buildHuggingfaceProvider(apiKey?: string): Promise<ProviderConfig> {
// The key may be an env var NAME (from resolveEnvApiKeyVarName) or a literal token
// (from an auth profile). Resolve names to values for discovery, since
// GET /v1/models requires a Bearer token. Guard against undefined before trimming.
const trimmed = apiKey?.trim() ?? "";
const resolvedSecret =
trimmed === ""
? ""
: /^[A-Z][A-Z0-9_]*$/.test(trimmed)
? (process.env[trimmed] ?? "").trim()
: trimmed;
const models =
resolvedSecret !== ""
? await discoverHuggingfaceModels(resolvedSecret)
: HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
return {
baseUrl: HUGGINGFACE_BASE_URL,
api: "openai-completions",
models,
};
}
function buildTogetherProvider(): ProviderConfig {
return {
baseUrl: TOGETHER_BASE_URL,
@@ -715,6 +740,17 @@ export async function resolveImplicitProviders(params: {
};
}
const huggingfaceKey =
resolveEnvApiKeyVarName("huggingface") ??
resolveApiKeyFromProfiles({ provider: "huggingface", store: authStore });
if (huggingfaceKey) {
const hfProvider = await buildHuggingfaceProvider(huggingfaceKey);
providers.huggingface = {
...hfProvider,
apiKey: huggingfaceKey,
};
}
const qianfanKey =
resolveEnvApiKeyVarName("qianfan") ??
resolveApiKeyFromProfiles({ provider: "qianfan", store: authStore });


@@ -58,7 +58,8 @@ export function registerOnboardCommand(program: Command) {
.option("--mode <mode>", "Wizard mode: local|remote")
.option(
"--auth-choice <choice>",
"Auth: setup-token|token|chutes|vllm|openai-codex|openai-api-key|xai-api-key|qianfan-api-key|openrouter-api-key|litellm-api-key|ai-gateway-api-key|cloudflare-ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|zai-coding-global|zai-coding-cn|zai-global|zai-cn|xiaomi-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|custom-api-key|skip|together-api-key",
"Auth: setup-token|token|chutes|openai-codex|openai-api-key|xai-api-key|qianfan-api-key|openrouter-api-key|litellm-api-key|ai-gateway-api-key|cloudflare-ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|zai-coding-global|zai-coding-cn|zai-global|zai-cn|xiaomi-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|custom-api-key|skip|together-api-key|huggingface-api-key",
"Auth: setup-token|token|chutes|vllm|openai-codex|openai-api-key|xai-api-key|qianfan-api-key|openrouter-api-key|litellm-api-key|ai-gateway-api-key|cloudflare-ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|zai-coding-global|zai-coding-cn|zai-global|zai-cn|xiaomi-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|custom-api-key|skip|together-api-key|huggingface-api-key",
)
.option(
"--token-provider <id>",
@@ -86,6 +87,7 @@ export function registerOnboardCommand(program: Command) {
.option("--synthetic-api-key <key>", "Synthetic API key")
.option("--venice-api-key <key>", "Venice API key")
.option("--together-api-key <key>", "Together AI API key")
.option("--huggingface-api-key <key>", "Hugging Face API key (HF token)")
.option("--opencode-zen-api-key <key>", "OpenCode Zen API key")
.option("--xai-api-key <key>", "xAI API key")
.option("--litellm-api-key <key>", "LiteLLM API key")
@@ -153,6 +155,7 @@ export function registerOnboardCommand(program: Command) {
syntheticApiKey: opts.syntheticApiKey as string | undefined,
veniceApiKey: opts.veniceApiKey as string | undefined,
togetherApiKey: opts.togetherApiKey as string | undefined,
huggingfaceApiKey: opts.huggingfaceApiKey as string | undefined,
opencodeZenApiKey: opts.opencodeZenApiKey as string | undefined,
xaiApiKey: opts.xaiApiKey as string | undefined,
litellmApiKey: opts.litellmApiKey as string | undefined,


@@ -1,35 +1,13 @@
import type { AuthProfileStore } from "../agents/auth-profiles.js";
import type { AuthChoice } from "./onboard-types.js";
import type { AuthChoice, AuthChoiceGroupId } from "./onboard-types.js";
export type { AuthChoiceGroupId };
export type AuthChoiceOption = {
value: AuthChoice;
label: string;
hint?: string;
};
export type AuthChoiceGroupId =
| "openai"
| "anthropic"
| "vllm"
| "google"
| "copilot"
| "openrouter"
| "litellm"
| "ai-gateway"
| "cloudflare-ai-gateway"
| "moonshot"
| "zai"
| "xiaomi"
| "opencode-zen"
| "minimax"
| "synthetic"
| "venice"
| "qwen"
| "together"
| "qianfan"
| "xai"
| "custom";
export type AuthChoiceGroup = {
value: AuthChoiceGroupId;
label: string;
@@ -145,6 +123,12 @@ const AUTH_CHOICE_GROUP_DEFS: {
hint: "API key",
choices: ["together-api-key"],
},
{
value: "huggingface",
label: "Hugging Face",
hint: "Inference API (HF token)",
choices: ["huggingface-api-key"],
},
{
value: "venice",
label: "Venice AI",
@@ -238,6 +222,11 @@ export function buildAuthChoiceOptions(params: {
label: "Together AI API key",
hint: "Access to Llama, DeepSeek, Qwen, and more open models",
});
options.push({
value: "huggingface-api-key",
label: "Hugging Face API key (HF token)",
hint: "Inference Providers — OpenAI-compatible chat",
});
options.push({
value: "github-copilot",
label: "GitHub Copilot (GitHub device login)",


@@ -6,6 +6,8 @@ import {
normalizeApiKeyInput,
validateApiKeyInput,
} from "./auth-choice.api-key.js";
import { applyAuthChoiceHuggingface } from "./auth-choice.apply.huggingface.js";
import { applyAuthChoiceOpenRouter } from "./auth-choice.apply.openrouter.js";
import { applyDefaultModelChoice } from "./auth-choice.default-model.js";
import {
applyGoogleGeminiModelDefault,
@@ -27,8 +29,6 @@ import {
applyMoonshotProviderConfigCn,
applyOpencodeZenConfig,
applyOpencodeZenProviderConfig,
applyOpenrouterConfig,
applyOpenrouterProviderConfig,
applySyntheticConfig,
applySyntheticProviderConfig,
applyTogetherConfig,
@@ -46,7 +46,6 @@ import {
QIANFAN_DEFAULT_MODEL_REF,
KIMI_CODING_MODEL_REF,
MOONSHOT_DEFAULT_MODEL_REF,
OPENROUTER_DEFAULT_MODEL_REF,
SYNTHETIC_DEFAULT_MODEL_REF,
TOGETHER_DEFAULT_MODEL_REF,
VENICE_DEFAULT_MODEL_REF,
@@ -59,7 +58,6 @@ import {
setKimiCodingApiKey,
setMoonshotApiKey,
setOpencodeZenApiKey,
setOpenrouterApiKey,
setSyntheticApiKey,
setTogetherApiKey,
setVeniceApiKey,
@@ -120,6 +118,8 @@ export async function applyAuthChoiceApiProviders(
authChoice = "venice-api-key";
} else if (params.opts.tokenProvider === "together") {
authChoice = "together-api-key";
} else if (params.opts.tokenProvider === "huggingface") {
authChoice = "huggingface-api-key";
} else if (params.opts.tokenProvider === "opencode") {
authChoice = "opencode-zen";
} else if (params.opts.tokenProvider === "qianfan") {
@@ -128,81 +128,7 @@ export async function applyAuthChoiceApiProviders(
}
if (authChoice === "openrouter-api-key") {
const store = ensureAuthProfileStore(params.agentDir, {
allowKeychainPrompt: false,
});
const profileOrder = resolveAuthProfileOrder({
cfg: nextConfig,
store,
provider: "openrouter",
});
const existingProfileId = profileOrder.find((profileId) => Boolean(store.profiles[profileId]));
const existingCred = existingProfileId ? store.profiles[existingProfileId] : undefined;
let profileId = "openrouter:default";
let mode: "api_key" | "oauth" | "token" = "api_key";
let hasCredential = false;
if (existingProfileId && existingCred?.type) {
profileId = existingProfileId;
mode =
existingCred.type === "oauth"
? "oauth"
: existingCred.type === "token"
? "token"
: "api_key";
hasCredential = true;
}
if (!hasCredential && params.opts?.token && params.opts?.tokenProvider === "openrouter") {
await setOpenrouterApiKey(normalizeApiKeyInput(params.opts.token), params.agentDir);
hasCredential = true;
}
if (!hasCredential) {
const envKey = resolveEnvApiKey("openrouter");
if (envKey) {
const useExisting = await params.prompter.confirm({
message: `Use existing OPENROUTER_API_KEY (${envKey.source}, ${formatApiKeyPreview(envKey.apiKey)})?`,
initialValue: true,
});
if (useExisting) {
await setOpenrouterApiKey(envKey.apiKey, params.agentDir);
hasCredential = true;
}
}
}
if (!hasCredential) {
const key = await params.prompter.text({
message: "Enter OpenRouter API key",
validate: validateApiKeyInput,
});
await setOpenrouterApiKey(normalizeApiKeyInput(String(key ?? "")), params.agentDir);
hasCredential = true;
}
if (hasCredential) {
nextConfig = applyAuthProfileConfig(nextConfig, {
profileId,
provider: "openrouter",
mode,
});
}
{
const applied = await applyDefaultModelChoice({
config: nextConfig,
setDefaultModel: params.setDefaultModel,
defaultModel: OPENROUTER_DEFAULT_MODEL_REF,
applyDefaultConfig: applyOpenrouterConfig,
applyProviderConfig: applyOpenrouterProviderConfig,
noteDefault: OPENROUTER_DEFAULT_MODEL_REF,
noteAgentModel,
prompter: params.prompter,
});
nextConfig = applied.config;
agentModelOverride = applied.agentModelOverride ?? agentModelOverride;
}
return { config: nextConfig, agentModelOverride };
return applyAuthChoiceOpenRouter(params);
}
if (authChoice === "litellm-api-key") {
@@ -993,6 +919,10 @@ export async function applyAuthChoiceApiProviders(
return { config: nextConfig, agentModelOverride };
}
if (authChoice === "huggingface-api-key") {
return applyAuthChoiceHuggingface({ ...params, authChoice });
}
if (authChoice === "qianfan-api-key") {
let hasCredential = false;
if (!hasCredential && params.opts?.token && params.opts?.tokenProvider === "qianfan") {


@@ -0,0 +1,163 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
import type { RuntimeEnv } from "../runtime.js";
import type { WizardPrompter } from "../wizard/prompts.js";
import { applyAuthChoiceHuggingface } from "./auth-choice.apply.huggingface.js";
const noopAsync = async () => {};
const noop = () => {};
const authProfilePathFor = (agentDir: string) => path.join(agentDir, "auth-profiles.json");
describe("applyAuthChoiceHuggingface", () => {
const previousAgentDir = process.env.OPENCLAW_AGENT_DIR;
const previousHfToken = process.env.HF_TOKEN;
const previousHubToken = process.env.HUGGINGFACE_HUB_TOKEN;
let tempStateDir: string | null = null;
afterEach(async () => {
if (tempStateDir) {
await fs.rm(tempStateDir, { recursive: true, force: true });
tempStateDir = null;
}
if (previousAgentDir === undefined) {
delete process.env.OPENCLAW_AGENT_DIR;
} else {
process.env.OPENCLAW_AGENT_DIR = previousAgentDir;
}
if (previousHfToken === undefined) {
delete process.env.HF_TOKEN;
} else {
process.env.HF_TOKEN = previousHfToken;
}
if (previousHubToken === undefined) {
delete process.env.HUGGINGFACE_HUB_TOKEN;
} else {
process.env.HUGGINGFACE_HUB_TOKEN = previousHubToken;
}
});
it("returns null when authChoice is not huggingface-api-key", async () => {
const result = await applyAuthChoiceHuggingface({
authChoice: "openrouter-api-key",
config: {},
prompter: {} as WizardPrompter,
runtime: {} as RuntimeEnv,
setDefaultModel: false,
});
expect(result).toBeNull();
});
it("prompts for key and model, then writes config and auth profile", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-hf-"));
const agentDir = path.join(tempStateDir, "agent");
process.env.OPENCLAW_AGENT_DIR = agentDir;
await fs.mkdir(agentDir, { recursive: true });
const text = vi.fn().mockResolvedValue("hf-test-token");
const select: WizardPrompter["select"] = vi.fn(
async (params) => params.options?.[0]?.value as never,
);
const prompter: WizardPrompter = {
intro: vi.fn(noopAsync),
outro: vi.fn(noopAsync),
note: vi.fn(noopAsync),
select,
multiselect: vi.fn(async () => []),
text,
confirm: vi.fn(async () => false),
progress: vi.fn(() => ({ update: noop, stop: noop })),
};
const runtime: RuntimeEnv = {
log: vi.fn(),
error: vi.fn(),
exit: vi.fn((code: number) => {
throw new Error(`exit:${code}`);
}),
};
const result = await applyAuthChoiceHuggingface({
authChoice: "huggingface-api-key",
config: {},
prompter,
runtime,
setDefaultModel: true,
});
expect(result).not.toBeNull();
expect(result?.config.auth?.profiles?.["huggingface:default"]).toMatchObject({
provider: "huggingface",
mode: "api_key",
});
expect(result?.config.agents?.defaults?.model?.primary).toMatch(/^huggingface\/.+/);
expect(text).toHaveBeenCalledWith(
expect.objectContaining({ message: expect.stringContaining("Hugging Face") }),
);
expect(select).toHaveBeenCalledWith(
expect.objectContaining({ message: "Default Hugging Face model" }),
);
const authProfilePath = authProfilePathFor(agentDir);
const raw = await fs.readFile(authProfilePath, "utf8");
const parsed = JSON.parse(raw) as {
profiles?: Record<string, { key?: string }>;
};
expect(parsed.profiles?.["huggingface:default"]?.key).toBe("hf-test-token");
});
it("does not prompt to reuse env token when opts.token already provided", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-hf-"));
const agentDir = path.join(tempStateDir, "agent");
process.env.OPENCLAW_AGENT_DIR = agentDir;
process.env.HF_TOKEN = "hf-env-token";
delete process.env.HUGGINGFACE_HUB_TOKEN;
await fs.mkdir(agentDir, { recursive: true });
const text = vi.fn().mockResolvedValue("hf-text-token");
const select: WizardPrompter["select"] = vi.fn(
async (params) => params.options?.[0]?.value as never,
);
const confirm = vi.fn(async () => true);
const prompter: WizardPrompter = {
intro: vi.fn(noopAsync),
outro: vi.fn(noopAsync),
note: vi.fn(noopAsync),
select,
multiselect: vi.fn(async () => []),
text,
confirm,
progress: vi.fn(() => ({ update: noop, stop: noop })),
};
const runtime: RuntimeEnv = {
log: vi.fn(),
error: vi.fn(),
exit: vi.fn((code: number) => {
throw new Error(`exit:${code}`);
}),
};
const result = await applyAuthChoiceHuggingface({
authChoice: "huggingface-api-key",
config: {},
prompter,
runtime,
setDefaultModel: true,
opts: {
tokenProvider: "huggingface",
token: "hf-opts-token",
},
});
expect(result).not.toBeNull();
expect(confirm).not.toHaveBeenCalled();
expect(text).not.toHaveBeenCalled();
const authProfilePath = authProfilePathFor(agentDir);
const raw = await fs.readFile(authProfilePath, "utf8");
const parsed = JSON.parse(raw) as {
profiles?: Record<string, { key?: string }>;
};
expect(parsed.profiles?.["huggingface:default"]?.key).toBe("hf-opts-token");
});
});


@@ -0,0 +1,165 @@
import type { ApplyAuthChoiceParams, ApplyAuthChoiceResult } from "./auth-choice.apply.js";
import {
discoverHuggingfaceModels,
isHuggingfacePolicyLocked,
} from "../agents/huggingface-models.js";
import { resolveEnvApiKey } from "../agents/model-auth.js";
import {
formatApiKeyPreview,
normalizeApiKeyInput,
validateApiKeyInput,
} from "./auth-choice.api-key.js";
import { applyDefaultModelChoice } from "./auth-choice.default-model.js";
import { ensureModelAllowlistEntry } from "./model-allowlist.js";
import {
applyAuthProfileConfig,
applyHuggingfaceProviderConfig,
setHuggingfaceApiKey,
HUGGINGFACE_DEFAULT_MODEL_REF,
} from "./onboard-auth.js";
export async function applyAuthChoiceHuggingface(
params: ApplyAuthChoiceParams,
): Promise<ApplyAuthChoiceResult | null> {
if (params.authChoice !== "huggingface-api-key") {
return null;
}
let nextConfig = params.config;
let agentModelOverride: string | undefined;
const noteAgentModel = async (model: string) => {
if (!params.agentId) {
return;
}
await params.prompter.note(
`Default model set to ${model} for agent "${params.agentId}".`,
"Model configured",
);
};
let hasCredential = false;
let hfKey = "";
if (!hasCredential && params.opts?.token && params.opts.tokenProvider === "huggingface") {
hfKey = normalizeApiKeyInput(params.opts.token);
await setHuggingfaceApiKey(hfKey, params.agentDir);
hasCredential = true;
}
if (!hasCredential) {
await params.prompter.note(
[
"Hugging Face Inference Providers offer OpenAI-compatible chat completions.",
"Create a token at: https://huggingface.co/settings/tokens (fine-grained, 'Make calls to Inference Providers').",
].join("\n"),
"Hugging Face",
);
}
if (!hasCredential) {
const envKey = resolveEnvApiKey("huggingface");
if (envKey) {
const useExisting = await params.prompter.confirm({
message: `Use existing Hugging Face token (${envKey.source}, ${formatApiKeyPreview(envKey.apiKey)})?`,
initialValue: true,
});
if (useExisting) {
hfKey = envKey.apiKey;
await setHuggingfaceApiKey(hfKey, params.agentDir);
hasCredential = true;
}
}
}
if (!hasCredential) {
const key = await params.prompter.text({
message: "Enter Hugging Face API key (HF token)",
validate: validateApiKeyInput,
});
hfKey = normalizeApiKeyInput(String(key ?? ""));
await setHuggingfaceApiKey(hfKey, params.agentDir);
}
nextConfig = applyAuthProfileConfig(nextConfig, {
profileId: "huggingface:default",
provider: "huggingface",
mode: "api_key",
});
const models = await discoverHuggingfaceModels(hfKey);
const modelRefPrefix = "huggingface/";
const options: { value: string; label: string }[] = [];
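// Offer each discovered model as a base ref plus :cheapest and :fastest policy variants.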
for (const m of models) {
const baseRef = `${modelRefPrefix}${m.id}`;
const label = m.name ?? m.id;
options.push({ value: baseRef, label });
options.push({ value: `${baseRef}:cheapest`, label: `${label} (cheapest)` });
options.push({ value: `${baseRef}:fastest`, label: `${label} (fastest)` });
}
const defaultRef = HUGGINGFACE_DEFAULT_MODEL_REF;
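// Pin the default ref to the top of the dropdown; sort the rest alphabetically.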
options.sort((a, b) => {
if (a.value === defaultRef) {
return -1;
}
if (b.value === defaultRef) {
return 1;
}
return a.label.localeCompare(b.label, undefined, { sensitivity: "base" });
});
const selectedModelRef =
options.length === 0
? defaultRef
: options.length === 1
? options[0].value
: await params.prompter.select({
message: "Default Hugging Face model",
options,
initialValue: options.some((o) => o.value === defaultRef)
? defaultRef
: options[0].value,
});
if (isHuggingfacePolicyLocked(selectedModelRef)) {
await params.prompter.note(
"Provider locked — router will choose backend by cost or speed.",
"Hugging Face",
);
}
const applied = await applyDefaultModelChoice({
config: nextConfig,
setDefaultModel: params.setDefaultModel,
defaultModel: selectedModelRef,
applyDefaultConfig: (config) => {
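// Set the selected ref as primary while preserving any existing fallbacks.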
const withProvider = applyHuggingfaceProviderConfig(config);
const existingModel = withProvider.agents?.defaults?.model;
const withPrimary = {
...withProvider,
agents: {
...withProvider.agents,
defaults: {
...withProvider.agents?.defaults,
model: {
...(existingModel && typeof existingModel === "object" && "fallbacks" in existingModel
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: {}),
primary: selectedModelRef,
},
},
},
};
return ensureModelAllowlistEntry({
cfg: withPrimary,
modelRef: selectedModelRef,
});
},
applyProviderConfig: applyHuggingfaceProviderConfig,
noteDefault: selectedModelRef,
noteAgentModel,
prompter: params.prompter,
});
nextConfig = applied.config;
agentModelOverride = applied.agentModelOverride ?? agentModelOverride;
return { config: nextConfig, agentModelOverride };
}


@@ -0,0 +1,102 @@
import type { ApplyAuthChoiceParams, ApplyAuthChoiceResult } from "./auth-choice.apply.js";
import { ensureAuthProfileStore, resolveAuthProfileOrder } from "../agents/auth-profiles.js";
import { resolveEnvApiKey } from "../agents/model-auth.js";
import {
formatApiKeyPreview,
normalizeApiKeyInput,
validateApiKeyInput,
} from "./auth-choice.api-key.js";
import { applyDefaultModelChoice } from "./auth-choice.default-model.js";
import {
applyAuthProfileConfig,
applyOpenrouterConfig,
applyOpenrouterProviderConfig,
setOpenrouterApiKey,
OPENROUTER_DEFAULT_MODEL_REF,
} from "./onboard-auth.js";
export async function applyAuthChoiceOpenRouter(
params: ApplyAuthChoiceParams,
): Promise<ApplyAuthChoiceResult> {
let nextConfig = params.config;
let agentModelOverride: string | undefined;
const noteAgentModel = async (model: string) => {
if (!params.agentId) {
return;
}
await params.prompter.note(
`Default model set to ${model} for agent "${params.agentId}".`,
"Model configured",
);
};
const store = ensureAuthProfileStore(params.agentDir, { allowKeychainPrompt: false });
const profileOrder = resolveAuthProfileOrder({
cfg: nextConfig,
store,
provider: "openrouter",
});
const existingProfileId = profileOrder.find((profileId) => Boolean(store.profiles[profileId]));
const existingCred = existingProfileId ? store.profiles[existingProfileId] : undefined;
let profileId = "openrouter:default";
let mode: "api_key" | "oauth" | "token" = "api_key";
let hasCredential = false;
if (existingProfileId && existingCred?.type) {
profileId = existingProfileId;
mode =
existingCred.type === "oauth" ? "oauth" : existingCred.type === "token" ? "token" : "api_key";
hasCredential = true;
}
if (!hasCredential && params.opts?.token && params.opts?.tokenProvider === "openrouter") {
await setOpenrouterApiKey(normalizeApiKeyInput(params.opts.token), params.agentDir);
hasCredential = true;
}
if (!hasCredential) {
const envKey = resolveEnvApiKey("openrouter");
if (envKey) {
const useExisting = await params.prompter.confirm({
message: `Use existing OPENROUTER_API_KEY (${envKey.source}, ${formatApiKeyPreview(envKey.apiKey)})?`,
initialValue: true,
});
if (useExisting) {
await setOpenrouterApiKey(envKey.apiKey, params.agentDir);
hasCredential = true;
}
}
}
if (!hasCredential) {
const key = await params.prompter.text({
message: "Enter OpenRouter API key",
validate: validateApiKeyInput,
});
await setOpenrouterApiKey(normalizeApiKeyInput(String(key ?? "")), params.agentDir);
hasCredential = true;
}
if (hasCredential) {
nextConfig = applyAuthProfileConfig(nextConfig, {
profileId,
provider: "openrouter",
mode,
});
}
const applied = await applyDefaultModelChoice({
config: nextConfig,
setDefaultModel: params.setDefaultModel,
defaultModel: OPENROUTER_DEFAULT_MODEL_REF,
applyDefaultConfig: applyOpenrouterConfig,
applyProviderConfig: applyOpenrouterProviderConfig,
noteDefault: OPENROUTER_DEFAULT_MODEL_REF,
noteAgentModel,
prompter: params.prompter,
});
nextConfig = applied.config;
agentModelOverride = applied.agentModelOverride ?? agentModelOverride;
return { config: nextConfig, agentModelOverride };
}


@@ -34,6 +34,8 @@ describe("applyAuthChoice", () => {
const previousPiAgentDir = process.env.PI_CODING_AGENT_DIR;
const previousAnthropicKey = process.env.ANTHROPIC_API_KEY;
const previousOpenrouterKey = process.env.OPENROUTER_API_KEY;
const previousHfToken = process.env.HF_TOKEN;
const previousHfHubToken = process.env.HUGGINGFACE_HUB_TOKEN;
const previousLitellmKey = process.env.LITELLM_API_KEY;
const previousAiGatewayKey = process.env.AI_GATEWAY_API_KEY;
const previousCloudflareGatewayKey = process.env.CLOUDFLARE_AI_GATEWAY_API_KEY;
@@ -73,6 +75,16 @@ describe("applyAuthChoice", () => {
} else {
process.env.OPENROUTER_API_KEY = previousOpenrouterKey;
}
if (previousHfToken === undefined) {
delete process.env.HF_TOKEN;
} else {
process.env.HF_TOKEN = previousHfToken;
}
if (previousHfHubToken === undefined) {
delete process.env.HUGGINGFACE_HUB_TOKEN;
} else {
process.env.HUGGINGFACE_HUB_TOKEN = previousHfHubToken;
}
if (previousLitellmKey === undefined) {
delete process.env.LITELLM_API_KEY;
} else {
@@ -206,6 +218,60 @@ describe("applyAuthChoice", () => {
expect(parsed.profiles?.["synthetic:default"]?.key).toBe("sk-synthetic-test");
});
it("prompts and writes Hugging Face API key when selecting huggingface-api-key", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;
process.env.OPENCLAW_AGENT_DIR = path.join(tempStateDir, "agent");
process.env.PI_CODING_AGENT_DIR = process.env.OPENCLAW_AGENT_DIR;
const text = vi.fn().mockResolvedValue("hf-test-token");
const select: WizardPrompter["select"] = vi.fn(
async (params) => params.options[0]?.value as never,
);
const multiselect: WizardPrompter["multiselect"] = vi.fn(async () => []);
const prompter: WizardPrompter = {
intro: vi.fn(noopAsync),
outro: vi.fn(noopAsync),
note: vi.fn(noopAsync),
select,
multiselect,
text,
confirm: vi.fn(async () => false),
progress: vi.fn(() => ({ update: noop, stop: noop })),
};
const runtime: RuntimeEnv = {
log: vi.fn(),
error: vi.fn(),
exit: vi.fn((code: number) => {
throw new Error(`exit:${code}`);
}),
};
const result = await applyAuthChoice({
authChoice: "huggingface-api-key",
config: {},
prompter,
runtime,
setDefaultModel: true,
});
expect(text).toHaveBeenCalledWith(
expect.objectContaining({ message: expect.stringContaining("Hugging Face") }),
);
expect(result.config.auth?.profiles?.["huggingface:default"]).toMatchObject({
provider: "huggingface",
mode: "api_key",
});
expect(result.config.agents?.defaults?.model?.primary).toMatch(/^huggingface\/.+/);
const authProfilePath = authProfilePathFor(requireAgentDir());
const raw = await fs.readFile(authProfilePath, "utf8");
const parsed = JSON.parse(raw) as {
profiles?: Record<string, { key?: string }>;
};
expect(parsed.profiles?.["huggingface:default"]?.key).toBe("hf-test-token");
});
it("prompts for Z.AI endpoint when selecting zai-api-key", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;
@@ -301,6 +367,64 @@ describe("applyAuthChoice", () => {
expect(result.config.models?.providers?.zai?.baseUrl).toBe(ZAI_CODING_GLOBAL_BASE_URL);
});
it("maps apiKey + tokenProvider=huggingface to huggingface-api-key flow", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;
process.env.OPENCLAW_AGENT_DIR = path.join(tempStateDir, "agent");
process.env.PI_CODING_AGENT_DIR = process.env.OPENCLAW_AGENT_DIR;
delete process.env.HF_TOKEN;
delete process.env.HUGGINGFACE_HUB_TOKEN;
const text = vi.fn().mockResolvedValue("should-not-be-used");
const select: WizardPrompter["select"] = vi.fn(
async (params) => params.options[0]?.value as never,
);
const multiselect: WizardPrompter["multiselect"] = vi.fn(async () => []);
const confirm = vi.fn(async () => false);
const prompter: WizardPrompter = {
intro: vi.fn(noopAsync),
outro: vi.fn(noopAsync),
note: vi.fn(noopAsync),
select,
multiselect,
text,
confirm,
progress: vi.fn(() => ({ update: noop, stop: noop })),
};
const runtime: RuntimeEnv = {
log: vi.fn(),
error: vi.fn(),
exit: vi.fn((code: number) => {
throw new Error(`exit:${code}`);
}),
};
const result = await applyAuthChoice({
authChoice: "apiKey",
config: {},
prompter,
runtime,
setDefaultModel: true,
opts: {
tokenProvider: "huggingface",
token: "hf-token-provider-test",
},
});
expect(result.config.auth?.profiles?.["huggingface:default"]).toMatchObject({
provider: "huggingface",
mode: "api_key",
});
expect(result.config.agents?.defaults?.model?.primary).toMatch(/^huggingface\/.+/);
expect(text).not.toHaveBeenCalled();
const authProfilePath = authProfilePathFor(requireAgentDir());
const raw = await fs.readFile(authProfilePath, "utf8");
const parsed = JSON.parse(raw) as {
profiles?: Record<string, { key?: string }>;
};
expect(parsed.profiles?.["huggingface:default"]?.key).toBe("hf-token-provider-test");
});
it("does not override the global default model when selecting xai-api-key without setDefaultModel", async () => {
tempStateDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-auth-"));
process.env.OPENCLAW_STATE_DIR = tempStateDir;


@@ -29,6 +29,7 @@ const PREFERRED_PROVIDER_BY_AUTH_CHOICE: Partial<Record<AuthChoice, string>> = {
"synthetic-api-key": "synthetic",
"venice-api-key": "venice",
"together-api-key": "together",
"huggingface-api-key": "huggingface",
"github-copilot": "github-copilot",
"copilot-proxy": "copilot-proxy",
"minimax-cloud": "minimax",


@@ -1,9 +1,10 @@
import type { OpenClawConfig } from "../config/config.js";
import type { ModelApi } from "../config/types.models.js";
import {
buildCloudflareAiGatewayModelDefinition,
resolveCloudflareAiGatewayBaseUrl,
} from "../agents/cloudflare-ai-gateway.js";
buildHuggingfaceModelDefinition,
HUGGINGFACE_BASE_URL,
HUGGINGFACE_MODEL_CATALOG,
} from "../agents/huggingface-models.js";
import {
buildQianfanProvider,
buildXiaomiProvider,
@@ -28,15 +29,25 @@ import {
VENICE_MODEL_CATALOG,
} from "../agents/venice-models.js";
import {
CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF,
LITELLM_DEFAULT_MODEL_REF,
HUGGINGFACE_DEFAULT_MODEL_REF,
OPENROUTER_DEFAULT_MODEL_REF,
TOGETHER_DEFAULT_MODEL_REF,
VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF,
XIAOMI_DEFAULT_MODEL_REF,
ZAI_DEFAULT_MODEL_REF,
XAI_DEFAULT_MODEL_REF,
} from "./onboard-auth.credentials.js";
export {
applyCloudflareAiGatewayConfig,
applyCloudflareAiGatewayProviderConfig,
applyVercelAiGatewayConfig,
applyVercelAiGatewayProviderConfig,
} from "./onboard-auth.config-gateways.js";
export {
applyLitellmConfig,
applyLitellmProviderConfig,
LITELLM_BASE_URL,
LITELLM_DEFAULT_MODEL_ID,
} from "./onboard-auth.config-litellm.js";
import {
buildZaiModelDefinition,
buildMoonshotModelDefinition,
@@ -170,139 +181,6 @@ export function applyOpenrouterProviderConfig(cfg: OpenClawConfig): OpenClawConf
};
}
export function applyVercelAiGatewayProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF] = {
...models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF],
alias: models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF]?.alias ?? "Vercel AI Gateway",
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
};
}
export function applyCloudflareAiGatewayProviderConfig(
cfg: OpenClawConfig,
params?: { accountId?: string; gatewayId?: string },
): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF] = {
...models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF],
alias: models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF]?.alias ?? "Cloudflare AI Gateway",
};
const providers = { ...cfg.models?.providers };
const existingProvider = providers["cloudflare-ai-gateway"];
const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
const defaultModel = buildCloudflareAiGatewayModelDefinition();
const hasDefaultModel = existingModels.some((model) => model.id === defaultModel.id);
const mergedModels = hasDefaultModel ? existingModels : [...existingModels, defaultModel];
const baseUrl =
params?.accountId && params?.gatewayId
? resolveCloudflareAiGatewayBaseUrl({
accountId: params.accountId,
gatewayId: params.gatewayId,
})
: existingProvider?.baseUrl;
if (!baseUrl) {
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
};
}
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
string,
unknown
> as { apiKey?: string };
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim();
providers["cloudflare-ai-gateway"] = {
...existingProviderRest,
baseUrl,
api: "anthropic-messages",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [defaultModel],
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
models: {
mode: cfg.models?.mode ?? "merge",
providers,
},
};
}
export function applyVercelAiGatewayConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyVercelAiGatewayProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF,
},
},
},
};
}
export function applyCloudflareAiGatewayConfig(
cfg: OpenClawConfig,
params?: { accountId?: string; gatewayId?: string },
): OpenClawConfig {
const next = applyCloudflareAiGatewayProviderConfig(cfg, params);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF,
},
},
},
};
}
export function applyOpenrouterConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyOpenrouterProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
@@ -325,105 +203,6 @@ export function applyOpenrouterConfig(cfg: OpenClawConfig): OpenClawConfig {
};
}
export const LITELLM_BASE_URL = "http://localhost:4000";
export const LITELLM_DEFAULT_MODEL_ID = "claude-opus-4-6";
const LITELLM_DEFAULT_CONTEXT_WINDOW = 128_000;
const LITELLM_DEFAULT_MAX_TOKENS = 8_192;
const LITELLM_DEFAULT_COST = {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
};
function buildLitellmModelDefinition(): {
id: string;
name: string;
reasoning: boolean;
input: Array<"text" | "image">;
cost: { input: number; output: number; cacheRead: number; cacheWrite: number };
contextWindow: number;
maxTokens: number;
} {
return {
id: LITELLM_DEFAULT_MODEL_ID,
name: "Claude Opus 4.6",
reasoning: true,
input: ["text", "image"],
// LiteLLM routes to many upstreams; keep neutral placeholders.
cost: LITELLM_DEFAULT_COST,
contextWindow: LITELLM_DEFAULT_CONTEXT_WINDOW,
maxTokens: LITELLM_DEFAULT_MAX_TOKENS,
};
}
export function applyLitellmProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[LITELLM_DEFAULT_MODEL_REF] = {
...models[LITELLM_DEFAULT_MODEL_REF],
alias: models[LITELLM_DEFAULT_MODEL_REF]?.alias ?? "LiteLLM",
};
const providers = { ...cfg.models?.providers };
const existingProvider = providers.litellm;
const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
const defaultModel = buildLitellmModelDefinition();
const hasDefaultModel = existingModels.some((model) => model.id === LITELLM_DEFAULT_MODEL_ID);
const mergedModels = hasDefaultModel ? existingModels : [...existingModels, defaultModel];
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
string,
unknown
> as { apiKey?: string };
const resolvedBaseUrl =
typeof existingProvider?.baseUrl === "string" ? existingProvider.baseUrl.trim() : "";
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim();
providers.litellm = {
...existingProviderRest,
baseUrl: resolvedBaseUrl || LITELLM_BASE_URL,
api: "openai-completions",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [defaultModel],
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
models: {
mode: cfg.models?.mode ?? "merge",
providers,
},
};
}
export function applyLitellmConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyLitellmProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: LITELLM_DEFAULT_MODEL_REF,
},
},
},
};
}
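// Usage sketch (illustrative; not part of this diff). A pre-existing baseUrl
// wins over the localhost default; a blank or whitespace-only value falls back
// to LITELLM_BASE_URL. The proxy host is hypothetical and the partial-object
// cast assumes the provider entry tolerates missing fields.
function demoLitellmBaseUrlFallback(): void {
  const applied = applyLitellmProviderConfig({
    models: {
      providers: { litellm: { baseUrl: "http://litellm.internal:4000" } },
    },
  } as OpenClawConfig);
  // applied.models?.providers?.litellm?.baseUrl stays "http://litellm.internal:4000".
  void applied;
}
void demoLitellmBaseUrlFallback;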
export function applyMoonshotProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
return applyMoonshotProviderConfigWithBaseUrl(cfg, MOONSHOT_BASE_URL);
}
@@ -855,6 +634,79 @@ export function applyTogetherConfig(cfg: OpenClawConfig): OpenClawConfig {
};
}
/**
* Apply Hugging Face (Inference Providers) provider configuration without changing the default model.
*/
export function applyHuggingfaceProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[HUGGINGFACE_DEFAULT_MODEL_REF] = {
...models[HUGGINGFACE_DEFAULT_MODEL_REF],
alias: models[HUGGINGFACE_DEFAULT_MODEL_REF]?.alias ?? "Hugging Face",
};
const providers = { ...cfg.models?.providers };
const existingProvider = providers.huggingface;
const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
const hfModels = HUGGINGFACE_MODEL_CATALOG.map(buildHuggingfaceModelDefinition);
const mergedModels = [
...existingModels,
...hfModels.filter((model) => !existingModels.some((existing) => existing.id === model.id)),
];
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
string,
unknown
> as { apiKey?: string };
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim();
providers.huggingface = {
...existingProviderRest,
baseUrl: HUGGINGFACE_BASE_URL,
api: "openai-completions",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : hfModels,
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
models: {
mode: cfg.models?.mode ?? "merge",
providers,
},
};
}
/**
* Apply Hugging Face provider configuration AND set Hugging Face as the default model.
*/
export function applyHuggingfaceConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyHuggingfaceProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: HUGGINGFACE_DEFAULT_MODEL_REF,
},
},
},
};
}
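// Usage sketch (illustrative; not part of this diff; the cast assumes no
// required config fields). applyHuggingfaceConfig seeds the catalog models,
// points the provider at the OpenAI-compatible router base URL, and promotes
// the Hugging Face default to primary while keeping configured fallbacks.
function demoHuggingfaceDefaults(): void {
  const next = applyHuggingfaceConfig({} as OpenClawConfig);
  // next.agents?.defaults?.model?.primary === HUGGINGFACE_DEFAULT_MODEL_REF
  // next.models?.providers?.huggingface?.api === "openai-completions"
  void next;
}
void demoHuggingfaceDefaults;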
export function applyXaiProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[XAI_DEFAULT_MODEL_REF] = {


@@ -0,0 +1,142 @@
import type { OpenClawConfig } from "../config/config.js";
import {
buildCloudflareAiGatewayModelDefinition,
resolveCloudflareAiGatewayBaseUrl,
} from "../agents/cloudflare-ai-gateway.js";
import {
CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF,
VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF,
} from "./onboard-auth.credentials.js";
export function applyVercelAiGatewayProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF] = {
...models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF],
alias: models[VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF]?.alias ?? "Vercel AI Gateway",
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
};
}
export function applyCloudflareAiGatewayProviderConfig(
cfg: OpenClawConfig,
params?: { accountId?: string; gatewayId?: string },
): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF] = {
...models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF],
alias: models[CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF]?.alias ?? "Cloudflare AI Gateway",
};
const providers = { ...cfg.models?.providers };
const existingProvider = providers["cloudflare-ai-gateway"];
const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
const defaultModel = buildCloudflareAiGatewayModelDefinition();
const hasDefaultModel = existingModels.some((model) => model.id === defaultModel.id);
const mergedModels = hasDefaultModel ? existingModels : [...existingModels, defaultModel];
const baseUrl =
params?.accountId && params?.gatewayId
? resolveCloudflareAiGatewayBaseUrl({
accountId: params.accountId,
gatewayId: params.gatewayId,
})
: existingProvider?.baseUrl;
if (!baseUrl) {
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
};
}
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
string,
unknown
> as { apiKey?: string };
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim();
providers["cloudflare-ai-gateway"] = {
...existingProviderRest,
baseUrl,
api: "anthropic-messages",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [defaultModel],
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
models: {
mode: cfg.models?.mode ?? "merge",
providers,
},
};
}
export function applyVercelAiGatewayConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyVercelAiGatewayProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF,
},
},
},
};
}
export function applyCloudflareAiGatewayConfig(
cfg: OpenClawConfig,
params?: { accountId?: string; gatewayId?: string },
): OpenClawConfig {
const next = applyCloudflareAiGatewayProviderConfig(cfg, params);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF,
},
},
},
};
}


@@ -0,0 +1,100 @@
import type { OpenClawConfig } from "../config/config.js";
import { LITELLM_DEFAULT_MODEL_REF } from "./onboard-auth.credentials.js";
export const LITELLM_BASE_URL = "http://localhost:4000";
export const LITELLM_DEFAULT_MODEL_ID = "claude-opus-4-6";
const LITELLM_DEFAULT_CONTEXT_WINDOW = 128_000;
const LITELLM_DEFAULT_MAX_TOKENS = 8_192;
const LITELLM_DEFAULT_COST = {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
};
function buildLitellmModelDefinition(): {
id: string;
name: string;
reasoning: boolean;
input: Array<"text" | "image">;
cost: { input: number; output: number; cacheRead: number; cacheWrite: number };
contextWindow: number;
maxTokens: number;
} {
return {
id: LITELLM_DEFAULT_MODEL_ID,
name: "Claude Opus 4.6",
reasoning: true,
input: ["text", "image"],
cost: LITELLM_DEFAULT_COST,
contextWindow: LITELLM_DEFAULT_CONTEXT_WINDOW,
maxTokens: LITELLM_DEFAULT_MAX_TOKENS,
};
}
export function applyLitellmProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
const models = { ...cfg.agents?.defaults?.models };
models[LITELLM_DEFAULT_MODEL_REF] = {
...models[LITELLM_DEFAULT_MODEL_REF],
alias: models[LITELLM_DEFAULT_MODEL_REF]?.alias ?? "LiteLLM",
};
const providers = { ...cfg.models?.providers };
const existingProvider = providers.litellm;
const existingModels = Array.isArray(existingProvider?.models) ? existingProvider.models : [];
const defaultModel = buildLitellmModelDefinition();
const hasDefaultModel = existingModels.some((model) => model.id === LITELLM_DEFAULT_MODEL_ID);
const mergedModels = hasDefaultModel ? existingModels : [...existingModels, defaultModel];
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
string,
unknown
> as { apiKey?: string };
const resolvedBaseUrl =
typeof existingProvider?.baseUrl === "string" ? existingProvider.baseUrl.trim() : "";
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim();
providers.litellm = {
...existingProviderRest,
baseUrl: resolvedBaseUrl || LITELLM_BASE_URL,
api: "openai-completions",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [defaultModel],
};
return {
...cfg,
agents: {
...cfg.agents,
defaults: {
...cfg.agents?.defaults,
models,
},
},
models: {
mode: cfg.models?.mode ?? "merge",
providers,
},
};
}
export function applyLitellmConfig(cfg: OpenClawConfig): OpenClawConfig {
const next = applyLitellmProviderConfig(cfg);
const existingModel = next.agents?.defaults?.model;
return {
...next,
agents: {
...next.agents,
defaults: {
...next.agents?.defaults,
model: {
...(existingModel && "fallbacks" in (existingModel as Record<string, unknown>)
? {
fallbacks: (existingModel as { fallbacks?: string[] }).fallbacks,
}
: undefined),
primary: LITELLM_DEFAULT_MODEL_REF,
},
},
},
};
}


@@ -118,6 +118,7 @@ export async function setVeniceApiKey(key: string, agentDir?: string) {
export const ZAI_DEFAULT_MODEL_REF = "zai/glm-5";
export const XIAOMI_DEFAULT_MODEL_REF = "xiaomi/mimo-v2-flash";
export const OPENROUTER_DEFAULT_MODEL_REF = "openrouter/auto";
export const HUGGINGFACE_DEFAULT_MODEL_REF = "huggingface/deepseek-ai/DeepSeek-R1";
export const TOGETHER_DEFAULT_MODEL_REF = "together/moonshotai/Kimi-K2.5";
export const LITELLM_DEFAULT_MODEL_REF = "litellm/claude-opus-4-6";
export const VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF = "vercel-ai-gateway/anthropic/claude-opus-4.6";
@@ -148,12 +149,14 @@ export async function setXiaomiApiKey(key: string, agentDir?: string) {
}
export async function setOpenrouterApiKey(key: string, agentDir?: string) {
// Never persist the literal "undefined" (e.g. when prompt returns undefined and caller used String(key)).
const safeKey = key === "undefined" ? "" : key;
upsertAuthProfile({
profileId: "openrouter:default",
credential: {
type: "api_key",
provider: "openrouter",
key,
key: safeKey,
},
agentDir: resolveAuthAgentDir(agentDir),
});
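// Regression sketch (illustrative): an aborted prompt can surface as the string
// "undefined" when a caller stringifies a missing value; the guard persists an
// empty key instead of that literal. The key below is hypothetical.
// await setOpenrouterApiKey(String(undefined)); // stored as ""
// await setOpenrouterApiKey("sk-or-example");   // stored verbatim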
@@ -231,6 +234,18 @@ export async function setTogetherApiKey(key: string, agentDir?: string) {
});
}
export async function setHuggingfaceApiKey(key: string, agentDir?: string) {
upsertAuthProfile({
profileId: "huggingface:default",
credential: {
type: "api_key",
provider: "huggingface",
key,
},
agentDir: resolveAuthAgentDir(agentDir),
});
}
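// Usage sketch (illustrative; the token is hypothetical): Hugging Face tokens
// land in the shared auth-profile store under a fixed profile id, mirroring
// the other provider setters above.
// await setHuggingfaceApiKey("hf_example_token");
// -> upserts profile "huggingface:default" with provider "huggingface".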
export function setQianfanApiKey(key: string, agentDir?: string) {
upsertAuthProfile({
profileId: "qianfan:default",


@@ -7,6 +7,8 @@ export {
applyAuthProfileConfig,
applyCloudflareAiGatewayConfig,
applyCloudflareAiGatewayProviderConfig,
applyHuggingfaceConfig,
applyHuggingfaceProviderConfig,
applyQianfanConfig,
applyQianfanProviderConfig,
applyKimiCodeConfig,
@@ -63,12 +65,14 @@ export {
setOpenrouterApiKey,
setSyntheticApiKey,
setTogetherApiKey,
setHuggingfaceApiKey,
setVeniceApiKey,
setVercelAiGatewayApiKey,
setXiaomiApiKey,
setZaiApiKey,
setXaiApiKey,
writeOAuthCredentials,
HUGGINGFACE_DEFAULT_MODEL_REF,
VERCEL_AI_GATEWAY_DEFAULT_MODEL_REF,
XIAOMI_DEFAULT_MODEL_REF,
ZAI_DEFAULT_MODEL_REF,


@@ -450,6 +450,36 @@ describe("onboard (non-interactive): provider auth", () => {
});
}, 60_000);
it("infers Together auth choice from --together-api-key and sets default model", async () => {
await withOnboardEnv("openclaw-onboard-together-infer-", async ({ configPath, runtime }) => {
await runNonInteractive(
{
nonInteractive: true,
togetherApiKey: "together-test-key",
skipHealth: true,
skipChannels: true,
skipSkills: true,
json: true,
},
runtime,
);
const cfg = await readJsonFile<{
auth?: { profiles?: Record<string, { provider?: string; mode?: string }> };
agents?: { defaults?: { model?: { primary?: string } } };
}>(configPath);
expect(cfg.auth?.profiles?.["together:default"]?.provider).toBe("together");
expect(cfg.auth?.profiles?.["together:default"]?.mode).toBe("api_key");
expect(cfg.agents?.defaults?.model?.primary).toBe("together/moonshotai/Kimi-K2.5");
await expectApiKeyProfile({
profileId: "together:default",
provider: "together",
key: "together-test-key",
});
});
}, 60_000);
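// A Hugging Face analogue of the Together case above would be a near-verbatim
// variant (sketch only; the key is hypothetical and the assertions assume the
// defaults wired earlier in this diff):
it("infers Hugging Face auth choice from --huggingface-api-key and sets default model", async () => {
  await withOnboardEnv("openclaw-onboard-hf-infer-", async ({ configPath, runtime }) => {
    await runNonInteractive(
      {
        nonInteractive: true,
        huggingfaceApiKey: "hf-test-key",
        skipHealth: true,
        skipChannels: true,
        skipSkills: true,
        json: true,
      },
      runtime,
    );
    const cfg = await readJsonFile<{
      agents?: { defaults?: { model?: { primary?: string } } };
    }>(configPath);
    expect(cfg.agents?.defaults?.model?.primary).toBe("huggingface/deepseek-ai/DeepSeek-R1");
  });
}, 60_000);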
it("configures a custom provider from non-interactive flags", async () => {
await withOnboardEnv("openclaw-onboard-custom-provider-", async ({ configPath, runtime }) => {
await runNonInteractive(


@@ -18,6 +18,8 @@ type AuthChoiceFlagOptions = Pick<
| "kimiCodeApiKey"
| "syntheticApiKey"
| "veniceApiKey"
| "togetherApiKey"
| "huggingfaceApiKey"
| "zaiApiKey"
| "xiaomiApiKey"
| "minimaxApiKey"
@@ -44,11 +46,13 @@ const AUTH_CHOICE_FLAG_MAP = [
{ flag: "kimiCodeApiKey", authChoice: "kimi-code-api-key", label: "--kimi-code-api-key" },
{ flag: "syntheticApiKey", authChoice: "synthetic-api-key", label: "--synthetic-api-key" },
{ flag: "veniceApiKey", authChoice: "venice-api-key", label: "--venice-api-key" },
{ flag: "togetherApiKey", authChoice: "together-api-key", label: "--together-api-key" },
{ flag: "zaiApiKey", authChoice: "zai-api-key", label: "--zai-api-key" },
{ flag: "xiaomiApiKey", authChoice: "xiaomi-api-key", label: "--xiaomi-api-key" },
{ flag: "xaiApiKey", authChoice: "xai-api-key", label: "--xai-api-key" },
{ flag: "minimaxApiKey", authChoice: "minimax-api", label: "--minimax-api-key" },
{ flag: "opencodeZenApiKey", authChoice: "opencode-zen", label: "--opencode-zen-api-key" },
{ flag: "huggingfaceApiKey", authChoice: "huggingface-api-key", label: "--huggingface-api-key" },
{ flag: "litellmApiKey", authChoice: "litellm-api-key", label: "--litellm-api-key" },
] satisfies ReadonlyArray<AuthChoiceFlag>;
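// Inference sketch (illustrative): providing --huggingface-api-key alone is
// enough for the flag map to select the matching auth choice, so callers can
// omit an explicit --auth-choice value.
function demoHuggingfaceFlagInference(): string | undefined {
  return AUTH_CHOICE_FLAG_MAP.find((entry) => entry.flag === "huggingfaceApiKey")
    ?.authChoice; // "huggingface-api-key"
}
void demoHuggingfaceFlagInference;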


@@ -23,6 +23,7 @@ import {
applySyntheticConfig,
applyVeniceConfig,
applyTogetherConfig,
applyHuggingfaceConfig,
applyVercelAiGatewayConfig,
applyLitellmConfig,
applyXaiConfig,
@@ -42,6 +43,7 @@ import {
setXaiApiKey,
setVeniceApiKey,
setTogetherApiKey,
setHuggingfaceApiKey,
setVercelAiGatewayApiKey,
setXiaomiApiKey,
setZaiApiKey,
@@ -644,6 +646,29 @@ export async function applyNonInteractiveAuthChoice(params: {
return applyTogetherConfig(nextConfig);
}
if (authChoice === "huggingface-api-key") {
const resolved = await resolveNonInteractiveApiKey({
provider: "huggingface",
cfg: baseConfig,
flagValue: opts.huggingfaceApiKey,
flagName: "--huggingface-api-key",
envVar: "HF_TOKEN",
runtime,
});
if (!resolved) {
return null;
}
if (resolved.source !== "profile") {
await setHuggingfaceApiKey(resolved.key);
}
nextConfig = applyAuthProfileConfig(nextConfig, {
profileId: "huggingface:default",
provider: "huggingface",
mode: "api_key",
});
return applyHuggingfaceConfig(nextConfig);
}
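// Resolution-order note (not part of this diff; the precedence is an inference
// from the parameters): resolveNonInteractiveApiKey is expected to prefer the
// explicit --huggingface-api-key flag, then the HF_TOKEN env var, then a stored
// "huggingface:default" profile; only non-profile sources are re-persisted via
// setHuggingfaceApiKey. Documented CLI entry point:
//   openclaw onboard --auth-choice huggingface-api-key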
if (authChoice === "custom-api-key") {
try {
const customAuth = parseNonInteractiveCustomApiFlags({


@@ -22,6 +22,7 @@ export type AuthChoice =
| "synthetic-api-key"
| "venice-api-key"
| "together-api-key"
| "huggingface-api-key"
| "codex-cli"
| "apiKey"
| "gemini-api-key"
@@ -52,6 +53,7 @@ export type AuthChoiceGroupId =
| "google"
| "copilot"
| "openrouter"
| "litellm"
| "ai-gateway"
| "cloudflare-ai-gateway"
| "moonshot"
@@ -62,6 +64,8 @@ export type AuthChoiceGroupId =
| "synthetic"
| "venice"
| "qwen"
| "together"
| "huggingface"
| "qianfan"
| "xai"
| "custom";
@@ -109,6 +113,7 @@ export type OnboardOptions = {
syntheticApiKey?: string;
veniceApiKey?: string;
togetherApiKey?: string;
huggingfaceApiKey?: string;
opencodeZenApiKey?: string;
xaiApiKey?: string;
qianfanApiKey?: string;


@@ -14,26 +14,34 @@ const resolveRequestUrl = (input: RequestInfo | URL) => {
return input.url;
};
function stubPinnedHostname(hostname: string) {
const normalized = hostname.trim().toLowerCase().replace(/\.$/, "");
const addresses = [TEST_NET_IP];
return {
hostname: normalized,
addresses,
lookup: ssrf.createPinnedLookup({ hostname: normalized, addresses }),
};
}
describe("describeGeminiVideo", () => {
let resolvePinnedHostnameWithPolicySpy: ReturnType<typeof vi.spyOn>;
let resolvePinnedHostnameSpy: ReturnType<typeof vi.spyOn>;
beforeEach(() => {
resolvePinnedHostnameSpy = vi
// Stub both entry points so fetch-guard never does live DNS (CI can use either path).
resolvePinnedHostnameWithPolicySpy = vi
.spyOn(ssrf, "resolvePinnedHostnameWithPolicy")
.mockImplementation(async (hostname) => {
// SSRF guard pins DNS; stub resolution to avoid live lookups in unit tests.
const normalized = hostname.trim().toLowerCase().replace(/\.$/, "");
const addresses = [TEST_NET_IP];
return {
hostname: normalized,
addresses,
lookup: ssrf.createPinnedLookup({ hostname: normalized, addresses }),
};
});
.mockImplementation(async (hostname) => stubPinnedHostname(hostname));
resolvePinnedHostnameSpy = vi
.spyOn(ssrf, "resolvePinnedHostname")
.mockImplementation(async (hostname) => stubPinnedHostname(hostname));
});
afterEach(() => {
resolvePinnedHostnameWithPolicySpy?.mockRestore();
resolvePinnedHostnameSpy?.mockRestore();
resolvePinnedHostnameWithPolicySpy = undefined;
resolvePinnedHostnameSpy = undefined;
});
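// Behavior sketch (illustrative): the extracted helper normalizes the hostname
// (trim, lowercase, strip a trailing dot) and pins it to the single TEST-NET
// address, so e.g. stubPinnedHostname("Example.COM.") yields
// { hostname: "example.com", addresses: [TEST_NET_IP], lookup: <pinned lookup> }.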