From 96c9ffdedcb40b6027f2a52d6c5c324905c22f6c Mon Sep 17 00:00:00 2001
From: jonisjongithub <86072337+jonisjongithub@users.noreply.github.com>
Date: Thu, 29 Jan 2026 15:31:48 -0800
Subject: [PATCH] =?UTF-8?q?docs:=20fix=20Venice=20AI=20typo=20(Venius=20?=
 =?UTF-8?q?=E2=86=92=20Venice)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Co-authored-by: jonisjongithub
Co-authored-by: Clawdbot
---
 docs/providers/index.md  | 6 +++---
 docs/providers/models.md | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/providers/index.md b/docs/providers/index.md
index 7675af830f..6009dba15b 100644
--- a/docs/providers/index.md
+++ b/docs/providers/index.md
@@ -13,9 +13,9 @@ default model as `provider/model`.
 
 Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
 
-## Highlight: Venius (Venice AI)
+## Highlight: Venice (Venice AI)
 
-Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
+Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
 
 - Default: `venice/llama-3.3-70b`
 - Best overall: `venice/claude-opus-45` (Opus remains the strongest)
@@ -47,7 +47,7 @@ See [Venice AI](/providers/venice).
 - [Xiaomi](/providers/xiaomi)
 - [GLM models](/providers/glm)
 - [MiniMax](/providers/minimax)
-- [Venius (Venice AI, privacy-focused)](/providers/venice)
+- [Venice (Venice AI, privacy-focused)](/providers/venice)
 - [Ollama (local models)](/providers/ollama)
 
 ## Transcription providers
diff --git a/docs/providers/models.md b/docs/providers/models.md
index 78f228eb8c..ad6e424b05 100644
--- a/docs/providers/models.md
+++ b/docs/providers/models.md
@@ -11,9 +11,9 @@ title: "Model Provider Quickstart"
 OpenClaw can use many LLM providers. Pick one, authenticate, then set the
 default model as `provider/model`.
 
-## Highlight: Venius (Venice AI)
+## Highlight: Venice (Venice AI)
 
-Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
+Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
 
 - Default: `venice/llama-3.3-70b`
 - Best overall: `venice/claude-opus-45` (Opus remains the strongest)
@@ -43,7 +43,7 @@ See [Venice AI](/providers/venice).
 - [Z.AI](/providers/zai)
 - [GLM models](/providers/glm)
 - [MiniMax](/providers/minimax)
-- [Venius (Venice AI)](/providers/venice)
+- [Venice (Venice AI)](/providers/venice)
 - [Amazon Bedrock](/bedrock)
 
 For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,