Onboarding: add vLLM provider support
committed by Peter Steinberger
parent 54bf5d0f41
commit e73d881c50
@@ -259,6 +259,32 @@ ollama pull llama3.3
Ollama is automatically detected when running locally at `http://127.0.0.1:11434/v1`. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
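
To confirm detection will work, you can hit Ollama’s OpenAI-compatible endpoint directly; this assumes a default local install listening on port 11434:

```bash
# Should list the locally pulled models if Ollama is running.
curl http://127.0.0.1:11434/v1/models
```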

### vLLM

vLLM is a local (or self-hosted) OpenAI-compatible server:

- Provider: `vllm`
- Auth: Optional (depends on your server)
- Default base URL: `http://127.0.0.1:8000/v1`

To opt in to auto-discovery locally (any value works if your server doesn’t enforce auth):

```bash
export VLLM_API_KEY="vllm-local"
```

Then set a model (replace with one of the IDs returned by `/v1/models`):

```json5
{
  agents: {
    defaults: { model: { primary: "vllm/your-model-id" } },
  },
}
```
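
To see which IDs you can use in place of `vllm/your-model-id`, list what the server exposes (the same check used on the vLLM provider page):

```bash
curl http://127.0.0.1:8000/v1/models
```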

See [/providers/vllm](/providers/vllm) for details.

### Local proxies (LM Studio, vLLM, LiteLLM, etc.)

Example (OpenAI‑compatible):

@@ -52,6 +52,7 @@ See [Venice AI](/providers/venice).
- [MiniMax](/providers/minimax)
- [Venice (Venice AI, privacy-focused)](/providers/venice)
- [Ollama (local models)](/providers/ollama)
- [vLLM (local models)](/providers/vllm)
- [Qianfan](/providers/qianfan)

## Transcription providers

docs/providers/vllm.md (new file, 92 lines)
@@ -0,0 +1,92 @@
---
summary: "Run OpenClaw with vLLM (OpenAI-compatible local server)"
read_when:
- You want to run OpenClaw against a local vLLM server
- You want OpenAI-compatible /v1 endpoints with your own models
title: "vLLM"
---

# vLLM

vLLM can serve open-source (and some custom) models via an **OpenAI-compatible** HTTP API. OpenClaw can connect to vLLM using the `openai-completions` API.

OpenClaw can also **auto-discover** available models from vLLM when you opt in with `VLLM_API_KEY` (any value works if your server doesn’t enforce auth) and you do not define an explicit `models.providers.vllm` entry.

## Quick start

1. Start vLLM with an OpenAI-compatible server (a sketch of a launch command follows these steps).

   Your base URL should expose `/v1` endpoints (e.g. `/v1/models`, `/v1/chat/completions`). vLLM commonly runs on:

   - `http://127.0.0.1:8000/v1`

2. Opt in (any value works if no auth is configured):

   ```bash
   export VLLM_API_KEY="vllm-local"
   ```

3. Select a model (replace with one of your vLLM model IDs):

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "vllm/your-model-id" },
       },
     },
   }
   ```
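
As mentioned in step 1, here is a minimal sketch of launching the server. The model name is a placeholder and the exact flags can vary between vLLM versions, so treat it as illustrative (`vllm serve --help` shows what your install supports):

```bash
# Serve an OpenAI-compatible API at http://127.0.0.1:8000/v1 (model name is a placeholder).
vllm serve Qwen/Qwen2.5-7B-Instruct --host 127.0.0.1 --port 8000
```
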
## Model discovery (implicit provider)

When `VLLM_API_KEY` is set (or an auth profile exists) and you **do not** define `models.providers.vllm`, OpenClaw will query:

- `GET http://127.0.0.1:8000/v1/models`

…and convert the returned IDs into model entries.
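
For illustration, this is roughly what that lookup returns. The model ID below is a placeholder and the exact fields depend on your vLLM build; only the `id` values matter here:

```bash
# Same endpoint OpenClaw queries during discovery.
curl -s http://127.0.0.1:8000/v1/models
# Typical OpenAI-compatible response shape (illustrative):
#   {"object": "list", "data": [{"id": "meta-llama/Llama-3.1-8B-Instruct", "object": "model"}]}
# A discovered ID is then referenced as "vllm/meta-llama/Llama-3.1-8B-Instruct".
```
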
If you set `models.providers.vllm` explicitly, auto-discovery is skipped and you must define models manually.

## Explicit configuration (manual models)

Use explicit config when:

- vLLM runs on a different host/port.
- You want to pin `contextWindow`/`maxTokens` values.
- Your server requires a real API key (or you want to control headers).

```json5
{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
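
With this entry in place, reference the model as `vllm/your-model-id` in `agents.defaults.model`, the same way as in the quick start above.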

## Troubleshooting

- Check the server is reachable:

  ```bash
  curl http://127.0.0.1:8000/v1/models
  ```

- If requests fail with auth errors, set a real `VLLM_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.vllm`.
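
If you control the server, one way to keep the two sides in sync is to enforce the same key on vLLM itself. This is a sketch, assuming your vLLM version supports the `--api-key` option and using a placeholder model name:

```bash
# Server and client share the same key; the model name is a placeholder.
export VLLM_API_KEY="change-me"
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000 --api-key "$VLLM_API_KEY"
```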