Wire up the missing Commander CLI option, action handler mapping, and
help text for --litellm-api-key. Add litellm-api-key to the preferred
provider map for consistency with other providers.
Wire up litellmApiKey flag inference and auth-choice handler for the
non-interactive onboarding path, and add an integration test covering
profile, model default, and credential storage.
Add applyLitellmProviderConfig which properly registers
models.providers.litellm with baseUrl, api type, and model definitions.
This fixes the critical bug from PR #6488 where the provider entry was
never created, causing model resolution to fail at runtime.
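A minimal sketch of the provider entry applyLitellmProviderConfig creates; the config types, api value, and model list here are illustrative assumptions, not the exact openclaw schema:

```ts
type ProviderConfig = {
  baseUrl: string;
  api: string;
  models: Array<{ id: string; name: string }>;
};

function applyLitellmProviderConfig(
  config: { models?: { providers?: Record<string, ProviderConfig> } },
  baseUrl: string,
): void {
  config.models ??= {};
  config.models.providers ??= {};
  // Without this entry, model resolution fails at runtime (PR #6488).
  config.models.providers.litellm = {
    baseUrl,
    api: "openai-completions", // assumed api type
    models: [{ id: "litellm/default", name: "LiteLLM default" }], // placeholder
  };
}
```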
* fix: enforce embedding model token limit to prevent 8192 overflow
- Replace EMBEDDING_APPROX_CHARS_PER_TOKEN=1 with UTF-8 byte length
estimation (safe upper bound for tokenizer output)
- Add EMBEDDING_MODEL_MAX_TOKENS=8192 hard cap
- Add splitChunkToTokenLimit() that binary-searches for the largest
safe split point, with surrogate pair handling
- Add enforceChunkTokenLimit() wrapper called in indexFile() after
chunkMarkdown(), before any embedding API call
- Fixes: session files with large JSONL entries could produce chunks
exceeding text-embedding-3-small's 8192 token limit
Tests: 2 new colocated tests in manager.embedding-token-limit.test.ts
- Verifies oversized ASCII chunks are split to <=8192 bytes each
- Verifies multibyte (emoji) content batching respects byte limits
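A minimal sketch of the splitting approach, using UTF-8 byte length as the token upper bound; the function internals are assumptions, only the names and constants come from this change:

```ts
const EMBEDDING_MODEL_MAX_TOKENS = 8192;
const utf8Bytes = (s: string) => Buffer.byteLength(s, "utf8");

// Binary-search the largest prefix whose UTF-8 byte length (a safe upper
// bound on token count) fits the limit, then recurse on the remainder.
function splitChunkToTokenLimit(
  text: string,
  maxTokens = EMBEDDING_MODEL_MAX_TOKENS,
): string[] {
  if (utf8Bytes(text) <= maxTokens) return [text];
  let lo = 1;
  let hi = text.length;
  while (lo < hi) {
    const mid = Math.ceil((lo + hi) / 2);
    if (utf8Bytes(text.slice(0, mid)) <= maxTokens) lo = mid;
    else hi = mid - 1;
  }
  // Never split between the halves of a surrogate pair (e.g. emoji).
  if (lo > 1) {
    const code = text.charCodeAt(lo - 1);
    if (code >= 0xd800 && code <= 0xdbff) lo -= 1;
  }
  return [text.slice(0, lo), ...splitChunkToTokenLimit(text.slice(lo), maxTokens)];
}
```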
* fix: make embedding token limit provider-aware
- Add optional maxInputTokens to EmbeddingProvider interface
- Each provider (openai, gemini, voyage) reports its own limit
- Known-limits map as fallback: openai 8192, gemini 2048, voyage 32K
- Resolution: provider field > known map > default 8192
- Backward compatible: local/llama uses fallback
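A sketch of the resolution order (provider field > known map > default); the exact number behind "voyage 32K" is an assumption:

```ts
const DEFAULT_EMBEDDING_MAX_TOKENS = 8192;

// Fallback limits for providers that predate the interface field.
const KNOWN_EMBEDDING_LIMITS: Record<string, number> = {
  openai: 8192,
  gemini: 2048,
  voyage: 32_000, // "32K" per the commit; exact value assumed
};

interface EmbeddingProvider {
  id: string;
  maxInputTokens?: number; // optional: provider reports its own limit
}

function resolveEmbeddingTokenLimit(provider: EmbeddingProvider): number {
  return (
    provider.maxInputTokens ??
    KNOWN_EMBEDDING_LIMITS[provider.id] ??
    DEFAULT_EMBEDDING_MAX_TOKENS
  );
}
```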
* fix: enforce embedding input size limits (#13455) (thanks @rodrigouroz)
---------
Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
* Heartbeat: inject cron-style current time into prompts
* Tests: fix type for web heartbeat timestamp test
* Infra: inline heartbeat current-time injection
* fix(pairing): show actual code in approval command instead of placeholder
The pairing reply shown to new users included the approval command with
a literal '<code>' placeholder. Users had to manually copy the code
from one line and substitute it into the command.
Now shows the ready-to-copy command with the real pairing code:
Before: openclaw pairing approve telegram <code>
After: openclaw pairing approve telegram abc123
Fixed in both the shared pairing message builder and the Telegram
inline pairing reply.
* test(pairing): update test to expect actual code instead of placeholder
---------
Co-authored-by: Echo Ito <echoito@MacBook-Air.local>
xAI's /v1/responses endpoint does not support the 'include' parameter,
returning 400 'Argument not supported: include'. Inline citations are
returned automatically when available — no explicit request needed.
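A hedged sketch of the workaround; the request-builder shape and provider id are assumptions:

```ts
// Omit 'include' when targeting xAI's /v1/responses, which rejects it
// with 400 "Argument not supported: include". Inline citations arrive
// automatically when available, so nothing is lost by dropping it.
function buildResponsesRequest(
  base: Record<string, unknown>,
  providerId: string,
): Record<string, unknown> {
  const request = { ...base };
  if (providerId === "xai") {
    delete request.include;
  }
  return request;
}
```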
Closes #12910
Co-authored-by: Luna AI <luna@coredirection.ai>
* fix: remap session JSONL chunk line numbers to original source positions
buildSessionEntry() flattens JSONL messages into plain text before
chunkMarkdown() assigns line numbers. The stored startLine/endLine
values therefore reference positions in the flattened text, not the
original JSONL file.
- Add lineMap to SessionFileEntry tracking which JSONL line each
extracted message came from
- Add remapChunkLines() to translate chunk positions back to original
JSONL lines after chunking
- Guard remap with source === "sessions" to prevent misapplication
- Include lineMap in content hash so existing sessions get re-indexed
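A minimal sketch of the remapping, assuming lineMap[i] holds the original JSONL line number for line i of the flattened text (the real indexing convention may differ):

```ts
type Chunk = { startLine: number; endLine: number };

// Translate flattened-text line positions back to original JSONL lines.
function remapChunkLines(chunks: Chunk[], lineMap: number[]): Chunk[] {
  const clamp = (i: number) => Math.min(Math.max(i, 0), lineMap.length - 1);
  return chunks.map((chunk) => ({
    ...chunk,
    startLine: lineMap[clamp(chunk.startLine)],
    endLine: lineMap[clamp(chunk.endLine)],
  }));
}
```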
Fixes #12044
* memory: dedupe session JSONL parsing
---------
Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
* fix(browser): prevent permanent timeout after stuck evaluate
Thread AbortSignal from client-fetch through dispatcher to Playwright
operations. When a timeout fires, force-disconnect the Playwright CDP
connection to unblock the serialized command queue, allowing the next
call to reconnect transparently.
Key changes:
- client-fetch.ts: proper AbortController with signal propagation
- pw-session.ts: new forceDisconnectPlaywrightForTarget()
- pw-tools-core.interactions.ts: accept the signal, align the inner
  timeout to the outer timeout minus 500ms, and inject an in-browser
  Promise.race for async evaluates
- routes/dispatcher.ts + types.ts: propagate signal through dispatch
- server.ts + bridge-server.ts: Express middleware creates AbortSignal
from request lifecycle
- client-actions-core.ts: add timeoutMs to evaluate type
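A sketch of the middleware piece, assuming the signal is handed downstream via res.locals:

```ts
import type { Request, Response, NextFunction } from "express";

// Tie an AbortController to the request lifecycle so downstream
// Playwright operations can observe client disconnects and timeouts.
export function abortOnClose(req: Request, res: Response, next: NextFunction) {
  const controller = new AbortController();
  // 'close' fires when the client disconnects or the response finishes.
  res.on("close", () => controller.abort());
  res.locals.abortSignal = controller.signal;
  next();
}
```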
Fixes #10994
* fix(browser): v2 - force-disconnect via Connection.close() instead of browser.close()
When page.evaluate() is stuck on a hung CDP transport, browser.close() also
hangs because it tries to send a close command through the same stuck pipe.
v2 fix: forceDisconnectPlaywrightForTarget now directly calls Playwright's
internal Connection.close() which locally rejects all pending callbacks and
emits 'disconnected' without touching the network. This instantly unblocks
all stuck Playwright operations.
closePlaywrightBrowserConnection (clean shutdown) now also has a 3s timeout
fallback that drops to forceDropConnection if browser.close() hangs.
Fixes permanent browser timeout after stuck evaluate.
* fix(browser): v3 - fire-and-forget browser.close() instead of Connection.close()
v2's forceDropConnection called browser._connection.close() which corrupts
the entire Playwright instance because Connection is shared across all
objects (BrowserType, Browser, Page, etc.). This prevented reconnection
with cascading 'connectOverCDP: Force-disconnected' errors.
v3 fix: forceDisconnectPlaywrightForTarget now:
1. Nulls cached connection immediately
2. Fire-and-forgets browser.close() (doesn't await — it may hang)
3. Next connectBrowser() creates a fresh connectOverCDP WebSocket
Each connectOverCDP creates an independent WebSocket to the CDP endpoint,
so the new connection is unaffected by the old one's pending close.
The old browser.close() eventually resolves when the in-browser evaluate
timeout fires, or the old connection gets GC'd.
* fix(browser): v4 - clear connecting state and remove stale disconnect listeners
The reconnect was failing because:
1. forceDisconnectPlaywrightForTarget nulled cached but not connecting,
so subsequent calls could await a stale promise
2. The old browser's 'disconnected' event handler raced with new
connections, nulling the fresh cached reference
Fix: null both cached and connecting, and removeAllListeners on the
old browser before fire-and-forget close.
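A sketch combining the v3/v4 behavior; the per-target cache shape is an assumption:

```ts
import type { Browser } from "playwright";

// Per-target connection cache; the shape is illustrative.
const state = new Map<string, { cached?: Browser; connecting?: Promise<Browser> }>();

function forceDisconnectPlaywrightForTarget(targetId: string): void {
  const entry = state.get(targetId);
  if (!entry) return;
  const old = entry.cached;
  // 1. Null both cached AND connecting so no caller awaits a stale promise.
  entry.cached = undefined;
  entry.connecting = undefined;
  if (!old) return;
  // 2. Drop stale 'disconnected' handlers so the old browser can't race
  //    a fresh connection and null the new cached reference.
  old.removeAllListeners();
  // 3. Fire-and-forget close: it may hang on the stuck pipe, never await it.
  void old.close().catch(() => {});
}
```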
* fix(browser): v5 - use raw CDP Runtime.terminateExecution to kill stuck evaluate
When forceDisconnectPlaywrightForTarget fires, open a raw WebSocket
to the stuck page's CDP endpoint and send Runtime.terminateExecution.
This kills running JS without navigating away or crashing the page.
Also clear connecting state and remove stale disconnect listeners.
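A sketch of the raw-CDP escape hatch, assuming the 'ws' package and a page-level CDP WebSocket URL:

```ts
import WebSocket from "ws"; // assumed dependency

// Bypass the wedged Playwright transport entirely: open a second
// WebSocket to the stuck page's CDP endpoint and terminate the running
// JS in place (no navigation, no page crash).
function terminateStuckExecution(cdpWsUrl: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(cdpWsUrl);
    ws.on("open", () => {
      ws.send(JSON.stringify({ id: 1, method: "Runtime.terminateExecution" }));
    });
    ws.on("message", () => {
      // First reply acknowledges the command; we are done.
      ws.close();
      resolve();
    });
    ws.on("error", reject);
  });
}
```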
* fix(browser): abort cancels stuck evaluate
* Browser: always cleanup evaluate abort listener
* Chore: remove Playwright debug scripts
* Docs: add CDP evaluate refactor plan
* Browser: refactor Playwright force-disconnect
* Browser: abort stops evaluate promptly
* Node host: extract withTimeout helper
* Browser: remove disconnected listener safely
* Changelog: note act:evaluate hang fix
---------
Co-authored-by: Bob <bob@dutifulbob.com>
- Add [Historical context: ...] marker pattern to stripDowngradedToolCallText
- Apply stripDowngradedToolCallText in emitBlockChunk streaming path
- Previously only stripBlockTags ran during streaming, leaking [Tool Call: ...] markers to users
- Add 7 test cases for the new pattern stripping
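A sketch of the marker stripping; the exact regexes inside stripDowngradedToolCallText are assumptions:

```ts
// Markers that must never leak to users during streaming.
const DOWNGRADED_MARKERS = [
  /\[Tool Call: [^\]]*\]/g,
  /\[Historical context: [^\]]*\]/g,
];

function stripDowngradedToolCallText(text: string): string {
  return DOWNGRADED_MARKERS.reduce((out, re) => out.replace(re, ""), text);
}
```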
* fix(signal): enforce mention gating for group messages
Signal group messages bypassed mention gating, causing the bot to reply
even when requireMention was enabled and the message did not mention
the bot. This aligns Signal with Slack, Discord, Telegram, and iMessage
which all enforce mention gating correctly.
Fixes #13106
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(signal): keep pending history context for mention-gated skips (#13124) (thanks @zerone0x)
---------
Co-authored-by: Yansu <no-reply@yansu.ai>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
* fix: prune stale session entries, cap entry count, and rotate sessions.json
The sessions.json file grows unbounded over time. Every heartbeat tick (default: 30m)
triggers multiple full rewrites, and session keys from groups, threads, and DMs
accumulate indefinitely with large embedded objects (skillsSnapshot,
systemPromptReport). At >50MB the synchronous JSON parse blocks the event loop,
causing Telegram webhook timeouts and effectively taking the bot down.
Three mitigations, all running inside saveSessionStoreUnlocked() on every write:
1. Prune stale entries: remove entries with updatedAt older than 30 days
(configurable via session.maintenance.pruneDays in openclaw.json)
2. Cap entry count: keep only the 500 most recently updated entries
(configurable via session.maintenance.maxEntries). Entries without updatedAt
are evicted first.
3. File rotation: if the existing sessions.json exceeds 10MB before a write,
rename it to sessions.json.bak.{timestamp} and keep only the 3 most recent
backups (configurable via session.maintenance.rotateBytes).
All three thresholds are configurable under session.maintenance in openclaw.json
with Zod validation. No env vars.
Existing tests updated to use Date.now() instead of epoch-relative timestamps
(1, 2, 3) that would be incorrectly pruned as stale.
27 new tests covering pruning, capping, rotation, and integration scenarios.
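A sketch of the prune and cap steps (rotation is file I/O and omitted here); the helper name is illustrative, the defaults mirror the description above:

```ts
type SessionEntry = { updatedAt?: number; [k: string]: unknown };

function pruneAndCap(
  entries: Record<string, SessionEntry>,
  pruneDays = 30,
  maxEntries = 500,
): Record<string, SessionEntry> {
  const cutoff = Date.now() - pruneDays * 24 * 60 * 60 * 1000;
  const kept = Object.entries(entries)
    // 1. Prune: drop entries whose updatedAt is older than the cutoff.
    .filter(([, e]) => e.updatedAt === undefined || e.updatedAt >= cutoff)
    // 2. Cap: most recently updated first; entries without updatedAt
    //    sort last, so they are evicted first.
    .sort(([, a], [, b]) => (b.updatedAt ?? -1) - (a.updatedAt ?? -1))
    .slice(0, maxEntries);
  return Object.fromEntries(kept);
}
```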
* feat: auto-prune expired cron run sessions (#12289)
Add TTL-based reaper for isolated cron run sessions that accumulate
indefinitely in sessions.json.
New config option:
cron.sessionRetention: string | false (default: '24h')
The reaper piggybacks on the cron timer tick, self-throttled
to sweep at most every 5 minutes. It removes session entries matching
the pattern cron:<jobId>:run:<uuid> whose updatedAt + retention < now.
Design follows the Kubernetes ttlSecondsAfterFinished pattern:
- Sessions are persisted normally (observability/debugging)
- A periodic reaper prunes expired entries
- Configurable retention with sensible default
- Set to false to disable pruning entirely
Files changed:
- src/config/types.cron.ts: Add sessionRetention to CronConfig
- src/config/zod-schema.ts: Add Zod validation for sessionRetention
- src/cron/session-reaper.ts: New reaper module (sweepCronRunSessions)
- src/cron/session-reaper.test.ts: 12 tests covering all paths
- src/cron/service/state.ts: Add cronConfig/sessionStorePath to deps
- src/cron/service/timer.ts: Wire reaper into onTimer tick
- src/gateway/server-cron.ts: Pass config and session store path to deps
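A sketch of the reaper's core loop; the key regex and store shape are assumptions consistent with the cron:<jobId>:run:<uuid> pattern:

```ts
const CRON_RUN_KEY = /^cron:[^:]+:run:[0-9a-f-]{36}$/i;

let lastSweep = 0;
const SWEEP_INTERVAL_MS = 5 * 60 * 1000; // self-throttle: at most every 5m

function sweepCronRunSessions(
  store: Record<string, { updatedAt?: number }>,
  retentionMs: number | false, // false disables pruning entirely
  now = Date.now(),
): void {
  if (retentionMs === false || now - lastSweep < SWEEP_INTERVAL_MS) return;
  lastSweep = now;
  for (const [key, entry] of Object.entries(store)) {
    if (!CRON_RUN_KEY.test(key)) continue;
    if ((entry.updatedAt ?? 0) + retentionMs < now) delete store[key];
  }
}
```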
Closes #12289
* fix: sweep cron session stores per agent
* docs: add changelog for session maintenance (#13083) (thanks @skyfallsin, @Glucksberg)
* fix: add warn-only session maintenance mode
* fix: warn-only maintenance defaults to active session
* fix: deliver maintenance warnings to active session
* docs: add session maintenance examples
* fix: accept duration and size maintenance thresholds
* refactor: share cron run session key check
* fix: format issues and replace defaultRuntime.warn with console.warn
---------
Co-authored-by: Pradeep Elankumaran <pradeepe@gmail.com>
Co-authored-by: Glucksberg <markuscontasul@gmail.com>
Co-authored-by: max <40643627+quotentiroler@users.noreply.github.com>
Co-authored-by: quotentiroler <max.nussbaumer@maxhealth.tech>
* fix(tools): correct Grok response parsing for xAI Responses API
The xAI Responses API returns content in output[0].content[0].text,
not in the output_text field. Updated the GrokSearchResponse type and
runGrokSearch to extract content from the correct path.
Fixes the 'No response' issue when using Grok web search.
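A sketch of the corrected extraction path; extra response fields omitted for brevity:

```ts
type GrokSearchResponse = {
  output?: Array<{ content?: Array<{ text?: string }> }>;
};

// Read output[0].content[0].text instead of a (nonexistent) top-level
// output_text field.
function extractGrokText(response: GrokSearchResponse): string | undefined {
  return response.output?.[0]?.content?.[0]?.text;
}
```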
* fix(tools): harden Grok web_search parsing (#13049) (thanks @ereid7)
---------
Co-authored-by: erai <erai@erais-Mac-mini.local>
Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
* fix: suggest /reset in context overflow error message
When the context window overflows, the error message now suggests
using /reset to clear session history, giving users an actionable
recovery path instead of a dead-end error.
Closes #12940
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: suggest /reset in context overflow error message (#12973) (thanks @RamiNoodle733)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Rami Abdelrazzaq <RamiNoodle733@users.noreply.github.com>
* fix: cap Discord gateway reconnect attempts to prevent infinite loop
The Discord GatewayPlugin was configured with maxAttempts: Infinity,
which causes an unbounded reconnection loop when the Discord gateway
enters a persistent failure state (e.g. code 1005 with stalled HELLO).
In production, this manifested as 2,483+ reconnection attempts in a
single log file, starving the Node.js event loop and preventing cron,
heartbeat, and other subsystems from functioning.
Cap maxAttempts at 50, which provides ~25 minutes of retry time
(with 30s HELLO timeout between attempts) before cleanly exiting
via the existing "Max reconnect attempts" error handler.
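An illustrative sketch of the cap; the GatewayPlugin options shape is an assumption:

```ts
// 50 attempts x ~30s HELLO timeout gives roughly 25 minutes of retries
// before the existing "Max reconnect attempts" handler exits cleanly.
const RECONNECT_MAX_ATTEMPTS = 50; // was: Infinity

const gatewayOptions = {
  reconnect: { maxAttempts: RECONNECT_MAX_ATTEMPTS },
};
```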
Closes #11836
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Changelog: note Discord gateway reconnect cap (#12230) (thanks @Yida-Dev)
---------
Co-authored-by: Yida-Dev <reyifeijun@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Shadow <shadow@clawd.bot>