Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-04-30 03:00:41 -04:00)
Branch: seer/fix-missing-name-validation (8385 commits)

17cbeca2d6
fix(backend): use model_validate instead of model_construct for AgentInput/OutputBlock
Agent-Logs-Url: https://github.com/Significant-Gravitas/AutoGPT/sessions/cca18525-f4df-4eef-898e-1b3cb2ba7600
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
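For context, a minimal pydantic v2 sketch of the distinction this commit relies on; the `Input` model below is a stand-in, not the real AgentInput/OutputBlock schema:

```python
from pydantic import BaseModel, ValidationError

class Input(BaseModel):
    name: str

data = {"name": 123}

# model_construct skips validation entirely: the bad value is stored as-is.
unchecked = Input.model_construct(**data)

# model_validate runs the validators and rejects (or coerces) bad input.
try:
    Input.model_validate(data)
except ValidationError as exc:
    print(exc)
```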

39dc4c2f6c
Merge branch 'dev' into seer/fix-missing-name-validation

bd2efed080
fix(frontend): allow zooming out more in the builder (#12690)
Reduced minZoom on the builder canvas from 0.1 to 0.05 to allow zooming out further when working with large agent graphs. Fixes #9325
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>

5fccd8a762
Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev

2740b2be3a
fix(backend/copilot): disable fallback model to fix prod CLI rejection (#12802)
### Why / What / How
**Why:** `fffbe0aad8` changed both `ChatConfig.model` and `ChatConfig.claude_agent_fallback_model` to `claude-sonnet-4-6`. The Claude Code CLI rejects this with `Error: Fallback model cannot be the same as the main model`, causing every standard-mode copilot turn to fail with exit code 1 — the session "completes" in ~30s but produces no response and drops the transcript.
**What:** Set `claude_agent_fallback_model` default to `""`. `_resolve_fallback_model()` already returns `None` on empty string, which means the `--fallback-model` flag is simply not passed to the CLI. On 529 overload errors the turn will surface normally instead of silently retrying with a fallback.
**How:** One-line config change + test update.
### Changes 🏗️
- `ChatConfig.claude_agent_fallback_model` default: `"claude-sonnet-4-6"` → `""`
- Update `test_fallback_model_default` to assert the empty default
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
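A minimal sketch of the behaviour described above; `_resolve_fallback_model` is named in the PR, but its body and the CLI argument assembly here are illustrative assumptions:

```python
def _resolve_fallback_model(configured: str) -> str | None:
    # An empty default means "no fallback"; the flag is simply omitted.
    return configured or None

def build_cli_args(model: str, fallback: str = "") -> list[str]:
    args = ["--model", model]
    resolved = _resolve_fallback_model(fallback)
    if resolved is not None:
        args += ["--fallback-model", resolved]
    return args

assert build_cli_args("claude-sonnet-4-6") == ["--model", "claude-sonnet-4-6"]
```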

d27d22159d
Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev

fffbe0aad8
fix(backend): default copilot sonnet to 4.6 (#12799)
### Why / What / How
Why: Copilot/Autopilot standard requests were still defaulting to Claude
Sonnet 4, while the expected default for this path is Sonnet 4.6.
What: This PR updates the backend Copilot defaults so the
standard/default path and fast path use Sonnet 4.6, and aligns the SDK
fallback model and related test expectations.
How: It changes `ChatConfig.model`, `ChatConfig.fast_model`, and
`ChatConfig.claude_agent_fallback_model` to Sonnet 4.6 values, then
updates backend tests that assert the default Sonnet model strings.
### Changes 🏗️
- Switch `ChatConfig.model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.fast_model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.claude_agent_fallback_model` from
`claude-sonnet-4-20250514` to `claude-sonnet-4-6`
- Update backend Copilot tests that assert the default Sonnet model
strings
- Configuration changes:
- No new environment variables or docker-compose changes are required
- Existing `.env.default` and compose files remain compatible because
this only changes backend default model values in code
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run pytest
backend/copilot/baseline/transcript_integration_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_helpers_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_test.py`
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Changes default/fallback LLM model identifiers for Copilot requests,
which can affect runtime behavior, cost, and availability
characteristics across both baseline and SDK paths. Risk is mitigated by
being a small, config-only change with updated tests.
>
> **Overview**
> Updates Copilot backend defaults so both the standard (`model`) and
fast (`fast_model`) paths use `anthropic/claude-sonnet-4-6`, and aligns
the Claude Agent SDK fallback model to `claude-sonnet-4-6`.
>
> Adjusts related test expectations in baseline transcript integration
and SDK helper tests to match the new Sonnet 4.6 model strings.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit

df205b5444
fix(backend/copilot): strip CLI session file to prevent auto-compaction context loss
The Claude Code CLI auto-compacts its native session JSONL when the context approaches the model's token limit (~200K for Sonnet). After compaction the detailed conversation history is replaced by a ~27K-token summary, causing the silent context loss users see as memory failures in long sessions.
Root cause identified from production logs for session 93ecf7c9:
- T6 CLI session: 233KB / ~207K tokens (near Sonnet limit)
- T7 CLI compacted session -> ~167KB / ~47K tokens (PreCompact hook missed)
- T12 second compaction -> ~176KB / ~27K tokens (just system prompt + summary)
- T14-T21: cache_read=26714 constantly -- only system prompt visible to Claude
The same stripping we already apply to our transcript (stale thinking blocks, progress/metadata entries) now also runs on the CLI native session file. At ~2x the size of the stripped transcript, unstripped sessions routinely hit the compaction threshold within 6-10 turns of a heavy Opus/thinking session.
After stripping:
- same-pod turns reuse the stripped local file (no compaction trigger)
- cross-pod turns restore the stripped GCS file (same benefit)
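A rough sketch of the stripping idea, assuming the CLI session file is JSONL with a per-entry type field; the entry-type names and the real stripping rules are assumptions, not the backend's actual implementation:

```python
import json

STRIP_TYPES = {"thinking", "progress", "metadata"}  # assumed entry types

def strip_session_jsonl(raw: str) -> str:
    kept = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        # Drop bulk-only entries; keep the conversational ones --resume needs.
        if entry.get("type") in STRIP_TYPES:
            continue
        kept.append(line)
    return "\n".join(kept) + "\n"
```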

4efa1c4310
fix(copilot): set session_id on mode-switch T1 to enable --resume on subsequent turns
When a user switches from baseline (fast) mode to SDK (extended_thinking) mode mid-session, the first SDK turn has has_history=True (prior baseline messages in DB) but no CLI session file in storage. The old code gated session_id on `not has_history`, so mode-switch T1 never received a session_id — the CLI generated a random ID that wasn't uploaded under the expected key. Every subsequent SDK turn would fail to restore the CLI session and run without --resume, injecting the full compressed history on each turn, causing model confusion.
Fix: set session_id whenever not using --resume (the `else` branch), covering T1 fresh, mode-switch T1, and T2+ fallback turns. The retry path is updated to use `"session_id" in sdk_options_kwargs` as the discriminator (instead of `not has_history`) so mode-switch T1 retries also keep the session_id while T2+ retries (where T1 restored a session file via restore_cli_session) still remove it to avoid "Session ID already in use".
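A minimal sketch of the selection rule described above; names follow the commit message, the surrounding SDK option plumbing is assumed:

```python
def build_session_kwargs(use_resume: bool, cli_session_id: str) -> dict:
    if use_resume:
        # T2+ with a restored CLI session file: resume the existing session.
        return {"resume": cli_session_id}
    # T1 fresh, mode-switch T1, and T2+ fallback turns: pin the session id so the
    # CLI session file lands under a predictable key for the next turn.
    # On retry, `"session_id" in sdk_options_kwargs` decides whether to keep it.
    return {"session_id": cli_session_id}
```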

ab3221a251
feat(backend): MemoryEnvelope metadata model, scoped retrieval, and memory hardening (#12765)
### Why / What / How
**Why:** CoPilot's Graphiti memory system needed structured metadata to
distinguish memory types (rules, procedures, facts, preferences),
support scoped retrieval, enable targeted deletion, and track memory
costs under the AutoPilot billing account separately from the platform.
**What:** Adds the MemoryEnvelope metadata model, structured
rule/procedure memory types, a derived-finding lane for
assistant-distilled knowledge, two-step forget tools, scope-aware
retrieval filtering, AutoPilot-dedicated API key routing, and several
reliability fixes (streaming socket leaks, event-loop-scoped caches,
ingestion hardening).
**How:** MemoryEnvelope wraps every stored episode with typed metadata
(source_kind, memory_kind, scope, status, confidence) serialized as
JSON. Retrieval filters by scope at the context layer. The forget flow
uses a search-then-confirm two-step pattern. Ingestion queues and client
caches are scoped per event loop via WeakKeyDictionary to prevent
cross-loop RuntimeErrors in multi-worker deployments. API key resolution
falls back to AutoPilot-dedicated keys (CHAT_API_KEY,
CHAT_OPENAI_API_KEY) before platform-wide keys.
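A hypothetical sketch of the MemoryEnvelope idea described above; the field and enum names follow the PR text, but the real model in `memory_model.py` may differ in shape and defaults:

```python
from enum import Enum
from pydantic import BaseModel

class MemoryKind(str, Enum):
    fact = "fact"
    preference = "preference"
    rule = "rule"
    finding = "finding"
    plan = "plan"
    event = "event"
    procedure = "procedure"

class MemoryEnvelope(BaseModel):
    content: str
    source_kind: str           # user_asserted / assistant_derived / tool_observed
    memory_kind: MemoryKind
    scope: str                 # e.g. "real:global", "project:<name>", "session:<id>"
    status: str = "active"     # active / tentative / superseded / contradicted
    confidence: float | None = None

# Stored episodes carry the envelope as JSON so retrieval can filter by scope.
episode_body = MemoryEnvelope(
    content="User prefers dark mode",
    source_kind="user_asserted",
    memory_kind=MemoryKind.preference,
    scope="real:global",
).model_dump_json()
```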
### Changes 🏗️
**New: MemoryEnvelope metadata model** (`memory_model.py`)
- Typed memory categories: fact, preference, rule, finding, plan, event,
procedure
- Source tracking: user_asserted, assistant_derived, tool_observed
- Scope namespacing: `real:global`, `project:<name>`, `book:<title>`,
`session:<id>`
- Status lifecycle: active, tentative, superseded, contradicted
- Structured `RuleMemory` and `ProcedureMemory` models for complex
instructions
**New: Targeted forget tools** (`graphiti_forget.py`)
- `memory_forget_search`: returns candidate facts with UUIDs for user
confirmation
- `memory_forget_confirm`: deletes specific edges by UUID after
confirmation
**New: Architecture test** (`architecture_test.py`)
- Validates no new `@cached(...)` usage around event-loop-bound async
clients
- Allowlists pre-existing violations for future cleanup
**Enhanced: memory_store tool** (`graphiti_store.py`)
- Accepts MemoryEnvelope metadata fields (source_kind, scope,
memory_kind, rule, procedure)
- Wraps content in MemoryEnvelope before ingestion
**Enhanced: memory_search tool** (`graphiti_search.py`)
- Scope-aware retrieval with hard filtering on group_id
**Enhanced: Ingestion pipeline** (`ingest.py`)
- Derived-finding lane: distills substantive assistant responses into
tentative findings
- Event-loop-scoped queues and workers via WeakKeyDictionary (fixes
multi-worker RuntimeError)
- Improved error handling and dropped-episode reporting
**Enhanced: Client cache** (`client.py`)
- Per-loop client cache and lock via WeakKeyDictionary (fixes "Future
attached to a different loop")
**Enhanced: Warm context** (`context.py`)
- Filters out non-global-scope episodes from warm context
**Fix: Streaming socket leak** (`baseline/service.py`)
- try/finally around async stream iteration to release httpx connections
on early exit
**Config: AutoPilot key routing** (`config.py`, `.env.default`)
- LLM key fallback: GRAPHITI_LLM_API_KEY → CHAT_API_KEY →
OPEN_ROUTER_API_KEY
- Embedder key fallback: GRAPHITI_EMBEDDER_API_KEY → CHAT_OPENAI_API_KEY
→ OPENAI_API_KEY
- Backwards-compatible: existing behavior unchanged until new keys are
provisioned
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/graphiti/config_test.py` — 16
tests pass (key fallback priority)
- [x] `poetry run pytest backend/copilot/tools/graphiti_store_test.py` —
store envelope tests pass
- [x] `poetry run pytest backend/copilot/graphiti/ingest_test.py` —
ingestion tests pass
- [x] `poetry run pytest backend/util/architecture_test.py` — structural
validation passes
- [x] Verify memory store/retrieve/forget cycle via copilot chat
- [x] Run AgentProbe multi-session memory benchmark (31 scenarios x3
repeats)
- [x] Confirm no CLOSE_WAIT socket accumulation under sustained
streaming load
- [x] Verify multi-worker deployment doesn't produce loop-binding errors
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- Configuration changes:
- New optional env var `CHAT_OPENAI_API_KEY` — AutoPilot-dedicated
OpenAI key for Graphiti embeddings (falls back to `OPENAI_API_KEY` if
not set)
- `CHAT_API_KEY` now used as first fallback for Graphiti LLM calls (was
`OPEN_ROUTER_API_KEY`)
- Infra action needed: add `CHAT_OPENAI_API_KEY` sealed secret in
`autogpt-shared-config` values (dev + prod)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Touches Graphiti memory ingestion/retrieval and introduces hard-delete
capabilities plus event-loop–scoped caching/queues; failures could
affect memory correctness or delete the wrong edges. Also changes
streaming resource cleanup and key routing, which could surface as
connection or billing/cost attribution issues if misconfigured.
>
> **Overview**
> **Graphiti memory is upgraded from plain text episodes to a structured
JSON `MemoryEnvelope`.** `memory_store` now wraps content with typed
metadata (source, kind, scope, status) and optional structured
`rule`/`procedure` payloads, and ingestion supports JSON episodes.
>
> **Memory retrieval and lifecycle controls are expanded.**
`memory_search` adds optional scope hard-filtering to prevent
cross-scope leakage, warm-context formatting drops non-global scoped
episodes (and avoids empty wrappers), and new two-step tools
(`memory_forget_search` → `memory_forget_confirm`) enable targeted soft-
or hard-deletion of specific graph edges by UUID.
>
> **Reliability and multi-worker safety improvements.** Graphiti client
caching and ingestion worker registries are now per-event-loop (avoiding
cross-loop `Future` errors), streaming chat completions explicitly close
async streams to prevent `CLOSE_WAIT` socket leaks, warm-context is
injected into the first user message to keep the system prompt
cacheable, and a new `architecture_test.py` blocks future process-wide
caching of event-loop–bound async clients. Config updates route Graphiti
LLM/embedder keys to AutoPilot-specific env vars first, and OpenAPI
schema exports include the new memory response types.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
autogpt-platform-beta-v0.6.56

b2f7faabc7
fix(backend/copilot): pre-create assistant msg before first yield to prevent last_role=tool (#12797)
## Changes
**Root cause:** When a copilot session ends with a tool result as the last saved message (`last_role=tool`), the next assistant response is never persisted. This happens when:
1. An intermediate flush saves the session with `last_role=tool` (after a tool call completes)
2. The Claude Agent SDK generates a text response for the next turn
3. The client disconnects (`GeneratorExit`) at the `yield StreamStartStep` — the very first yield of the new turn
4. `_dispatch_response(StreamTextDelta)` is never called, so the assistant message is never appended to `ctx.session.messages`
5. The session `finally` block persists the session still with `last_role=tool`
**Fix:** In `_run_stream_attempt`, after `convert_message()` returns the full list of adapter responses but *before* entering the yield loop, pre-create the assistant message placeholder in `ctx.session.messages` when:
- `acc.has_tool_results` is True (there are pending tool results)
- `acc.has_appended_assistant` is True (at least one prior message exists)
- A `StreamTextDelta` is present in the batch (confirms this is a text response turn)
This ensures that even if `GeneratorExit` fires at the first `yield`, the placeholder assistant message is already in the session and will be persisted by the `finally` block.
**Tests:** Added `session_persistence_test.py` with 7 unit tests covering the pre-create condition logic and delta accumulation behavior.
**Confirmed:** Langfuse trace `e57ebd26` for session `465bf5cf-7219-4313-a1f6-5194d2a44ff8` showed the final assistant response was logged at 13:06:49 but never reached DB — session had 51 messages with `last_role=tool`.
## Checklist
- [x] My code follows the code style of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A)
- [x] My changes generate no new warnings (Pyright warnings are pre-existing)
- [x] I have added tests that prove my fix is effective
- [x] New and existing unit tests pass locally with my changes
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
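A condensed sketch of the pre-create guard this PR describes; the field names follow the PR text, while the `Accumulator` class and the type-name check stand in for the real stream machinery:

```python
from dataclasses import dataclass

@dataclass
class Accumulator:
    has_tool_results: bool = False
    has_appended_assistant: bool = False

def should_precreate_assistant_message(acc: Accumulator, batch: list) -> bool:
    has_text_delta = any(type(item).__name__ == "StreamTextDelta" for item in batch)
    # Only pre-create when a text response is actually coming, there are pending
    # tool results, and at least one prior assistant message exists.
    return acc.has_tool_results and acc.has_appended_assistant and has_text_delta
```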

c9fa6bcd62
fix(backend/copilot): make system prompt fully static for cross-user prompt caching (#12790)
### Why / What / How
**Why:** Anthropic prompt caching keys on exact system prompt content. Two sources of per-session dynamic data were leaking into the system prompt, making it unique per session/user — causing a full 28K-token cache write (~$0.10 on Sonnet) on *every* first message for *every* session instead of once globally per model.
**What:**
1. `get_sdk_supplement` was embedding the session-specific working directory (`/tmp/copilot-<uuid>`) in the system prompt text. Every session has a different UUID, making every session's system prompt unique, blocking cross-session cache hits.
2. Graphiti `warm_ctx` (user-personalised memory facts fetched on the first turn) was appended directly to the system prompt, making it unique per user per query.
**How:**
- `get_sdk_supplement` now uses the constant placeholder `/tmp/copilot-<session-id>` in the supplement text and memoizes the result. The actual `cwd` is still passed to `ClaudeAgentOptions.cwd` so the CLI subprocess uses the correct session directory.
- `warm_ctx` is now injected into the first user message as a trusted `<memory_context>` block (prepended before `inject_user_context` runs), following the same pattern already used for business understanding. It is persisted to DB and replayed correctly on `--resume`.
- `sanitize_user_supplied_context` now also strips user-supplied `<memory_context>` tags, preventing context-spoofing via the new tag.
After this change the system prompt is byte-for-byte identical across all users and sessions for a given model.
### Changes 🏗️
- `backend/copilot/prompting.py`: `get_sdk_supplement` ignores `cwd` and uses a constant working-directory placeholder; result is memoized in `_LOCAL_STORAGE_SUPPLEMENT`.
- `backend/copilot/sdk/service.py`: `warm_ctx` is saved to a local variable instead of appended to `system_prompt`; on the first turn it is prepended to `current_message` as a `<memory_context>` block before `inject_user_context` is called.
- `backend/copilot/service.py`: `sanitize_user_supplied_context` extended to strip `<memory_context>` blocks alongside `<user_context>`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `poetry run pytest backend/copilot/prompting_test.py backend/copilot/prompt_cache_test.py` — all passed
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
autogpt-platform-beta-v0.6.55
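A small sketch of the working-directory change described in this PR; the wording of the supplement and the use of `lru_cache` are illustrative assumptions:

```python
import functools

_CWD_PLACEHOLDER = "/tmp/copilot-<session-id>"  # constant across all sessions

@functools.lru_cache(maxsize=1)
def get_sdk_supplement() -> str:
    # The placeholder keeps the system prompt byte-identical across sessions;
    # the real per-session path is still passed to ClaudeAgentOptions.cwd.
    return f"Session files are stored under {_CWD_PLACEHOLDER}."
```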

c955b3901c
fix(frontend/copilot): load older chat messages reliably and preserve scrollback across turns (#12792)
### Why / What / How
Fixes two SECRT-2226 bugs in copilot chat pagination.
**Bug 1 — can't load older messages when the newest page fits on screen.** The `IntersectionObserver` in `LoadMoreSentinel` bailed when `scrollHeight <= clientHeight`, which happens routinely once reasoning + tool groups collapse. With no scrollbar and no button, users were stuck. Fix: remove the guard, cap auto-fill at 3 non-scrollable rounds (keeps the original anti-loop intent), and add a manual "Load older messages" button as the always-available escape hatch.
**Bug 2 — older loaded pages vanish after a new turn, then reloading them produces duplicates.** After each stream `useCopilotStream` invalidates the session query; the refetch returns a shifted `oldest_sequence`, which `useLoadMoreMessages` used as a signal to wipe `olderRawMessages` and reset the local cursor. Scroll-back history was lost on every turn, and the next load fetched a page that overlapped with AI SDK's retained `currentMessages` — the "loops" users reported. Fix: once any older page is loaded, preserve `olderRawMessages` and the local cursor across same-session refetches. Only reset on session change. The gap between the new initial window and older pages is covered by AI SDK's retained state.
### Changes 🏗️
- `ChatMessagesContainer.tsx`: drop the scrollability guard; add `MAX_AUTO_FILL_ROUNDS = 3` counter; add "Load older messages" button (`ghost`/`small`); distinguish observer-triggered vs. button-triggered loads so the button bypasses the cap; export `LoadMoreSentinel` for testing.
- `useLoadMoreMessages.ts`: remove the wipe-and-reset branch on `initialOldestSequence` change; preserve local state mid-session; still mirror parent's cursor while no older page is loaded.
- New integration test `__tests__/LoadMoreSentinel.test.tsx`.
No backend changes.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Short/collapsed newest page: "Load older messages" button loads older pages, preserves scroll
  - [x] Full-viewport newest page: scroll-to-top auto-pagination still works (no regression)
  - [x] `has_more_messages=false` hides the button; `isLoadingMore=true` shows spinner instead
  - [x] Bug 2 reproduced locally with temporary `limit=5`: before fix older page vanished and next load duplicated AI SDK messages; after fix older page stays and next load fetches cleanly further back
  - [x] `pnpm format`, `pnpm lint`, `pnpm types`, `pnpm test:unit` all pass (1208/1208)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**) — N/A
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

56864aea87
fix(copilot/frontend): align ModelToggleButton styling + add execution ID filter to platform cost page (#12793)
## Why
Two fixes bundled together:
1. **ModelToggleButton styling**: after merging the ModelToggleButton feature, the "Standard" state was invisible — no background, no label — while "Advanced" had a colored pill. This was inconsistent with `ModeToggleButton` where both states (Fast / Thinking) always show a colored background + label.
2. **Execution ID filter on platform cost admin page**: admins needed to look up cost rows for a specific agent run but had no way to filter by `graph_exec_id`. All other identifiers (user, model, provider, block, tracking type) were already filterable.
## What
- **ModelToggleButton**: inactive (Standard) state now uses `bg-neutral-100 text-neutral-700 hover:bg-neutral-200` (same palette as ModeToggleButton inactive), always shows the "Standard" label.
- **Platform cost admin page**: added `graph_exec_id` query filter across the full stack — backend service functions, FastAPI route handlers, generated TypeScript params types, `usePlatformCostContent` hook, and the filter UI in `PlatformCostContent`.
## How
### ModelToggleButton
Changed the inactive-state class from hover-only transparent to always-visible neutral background, and added the "Standard" text label (was empty before — only the CPU icon showed).
### Execution ID filter
Added `graph_exec_id: str | None = None` parameter to:
- `_build_prisma_where` — applies `where["graphExecId"] = graph_exec_id`
- `get_platform_cost_dashboard`, `get_platform_cost_logs`, `get_platform_cost_logs_for_export`
- All three FastAPI route handlers (`/dashboard`, `/logs`, `/logs/export`)
- Generated TypeScript params types
- `usePlatformCostContent`: new `executionIDInput` / `setExecutionIDInput` state, wired into `filterParams`, `handleFilter`, and `handleClear`
- `PlatformCostContent`: new Execution ID input field in the filter bar
## Changes
- [x] I have explained why I made the changes, not just what I changed
- [x] There are no unrelated changes in this PR
- [x] I have run the relevant linters and tests before submitting
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>

d23ca824ad
fix(copilot): set session_id on mode-switch T1 to enable --resume on subsequent SDK turns (#12795)
## Why
When a user switches from **baseline** (fast) mode to **SDK** (extended_thinking) mode mid-session, every subsequent SDK turn started fresh with no memory of prior conversation.
Root cause: two complementary bugs on mode-switch T1 (first SDK turn after baseline turns):
1. `session_id` was gated on `not has_history`. On mode-switch T1, `has_history=True` (prior baseline turns in DB) so no `session_id` was set. The CLI generated a random ID and could not upload the session file under a predictable path → `--resume` failed on every following SDK turn.
2. Even if `session_id` were set, the upload guard `(not has_history or state.use_resume)` would block the session file upload on mode-switch T1 (`has_history=True`, `use_resume=False`), so the next turn still cannot `--resume`.
Together these caused every SDK turn to re-inject the full compressed history, causing model confusion (proactive tool calls, forgetting context) observed in session `8237a27b-45d0-4688-af20-c185379e926f`.
## What
- **`service.py`**: Change `elif not has_history:` → `else:` for the `session_id` assignment — set it whenever `--resume` is not active. Covers T1 fresh, mode-switch T1 (`has_history=True` but no CLI session exists), and T2+ fallback turns where restore failed.
- **`service.py` retry path**: Replace `not has_history` with `"session_id" in sdk_options_kwargs` as the discriminator, so mode-switch T1 retries also keep `session_id` while T2+ retries (where `restore_cli_session` put a file on disk) correctly remove it to avoid "Session ID already in use".
- **`service.py` upload guard**: Remove `and not skip_transcript_upload` and `and (not has_history or state.use_resume)` from the `upload_cli_session` guard. The CLI session file is independent of the JSONL transcript; and upload must run on mode-switch T1 so the next turn can `--resume`. `upload_cli_session` silently skips when the file is absent, so unconditional upload is always safe.
## How

| Scenario | Before | After |
|---|---|---|
| T1 fresh (`has_history=False`) | `session_id` set ✓ | `session_id` set ✓ |
| Mode-switch T1 (`has_history=True`, no CLI session) | ❌ not set — **bug** | `session_id` set ✓ |
| T2+ with `--resume` | `resume` set ✓ | `resume` set ✓ |
| T2+ retry after `--resume` failed | `session_id` removed ✓ | `session_id` removed ✓ |
| Mode-switch T1 retry | `session_id` removed ❌ | `session_id` kept ✓ |
| Upload on mode-switch T1 | ❌ blocked by guard — **bug** | uploaded ✓ |

7 new unit tests in `TestSdkSessionIdSelection` document all session_id cases. 6 new tests in `mode_switch_context_test.py` cover transcript bridging for both fast→SDK and SDK→fast switches.
## Checklist
- [x] I have read the contributing guidelines
- [x] My changes are covered by tests
- [x] `poetry run format` passes
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>

227c60abd3
fix(backend/copilot): idempotency guard + frontend dedup fix for duplicate messages (#12788)
## Why
After merging #12782 to dev, a k8s rolling deployment triggered infrastructure-level POST retries — nginx detected the old pod's connection reset mid-stream and resent the same POST to a new pod. Both pods independently saved the user message and ran the executor, producing duplicate entries in the DB (seq 159, 161, 163) and a duplicate response in the chat. The model saw the same question 3× in its context window and spent its response commenting on that instead of answering.
Two compounding issues:
1. **No backend idempotency**: `append_and_save_message` saves unconditionally — k8s/nginx retries silently produce duplicate turns.
2. **Frontend dedup cleared after success**: `lastSubmittedMsgRef.current = null` after every completed turn wipes the dedup guard, so any rapid re-submit of the same text (from a stalled UI or user double-click) slips through.
## What
**Backend** — Redis idempotency gate in `stream_chat_post`:
- Before saving the user message, compute `sha256(session_id + message)[:16]` and `SET NX ex=30` in Redis
- If key already exists → duplicate: return empty SSE (`StreamFinish + [DONE]`) immediately, skip save + executor enqueue
- User messages only (`is_user_message=True`); system/assistant messages bypass the check
**Frontend** — Keep `lastSubmittedMsgRef` populated after success:
- Remove `lastSubmittedMsgRef.current = null` on stream complete
- `getSendSuppressionReason` already has a two-condition check: `ref === text AND lastUserMsg === text` — so legitimate re-asks (after a different question was answered) still work; only rapid re-sends of the exact same text while it's still the last user message are blocked
## How
- 30 s Redis TTL covers infrastructure retry windows (k8s SIGTERM → connection reset → ingress retry typically < 5 s)
- Empty SSE response is well-formed (StreamFinish + [DONE]) — frontend AI SDK marks the turn complete without rendering a ghost message
- Frontend ref kept live means: submit "foo" → success → submit "foo" again instantly → suppressed. Submit "foo" → success → submit "bar" → proceeds (different text updates the ref).
## Tests
- 3 new backend route tests: duplicate blocked, first POST proceeds, non-user messages bypass
- 5 new frontend `getSendSuppressionReason` unit tests: fresh ref, reconnecting, duplicate suppressed, different-turn re-ask allowed, different text allowed
## Checklist
- [x] I have read the [AutoGPT Contributing Guide](https://github.com/Significant-Gravitas/AutoGPT/blob/master/CONTRIBUTING.md)
- [x] I have performed a self-review of my code
- [x] I have added tests that prove the fix is effective
- [x] I have run `poetry run format` and `pnpm format` + `pnpm lint`
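A minimal sketch of the idempotency gate described in this PR, assuming an async Redis client; the key derivation follows the PR text (`sha256(session_id + message)[:16]`, 30 s TTL), while the key prefix is an assumption:

```python
import hashlib

async def is_duplicate_user_message(redis, session_id: str, message: str) -> bool:
    digest = hashlib.sha256((session_id + message).encode()).hexdigest()[:16]
    key = f"copilot:idempotency:{digest}"
    # SET NX EX succeeds only for the first writer inside the TTL window.
    first_writer = await redis.set(key, "1", nx=True, ex=30)
    return not first_writer
```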

0284614df0
fix(copilot): abort SSE stream and disconnect backend listeners on session switch (#12766)
## Summary
Fixes stream disconnection bugs where the UI shows "running" with no
output when users switch between copilot chat sessions. The root cause
is that the old SSE fetch is not aborted and backend XREAD listeners
keep running until timeout when switching sessions.
### Changes
**Frontend (`useCopilotStream.ts`, `helpers.ts`)**
- Call `sdkStop()` on session switch to abort the in-flight SSE fetch
from the old session's transport
- Fire-and-forget `DELETE` to new backend disconnect endpoint so
server-side listeners release immediately
- Store `resumeStream` and `sdkStop` in refs to fix stale closure bugs
in:
- Wake re-sync visibility handler (could call stale `resumeStream` after
tab sleep)
- Reconnect timer callback (could target wrong session's transport)
- Resume effect (captured stale `resumeStream` during rapid session
switches)
**Backend (`stream_registry.py`, `routes.py`)**
- Add `disconnect_all_listeners(session_id)` to stream registry —
iterates active listener tasks, cancels any matching the session
- Add `DELETE /sessions/{session_id}/stream` endpoint — auth-protected,
calls `disconnect_all_listeners`, returns 204
### Why
Reported by multiple team members: when using Autopilot for anything
serious, the frontend loses the SSE connection — particularly when
switching between conversations. The backend completes fine (refreshing
shows full output), but the UI gets stuck showing "running". This is the
worst UX bug we have right now because real users will never know to
refresh.
### How to test
1. Start a long-running autopilot task (e.g., "build a snake game")
2. While it's streaming, switch to a different chat session
3. Switch back — the UI should correctly show the completed output or
resume the stream
4. Verify no "stuck running" state
## Test plan
- [ ] Manual: switch sessions during active stream — no stuck "running"
state
- [ ] Manual: background tab for >30s during stream, return — wake
re-sync works
- [ ] Manual: trigger reconnect (kill network briefly) — reconnects to
correct session
- [ ] Verify: `pnpm lint`, `pnpm types`, `poetry run lint` all pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <zamil.majdy@agpt.co>

f835674498
feat(copilot): standard/advanced model toggle with Opus rate-limit multiplier (#12786)
## Why
Users have different task complexity needs. Sonnet is fast and cheap for most queries; Opus is more capable for hard reasoning tasks. Exposing this as a simple toggle gives users control without requiring infrastructure complexity.
Opus costs 5× more than Sonnet per Anthropic pricing ($15/$75 vs $3/$15 per M tokens). Rather than adding a separate entitlement gate, the rate-limit multiplier (5×) ensures Opus turns deplete the daily/weekly quota proportionally faster — users self-limit via their existing budget.
## What
- **Standard/Advanced model toggle** in the chat input toolbar (sky-blue star icon, label only when active — matches the simulation DryRunToggleButton pattern but visually distinct)
- **`CopilotLlmModel = Literal["standard", "advanced"]`** — model-agnostic tier names (not tied to Anthropic model names)
- **Backend model resolution**: `"advanced"` → `claude-opus-4-6`, `"standard"` → `config.model` (currently Sonnet)
- **Rate-limit multiplier**: Opus turns count as 5× in Redis token counters (daily + weekly limits). Does **not** affect `PlatformCostLog` or `cost_usd` — those use real API-reported values
- **localStorage persistence** via `Key.COPILOT_MODEL` so preference survives page refresh
- **`claude_agent_max_budget_usd`** reduced from $15 to $10
## How
### Backend
- `CopilotLlmModel` type added to `config.py`, imported in routes/executor/service
- `stream_chat_completion_sdk` accepts `model: CopilotLlmModel | None`
- Model tier resolved early in the SDK path; `_normalize_model_name` strips the OpenRouter provider prefix
- `model_cost_multiplier` (1.0 or 5.0) computed from final resolved model name, passed to `persist_and_record_usage` → `record_token_usage` (Redis only)
- No separate LD flag needed — rate limit is the gate
### Frontend
- `ModelToggleButton` component: sky-blue, star icon, "Advanced" label when active
- `copilotModel` state in `useCopilotUIStore` with localStorage hydration
- `copilotModelRef` pattern in `useCopilotStream` (avoids recreating `DefaultChatTransport`)
- Toggle gated behind `showModeToggle && !isStreaming` in `ChatInput`
## Checklist
- [x] Tests added/updated (ModelToggleButton.test.tsx, service_helpers_test.py, token_tracking_test.py)
- [x] Rate-limit multiplier only affects Redis counters, not cost tracking
- [x] No new LD flag needed
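A minimal sketch of the tier resolution and quota multiplier described in this PR; the model id and multiplier come from the PR text, the helper itself is an assumption:

```python
from typing import Literal

CopilotLlmModel = Literal["standard", "advanced"]

def resolve_model(tier: CopilotLlmModel | None, default_model: str) -> tuple[str, float]:
    if tier == "advanced":
        # Opus turns count 5x against the Redis daily/weekly token counters.
        return "claude-opus-4-6", 5.0
    return default_model, 1.0
```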

da18f372f7
feat(backend/copilot): add for_agent_generation flag to find_block (#12787)
## Why
When the agent generator LLM builds a graph, it may need to look up schema details for graph-only blocks like `AgentInputBlock`, `AgentOutputBlock`, or `OrchestratorBlock`. These blocks are correctly hidden from regular CoPilot `find_block` results (they can't run standalone), but that same filter was also preventing the LLM from discovering them when composing an agent graph.
## What
Added a `for_agent_generation: bool = False` parameter to `FindBlockTool`.
## How
- `for_agent_generation=false` (default): existing behaviour unchanged — graph-only blocks are filtered from both UUID lookups and text search results.
- `for_agent_generation=true`: bypasses `COPILOT_EXCLUDED_BLOCK_TYPES` / `COPILOT_EXCLUDED_BLOCK_IDS` so the LLM can find and inspect schemas for INPUT, OUTPUT, ORCHESTRATOR, WEBHOOK, etc. blocks when building agent JSON.
- MCP_TOOL blocks are still excluded even with `for_agent_generation=true` (they go through `run_mcp_tool`, not `find_block`).
## Checklist
- [x] No new dependencies
- [x] Backward compatible (default `false` preserves existing behaviour)
- [x] No frontend changes

d82ecac363
fix(backend/copilot): null-safe token accumulation for OpenRouter null cache fields (#12789)
## Why
OpenRouter occasionally returns `null` (not `0`) for `cache_read_input_tokens` and `cache_creation_input_tokens` on the initial streaming event, before real token counts are available. Python's `dict.get(key, 0)` only falls back to `0` when the key is **missing** — when the key exists with a `null` value, `.get(key, 0)` returns `None`. This causes `TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'` in the usage accumulator on the first streaming chunk from OpenRouter models.
## What
- Replace `.get(key, 0)` with `.get(key) or 0` for all four token fields in `_run_stream_attempt`
- Add `TestTokenUsageNullSafety` unit tests in `service_helpers_test.py`
## How
Minimal targeted fix — only the four `+=` accumulation lines changed. No behaviour change for Anthropic-native models (they never emit null values).
## Checklist
- [x] Tests cover null event, real event, absent keys, and multi-turn accumulation
- [x] No behaviour change for Anthropic-native models
- [x] No API changes
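For reference, a minimal illustration of the pitfall this PR fixes:

```python
usage = {"cache_read_input_tokens": None}  # OpenRouter's first streaming chunk

total = 0
# total += usage.get("cache_read_input_tokens", 0)   # TypeError: int + NoneType
total += usage.get("cache_read_input_tokens") or 0   # None is treated like 0
assert total == 0
```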

8a2e2365f7
fix(backend/executor): charge per LLM iteration and per tool call in OrchestratorBlock (#12735)
### Why / What / How
**Why:** The OrchestratorBlock in agent mode makes multiple LLM calls in
a single node execution (one per iteration of the tool-calling loop),
but the executor was only charging the user once per run via
`_charge_usage`. Tools spawned by the orchestrator also bypassed
`_charge_usage` entirely — they execute via `on_node_execution()`
directly without going through the main execution queue, producing free
internal block executions.
**What:**
1. Charge `base_cost * (llm_call_count - 1)` extra credits after the
orchestrator block completes — covers the additional iterations beyond
the first (which is already paid for upfront).
2. Charge user credits for tools executed inside the orchestrator, the
same way queue-driven node executions are charged.
**How:**
**1. Per-iteration LLM charging**
- New `Block.extra_runtime_cost(execution_stats)` virtual method
(default returns `0`)
- `OrchestratorBlock` overrides it to return `max(0, llm_call_count -
1)`
- New `resolve_block_cost` free function in `billing.py` centralises the
block-lookup + cost-calculation pattern (used by both `charge_usage` and
`charge_extra_runtime_cost`)
- New `billing.charge_extra_runtime_cost(node_exec, extra_count)`
function that debits `base_cost * min(extra_count,
_MAX_EXTRA_RUNTIME_COST)` via `spend_credits()`, running synchronously
in a thread-pool worker
- After `_on_node_execution` completes with COMPLETED status,
`on_node_execution` calls `charge_extra_runtime_cost` if
`extra_runtime_cost > 0` and not a dry run
- `InsufficientBalanceError` from post-hoc charging is treated as a
billing leak: logged at ERROR with `billing_leak: True` structured
fields, user is notified via `_handle_insufficient_funds_notif`, but the
run status stays COMPLETED (work already done)
**2. Tool execution charging**
- New public async `ExecutionProcessor.charge_node_usage(node_exec)`
wrapper around `charge_usage` (with `execution_count=0` to avoid
inflating execution-tier counters); also calls `_handle_low_balance`
internally
- `OrchestratorBlock._execute_single_tool_with_manager` calls
`charge_node_usage` after successful tool execution (skipped for dry
runs and failed/cancelled tool runs)
- Tool cost is added to the orchestrator's `extra_cost` so it shows up
in graph stats display
- `InsufficientBalanceError` from tool charging is re-raised (not
downgraded to a tool error) in all three execution paths:
`_execute_single_tool_with_manager`, `_agent_mode_tool_executor`, and
`_execute_tools_sdk_mode`
**3. Billing module extraction**
- All billing logic extracted from `ExecutionProcessor` into
`backend/executor/billing.py` as free functions — keeps `manager.py` and
`service.py` focused on orchestration
- `ExecutionProcessor` retains thin delegation methods
(`charge_node_usage`, `charge_extra_runtime_cost`) for backward
compatibility with blocks that call them
**4. Structured error signalling**
- Tool error detection replaced brittle `text.startswith("Tool execution
failed:")` string check with a structured `_is_error` boolean field on
the tool response dict
### Changes
- `backend/blocks/_base.py`: Add
`Block.extra_runtime_cost(execution_stats) -> int` virtual method
(default `0`)
- `backend/blocks/orchestrator.py`: Override `extra_runtime_cost`; add
tool charging in `_execute_single_tool_with_manager`; add
`InsufficientBalanceError` re-raise carve-outs in all three execution
paths; replace string-prefix error detection with `_is_error` flag
- `backend/executor/billing.py` (new): Free functions
`resolve_block_cost`, `charge_usage`, `charge_extra_runtime_cost`,
`charge_node_usage`, `handle_post_execution_billing`,
`clear_insufficient_funds_notifications` — extracted from
`ExecutionProcessor`
- `backend/executor/manager.py`: Thin delegation to `billing.*`; remove
~500 lines of billing methods from `ExecutionProcessor`
- `backend/data/credit.py`: Update lazy import source from `manager` to
`billing`
- `backend/blocks/test/test_orchestrator.py`: Add `charge_node_usage`
mock + assertion
- `backend/blocks/test/test_orchestrator_dynamic_fields.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_responses_api.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_per_iteration_cost.py`: New
test file — `extra_runtime_cost` hook, `charge_extra_runtime_cost` math
(positive/zero/negative/capped/zero-cost/block-not-found/IBE),
`charge_node_usage` delegation, `on_node_execution` gate conditions
(COMPLETED/FAILED/zero-charges/dry-run/IBE), tool charging guards
(dry-run/failed/cancelled/IBE propagation)
### Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Run `poetry run pytest
backend/blocks/test/test_orchestrator_per_iteration_cost.py`
- [ ] Verify on dev: an OrchestratorBlock run with
`agent_mode_max_iterations=5` and 5 actual iterations is charged 5x the
base cost
- [ ] Verify tool executions inside the orchestrator are charged
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: majdyz <majdy.zamil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <majdyz@users.noreply.github.com>

55869d3c75
fix(backend/copilot): robust context fallback — upload gate, gap-fill, token-budget compression (#12782)
## Why
During a live production session, the copilot lost all conversation context mid-session. The model stated "I don't see any implementation plan in our conversation" despite 9 prior turns of context. Three compounding bugs:
**Bug 1 — Self-perpetuating upload gate:** When `restore_cli_session` fails on a T2+ turn, `state.use_resume=False`. The old gate `and (not has_history or state.use_resume)` then skips the CLI session upload — even though the T1 file may exist. Each turn without `use_resume` skips upload → next turn can't restore → also skips → etc.
**Bug 2 — Blunt message-count cap on retries:** On `prompt-too-long`, `_reduce_context` retried 3× but rebuilt the same oversized query each time (transcript was empty, so all 3 attempts were identical). The `max_fallback_messages` count-cap was a blunt instrument — it threw away middle turns blindly instead of letting the compressor summarize intelligently.
**Bug 3 — Gap-empty path returned zero context:** When a transcript exists but no `--resume` (CLI session unavailable), and the gap is empty (transcript is current), the code fell through to `return current_message, False` — the model got no history at all.
## What
1. **Remove upload gate** — upload is attempted after every successful turn; `upload_cli_session` silently skips when the file is absent.
2. **`transcript_msg_count` set on `cli_restored=False`** — enables the gap path on the very next turn without waiting for a full upload cycle.
3. **Token-budget compression instead of message-count cap** — `_reduce_context` now returns `target_tokens` (50K → 15K across retries). `compress_context` decides what to drop via LLM summarize → content truncate → middle-out delete → first/last trim. More context preserved at any budget vs. blindly slicing the list.
4. **Fix gap-empty case** — when transcript is current but `--resume` unavailable, fall through to full-session compression with the token budget instead of returning no context.
5. **Transcript seeding after fallback** — after `use_resume=False` with no stored transcript, compress DB messages to 30K tokens and serialise as JSONL into `transcript_builder`. Next turn uses the gap path (inject only new messages) instead of re-compressing full history. Only fires once per broken session (`not transcript_content` guard).
6. **Seeding guard** — seeding skips when `skip_transcript_upload=True` (avoids wasted compression work when the result won't be saved).
7. **Structured logging** — INFO/WARNING at every branch of `_build_query_message` with path variables, context_bytes, compression results.
## How
**Upload gate** (`sdk/service.py` finally-block): removed `and (not has_history or state.use_resume)`; added INFO log showing `use_resume`/`has_history` before upload.
**`transcript_msg_count`**: set from `dl.message_count` in the `cli_restored=False` branch.
**`_build_query_message`**: `max_fallback_messages: int | None` → `target_tokens: int | None`; gap-empty case falls through to full-session compression rather than returning bare message.
**`_reduce_context`**: `_FALLBACK_MSG_LIMITS` → `_RETRY_TARGET_TOKENS = (50_000, 15_000)`; returns `ReducedContext.target_tokens`.
**`_compress_messages` / `_run_compression`**: both now accept `target_tokens: int | None` and thread it through to `compress_context`.
**Seeding block**: added `not skip_transcript_upload` guard; uses `_SEED_TARGET_TOKENS = 30_000` so the seeded JSONL is always compact enough to pass `validate_transcript`.
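A small sketch of the retry budgets described above; the constant values come from the PR text, the helper itself is an assumption:

```python
_RETRY_TARGET_TOKENS = (50_000, 15_000)

def target_tokens_for_retry(attempt: int) -> int | None:
    # Attempt 0 is the normal path (no forced budget); later attempts shrink the
    # budget so compress_context drops more history instead of slicing messages.
    if attempt <= 0:
        return None
    index = min(attempt - 1, len(_RETRY_TARGET_TOKENS) - 1)
    return _RETRY_TARGET_TOKENS[index]
```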
## Checklist
- [x] `poetry run format` passes
- [x] No new lint errors introduced (pre-existing pyright errors unrelated)
- [x] Tests added for `attempt` parameter and `target_tokens` in `_reduce_context`

142c5dbe99
fix(frontend): tighten artifact preview behavior (#12770)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

b06648de8c
ci(frontend): add Playwright PR smoke suite with seeded QA accounts (#12682)
### Why / What / How
This PR simplifies frontend PR validation to one Playwright E2E suite, moves redundant page-level browser coverage into Vitest integration tests, and switches Playwright auth to deterministic seeded QA accounts. It also folds in the follow-up fixes that came out of review and CI: lint cleanup, CodeQL feedback, PR-local type regressions, and the flaky Library run helper.
The approach is:
- keep Playwright focused on real browser and cross-page flows that integration tests cannot prove well
- keep page-level render and mocked API behavior in Vitest
- remove the old PR-vs-full Playwright split from CI and run one deterministic PR suite instead
- seed reusable auth states for fixed QA users so the browser suite is less flaky and faster to bootstrap
### Changes 🏗️
- Removed the workflow indirection that selected different Playwright suites for PRs vs other events
- Standardized frontend CI on a single command: `pnpm test:e2e:no-build`
- Consolidated the PR-gating Playwright suite around these happy-path specs:
  - `auth-happy-path.spec.ts`
  - `settings-happy-path.spec.ts`
  - `api-keys-happy-path.spec.ts`
  - `builder-happy-path.spec.ts`
  - `library-happy-path.spec.ts`
  - `marketplace-happy-path.spec.ts`
  - `publish-happy-path.spec.ts`
  - `copilot-happy-path.spec.ts`
- Added the missing browser-only confidence checks to the PR suite:
  - settings persistence across reload and re-login
  - API key create, copy, and revoke
  - schedule `Run now` from Library
  - activity dropdown visibility for a real run
  - creator dashboard verification after publish submission
- Increased Playwright CI workers from `6` to `8`
- Migrated redundant page-level browser coverage into Vitest integration/unit tests where appropriate, including marketplace, profile, settings, API keys, signup behavior, agent dashboard row behavior, agent activity, and utility/auth helpers
- Seeded deterministic Playwright QA users in `backend/test/e2e_test_data.py` and reused auth states from `frontend/src/tests/credentials/`
- Fixed CodeQL insecure randomness feedback by replacing insecure randomness in test auth utilities
- Fixed frontend lint issues in marketplace image rendering
- Fixed PR-local type regressions introduced during test migration
- Stabilized the Library E2E run helper to support the current Library action states: `Setup your task`, `New task`, `Rerun task`, and `Run now`
- Removed obsolete Playwright specs and the temporary migration planning doc once the consolidation was complete
- Reverted unintended non-test backend source changes; only backend test fixture changes remain in scope
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `pnpm lint`
  - [x] `pnpm types`
  - [x] `pnpm test:unit`
  - [x] `pnpm exec playwright test --list`
  - [x] `pnpm test:e2e:no-build` locally
  - [ ] PR CI green after the latest push
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)
Notes:
- Current local Playwright run on this branch: `28 passed`, `0 flaky`, `0 retries`, `3m 25s`.
- Latest Codecov report on this PR showed overall coverage `63.14% -> 63.61%` (`+0.47%`), with frontend coverage up `+2.32%` and frontend E2E coverage up `+2.10%`.
- The backend change in this PR is limited to deterministic E2E test data setup in `backend/test/e2e_test_data.py`.
- Playwright retries remain enabled in CI; this branch does not add fail-on-flaky behavior.
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Zamil Majdy <majdy.zamil@gmail.com>

7240dd4fb1
feat(platform/admin): enhance cost dashboard with token breakdown and averages (#12757)
## Summary
- **Token breakdown in provider table**: Added separate Input Tokens and Output Tokens columns to the By Provider table, making it easy to see whether costs are driven by large contexts (input) or verbose responses/thinking (output)
- **New summary cards (8 total)**: Added Avg Cost/Request, Avg Input Tokens, Avg Output Tokens, and Total Tokens (in/out split) cards plus P50/P75/P95/P99 cost percentile cards at the top of the dashboard for at-a-glance cost analysis
- **Cost distribution histogram**: Added a cost distribution section showing request count across configurable price buckets ($0–0.50, $0.50–1, $1–2, $2–5, $5–10, $10+)
- **Per-user avg cost**: Added Avg Cost/Req column to the By User table to identify users with unusually expensive requests
- **Backend aggregations**: Extended `PlatformCostDashboard` model with `total_input_tokens`, `total_output_tokens`, `avg_input_tokens_per_request`, `avg_output_tokens_per_request`, `avg_cost_microdollars_per_request`, `cost_p50/p75/p95/p99_microdollars`, and `cost_buckets` fields
- **Correct denominators**: Avg cost uses cost-bearing requests only; avg token stats use token-bearing requests only — no artificial dilution from non-cost/non-token rows
## Test plan
- [x] Verify the admin cost dashboard loads without errors at `/admin/platform-costs`
- [x] Check that the new summary cards display correct values
- [x] Verify Input/Output Tokens columns appear in the By Provider table
- [x] Verify Avg Cost/Req column appears in the By User table
- [x] Confirm existing functionality (filters, export, rate overrides) still works
- [x] Verify backward compatibility — new fields have defaults so old API responses still work

b4cd00bea9
dx(frontend): untrack auto-generated API client model files (#12778)
## Why
`src/app/api/__generated__/` is listed in `.gitignore` but 4 model files were committed before that rule existed, so git kept tracking them and they showed up in every PR that touched the API schema.
## What
Run `git rm --cached` on all 4 tracked files so the existing gitignore rule takes effect. No gitignore content changes needed — the rule was already correct.
## How
The `check API types` CI job only diffs `openapi.json` against the backend's exported schema — it does not diff the generated TypeScript models. So removing these from tracking does not break any CI check. After this merges, `pnpm generate:api` output will be gitignored everywhere and future API-touching PRs won't include generated model diffs.

e17914d393
perf(backend): enable cross-user prompt caching via SystemPromptPreset (#12758)
## Summary
- Use `SystemPromptPreset` with `exclude_dynamic_sections=True` in the SDK path so the Claude Code default prompt serves as a cacheable prefix shared across all users, reducing input token cost by ~90%
- Add `claude_agent_cross_user_prompt_cache` config field (default `True`) to make this configurable, with fallback to raw string when disabled
- Extract `_build_system_prompt_value()` helper for testability, with `_SystemPromptPreset` TypedDict for proper type annotation
> **Depends on #12747** — requires SDK >=0.1.58 which adds `SystemPromptPreset` with `exclude_dynamic_sections`. Must be merged after #12747.
## Changes
- **`config.py`**: New `claude_agent_cross_user_prompt_cache: bool = True` field on `ChatConfig`
- **`sdk/service.py`**: `_SystemPromptPreset` TypedDict for type safety; `_build_system_prompt_value()` helper that constructs the preset dict or returns the raw string; call site uses the helper
- **`sdk/service_test.py`**: Tests exercise the production `_build_system_prompt_value()` helper directly — verifying preset dict structure (enabled), raw string fallback (disabled), and default config value
## How it works
The Claude Code CLI supports `SystemPromptPreset` which uses the built-in Claude Code default prompt as a static prefix. By setting `exclude_dynamic_sections=True`, per-user dynamic sections (working dir, git status, auto-memory) are stripped from that prefix so it stays identical across users and benefits from Anthropic's prompt caching. Our custom prompt (tool notes, supplements, graphiti context) is appended after the cacheable prefix.
## Test plan
- [x] CI passes (formatting, linting, unit tests)
- [x] Verify `_build_system_prompt_value()` returns correct preset dict when enabled
- [x] Verify fallback to raw string when `CHAT_CLAUDE_AGENT_CROSS_USER_PROMPT_CACHE=false`

b3a58389e5
fix(copilot): baseline cost tracking and cache token display (#12762)
## Why
The baseline copilot path (OpenAI-compatible / OpenRouter) did not record any cost when the `x-total-cost` response header was absent, even though token counts were always available. The admin cost dashboard also lacked cache token columns.
## What
- **`x-total-cost` header extraction**: Reads the OpenRouter cost header per LLM call in the `finally` block (so cost is captured even when the stream errors mid-way). Accumulated across multi-round tool-calling turns.
- **Cache token extraction**: Extracts `prompt_tokens_details.cached_tokens` and `cache_creation_input_tokens` from streaming usage chunks and passes `cache_read_tokens`/`cache_creation_tokens` through to `persist_and_record_usage` for storage in `PlatformCostLog`.
- **Dashboard cache token display**: Adds cache read/write columns to the Raw Logs and By User tables on the admin platform costs dashboard. Adds `total_cache_read_tokens` and `total_cache_creation_tokens` to `UserCostSummary`.
- **No cost estimation**: When `x-total-cost` is absent, `cost_usd` is left as `None` and `persist_and_record_usage` records the entry under `tracking_type="tokens"`. Token-based cost estimation was removed — the platform dashboard already handles per-token cost display, and estimates would introduce inaccuracy in the reported figures.
## How
- In `_baseline_llm_caller`: extract the `x-total-cost` header in the `finally` block; accumulate to `state.cost_usd`.
- In `_BaselineStreamState`: add `turn_cache_read_tokens` / `turn_cache_creation_tokens` counters, populated from streaming usage chunks.
- In `persist_and_record_usage` / `record_cost_log`: pass through `cache_read_tokens` and `cache_creation_tokens` to `PlatformCostEntry`.
- Frontend: add `total_cache_read_tokens` / `total_cache_creation_tokens` fields to `UserCostSummary` and render them as columns in the cost dashboard.
## Test plan
- [x] Verify baseline copilot sessions log cost when `x-total-cost` header is present
- [x] Verify `cost_usd` stays `None` and token count is logged when header is absent
- [x] Verify cache tokens appear in the dashboard logs table for sessions using prompt caching
- [x] Verify the By User tab shows Cache Read and Cache Write columns
- [x] Unit tests: `test_cost_usd_extracted_from_response_header`, `test_cost_usd_remains_none_when_header_missing`, `test_cache_tokens_extracted_from_usage_details`
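A rough sketch of the header handling described in this PR; the response and state objects are stand-ins for the real baseline streaming code:

```python
def accumulate_cost(state, response) -> None:
    header = response.headers.get("x-total-cost")
    if header is None:
        # No estimation: cost_usd stays None and the entry is recorded as tokens-only.
        return
    try:
        state.cost_usd = (state.cost_usd or 0.0) + float(header)
    except ValueError:
        pass  # malformed header: ignore rather than guess
```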
||
|
|
a3846e1e74 |
fix(copilot): unified MCP file tools (Read/Write/Edit) to prevent truncation data loss (#12750)
### Why / What / How
**Why:** The Claude Agent SDK's built-in Write and Edit tools have no
defence against output-token truncation. When the LLM generates a large
`content` or `new_string` argument, the API truncates the response
mid-JSON, causing Ajv to reject it with the opaque `"'file_path' is a
required property"` error. The user's work is silently lost, and
retrying with the same approach loops infinitely.
**What:** Replaces the SDK's built-in Write and Edit tools with unified
MCP equivalents that detect truncation and return actionable recovery
guidance. Adds a new `read_file` MCP tool with offset/limit pagination.
Consolidates all file-tool handlers into a single module
(`e2b_file_tools.py`) covering both E2B (sandbox) and non-E2B (local SDK
working directory) modes.
**How:**
- `file_path` is placed first in every JSON schema so truncation is more
likely to preserve the path
- `"required"` is intentionally omitted from all MCP schemas so the MCP
SDK delivers empty/truncated args to the handler instead of rejecting
them with an opaque error
- Handlers detect two truncation patterns: complete (`{}`) and partial
(other fields present but `file_path` missing), returning actionable
error messages in both cases
- Edit uses a per-path `asyncio.Lock` (keyed by resolved absolute path)
to prevent parallel read-modify-write races when MCP tools are
dispatched concurrently
- Both E2B and non-E2B paths validate via `is_allowed_local_path()` /
`is_within_allowed_dirs()` to block directory traversal
- The SDK built-in Write and Edit are added to `SDK_DISALLOWED_TOOLS`;
the SDK built-in Read remains allowed only for workspace-scoped paths
(tool-results/tool-outputs) via `WORKSPACE_SCOPED_TOOLS`
- E2B write/edit tools are registered with `readOnlyHint=False`
(`_MUTATING_ANNOTATION`) to prevent parallel dispatch
- `bridge_to_sandbox` copies host-side tool-result files into the E2B
sandbox on read so `bash_exec` can process them
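For illustration, a hedged sketch of two of the mechanisms above (truncation detection and per-path edit locking); handler and helper names are simplified stand-ins rather than the exact `e2b_file_tools.py` API.

```python
# Simplified sketch, not the real handlers.
import asyncio
from pathlib import Path

_edit_locks: dict[str, asyncio.Lock] = {}


def _lock_for(file_path: str) -> asyncio.Lock:
    """One lock per resolved absolute path, so concurrent Edit dispatches on the
    same file serialize their read-modify-write cycles."""
    key = str(Path(file_path).resolve())
    return _edit_locks.setdefault(key, asyncio.Lock())


def detect_truncation(args: dict) -> str | None:
    """Return actionable recovery guidance when the model's JSON arguments were
    truncated mid-generation; the MCP schema deliberately omits `required`, so
    these args reach the handler instead of failing opaque validation."""
    if not args:
        return (
            "Arguments were empty; the response was likely truncated. "
            "Retry with smaller content, or write the file in multiple chunks."
        )
    if "file_path" not in args:
        return (
            "`file_path` is missing while other arguments are present; the output "
            "was truncated. Re-issue the call with file_path first and less content."
        )
    return None


async def handle_edit(args: dict) -> str:
    if (error := detect_truncation(args)) is not None:
        return error
    async with _lock_for(args["file_path"]):
        # read, apply old_string -> new_string, write back (elided in this sketch)
        ...
        return "ok"
```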
### Changes 🏗️
- **`e2b_file_tools.py`** — unified file-tool handlers for Write, Read
(`read_file`), Edit, Glob, Grep covering both E2B and non-E2B modes;
per-path edit locking; truncation detection; sandbox symlink-escape
check; `bridge_to_sandbox` for SDK→E2B file bridging
- **`tool_adapter.py`** — registers unified Write/Edit/read_file MCP
tools (non-E2B only); adds `Read` tool for workspace-scoped SDK-internal
reads (both modes); E2B tools use `_MUTATING_ANNOTATION`;
`get_copilot_tool_names` / `get_sdk_disallowed_tools` updated for both
modes
- **`security_hooks.py`** — `WORKSPACE_SCOPED_TOOLS` checked before
`BLOCKED_TOOLS` so SDK internal Read is allowed on tool-results paths;
Write/Edit removed from workspace scope
- **`prompting.py`** — improved wording for large-file truncation
warning
- **`e2b_file_tools_test.py`** — comprehensive tests for non-E2B
Write/Read/Edit (path validation, truncation detection, offset/limit,
binary rejection, schema validation); E2B sandbox symlink-escape,
`bridge_to_sandbox`, and `_sandbox_write` tests
- **`security_hooks_test.py`** — updated tests for revised tool-blocking
and workspace-scoped Read behaviour
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Read: normal read, offset/limit, file not found, path traversal
blocked, binary file handling, truncation detection
- [x] Edit: normal edit, old_string not found, old_string not unique,
replace_all, partial truncation, path traversal blocked
- [x] Write: existing tests unchanged; truncation detection, path
validation, large-content warning
- [x] Schema validation: file_path first, required fields intentionally
absent
- [x] CLI built-in Write and Edit are in `SDK_DISALLOWED_TOOLS`; Read is
workspace-scoped only
- [x] E2B write/edit use `_MUTATING_ANNOTATION` (not parallel)
- [x] `black`, `ruff`, `pyright` pass on all modified files
- [ ] CI pipeline passes
|
||
|
|
e5b0b7f18e |
fix(copilot): store mode per session so indicator updates on switch (#12761)
## Summary
- Hide the mode toggle button while streaming (instead of disabling it) to avoid confusing partial-toggle UI
- Remove localStorage mode persistence — mode is now transient in-memory state only (no stale overrides across sessions)
- The copilot mode indicator now correctly reflects the active session's mode because it reads from the Zustand store, which is updated on session switch
## Changes
- `ChatInput.tsx` — hide `<ModeToggleButton>` when `isStreaming` instead of passing an `isStreaming` prop and showing a disabled button
- `ModeToggleButton.tsx` — remove `isStreaming` prop, disabled state, and streaming-specific tooltip
- `store.ts` — remove localStorage read/write for `copilotMode`; mode now defaults to `extended_thinking` and resets on page load
- `local-storage.ts` — keep `COPILOT_MODE` enum entry for backward compatibility; remove unused `COPILOT_SESSION_MODES`
- `store.test.ts` — update tests to assert mode is NOT persisted to localStorage
- `ChatInput.test.tsx` / `ModeToggleButton.stories.tsx` — update to match hide-not-disable behavior
## Test plan
- [x] Create a session in fast mode, create another in extended_thinking mode
- [x] Switch between sessions and verify the mode indicator updates correctly
- [x] Mode toggle is hidden (not disabled) while a response is streaming
- [x] Refreshing the page resets mode to extended_thinking (no stale localStorage override)
|
||
|
|
92575ae76b |
fix(backend): fix sub-agent session hang and orphan on E2B API stall (#12774)
### Why / What / How
**Why:** AutoPilot sessions were silently dying with no response. Root cause: `AsyncSandbox.create()` in the E2B SDK uses `httpx.AsyncClient(timeout=None)` — an infinite wait. When the E2B API stalled during sandbox provisioning, executor coroutines hung indefinitely. After 1h42m the RabbitMQ consumer timeout (`COPILOT_CONSUMER_TIMEOUT_SECONDS = 3600`) killed the pod and all in-flight sessions were orphaned — the user sees no response and no error.
**What:**
1. Added a per-attempt timeout + retry loop to `AsyncSandbox.create()` calls in `e2b_sandbox.py` — 30s/attempt × 3 retries with exponential backoff (~93s worst case vs infinite)
2. Added a recovery enqueue in `AutoPilotBlock.run()` — on unexpected failure, re-enqueues the session to RabbitMQ so a fresh executor pod picks it up on the next turn
3. Added an `_is_deliberate_block()` guard so recursion-limit errors are not re-enqueued (they are expected terminations)
4. Unit tests for both new mechanisms
**How:**
- `asyncio.wait_for(AsyncSandbox.create(), timeout=30)` wraps each attempt; `TimeoutError` triggers a retry
- Redis creation sentinel TTL bumped 60→120s to cover the full retry window (prevents concurrent callers from seeing a stale sentinel)
- `_enqueue_for_recovery` calls `enqueue_copilot_turn()` with the original prompt so the session resumes where it left off; dry-run sessions are skipped; enqueue failures are logged but never mask the original error
- `CancelledError` is re-raised after yielding the error output (cooperative cancellation)
### Changes 🏗️
**`backend/copilot/tools/e2b_sandbox.py`**
- Added `_SANDBOX_CREATE_TIMEOUT_SECONDS = 30`, `_SANDBOX_CREATE_MAX_RETRIES = 3`
- Bumped `_CREATION_LOCK_TTL` 60 → 120s
- Replaced bare `AsyncSandbox.create()` with `asyncio.wait_for` + retry loop
**`backend/blocks/autopilot.py`**
- Added `_is_deliberate_block(exc)` — returns True for recursion-limit RuntimeErrors
- Added `_enqueue_for_recovery(session_id, user_id, message, dry_run)` — re-enqueues to RabbitMQ; no-ops on dry_run
- Exception handler in `run()` calls `_enqueue_for_recovery` for transient failures; an inner try/except prevents an enqueue failure from masking the original error
**`backend/blocks/test/test_autopilot.py`**
- `TestIsDeliberateBlock` — 4 unit tests for `_is_deliberate_block`
- `TestRecoveryEnqueue` — 5 tests: transient error triggers enqueue, recursion limit skips, dry_run passes the flag through, enqueue failure doesn't mask the original error, `ctx.dry_run` is OR-ed in
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/blocks/test/test_autopilot.py -xvs` — 24/24 pass
- [x] Verified retry logic constants: 30s × 3 retries + 1s + 2s = 93s worst case, sentinel TTL 120s covers it
- [x] Verified `_enqueue_for_recovery` is no-op for dry_run=True (no RabbitMQ publish)
- [x] Verified `CancelledError` re-raises after yield
|
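A sketch of the per-attempt timeout and retry pattern, with constant names taken from this PR; the backoff delays and the `create_coro_factory` wrapper are illustrative.

```python
# Sketch of the retry loop around sandbox creation; not the real e2b_sandbox.py code.
import asyncio

_SANDBOX_CREATE_TIMEOUT_SECONDS = 30
_SANDBOX_CREATE_MAX_RETRIES = 3


async def create_sandbox_with_retry(create_coro_factory):
    """Wrap each AsyncSandbox.create() attempt in a 30s timeout, retrying up to
    3 times with a short exponential backoff (1s, then 2s), roughly 93s worst
    case instead of waiting forever on a stalled E2B API."""
    last_error: Exception | None = None
    for attempt in range(_SANDBOX_CREATE_MAX_RETRIES):
        try:
            return await asyncio.wait_for(
                create_coro_factory(), timeout=_SANDBOX_CREATE_TIMEOUT_SECONDS
            )
        except asyncio.TimeoutError as exc:
            last_error = exc
            if attempt < _SANDBOX_CREATE_MAX_RETRIES - 1:
                await asyncio.sleep(2**attempt)  # 1s, then 2s
    raise TimeoutError("E2B sandbox creation timed out") from last_error
```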
||
|
|
44b58ca22c |
fix(backend/copilot): fix T2+ --resume by using CLI native session file (#12777)
## Why
The Claude CLI 2.1.97 (bundled in `claude-agent-sdk 0.1.58`) changed the
`--resume` flag to accept a **session UUID**, not a file path. Our
service was incorrectly passing a temp file path (from
`write_transcript_to_tempfile`), causing the CLI subprocess to crash
with exit code 1 on every T2+ message — breaking all multi-turn CoPilot
conversations.
Additionally, using a file-per-pod approach meant pod affinity was
required for `--resume` to work (the file only existed on the pod that
handled T1).
## What
- Add `upload_cli_session()` to `transcript.py`: after each turn, upload
the CLI's native session JSONL (at
`{projects_base}/{encoded_cwd}/{session_id}.jsonl`) to remote storage
- Add `restore_cli_session()` to `transcript.py`: before T2+, download
and restore the CLI native session file to the expected path
- Pass `--session-id {app_uuid}` via `ClaudeAgentOptions` so the CLI
uses the app session UUID as its session ID → predictable file path
- On T2+: call `restore_cli_session()` and if successful, pass `--resume
{session_uuid}` (UUID, not file path)
- Remove `write_transcript_to_tempfile` from the resume path in
service.py (it only exists in transcript.py for compaction use)
- Keep DB reconstruction as last-resort fallback (populates builder
state only, no `--resume`)
- Compaction retry path now runs without `--resume` (compacted content
cannot be written in CLI native format)
## How
**Normal multi-turn flow (fixed):**
1. T1: SDK runs with `--session-id {app_uuid}` → CLI writes session to
predictable path
2. T1 finally: `upload_cli_session()` uploads native session to storage
(GCS/local)
3. T2+: `restore_cli_session()` downloads and writes the native session
back to disk
4. T2+: `--resume {app_uuid}` → CLI reads the restored session → full
context preserved
**Cross-pod benefit:**
The native session file is now in remote storage, so any pod can restore
it before a turn. Pod affinity for CoPilot is no longer required.
**Backward compatibility:**
- First turn: no native session in storage → runs without `--resume`
(same as before)
- If `restore_cli_session` fails: falls back gracefully to no
`--resume`, logs a warning
- DB reconstruction still available as last resort when no transcript
exists at all
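A hedged sketch of the upload/restore round trip; only the `{projects_base}/{encoded_cwd}/{session_id}.jsonl` layout comes from this PR, while the `storage` client and the remote key naming are assumptions.

```python
# Illustrative only: the storage interface and encoded_cwd value are assumed.
from pathlib import Path


def cli_session_path(projects_base: Path, encoded_cwd: str, session_id: str) -> Path:
    """Predictable path the CLI writes to when started with --session-id."""
    return projects_base / encoded_cwd / f"{session_id}.jsonl"


async def upload_cli_session(storage, projects_base: Path, encoded_cwd: str, session_id: str) -> None:
    """After each turn: push the CLI's native session JSONL to remote storage."""
    path = cli_session_path(projects_base, encoded_cwd, session_id)
    if path.exists():
        await storage.put(f"cli-sessions/{session_id}.jsonl", path.read_bytes())


async def restore_cli_session(storage, projects_base: Path, encoded_cwd: str, session_id: str) -> bool:
    """Before T2+: pull the session file back so `--resume {session_id}` works on
    any pod. Returns False (caller skips --resume) when nothing is stored."""
    data = await storage.get(f"cli-sessions/{session_id}.jsonl")
    if data is None:
        return False
    path = cli_session_path(projects_base, encoded_cwd, session_id)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    return True
```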
## Checklist
- [x] Tests updated (service_helpers_test, retry_scenarios_test,
transcript_test all pass)
- [x] `poetry run ruff check` clean
- [x] `poetry run black --check` clean
- [x] `poetry run pyright` 0 errors on changed files
|
||
|
|
9de22eb053 |
fix(backend): remove extra blank line in platform_cost_test.py (#12768)
## Why
`platform_cost_test.py` had an extra blank line between `TestUsdToMicrodollars.test_large_value` and `class TestMaskEmail`, causing black to flag it. This failure was appearing in the CI merge checks of unrelated PRs that target `dev`.
## What
Remove the extra blank line (3 → 2) to satisfy black's formatting rules.
## How
Single-character diff — no logic changes.
|
||
|
|
55fe900650 |
fix(backend/copilot): keep credential setup inline on run and schedule paths (#12739)
## Why
When the AutoPilot copilot needed to connect credentials for an existing agent, it was routing users to the Builder — flagged by @Pwuts in [the AutoPilot Credential UX thread](https://discord.com/channels/1126875755960336515/1492203735034892471/1492204936056930304). Two root causes:
1. **Credential race-condition on the run/schedule path.** `_check_prerequisites` only catches missing creds *before* the executor/scheduler call. If creds are deleted (or drift) between the prereq check and the actual call, the executor/scheduler raises `GraphValidationError`. The tool returned a plain `ErrorResponse`, and the LLM fell back to `create_agent`/`edit_agent` — whose `AgentSavedResponse.agent_page_link=/build?flowID=...` is exactly the Builder redirect the user saw.
2. **`GraphValidationError.node_errors` lost over RPC.** The scheduler call goes through `get_scheduler_client()` (RPC). The server-side error handler only preserved `exc.args` — the structured `node_errors` mapping was stripped, making it impossible for the copilot to distinguish credential failures from other validation errors on the schedule path.
## What
- **Race-condition handling for both run and schedule paths.** `_run_agent` and `_schedule_agent` now catch `GraphValidationError`, detect credential-flavoured node errors, and rebuild the inline `SetupRequirementsResponse` so the credential setup card renders inline without leaving chat. Mixed credential+structural errors fall through to a plain `ErrorResponse` so structural errors aren't hidden.
- **`GraphValidationError` round-trips over RPC.** `service.py` now packs `node_errors` into a typed `RemoteCallExtras` field on `RemoteCallError`, and the client-side handler re-threads it back into the reconstructed exception.
- **Shared credential-error matcher.** The credential-string matching logic is extracted to `is_credential_validation_error_message()` in `backend/executor/utils.py`, backed by `CRED_ERR_*` module-level constants that are referenced at both raise sites and in the matcher — so adding a new credential error string doesn't silently break the copilot fallback.
- **Tool-description guardrails.** `create_agent` and `edit_agent` descriptions now explicitly say "Do NOT use this to connect credentials — call run_agent instead." `agent_generation_guide.md` has the same guardrail for the agent-building context.
## How
- `backend/copilot/tools/run_agent.py`: new `_build_setup_requirements_from_validation_error()` helper; try/except around `add_graph_execution` and `add_execution_schedule` in the respective `_run_agent`/`_schedule_agent` paths; race-condition warnings logged.
- `backend/executor/utils.py`: `CRED_ERR_*` constants + `_CREDENTIAL_ERROR_MARKERS` typed tuple + public `is_credential_validation_error_message()` exported; old private `_is_credential_error` lambda replaced.
- `backend/util/service.py`: `RemoteCallExtras` Pydantic model with `node_errors: Optional[dict[str, dict[str, str]]]`; server handler packs it for `GraphValidationError`; client handler re-threads it; `exception_class is GraphValidationError` identity check (not `issubclass`).
- `backend/copilot/tools/create_agent.py`, `edit_agent.py`: added credential-routing guardrail to tool descriptions.
- `backend/copilot/sdk/agent_generation_guide.md`: added credential-routing guardrail.
## Test plan
- [x] Unit tests for `is_credential_validation_error_message` (all four error templates matched, case-insensitive, non-credential messages rejected).
- [x] Parity tests in `utils_test.py` that pin all `CRED_ERR_*` constants against `is_credential_validation_error_message`, so drift fails immediately when a new credential error is added.
- [x] Unit tests for `_build_setup_requirements_from_validation_error`: credential error → `SetupRequirementsResponse`; non-credential error → `None`; mixed errors → `None`.
- [x] E2E test for `_schedule_agent` race path: `get_scheduler_client().add_execution_schedule` mocked to raise credential `GraphValidationError` → response is `setup_requirements`, not a generic error.
- [x] E2E test for `_run_agent` race path: `execution_utils.add_graph_execution` mocked with `AsyncMock` to raise credential `GraphValidationError` → response is `setup_requirements`.
- [x] `RemoteCallError` round-trip tests in `service_test.py`: server handler packs `node_errors` into `extras`; client handler unpacks; full round-trip preserves `node_errors`.
- [x] Backwards-compat test: old `RemoteCallError` without `extras` still deserializes to `GraphValidationError` with empty `node_errors`.
|
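A minimal sketch of the shared matcher; the `CRED_ERR_*` strings below are placeholders, since the real constants and raise sites live in `backend/executor/utils.py`.

```python
# Placeholder constants for illustration; the real module defines the actual
# error templates used at the raise sites.
CRED_ERR_MISSING = "missing credentials"
CRED_ERR_NOT_FOUND = "credentials not found"

_CREDENTIAL_ERROR_MARKERS: tuple[str, ...] = (
    CRED_ERR_MISSING,
    CRED_ERR_NOT_FOUND,
)


def is_credential_validation_error_message(message: str) -> bool:
    """Case-insensitive check used to decide whether a GraphValidationError node
    error is credential-flavoured (rebuild an inline SetupRequirementsResponse)
    rather than structural (return a plain error)."""
    lowered = message.lower()
    return any(marker in lowered for marker in _CREDENTIAL_ERROR_MARKERS)
```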
||
|
|
bc6709dda1 |
fix(copilot): strip <internal_reasoning> tags from Sonnet response stream (#12763)
## Summary
- Extract `ThinkingStripper` from `baseline/service.py` into a shared `copilot/thinking_stripper.py` module
- Apply thinking-tag stripping to the SDK streaming path (`_dispatch_response`) so `<internal_reasoning>` and `<thinking>` tags emitted by non-extended-thinking models (e.g. Sonnet) are stripped before reaching the SSE client
- Flush any buffered text from the stripper at stream end so no content is lost
- Add unit tests for the shared `ThinkingStripper` and integration tests for the SDK dispatch path
## Problem
When using Claude Sonnet (which doesn't have extended thinking), the model sometimes outputs `<internal_reasoning>...</internal_reasoning>` tags as visible text in the response stream. The baseline path already stripped these, but the SDK path did not.
## Test plan
- [ ] CI passes (unit tests for ThinkingStripper and SDK dispatch stripping)
- [ ] Manual test: send a message via Sonnet and verify no `<internal_reasoning>` tags appear in the response
|
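A simplified sketch of how such a streaming stripper can remove complete tag blocks, hold back partial tags, and flush at stream end; the real `copilot/thinking_stripper.py` may differ in detail.

```python
# Illustrative stripper, not the production implementation.
import re

_TAG_RE = re.compile(r"<(internal_reasoning|thinking)>.*?</\1>", re.DOTALL)
_OPEN_RE = re.compile(r"<(internal_reasoning|thinking)>")


class ThinkingStripper:
    """Incrementally remove <internal_reasoning>/<thinking> blocks from streamed text."""

    def __init__(self) -> None:
        self._buffer = ""

    def feed(self, chunk: str) -> str:
        self._buffer += chunk
        # Drop any complete thinking blocks accumulated so far.
        self._buffer = _TAG_RE.sub("", self._buffer)
        open_match = _OPEN_RE.search(self._buffer)
        if open_match:
            # Unclosed thinking block: emit text before it, keep the rest buffered.
            out = self._buffer[: open_match.start()]
            self._buffer = self._buffer[open_match.start():]
            return out
        # Hold back a potential partial opening tag at the end (e.g. "<think").
        hold = 0
        for i in range(1, min(len(self._buffer), 24) + 1):
            tail = self._buffer[-i:]
            if "<internal_reasoning>".startswith(tail) or "<thinking>".startswith(tail):
                hold = i
        if hold:
            out, self._buffer = self._buffer[:-hold], self._buffer[-hold:]
            return out
        out, self._buffer = self._buffer, ""
        return out

    def flush(self) -> str:
        """At stream end, release whatever is still buffered so no content is lost."""
        out, self._buffer = self._buffer, ""
        return out
```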
||
|
|
b2b6f75420 |
fix(copilot): deduplicate SSE-replayed messages by content fingerprint (#12759)
## Summary
- Fixes duplicate message content shown in CoPilot during SSE reconnections (page visibility change, network hiccups, wake-resync)
- The `resume_session_stream` backend always replays from `"0-0"` (beginning of Redis stream), and replayed `UIMessage` objects get new generated IDs from `useChat`, bypassing the old adjacent-only content dedup
- Extends `deduplicateMessages` to track all seen `role + preceding-user-context + content` fingerprints globally, catching replayed messages regardless of different IDs or position in the list
- Scopes fingerprints by preceding user message text to avoid false positives when the assistant legitimately gives the same answer to different prompts
## Test plan
- [ ] Verify new unit tests pass in CI (`helpers.test.ts` - 7 new dedup test cases)
- [ ] Manual: start a long tool-use session, switch tabs, return - no duplicate content
- [ ] Manual: refresh page during active session - content loads from DB without duplicates
- [ ] Manual: ask the same question twice in different turns - both answers preserved
|
||
|
|
573fb7163f |
feat(copilot): upgrade claude-agent-sdk to 0.1.58 with OpenRouter compat + cost controls (#12747)
## Why
We've been pinned at `claude-agent-sdk==0.1.45` (bundled CLI 2.1.63) since PR #12294 because newer versions had two OpenRouter incompatibilities:
1. **`tool_reference` content blocks** (CLI 2.1.69+) — OpenRouter's Zod validation rejects them
2. **`context-management-2025-06-27` beta header** (CLI 2.1.91+) — OpenRouter returns 400
Both are now resolved:
- **`tool_reference`: Fixed by the CLI's built-in proxy detection.** CLI 2.1.70+ detects `ANTHROPIC_BASE_URL` pointing to a non-Anthropic endpoint and disables `tool_reference` blocks automatically. Verified working in CLI 2.1.97 — the bare CLI test only XFAILs on the beta header, NOT on tool_reference.
- **`context-management` beta: Fixed by the `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` env var.** Injected via `build_sdk_env()` for all SDK subprocess calls. Verified in CI.
## What
- Upgrades `claude-agent-sdk` from **0.1.45 → 0.1.58** (bundled CLI 2.1.63 → 2.1.97)
- Injects `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` in `build_sdk_env()` (all modes)
- Adds `claude_agent_cli_path` config override with executable validation
- Adds `claude_agent_max_thinking_tokens=8192` (was unlimited — 54% of $14K/5-day spend was thinking tokens at $75/M)
- Lowers `max_budget_usd` from $100 → $15 and `max_turns` from 1000 → 50
### Features unlocked by the upgrade
| Feature | SDK | Impact |
|---|---|---|
| `exclude_dynamic_sections` | 0.1.57 | Cross-user prompt cache hits (see #12758) |
| `AssistantMessage.usage` per-turn | 0.1.49 | Cost attribution per LLM call |
| `task_budget` | 0.1.51 | Per-task cost ceiling at SDK level |
| `get_context_usage()` | 0.1.52 | Live context-window monitoring |
| MCP large-tool-result fix | 0.1.55 | No more silent truncation >50K chars |
| MCP HTTP/SSE buffer leak fix | CLI 2.1.97 | Production memory creep ~50 MB/hr |
| 429 retry exponential backoff | CLI 2.1.97 | Rate-limit recovery (was burning all retries in ~13s) |
| `--resume` cache miss fix | CLI 2.1.90 | Prompt cache works after resume |
| SDK session quadratic-write fix | CLI 2.1.90 | No more slowdown on long sessions |
| `max_thinking_tokens` | 0.1.57 | Cap extended thinking cost |
## How
- `build_sdk_env()` in `env.py` injects the env var unconditionally (all 3 auth modes)
- `service.py` passes `max_thinking_tokens` to `ClaudeAgentOptions`
- `config.py` adds 3 new fields with env var overrides
- Regression tests verify both OpenRouter compat issues are handled
## Test plan
- [x] CI green on all test matrices (3.11, 3.12, 3.13)
- [x] `test_disable_experimental_betas_env_var_strips_headers` passes — verifies the env var strips both patterns
- [x] `test_bare_cli_*` correctly XFAILs — documents that the CLI regression exists
- [x] `test_sdk_exposes_max_thinking_tokens_option` guards the new param
- [x] Config validation tests use real temp executables
|
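A hedged sketch of the env-var injection; the real `build_sdk_env()` in `env.py` takes more inputs, and this only shows the shape of the unconditional override.

```python
# Illustrative sketch of the injection point, not the real env.py helper.
import os


def build_sdk_env(base_env: dict[str, str] | None = None) -> dict[str, str]:
    """Environment passed to every Claude Agent SDK subprocess call."""
    env = dict(base_env if base_env is not None else os.environ)
    # Disable experimental beta headers (e.g. context-management-2025-06-27)
    # that OpenRouter rejects with a 400; set unconditionally for all auth modes.
    env["CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS"] = "1"
    return env
```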
||
|
|
c0306b1d21 |
perf(backend/copilot): enable LLM prompt caching + harden user_context injection (#12725)
### Why
LLM token costs are significant, especially for the copilot feature. The
system prompt and tool definitions are the two largest static components
of every request — caching them dramatically reduces input token costs
(cache reads cost 10% of the base input price).
Previously, user-specific context (business understanding) was embedded
directly in the system prompt, making it unique per user and preventing
cache sharing across users or sessions. Every request paid full price
for the system prompt even when the content was functionally identical.
A secondary security concern was identified during review: because the
LLM is instructed to parse `<user_context>` blocks, a user could type a
literal `<user_context>…</user_context>` tag in any message and
potentially spoof or suppress their own personalisation context. This PR
includes a full defence-in-depth fix for that injection vector on the
first turn (including new users with no stored understanding), plus
GET-endpoint stripping so injected context is never surfaced back to the
client.
### What
- **`copilot/service.py`**: Added `USER_CONTEXT_TAG` constant (shared by
writer and reader). Added `_USER_CONTEXT_ANYWHERE_RE` /
`_USER_CONTEXT_PREFIX_RE` regexes, `format_user_context_prefix`,
`strip_user_context_prefix`, `sanitize_user_supplied_context`, and
`_sanitize_user_context_field` helpers. Replaced the old
`_build_cacheable_system_prompt` / `_build_system_prompt` pair with a
single `_build_system_prompt` that returns `(static_prompt,
understanding)`. Added `inject_user_context` which sanitizes user input,
optionally wraps trusted understanding, and persists the result to DB.
- **`copilot/sdk/service.py`**: On first turn calls
`inject_user_context` before `_build_query_message` so the query sees
the prefixed content. Passes `user_id if not has_history else None` to
avoid redundant DB lookups on subsequent turns.
- **`copilot/baseline/service.py`**: Same pattern —
`inject_user_context` called before transcript append and OpenAI message
list construction; `openai_messages` loop patches the first user entry
after injection.
- **`blocks/llm.py`**: System prompt sent as a structured block with
`cache_control: {"type": "ephemeral"}`. `cache_control` placed on the
last tool in the tool list. Guards against empty/whitespace-only system
blocks (Anthropic rejects them). Fixed `anthropic.omit` →
`anthropic.NOT_GIVEN` sentinel for the no-tools case.
- **`api/features/chat/routes.py`**: Added `_strip_injected_context`
which returns a shallow copy of each message with the server-injected
`<user_context>` prefix stripped before the GET `/sessions/{id}`
response, so the prefix is invisible to the frontend.
- **`copilot/db.py`**: Added defence-in-depth `result > 1` error log in
`update_message_content_by_sequence`. Added authorization note
documenting why a `userId` join is not required.
- **`data/db_manager.py`**: Registered
`update_message_content_by_sequence` on both the sync and async DB
manager clients.
### How it works
**Static system prompt**: The system prompt is now identical for every
user. The LLM is instructed to look for a `<user_context>` block in the
first user message when present, and to greet new users warmly when no
context is provided.
**User context injection**: On the first turn of a new session, the
caller's business understanding is prepended to the user's message as
`<user_context>…</user_context>`. The prefixed content is also persisted
to the DB so resumed sessions and page reloads retain personalisation.
**`<user_context>` tag sanitization (security)**: `inject_user_context`
calls `sanitize_user_supplied_context` unconditionally — even when
`understanding` is `None` — so new users cannot smuggle a
`<user_context>` tag to the LLM on the first turn. Fields from the
stored `BusinessUnderstanding` object are escaped with
`_sanitize_user_context_field` so user-controlled free-text cannot break
out of the trusted block. The GET endpoint strips the injected prefix
before returning message history to the client.
**All-turn sanitization**: `strip_user_context_tags` (a public alias of
`sanitize_user_supplied_context`) is called unconditionally on every
incoming message in both the SDK and baseline paths — before
`maybe_append_user_message` — so `<user_context>` tags typed by a user
on any turn (not just the first) are stripped before reaching the LLM.
Lone unpaired tags (e.g. `<user_context>spoof` without a closing tag)
are also caught by a second-pass `_USER_CONTEXT_LONE_TAG_RE`
substitution. The system prompt explicitly states the tag is
server-injected, only trusted on the first message, and must be ignored
on subsequent turns.
**Cache placement**: Per Anthropic's caching model, placing
`cache_control` on the system prompt block caches everything up to and
including it. Placing `cache_control` on the last tool definition caches
all tool schemas as a single prefix. Both cache points are set so
repeated requests from any user can hit both caches.
**Langfuse compatibility**: `_build_system_prompt` calls
`prompt.compile(users_information="")` so existing Langfuse prompt
templates remain static and cacheable.
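A minimal sketch of the tag handling described above, assuming the `<user_context>` wrapper shown in this PR; the regex names mirror the description, but the exact patterns in `copilot/service.py` may differ.

```python
# Illustrative sanitization and injection helpers, not the production code.
import re

USER_CONTEXT_TAG = "user_context"
_USER_CONTEXT_ANYWHERE_RE = re.compile(
    rf"<{USER_CONTEXT_TAG}>.*?</{USER_CONTEXT_TAG}>", re.DOTALL | re.IGNORECASE
)
_USER_CONTEXT_LONE_TAG_RE = re.compile(rf"</?{USER_CONTEXT_TAG}>", re.IGNORECASE)


def sanitize_user_supplied_context(message: str) -> str:
    """Strip user-typed <user_context> blocks (and lone unpaired tags) so a user
    cannot spoof or suppress the server-injected personalisation block."""
    cleaned = _USER_CONTEXT_ANYWHERE_RE.sub("", message)
    return _USER_CONTEXT_LONE_TAG_RE.sub("", cleaned)


def format_user_context_prefix(understanding: str, message: str) -> str:
    """Prepend the trusted, server-side understanding to the first user message."""
    safe_message = sanitize_user_supplied_context(message)
    if not understanding:
        return safe_message
    return f"<{USER_CONTEXT_TAG}>{understanding}</{USER_CONTEXT_TAG}>\n\n{safe_message}"
```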
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify system prompt no longer contains user-specific information
- [x] Verify `<user_context>` block appears in the first user message on
new sessions
- [x] Verify returning users still receive personalised responses via
user context
- [x] Verify Langfuse-sourced prompts compile correctly with empty
`users_information`
- [x] Verify Anthropic API calls include `cache_control` on system block
and last tool
- [x] Verify user-supplied `<user_context>` tags are stripped on the
first turn (including when understanding is None)
- [x] Verify user-supplied `<user_context>` tags are stripped on all
turns (turn 2+ sanitization via `strip_user_context_tags`)
- [x] Verify lone unpaired `<user_context>` tags (no closing tag) are
also stripped
- [x] Verify GET `/sessions/{id}` does not expose the injected
`<user_context>` prefix to the client
---------
Co-authored-by: majdyz <majdy.zamil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
b319c26cab |
feat(platform/admin): per-model cost breakdown, cache token tracking, OrchestratorBlock cost fix (#12726)
## Why
The platform cost tracking system had several gaps that made the admin dashboard less accurate and harder to reason about:
**Q: Do we have per-model granularity on the provider page?**
The `model` column was stored in `PlatformCostLog` but the SQL aggregation grouped only by `(provider, tracking_type)`, so all models for a given provider collapsed into one row. Now grouped by `(provider, tracking_type, model)` — each model gets its own row.
**Q: Why does Anthropic show `per_run` for OrchestratorBlock?**
Bug: `OrchestratorBlock._call_llm()` was building `NodeExecutionStats` with only `input_token_count` and `output_token_count` — it dropped `resp.provider_cost` entirely. For OpenRouter calls this silently discarded the `cost_usd`. For the SDK (autopilot) path, `ResultMessage.total_cost_usd` was never read. When `provider_cost` is None and token counts are 0 (e.g. SDK error path), `resolve_tracking` falls through to `per_run`. Fixed by propagating all cost/cache fields.
**Q: Why can't we get `cost_usd` for Anthropic direct API calls?**
The Anthropic Messages API does not return a dollar amount — only token counts. OpenRouter returns cost via response headers, so it uses `cost_usd` directly. The Claude Agent SDK *does* compute `total_cost_usd` internally, so SDK-mode OrchestratorBlock runs now get `cost_usd` tracking. For direct Anthropic LLM blocks the estimate uses per-token rates (see cache section below).
**Q: What about labeling by source (autopilot vs block)?**
Already tracked: `block_name` stores `copilot:SDK`, `copilot:Baseline`, or the actual block name. Visible in the raw logs table. Not added to the provider group-by (would explode row count); use the logs table filter instead.
**Q: Is there double-counting between `tokens`, `per_run`, and `cost_usd`?**
No. `resolve_tracking()` uses a strict preference hierarchy — exactly one tracking type per execution: `cost_usd` > `tokens` > provider heuristics > `per_run`. A single execution produces exactly one `PlatformCostLog` row.
**Q: Should we track Anthropic prompt cache tokens (PR #12725)?**
Yes — PR #12725 adds `cache_control` markers to Anthropic API calls, which causes the API to return `cache_read_input_tokens` and `cache_creation_input_tokens` alongside regular `input_tokens`. These have different billing rates:
- Cache reads: **10%** of base input rate (much cheaper)
- Cache writes: **125%** of base input rate (slightly more expensive, one-time)
- Uncached input: **100%** of base rate
Without tracking them separately, a flat-rate estimate on `total_input_tokens` would be wrong in both directions.
## What
- **Per-model provider table**: SQL now groups by `(provider, tracking_type, model)`. `ProviderCostSummary` and the frontend `ProviderTable` show a model column.
- **Cache token columns**: New `cacheReadTokens` and `cacheCreationTokens` columns in `PlatformCostLog` with matching migration.
- **LLM block cache tracking**: `LLMResponse` captures `cache_read_input_tokens` / `cache_creation_input_tokens` from Anthropic responses. `NodeExecutionStats` gains `cache_read_token_count` / `cache_creation_token_count`. Both propagate to `PlatformCostEntry` and the DB.
- **Copilot path**: `token_tracking.persist_and_record_usage` now writes cache tokens as dedicated `PlatformCostEntry` fields (was metadata-only).
- **OrchestratorBlock bug fix**: `_call_llm()` now includes `resp.provider_cost`, `resp.cache_read_tokens`, `resp.cache_creation_tokens` in the stats merge. SDK path captures `ResultMessage.total_cost_usd` as `provider_cost`.
- **Accurate cost estimation**: `estimateCostForRow` uses token-type-specific rates for `tokens` rows (uncached=100%, reads=10%, writes=125% of configured base rate).
## How
`resolve_tracking` priority is unchanged. For Anthropic LLM blocks the tracking type remains `tokens` (Anthropic API returns no dollar amount). For OrchestratorBlock in SDK/autopilot mode it now correctly uses `cost_usd` because the Claude Agent SDK computes and returns `total_cost_usd`. For OpenRouter through OrchestratorBlock it now correctly uses `cost_usd` (was silently dropped before).
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `ProviderCostSummary` SQL updated
- [x] Cache token fields present in `PlatformCostEntry` and `PlatformCostLogCreateInput`
- [x] Prisma client regenerated — all type checks pass
- [x] Frontend `helpers.test.ts` updated for new `rateKey` format
- [x] Pre-commit hooks pass (Black, Ruff, isort, tsc, Prisma generate)
|
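The rate arithmetic above, written as a standalone sketch; the real `estimateCostForRow` lives in the frontend, and this only illustrates the math.

```python
# Token-type-specific estimate: uncached input bills at 100% of the base input
# rate, cache reads at 10%, cache writes at 125%; output uses its own rate.
def estimate_cost_for_tokens(
    uncached_input_tokens: int,
    cache_read_tokens: int,
    cache_creation_tokens: int,
    output_tokens: int,
    input_rate_per_token: float,
    output_rate_per_token: float,
) -> float:
    return (
        uncached_input_tokens * input_rate_per_token
        + cache_read_tokens * input_rate_per_token * 0.10
        + cache_creation_tokens * input_rate_per_token * 1.25
        + output_tokens * output_rate_per_token
    )
```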
||
|
|
85921f227a | Merge branch 'dev' of github.com:Significant-Gravitas/AutoGPT into preview/all-active-prs | ||
|
|
5844b13fb1 |
feat(backend/copilot): support multiple questions in ask_question tool (#12732)
### Why / What / How
**Why:** The `ask_question` copilot tool previously only accepted a single question per invocation. When the LLM needs to ask multiple clarifying questions simultaneously, it either crams them into one text field (requiring users to format numbered answers manually) or makes multiple sequential tool calls (slow and disruptive UX).
**What:** Replace the single `question`/`options`/`keyword` parameters with a `questions` array parameter so the LLM can ask multiple questions in one tool call, each rendered as its own input box.
**How:** Simplified the tool to accept only `questions` (array of question objects). Each item has `question` (required), `options`, and `keyword`. The frontend `ClarificationQuestionsCard` already supports rendering multiple questions — no frontend changes needed.
### Changes 🏗️
- `backend/copilot/tools/ask_question.py`: Replaced dual question/questions schema with single `questions` array. Extracted parsing into module-level `_parse_questions` and `_parse_one` helpers. Follows backend code style: early returns, list comprehensions, top-down ordering, functions under 40 lines.
- `backend/copilot/tools/ask_question_test.py`: Rewritten with 18 focused tests covering happy paths, keyword handling, options filtering, and invalid input handling.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Run `poetry run pytest backend/copilot/tools/ask_question_test.py` — all tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
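A sketch of the parsing helpers named above; the plain-dict return values are simplified stand-ins for the real response models in `ask_question.py`.

```python
# Illustrative parsing helpers, not the production implementations.
from typing import Any, Optional


def _parse_one(item: Any) -> Optional[dict]:
    """Accept a single question object; ignore entries without a usable question."""
    if not isinstance(item, dict):
        return None
    question = str(item.get("question") or "").strip()
    if not question:
        return None
    options = [str(o).strip() for o in item.get("options") or [] if str(o).strip()]
    return {
        "question": question,
        "options": options,
        "keyword": str(item.get("keyword") or "").strip() or None,
    }


def _parse_questions(raw: Any) -> list[dict]:
    """Parse the `questions` array argument, dropping invalid entries."""
    if not isinstance(raw, list):
        return []
    return [parsed for item in raw if (parsed := _parse_one(item)) is not None]
```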
||
|
|
c014e1aa35 | merge(preview): merge all active PRs into preview/all-active-prs from fresh dev | ||
|
|
e59f576622 | Merge remote-tracking branch 'origin/spare/13' into preview/all-active-prs | ||
|
|
c99fa32ae3 | Merge remote-tracking branch 'origin/spare/3' into preview/all-active-prs | ||
|
|
b71789da50 | Merge remote-tracking branch 'origin/feat/subscription-tier-billing' into preview/all-active-prs | ||
|
|
5661326e7e |
fix(platform): fetch real Stripe prices in subscription status endpoint
- Import get_subscription_price_id in v1.py
- get_subscription_status now calls stripe.Price.retrieve for PRO/BUSINESS tiers to return actual unit_amount instead of hardcoded zeros
- UI will now show correct monthly costs when LD price IDs are configured
- Fix Button import from __legacy__ to design system in SubscriptionTierSection
- Update subscription status tests to mock the new Stripe price lookup
|
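A hedged sketch of the Stripe lookup; per this commit, the `price_id` would come from `get_subscription_price_id`, and the surrounding endpoint code is illustrative.

```python
# Illustrative lookup only; requires a configured Stripe API key.
import stripe


def get_tier_monthly_amount(price_id: str) -> int:
    """Fetch the real unit_amount (in cents) from Stripe for a configured
    subscription price, instead of returning a hardcoded zero."""
    price = stripe.Price.retrieve(price_id)
    return price.unit_amount or 0
```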
||
|
|
df3fe926f2 |
style(backend/copilot): apply Black formatting to ask_question
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
505af7e673 |
refactor(backend/copilot): simplify ask_question to questions-only API
Drop the dual question/questions schema in favor of a single `questions` array parameter. This removes ~175 lines of complexity (the _execute_single path, duplicate params, precedence logic).
Restructured per backend code style rules:
- Top-down ordering: public _execute first, helpers below
- Early return with guard clauses, no deep nesting
- List comprehensions via walrus operator in _parse_questions
- Helpers extracted as module-level functions (not methods)
- Functions under 40 lines each
The frontend ClarificationQuestionsCard already renders arrays of any length — no UI changes needed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
d896a1f9fa |
fix(backend/copilot): add missing isinstance assertion in test
Add isinstance narrowing in test_execute_multiple_questions_ignores_single_params to fix Pyright type-check CI failure (reportAttributeAccessIssue). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
6aa5a808e0 |
fix(backend/copilot): add isinstance assertions to fix type-check CI
Tests that access `result.questions` without first narrowing the type from `ToolResponseBase` to `ClarificationNeededResponse` cause Pyright type-check failures. Added `assert isinstance(result, ClarificationNeededResponse)` before accessing `.questions` in 4 tests. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |