When ENABLE_PLATFORM_PAYMENT is off, return 422 for paid-tier requests
instead of setting the tier directly. Admin tier changes must go through
the /api/admin/ routes, not the self-service endpoint.
Updates the corresponding subscription route test to assert the 422
response and removes the now-invalid set_subscription_tier mock.
## Summary
- Use `SystemPromptPreset` with `exclude_dynamic_sections=True` in the
SDK path so the Claude Code default prompt serves as a cacheable prefix
shared across all users, reducing input token cost by ~90%
- Add `claude_agent_cross_user_prompt_cache` config field (default
`True`) to make this configurable, with fallback to raw string when
disabled
- Extract `_build_system_prompt_value()` helper for testability, with
`_SystemPromptPreset` TypedDict for proper type annotation
> **Depends on #12747** — requires SDK >=0.1.58 which adds
`SystemPromptPreset` with `exclude_dynamic_sections`. Must be merged
after #12747.
## Changes
- **`config.py`**: New `claude_agent_cross_user_prompt_cache: bool =
True` field on `ChatConfig`
- **`sdk/service.py`**: `_SystemPromptPreset` TypedDict for type safety;
`_build_system_prompt_value()` helper that constructs the preset dict or
returns the raw string; call site uses the helper
- **`sdk/service_test.py`**: Tests exercise the production
`_build_system_prompt_value()` helper directly — verifying preset dict
structure (enabled), raw string fallback (disabled), and default config
value
## How it works
The Claude Code CLI supports `SystemPromptPreset`, which uses the
built-in Claude Code default prompt as a static prefix. By setting
`exclude_dynamic_sections=True`, per-user dynamic sections (working dir,
git status, auto-memory) are stripped from that prefix so it stays
identical across users and benefits from Anthropic's prompt caching. Our
custom prompt (tool notes, supplements, graphiti context) is appended
after the cacheable prefix.
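A minimal sketch of the helper under these assumptions (the preset dict
shape follows the SDK's `SystemPromptPreset`; the exact field names and
helper signature here are illustrative, not the production code):

```python
from typing import TypedDict, Union


class _SystemPromptPreset(TypedDict):
    # Shape assumed from the SDK's SystemPromptPreset TypedDict.
    type: str  # "preset"
    preset: str  # "claude_code"
    append: str  # our custom prompt, appended after the cacheable prefix
    exclude_dynamic_sections: bool


def _build_system_prompt_value(
    custom_prompt: str, cross_user_cache: bool
) -> Union[_SystemPromptPreset, str]:
    """Return the cacheable preset dict when enabled, else the raw string."""
    if not cross_user_cache:
        return custom_prompt  # fallback: raw string, no shared prefix
    return {
        "type": "preset",
        "preset": "claude_code",
        "append": custom_prompt,
        "exclude_dynamic_sections": True,  # strip per-user dynamic sections
    }
```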
## Test plan
- [x] CI passes (formatting, linting, unit tests)
- [x] Verify `_build_system_prompt_value()` returns correct preset dict
when enabled
- [x] Verify fallback to raw string when
`CHAT_CLAUDE_AGENT_CROSS_USER_PROMPT_CACHE=false`
## Why
The baseline copilot path (OpenAI-compatible / OpenRouter) did not
record any cost when the `x-total-cost` response header was absent, even
though token counts were always available. The admin cost dashboard also
lacked cache token columns.
## What
- **`x-total-cost` header extraction**: Reads the OpenRouter cost header
per LLM call in the `finally` block (so cost is captured even when the
stream errors mid-way). Accumulated across multi-round tool-calling
turns.
- **Cache token extraction**: Extracts
`prompt_tokens_details.cached_tokens` and `cache_creation_input_tokens`
from streaming usage chunks and passes
`cache_read_tokens`/`cache_creation_tokens` through to
`persist_and_record_usage` for storage in `PlatformCostLog`.
- **Dashboard cache token display**: Adds cache read/write columns to
the Raw Logs and By User tables on the admin platform costs dashboard.
Adds `total_cache_read_tokens` and `total_cache_creation_tokens` to
`UserCostSummary`.
- **No cost estimation**: When `x-total-cost` is absent, `cost_usd` is
left as `None` and `persist_and_record_usage` records the entry under
`tracking_type="tokens"`. Token-based cost estimation was removed — the
platform dashboard already handles per-token cost display, and estimates
would introduce inaccuracy in the reported figures.
## How
- In `_baseline_llm_caller`: extract the `x-total-cost` header in the
`finally` block; accumulate to `state.cost_usd` (sketched after this
list).
- In `_BaselineStreamState`: add `turn_cache_read_tokens` /
`turn_cache_creation_tokens` counters, populated from streaming usage
chunks.
- In `persist_and_record_usage` / `record_cost_log`: pass through
`cache_read_tokens` and `cache_creation_tokens` to `PlatformCostEntry`.
- Frontend: add `total_cache_read_tokens` /
`total_cache_creation_tokens` fields to `UserCostSummary` and render
them as columns in the cost dashboard.
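A condensed sketch of the header extraction described in the first
bullet (the state object and helper name are illustrative):

```python
def _extract_total_cost(headers: dict[str, str]) -> float | None:
    """Parse OpenRouter's x-total-cost header; None when absent or malformed."""
    raw = headers.get("x-total-cost")
    if raw is None:
        return None
    try:
        return float(raw)
    except ValueError:
        return None


# In the caller, so cost is captured even when the stream errors mid-way:
#   try:
#       ...stream the LLM response...
#   finally:
#       cost = _extract_total_cost(response_headers)
#       if cost is not None:
#           state.cost_usd = (state.cost_usd or 0.0) + cost
```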
## Test plan
- [x] Verify baseline copilot sessions log cost when `x-total-cost`
header is present
- [x] Verify `cost_usd` stays `None` and token count is logged when
header is absent
- [x] Verify cache tokens appear in the dashboard logs table for
sessions using prompt caching
- [x] Verify the By User tab shows Cache Read and Cache Write columns
- [x] Unit tests: `test_cost_usd_extracted_from_response_header`,
`test_cost_usd_remains_none_when_header_missing`,
`test_cache_tokens_extracted_from_usage_details`
### Why / What / How
**Why:** The Claude Agent SDK's built-in Write and Edit tools have no
defence against output-token truncation. When the LLM generates a large
`content` or `new_string` argument, the API truncates the response
mid-JSON, causing Ajv to reject it with the opaque `"'file_path' is a
required property"` error. The user's work is silently lost, and
retrying with the same approach loops infinitely.
**What:** Replaces the SDK's built-in Write and Edit tools with unified
MCP equivalents that detect truncation and return actionable recovery
guidance. Adds a new `read_file` MCP tool with offset/limit pagination.
Consolidates all file-tool handlers into a single module
(`e2b_file_tools.py`) covering both E2B (sandbox) and non-E2B (local SDK
working directory) modes.
**How:**
- `file_path` is placed first in every JSON schema so truncation is more
likely to preserve the path
- `"required"` is intentionally omitted from all MCP schemas so the MCP
SDK delivers empty/truncated args to the handler instead of rejecting
them with an opaque error
- Handlers detect two truncation patterns: complete (`{}`) and partial
(other fields present but `file_path` missing), returning actionable
error messages in both cases (see the sketch after this list)
- Edit uses a per-path `asyncio.Lock` (keyed by resolved absolute path)
to prevent parallel read-modify-write races when MCP tools are
dispatched concurrently
- Both E2B and non-E2B paths validate via `is_allowed_local_path()` /
`is_within_allowed_dirs()` to block directory traversal
- The SDK built-in Write and Edit are added to `SDK_DISALLOWED_TOOLS`;
the SDK built-in Read remains allowed only for workspace-scoped paths
(tool-results/tool-outputs) via `WORKSPACE_SCOPED_TOOLS`
- E2B write/edit tools are registered with `readOnlyHint=False`
(`_MUTATING_ANNOTATION`) to prevent parallel dispatch
- `bridge_to_sandbox` copies host-side tool-result files into the E2B
sandbox on read so `bash_exec` can process them
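A sketch of the handler-side truncation check (function name and message
text are illustrative; the real handlers live in `e2b_file_tools.py`):

```python
def detect_truncation(args: dict) -> str | None:
    """Return an actionable recovery message when tool args look truncated."""
    if not args:
        # Complete truncation: the JSON was cut before any field survived.
        return (
            "Tool arguments arrived empty; the response was likely "
            "truncated. Retry, writing the content in smaller chunks."
        )
    if "file_path" not in args:
        # Partial truncation: other fields arrived, but file_path (placed
        # first in the schema) is missing, so the JSON was cut mid-argument.
        return (
            "file_path is missing from otherwise-present arguments; the "
            "response was likely truncated mid-JSON. Retry with less content."
        )
    return None  # args look complete; proceed with the normal handler
```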
### Changes 🏗️
- **`e2b_file_tools.py`** — unified file-tool handlers for Write, Read
(`read_file`), Edit, Glob, Grep covering both E2B and non-E2B modes;
per-path edit locking; truncation detection; sandbox symlink-escape
check; `bridge_to_sandbox` for SDK→E2B file bridging
- **`tool_adapter.py`** — registers unified Write/Edit/read_file MCP
tools (non-E2B only); adds `Read` tool for workspace-scoped SDK-internal
reads (both modes); E2B tools use `_MUTATING_ANNOTATION`;
`get_copilot_tool_names` / `get_sdk_disallowed_tools` updated for both
modes
- **`security_hooks.py`** — `WORKSPACE_SCOPED_TOOLS` checked before
`BLOCKED_TOOLS` so SDK internal Read is allowed on tool-results paths;
Write/Edit removed from workspace scope
- **`prompting.py`** — improved wording for large-file truncation
warning
- **`e2b_file_tools_test.py`** — comprehensive tests for non-E2B
Write/Read/Edit (path validation, truncation detection, offset/limit,
binary rejection, schema validation); E2B sandbox symlink-escape,
`bridge_to_sandbox`, and `_sandbox_write` tests
- **`security_hooks_test.py`** — updated tests for revised tool-blocking
and workspace-scoped Read behaviour
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Read: normal read, offset/limit, file not found, path traversal
blocked, binary file handling, truncation detection
- [x] Edit: normal edit, old_string not found, old_string not unique,
replace_all, partial truncation, path traversal blocked
- [x] Write: existing tests unchanged; truncation detection, path
validation, large-content warning
- [x] Schema validation: file_path first, required fields intentionally
absent
- [x] CLI built-in Write and Edit are in `SDK_DISALLOWED_TOOLS`; Read is
workspace-scoped only
- [x] E2B write/edit use `_MUTATING_ANNOTATION` (not parallel)
- [x] `black`, `ruff`, `pyright` pass on all modified files
- [ ] CI pipeline passes
## Summary
- Hide the mode toggle button while streaming (instead of disabling it)
to avoid a confusing partially-disabled toggle UI
- Remove localStorage mode persistence — mode is now transient in-memory
state only (no stale overrides across sessions)
- The copilot mode indicator now correctly reflects the active session's
mode because it reads from Zustand store which is updated on session
switch
## Changes
- `ChatInput.tsx` — hide `<ModeToggleButton>` when `isStreaming` instead
of passing `isStreaming` prop and showing a disabled button
- `ModeToggleButton.tsx` — remove `isStreaming` prop, disabled state,
and streaming-specific tooltip
- `store.ts` — remove localStorage read/write for `copilotMode`; mode
now defaults to `extended_thinking` and resets on page load
- `local-storage.ts` — keep `COPILOT_MODE` enum entry for backward
compatibility; remove unused `COPILOT_SESSION_MODES`
- `store.test.ts` — update tests to assert mode is NOT persisted to
localStorage
- `ChatInput.test.tsx` / `ModeToggleButton.stories.tsx` — update to
match hide-not-disable behavior
## Test plan
- [x] Create a session in fast mode, create another in extended_thinking
mode
- [x] Switch between sessions and verify the mode indicator updates
correctly
- [x] Mode toggle is hidden (not disabled) while a response is streaming
- [x] Refreshing the page resets mode to extended_thinking (no stale
localStorage override)
### Why / What / How
**Why:** AutoPilot sessions were silently dying with no response. Root
cause: `AsyncSandbox.create()` in the E2B SDK uses
`httpx.AsyncClient(timeout=None)` — infinite wait. When the E2B API
stalled during sandbox provisioning, executor coroutines hung
indefinitely. After 1h42m the RabbitMQ consumer timeout
(`COPILOT_CONSUMER_TIMEOUT_SECONDS = 3600`) killed the pod and all
in-flight sessions were orphaned — user sees no response, no error.
**What:**
1. Added per-attempt timeout + retry loop to `AsyncSandbox.create()`
calls in `e2b_sandbox.py` — 30s/attempt × 3 retries with exponential
backoff (~93s worst case vs infinite)
2. Added recovery enqueue in `AutoPilotBlock.run()` — on unexpected
failure, re-enqueues the session to RabbitMQ so a fresh executor pod
picks it up on the next turn
3. Added `_is_deliberate_block()` guard so recursion-limit errors are
not re-enqueued (they are expected terminations)
4. Unit tests for both new mechanisms
**How:**
- `asyncio.wait_for(AsyncSandbox.create(), timeout=30)` wraps each
attempt; `TimeoutError` triggers retry (sketched after this list)
- Redis creation sentinel TTL bumped 60→120s to cover the full retry
window (prevents concurrent callers from seeing stale sentinel)
- `_enqueue_for_recovery` calls `enqueue_copilot_turn()` with the
original prompt so the session resumes where it left off; dry-run
sessions are skipped; enqueue failures are logged but never mask the
original error
- `CancelledError` is re-raised after yielding the error output
(cooperative cancellation)
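A sketch of the timeout-plus-retry wrapper (constant names mirror the
Changes section below; the factory argument is illustrative):

```python
import asyncio

_SANDBOX_CREATE_TIMEOUT_SECONDS = 30
_SANDBOX_CREATE_MAX_RETRIES = 3


async def _create_with_retry(make_create_coro):
    """Bound each AsyncSandbox.create() attempt and back off between tries.

    Worst case: 30s + 1s + 30s + 2s + 30s = 93s, safely under the 120s
    Redis creation-sentinel TTL.
    """
    for attempt in range(_SANDBOX_CREATE_MAX_RETRIES):
        try:
            return await asyncio.wait_for(
                make_create_coro(),  # fresh coroutine per attempt
                timeout=_SANDBOX_CREATE_TIMEOUT_SECONDS,
            )
        except TimeoutError:  # asyncio.TimeoutError aliases this on 3.11+
            if attempt == _SANDBOX_CREATE_MAX_RETRIES - 1:
                raise
            await asyncio.sleep(2**attempt)  # exponential backoff: 1s, 2s
```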
### Changes 🏗️
**`backend/copilot/tools/e2b_sandbox.py`**
- Added `_SANDBOX_CREATE_TIMEOUT_SECONDS = 30`,
`_SANDBOX_CREATE_MAX_RETRIES = 3`
- Bumped `_CREATION_LOCK_TTL` 60 → 120s
- Replaced bare `AsyncSandbox.create()` with `asyncio.wait_for` + retry
loop
**`backend/blocks/autopilot.py`**
- Added `_is_deliberate_block(exc)` — returns True for recursion-limit
RuntimeErrors
- Added `_enqueue_for_recovery(session_id, user_id, message, dry_run)` —
re-enqueues to RabbitMQ; no-ops on dry_run
- Exception handler in `run()` calls `_enqueue_for_recovery` for
transient failures; inner try/except prevents enqueue failure from
masking the original error
**`backend/blocks/test/test_autopilot.py`**
- `TestIsDeliberateBlock` — 4 unit tests for `_is_deliberate_block`
- `TestRecoveryEnqueue` — 5 tests: transient error triggers enqueue,
recursion limit skips, dry_run passes flag through, enqueue failure
doesn't mask original error, `ctx.dry_run` is OR-ed in
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/blocks/test/test_autopilot.py -xvs` —
24/24 pass
- [x] Verified retry logic constants: 30s × 3 retries + 1s + 2s = 93s
worst case, sentinel TTL 120s covers it
- [x] Verified `_enqueue_for_recovery` is no-op for dry_run=True (no
RabbitMQ publish)
- [x] Verified `CancelledError` re-raises after yield
## Why
The Claude CLI 2.1.97 (bundled in `claude-agent-sdk 0.1.58`) changed the
`--resume` flag to accept a **session UUID**, not a file path. Our
service was incorrectly passing a temp file path (from
`write_transcript_to_tempfile`), causing the CLI subprocess to crash
with exit code 1 on every T2+ message — breaking all multi-turn CoPilot
conversations.
Additionally, using a file-per-pod approach meant pod affinity was
required for `--resume` to work (the file only existed on the pod that
handled T1).
## What
- Add `upload_cli_session()` to `transcript.py`: after each turn, upload
the CLI's native session JSONL (at
`{projects_base}/{encoded_cwd}/{session_id}.jsonl`) to remote storage
- Add `restore_cli_session()` to `transcript.py`: before T2+, download
and restore the CLI native session file to the expected path
- Pass `--session-id {app_uuid}` via `ClaudeAgentOptions` so the CLI
uses the app session UUID as its session ID → predictable file path
- On T2+: call `restore_cli_session()` and if successful, pass `--resume
{session_uuid}` (UUID, not file path)
- Remove `write_transcript_to_tempfile` from the resume path in
service.py (it only exists in transcript.py for compaction use)
- Keep DB reconstruction as last-resort fallback (populates builder
state only, no `--resume`)
- Compaction retry path now runs without `--resume` (compacted content
cannot be written in CLI native format)
## How
**Normal multi-turn flow (fixed):**
1. T1: SDK runs with `--session-id {app_uuid}` → CLI writes session to
predictable path
2. T1 finally: `upload_cli_session()` uploads native session to storage
(GCS/local)
3. T2+: `restore_cli_session()` downloads and writes the native session
back to disk
4. T2+: `--resume {app_uuid}` → CLI reads the restored session → full
context preserved
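An illustrative sketch of the predictable path in step 1 (the helper
name and the cwd-encoding rule here are assumptions about the CLI's
projects layout, not the actual `transcript.py` code):

```python
from pathlib import Path


def cli_session_path(projects_base: str, cwd: str, session_id: str) -> Path:
    """Where the CLI writes its native session JSONL once --session-id is
    pinned to the app session UUID."""
    # Assumption: the CLI encodes the working directory by replacing path
    # separators, e.g. /app/workspace -> -app-workspace.
    encoded_cwd = cwd.replace("/", "-")
    return Path(projects_base) / encoded_cwd / f"{session_id}.jsonl"


# upload_cli_session() pushes this file to remote storage after each turn;
# restore_cli_session() writes it back to the same path before T2+.
```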
**Cross-pod benefit:**
The native session file is now in remote storage, so any pod can restore
it before a turn. Pod affinity for CoPilot is no longer required.
**Backward compatibility:**
- First turn: no native session in storage → runs without `--resume`
(same as before)
- If `restore_cli_session` fails: falls back gracefully to no
`--resume`, logs a warning
- DB reconstruction still available as last resort when no transcript
exists at all
## Checklist
- [x] Tests updated (service_helpers_test, retry_scenarios_test,
transcript_test all pass)
- [x] `poetry run ruff check` clean
- [x] `poetry run black --check` clean
- [x] `poetry run pyright` 0 errors on changed files
## Why
`platform_cost_test.py` had an extra blank line between
`TestUsdToMicrodollars.test_large_value` and `class TestMaskEmail`,
causing black to flag it. This failure was appearing in the CI merge
checks of unrelated PRs that target `dev`.
## What
Remove the extra blank line (3 → 2) to satisfy black's formatting rules.
## How
Single-character diff — no logic changes.
## Why
When the AutoPilot copilot needed to connect credentials for an existing
agent, it was routing users to the Builder — flagged by @Pwuts in [the
AutoPilot Credential UX
thread](https://discord.com/channels/1126875755960336515/1492203735034892471/1492204936056930304).
Two root causes:
1. **Credential race-condition on the run/schedule path.**
`_check_prerequisites` only catches missing creds *before* the
executor/scheduler call. If creds are deleted (or drift) between the
prereq check and the actual call, the executor/scheduler raises
`GraphValidationError`. The tool returned a plain `ErrorResponse`, and
the LLM fell back to `create_agent`/`edit_agent` — whose
`AgentSavedResponse.agent_page_link=/build?flowID=...` is exactly the
Builder redirect the user saw.
2. **`GraphValidationError.node_errors` lost over RPC.** The scheduler
call goes through `get_scheduler_client()` (RPC). The server-side error
handler only preserved `exc.args` — the structured `node_errors` mapping
was stripped, making it impossible for the copilot to distinguish
credential failures from other validation errors on the schedule path.
## What
- **Race-condition handling for both run and schedule paths.**
`_run_agent` and `_schedule_agent` now catch `GraphValidationError`,
detect credential-flavoured node errors, and rebuild the inline
`SetupRequirementsResponse` so the credential setup card renders inline
without leaving chat. Mixed credential+structural errors fall through to
plain `ErrorResponse` so structural errors aren't hidden.
- **`GraphValidationError` round-trips over RPC.** `service.py` now
packs `node_errors` into a typed `RemoteCallExtras` field on
`RemoteCallError`, and the client-side handler re-threads it back into
the reconstructed exception.
- **Shared credential-error matcher.** The credential-string matching
logic is extracted to `is_credential_validation_error_message()` in
`backend/executor/utils.py`, backed by `CRED_ERR_*` module-level
constants that are referenced at both raise sites and in the matcher —
so adding a new credential error string doesn't silently break the
copilot fallback.
- **Tool-description guardrails.** `create_agent` and `edit_agent`
descriptions now explicitly say "Do NOT use this to connect credentials
— call run_agent instead." `agent_generation_guide.md` has the same
guardrail for the agent-building context.
## How
- `backend/copilot/tools/run_agent.py`: new
`_build_setup_requirements_from_validation_error()` helper; try/except
around `add_graph_execution` and `add_execution_schedule` in the
respective `_run_agent`/`_schedule_agent` paths; race-condition warnings
logged.
- `backend/executor/utils.py`: `CRED_ERR_*` constants +
`_CREDENTIAL_ERROR_MARKERS` typed tuple + public
`is_credential_validation_error_message()` exported (sketched after this
list); old private `_is_credential_error` lambda replaced.
- `backend/util/service.py`: `RemoteCallExtras` Pydantic model with
`node_errors: Optional[dict[str, dict[str, str]]]`; server handler packs
it for `GraphValidationError`; client handler re-threads it;
`exception_class is GraphValidationError` identity check (not
`issubclass`).
- `backend/copilot/tools/create_agent.py`, `edit_agent.py`: added
credential-routing guardrail to tool descriptions.
- `backend/copilot/sdk/agent_generation_guide.md`: added
credential-routing guardrail.
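A sketch of the shared matcher (the two marker strings are illustrative
stand-ins for the real `CRED_ERR_*` constants):

```python
# Referenced at both the raise sites and in the matcher, so a new
# credential error string cannot silently bypass the copilot fallback.
CRED_ERR_MISSING = "missing credentials"
CRED_ERR_NOT_FOUND = "credentials not found"

_CREDENTIAL_ERROR_MARKERS: tuple[str, ...] = (
    CRED_ERR_MISSING,
    CRED_ERR_NOT_FOUND,
)


def is_credential_validation_error_message(message: str) -> bool:
    """True when a GraphValidationError node error is credential-flavoured."""
    lowered = message.lower()  # case-insensitive matching
    return any(marker in lowered for marker in _CREDENTIAL_ERROR_MARKERS)
```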
## Test plan
- [x] Unit tests for `is_credential_validation_error_message` (all four
error templates matched, case-insensitive, non-credential messages
rejected).
- [x] Parity tests in `utils_test.py` that pin all `CRED_ERR_*`
constants against `is_credential_validation_error_message` — drift when
a new credential error is added fails immediately.
- [x] Unit tests for `_build_setup_requirements_from_validation_error`:
credential error → `SetupRequirementsResponse`; non-credential error →
`None`; mixed errors → `None`.
- [x] E2E test for `_schedule_agent` race path:
`get_scheduler_client().add_execution_schedule` mocked to raise
credential `GraphValidationError` → response is `setup_requirements`,
not generic error.
- [x] E2E test for `_run_agent` race path:
`execution_utils.add_graph_execution` mocked with `AsyncMock` to raise
credential `GraphValidationError` → response is `setup_requirements`.
- [x] `RemoteCallError` round-trip tests in `service_test.py`: server
handler packs `node_errors` into `extras`; client handler unpacks; full
round-trip preserves `node_errors`.
- [x] Backwards-compat test: old `RemoteCallError` without `extras`
still deserializes to `GraphValidationError` with empty `node_errors`.
## Summary
- Extract `ThinkingStripper` from `baseline/service.py` into a shared
`copilot/thinking_stripper.py` module
- Apply thinking-tag stripping to the SDK streaming path
(`_dispatch_response`) so `<internal_reasoning>` and `<thinking>` tags
emitted by non-extended-thinking models (e.g. Sonnet) are stripped
before reaching the SSE client
- Flush any buffered text from the stripper at stream end so no content
is lost
- Add unit tests for the shared `ThinkingStripper` and integration tests
for the SDK dispatch path
## Problem
When using Claude Sonnet (which doesn't have extended thinking), the
model sometimes outputs `<internal_reasoning>...</internal_reasoning>`
tags as visible text in the response stream. The baseline path already
stripped these, but the SDK path did not.
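A minimal sketch of the stripper's shape (the feed/flush interface and
regexes are illustrative; the production class handles more edge cases,
e.g. a tag name split across chunks):

```python
import re


class ThinkingStripper:
    """Strip <internal_reasoning>/<thinking> blocks from streamed text."""

    _BLOCK = re.compile(r"<(internal_reasoning|thinking)>.*?</\1>", re.DOTALL)
    _OPEN = re.compile(r"<(internal_reasoning|thinking)>(?!.*</\1>)", re.DOTALL)

    def __init__(self) -> None:
        self._buf = ""

    def feed(self, chunk: str) -> str:
        self._buf += chunk
        self._buf = self._BLOCK.sub("", self._buf)  # drop completed blocks
        if m := self._OPEN.search(self._buf):
            # Opening tag without its close yet: emit the text before it,
            # buffer the rest until the closing tag streams in.
            out, self._buf = self._buf[: m.start()], self._buf[m.start() :]
            return out
        out, self._buf = self._buf, ""
        return out

    def flush(self) -> str:
        # Stream ended mid-block: emit the buffer so no content is lost.
        out, self._buf = self._buf, ""
        return out
```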
## Test plan
- [ ] CI passes (unit tests for ThinkingStripper and SDK dispatch
stripping)
- [ ] Manual test: send a message via Sonnet and verify no
`<internal_reasoning>` tags appear in the response
## Summary
- Fixes duplicate message content shown in CoPilot during SSE
reconnections (page visibility change, network hiccups, wake-resync)
- The `resume_session_stream` backend always replays from `"0-0"`
(beginning of Redis stream), and replayed `UIMessage` objects get new
generated IDs from `useChat`, bypassing the old adjacent-only content
dedup
- Extends `deduplicateMessages` to track all seen `role +
preceding-user-context + content` fingerprints globally, catching
replayed messages regardless of different IDs or position in the list
(sketched after this list)
- Scopes fingerprints by preceding user message text to avoid false
positives when the assistant legitimately gives the same answer to
different prompts
## Test plan
- [ ] Verify new unit tests pass in CI (`helpers.test.ts` - 7 new dedup
test cases)
- [ ] Manual: start a long tool-use session, switch tabs, return - no
duplicate content
- [ ] Manual: refresh page during active session - content loads from DB
without duplicates
- [ ] Manual: ask the same question twice in different turns - both
answers preserved
## Why
We've been pinned at `claude-agent-sdk==0.1.45` (bundled CLI 2.1.63)
since PR #12294 because newer versions had two OpenRouter
incompatibilities:
1. **`tool_reference` content blocks** (CLI 2.1.69+) — OpenRouter's Zod
validation rejects them
2. **`context-management-2025-06-27` beta header** (CLI 2.1.91+) —
OpenRouter returns 400
Both are now resolved:
- **`tool_reference`: Fixed by CLI's built-in proxy detection.** CLI
2.1.70+ detects `ANTHROPIC_BASE_URL` pointing to a non-Anthropic
endpoint and disables `tool_reference` blocks automatically. Verified
working in CLI 2.1.97 — the bare CLI test only XFAILs on the beta
header, NOT on tool_reference.
- **`context-management` beta: Fixed by
`CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` env var.** Injected via
`build_sdk_env()` for all SDK subprocess calls. Verified in CI.
## What
- Upgrades `claude-agent-sdk` from **0.1.45 → 0.1.58** (bundled CLI
2.1.63 → 2.1.97)
- Injects `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` in
`build_sdk_env()` (all modes)
- Adds `claude_agent_cli_path` config override with executable
validation
- Adds `claude_agent_max_thinking_tokens=8192` (was unlimited — 54% of
$14K/5-day spend was thinking tokens at $75/M)
- Lowers `max_budget_usd` from $100 → $15 and `max_turns` from 1000 → 50
### Features unlocked by the upgrade
| Feature | SDK | Impact |
|---|---|---|
| `exclude_dynamic_sections` | 0.1.57 | Cross-user prompt cache hits (see #12758) |
| `AssistantMessage.usage` per-turn | 0.1.49 | Cost attribution per LLM call |
| `task_budget` | 0.1.51 | Per-task cost ceiling at SDK level |
| `get_context_usage()` | 0.1.52 | Live context-window monitoring |
| MCP large-tool-result fix | 0.1.55 | No more silent truncation >50K chars |
| MCP HTTP/SSE buffer leak fix | CLI 2.1.97 | Production memory creep ~50 MB/hr |
| 429 retry exponential backoff | CLI 2.1.97 | Rate-limit recovery (was burning all retries in ~13s) |
| `--resume` cache miss fix | CLI 2.1.90 | Prompt cache works after resume |
| SDK session quadratic-write fix | CLI 2.1.90 | No more slowdown on long sessions |
| `max_thinking_tokens` | 0.1.57 | Cap extended thinking cost |
## How
- `build_sdk_env()` in `env.py` injects the env var unconditionally in
all 3 auth modes (sketched after this list)
- `service.py` passes `max_thinking_tokens` to `ClaudeAgentOptions`
- `config.py` adds 3 new fields with env var overrides
- Regression tests verify both OpenRouter compat issues are handled
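A sketch of the injection point (other env handling elided; the function
signature is illustrative):

```python
import os


def build_sdk_env(base_env: dict[str, str] | None = None) -> dict[str, str]:
    env = dict(base_env if base_env is not None else os.environ)
    # Stop the bundled CLI from sending experimental beta headers
    # (e.g. context-management-2025-06-27) that OpenRouter rejects with 400.
    env["CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS"] = "1"
    return env
```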
## Test plan
- [x] CI green on all test matrices (3.11, 3.12, 3.13)
- [x] `test_disable_experimental_betas_env_var_strips_headers` passes —
verifies env var strips both patterns
- [x] `test_bare_cli_*` correctly XFAILs — documents the CLI regression
exists
- [x] `test_sdk_exposes_max_thinking_tokens_option` guards the new param
- [x] Config validation tests use real temp executables
### Why
LLM token costs are significant, especially for the copilot feature. The
system prompt and tool definitions are the two largest static components
of every request — caching them dramatically reduces input token costs
(cache reads cost 10% of the base input price).
Previously, user-specific context (business understanding) was embedded
directly in the system prompt, making it unique per user and preventing
cache sharing across users or sessions. Every request paid full price
for the system prompt even when the content was functionally identical.
A secondary security concern was identified during review: because the
LLM is instructed to parse `<user_context>` blocks, a user could type a
literal `<user_context>…</user_context>` tag in any message and
potentially spoof or suppress their own personalisation context. This PR
includes a full defence-in-depth fix for that injection vector on the
first turn (including new users with no stored understanding), plus
GET-endpoint stripping so injected context is never surfaced back to the
client.
### What
- **`copilot/service.py`**: Added `USER_CONTEXT_TAG` constant (shared by
writer and reader). Added `_USER_CONTEXT_ANYWHERE_RE` /
`_USER_CONTEXT_PREFIX_RE` regexes, `format_user_context_prefix`,
`strip_user_context_prefix`, `sanitize_user_supplied_context`, and
`_sanitize_user_context_field` helpers. Replaced the old
`_build_cacheable_system_prompt` / `_build_system_prompt` pair with a
single `_build_system_prompt` that returns `(static_prompt,
understanding)`. Added `inject_user_context` which sanitizes user input,
optionally wraps trusted understanding, and persists the result to DB.
- **`copilot/sdk/service.py`**: On first turn calls
`inject_user_context` before `_build_query_message` so the query sees
the prefixed content. Passes `user_id if not has_history else None` to
avoid redundant DB lookups on subsequent turns.
- **`copilot/baseline/service.py`**: Same pattern —
`inject_user_context` called before transcript append and OpenAI message
list construction; `openai_messages` loop patches the first user entry
after injection.
- **`blocks/llm.py`**: System prompt sent as a structured block with
`cache_control: {"type": "ephemeral"}`. `cache_control` placed on the
last tool in the tool list. Guards against empty/whitespace-only system
blocks (Anthropic rejects them). Fixed `anthropic.omit` →
`anthropic.NOT_GIVEN` sentinel for the no-tools case.
- **`api/features/chat/routes.py`**: Added `_strip_injected_context`
which returns a shallow copy of each message with the server-injected
`<user_context>` prefix stripped before the GET `/sessions/{id}`
response, so the prefix is invisible to the frontend.
- **`copilot/db.py`**: Added defence-in-depth `result > 1` error log in
`update_message_content_by_sequence`. Added authorization note
documenting why a `userId` join is not required.
- **`data/db_manager.py`**: Registered
`update_message_content_by_sequence` on both the sync and async DB
manager clients.
### How it works
**Static system prompt**: The system prompt is now identical for every
user. The LLM is instructed to look for a `<user_context>` block in the
first user message when present, and to greet new users warmly when no
context is provided.
**User context injection**: On the first turn of a new session, the
caller's business understanding is prepended to the user's message as
`<user_context>…</user_context>`. The prefixed content is also persisted
to the DB so resumed sessions and page reloads retain personalisation.
**`<user_context>` tag sanitization (security)**: `inject_user_context`
calls `sanitize_user_supplied_context` unconditionally — even when
`understanding` is `None` — so new users cannot smuggle a
`<user_context>` tag to the LLM on the first turn. Fields from the
stored `BusinessUnderstanding` object are escaped with
`_sanitize_user_context_field` so user-controlled free-text cannot break
out of the trusted block. The GET endpoint strips the injected prefix
before returning message history to the client.
**All-turn sanitization**: `strip_user_context_tags` (a public alias of
`sanitize_user_supplied_context`) is called unconditionally on every
incoming message in both the SDK and baseline paths — before
`maybe_append_user_message` — so `<user_context>` tags typed by a user
on any turn (not just the first) are stripped before reaching the LLM.
Lone unpaired tags (e.g. `<user_context>spoof` without a closing tag)
are also caught by a second-pass `_USER_CONTEXT_LONE_TAG_RE`
substitution. The system prompt explicitly states the tag is
server-injected, only trusted on the first message, and must be ignored
on subsequent turns.
**Cache placement**: Per Anthropic's caching model, placing
`cache_control` on the system prompt block caches everything up to and
including it. Placing `cache_control` on the last tool definition caches
all tool schemas as a single prefix. Both cache points are set so
repeated requests from any user can hit both caches.
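A minimal sketch of the two cache points using Anthropic's documented
prompt-caching format (the model id, tool definition, and prompt text
are illustrative; `blocks/llm.py` assembles these structures
differently):

```python
import anthropic

client = anthropic.Anthropic()

static_system_prompt = "You are the copilot..."  # identical for every user
tools = [
    {
        "name": "run_agent",
        "description": "Run an existing agent.",
        "input_schema": {"type": "object", "properties": {}},
    }
]
# Cache point 2: marking the last tool caches all tool schemas as one prefix.
tools[-1]["cache_control"] = {"type": "ephemeral"}

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": static_system_prompt,
            # Cache point 1: caches everything up to and including this block.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    tools=tools,
    messages=[
        {
            "role": "user",
            "content": "<user_context>stored understanding</user_context>\nHi!",
        }
    ],
)
```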
**Langfuse compatibility**: `_build_system_prompt` calls
`prompt.compile(users_information="")` so existing Langfuse prompt
templates remain static and cacheable.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify system prompt no longer contains user-specific information
- [x] Verify `<user_context>` block appears in the first user message on
new sessions
- [x] Verify returning users still receive personalised responses via
user context
- [x] Verify Langfuse-sourced prompts compile correctly with empty
`users_information`
- [x] Verify Anthropic API calls include `cache_control` on system block
and last tool
- [x] Verify user-supplied `<user_context>` tags are stripped on the
first turn (including when understanding is None)
- [x] Verify user-supplied `<user_context>` tags are stripped on all
turns (turn 2+ sanitization via `strip_user_context_tags`)
- [x] Verify lone unpaired `<user_context>` tags (no closing tag) are
also stripped
- [x] Verify GET `/sessions/{id}` does not expose the injected
`<user_context>` prefix to the client
---------
Co-authored-by: majdyz <majdy.zamil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A transient LaunchDarkly failure returned None from get_subscription_price_id,
which was cached for the full 60-second TTL, blocking subscription upgrades
until expiry. Adding cache_none=False ensures None is never stored in the cache
so the next call retries LD immediately.
Adds a regression test verifying that two consecutive calls where the first
returns None (LD transient error) and the second returns the real price ID
both hit LD, confirming the None sentinel is not cached.
Flagged by sentry[bot] (credit.py:1352, Severity: MEDIUM).
Since get_subscription_price_id is now @cached, in-memory cache state can
persist between tests in the same process and cause false cache hits. Call
cache_clear() before and after tests that call the function directly to
ensure each test exercises a fresh LD flag lookup.
- Cache `get_subscription_price_id` with 60s TTL via @cached decorator;
LD flag values change only at deploy time — caching avoids hitting the
LD SDK on every webhook delivery and GET /credits/subscription page load
- Add webhook identity cross-check in `sync_subscription_from_stripe`:
verify metadata.user_id (set during Checkout Session creation) matches
the user found via stripeCustomerId; log + bail on mismatch to prevent
silently updating the wrong user's subscription tier
- Move `handleTierChange` business logic from SubscriptionTierSection
component into `useSubscriptionTierSection` hook per project convention;
dialog state (confirmDowngradeTo) stays in component as it's UI state
- Add three new backend tests for metadata identity cross-check:
matching user_id accepted, mismatching user_id blocked, absent
metadata skips check (backward compat with non-Checkout subs)
- Strip ?subscription=cancelled from address bar in useSubscriptionTierSection
alongside the existing ?subscription=success cleanup so Stripe cancel
redirects don't leave stale params in the URL
- Parallelize the two sequential stripe.Subscription.list calls on the
cancel webhook path using asyncio.gather to reduce handler latency
- Add a test for ?subscription=cancelled being a no-op (no toast, URL cleaned)
- Add test_sync_subscription_from_stripe_missing_customer_key_returns_early to
verify the .get("customer") fix: a payload with no 'customer' key returns
early without querying the DB or writing a tier (no KeyError→500)
- Update SubscriptionTierSection loading test to match skeleton-card output
(no longer expects empty container; now asserts tier card text is absent)
- Replace null return on isLoading with three Skeleton card placeholders
matching the expected height of the tier grid to prevent layout shift
- Add eslint-disable-next-line comment on useEffect dependency array
explaining why refetch/toast are included despite being new refs each
render (stable in practice; effect is guarded by subscriptionStatus check)
- Use .get("customer") with an early return + warning log instead of direct
key access; prevents KeyError→500 on malformed webhook payloads that pass
HMAC verification but omit the customer field
- Document the paid-to-paid upgrade race window (PRO→BUSINESS) in a comment
so the known limitation is visible without changing semantics
- Add mock_set_tier.assert_not_called() to the multi-partial-failure test to
explicitly assert the DB tier is never updated when a Stripe cancel raises
Replace toastShownRef guard with router.replace(pathname) so the success
toast is not re-shown on page refresh and correctly re-fires on a second
checkout in the same SPA session. Adds test coverage for the behaviour.
_cleanup_stale_subscriptions was called before the idempotency guard
(current_tier == tier -> return), so webhook replays for an already-
applied event would fire another cleanup round and could inadvertently
cancel a new subscription the user signed up for between the original
event and its replay.
Move the cleanup call to after the idempotency check so it only runs
when we are actually going to apply a tier change. Add a status in
("active", "trialing") check and a new_sub_id guard so cleanup is
only triggered for paid-sub activation events, not cancellations.
- openapi.json: SubscriptionStatusResponse.tier was missing enum constraint — generated TS type was string instead of literal union. Added enum:[FREE,PRO,BUSINESS,ENTERPRISE] to match the Literal on the Python model.
- credit_subscription_test.py: set has_more=False on all MagicMock subscription list objects so _cancel_customer_subscriptions does not log spurious 'more than 10 subs' errors in tests. Also added clarifying comment on multi_partial_failure assertion.
- cache.py: replaced _MISSING: Any = object() with a dedicated _MissingType singleton class so mypy correctly narrows type after 'result is _MISSING' comparisons.
- SubscriptionStatusResponse.tier: str -> Literal["FREE","PRO","BUSINESS","ENTERPRISE"] so OpenAPI schema emits an enum and the generated TS client is narrowly typed
- Add test_update_subscription_tier_same_tier_is_noop: asserts the double-billing guard at line 868 returns 200/empty URL and never calls create_subscription_checkout
- Reset toastShownRef.current to false when subscriptionStatus != "success" so the success toast fires again after a second checkout on the same SPA mount
React onClick handlers don't await async functions, so passing an
async function directly creates a floating promise. Wrap in void to
make the intent explicit and prevent unhandled rejections.
The pre-parse rejection of @ was overly broad — it rejected valid URLs
with @ in query strings or fragments (e.g. ?ref=user@company.com).
The user:pass@host authority attack only applies to the netloc component.
Move the @ check to run against parsed.netloc after urlparse.
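A sketch of the relocated check (helper name is illustrative):

```python
from urllib.parse import urlparse


def has_userinfo(url: str) -> bool:
    """'@' only matters in the authority (netloc) component, where it
    separates user:pass from the host."""
    return "@" in urlparse(url).netloc


assert has_userinfo("https://trusted.example@evil.example/checkout")
assert not has_userinfo("https://example.com/return?ref=user@company.com")
```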
- Use set difference instead of any() to correctly detect other active
subs (any(sub["id"] != new_sub_id ...) returns True if ANY sub has a
different ID, which is always true when >1 sub exists regardless of
whether the cancelled sub is in the list).
- Add has_more check with logger.error in _cancel_customer_subscriptions
so we surface when a customer has >10 subs and some were silently
skipped.
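A sketch of the set-difference form (sub shape simplified; helper name
hypothetical):

```python
def other_active_sub_ids(subs: list[dict], new_sub_id: str) -> set[str]:
    """IDs of active subs other than the newly activated one."""
    return {sub["id"] for sub in subs} - {new_sub_id}


assert other_active_sub_ids([{"id": "sub_new"}], "sub_new") == set()
assert other_active_sub_ids(
    [{"id": "sub_old"}, {"id": "sub_new"}], "sub_new"
) == {"sub_old"}
```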
Revert the tentative update_many conditional guard (prisma where-clause
null semantics are fiddly and the test suite mocks get_stripe_customer_id
end-to-end, so a real prisma error wouldn't be caught locally). The
idempotency_key on Customer.create is sufficient: Stripe collapses
concurrent + retried calls to the same Customer object for 24h, which
comfortably covers every realistic in-flight retry window.
Also invalidate the get_user_by_id cache after the DB write so the
freshly-persisted stripeCustomerId is visible on the next read.
Address review findings on the subscription tier billing PR:
1. get_stripe_customer_id race: two concurrent calls (double-click,
retried request) could each create a Stripe Customer for the same
user, leaving an orphaned billable customer. Pass an idempotency_key
so Stripe collapses concurrent + retried calls server-side, and use
a conditional update_many so the loser of a longer-window race
re-reads the persisted ID instead of overwriting.
2. update_subscription_tier no-op short-circuit: if the user is already
on the requested paid tier, return without creating a Checkout
Session. Without this guard, a duplicate request creates a second
subscription for the same price; the user would be charged for both
until _cleanup_stale_subscriptions runs from the resulting webhook —
which only fires after the second charge has cleared.
3. stripe_webhook payload defensive extraction: a malformed payload
(missing/non-dict data.object, missing id) would raise KeyError /
TypeError after signature verification, which Stripe interprets as
a delivery failure and retries forever. Validate shape, log a
warning, and ack with 200 so Stripe stops retrying.
4. _cleanup_stale_subscriptions: bump the swallowed-error log from
warning to exception so Sentry surfaces it as an error, include
the customer/sub IDs needed for manual reconciliation, and add a
TODO referencing the missing periodic reconcile job that the
docstring already promises as the backstop.
The @cached decorator could not differentiate "no entry" from "entry is
None" — both `_get_from_memory` and `_get_from_redis` returned `None`
for misses, and the wrappers checked `result is not None` to decide
whether to recompute. Functions that returned `None` as a valid value
were therefore re-executed on every call, defeating the cache and (for
shared_cache=False) potentially causing per-pod thundering herd against
upstream APIs.
Fix:
- Use a module-level `_MISSING = object()` sentinel for "no entry".
- Wrappers now check `result is not _MISSING` so cached `None` is
returned correctly.
- Add a `cache_none: bool = True` parameter so callers that *want* the
retry-on-None behavior (e.g. external API calls returning `None` to
signal a transient error) can explicitly opt out via `cache_none=False`.
- `_get_stripe_price_amount` opts out: returning None on a Stripe error
must not poison the 5-minute cache window. Updated its docstring to
describe the actual behavior.
New tests cover both default (None is cached) and `cache_none=False`
(None is not stored, next call retries) for sync, async, and shared
cache paths.
Sentry bug prediction: PRRT_kwDOJKSTjM56RTEu (severity HIGH).
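A condensed sketch of the sentinel pattern (sync, in-memory,
positional-args-only; the real decorator also covers async functions,
kwargs, and the Redis-backed shared cache):

```python
from functools import wraps

_MISSING = object()  # distinguishes "no entry" from a cached None


def cached(cache_none: bool = True):
    def decorator(func):
        store: dict[tuple, object] = {}

        @wraps(func)
        def wrapper(*args):
            result = store.get(args, _MISSING)
            if result is not _MISSING:  # a cached None now counts as a hit
                return result
            result = func(*args)
            if result is not None or cache_none:
                store[args] = result  # cache_none=False: never store None
            return result

        return wrapper

    return decorator


@cached(cache_none=False)
def get_subscription_price_id(tier: str):
    ...  # a transient LD failure returning None is retried on the next call
```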
- Dialog controlled set callback: use explicit if-block to avoid
returning 'false | void' (TS2322)
- Redirect test: use vi.stubGlobal to replace window.location with
a plain object (Proxy on jsdom Location breaks private-field access)
Return None on StripeError instead of 0 so the @cached decorator
(which skips caching None) does not persist the error state for 5 min.
Added test to verify the None→0 fallback path in get_subscription_status.
- Replace __legacy__ Dialog import with molecules/Dialog in SubscriptionTierSection
- Update test mock to match new Dialog API (controlled pattern)
- Guard still_has_active_sub against empty new_sub_id in sync_subscription_from_stripe
- Move urlparse import from inside _validate_checkout_redirect_url to module level
Backend:
- Cache stripe.Price.retrieve with 5-min TTL via _get_stripe_price_amount
to avoid 200-600ms Stripe round-trip on every GET /credits/subscription
- Use SubscriptionTier enum .value for FREE/ENTERPRISE in tier_costs dict
for consistency (instead of hardcoded strings)
- Rename misleading test names: "defaults_to_FREE" → "preserves_current_tier"
to reflect actual behaviour (unknown price IDs preserve tier, not reset)
- Update subscription_routes_test to mock _get_stripe_price_amount instead
of stripe.Price.retrieve directly, avoiding cached-result interference
Frontend:
- Handle ?subscription=success return from Stripe Checkout: refetch + toast
- Add downgrade confirmation Dialog before cancelling paid subscription
- Handle ENTERPRISE tier: render dedicated admin-managed plan card, not the
FREE/PRO/BUSINESS tier cards (which would show no "Current" badge)
- Track pendingTier (via variables) so only the clicked button shows "Updating..."
- Show "Pricing available soon" for paid tiers with cost=0 (unconfigured LD flags)
instead of misleading "Free"
- Move tierError state into the hook, set via changeTier internally
- Move TIER_ORDER constant to module scope (was magic array inside render body)
- Add aria-current="true" to active tier card for screen reader accessibility
- Add role="alert" to all error paragraph elements
- Improve tier descriptions with concrete capacity values
Black detected double blank lines between class definitions in
platform_cost_test.py (pulled from dev base). Normalise to a single
blank line so the CI merge-commit lint check passes.
An empty STRIPE_WEBHOOK_SECRET (the default) lets an attacker
compute a valid HMAC-SHA256 signature using that same empty key and
forge any webhook event (customer.subscription.created, etc.),
escalating any user to an arbitrary subscription tier without paying.
Fix: return 503 immediately when stripe_webhook_secret is unset rather
than proceeding to signature verification. Also add run_in_threadpool
to get_stripe_customer_id and remove the duplicate trialing-sub test.
Merges origin/feat/subscription-tier-billing which had the open-redirect
guard, blocking-IO fix, and idempotency/ENTERPRISE guard.
Test added: test_stripe_webhook_unconfigured_secret_returns_503
- Guard stripe_webhook: return 503 when STRIPE_WEBHOOK_SECRET is empty.
An empty secret allows HMAC forgery (an attacker can compute a valid
signature using the same empty key), so we reject all webhook calls
when unconfigured.
- Suppress raw Stripe error from 502 cancel response; log server-side instead.
- Wrap all blocking Stripe SDK calls in run_in_threadpool: Customer.create,
Subscription.list, Subscription.cancel, checkout.Session.create.
- cancel_stripe_subscription now also cancels 'trialing' subscriptions
(previously only 'active'), preventing billing after a FREE downgrade.
- session.url None now raises ValueError instead of returning empty string.
- Add tests: webhook 503 on missing secret, trialing-sub cancellation.
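A sketch of the guard, assuming a FastAPI route (names and wiring are
illustrative; `stripe.Webhook.construct_event` is the standard Stripe
verification call):

```python
import stripe
from fastapi import HTTPException


async def stripe_webhook(payload: bytes, sig_header: str, webhook_secret: str):
    if not webhook_secret:
        # With an empty key anyone can compute a "valid" HMAC-SHA256
        # signature, so verification must not be attempted at all.
        raise HTTPException(status_code=503, detail="Webhook not configured")
    event = stripe.Webhook.construct_event(payload, sig_header, webhook_secret)
    ...  # dispatch on event["type"]
```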
## Why
The platform cost tracking system had several gaps that made the admin
dashboard less accurate and harder to reason about:
**Q: Do we have per-model granularity on the provider page?**
The `model` column was stored in `PlatformCostLog` but the SQL
aggregation grouped only by `(provider, tracking_type)`, so all models
for a given provider collapsed into one row. Now grouped by `(provider,
tracking_type, model)` — each model gets its own row.
**Q: Why does Anthropic show `per_run` for OrchestratorBlock?**
Bug: `OrchestratorBlock._call_llm()` was building `NodeExecutionStats`
with only `input_token_count` and `output_token_count` — it dropped
`resp.provider_cost` entirely. For OpenRouter calls this silently
discarded the `cost_usd`. For the SDK (autopilot) path,
`ResultMessage.total_cost_usd` was never read. When `provider_cost` is
None and token counts are 0 (e.g. SDK error path), `resolve_tracking`
falls through to `per_run`. Fixed by propagating all cost/cache fields.
**Q: Why can't we get `cost_usd` for Anthropic direct API calls?**
The Anthropic Messages API does not return a dollar amount — only token
counts. OpenRouter returns cost via response headers, so it uses
`cost_usd` directly. The Claude Agent SDK *does* compute
`total_cost_usd` internally, so SDK-mode OrchestratorBlock runs now get
`cost_usd` tracking. For direct Anthropic LLM blocks the estimate uses
per-token rates (see cache section below).
**Q: What about labeling by source (autopilot vs block)?**
Already tracked: `block_name` stores `copilot:SDK`, `copilot:Baseline`,
or the actual block name. Visible in the raw logs table. Not added to
the provider group-by (would explode row count); use the logs table
filter instead.
**Q: Is there double-counting between `tokens`, `per_run`, and
`cost_usd`?**
No. `resolve_tracking()` uses a strict preference hierarchy — exactly
one tracking type per execution: `cost_usd` > `tokens` > provider
heuristics > `per_run`. A single execution produces exactly one
`PlatformCostLog` row.
**Q: Should we track Anthropic prompt cache tokens (PR #12725)?**
Yes — PR #12725 adds `cache_control` markers to Anthropic API calls,
which causes the API to return `cache_read_input_tokens` and
`cache_creation_input_tokens` alongside regular `input_tokens`. These
have different billing rates:
- Cache reads: **10%** of base input rate (much cheaper)
- Cache writes: **125%** of base input rate (slightly more expensive,
one-time)
- Uncached input: **100%** of base rate
Without tracking them separately, a flat-rate estimate on
`total_input_tokens` would be wrong in both directions.
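A worked example of the split (the $3.00/M base input rate is
illustrative, not a real configured rate):

```python
BASE = 3.00 / 1_000_000  # $/token, illustrative


def estimate_cost(uncached: int, cache_reads: int, cache_writes: int) -> float:
    return (
        uncached * BASE  # 100% of base rate
        + cache_reads * BASE * 0.10  # cache reads: 10%
        + cache_writes * BASE * 1.25  # cache writes: 125%, one-time
    )


# 100K uncached + 900K cached-read input tokens cost ~$0.57, while a flat
# estimate over all 1M input tokens would report $3.00.
assert round(estimate_cost(100_000, 900_000, 0), 2) == 0.57
```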
## What
- **Per-model provider table**: SQL now groups by `(provider,
tracking_type, model)`. `ProviderCostSummary` and the frontend
`ProviderTable` show a model column.
- **Cache token columns**: New `cacheReadTokens` and
`cacheCreationTokens` columns in `PlatformCostLog` with matching
migration.
- **LLM block cache tracking**: `LLMResponse` captures
`cache_read_input_tokens` / `cache_creation_input_tokens` from Anthropic
responses. `NodeExecutionStats` gains `cache_read_token_count` /
`cache_creation_token_count`. Both propagate to `PlatformCostEntry` and
the DB.
- **Copilot path**: `token_tracking.persist_and_record_usage` now writes
cache tokens as dedicated `PlatformCostEntry` fields (was
metadata-only).
- **OrchestratorBlock bug fix**: `_call_llm()` now includes
`resp.provider_cost`, `resp.cache_read_tokens`,
`resp.cache_creation_tokens` in the stats merge. SDK path captures
`ResultMessage.total_cost_usd` as `provider_cost`.
- **Accurate cost estimation**: `estimateCostForRow` uses
token-type-specific rates for `tokens` rows (uncached=100%, reads=10%,
writes=125% of configured base rate).
## How
`resolve_tracking` priority is unchanged. For Anthropic LLM blocks the
tracking type remains `tokens` (Anthropic API returns no dollar amount).
For OrchestratorBlock in SDK/autopilot mode it now correctly uses
`cost_usd` because the Claude Agent SDK computes and returns
`total_cost_usd`. For OpenRouter through OrchestratorBlock it now
correctly uses `cost_usd` (was silently dropped before).
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `ProviderCostSummary` SQL updated
- [x] Cache token fields present in `PlatformCostEntry` and
`PlatformCostLogCreateInput`
- [x] Prisma client regenerated — all type checks pass
- [x] Frontend `helpers.test.ts` updated for new `rateKey` format
- [x] Pre-commit hooks pass (Black, Ruff, isort, tsc, Prisma generate)