Conflicts in baseline/service.py, baseline/transcript_integration_test.py,
and transcript.py arose because dev-only commit 0cd0a76305
(baseline upload fix) overlapped with the same fix in PR #12804 which
landed in master. Took master's version for all three files — it is the
complete, reviewed implementation.
### Why / What / How
**Why:** The copilot had two separate GCS paths (`cli-sessions/` and
`chat-transcripts/`), redundant function names
(`upload_cli_session`/`restore_cli_session`), and no shared context
strategy between modes. When switching from baseline→SDK or
SDK→baseline, the receiving mode discarded the stored transcript and
fell back to full DB reconstruction — loading all raw messages instead
of the compacted form — causing inflated context, wasted tokens, and
loss of CLI compaction summaries.
**What:**
- Single GCS path (`cli-sessions/`) for both modes — `chat-transcripts/`
removed
- Unified public API: `upload_transcript` / `download_transcript` /
`TranscriptDownload`
- `TranscriptMode = Literal["sdk", "baseline"]` persisted in
`.meta.json` — SDK skips `--resume` when `mode != "sdk"`
(baseline-written JSONL has stripped fields / synthetic IDs)
- `extract_context_messages(download, session_messages)` — shared
context primitive used by **both SDK and baseline**: reads compacted
transcript content + fills only the DB gap (messages after watermark),
so CLI compaction summaries are preserved across mode switches
- Watermark fix: `_jsonl_covered = transcript_msg_count + 2` when a real
transcript is present, preventing false gap detection after `--resume`
- Baseline gap-fill: `_append_gap_to_builder` converts `ChatMessage` →
JSONL entries; no more silently discarded stale transcripts
**How:**
```
SDK turn (mode="sdk" transcript available):
──► --resume [full CLI session restored natively]
──► inject gap prefix if DB has messages after watermark
SDK turn (mode="baseline" transcript available):
──► cannot --resume (synthetic CLI IDs)
──► extract_context_messages(download, session_messages):
returns transcript JSONL (compacted, isCompactSummary preserved) + gap
excludes session_messages[-1] (current turn — caller injects it separately)
──► format as <conversation_history> + "Now, the user says: {current}"
Baseline turn (any transcript):
──► _load_prior_transcript → TranscriptDownload
──► extract_context_messages(download, session_messages) + session_messages[-1]
replaces full session.messages DB read
──► LLM messages: [compacted history + gap] + [current user turn]
Transcript unavailable — both SDK (use_resume=False) and baseline:
──► extract_context_messages(None, session_messages) returns session_messages[:-1]
(all prior DB messages except the current user turn at [-1])
──► graceful fallback — no crash, no empty context
──► covers: first turn, GCS error, corrupt JSONL, missing .meta.json
──► next successful response uploads a fresh transcript
```
`extract_context_messages` is the shared primitive — both modes call the
same function, which handles:
- `download=None` (first turn, GCS unavailable) → falls back to
`session_messages[:-1]`
- Empty/corrupt content → falls back to `session_messages[:-1]`
- `bytes` content (raw GCS) or `str` content (pre-decoded baseline path)
- `isCompactSummary=True` entries → preserved so CLI compaction survives
mode switches
- Missing/corrupt `.meta.json` → `message_count` defaults to `0`, `mode`
defaults to `"sdk"`
**Why `[:-1]` and not all messages?** `session_messages[-1]` is always
the current user turn being handled right now. Both callers inject it
separately — SDK wraps it as `"Now, the user says: ..."`, baseline
appends it as the final message in the LLM array. Returning it inside
`extract_context_messages` would double-inject it.
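For reference, a minimal sketch of the primitive's shape — the `TranscriptDownload` fields and the JSONL entry layout here are assumptions, not the exact implementation:

```python
# Hypothetical sketch — real field names and the JSONL schema may differ.
import json


def extract_context_messages(download, session_messages):
    """Return prior context for the current turn; never includes session_messages[-1]."""
    prior = list(session_messages[:-1])           # current user turn is injected by the caller
    if download is None or not download.content:  # first turn, GCS error, empty transcript
        return prior

    raw = download.content
    text = raw.decode("utf-8", errors="replace") if isinstance(raw, bytes) else raw
    try:
        entries = [json.loads(line) for line in text.splitlines() if line.strip()]
    except json.JSONDecodeError:                  # corrupt JSONL falls back to the DB messages
        return prior

    # The transcript already holds the compacted history (isCompactSummary entries included);
    # only the DB gap past the watermark still needs appending. Return types are simplified.
    covered = download.message_count
    gap = prior[covered:] if covered < len(prior) else []
    return entries + gap
```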
### Changes 🏗️
- **`transcript.py`**: `CliSessionRestore` → `TranscriptDownload` +
`mode` field; `upload_cli_session` → `upload_transcript`;
`restore_cli_session` → `download_transcript`; add `TranscriptMode`,
`detect_gap`, `extract_context_messages`; import `ChatMessage` via
relative path to match `service.py` style
- **`sdk/service.py`**: mode-check before `--resume`; `_RestoreResult`
carries `baseline_download` + `context_messages` + `transcript_content`;
`_build_query_message` accepts `prior_messages` override;
`_restore_cli_session_for_turn` populates `context_messages` via
`extract_context_messages` and sets `transcript_content` to prevent
duplicate DB reconstruction; watermark fix (`_jsonl_covered =
transcript_msg_count + 2`)
- **`baseline/service.py`**: `_load_prior_transcript` returns `(bool,
TranscriptDownload | None)`; LLM context replaced with
`extract_context_messages(download, messages)`; `_append_gap_to_builder`
+ `detect_gap` call; `upload_transcript(mode="baseline")`
- **`sdk/transcript.py`**: updated re-exports, old aliases removed
- **`scripts/download_transcripts.py`**: updated for `bytes | str`
content type
- **Test files**: 179 tests total; `transcript_test.py`,
`baseline/transcript_integration_test.py`,
`sdk/service_helpers_test.py`, `sdk/test_transcript_watermark.py`,
`test/copilot/test_transcript_watermark.py` all updated/added
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] 179 unit tests pass — `transcript_test`,
`baseline/transcript_integration_test`, `sdk/service_helpers_test`,
`sdk/test_transcript_watermark`
- [x] pyright 0 errors on all changed files
- [x] SDK `--resume` path still works when `mode="sdk"` transcript is
present
- [x] SDK fallback uses `extract_context_messages` (compacted baseline
content + gap) when `mode="baseline"` transcript is stored — no more
full DB reconstruction
- [x] Baseline uses `extract_context_messages` per turn instead of full
`session.messages` DB read
- [x] `isCompactSummary=True` entries preserved across mode switches
- [x] Watermark (`_jsonl_covered`) fix prevents false gap detection
after `--resume`
- [x] Baseline gap detection no longer silently discards stale
transcripts
- [x] `TranscriptDownload.content` accepts `bytes | str` — backward
compatible
- [x] Transcript unavailable (GCS error, first turn, corrupt file)
gracefully falls back to `session_messages[:-1]` without crash — applies
to both SDK and baseline paths
---------
Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
_load_prior_transcript was returning False for missing/invalid transcripts,
which caused should_upload_transcript to suppress the upload. The original
intent was to protect against overwriting a *newer* GCS version — but a
missing or corrupt file is not 'newer'. Only stale (watermark ahead) and
download errors (unknown GCS state) should suppress upload.
Also renames transcript_covers_prefix → transcript_upload_safe throughout
to accurately describe what the flag means.
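A rough sketch of the intended decision, with illustrative status names (the real return values and call sites may differ):

```python
# Illustrative only — the concrete status names are assumptions.
def transcript_upload_safe(status: str) -> bool:
    """Only a provably newer GCS state should block the upload."""
    if status in ("stale", "download_error"):  # watermark ahead, or unknown GCS state
        return False
    # A missing or corrupt transcript is not "newer"; uploading a fresh one is safe.
    return True
```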
Added instruction to check GitHub authentication status before prompting
user. This prevents repeated, unnecessary asking of the user to add
their GitHub credentials when they're already added, which is currently
a prevalent bug.
### Changes 🏗️
- Added one line to
`autogpt_platform/backend/backend/copilot/prompting.py` instructing
AutoPilot to run `gh auth status` before prompting the user to connect
their GitHub account.
Co-authored-by: Toran Bruce Richards <22963551+Torantulino@users.noreply.github.com>
Reduced minZoom on the builder canvas from 0.1 to 0.05 to allow zooming
out further when working with large agent graphs.
Fixes #9325
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Why / What / How
**Why:** `fffbe0aad8` changed both `ChatConfig.model` and
`ChatConfig.claude_agent_fallback_model` to `claude-sonnet-4-6`. The
Claude Code CLI rejects this with `Error: Fallback model cannot be the
same as the main model`, causing every standard-mode copilot turn to
fail with exit code 1 — the session "completes" in ~30s but produces no
response and drops the transcript.
**What:** Set `claude_agent_fallback_model` default to `""`.
`_resolve_fallback_model()` already returns `None` on empty string,
which means the `--fallback-model` flag is simply not passed to the CLI.
On 529 overload errors the turn will surface normally instead of
silently retrying with a fallback.
**How:** One-line config change + test update.
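A sketch of the behaviour this relies on — the helper name comes from the description above, the arg-building function is hypothetical:

```python
# Not the verbatim implementation; shows why an empty default drops the flag entirely.
def _resolve_fallback_model(configured: str) -> str | None:
    return configured or None  # "" → None → no --fallback-model flag


def build_cli_args(main_model: str, fallback: str | None) -> list[str]:
    args = ["--model", main_model]
    if fallback:               # flag omitted when fallback is None/empty
        args += ["--fallback-model", fallback]
    return args
```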
### Changes 🏗️
- `ChatConfig.claude_agent_fallback_model` default:
`"claude-sonnet-4-6"` → `""`
- Update `test_fallback_model_default` to assert the empty default
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
### Why / What / How
Why: Copilot/Autopilot standard requests were still defaulting to Claude
Sonnet 4, while the expected default for this path is Sonnet 4.6.
What: This PR updates the backend Copilot defaults so the
standard/default path and fast path use Sonnet 4.6, and aligns the SDK
fallback model and related test expectations.
How: It changes `ChatConfig.model`, `ChatConfig.fast_model`, and
`ChatConfig.claude_agent_fallback_model` to Sonnet 4.6 values, then
updates backend tests that assert the default Sonnet model strings.
### Changes 🏗️
- Switch `ChatConfig.model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.fast_model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.claude_agent_fallback_model` from
`claude-sonnet-4-20250514` to `claude-sonnet-4-6`
- Update backend Copilot tests that assert the default Sonnet model
strings
- Configuration changes:
  - No new environment variables or docker-compose changes are required
  - Existing `.env.default` and compose files remain compatible because this only changes backend default model values in code
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run pytest
backend/copilot/baseline/transcript_integration_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_helpers_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_test.py`
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Changes default/fallback LLM model identifiers for Copilot requests,
which can affect runtime behavior, cost, and availability
characteristics across both baseline and SDK paths. Risk is mitigated by
being a small, config-only change with updated tests.
>
> **Overview**
> Updates Copilot backend defaults so both the standard (`model`) and
fast (`fast_model`) paths use `anthropic/claude-sonnet-4-6`, and aligns
the Claude Agent SDK fallback model to `claude-sonnet-4-6`.
>
> Adjusts related test expectations in baseline transcript integration
and SDK helper tests to match the new Sonnet 4.6 model strings.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
563361ac11. Bugbot is set up for automated
code reviews on this repo. Configure
[here](https://www.cursor.com/dashboard/bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
The Claude Code CLI auto-compacts its native session JSONL when the context
approaches the model's token limit (~200K for Sonnet). After compaction the
detailed conversation history is replaced by a ~27K-token summary, causing
the silent context loss users see as memory failures in long sessions.
Root cause identified from production logs for session 93ecf7c9:
- T6 CLI session: 233KB / ~207K tokens (near Sonnet limit)
- T7 CLI compacted session -> ~167KB / ~47K tokens (PreCompact hook missed)
- T12 second compaction -> ~176KB / ~27K tokens (just system prompt + summary)
- T14-T21: cache_read=26714 constantly -- only system prompt visible to Claude
The same stripping we already apply to our transcript (stale thinking blocks,
progress/metadata entries) now also runs on the CLI native session file. At
~2x the size of the stripped transcript, unstripped sessions routinely hit the
compaction threshold within 6-10 turns of a heavy Opus/thinking session.
After stripping:
- same-pod turns reuse the stripped local file (no compaction trigger)
- cross-pod turns restore the stripped GCS file (same benefit)
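A minimal sketch of the kind of stripping pass described above; the entry fields (`type`, `isMeta`, thinking-block layout) are assumptions about the CLI's JSONL format:

```python
import json


def strip_session_jsonl(raw: str) -> str:
    """Drop progress/metadata entries and stale thinking blocks before reusing the file."""
    kept = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("type") == "progress" or entry.get("isMeta"):
            continue                                  # metadata-only entries add no context
        content = entry.get("message", {}).get("content")
        if isinstance(content, list):                 # remove stale thinking blocks in place
            entry["message"]["content"] = [
                block for block in content if block.get("type") != "thinking"
            ]
        kept.append(json.dumps(entry))
    return "\n".join(kept) + "\n"
```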
When a user switches from baseline (fast) mode to SDK (extended_thinking)
mode mid-session, the first SDK turn has has_history=True (prior baseline
messages in DB) but no CLI session file in storage.
The old code gated session_id on `not has_history`, so mode-switch T1
never received a session_id — the CLI generated a random ID that wasn't
uploaded under the expected key. Every subsequent SDK turn would fail to
restore the CLI session and run without --resume, injecting the full
compressed history on each turn, causing model confusion.
Fix: set session_id whenever not using --resume (the `else` branch),
covering T1 fresh, mode-switch T1, and T2+ fallback turns. The retry
path is updated to use `"session_id" in sdk_options_kwargs` as the
discriminator (instead of `not has_history`) so mode-switch T1 retries
also keep the session_id while T2+ retries (where T1 restored a session
file via restore_cli_session) still remove it to avoid "Session ID
already in use".
When neither frontend_base_url nor platform_base_url is set, subscription
upgrade attempts were failing with a misleading 422 'success_url and
cancel_url must match the platform frontend origin' error. The real problem
is a server misconfiguration, not a bad URL from the client.
Add an explicit pre-flight check in update_subscription_tier: if the allowed
origin is not configured, log an error and raise 503 with a clear message so
operators can diagnose the missing config instead of chasing a false URL
mismatch error.
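A sketch of the pre-flight guard; the helper name and config attribute names follow the description above and are assumptions:

```python
import logging

from fastapi import HTTPException

logger = logging.getLogger(__name__)


def require_frontend_origin(frontend_base_url: str | None, platform_base_url: str | None) -> str:
    origin = frontend_base_url or platform_base_url
    if not origin:
        # Server misconfiguration, not a bad client URL: surface it as 503 so operators
        # stop chasing a false success_url/cancel_url mismatch.
        logger.error("Subscription tier change blocked: no platform frontend origin configured")
        raise HTTPException(status_code=503, detail="Platform frontend origin is not configured")
    return origin
```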
### Why / What / How
**Why:** CoPilot's Graphiti memory system needed structured metadata to
distinguish memory types (rules, procedures, facts, preferences),
support scoped retrieval, enable targeted deletion, and track memory
costs under the AutoPilot billing account separately from the platform.
**What:** Adds the MemoryEnvelope metadata model, structured
rule/procedure memory types, a derived-finding lane for
assistant-distilled knowledge, two-step forget tools, scope-aware
retrieval filtering, AutoPilot-dedicated API key routing, and several
reliability fixes (streaming socket leaks, event-loop-scoped caches,
ingestion hardening).
**How:** MemoryEnvelope wraps every stored episode with typed metadata
(source_kind, memory_kind, scope, status, confidence) serialized as
JSON. Retrieval filters by scope at the context layer. The forget flow
uses a search-then-confirm two-step pattern. Ingestion queues and client
caches are scoped per event loop via WeakKeyDictionary to prevent
cross-loop RuntimeErrors in multi-worker deployments. API key resolution
falls back to AutoPilot-dedicated keys (CHAT_API_KEY,
CHAT_OPENAI_API_KEY) before platform-wide keys.
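A minimal sketch of the per-event-loop caching pattern, with a stand-in client class (the real Graphiti client construction is not shown):

```python
import asyncio
from weakref import WeakKeyDictionary


class GraphitiClient:
    """Stand-in for the real event-loop-bound async client."""


# One client per running event loop; entries disappear when a loop is garbage-collected,
# so multi-worker deployments never share a client (or its futures) across loops.
_clients: WeakKeyDictionary = WeakKeyDictionary()
_locks: WeakKeyDictionary = WeakKeyDictionary()


async def get_client() -> GraphitiClient:
    loop = asyncio.get_running_loop()
    lock = _locks.setdefault(loop, asyncio.Lock())
    async with lock:
        client = _clients.get(loop)
        if client is None:
            client = GraphitiClient()
            _clients[loop] = client
        return client
```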
### Changes 🏗️
**New: MemoryEnvelope metadata model** (`memory_model.py`)
- Typed memory categories: fact, preference, rule, finding, plan, event,
procedure
- Source tracking: user_asserted, assistant_derived, tool_observed
- Scope namespacing: `real:global`, `project:<name>`, `book:<title>`,
`session:<id>`
- Status lifecycle: active, tentative, superseded, contradicted
- Structured `RuleMemory` and `ProcedureMemory` models for complex
instructions
**New: Targeted forget tools** (`graphiti_forget.py`)
- `memory_forget_search`: returns candidate facts with UUIDs for user
confirmation
- `memory_forget_confirm`: deletes specific edges by UUID after
confirmation
**New: Architecture test** (`architecture_test.py`)
- Validates no new `@cached(...)` usage around event-loop-bound async
clients
- Allowlists pre-existing violations for future cleanup
**Enhanced: memory_store tool** (`graphiti_store.py`)
- Accepts MemoryEnvelope metadata fields (source_kind, scope,
memory_kind, rule, procedure)
- Wraps content in MemoryEnvelope before ingestion
**Enhanced: memory_search tool** (`graphiti_search.py`)
- Scope-aware retrieval with hard filtering on group_id
**Enhanced: Ingestion pipeline** (`ingest.py`)
- Derived-finding lane: distills substantive assistant responses into
tentative findings
- Event-loop-scoped queues and workers via WeakKeyDictionary (fixes
multi-worker RuntimeError)
- Improved error handling and dropped-episode reporting
**Enhanced: Client cache** (`client.py`)
- Per-loop client cache and lock via WeakKeyDictionary (fixes "Future
attached to a different loop")
**Enhanced: Warm context** (`context.py`)
- Filters out non-global-scope episodes from warm context
**Fix: Streaming socket leak** (`baseline/service.py`)
- try/finally around async stream iteration to release httpx connections
on early exit
**Config: AutoPilot key routing** (`config.py`, `.env.default`)
- LLM key fallback: GRAPHITI_LLM_API_KEY → CHAT_API_KEY →
OPEN_ROUTER_API_KEY
- Embedder key fallback: GRAPHITI_EMBEDDER_API_KEY → CHAT_OPENAI_API_KEY
→ OPENAI_API_KEY
- Backwards-compatible: existing behavior unchanged until new keys are
provisioned
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/graphiti/config_test.py` — 16
tests pass (key fallback priority)
- [x] `poetry run pytest backend/copilot/tools/graphiti_store_test.py` —
store envelope tests pass
- [x] `poetry run pytest backend/copilot/graphiti/ingest_test.py` —
ingestion tests pass
- [x] `poetry run pytest backend/util/architecture_test.py` — structural
validation passes
- [x] Verify memory store/retrieve/forget cycle via copilot chat
- [x] Run AgentProbe multi-session memory benchmark (31 scenarios x3
repeats)
- [x] Confirm no CLOSE_WAIT socket accumulation under sustained
streaming load
- [x] Verify multi-worker deployment doesn't produce loop-binding errors
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- Configuration changes:
  - New optional env var `CHAT_OPENAI_API_KEY` — AutoPilot-dedicated OpenAI key for Graphiti embeddings (falls back to `OPENAI_API_KEY` if not set)
  - `CHAT_API_KEY` now used as first fallback for Graphiti LLM calls (was `OPEN_ROUTER_API_KEY`)
  - Infra action needed: add `CHAT_OPENAI_API_KEY` sealed secret in `autogpt-shared-config` values (dev + prod)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Touches Graphiti memory ingestion/retrieval and introduces hard-delete
capabilities plus event-loop–scoped caching/queues; failures could
affect memory correctness or delete the wrong edges. Also changes
streaming resource cleanup and key routing, which could surface as
connection or billing/cost attribution issues if misconfigured.
>
> **Overview**
> **Graphiti memory is upgraded from plain text episodes to a structured
JSON `MemoryEnvelope`.** `memory_store` now wraps content with typed
metadata (source, kind, scope, status) and optional structured
`rule`/`procedure` payloads, and ingestion supports JSON episodes.
>
> **Memory retrieval and lifecycle controls are expanded.**
`memory_search` adds optional scope hard-filtering to prevent
cross-scope leakage, warm-context formatting drops non-global scoped
episodes (and avoids empty wrappers), and new two-step tools
(`memory_forget_search` → `memory_forget_confirm`) enable targeted soft-
or hard-deletion of specific graph edges by UUID.
>
> **Reliability and multi-worker safety improvements.** Graphiti client
caching and ingestion worker registries are now per-event-loop (avoiding
cross-loop `Future` errors), streaming chat completions explicitly close
async streams to prevent `CLOSE_WAIT` socket leaks, warm-context is
injected into the first user message to keep the system prompt
cacheable, and a new `architecture_test.py` blocks future process-wide
caching of event-loop–bound async clients. Config updates route Graphiti
LLM/embedder keys to AutoPilot-specific env vars first, and OpenAPI
schema exports include the new memory response types.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
5fb4bd0a43. Bugbot is set up for automated
code reviews on this repo. Configure
[here](https://www.cursor.com/dashboard/bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
## Changes
**Root cause:** When a copilot session ends with a tool result as the
last saved message (`last_role=tool`), the next assistant response is
never persisted. This happens when:
1. An intermediate flush saves the session with `last_role=tool` (after
a tool call completes)
2. The Claude Agent SDK generates a text response for the next turn
3. The client disconnects (`GeneratorExit`) at the `yield
StreamStartStep` — the very first yield of the new turn
4. `_dispatch_response(StreamTextDelta)` is never called, so the
assistant message is never appended to `ctx.session.messages`
5. The session `finally` block persists the session still with
`last_role=tool`
**Fix:** In `_run_stream_attempt`, after `convert_message()` returns the
full list of adapter responses but *before* entering the yield loop,
pre-create the assistant message placeholder in `ctx.session.messages`
when:
- `acc.has_tool_results` is True (there are pending tool results)
- `acc.has_appended_assistant` is True (at least one prior message
exists)
- A `StreamTextDelta` is present in the batch (confirms this is a text
response turn)
This ensures that even if `GeneratorExit` fires at the first `yield`,
the placeholder assistant message is already in the session and will be
persisted by the `finally` block.
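A simplified sketch of the pre-create condition; the accumulator and message types are paraphrased stand-ins, not the exact classes:

```python
from dataclasses import dataclass


@dataclass
class StreamTextDelta:         # stand-in for the real adapter response type
    text: str = ""


@dataclass
class ChatMessage:             # stand-in for the real session message type
    role: str
    content: str = ""


def maybe_precreate_assistant_message(session_messages, acc, batch) -> None:
    """Append the assistant placeholder before the first yield of the turn."""
    has_text_delta = any(isinstance(step, StreamTextDelta) for step in batch)
    if acc.has_tool_results and acc.has_appended_assistant and has_text_delta:
        # Even if GeneratorExit fires at `yield StreamStartStep`, the placeholder is
        # already in the session and gets persisted by the finally block.
        session_messages.append(ChatMessage(role="assistant", content=""))
```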
**Tests:** Added `session_persistence_test.py` with 7 unit tests
covering the pre-create condition logic and delta accumulation behavior.
**Confirmed:** Langfuse trace `e57ebd26` for session
`465bf5cf-7219-4313-a1f6-5194d2a44ff8` showed the final assistant
response was logged at 13:06:49 but never reached DB — session had 51
messages with `last_role=tool`.
## Checklist
- [x] My code follows the code style of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A)
- [x] My changes generate no new warnings (Pyright warnings are
pre-existing)
- [x] I have added tests that prove my fix is effective
- [x] New and existing unit tests pass locally with my changes
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
### Why / What / How
**Why:** Anthropic prompt caching keys on exact system prompt content.
Two sources of per-session dynamic data were leaking into the system
prompt, making it unique per session/user — causing a full 28K-token
cache write (~$0.10 on Sonnet) on *every* first message for *every*
session instead of once globally per model.
**What:**
1. `get_sdk_supplement` was embedding the session-specific working
directory (`/tmp/copilot-<uuid>`) in the system prompt text. Every
session has a different UUID, making every session's system prompt
unique, blocking cross-session cache hits.
2. Graphiti `warm_ctx` (user-personalised memory facts fetched on the
first turn) was appended directly to the system prompt, making it unique
per user per query.
**How:**
- `get_sdk_supplement` now uses the constant placeholder
`/tmp/copilot-<session-id>` in the supplement text and memoizes the
result. The actual `cwd` is still passed to `ClaudeAgentOptions.cwd` so
the CLI subprocess uses the correct session directory.
- `warm_ctx` is now injected into the first user message as a trusted
`<memory_context>` block (prepended before `inject_user_context` runs),
following the same pattern already used for business understanding. It
is persisted to DB and replayed correctly on `--resume`.
- `sanitize_user_supplied_context` now also strips user-supplied
`<memory_context>` tags, preventing context-spoofing via the new tag.
After this change the system prompt is byte-for-byte identical across
all users and sessions for a given model.
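A sketch of the sanitisation side of this change; the regex-based stripping is an assumption about the mechanism, not the verbatim code:

```python
import re

# User-supplied text must not be able to smuggle in trusted context tags.
_UNTRUSTED_TAGS = ("user_context", "memory_context")


def sanitize_user_supplied_context(text: str) -> str:
    for tag in _UNTRUSTED_TAGS:
        text = re.sub(rf"</?{tag}>", "", text, flags=re.IGNORECASE)
    return text
```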
### Changes 🏗️
- `backend/copilot/prompting.py`: `get_sdk_supplement` ignores `cwd` and
uses a constant working-directory placeholder; result is memoized in
`_LOCAL_STORAGE_SUPPLEMENT`.
- `backend/copilot/sdk/service.py`: `warm_ctx` is saved to a local
variable instead of appended to `system_prompt`; on the first turn it is
prepended to `current_message` as a `<memory_context>` block before
`inject_user_context` is called.
- `backend/copilot/service.py`: `sanitize_user_supplied_context`
extended to strip `<memory_context>` blocks alongside `<user_context>`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/prompting_test.py
backend/copilot/prompt_cache_test.py` — all passed
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
Replace all `from backend.copilot.X import Y` (top-level and inline)
with `from ..X import Y` to eliminate Pyright type collisions from
mixed absolute/relative imports. Add `# isort: skip_file` to prevent
isort from reverting the change.
### Why / What / How
Fixes two SECRT-2226 bugs in copilot chat pagination.
**Bug 1 — can't load older messages when the newest page fits on
screen.** The `IntersectionObserver` in `LoadMoreSentinel` bailed when
`scrollHeight <= clientHeight`, which happens routinely once reasoning +
tool groups collapse. With no scrollbar and no button, users were stuck.
Fix: remove the guard, cap auto-fill at 3 non-scrollable rounds (keeps
the original anti-loop intent), and add a manual "Load older messages"
button as the always-available escape hatch.
**Bug 2 — older loaded pages vanish after a new turn, then reloading
them produces duplicates.** After each stream `useCopilotStream`
invalidates the session query; the refetch returns a shifted
`oldest_sequence`, which `useLoadMoreMessages` used as a signal to wipe
`olderRawMessages` and reset the local cursor. Scroll-back history was
lost on every turn, and the next load fetched a page that overlapped
with AI SDK's retained `currentMessages` — the "loops" users reported.
Fix: once any older page is loaded, preserve `olderRawMessages` and the
local cursor across same-session refetches. Only reset on session
change. The gap between the new initial window and older pages is
covered by AI SDK's retained state.
### Changes 🏗️
- `ChatMessagesContainer.tsx`: drop the scrollability guard; add
`MAX_AUTO_FILL_ROUNDS = 3` counter; add "Load older messages" button
(`ghost`/`small`); distinguish observer-triggered vs. button-triggered
loads so the button bypasses the cap; export `LoadMoreSentinel` for
testing.
- `useLoadMoreMessages.ts`: remove the wipe-and-reset branch on
`initialOldestSequence` change; preserve local state mid-session; still
mirror parent's cursor while no older page is loaded.
- New integration test `__tests__/LoadMoreSentinel.test.tsx`.
No backend changes.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Short/collapsed newest page: "Load older messages" button loads
older pages, preserves scroll
- [x] Full-viewport newest page: scroll-to-top auto-pagination still
works (no regression)
- [x] `has_more_messages=false` hides the button; `isLoadingMore=true`
shows spinner instead
- [x] Bug 2 reproduced locally with temporary `limit=5`: before fix
older page vanished and next load duplicated AI SDK messages; after fix
older page stays and next load fetches cleanly further back
- [x] `pnpm format`, `pnpm lint`, `pnpm types`, `pnpm test:unit` all
pass (1208/1208)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**) — N/A
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Why
Two fixes bundled together:
1. **ModelToggleButton styling**: after merging the ModelToggleButton
feature, the "Standard" state was invisible — no background, no label —
while "Advanced" had a colored pill. This was inconsistent with
`ModeToggleButton` where both states (Fast / Thinking) always show a
colored background + label.
2. **Execution ID filter on platform cost admin page**: admins needed to
look up cost rows for a specific agent run but had no way to filter by
`graph_exec_id`. All other identifiers (user, model, provider, block,
tracking type) were already filterable.
## What
- **ModelToggleButton**: inactive (Standard) state now uses
`bg-neutral-100 text-neutral-700 hover:bg-neutral-200` (same palette as
ModeToggleButton inactive), always shows the "Standard" label.
- **Platform cost admin page**: added `graph_exec_id` query filter
across the full stack — backend service functions, FastAPI route
handlers, generated TypeScript params types, `usePlatformCostContent`
hook, and the filter UI in `PlatformCostContent`.
## How
### ModelToggleButton
Changed the inactive-state class from hover-only transparent to
always-visible neutral background, and added the "Standard" text label
(was empty before — only the CPU icon showed).
### Execution ID filter
Added `graph_exec_id: str | None = None` parameter to:
- `_build_prisma_where` — applies `where["graphExecId"] = graph_exec_id`
- `get_platform_cost_dashboard`, `get_platform_cost_logs`,
`get_platform_cost_logs_for_export`
- All three FastAPI route handlers (`/dashboard`, `/logs`,
`/logs/export`)
- Generated TypeScript params types
- `usePlatformCostContent`: new `executionIDInput` /
`setExecutionIDInput` state, wired into `filterParams`, `handleFilter`,
and `handleClear`
- `PlatformCostContent`: new Execution ID input field in the filter bar
## Changes
- [x] I have explained why I made the changes, not just what I changed
- [x] There are no unrelated changes in this PR
- [x] I have run the relevant linters and tests before submitting
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
claude-agent-sdk 0.1.58's SystemPromptPreset TypedDict does not declare
exclude_dynamic_sections, causing a reportCallIssue Pyright error at the
_build_system_prompt_value call site (service.py:820). The field was
added in 0.1.59.
Define a local _SystemPromptPreset that extends the SDK TypedDict with
exclude_dynamic_sections: NotRequired[bool] so Pyright accepts the kwarg
without a # type: ignore comment, until the SDK pin is bumped to 0.1.59.
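The described workaround looks roughly like this; the import path for `SystemPromptPreset` is an assumption:

```python
from typing import NotRequired  # use typing_extensions on Python < 3.11

from claude_agent_sdk.types import SystemPromptPreset  # import path is an assumption


class _SystemPromptPreset(SystemPromptPreset):
    # Declared upstream in claude-agent-sdk 0.1.59; added locally so Pyright accepts the
    # kwarg on 0.1.58 without `# type: ignore`. Remove once the SDK pin is bumped.
    exclude_dynamic_sections: NotRequired[bool]
```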
Add early return when user has no stripe_customer_id to prevent creating
an orphaned Stripe customer if a subsequent Subscription.list call fails.
Follows the same pattern as cancel_stripe_subscription and
get_proration_credit_cents. Update tests to mock get_user_by_id and add
a test for the no-customer-id path.
## Why
When a user switches from **baseline** (fast) mode to **SDK**
(extended_thinking) mode mid-session, every subsequent SDK turn started
fresh with no memory of prior conversation.
Root cause: two complementary bugs on mode-switch T1 (first SDK turn
after baseline turns):
1. `session_id` was gated on `not has_history`. On mode-switch T1,
`has_history=True` (prior baseline turns in DB) so no `session_id` was
set. The CLI generated a random ID and could not upload the session file
under a predictable path → `--resume` failed on every following SDK
turn.
2. Even if `session_id` were set, the upload guard `(not has_history or
state.use_resume)` would block the session file upload on mode-switch T1
(`has_history=True`, `use_resume=False`), so the next turn still cannot
`--resume`.
Together these caused every SDK turn to re-inject the full compressed
history, causing model confusion (proactive tool calls, forgetting
context) observed in session `8237a27b-45d0-4688-af20-c185379e926f`.
## What
- **`service.py`**: Change `elif not has_history:` → `else:` for the
`session_id` assignment — set it whenever `--resume` is not active.
Covers T1 fresh, mode-switch T1 (`has_history=True` but no CLI session
exists), and T2+ fallback turns where restore failed.
- **`service.py` retry path**: Replace `not has_history` with
`"session_id" in sdk_options_kwargs` as the discriminator, so
mode-switch T1 retries also keep `session_id` while T2+ retries (where
`restore_cli_session` put a file on disk) correctly remove it to avoid
"Session ID already in use".
- **`service.py` upload guard**: Remove `and not skip_transcript_upload`
and `and (not has_history or state.use_resume)` from the
`upload_cli_session` guard. The CLI session file is independent of the
JSONL transcript; and upload must run on mode-switch T1 so the next turn
can `--resume`. `upload_cli_session` silently skips when the file is
absent, so unconditional upload is always safe.
## How
| Scenario | Before | After |
|---|---|---|
| T1 fresh (`has_history=False`) | `session_id` set ✓ | `session_id` set ✓ |
| Mode-switch T1 (`has_history=True`, no CLI session) | ❌ not set — **bug** | `session_id` set ✓ |
| T2+ with `--resume` | `resume` set ✓ | `resume` set ✓ |
| T2+ retry after `--resume` failed | `session_id` removed ✓ | `session_id` removed ✓ |
| Mode-switch T1 retry | `session_id` removed ❌ | `session_id` kept ✓ |
| Upload on mode-switch T1 | ❌ blocked by guard — **bug** | uploaded ✓ |
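The decision the table describes, as a condensed sketch (variable names follow the description above, not the verbatim code):

```python
def apply_session_identity(kwargs: dict, chat_session_id: str,
                           cli_session_restored: bool,
                           restored_session_id: str | None) -> None:
    """Choose --resume vs. an explicit session_id (T1, mode-switch T1, fallback turns)."""
    if cli_session_restored and restored_session_id:
        kwargs["resume"] = restored_session_id
    else:
        # The old code gated this branch on `not has_history`, so mode-switch T1 got
        # neither key and the CLI invented a random ID under an unexpected path.
        kwargs["session_id"] = chat_session_id

# On retry, the discriminator is simply `"session_id" in kwargs`: mode-switch T1 retries
# keep their session_id, while retries of turns that restored a CLI session file drop it
# to avoid "Session ID already in use".
```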
7 new unit tests in `TestSdkSessionIdSelection` document all session_id
cases.
6 new tests in `mode_switch_context_test.py` cover transcript bridging
for both fast→SDK and SDK→fast switches.
## Checklist
- [x] I have read the contributing guidelines
- [x] My changes are covered by tests
- [x] `poetry run format` passes
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
## Why
After merging #12782 to dev, a k8s rolling deployment triggered
infrastructure-level POST retries — nginx detected the old pod's
connection reset mid-stream and resent the same POST to a new pod. Both
pods independently saved the user message and ran the executor,
producing duplicate entries in the DB (seq 159, 161, 163) and a
duplicate response in the chat. The model saw the same question 3× in
its context window and spent its response commenting on that instead of
answering.
Two compounding issues:
1. **No backend idempotency**: `append_and_save_message` saves
unconditionally — k8s/nginx retries silently produce duplicate turns.
2. **Frontend dedup cleared after success**:
`lastSubmittedMsgRef.current = null` after every completed turn wipes
the dedup guard, so any rapid re-submit of the same text (from a stalled
UI or user double-click) slips through.
## What
**Backend** — Redis idempotency gate in `stream_chat_post`:
- Before saving the user message, compute `sha256(session_id +
message)[:16]` and `SET NX ex=30` in Redis
- If key already exists → duplicate: return empty SSE (`StreamFinish +
[DONE]`) immediately, skip save + executor enqueue
- User messages only (`is_user_message=True`); system/assistant messages
bypass the check
**Frontend** — Keep `lastSubmittedMsgRef` populated after success:
- Remove `lastSubmittedMsgRef.current = null` on stream complete
- `getSendSuppressionReason` already has a two-condition check: `ref ===
text AND lastUserMsg === text` — so legitimate re-asks (after a
different question was answered) still work; only rapid re-sends of the
exact same text while it's still the last user message are blocked
## How
- 30 s Redis TTL covers infrastructure retry windows (k8s SIGTERM →
connection reset → ingress retry typically < 5 s)
- Empty SSE response is well-formed (StreamFinish + [DONE]) — frontend
AI SDK marks the turn complete without rendering a ghost message
- Frontend ref kept live means: submit "foo" → success → submit "foo"
again instantly → suppressed. Submit "foo" → success → submit "bar" →
proceeds (different text updates the ref).
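A condensed sketch of the gate, assuming a `redis.asyncio` client and an illustrative key prefix:

```python
import hashlib

IDEMPOTENCY_TTL_SECONDS = 30  # covers the k8s/nginx retry window described above


async def is_duplicate_user_message(redis, session_id: str, message: str) -> bool:
    digest = hashlib.sha256(f"{session_id}{message}".encode()).hexdigest()[:16]
    key = f"copilot:stream:dedupe:{digest}"   # key prefix is illustrative
    # SET NX succeeds only for the first writer; retries within the TTL see a falsy result.
    first = await redis.set(key, "1", nx=True, ex=IDEMPOTENCY_TTL_SECONDS)
    return not first
```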
## Tests
- 3 new backend route tests: duplicate blocked, first POST proceeds,
non-user messages bypass
- 5 new frontend `getSendSuppressionReason` unit tests: fresh ref,
reconnecting, duplicate suppressed, different-turn re-ask allowed,
different text allowed
## Checklist
- [x] I have read the [AutoGPT Contributing
Guide](https://github.com/Significant-Gravitas/AutoGPT/blob/master/CONTRIBUTING.md)
- [x] I have performed a self-review of my code
- [x] I have added tests that prove the fix is effective
- [x] I have run `poetry run format` and `pnpm format` + `pnpm lint`
## Summary
Fixes stream disconnection bugs where the UI shows "running" with no
output when users switch between copilot chat sessions. The root cause
is that the old SSE fetch is not aborted and backend XREAD listeners
keep running until timeout when switching sessions.
### Changes
**Frontend (`useCopilotStream.ts`, `helpers.ts`)**
- Call `sdkStop()` on session switch to abort the in-flight SSE fetch
from the old session's transport
- Fire-and-forget `DELETE` to new backend disconnect endpoint so
server-side listeners release immediately
- Store `resumeStream` and `sdkStop` in refs to fix stale closure bugs
in:
- Wake re-sync visibility handler (could call stale `resumeStream` after
tab sleep)
- Reconnect timer callback (could target wrong session's transport)
- Resume effect (captured stale `resumeStream` during rapid session
switches)
**Backend (`stream_registry.py`, `routes.py`)**
- Add `disconnect_all_listeners(session_id)` to stream registry —
iterates active listener tasks, cancels any matching the session
- Add `DELETE /sessions/{session_id}/stream` endpoint — auth-protected,
calls `disconnect_all_listeners`, returns 204
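A rough sketch of the registry method; the internal task bookkeeping shown here is an assumption:

```python
import asyncio


class StreamRegistry:
    """Minimal stand-in; the real registry's listener bookkeeping may differ."""

    def __init__(self) -> None:
        self._listeners: list[tuple[str, asyncio.Task]] = []

    def disconnect_all_listeners(self, session_id: str) -> int:
        cancelled = 0
        for listener_session_id, task in list(self._listeners):
            if listener_session_id == session_id and not task.done():
                task.cancel()   # the XREAD loop exits instead of running to its timeout
                cancelled += 1
        return cancelled
```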
### Why
Reported by multiple team members: when using Autopilot for anything
serious, the frontend loses the SSE connection — particularly when
switching between conversations. The backend completes fine (refreshing
shows full output), but the UI gets stuck showing "running". This is the
worst UX bug we have right now because real users will never know to
refresh.
### How to test
1. Start a long-running autopilot task (e.g., "build a snake game")
2. While it's streaming, switch to a different chat session
3. Switch back — the UI should correctly show the completed output or
resume the stream
4. Verify no "stuck running" state
## Test plan
- [ ] Manual: switch sessions during active stream — no stuck "running"
state
- [ ] Manual: background tab for >30s during stream, return — wake
re-sync works
- [ ] Manual: trigger reconnect (kill network briefly) — reconnects to
correct session
- [ ] Verify: `pnpm lint`, `pnpm types`, `poetry run lint` all pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <zamil.majdy@agpt.co>
## Why
Users have different task complexity needs. Sonnet is fast and cheap for
most queries; Opus is more capable for hard reasoning tasks. Exposing
this as a simple toggle gives users control without requiring
infrastructure complexity.
Opus costs 5× more than Sonnet per Anthropic pricing ($15/$75 vs $3/$15
per M tokens). Rather than adding a separate entitlement gate, the
rate-limit multiplier (5×) ensures Opus turns deplete the daily/weekly
quota proportionally faster — users self-limit via their existing
budget.
## What
- **Standard/Advanced model toggle** in the chat input toolbar (sky-blue
star icon, label only when active — matches the simulation
DryRunToggleButton pattern but visually distinct)
- **`CopilotLlmModel = Literal["standard", "advanced"]`** —
model-agnostic tier names (not tied to Anthropic model names)
- **Backend model resolution**: `"advanced"` → `claude-opus-4-6`,
`"standard"` → `config.model` (currently Sonnet)
- **Rate-limit multiplier**: Opus turns count as 5× in Redis token
counters (daily + weekly limits). Does **not** affect `PlatformCostLog`
or `cost_usd` — those use real API-reported values
- **localStorage persistence** via `Key.COPILOT_MODEL` so preference
survives page refresh
- **`claude_agent_max_budget_usd`** reduced from $15 to $10
## How
### Backend
- `CopilotLlmModel` type added to `config.py`, imported in
routes/executor/service
- `stream_chat_completion_sdk` accepts `model: CopilotLlmModel | None`
- Model tier resolved early in the SDK path; `_normalize_model_name`
strips the OpenRouter provider prefix
- `model_cost_multiplier` (1.0 or 5.0) computed from final resolved
model name, passed to `persist_and_record_usage` → `record_token_usage`
(Redis only)
- No separate LD flag needed — rate limit is the gate
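As a sketch, the tier resolution and the rate-limit multiplier described above (literal values taken from this description; function names are paraphrased):

```python
from typing import Literal

CopilotLlmModel = Literal["standard", "advanced"]

ADVANCED_MODEL = "claude-opus-4-6"
OPUS_RATE_LIMIT_MULTIPLIER = 5.0   # quota weight only; cost_usd keeps real API-reported values


def resolve_model(tier: CopilotLlmModel | None, default_model: str) -> str:
    return ADVANCED_MODEL if tier == "advanced" else default_model


def model_cost_multiplier(resolved_model: str) -> float:
    # Applied only to the Redis daily/weekly token counters, not to PlatformCostLog.
    return OPUS_RATE_LIMIT_MULTIPLIER if "opus" in resolved_model else 1.0
```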
### Frontend
- `ModelToggleButton` component: sky-blue, star icon, "Advanced" label
when active
- `copilotModel` state in `useCopilotUIStore` with localStorage
hydration
- `copilotModelRef` pattern in `useCopilotStream` (avoids recreating
`DefaultChatTransport`)
- Toggle gated behind `showModeToggle && !isStreaming` in `ChatInput`
## Checklist
- [x] Tests added/updated (ModelToggleButton.test.tsx,
service_helpers_test.py, token_tracking_test.py)
- [x] Rate-limit multiplier only affects Redis counters, not cost
tracking
- [x] No new LD flag needed
## Why
When the agent generator LLM builds a graph, it may need to look up
schema details for graph-only blocks like `AgentInputBlock`,
`AgentOutputBlock`, or `OrchestratorBlock`. These blocks are correctly
hidden from regular CoPilot `find_block` results (they can't run
standalone), but that same filter was also preventing the LLM from
discovering them when composing an agent graph.
## What
Added a `for_agent_generation: bool = False` parameter to
`FindBlockTool`.
## How
- `for_agent_generation=false` (default): existing behaviour unchanged —
graph-only blocks are filtered from both UUID lookups and text search
results.
- `for_agent_generation=true`: bypasses `COPILOT_EXCLUDED_BLOCK_TYPES` /
`COPILOT_EXCLUDED_BLOCK_IDS` so the LLM can find and inspect schemas for
INPUT, OUTPUT, ORCHESTRATOR, WEBHOOK, etc. blocks when building agent
JSON.
- MCP_TOOL blocks are still excluded even with
`for_agent_generation=true` (they go through `run_mcp_tool`, not
`find_block`).
## Checklist
- [x] No new dependencies
- [x] Backward compatible (default `false` preserves existing behaviour)
- [x] No frontend changes
## Why
OpenRouter occasionally returns `null` (not `0`) for
`cache_read_input_tokens` and `cache_creation_input_tokens` on the
initial streaming event, before real token counts are available.
Python's `dict.get(key, 0)` only falls back to `0` when the key is
**missing** — when the key exists with a `null` value, `.get(key, 0)`
returns `None`. This causes `TypeError: unsupported operand type(s) for
+=: 'int' and 'NoneType'` in the usage accumulator on the first
streaming chunk from OpenRouter models.
## What
- Replace `.get(key, 0)` with `.get(key) or 0` for all four token fields
in `_run_stream_attempt`
- Add `TestTokenUsageNullSafety` unit tests in `service_helpers_test.py`
## How
Minimal targeted fix — only the four `+=` accumulation lines changed. No
behaviour change for Anthropic-native models (they never emit null
values).
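For illustration, the failure and the fix in a self-contained snippet:

```python
usage = {"cache_read_input_tokens": None, "input_tokens": 12}  # first OpenRouter chunk

total = 0
# Old pattern: .get(key, 0) still returns None when the key exists with a null value.
#   total += usage.get("cache_read_input_tokens", 0)   ->  TypeError: int += NoneType
# New pattern: `or 0` normalises both missing keys and explicit nulls.
total += usage.get("cache_read_input_tokens") or 0
total += usage.get("cache_creation_input_tokens") or 0   # key absent entirely
total += usage.get("input_tokens") or 0
print(total)   # 12
```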
## Checklist
- [x] Tests cover null event, real event, absent keys, and multi-turn
accumulation
- [x] No behaviour change for Anthropic-native models
- [x] No API changes
### Why / What / How
**Why:** The OrchestratorBlock in agent mode makes multiple LLM calls in
a single node execution (one per iteration of the tool-calling loop),
but the executor was only charging the user once per run via
`_charge_usage`. Tools spawned by the orchestrator also bypassed
`_charge_usage` entirely — they execute via `on_node_execution()`
directly without going through the main execution queue, producing free
internal block executions.
**What:**
1. Charge `base_cost * (llm_call_count - 1)` extra credits after the
orchestrator block completes — covers the additional iterations beyond
the first (which is already paid for upfront).
2. Charge user credits for tools executed inside the orchestrator, the
same way queue-driven node executions are charged.
**How:**
**1. Per-iteration LLM charging**
- New `Block.extra_runtime_cost(execution_stats)` virtual method
(default returns `0`)
- `OrchestratorBlock` overrides it to return `max(0, llm_call_count -
1)`
- New `resolve_block_cost` free function in `billing.py` centralises the
block-lookup + cost-calculation pattern (used by both `charge_usage` and
`charge_extra_runtime_cost`)
- New `billing.charge_extra_runtime_cost(node_exec, extra_count)`
function that debits `base_cost * min(extra_count,
_MAX_EXTRA_RUNTIME_COST)` via `spend_credits()`, running synchronously
in a thread-pool worker
- After `_on_node_execution` completes with COMPLETED status,
`on_node_execution` calls `charge_extra_runtime_cost` if
`extra_runtime_cost > 0` and not a dry run
- `InsufficientBalanceError` from post-hoc charging is treated as a
billing leak: logged at ERROR with `billing_leak: True` structured
fields, user is notified via `_handle_insufficient_funds_notif`, but the
run status stays COMPLETED (work already done)
**2. Tool execution charging**
- New public async `ExecutionProcessor.charge_node_usage(node_exec)`
wrapper around `charge_usage` (with `execution_count=0` to avoid
inflating execution-tier counters); also calls `_handle_low_balance`
internally
- `OrchestratorBlock._execute_single_tool_with_manager` calls
`charge_node_usage` after successful tool execution (skipped for dry
runs and failed/cancelled tool runs)
- Tool cost is added to the orchestrator's `extra_cost` so it shows up
in graph stats display
- `InsufficientBalanceError` from tool charging is re-raised (not
downgraded to a tool error) in all three execution paths:
`_execute_single_tool_with_manager`, `_agent_mode_tool_executor`, and
`_execute_tools_sdk_mode`
**3. Billing module extraction**
- All billing logic extracted from `ExecutionProcessor` into
`backend/executor/billing.py` as free functions — keeps `manager.py` and
`service.py` focused on orchestration
- `ExecutionProcessor` retains thin delegation methods
(`charge_node_usage`, `charge_extra_runtime_cost`) for backward
compatibility with blocks that call them
**4. Structured error signalling**
- Tool error detection replaced brittle `text.startswith("Tool execution
failed:")` string check with a structured `_is_error` boolean field on
the tool response dict
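A condensed sketch of the new hook and the post-hoc charge math, following the names in this description (the cap value shown is illustrative, and `charge_extra_runtime_cost` here only computes the debit amount):

```python
_MAX_EXTRA_RUNTIME_COST = 50   # illustrative cap; the real constant lives in billing.py


class Block:
    def extra_runtime_cost(self, execution_stats: dict) -> int:
        return 0               # default: nothing beyond the upfront per-run charge


class OrchestratorBlock(Block):
    def extra_runtime_cost(self, execution_stats: dict) -> int:
        # The first LLM iteration is already paid for upfront; charge only the extras.
        return max(0, execution_stats.get("llm_call_count", 0) - 1)


def charge_extra_runtime_cost(base_cost: int, extra_count: int) -> int:
    """Amount to debit via spend_credits() after a COMPLETED, non-dry-run execution."""
    return base_cost * min(extra_count, _MAX_EXTRA_RUNTIME_COST)
```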
### Changes
- `backend/blocks/_base.py`: Add
`Block.extra_runtime_cost(execution_stats) -> int` virtual method
(default `0`)
- `backend/blocks/orchestrator.py`: Override `extra_runtime_cost`; add
tool charging in `_execute_single_tool_with_manager`; add
`InsufficientBalanceError` re-raise carve-outs in all three execution
paths; replace string-prefix error detection with `_is_error` flag
- `backend/executor/billing.py` (new): Free functions
`resolve_block_cost`, `charge_usage`, `charge_extra_runtime_cost`,
`charge_node_usage`, `handle_post_execution_billing`,
`clear_insufficient_funds_notifications` — extracted from
`ExecutionProcessor`
- `backend/executor/manager.py`: Thin delegation to `billing.*`; remove
~500 lines of billing methods from `ExecutionProcessor`
- `backend/data/credit.py`: Update lazy import source from `manager` to
`billing`
- `backend/blocks/test/test_orchestrator.py`: Add `charge_node_usage`
mock + assertion
- `backend/blocks/test/test_orchestrator_dynamic_fields.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_responses_api.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_per_iteration_cost.py`: New
test file — `extra_runtime_cost` hook, `charge_extra_runtime_cost` math
(positive/zero/negative/capped/zero-cost/block-not-found/IBE),
`charge_node_usage` delegation, `on_node_execution` gate conditions
(COMPLETED/FAILED/zero-charges/dry-run/IBE), tool charging guards
(dry-run/failed/cancelled/IBE propagation)
### Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Run `poetry run pytest
backend/blocks/test/test_orchestrator_per_iteration_cost.py`
- [ ] Verify on dev: an OrchestratorBlock run with
`agent_mode_max_iterations=5` and 5 actual iterations is charged 5x the
base cost
- [ ] Verify tool executions inside the orchestrator are charged
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: majdyz <majdy.zamil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <majdyz@users.noreply.github.com>
get_proration_credit_cents now checks user.stripe_customer_id before calling into
Stripe, matching the same pattern applied to cancel_stripe_subscription. Admin-granted
paid-tier users without a Stripe record previously triggered customer creation on every
billing page load; they now get 0 immediately.
Also adds tests: no_customer_id fast-path for cancel, and three proration scenarios
(zero cost, no customer id, active subscription with proration calculation).
- Guard cancel_stripe_subscription with stripe_customer_id check to prevent
creating orphaned Stripe customers for users who never had a paid subscription
- Fix upgrade dialog showing raw tier key (PRO) instead of human-readable label (Pro)
- Fix misleading footer text that contradicted downgrade-to-FREE behaviour
(downgrades to Free are scheduled at period end, not immediate)
Adds the missing get_proration_credit_cents mock to the three
GET /credits/subscription route tests so they don't attempt a live
Stripe/DB call in unit-test context, and extends assertions to cover
the proration_credit_cents field in the response.
## Why
During a live production session, the copilot lost all conversation
context mid-session. The model stated \"I don't see any implementation
plan in our conversation\" despite 9 prior turns of context. Three
compounding bugs:
**Bug 1 — Self-perpetuating upload gate:** When `restore_cli_session`
fails on a T2+ turn, `state.use_resume=False`. The old gate `and (not
has_history or state.use_resume)` then skips the CLI session upload —
even though the T1 file may exist. Each turn without `use_resume` skips
upload → next turn can't restore → also skips → etc.
**Bug 2 — Blunt message-count cap on retries:** On `prompt-too-long`,
`_reduce_context` retried 3× but rebuilt the same oversized query each
time (transcript was empty, so all 3 attempts were identical). The
`max_fallback_messages` count-cap was a blunt instrument — it threw away
middle turns blindly instead of letting the compressor summarize
intelligently.
**Bug 3 — Gap-empty path returned zero context:** When a transcript
exists but no `--resume` (CLI session unavailable), and the gap is empty
(transcript is current), the code fell through to `return
current_message, False` — the model got no history at all.
## What
1. **Remove upload gate** — upload is attempted after every successful
turn; `upload_cli_session` silently skips when the file is absent.
2. **`transcript_msg_count` set on `cli_restored=False`** — enables the
gap path on the very next turn without waiting for a full upload cycle.
3. **Token-budget compression instead of message-count cap** —
`_reduce_context` now returns `target_tokens` (50K → 15K across
retries). `compress_context` decides what to drop via LLM summarize →
content truncate → middle-out delete → first/last trim. More context
preserved at any budget vs. blindly slicing the list.
4. **Fix gap-empty case** — when transcript is current but `--resume`
unavailable, fall through to full-session compression with the token
budget instead of returning no context.
5. **Transcript seeding after fallback** — after `use_resume=False` with
no stored transcript, compress DB messages to 30K tokens and serialise
as JSONL into `transcript_builder`. Next turn uses the gap path (inject
only new messages) instead of re-compressing full history. Only fires
once per broken session (`not transcript_content` guard).
6. **Seeding guard** — seeding skips when `skip_transcript_upload=True`
(avoids wasted compression work when the result won't be saved).
7. **Structured logging** — INFO/WARNING at every branch of
`_build_query_message` with path variables, context_bytes, compression
results.
## How
**Upload gate** (`sdk/service.py` finally-block): removed `and (not
has_history or state.use_resume)`; added INFO log showing
`use_resume`/`has_history` before upload.
**`transcript_msg_count`**: set from `dl.message_count` in the
`cli_restored=False` branch.
**`_build_query_message`**: `max_fallback_messages: int | None` →
`target_tokens: int | None`; gap-empty case falls through to
full-session compression rather than returning bare message.
**`_reduce_context`**: `_FALLBACK_MSG_LIMITS` → `_RETRY_TARGET_TOKENS =
(50_000, 15_000)`; returns `ReducedContext.target_tokens`.
**`_compress_messages` / `_run_compression`**: both now accept
`target_tokens: int | None` and thread it through to `compress_context`.
**Seeding block**: added `not skip_transcript_upload` guard; uses
`_SEED_TARGET_TOKENS = 30_000` so the seeded JSONL is always compact
enough to pass `validate_transcript`.
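A sketch of the retry-budget shape described above; the dataclass fields and the attempt indexing are paraphrased, not the exact implementation:

```python
from dataclasses import dataclass

_RETRY_TARGET_TOKENS = (50_000, 15_000)   # retry 1 → 50K budget, retry 2 → 15K
_SEED_TARGET_TOKENS = 30_000


@dataclass
class ReducedContext:
    target_tokens: int | None


def _reduce_context(attempt: int) -> ReducedContext:
    """Each prompt-too-long retry hands the compressor a smaller token budget
    instead of blindly slicing the message list."""
    idx = min(attempt, len(_RETRY_TARGET_TOKENS) - 1)
    return ReducedContext(target_tokens=_RETRY_TARGET_TOKENS[idx])
```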
## Checklist
- [x] `poetry run format` passes
- [x] No new lint errors introduced (pre-existing pyright errors
unrelated)
- [x] Tests added for `attempt` parameter and `target_tokens` in
`_reduce_context`
Downgrading between paid tiers (e.g. BUSINESS→PRO) previously fell through to
the Checkout Session path, which would cancel the existing subscription immediately
without crediting unused time on the old plan.
For paid→paid tier changes, modify the existing Stripe subscription in-place
via stripe.Subscription.modify with proration_behavior="create_prorations".
Stripe handles the proration automatically (crediting unused time + charging
the pro-rated difference), and the existing customer.subscription.updated
webhook fires to update the DB tier — no new Checkout flow needed.
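A sketch of the in-place modification; the Stripe calls are the real API, while the single-item price lookup is an assumption:

```python
import stripe


def modify_stripe_subscription_for_tier(subscription_id: str, new_price_id: str) -> None:
    sub = stripe.Subscription.retrieve(subscription_id)
    item_id = sub["items"]["data"][0]["id"]   # assumes a single-item subscription
    stripe.Subscription.modify(
        subscription_id,
        items=[{"id": item_id, "price": new_price_id}],
        # Stripe credits unused time on the old plan and charges the pro-rated difference;
        # the existing customer.subscription.updated webhook then updates the DB tier.
        proration_behavior="create_prorations",
    )
```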
Adds unit tests for modify_stripe_subscription_for_tier and API-level tests for
the paid→paid code path in update_subscription_tier.
Pass the Stripe invoice ID as transaction_key to _add_transaction in
handle_subscription_payment_failure. This prevents double-charging user
credits when Stripe retries the invoice.payment_failed webhook after a
transient failure (e.g. if stripe.Invoice.pay raises a network error).
_add_transaction silently skips insertion when the key already exists,
so subsequent retries deduct nothing while still attempting to mark the
invoice as paid on the Stripe side.
Adds test to verify the idempotency key is set correctly.
When handle_subscription_payment_failure successfully deducts the invoice
amount from a user's credit balance, Stripe's dunning system would still
retry the failed invoice automatically on its schedule, causing repeated
deductions per billing period. Fix by calling stripe.Invoice.pay() after
the successful balance deduction so Stripe marks the invoice as settled and
stops retrying. Invoice.pay failures are logged as warnings but do not
propagate, matching the existing best-effort pattern for cleanup operations.
Add two tests covering the pay call and the swallowed-error path.
The implementation schedules cancellation at period end via
stripe.Subscription.modify(cancel_at_period_end=True) rather than
stripe.Subscription.cancel. Update the four affected test cases to
patch the correct API call and assert the right arguments.
When a user's paid tier was admin-granted (no associated Stripe subscription),
cancel_stripe_subscription found nothing to cancel and returned. The API then
returned 200 without updating the DB tier, because the design relied on a
customer.subscription.deleted webhook — which would never fire.
Fix: have _cancel_customer_subscriptions return the count of subs cancelled and
cancel_stripe_subscription return a bool. When False (no subs found), the endpoint
now calls set_subscription_tier directly, mirroring the payment-disabled code path.
Added test_update_subscription_tier_free_no_stripe_subscription to cover this case.
- Regenerated openapi.json to include `proration_credit_cents` field added
to `SubscriptionStatusResponse` (was missing, causing check-API-types CI failure)
- Updated SubscriptionTierSection tests to go through the Confirm Upgrade
dialog before asserting mutate/redirect/error behaviour — previously tests
clicked "Upgrade to Pro" and expected a direct mutate call, but the current
code shows a confirmation dialog first (3 tests were failing because of this)
- Added `proration_credit_cents: 0` to `makeSubscription()` helper to match
the now-required field in the generated TypeScript type
- Add get_proration_credit_cents() — fetches active Stripe sub, calculates
unused portion of current billing period in cents
- Expose proration_credit_cents in GET /credits/subscription response
- Show upgrade confirmation dialog: "Your unused $X.XX will be added to
your balance" before redirecting to Stripe Checkout
- Update downgrade dialog copy to reflect cancel-at-period-end behaviour
Cancelling the subscription triggered customer.subscription.deleted
which then downgraded the user to FREE despite the balance having
already covered the invoice. Now the sub is left intact when balance
covers the cost — only cancelled (and tier downgraded) when balance
is insufficient.
User keeps their paid tier for the remainder of the billing period
they already paid for. DB tier is no longer updated immediately on
downgrade — customer.subscription.deleted webhook fires at period end
and downgrades to FREE then.
### Why / What / How
This PR simplifies frontend PR validation to one Playwright E2E suite,
moves redundant page-level browser coverage into Vitest integration
tests, and switches Playwright auth to deterministic seeded QA accounts.
It also folds in the follow-up fixes that came out of review and CI:
lint cleanup, CodeQL feedback, PR-local type regressions, and the flaky
Library run helper.
The approach is:
- keep Playwright focused on real browser and cross-page flows that
integration tests cannot prove well
- keep page-level render and mocked API behavior in Vitest
- remove the old PR-vs-full Playwright split from CI and run one
deterministic PR suite instead
- seed reusable auth states for fixed QA users so the browser suite is
less flaky and faster to bootstrap
### Changes 🏗️
- Removed the workflow indirection that selected different Playwright
suites for PRs vs other events
- Standardized frontend CI on a single command: `pnpm test:e2e:no-build`
- Consolidated the PR-gating Playwright suite around these happy-path
specs:
- `auth-happy-path.spec.ts`
- `settings-happy-path.spec.ts`
- `api-keys-happy-path.spec.ts`
- `builder-happy-path.spec.ts`
- `library-happy-path.spec.ts`
- `marketplace-happy-path.spec.ts`
- `publish-happy-path.spec.ts`
- `copilot-happy-path.spec.ts`
- Added the missing browser-only confidence checks to the PR suite:
- settings persistence across reload and re-login
- API key create, copy, and revoke
- schedule `Run now` from Library
- activity dropdown visibility for a real run
- creator dashboard verification after publish submission
- Increased Playwright CI workers from `6` to `8`
- Migrated redundant page-level browser coverage into Vitest
integration/unit tests where appropriate, including marketplace,
profile, settings, API keys, signup behavior, agent dashboard row
behavior, agent activity, and utility/auth helpers
- Seeded deterministic Playwright QA users in
`backend/test/e2e_test_data.py` and reused auth states from
`frontend/src/tests/credentials/`
- Addressed CodeQL findings by replacing insecure randomness in test auth
utilities
- Fixed frontend lint issues in marketplace image rendering
- Fixed PR-local type regressions introduced during test migration
- Stabilized the Library E2E run helper to support the current Library
action states: `Setup your task`, `New task`, `Rerun task`, and `Run
now`
- Removed obsolete Playwright specs and the temporary migration planning
doc once the consolidation was complete
- Reverted unintended non-test backend source changes; only backend test
fixture changes remain in scope
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `pnpm lint`
- [x] `pnpm types`
- [x] `pnpm test:unit`
- [x] `pnpm exec playwright test --list`
- [x] `pnpm test:e2e:no-build` locally
- [ ] PR CI green after the latest push
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Notes:
- Current local Playwright run on this branch: `28 passed`, `0 flaky`,
`0 retries`, `3m 25s`.
- Latest Codecov report on this PR showed overall coverage `63.14% ->
63.61%` (`+0.47%`), with frontend coverage up `+2.32%` and frontend E2E
coverage up `+2.10%`.
- The backend change in this PR is limited to deterministic E2E test
data setup in `backend/test/e2e_test_data.py`.
- Playwright retries remain enabled in CI; this branch does not add
fail-on-flaky behavior.
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Zamil Majdy <majdy.zamil@gmail.com>
- Use user_id="system" for global LD flag lookups (price IDs don't need user context)
- Skip Supabase lookup silently for non-UUID keys in _fetch_user_context_data
- Block paid tier changes when ENABLE_PLATFORM_PAYMENT is disabled
- Add invoice.payment_failed handler: deduct from balance or downgrade to FREE
- Hide upgrade/downgrade buttons in UI when payment flag is disabled