mirror of
https://github.com/Significant-Gravitas/AutoGPT.git
synced 2026-04-30 03:00:41 -04:00
2f8d2e10daa86547bec8b68279ce7f6fa975e29a
8421 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
2f8d2e10da |
fix(backend/copilot): clear inflight tool-call buffer at top of outer finally
CodeRabbit review on #12871 flagged that `session.clear_inflight_tool_calls()` ran after usage persistence, session upsert and transcript upload in the baseline turn `finally`, so if any of those awaited cleanup steps raised, the process-local scratch buffer would leak into the next turn — the guide-read guard would observe a phantom in-flight call and skip its gate. Move the clear to the very first statement of the outer `finally` so it runs unconditionally once tool execution has ended, before any failure-prone cleanup. Keep the documentation pointing at the observed failure mode. |
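A minimal, self-contained sketch of the ordering this fix describes — the session type and the failing `persist_usage` step are illustrative stand-ins, not the repo's actual cleanup code:

```python
import asyncio

class Session:
    """Stand-in for ChatSession; only the scratch buffer matters here."""
    def __init__(self) -> None:
        self.inflight: set[str] = {"create_agent"}

    def clear_inflight_tool_calls(self) -> None:
        self.inflight.clear()

async def persist_usage(session: Session) -> None:
    raise RuntimeError("usage persistence failed")  # simulate a failing cleanup step

async def run_turn(session: Session) -> None:
    try:
        pass  # ... tool-call loop for the turn ...
    finally:
        # Clear the scratch buffer first, unconditionally — if a later cleanup
        # step raises, the next turn must not see phantom in-flight tool calls.
        session.clear_inflight_tool_calls()
        await persist_usage(session)  # may raise; the buffer is already clean

async def main() -> None:
    session = Session()
    try:
        await run_turn(session)
    except RuntimeError:
        pass
    assert session.inflight == set()  # cleared despite the failed cleanup step

asyncio.run(main())
```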
||
|
|
4dc3d0c34c |
fix(backend/copilot): correct fast_advanced_model to OpenRouter's claude-opus-4.7 route
CodeRabbit review on #12871 flagged that the config default and pinned-default test used `anthropic/claude-opus-4-7` (hyphenated), but OpenRouter's actual model ID for Opus 4.7 is `anthropic/claude-opus-4.7` (dot-separated, per https://openrouter.ai/anthropic/claude-opus-4.7). The hyphenated form would 404 at runtime the first time anyone toggles the advanced tier on the baseline path. Fix the default in both paths (`fast_advanced_model`, `thinking_advanced_model`) and update the test assertion to match. Also add a regression test pinning the three legacy env-var aliases (`CHAT_MODEL`, `CHAT_ADVANCED_MODEL`, `CHAT_FAST_MODEL`) to the new 2x2 fields so deployments that set the pre-split names continue to override the intended cell. |
||
|
|
9cfaaba3b6 |
fix(backend/copilot): anchor Kimi reasoning-route match to reject hakimi false positives
Sentry review on #12871 flagged the `"kimi" in lowered` substring check in `_is_reasoning_route` as too broad — a hypothetical `some-provider/hakimi-large` would match and get a `reasoning` payload appended to its request. Some providers silently drop unknown fields, others 400, so this is a correctness-not-just-tidy fix. Replace the substring check with an anchored match: accept the `moonshotai/` provider prefix, or a bare `kimi-` model id (either at string start or immediately after a `/` provider prefix). `claude` / `anthropic` branches unchanged. Adds regression coverage for `hakimi`, `some-provider/hakimi-large`, `akimi-7b` and keeps the existing Kimi variants passing. |
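A minimal sketch of the anchored matching described above (an illustrative reimplementation of the Kimi branch, not the repo's `_is_reasoning_route`):

```python
import re

# Accept the moonshotai/ provider prefix, or a bare "kimi-" model id either at
# the start of the string or immediately after a "/" provider prefix.
_KIMI_ROUTE = re.compile(r"^(moonshotai/|(?:[\w.-]+/)?kimi-)")

def is_kimi_route(model: str) -> bool:
    return bool(_KIMI_ROUTE.match(model.lower()))

assert is_kimi_route("moonshotai/kimi-k2.6")
assert is_kimi_route("kimi-k2.6")
assert is_kimi_route("some-provider/kimi-k2-thinking")
assert not is_kimi_route("hakimi")                    # regression cases from the commit
assert not is_kimi_route("some-provider/hakimi-large")
assert not is_kimi_route("akimi-7b")
```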
||
|
|
f5d3a6e606 |
Merge branch 'dev' into feat/copilot-kimi-k2-fast-model
Resolved require_guide_read: kept dev's builder_graph_id bypass AND our in-turn announcement helper (session.has_tool_been_called_this_turn replaces the now-removed _guide_read_in_session). Updated agent_guide_gate_test._session_with_messages to use real ChatSession.new(..., builder_graph_id=...) so it exercises both the inflight buffer and the builder bypass path. |
||
|
|
a098f01bd2 |
feat(builder): AI chat panel for the flow builder (#12699)
### Why The flow builder had no AI assistance. Users had to switch to a separate Copilot session to ask about or modify the agent they were looking at, and that session had no context on the graph — so the LLM guessed, or the user had to describe the graph by hand. ### What An AI chat panel anchored to the `/build` page. Opens with a chat-circle button (bottom-right), binds to the currently-opened agent, and offers **only** two tools: `edit_agent` and `run_agent`. Per-agent session is persisted server-side, so a refresh resumes the same conversation. Gated behind `Flag.BUILDER_CHAT_PANEL` (default off; `NEXT_PUBLIC_FORCE_FLAG_BUILDER_CHAT_PANEL=true` to enable locally). ### How **Frontend — new**: - `(platform)/build/components/BuilderChatPanel/` — panel shell + `useBuilderChatPanel.ts` coordinator. Renders the shared Copilot `ChatMessagesContainer` + `ChatInput` (thought rendering, pulse chips, fast-mode toggle — all reused, no parallel chat stack). Auto-creates a blank agent when opened with no `flowID`. Listens for `edit_agent` / `run_agent` tool outputs and wires them to the builder in-place: edit → `flowVersion` URL param + canvas refetch; run → `flowExecutionID` URL param → builder's existing execution-follow UI opens. **Frontend — touched (minimal)**: - `copilot/components/CopilotChatActionsProvider` — new `chatSurface: "copilot" | "builder"` flag so cards can suppress "Open in library" / "Open in builder" / "View Execution" buttons when the chat is the builder panel (you're already there). - `copilot/tools/RunAgent/components/ExecutionStartedCard` — title is now status-aware (`QUEUED → "Execution started"`, `COMPLETED → "Execution completed"`, `FAILED → "Execution failed"`, etc.). - `build/components/FlowEditor/Flow/Flow.tsx` — mount the panel behind the feature flag. **Backend — new**: - `copilot/builder_context.py` — the builder-session logic module. Holds the tool whitelist (`edit_agent`, `run_agent`), the permissions resolver, the session-long system-prompt suffix (graph id/name + full agent-building guide — cacheable across turns), and the per-turn `<builder_context>` prefix (live version + compact nodes/links snapshot). - `copilot/builder_context_test.py` — covers both builders, ownership forwarding, and cap behavior. **Backend — touched**: - `api/features/chat/routes.py` — `CreateSessionRequest` gains `builder_graph_id`. When set, the endpoint routes through `get_or_create_builder_session` (keyed on `user_id`+`graph_id`, with a graph-ownership check). No new route; the former `/sessions/builder` is folded into `POST /sessions`. - `copilot/model.py` — `ChatSessionMetadata.builder_graph_id`; `get_or_create_builder_session` helper. - `data/graph.py` — `GraphSettings.builder_chat_session_id` (new typed field; stores the builder-chat session pointer per library agent). - `api/features/library/db.py` — `update_library_agent_version_and_settings` preserves `builder_chat_session_id` across graph-version bumps. - `copilot/tools/edit_agent.py`, `run_agent.py` — builder-bound guard: default missing `agent_id` to the bound graph, reject any other id. `run_agent` additionally inlines `node_executions` into dry-run responses so the LLM can inspect per-node status in the same turn instead of a follow-up `view_agent_output`. `wait_for_result` docs now explain the two dispatch modes. - `copilot/tools/helpers.py::require_guide_read` — bypassed for builder-bound sessions (the guide is already in the system-prompt suffix). 
- `copilot/tools/agent_generator/pipeline.py` + `tools/models.py` — `AgentSavedResponse.graph_version` so the frontend can flip `flowVersion` to the newly-saved version. - `copilot/baseline/service.py` + `sdk/service.py` — inject the builder context suffix into the system prompt and the per-turn prefix into the current user message. - `blocks/_base.py` — `validate_data(..., exclude_fields=)` so dry-run can bypass credential required-checks for blocks that need creds in normal mode (OrchestratorBlock). `blocks/perplexity.py` override signature matches. - `executor/simulator.py` — OrchestratorBlock dry-run iteration cap `1 → min(original, 10)` so multi-role patterns (Advocate/Critic) actually close the loop; `manager.py` synthesizes placeholder creds in dry-run so the block's schema validation passes. ### Session lookup The builder-chat session pointer lives on `LibraryAgent.settings.builder_chat_session_id` (typed via `GraphSettings`). `get_or_create_builder_session` reads/writes it through `library_db().get_library_agent_by_graph_id` + `update_library_agent(settings=...)` — no raw SQL or JSON-path filter. Ownership is enforced by the library-agent query's `userId` filter. The per-session builder binding still lives on `ChatSession.metadata.builder_graph_id` (used by `edit_agent`/`run_agent` guards and the system-prompt injection). ### Scope footnotes - Feature flag defaults **false**. Rollout gate lives in LaunchDarkly. - No schema migration required: `builder_chat_session_id` slots into the existing `LibraryAgent.settings` JSON column via the typed `GraphSettings` model. - Commits that address review / CI cycles are interleaved with feature commits — see the commit log for the per-change rationale. ### Test plan - [x] `pnpm test:unit` + backend `poetry run test` for new and touched modules - [x] Agent-browser pass: panel toggle / auto-create / real-time edit re-render / real-time exec URL subscribe / queue-while-streaming / cross-graph reset / hard-refresh session persist - [x] Codecov patch ≥ 80% on diff --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
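A minimal sketch of the builder-bound guard described for `edit_agent` / `run_agent` above — `BuilderBoundError` and the function shape are illustrative stand-ins, not the repo's actual helpers:

```python
class BuilderBoundError(ValueError):
    """Raised when a builder-bound tool call targets a different agent."""

def resolve_agent_id(requested_id: str | None, builder_graph_id: str | None) -> str | None:
    if builder_graph_id is None:          # not a builder session: pass through
        return requested_id
    if requested_id is None:              # default missing agent_id to the bound graph
        return builder_graph_id
    if requested_id != builder_graph_id:  # builder sessions may only touch their graph
        raise BuilderBoundError(
            f"This session is bound to agent {builder_graph_id}; got {requested_id}"
        )
    return requested_id

assert resolve_agent_id(None, "graph-123") == "graph-123"
assert resolve_agent_id("graph-123", "graph-123") == "graph-123"
assert resolve_agent_id("other-graph", None) == "other-graph"
```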
||
|
|
627b52048b |
fix(backend/copilot): announce in-flight tool calls to unstick guide guard
Symptom (session 0d83f15c on Kimi K2.6): the agent called `get_agent_building_guide`, got the guide, retried `create_agent` — and the `require_guide_read` gate fired "Call get_agent_building_guide first" anyway, looping indefinitely.
Root cause: baseline path buffers assistant rows with their `tool_calls` into `state.session_messages` (a scratch list on `_BaselineStreamState`) during the tool-call loop, and only flushes into `session.messages` at turn end. So when the second tool runs within the *same* turn, `_guide_read_in_session` — which scans `session.messages` — sees no guide call and fires the gate. SDK path didn't hit this because it mirrors tool calls straight into `ctx.session.messages`; Kimi's aggressive tool-call chaining within one turn was what surfaced the bug on baseline. Not Kimi-specific (any baseline model that calls guide + create_agent in one turn would hit it).
Fix: add an in-flight announcement buffer on `ChatSession`.
* `ChatSession._inflight_tool_calls: set[str]` (PrivateAttr, never serialised).
* `announce_inflight_tool_call` called by `_baseline_tool_executor` the moment a tool is dispatched, before it runs.
* `has_tool_been_called_this_turn` folds the in-flight set into the historical `messages` scan; `require_guide_read` now calls this instead of the messages-only helper.
* `clear_inflight_tool_calls` fired in the baseline turn's finally block, right before `upsert_chat_session`, so next turn starts with a clean buffer.
Deliberately didn't mirror the row into `session.messages` directly — `_baseline_conversation_updater` appends a fully-formed assistant+tool_calls row at round end, so an inline mirror would duplicate. The scratch set keeps the announcement separate from durable history.
New tests: in-flight announcement lets gate pass within same turn; clear restores the gate for next turn; PrivateAttr never leaks into `model_dump`. Existing gate tests migrated from MagicMock(spec=ChatSession) to real ChatSession instances since the guard now calls the new helper. |
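A minimal sketch of the scratch-buffer shape described above, assuming pydantic v2; the real `ChatSession` carries many more fields and richer message types:

```python
from pydantic import BaseModel, Field, PrivateAttr

class ChatSession(BaseModel):
    messages: list[dict] = Field(default_factory=list)
    _inflight_tool_calls: set[str] = PrivateAttr(default_factory=set)

    def announce_inflight_tool_call(self, tool_name: str) -> None:
        self._inflight_tool_calls.add(tool_name)

    def has_tool_been_called_this_turn(self, tool_name: str) -> bool:
        # fold the in-flight set into the historical messages scan
        in_history = any(
            tc.get("name") == tool_name
            for m in self.messages
            for tc in m.get("tool_calls") or []
        )
        return in_history or tool_name in self._inflight_tool_calls

    def clear_inflight_tool_calls(self) -> None:
        self._inflight_tool_calls.clear()

session = ChatSession()
session.announce_inflight_tool_call("get_agent_building_guide")
assert session.has_tool_been_called_this_turn("get_agent_building_guide")
assert "_inflight_tool_calls" not in session.model_dump()  # PrivateAttr never serialised
session.clear_inflight_tool_calls()
assert not session.has_tool_been_called_this_turn("get_agent_building_guide")
```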
||
|
|
da5420fa07 |
fix(backend/copilot): coalesce reasoning deltas to unfreeze Kimi streams
Observed symptom: copilot page frozen for ~700 s on a session using the new Kimi K2.6 default. Redis `XLEN chat:stream:...` showed 4,677 reasoning-delta chunks in a single turn vs ~28 for peer Sonnet sessions. Each chunk was one Redis xadd + one SSE frame + one React re-render of the non-virtualised chat list, which paint-stormed the main thread until the stream ended. OpenRouter's Kimi endpoint tokenises reasoning at a much finer grain than Anthropic, so the 1:1 chunk→`StreamReasoningDelta` mapping in BaselineReasoningEmitter blew up on the wire while the same code was fine for Sonnet.
Fix: coalesce `StreamReasoningDelta` emissions in the emitter.
* First chunk in a block still emits Start + Delta atomically so the Reasoning collapse renders immediately.
* Subsequent chunks buffer into `_pending_delta` and flush once either the char-size (`_COALESCE_MIN_CHARS=32`) or time (`_COALESCE_MAX_INTERVAL_MS=40`) threshold trips. `close()` always drains the tail before emitting `StreamReasoningEnd`.
* DB persistence stays per-chunk — `_current_row.content` updates on every delta independent of the coalesce window, so a crash mid-turn still persists the full reasoning-so-far.
* Thresholds are `__init__` kwargs so tests can disable coalescing for deterministic state-machine assertions.
Net effect: ~4,700 → ~150 events per turn (30x), well under the browser's paint-storm threshold; reasoning still appears live at ~25 Hz (40 ms window) which is below human perception.
Pre-existing issues flagged for follow-up (out of scope — the freeze is gone without them):
* `ChatMessagesContainer` has no React.memo per message and no list virtualisation — a very long session still re-renders every prior message on each new chunk.
* `routes.py:1163-1171` replays from `0-0` with `count=1000` on every SSE reconnect (6 reconnects observed), duplicating up to 6,000 chunks. Proper Last-Event-ID support requires threading Redis stream message IDs through every SSE event + a frontend handshake — material refactor deferred to a dedicated PR. |
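A minimal sketch of the coalescing behaviour described above, assuming a flat `emit(kind, text)` callback instead of the repo's `StreamReasoning*` event classes:

```python
import time

class CoalescingReasoningEmitter:
    def __init__(self, emit, min_chars: int = 32, max_interval_ms: int = 40):
        self._emit = emit
        self._min_chars = min_chars
        self._max_interval_ms = max_interval_ms
        self._pending = ""
        self._last_flush = 0.0
        self._started = False

    def on_delta(self, text: str) -> None:
        if not self._started:
            self._started = True
            self._emit("reasoning-start", "")
            self._emit("reasoning-delta", text)  # first chunk renders immediately
            self._last_flush = time.monotonic()
            return
        self._pending += text
        overdue = (time.monotonic() - self._last_flush) * 1000 >= self._max_interval_ms
        if len(self._pending) >= self._min_chars or overdue:
            self._flush()

    def _flush(self) -> None:
        if self._pending:
            self._emit("reasoning-delta", self._pending)
            self._pending = ""
        self._last_flush = time.monotonic()

    def close(self) -> None:
        self._flush()                        # drain the tail...
        if self._started:
            self._emit("reasoning-end", "")  # ...before closing the block

events: list[tuple[str, str]] = []
emitter = CoalescingReasoningEmitter(lambda kind, text: events.append((kind, text)), min_chars=8)
for token in ["Think", "ing ", "about ", "the ", "answer"]:
    emitter.on_delta(token)
emitter.close()
assert events[0][0] == "reasoning-start" and events[-1][0] == "reasoning-end"
```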
||
|
|
59273fe6a0 |
fix(frontend): forward sentry-trace and baggage across API proxy (#12835)
### Why / What / How
**Why:** Every request that went through Next's rewrite proxy broke
distributed tracing. The browser Sentry SDK emitted `sentry-trace` and
`baggage`, but `createRequestHeaders` only forwarded impersonation + API
key, so the backend started a disconnected transaction. The frontend →
backend lineage never appeared in Sentry. Same gap on
direct-from-browser requests: the custom mutator never attached the
trace headers itself, so even non-proxied paths lost the link.
**What:**
- **Server side:** forward `sentry-trace` and `baggage` from
`originalRequest.headers` alongside the existing impersonation/API key
forwarding.
- **Client side:** the custom mutator pulls trace data via
`Sentry.getTraceData()` and attaches it to outgoing headers when running
on the client.
**How:** Inline additions — no new observability module, no new
dependencies beyond `@sentry/nextjs` which the frontend already uses for
Sentry init.
### Changes 🏗️
- `src/lib/autogpt-server-api/helpers.ts` — forward `sentry-trace` +
`baggage` in `createRequestHeaders`.
- `src/app/api/mutators/custom-mutator.ts` — import `@sentry/nextjs`,
attach `Sentry.getTraceData()` on client-side requests.
- `src/app/api/mutators/__tests__/custom-mutator.test.ts` — three new
tests: trace-data present, trace-data empty, server-side no-op.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] `pnpm vitest run
src/app/api/mutators/__tests__/custom-mutator.test.ts` passes (6/6
locally)
- [x] `pnpm format && pnpm lint` clean
- [x] `pnpm types` clean for touched files (pre-existing unrelated type
errors on dev are untouched)
- [ ] In a local session with Sentry enabled, a `/copilot` chat turn
produces a distributed trace that spans frontend transaction → backend
transaction (single trace ID in Sentry)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Low Risk**
> Low risk: header-only changes to request construction for
observability, with added tests; primary risk is unintended header
propagation affecting upstream/proxy behavior.
>
> **Overview**
> Restores **Sentry distributed tracing continuity** for
frontend→backend calls by propagating `sentry-trace`/`baggage` headers.
>
> On the client, `customMutator` now reads `Sentry.getTraceData()` and
attaches string trace headers to outgoing requests (guarded for
server-side and older Sentry builds). On the server/proxy path,
`createRequestHeaders` now forwards `sentry-trace` and `baggage` from
the incoming `originalRequest` alongside existing impersonation/API-key
forwarding, with new unit tests covering these cases.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
|
||
|
|
38c2844b83 |
feat(admin): Add system diagnostics and execution management dashboard (#11235)
### Changes 🏗️
This PR adds a comprehensive admin diagnostics dashboard for monitoring
system health and managing running executions.
https://github.com/user-attachments/assets/f7afa3ed-63d8-4b5c-85e4-8756d9e3879e
#### Backend Changes:
- **New data layer** (backend/data/diagnostics.py): Created a dedicated
diagnostics module following the established data layer pattern
- get_execution_diagnostics() - Retrieves execution metrics (running,
queued, completed counts)
- get_agent_diagnostics() - Fetches agent-related metrics
- get_running_executions_details() - Lists all running executions with
detailed info
- stop_execution() and stop_executions_bulk() - Admin controls for
stopping executions
- **Admin API endpoints**
(backend/server/v2/admin/diagnostics_admin_routes.py):
- GET /admin/diagnostics/executions - Execution status metrics
- GET /admin/diagnostics/agents - Agent utilization metrics
- GET /admin/diagnostics/executions/running - Paginated list of running
executions
- POST /admin/diagnostics/executions/stop - Stop single execution
- POST /admin/diagnostics/executions/stop-bulk - Stop multiple
executions
- All endpoints secured with admin-only access
#### Frontend Changes:
- **Diagnostics Dashboard**
(frontend/src/app/(platform)/admin/diagnostics/page.tsx):
- Real-time system metrics display (running, queued, completed
executions)
- RabbitMQ queue depth monitoring
- Agent utilization statistics
- Auto-refresh every 30 seconds
- **Execution Management Table**
(frontend/src/app/(platform)/admin/diagnostics/components/ExecutionsTable.tsx):
- Displays running executions with: ID, Agent Name, Version, User
Email/ID, Status, Start Time
- Multi-select functionality with checkboxes
- Individual stop buttons for each execution
- "Stop Selected" and "Stop All" bulk actions
- Confirmation dialogs for safety
- Pagination for handling large datasets
- Toast notifications for user feedback
#### Security:
- All admin endpoints properly secured with requires_admin_user
decorator
- Frontend routes protected with role-based access controls
- Admin navigation link only visible to admin users
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified admin-only access to diagnostics page
- [x] Tested execution metrics display and auto-refresh
- [x] Confirmed RabbitMQ queue depth monitoring works
- [x] Tested stopping individual executions
- [x] Tested bulk stop operations with multi-select
- [x] Verified pagination works for large datasets
- [x] Confirmed toast notifications appear for all actions
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
(no changes needed)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes needed)
- [x] I have included a list of my configuration changes in the PR
description (no config changes required)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Adds new admin-only endpoints that can stop, requeue, and bulk-mark
executions as `FAILED`, plus schedule deletion, which can directly
impact production workload and data integrity if misused or buggy.
>
> **Overview**
> Introduces a **System Diagnostics** admin feature spanning backend +
frontend to monitor execution/schedule health and perform remediation
actions.
>
> On the backend, adds a new `backend/data/diagnostics.py` data layer
and `diagnostics_admin_routes.py` with admin-secured endpoints to fetch
execution/agent/schedule metrics (including RabbitMQ queue depths and
invalid-state detection), list problem executions/schedules, and perform
bulk operations like `stop`, `requeue`, and `cleanup` (marking
orphaned/stuck items as `FAILED` or deleting orphaned schedules). It
also extends `get_graph_executions`/`get_graph_executions_count` with
`execution_ids` filtering, pagination, started/updated time filters, and
configurable ordering to support efficient bulk/admin queries.
>
> On the frontend, adds an admin diagnostics page with summary cards and
tables for executions and schedules (tabs for
orphaned/failed/long-running/stuck-queued/invalid, plus confirmation
dialogs for destructive actions), wires it into admin navigation, and
adds comprehensive unit tests for both the new API routes and UI
behavior.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
|
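A minimal sketch of the admin-only wiring described above, assuming FastAPI; the `requires_admin_user` dependency is stubbed here and only the route paths follow the list above:

```python
from fastapi import APIRouter, Depends

def requires_admin_user() -> dict:
    """Stand-in for the real dependency that rejects non-admin callers."""
    return {"role": "admin"}

router = APIRouter(
    prefix="/admin/diagnostics",
    dependencies=[Depends(requires_admin_user)],  # every route below is admin-secured
)

@router.get("/executions")
async def execution_diagnostics() -> dict:
    # illustrative payload shape for the execution status metrics
    return {"running": 0, "queued": 0, "completed": 0}

@router.post("/executions/stop")
async def stop_execution(execution_id: str) -> dict:
    return {"stopped": execution_id}
```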
||
|
|
fce7a59713 |
refactor(backend/copilot): split model config into (path, tier) 2x2 matrix
Per PR review: `model` and `advanced_model` were implicitly shared between baseline (fast) and SDK (extended_thinking) paths, but the paths have different hard constraints (baseline can route to any OpenRouter provider; SDK needs Anthropic endpoints). Replace the ambiguous 2-field schema with an explicit 2x2 of (path × tier). New fields:
* `fast_standard_model` — baseline standard tier (Kimi K2.6)
* `fast_advanced_model` — baseline advanced tier (Opus by default; same as SDK advanced so the top tier is a clean A/B across paths. Kimi K2-Thinking evaluated and deferred — it's 6 months older than K2.6, ~9pp behind on SWE-Bench Verified, ~23pp behind on BrowseComp, and text-only.)
* `thinking_standard_model` — SDK standard tier (Sonnet)
* `thinking_advanced_model` — SDK advanced tier (Opus)
Backward-compat env var aliases: `CHAT_MODEL` → thinking_standard, `CHAT_ADVANCED_MODEL` → thinking_advanced, `CHAT_FAST_MODEL` → fast_standard. `populate_by_name=True` so ChatConfig(field=...) kwargs work alongside the alias names. Resolver split: `resolve_chat_model` (SDK) → thinking_*; `_resolve_baseline_model` (baseline) → fast_*. All call sites in sdk/service.py updated; test constructors migrated to new names. |
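A minimal sketch of the (path × tier) split with the legacy aliases, using plain pydantic v2 here (the real `ChatConfig` is an env-backed settings class); the default model ids are taken from the surrounding commits:

```python
from pydantic import BaseModel, ConfigDict, Field

class ChatConfig(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    fast_standard_model: str = Field("moonshotai/kimi-k2.6", validation_alias="CHAT_FAST_MODEL")
    fast_advanced_model: str = "anthropic/claude-opus-4.7"
    thinking_standard_model: str = Field("anthropic/claude-sonnet-4-6", validation_alias="CHAT_MODEL")
    thinking_advanced_model: str = Field("anthropic/claude-opus-4.7", validation_alias="CHAT_ADVANCED_MODEL")

# Legacy env-var names still land in the intended cell...
cfg = ChatConfig.model_validate({"CHAT_FAST_MODEL": "anthropic/claude-sonnet-4-6"})
assert cfg.fast_standard_model == "anthropic/claude-sonnet-4-6"
# ...and populate_by_name=True keeps field-name kwargs working too.
assert ChatConfig(thinking_standard_model="x").thinking_standard_model == "x"
```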
||
|
|
95d3679e14 |
test(backend/copilot): assert Field defaults, not env-backed singleton
Address coderabbit[bot] review comment on PR #12871: three resolver tests read `config.fast_model`, `config.model`, `config.advanced_model` from the env-backed singleton, which fails in CI whenever an operator sets `CHAT_FAST_MODEL=anthropic/claude-sonnet-4-6` (the documented rollback path). Swap to `ChatConfig.model_fields[...].default` so the assertion pins the shipped default regardless of env overrides. |
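A minimal sketch of the pattern, assuming pydantic v2 and using the pre-split `fast_model` field name; the default shown is the one documented in the adjacent Kimi K2.6 commit:

```python
from pydantic import BaseModel

class ChatConfig(BaseModel):
    fast_model: str = "moonshotai/kimi-k2.6"

def test_pinned_default() -> None:
    # Pin the shipped default via the model's field metadata, not an
    # env-backed singleton that an operator override would change.
    assert ChatConfig.model_fields["fast_model"].default == "moonshotai/kimi-k2.6"

test_pinned_default()
```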
||
|
|
89f8060c5d |
feat(backend/copilot): default baseline fast_model to Kimi K2.6 via OpenRouter
Kimi K2.6 prices at $0.60/$2.80 per MTok (5x cheaper input, 5.4x cheaper output than Sonnet 4.6), ties Opus on SWE-Bench Verified (80.2% vs 80.8%), and ships OpenRouter's `reasoning` / `include_reasoning` extension on its Moonshot endpoints — meaning the baseline reasoning plumbing that landed in #12870 lights up unchanged. Three focused deltas:
* `config.py`: new `fast_model` field defaulting to `moonshotai/kimi-k2.6`, separate from `model` (which still resolves to Sonnet for the SDK / extended-thinking path where the Claude Agent SDK CLI requires an Anthropic endpoint). `advanced_model` stays Opus on both paths — no Kimi equivalent at the top tier.
* `_resolve_baseline_model`: no longer delegates to SDK's `resolve_chat_model`. Baseline standard/None → `config.fast_model`; advanced → `config.advanced_model`. SDK untouched.
* `baseline/reasoning.py::_is_reasoning_route`: new gate covering Anthropic + Moonshot Kimi variants, used by `reasoning_extra_body`. The existing `_is_anthropic_model` in service.py stays narrow — it still gates `cache_control` markers + the `anthropic-beta` header, which Moonshot doesn't need (it auto-caches) and which would be dropped (or worst-case 400) on Kimi.
Tests: extended extractor variant / kill-switch coverage in reasoning_test.py (new `TestIsReasoningRoute`, Kimi branches in `TestReasoningExtraBody`), added `_is_anthropic_model_rejects_kimi_routes` regression guard, added end-to-end `test_kimi_route_sends_reasoning_but_no_cache_control` through `_baseline_llm_caller` to pin the split-gate contract, and rewired `TestResolveBaselineModel` around `config.fast_model`.
Rollback: `CHAT_FAST_MODEL=anthropic/claude-sonnet-4-6` restores prior behavior without code changes.
Known risk to validate before we raise confidence: K2.5 had documented many-tool-selection regressions (vLLM had to ship accuracy patches) — we ship 43 tools per call, so /pr-test with the full payload is a must before this default is locked in. |
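A minimal sketch of the baseline resolver split described above; the enum and config objects are simplified stand-ins for the repo's types:

```python
from enum import Enum

class Tier(str, Enum):
    STANDARD = "standard"
    ADVANCED = "advanced"

class Config:
    fast_model = "moonshotai/kimi-k2.6"
    advanced_model = "anthropic/claude-opus-4.7"

def resolve_baseline_model(tier: Tier | None, config: Config = Config()) -> str:
    # Baseline no longer delegates to the SDK's resolve_chat_model:
    # standard/None -> fast_model, advanced -> advanced_model.
    if tier is Tier.ADVANCED:
        return config.advanced_model
    return config.fast_model

assert resolve_baseline_model(None) == "moonshotai/kimi-k2.6"
assert resolve_baseline_model(Tier.ADVANCED) == "anthropic/claude-opus-4.7"
```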
||
|
|
24850e2a3e |
feat(backend/autopilot): stream extended_thinking on baseline via OpenRouter (#12870)
### Why / What / How
**Why:** Fast-mode autopilot never renders a Reasoning block. The frontend already has `ReasoningCollapse` wired up and the wire protocol already carries `StreamReasoning*` events (landed for SDK mode in #12853), but the baseline (OpenRouter OpenAI-compat) path never asks Anthropic for extended thinking and never parses reasoning deltas off the stream. Result: users on fast/standard get a good answer with no visible chain-of-thought, while SDK users see the full Reasoning collapse.
**What:** Plumb reasoning end-to-end through the baseline path by opting into OpenRouter's non-OpenAI `reasoning` extension, parsing the reasoning delta fields off each chunk, and emitting the same `StreamReasoningStart/Delta/End` events the SDK adapter already uses.
**How:**
- **New config:** `baseline_reasoning_max_tokens` (default 8192; 0 disables). Sent as `extra_body={"reasoning": {"max_tokens": N}}` only on Anthropic routes — other providers drop the field, and `is_anthropic_model()` already gates this.
- **Delta extraction:** `_extract_reasoning_delta()` handles all three OpenRouter/provider variants in priority order — legacy `delta.reasoning` (string), DeepSeek-style `delta.reasoning_content`, and the structured `delta.reasoning_details` list (text/summary entries; encrypted or unknown entries are skipped).
- **Event emission:** Reasoning uses the same state-machine rules the SDK adapter uses — a text delta or tool_use delta arriving mid-stream closes the open reasoning block first, so the AI SDK v5 transport keeps reasoning / text / tool-use as distinct UI parts. On stream end, any still-open reasoning block gets a matching `reasoning-end` so a reasoning-only turn still finalises the frontend collapse.
- **Scope:** Live streaming only. Reasoning is not persisted to `ChatMessage` rows or the transcript builder in this PR (SDK path does so via `content_blocks=[{type: 'thinking', ...}]`, but that round-trip requires Anthropic signature plumbing baseline doesn't have today). Reload will still not show reasoning on baseline sessions — can follow up if we decide it's worth the signature handling.
### Changes
- `backend/copilot/config.py` — new `baseline_reasoning_max_tokens` field.
- `backend/copilot/baseline/service.py` — new `_extract_reasoning_delta()` helper; reasoning block state on `_BaselineStreamState`; `reasoning` gated into `extra_body`; chunk loop emits `StreamReasoning*` events with text/tool_use transition rules; stream-end closes any open reasoning block.
- `backend/copilot/baseline/service_unit_test.py` — 11 new tests covering extractor variants (legacy string, deepseek alias, structured list with text/summary aliases, encrypted-skip, empty), paired event ordering (reasoning-end before text-start), reasoning-only streams, and that the `reasoning` request param is correctly gated by model route (Anthropic vs non-Anthropic) and by the config flag.
### Checklist
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/baseline/service_unit_test.py backend/copilot/baseline/transcript_integration_test.py` — 103 passed
- [ ] Manual: with `CHAT_USE_CLAUDE_AGENT_SDK=false` and `CHAT_MODEL=anthropic/claude-sonnet-4-6`, send a multi-step prompt on fast mode and confirm a Reasoning collapse appears alongside the final text
- [ ] Manual: flip `CHAT_BASELINE_REASONING_MAX_TOKENS=0` and confirm baseline responses revert to text-only (no reasoning param, no reasoning UI)
- [ ] Manual: with a non-Anthropic baseline model (`openai/gpt-4o`), confirm the request does NOT include `reasoning` and nothing regresses
For configuration changes:
- [x] `.env.default` is compatible — new setting falls back to the pydantic default |
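A minimal sketch of the three-variant extraction order described above, assuming the chunk delta is already a plain dict (the real helper reads OpenRouter's OpenAI-compat chunk objects; the `reasoning.*` entry type names are an assumption based on OpenRouter's documented `reasoning_details` format):

```python
def extract_reasoning_delta(delta: dict) -> str | None:
    # 1. legacy string field
    if isinstance(delta.get("reasoning"), str):
        return delta["reasoning"] or None
    # 2. DeepSeek-style alias
    if isinstance(delta.get("reasoning_content"), str):
        return delta["reasoning_content"] or None
    # 3. structured reasoning_details list: keep text/summary entries,
    #    skip encrypted or unknown entries
    parts = []
    for entry in delta.get("reasoning_details") or []:
        if entry.get("type") in ("reasoning.text", "reasoning.summary"):
            parts.append(entry.get("text") or entry.get("summary") or "")
    return "".join(parts) or None

assert extract_reasoning_delta({"reasoning": "thinking..."}) == "thinking..."
assert extract_reasoning_delta({"reasoning_content": "step 1"}) == "step 1"
assert extract_reasoning_delta(
    {"reasoning_details": [{"type": "reasoning.encrypted", "data": "..."}]}
) is None
```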
||
|
|
e17e9f13c4 |
fix(backend/copilot): reduce SDK + baseline prompt cache waste (#12866)
## Summary Four cost-reduction changes for the copilot feature. Consolidated into one PR at user request; each commit is self-contained and bisectable. ### 1. SDK: full cross-user cache on every turn (CLI 2.1.116 bump) Previous behavior: CLI 2.1.97 crashed when `excludeDynamicSections=True` was combined with `--resume`, so the code fell back to a raw `system_prompt` string on resume, losing Claude Code's default prompt and all cache markers. Every Turn 2+ of an SDK session wrote ~33K tokens to cache instead of reading. Fix: install `@anthropic-ai/claude-code@2.1.116` in the backend Docker image and point the SDK at it via `CHAT_CLAUDE_AGENT_CLI_PATH=/usr/bin/claude`. CLI 2.1.98+ fixes the crash, so we can use the preset with `exclude_dynamic_sections=True` on every turn — Turn 1, 2, 3+ all share the same static prefix and hit the **cross-user** prompt cache. **Local dev requirement:** if `CHAT_CLAUDE_AGENT_CLI_PATH` is unset, the bundled 2.1.97 fallback will crash on `--resume`. Install the CLI globally (`npm install -g @anthropic-ai/claude-code@2.1.116`) or set the env var. ### 2. Baseline: add `cache_control` markers (commit `756b3ecd9` + follow-ups) Baseline path had zero `cache_control` across `backend/copilot/**`. Every turn was full uncached input (~18.6K tokens, ~$0.058). Two ephemeral markers — on the system message (content-blocks form) and the last tool schema — plus `anthropic-beta: prompt-caching-2024-07-31` via `extra_headers` as defense-in-depth. Helpers split into `_mark_tools_*` (precomputed once per session) and `_mark_system_*` (per-round, O(1)). Repeat hellos: ~$0.058 → ~$0.006. ### 3. Drop `get_baseline_supplement()` (commit `6e6c4d791`) `_generate_tool_documentation()` emitted ~4.3K tokens of `(tool_name, description)` pairs that exactly duplicated the tools array already in the same request. Deleted. `SHARED_TOOL_NOTES` (cross-tool workflow rules) is preserved. Baseline "hello" input: ~18.7K → ~14.4K tokens. ### 4. Langfuse "CoPilot Prompt" v26 (published under `review` label) Separate, out-of-repo change. v25 had three duplicate "Example Response" blocks + a 10-step "Internal Reasoning Process" section. v26 collapses to one example + bullet-form reasoning. Char count 20,481 → 7,075 (rough 4 chars/token → ~5,100 → ~1,770 tokens). - v26 is published with label `review` (NOT `production`); v25 remains active. - Promote via `mcp__langfuse__updatePromptLabels(name="CoPilot Prompt", version=26, newLabels=["production"])` after smoke-test. - Rollback: relabel v25 `production`. ## Test plan - [x] Unit tests for `_build_system_prompt_value` (fresh vs resumed turns emit identical preset dict) - [x] SDK compat tests pass including `test_bundled_cli_version_is_known_good_against_openrouter` - [x] `cli_openrouter_compat_test.py` passes against CLI 2.1.116 (locally verified with `CHAT_CLAUDE_AGENT_CLI_PATH=/opt/homebrew/bin/claude`) - [x] 8 new `_mark_*` unit tests + identity regression test for `_fresh_*` helpers - [x] `SHARED_TOOL_NOTES` public-constant test passes; 5 old tool-docs tests removed - [ ] **Manual cost verification (commit 1):** send two consecutive SDK turns; Turn 2 and Turn 3 should both show `cacheReadTokens` ≈ 33K (full cross-user cache hits). - [ ] **Manual cost verification (commit 2):** send two "hello" turns on baseline <5 min apart; Turn 2 reports `cacheReadTokens` ≈ 18K and cost ≈ $0.006. 
- [ ] **Regression sweep for commit 3:** one turn per tool family — `search_agents`, `run_agent`, `add_memory`/`forget_memory`/`search_memory`, `search_docs`, `read_workspace_file` — to verify no tool-selection regression from dropping the prose tool docs. - [ ] **Langfuse v26 smoke test:** 5-10 varied turns after relabelling to `production`; compare responses vs v25 for regression on persona, concision, capability-gap handling, credential security flows. ## Deployment notes - Production Docker image now installs CLI 2.1.116 (~20 MB added). - `CHAT_CLAUDE_AGENT_CLI_PATH=/usr/bin/claude` set in the Dockerfile; runtime can override via env. - First deploy after this merge needs a fresh image rebuild to pick up the new CLI. |
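A minimal sketch of the two ephemeral cache markers plus the beta header described in item 2 — helper names and payload shapes are illustrative; the real helpers are split into per-session (`_mark_tools_*`) and per-round (`_mark_system_*`) variants:

```python
def mark_system(system_text: str) -> list[dict]:
    # content-blocks form so the cache marker can attach to the system block
    return [{"type": "text", "text": system_text, "cache_control": {"type": "ephemeral"}}]

def mark_tools(tools: list[dict]) -> list[dict]:
    if tools:
        tools = [dict(t) for t in tools]  # don't mutate the caller's schemas
        tools[-1]["cache_control"] = {"type": "ephemeral"}  # marker on the last tool
    return tools

# defense-in-depth: explicitly request prompt caching on the request
extra_headers = {"anthropic-beta": "prompt-caching-2024-07-31"}

messages = [{"role": "system", "content": mark_system("You are Copilot...")}]
tools = mark_tools([{"name": "run_agent"}, {"name": "bash_exec"}])
assert "cache_control" in tools[-1] and "cache_control" not in tools[0]
```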
||
|
|
f238c153a5 |
fix(backend/copilot): release session cluster lock on completion (#12867)
## Summary Fixes a bug where a chat session gets silently stuck after the user presses Stop mid-turn. **Root cause:** the cancel endpoint marks the session `failed` after polling 5s, but the cluster lock held by the still-running task is only released by `on_run_done` when the task actually finishes. If the task hangs past the 5s poll (slow LLM call, agent-browser step, etc.), the lock lingers for up to 5 min — `stream_chat_post`'s `is_turn_in_flight` check sees the flipped meta (`failed`) and enqueues a new turn, but the run handler sees the stale lock and drops the user's message at `manager.py:379` (`reject+requeue=False`). The new SSE stream hangs until its 60s idle timeout. ### Fix Two cooperating changes: 1. **`mark_session_completed` force-releases the cluster lock** in the same transaction that flips status to `completed`/`failed`. Unconditional delete — by the time we're declaring the session dead, we don't care who the current lock holder is; the lock has to go so the next enqueued turn can acquire. This is what closes the stuck-session window. 2. **`ClusterLock.release()` is now owner-checked** (Lua CAS — `GET == token ? DEL : noop` atomically). Force-release means another pod may legitimately own the key by the time the original task's `on_run_done` eventually fires. Without the CAS, that late `release()` would wipe the successor's lock. With it, the late `release()` is a safe no-op when the owner has changed. Together: prompt release on completion (via force-delete) + safe cleanup when on_run_done catches up (via CAS). That re-syncs the API-level `is_turn_in_flight` check with the actual lock state, so the contention window disappears. No changes to the worker-level contention handler: `stream_chat_post` already queues incoming messages into the pending buffer when a turn is in flight (via `queue_pending_for_http`). With these fixes, the worker never sees contention in the common case; if it does (true multi-pod race), the pre-existing `reject+requeue=False` behaviour still applies — we'll revisit that path with its own PR if it becomes a production symptom. ### Verification - Reproduced the original stuck-session symptom locally (Stop mid-turn → send new message → backend logs `Session … already running on pod …`, user message silently lost, SSE stream idle 60s then closes). - After the fix: cancel → new message → turn starts normally (lock released by `mark_session_completed`). - `poetry run pyright` — 0 errors on edited files. - `pytest backend/copilot/stream_registry_test.py backend/executor/cluster_lock_test.py` — 33 passed (includes the successor-not-wiped test). ## Changes - `autogpt_platform/backend/backend/copilot/executor/utils.py` — extract `get_session_lock_key(session_id)` helper so the lock-key format has a single source of truth. - `autogpt_platform/backend/backend/copilot/executor/manager.py` — use the helper where the cluster lock is created. - `autogpt_platform/backend/backend/copilot/stream_registry.py` — `mark_session_completed` deletes the lock key after the atomic status swap (force-release). - `autogpt_platform/backend/backend/executor/cluster_lock.py` — `ClusterLock.release()` (sync + async) uses a Lua CAS to only delete when `GET == token`, protecting against wiping a successor after a force-release. ## Test plan - [ ] Send a message in /copilot that triggers a long turn (e.g. `run_agent`), press Stop before it finishes, then send another message. Expect: new turn starts promptly (no 5-min wait for lock TTL). 
- [ ] Happy path regression — send a normal message, verify turn completes and the session lock key is deleted after completion. - [ ] Successor protection — unit test `test_release_does_not_wipe_successor_lock` covers: A acquires, external DEL, B acquires, A.release() is a no-op, B's lock intact. |
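A minimal sketch of the owner-checked release described above, assuming redis-py; the Lua compare-and-set deletes the key only when it still holds our token, so a late `release()` after a force-release cannot wipe a successor's lock:

```python
import redis

_RELEASE_IF_OWNER = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
else
    return 0
end
"""

class ClusterLock:
    def __init__(self, client: redis.Redis, key: str, token: str, ttl_s: int = 300):
        self._client = client
        self.key, self.token, self.ttl_s = key, token, ttl_s
        self._release = client.register_script(_RELEASE_IF_OWNER)

    def try_acquire(self) -> bool:
        # NX + TTL: only one holder at a time, lock self-expires if the pod dies
        return bool(self._client.set(self.key, self.token, nx=True, ex=self.ttl_s))

    def release(self) -> bool:
        # no-op when another owner (e.g. a successor after force-release) holds the key
        return bool(self._release(keys=[self.key], args=[self.token]))
```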
||
|
|
01f1289aac |
feat(copilot): real OpenRouter cost + cost-based rate limits (percent-only public API) (#12864)
## Why
After
|
||
|
|
343222ace1 |
feat(platform): defer paid-to-paid subscription downgrades + cancel-pending flow (#12865)
### Why / What / How
**Why:** Only downgrades to FREE were scheduled at period end; paid→paid
downgrades (e.g. BUSINESS→PRO) applied immediately via Stripe proration.
The asymmetry meant users lost their higher tier mid-cycle in exchange
for a Stripe credit voucher only redeemable on a future subscription — a
confusing pattern that produces negative-value paths for users actually
cancelling. There was also no way to cancel a pending downgrade or
paid→FREE cancellation once scheduled.
**What:** Standardize on "upgrade = immediate, downgrade = next cycle"
and let users cancel a pending change by clicking their current tier.
Harden the new code against conflicting subscription state, concurrent
tab races, flaky Stripe calls, and hot-path latency regressions.
**How:**
Subscription state machine:
- **Upgrade** (PRO→BUSINESS) — `stripe.Subscription.modify` with
immediate proration (unchanged). If a downgrade schedule is already
attached, release it first so the upgrade wins.
- **Paid→paid downgrade** (BUSINESS→PRO) — creates a
`stripe.SubscriptionSchedule` with two phases (current tier until
`current_period_end`, target tier after). No mid-cycle tier demotion.
Defensive pre-clear: existing schedule → release;
`cancel_at_period_end=True` → set to False.
- **Paid→FREE** — unchanged: `cancel_at_period_end=True`.
- **Same-tier update** — reuses the existing `POST
/credits/subscription` route. When `target_tier == current_tier`,
backend calls `release_pending_subscription_schedule` (idempotent) and
returns status. No dedicated cancel-pending endpoint — "Keep my current
tier" IS the cancel operation.
- `release_pending_subscription_schedule` is idempotent on
terminal-state schedules and clears both `schedule` and
`cancel_at_period_end` atomically per call.
API surface:
- New fields on `SubscriptionStatusResponse`: `pending_tier` +
`pending_tier_effective_at` (pulled from the schedule's next-phase
`start_date` so dashboard-authored schedules report the correct
timestamp).
- `POST /credits/subscription` now returns `SubscriptionStatusResponse`
(previously `SubscriptionCheckoutResponse`); the response still carries
`url` for checkout flows and adds the status fields inline.
- `get_pending_subscription_change` is cached with a 30s TTL — avoids
hammering Stripe on every home-page load.
- Webhook dispatches
`subscription_schedule.{released,completed,updated}` through the main
`sync_subscription_from_stripe` flow so both event sources converge to
the same DB state.
Implementation notes:
- New Stripe calls use native async (`stripe.Subscription.list_async`
etc.) and typed attribute access — no `run_in_threadpool` wrapping in
the new helpers.
- Shared `_get_active_subscription` helper collapses the "list
active/trialing subs, take first" pattern used by 4 callers.
Frontend:
- `PendingChangeBanner` sub-component above the tier grid with formatted
effective date + "Keep [CurrentTier]" button. `aria-live="polite"` for
screen readers; locale pinned to `en-US` to avoid SSR/CSR hydration
mismatch.
- "Keep [CurrentTier]" also available as a button on the current tier
card.
- Other tier buttons disabled while a change is pending — user must
resolve pending first to prevent stacked schedules.
- `cancelPendingChange` reuses `useUpdateSubscriptionTier` with `tier:
current_tier`; awaits `refetch()` on both success and error paths so the
UI reconciles even if the server succeeded but the client didn't receive
the response.
### Changes
**Backend (`credit.py`, `v1.py`)**
- Tier-ordering helpers (`is_tier_upgrade`/`is_tier_downgrade`).
- `modify_stripe_subscription_for_tier` routes downgrades through
`_schedule_downgrade_at_period_end`; upgrade path releases any pending
schedule first.
- `_schedule_downgrade_at_period_end` defensively releases pre-existing
schedules and clears `cancel_at_period_end` before creating the new
schedule.
- `release_pending_subscription_schedule` idempotent on terminal-state
schedules; logs partial-failure outcomes.
- `_next_phase_tier_and_start` returns both tier and phase-start
timestamp; warns on unknown prices.
- `get_pending_subscription_change` cached (30s TTL), narrow exception
handling.
- `sync_subscription_schedule_from_stripe` delegates to
`sync_subscription_from_stripe` for convergence with the main webhook
path.
- Shared `_get_active_subscription` +
`_release_schedule_ignoring_terminal` helpers.
- `POST /credits/subscription` absorbs the same-tier "cancel pending
change" branch.
**Frontend (`SubscriptionTierSection/*`)**
- `PendingChangeBanner` new sub-component (a11y, locale-pinned date,
paid→FREE vs paid→paid copy split, non-null effective-date assertion, no
`dark:` utilities).
- "Keep [CurrentTier]" button on current tier card.
- `useSubscriptionTierSection` — `cancelPendingChange` reuses the
update-tier mutation.
- Copy: downgrade dialog + status hint updated.
- `helpers.ts` extracted from the main component.
**Tests**
- Backend: +24 tests (95/95 passing): upgrade-releases-pending-schedule,
schedule-releases-existing-schedule, cancel-at-period-end collision,
terminal-state release idempotency, unknown-price logging, status
response population, same-tier-POST-with-pending, webhook delegation.
- Frontend: +5 integration tests (21/21 passing): banner render/hide,
Keep-button click from banner + current card, paid→paid dialog copy.
### Checklist
- [x] Backend unit tests: 95 pass
- [x] Frontend integration tests: 21 pass
- [x] `poetry run format` / `poetry run lint` clean
- [x] `pnpm format` / `pnpm lint` / `pnpm types` clean
- [ ] Manual E2E on live Stripe (dev env) — pending deploy: BUSINESS→PRO
creates schedule, DB tier unchanged until period end
- [ ] Manual E2E: "Keep BUSINESS" in banner releases schedule
- [ ] Manual E2E: cancel pending paid→FREE flips `cancel_at_period_end`
back to false
- [ ] Manual E2E: BUSINESS→PRO (scheduled) then attempt BUSINESS→FREE
clears the PRO schedule, sets cancel_at_period_end
- [ ] Manual E2E: BUSINESS→PRO (scheduled) then upgrade back to BUSINESS
releases the schedule
|
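A minimal sketch of the tier-ordering helpers behind "upgrade = immediate, downgrade = next cycle"; the tier order (FREE < PRO < BUSINESS) is taken from the description above and the helper names from the Changes list:

```python
from enum import IntEnum

class SubscriptionTier(IntEnum):
    FREE = 0
    PRO = 1
    BUSINESS = 2

def is_tier_upgrade(current: SubscriptionTier, target: SubscriptionTier) -> bool:
    return target > current

def is_tier_downgrade(current: SubscriptionTier, target: SubscriptionTier) -> bool:
    return target < current

# PRO->BUSINESS applies immediately; BUSINESS->PRO is deferred to period end.
assert is_tier_upgrade(SubscriptionTier.PRO, SubscriptionTier.BUSINESS)
assert is_tier_downgrade(SubscriptionTier.BUSINESS, SubscriptionTier.PRO)
```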
||
|
|
a8226af725 |
fix(copilot): dedupe tool row, lift bash_exec timeout, Stop+resend recovery (#12862)
Closes #12861 · [OPEN-3096](https://linear.app/autogpt/issue/OPEN-3096) ## Why Four related copilot UX / stability issues surfaced on dev once action tools started rendering inline in the chat (see #12813): ### 1. Duplicate bash_exec row `GenericTool` rendered two rows saying the same thing for every completed tool call — a muted subtitle line ("Command exited with code 1" / "Ran: sleep 20") **and** a `ToolAccordion` with the command echoed in its description. Previously hidden inside the "Show reasoning" / "Show steps" collapse, now visibly duplicated. ### 2. `bash_exec` capped at 120s via advisory text The tool schema said `"Max seconds (default 30, max 120)"`; the model obeyed, so long-running scripts got clipped at 120s with a vague `Timed out after 120s` even though the E2B sandbox has no such limit. Confirmed via Langfuse traces — the model picks `120` for long scripts because that's what the schema told it the max was. E2B path never had a server-side clamp. Originally added in #12103 (default 30) and tightened to "max 120" advisory in #12398 (token-reduction pass). ### 3. 30s default was too aggressive `pip install`, small data-processing scripts, etc. routinely cross 30s and got killed before the model thought to retry with a bigger timeout. ### 4. Stop + edit + resend → "The assistant encountered an error" ([OPEN-3096](https://linear.app/autogpt/issue/OPEN-3096)) Two independent bugs both land on the same banner — fixing only one leaves the other visible on the next action. **4a. Stream lock never released on Stop** *(the error in the ticket screenshot)*. The executor's `async for chunk in stream_and_publish(...)` broke out on `cancel.is_set()` without calling `aclose()` on the wrapper. `async for` does NOT auto-close iterators on `break`, so `stream_chat_completion_sdk` stayed suspended at its current `await` — still holding the per-session Redis lock (TTL 120s) until GC eventually closed it. The next `POST /stream` hit `lock.try_acquire()` at [sdk/service.py](autogpt_platform/backend/backend/copilot/sdk/service.py) and yielded `StreamError("Another stream is already active for this session. Please wait or stop it.")`. The `except GeneratorExit → lock.release()` handler written exactly for this case never fired because nothing sent GeneratorExit. **4b. Orphan `tool_use` after stop-mid-tool.** Even with the lock released, the stop path persists the session ending on an assistant row whose `tool_calls` have no matching `role="tool"` row. On the next turn, `_session_messages_to_transcript` hands Claude CLI `--resume` a JSONL with a `tool_use` and no paired `tool_result`, and the SDK raises a vague error — same banner. The ticket's "Open questions" explicitly flags this. ## What **Frontend — `GenericTool.tsx`** split responsibilities between the two rows so they don't duplicate: - **Subtitle row** (always visible, muted): *what ran* — `Ran: sleep 120`. Never the exit code. - **Accordion description**: *how it ended* — `completed` / `status code 127 · bash: missing-bin: command not found` / `Timed out after 120s` / (fallback to command preview for legacy rows missing `exit_code` / `timed_out`). Pulled from the first non-empty line of `stdout` / `stderr` when available. - **Expanded accordion**: full command + stdout + stderr code blocks (unchanged). **Backend — `bash_exec.py`**: - Drop the "max 120" advisory from the schema description. - Bump default `timeout: 30 → 120`. - Clean up the result message — `"Command executed with status code 0"` (no "on E2B", no parens). 
**Backend — `executor/processor.py` + `stream_registry.py` (OPEN-3096 #4a)**: wrap the consumer `async for` in `try/finally: await stream.aclose()`. Close now propagates through `stream_and_publish` into `stream_chat_completion_sdk`, whose existing `except GeneratorExit → lock.release()` releases the Redis lock immediately on cancel. Stream types tightened to `AsyncGenerator[StreamBaseResponse, None]` so the defensive `getattr(stream, "aclose", None)` goes away. **Backend — `session_cleanup.py` (OPEN-3096 #4b)**: new `prune_orphan_tool_calls()` helper walks the trailing session tail and drops any trailing assistant row whose `tool_calls` have unresolved ids (plus everything after it) and any trailing `STOPPED_BY_USER_MARKER` system-stop row. Single backward pass — tolerates the marker being present or absent. Called from the existing turn-start cleanup in both `sdk/service.py` and `baseline/service.py`; takes an optional `log_prefix` so both paths emit the same INFO log when something was popped. In-memory only — the DB save path is append-only via `start_sequence`. ## Test plan - [x] `pnpm exec vitest run src/app/(platform)/copilot/tools/GenericTool src/app/(platform)/copilot/components/ChatMessagesContainer` — 105 pass (6 new for GenericTool subtitle/description variants + legacy-fallback case). - [x] `pnpm format` / `pnpm lint` / `pnpm types` — clean. - [x] `poetry run pytest backend/copilot/sdk/session_persistence_test.py` — 17 pass (6 + 3 new covering the orphan-tool-call prune and its optional-log-prefix branch). - [x] `poetry run pytest backend/copilot/stream_registry_test.py backend/copilot/executor/processor_test.py` — 19 pass (2 for aclose propagation on the `stream_and_publish` wrapper, 2 for `_execute_async` aclose propagation on both exit paths, 1 for publish_chunk RedisError warning ladder). - [x] `poetry run ruff check` / `poetry run pyright` on touched files — clean. - [x] Manual: fire a `bash_exec` — one labelled row, accordion description reads sensibly (`completed` / `status code 1 · …` / `Timed out after 120s`). - [x] Manual: script that needs >120s — no longer clipped. - [x] Manual: Stop mid-tool + edit + resend — Autopilot resumes without "Another stream is already active" and without the vague SDK error. ## Scope note Does not touch `splitReasoningAndResponse` — re-collapsing action tools back into "Show steps" is #12813's responsibility. |
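A minimal sketch of the backward pass `prune_orphan_tool_calls()` describes, on plain-dict rows rather than the repo's `ChatMessage` models; the marker text is illustrative (only the constant's name comes from the PR):

```python
STOPPED_BY_USER_MARKER = "[stopped by user]"  # illustrative marker text

def prune_orphan_tool_calls(messages: list[dict]) -> list[dict]:
    msgs = list(messages)
    # tolerate the trailing system-stop marker being present or absent
    while msgs and msgs[-1].get("role") == "system" and msgs[-1].get("content") == STOPPED_BY_USER_MARKER:
        msgs.pop()
    resolved = {m.get("tool_call_id") for m in msgs if m.get("role") == "tool"}
    for i in range(len(msgs) - 1, -1, -1):
        row = msgs[i]
        if row.get("role") != "assistant" or not row.get("tool_calls"):
            continue
        if any(tc["id"] not in resolved for tc in row["tool_calls"]):
            return msgs[:i]  # drop the orphan assistant row and everything after it
        break                # most recent assistant+tool_calls row is fully resolved
    return msgs

history = [
    {"role": "user", "content": "run it"},
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},  # stopped before the result
]
assert prune_orphan_tool_calls(history) == history[:1]
```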
||
|
|
f06b5293de |
fix(frontend/library): compute monthly spend for AgentBriefingPanel (#12854)
### Why / What / How
<img width="900" alt="Screenshot 2026-04-20 at 19 52 22" src="https://github.com/user-attachments/assets/c30d5f18-2842-4a8a-ac3d-5bfee18fcd56" />
**Why:** The "Spent this month" tile in the Agent Briefing Panel on the Library page always showed `$0`, even for users with real execution usage. The tile is meant to give a quick sense of monthly spend across all agents.
**What:** Compute `monthlySpend` from actual execution data and format it as currency.
**How:**
- `useLibraryFleetSummary` now sums `stats.cost` (cents) across every execution whose `started_at` falls within the current calendar month. Previously `monthlySpend` was hardcoded to `0`.
- `FleetSummary.monthlySpend` is documented as being in cents (consistent with backend + `formatCents`).
- `StatsGrid` now uses `formatCents` from the copilot usage helpers to render the tile (e.g. `$12.34` instead of the broken `$0`).
### Changes 🏗️
- `autogpt_platform/frontend/src/app/(platform)/library/hooks/useLibraryFleetSummary.ts`: aggregate `stats.cost` across executions started in the current calendar month; add `toTimestamp` and `startOfCurrentMonth` helpers.
- `autogpt_platform/frontend/src/app/(platform)/library/components/AgentBriefingPanel/StatsGrid.tsx`: format the "Spent this month" tile via shared `formatCents` helper.
- `autogpt_platform/frontend/src/app/(platform)/library/types.ts`: document that `FleetSummary.monthlySpend` is in cents.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Load `/library` with the `AGENT_BRIEFING` flag enabled and at least one completed execution in the current month — the "Spent this month" tile shows the correct cumulative cost.
- [ ] With no executions this month, the tile shows `$0.00`.
- [ ] Type-check (`pnpm types`), lint (`pnpm lint`), and integration tests (`pnpm test:unit`) pass locally.
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
70b591d74f |
fix(copilot): persist reasoning, split steps/reasoning UX, fix mid-turn promote stream stall (#12853)
## Why
Four related issues that surfaced when queued follow-ups hit an
extended_thinking turn:
1. **Mid-turn promote stalled the SSE stream.** `pollBackendAndPromote`
used `setMessages((prev) => [...prev, bubble])` — Vercel AI SDK's
`useChat` streams SSE deltas into `messages[-1]`, so once a user bubble
ended up there, every subsequent chunk silently landed on the wrong
message. Chat sat frozen until a page refresh, even though the backend's
stream completed cleanly.
2. **Thinking-only final turn looked identical to a frozen UI.** When
Claude's last LLM call after a tool_result produced only a
`ThinkingBlock` (no `TextBlock`, no `ToolUseBlock`), the response
adapter silently dropped it and the UI hung on "Thought for Xs" with no
response text.
3. **Reasoning was invisible.** `ThinkingBlock` was dropped live and
never persisted in a way the frontend could render — sessions on reload
/ shared links showed no thinking, a confusing UX gap ("display for
nothing").
4. **Cross-pod Redis replay dropped reasoning events.** The
`stream_registry._reconstruct_chunk` type map had no entries for
`reasoning-*` types, so any client that subscribed mid-stream (share,
reload, cross-pod) silently dropped them with `Unknown chunk type:
reasoning-delta`.
## What
### Mid-turn promote — splice before the trailing assistant
In `useCopilotPendingChips.ts::pollBackendAndPromote`:
```ts
setMessages((prev) => {
const bubble = makePromotedUserBubble(drained, "midturn", crypto.randomUUID());
const lastIdx = prev.length - 1;
if (lastIdx >= 0 && prev[lastIdx].role === "assistant") {
return [...prev.slice(0, lastIdx), bubble, prev[lastIdx]];
}
return [...prev, bubble];
});
```
Streaming assistant stays at `messages[-1]`, AI SDK deltas keep routing
correctly. `useHydrateOnStreamEnd` snaps the bubble to the DB-canonical
position when the stream ends.
### Reasoning — end-to-end visibility (live + persisted)
- **Wire protocol**: new `StreamReasoningStart` / `StreamReasoningDelta`
/ `StreamReasoningEnd` events matching AI SDK v5's `reasoning-*` wire
names, so `useChat` accumulates them into a `type: 'reasoning'`
UIMessage part natively.
- **Response adapter**: every `ThinkingBlock` now emits reasoning
events; text/tool_use transitions close the open reasoning block so AI
SDK doesn't merge distinct parts.
- **Stream registry**: added `reasoning-*` types to
`_reconstruct_chunk`'s type_to_class map so Redis replay no longer drops
them on cross-pod / reload / share.
- **Persistence** (new): each `StreamReasoningStart` opens a
`ChatMessage(role="reasoning")` row in `session.messages`; deltas
accumulate into its content; `StreamReasoningEnd` closes it. No schema
migration — `ChatMessage.role` is already `String`.
`extract_context_messages` filters `role="reasoning"` out of LLM context
(the `--resume` CLI session already carries thinking separately) so the
model never re-ingests prior reasoning.
- **Frontend conversion**: `convertChatSessionMessagesToUiMessages` maps
`role="reasoning"` DB rows into `{type: "reasoning", text}` parts on the
surrounding assistant bubble, so reload / shared-link sessions render
reasoning identically to live stream.
### Steps / Reasoning UX — modal + accordion split
- **`StepsCollapse`** (new): a Dialog-backed "Show steps" modal wraps
the pre-final-answer group (tool timeline + per-block reasoning). Modal
keeps the steps visually grouped and out of the reading flow.
- **`ReasoningCollapse`** (rewritten): inline accordion with "Show
reasoning" / "Hide reasoning" toggle — no longer a modal, so it expands
*inside* the Steps modal without stacking two dialogs. Reasoning text
appears indented with a left border.
- **`splitReasoningAndResponse`**: reasoning parts now stay in the
reasoning group (instead of being pinned out), so they show up inside
the Steps modal alongside the tool-use timeline.
### Thinking-only final turn — synthesize a closing line
(belt-and-suspenders)
- **Prompt rule** (`_USER_FOLLOW_UP_NOTE`): "Every turn MUST end with at
least one short user-facing text sentence."
- **Adapter fallback**: tracks `_text_since_last_tool_result`; at
`ResultMessage success` with tools run + zero text since, opens a fresh
step (`UserMessage` already closed the previous one) and injects `"(Done
— no further commentary.)"` before `StreamFinish`. Only fires for the
pathological case — pure-text turns untouched.
## Test plan
- [x] `pnpm vitest run` on copilot files — all 638 prior tests pass;
**17 new tests** added covering:
- `convertChatSessionToUiMessages`: reasoning row alone / merged with
assistant text / multi-row / empty skip / duration capture
- `ReasoningCollapse`: initial collapsed, toggle, `rotate-90`,
`aria-expanded`
- `StepsCollapse`: trigger + dialog open renders children
- `MessagePartRenderer`: reasoning → `<pre>` inside collapse,
whitespace/missing text → null
- `splitReasoningAndResponse`: reasoning-stays-in-reasoning regression
- [x] `poetry run pytest backend/copilot/sdk/response_adapter_test.py` —
36 pass (7 new: 4 reasoning streaming, 3 thinking-only fallback)
- [x] Manual: reasoning streams live and persists across reload on a
fresh session
- [x] Manual: previously-created sessions (pre-persistence) don't have
`role="reasoning"` rows — behaves as a clean no-op (no reasoning shown,
no error), new sessions render reasoning inside Steps modal
## Notes
- No DB migration — `ChatMessage.role` is already an open `String`;
`role="reasoning"` is simply filtered out of LLM context builds but
rendered by the frontend.
- Addresses /pr-review blockers: (a) stream_registry missing reasoning
types in Redis round-trip, (b) fallback text emitted outside a step, (c)
dead `case "thinking"` in renderer (now uses the live `reasoning` type
uniformly).
|
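A minimal sketch of the context filter described in the Notes above, on plain-dict rows rather than the repo's `ChatMessage` models: reasoning rows stay in the session for rendering but are excluded from what goes back to the LLM.

```python
def extract_context_messages(messages: list[dict]) -> list[dict]:
    # role="reasoning" rows are render-only; the --resume CLI session already
    # carries thinking separately, so the model never re-ingests prior reasoning.
    return [m for m in messages if m.get("role") != "reasoning"]

session_messages = [
    {"role": "user", "content": "build me an agent"},
    {"role": "reasoning", "content": "The user wants..."},
    {"role": "assistant", "content": "Here's a plan."},
]
assert [m["role"] for m in extract_context_messages(session_messages)] == ["user", "assistant"]
```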
||
|
|
b1c043c2d8 |
feat(copilot): queue follow-up messages on busy sessions (UI + run_sub_session + AutoPilot block) (#12737)
## Why
Users and tools can target a copilot session that already has a turn
running. Before this PR there was no uniform behaviour for that case —
the UI manually routed to a separate queue endpoint, `run_sub_session`
and the AutoPilot block raced the cluster lock, and in-turn follow-ups
only reached the model at turn-end via auto-continue. Outcome: dropped
messages, duplicate tool rows, missed mid-turn intent, latent
correctness bugs in block execution.
## What
A single "message arrived → turn already running?" primitive, shared by
every caller:
1. **POST `/stream`** (UI chat): self-defensive. Session idle → SSE as
today; session busy → `202 application/json` with `{buffer_length,
max_buffer_length, turn_in_flight}`. The deprecated `POST
/messages/pending` endpoint is removed (`GET /messages/pending` peek
stays).
2. **`run_copilot_turn_via_queue`** (shared primitive from #12841, used
by `run_sub_session` + `AutoPilotBlock`): gains the same busy-check.
Busy session → push to pending buffer, return `("queued",
SessionResult(queued=True, pending_buffer_length=N))` without creating a
stream registry session or enqueueing a RabbitMQ job. All callers
inherit queueing.
3. **Mid-turn delivery**: drained follow-ups are attached to every
tool_result's `additionalContext` via the SDK's `PostToolUse` hook —
covers both MCP and built-in tools (WebSearch/Read/Agent/etc.), not just
`run_block`. Claude reads the queued text on the next LLM round of the
same turn.
4. **UI observability**: chips promote to a proper user bubble at the
correct chronological position (after the tool_result row that consumed
them). Auto-continue handles end-of-turn drainage; mid-turn backend poll
handles the tool-boundary drainage path.
## How
**Data plane**
- `backend/copilot/pending_messages.py` — Redis list per session
(LPOP-count for atomic drain), TTL, fire-and-forget pub/sub notify. Max
10 per session (see the buffer sketch after this list).
- `backend/copilot/pending_message_helpers.py` — `is_turn_in_flight`,
`queue_user_message`, `drain_and_format_for_injection`,
`persist_pending_as_user_rows` (shared persist+rollback used by both
baseline and SDK paths).
- `backend/data/redis_helpers.py` — centralised `incr_with_ttl`,
`capped_rpush`, `hash_compare_and_set`; every Lua script and pipeline
atomicity lives in one place.
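For illustration, the pending buffer could look roughly like the following (the key shape and TTL are assumptions, and the real `capped_rpush` enforces the cap atomically via a Lua script rather than with a check after the push):
```python
import redis.asyncio as redis

MAX_PENDING = 10            # per-session cap
PENDING_TTL_SECONDS = 3600  # illustrative TTL

r = redis.Redis()


def _key(session_id: str) -> str:
    return f"chat:pending:{session_id}"  # assumed key shape


async def queue_pending(session_id: str, text: str) -> int:
    """Push a follow-up onto the session's buffer, enforcing the cap."""
    key = _key(session_id)
    length = await r.rpush(key, text)
    if length > MAX_PENDING:
        await r.rpop(key)  # drop the overflow we just pushed; caller reports "buffer full"
        return MAX_PENDING
    await r.expire(key, PENDING_TTL_SECONDS)
    return length


async def drain_pending(session_id: str) -> list[str]:
    """Atomically pop everything queued so far (LPOP with a count)."""
    items = await r.lpop(_key(session_id), MAX_PENDING)
    return [item.decode() for item in items] if items else []
```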
**Injection sites**
- `backend/copilot/sdk/security_hooks.py::post_tool_use_hook` — drains +
returns `additionalContext`. Single hook covers built-in + MCP tools.
- `backend/copilot/sdk/service.py` — `StreamToolOutputAvailable`
dispatch persists the drained follow-up as a real user row right after
the tool_result (UI bubble at the right index).
`state.midturn_user_rows` keeps the CLI upload watermark honest.
- `backend/copilot/baseline/service.py` — same drain at round
boundaries, uses the shared `persist_pending_as_user_rows` helper so
baseline + SDK code paths don't diverge.
**Dispatch**
- `backend/copilot/sdk/session_waiter.py::run_copilot_turn_via_queue` —
`is_turn_in_flight` short-circuit (sketched after this list);
`SessionResult` gains `queued` + `pending_buffer_length`;
`SessionOutcome` gains `"queued"`.
- `backend/api/features/chat/routes.py::stream_chat_post` — busy-check
returns 202 with `QueuePendingMessageResponse`; `POST /messages/pending`
deleted.
- `backend/copilot/tools/run_sub_session.py` / `models.py` —
`SubSessionStatusResponse.status` gains `"queued"`;
`response_from_outcome` renders a clear queued-state message with the
pending-buffer depth and a link to watch live.
- `backend/blocks/autopilot.py::execute_copilot` — surfaces queued state
as descriptive response text + empty `tool_calls`/history when
`result.queued`.
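A self-contained sketch of that short-circuit, with in-memory stand-ins for the Redis-backed in-flight marker and pending buffer:
```python
from dataclasses import dataclass

_inflight: set[str] = set()          # stand-in for the Redis-backed in-flight marker
_pending: dict[str, list[str]] = {}  # stand-in for the Redis pending buffer


async def is_turn_in_flight(session_id: str) -> bool:
    return session_id in _inflight


async def queue_user_message(session_id: str, text: str) -> int:
    buffer = _pending.setdefault(session_id, [])
    buffer.append(text)
    return len(buffer)


@dataclass
class SessionResult:
    queued: bool = False
    pending_buffer_length: int = 0


async def run_copilot_turn_via_queue(session_id: str, message: str):
    """Busy session: queue and return immediately; idle session: normal dispatch."""
    if await is_turn_in_flight(session_id):
        length = await queue_user_message(session_id, message)
        # Short-circuit: no stream-registry session is created and no RabbitMQ job is enqueued.
        return "queued", SessionResult(queued=True, pending_buffer_length=length)
    _inflight.add(session_id)
    try:
        ...  # create_session, enqueue_copilot_turn, wait_for_session_result
        return "completed", SessionResult()
    finally:
        _inflight.discard(session_id)
```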
**Frontend**
- `src/app/(platform)/copilot/useCopilotPendingChips.ts` — hook owning
the chip lifecycle: backend peek on session load, auto-continue
promotion when a second assistant id appears, mid-turn poll that
promotes when the backend count drops.
- `src/app/(platform)/copilot/useHydrateOnStreamEnd.ts` —
force-hydrate-waits-for-fresh-reference dance extracted.
- `src/app/(platform)/copilot/helpers/stripReplayPrefix.ts` — pure
function with drop / strip / streaming-catch-up cases + helper
decomposition.
- `src/app/(platform)/copilot/helpers/makePromotedBubble.ts` — one-line
helper for the promoted bubble shape.
- `src/app/(platform)/copilot/helpers/queueFollowUpMessage.ts` — thin
`fetch` wrapper for the 202 path (AI SDK's `useChat` fetcher only
handles SSE, so we can't reuse `sendMessage` for the queued response).
## Test plan
Backend unit + integration (`poetry run pytest backend/copilot
backend/api/features/chat`):
- [x] 107 tests pass — pending buffer, drain helpers, routes,
session_waiter queue branch, run_sub_session outcome rendering,
autopilot block
- [x] New `session_waiter_test.py` proves the queue branch
short-circuits `stream_registry.create_session` + `enqueue_copilot_turn`
- [x] Mid-turn persist has a rollback-and-re-queue path tested for when
`session.messages` persist silently fails to back-fill sequences
Frontend unit (`pnpm vitest run`):
- [x] 630 tests pass incl. 22 new for extracted helpers + hooks
- [x] Frontend coverage on touched copilot files: 91%+ (patch 87.37%)
Manual (once merged):
- [ ] Queue two chips while a tool is running; Claude acknowledges both
on the next round, UI shows bubbles in typing order after the tool
output
- [ ] Hand AutoPilot block an existing session_id that has a live turn;
block returns queued status, in-flight turn drains the message on its
next round
- [ ] `run_sub_session` against a busy sub — status=`queued`,
`sub_autopilot_session_link` lets user watch live
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
fcaebd1bb7 |
refactor(backend/copilot): unified queue-backed copilot turns + async sub-AutoPilot + guide-read gate (#12841)
### Why / What / How **Why:** the 10-min stream-level idle timeout was killing legitimate long-running tool calls — notably sub-AutoPilot runs via `run_block(AutoPilotBlock)`, which routinely take 15–45 min. The symptom users saw was `"A tool call appears to be stuck"` even though AutoPilot was actively working. A second long-standing rough edge was shipped alongside: agents often skipped `get_agent_building_guide` when generating agent JSON, producing schemas that failed validation and burned turns on auto-fix loops. **What:** three threaded pieces. 1. **Async sub-AutoPilot via `run_sub_session`.** New copilot tool that delegates a task to a fresh (or resumed) sub-AutoPilot, and its companion `get_sub_session_result` for polling/cancelling. The agent starts with `run_sub_session(prompt, wait_for_result≤300s)` and, if the sub isn't done inside the cap, receives a handle + polls via `get_sub_session_result(wait_if_running≤300s)`. No single MCP call ever blocks the stream for more than 5 min, so the 10-min stream-idle timer stays simple and effective (derived as `MAX_TOOL_WAIT_SECONDS * 2`). 2. **Queue-backed copilot turn dispatch** — one code path for all three callers. - `run_sub_session` enqueues a `CoPilotExecutionEntry` on the existing `copilot_execution` exchange instead of spawning an in-process `asyncio.Task`. - `AutoPilotBlock.execute_copilot` (graph block) now uses the **same queue** instead of `collect_copilot_response` inline. - The HTTP SSE endpoint was already queue-backed. - All three share a single primitive: `run_copilot_turn_via_queue` → `create_session` → `enqueue_copilot_turn` → `wait_for_session_result`. The event-aggregation logic (`EventAccumulator`/`process_event`) is a shared module used by both the direct-stream path and the cross-process waiter. - Benefits: **deploy/crash resilience** (RabbitMQ redelivery survives worker restarts), **natural load balancing** across copilot_executor workers, **sessions as first-class resources** (UI users can `/copilot?sessionId=<inner>` into any sub or AutoPilot block's session), and every future stream-level feature (pending-messages drain #12737, compaction policies, etc.) applies uniformly instead of bypassing graph-block sessions. 3. **Guide-read gate on agent-generation tools.** `create_agent` / `edit_agent` / `validate_agent_graph` / `fix_agent_graph` refuse until the session has called `get_agent_building_guide`. The pre-existing soft hint was routinely ignored; the gate makes the dependency enforceable. All four tool descriptions advertise the requirement in one tightened sentence ("Requires get_agent_building_guide first (refuses otherwise).") that stays under the 32000-char schema budget. **How:** #### Queue-backed sub-AutoPilot + AutoPilotBlock - `sdk/session_waiter.py` — new module. `SessionResult` dataclass mirrors `CopilotResult`. `wait_for_session_result` subscribes to `stream_registry`, drains events via shared `process_event`, returns `(outcome, result)`. `wait_for_session_completion` is the cheaper outcome-only variant. `run_copilot_turn_via_queue` is the canonical three-step dispatch. Every exit path unsubscribes the listener. - `sdk/stream_accumulator.py` — new module. `EventAccumulator`, `ToolCallEntry`, `process_event` extracted from `collect.py`. Both the direct-stream and cross-process paths now use the same fold logic. - `tools/run_sub_session.py` / `tools/get_sub_session_result.py` — rewritten around the shared primitive. 
`sub_session_id` is now the sub's `ChatSession` id directly (no separate registry handle). Ownership re-verified on every call via `get_chat_session`. Cancel via `enqueue_cancel_task` on the existing `copilot_cancel` fan-out exchange. - `blocks/autopilot.py` — `execute_copilot` replaced its inline `collect_copilot_response` with `run_copilot_turn_via_queue`. `SessionResult` carries response text, tool calls, and token usage back from the worker so no DB round-trip is needed. The block's public I/O contract (inputs, outputs, `ToolCallEntry` shape) is unchanged. - `CoPilotExecutionEntry` gains a `permissions: CopilotPermissions | None` field forwarded to the worker's `stream_fn` so the sub's capability filter survives the queue hop. The processor passes it through to `stream_chat_completion_sdk` / `stream_chat_completion_baseline`. - **Deleted**: `sdk/sub_session_registry.py` (module-level dict, done-callback, abandoned-task cap, `notify_shutdown_and_cancel_all`, `_reset_for_test`), plus the shutdown-notifier hook in `copilot_executor.processor.cleanup` — redundant under queue-backed execution. #### Run_block single-tool cap (3) - `tools/helpers.execute_block` caps block execution at `MAX_TOOL_WAIT_SECONDS = 5 min` via `asyncio.wait_for` around the generator consumption. - On timeout: logs `copilot_tool_timeout tool=run_block block=… block_id=… input_keys=… user=… session=… cap_s=…` (grep-friendly) and returns an `ErrorResponse` that redirects the LLM to `run_agent` / `run_sub_session`. - Billing protection: `_charge_block_credits` is called in a `finally` guarded by `asyncio.shield` and marked `charge_handled` **before** the await so cancel-mid-charge doesn't double-bill and cancel-mid-generator-before-charge still settles via the finally. #### Guide-read gate - `helpers.require_guide_read(session, tool_name)` scans `session.messages` for any prior assistant tool call named `get_agent_building_guide` (handles both OpenAI and flat shapes). Applied at the top of `_execute` in `create_agent`, `edit_agent`, `validate_agent_graph`, `fix_agent_graph`. Tool descriptions advertise the requirement. #### Shared timing constants - `MAX_TOOL_WAIT_SECONDS = 5 * 60` + `STREAM_IDLE_TIMEOUT_SECONDS = 2 * MAX_TOOL_WAIT_SECONDS` in `constants.py`. Every long-running tool (`run_agent`, `view_agent_output`, `run_sub_session`, `get_sub_session_result`, `run_block`) imports from one place; no more hardcoded 300 / `10*60` literals drifting apart. Stream-idle invariant ("no single tool blocks close to the idle timeout") holds by construction. ### Frontend - Friendlier tool-card labels: `run_sub_session` → "Sub-AutoPilot", `get_sub_session_result` → "Sub-AutoPilot result", `run_block` → "Action" (matches the builder UI's own naming), `run_agent` → "Agent". Fixes the double-verb "Running Run …" phrasing. - `SubSessionStatusResponse.sub_autopilot_session_link` surfaces `/copilot?sessionId=<inner>` so users can click into any sub's session from the tool-call card — same pattern as `run_agent`'s `library_agent_link`. ### Changes 🏗️ - **New modules**: `sdk/session_waiter.py`, `sdk/stream_accumulator.py`, `tools/run_sub_session.py`, `tools/get_sub_session_result.py`, `tools/sub_session_test.py`, `tools/agent_guide_gate_test.py`. - **New response types**: `SubSessionStatusResponse`, `SubSessionProgressSnapshot`, `SessionResult`. - **New gate helper**: `require_guide_read` in `tools/helpers.py`. - **Queue protocol**: `permissions` field on `CoPilotExecutionEntry`, threaded through `processor.py` → `stream_fn`. 
- **Hidden**: `AUTOPILOT_BLOCK_ID` in `COPILOT_EXCLUDED_BLOCK_IDS` (run_block can't execute AutoPilotBlock; agents use `run_sub_session` instead). - **Deleted**: `sdk/sub_session_registry.py`, processor shutdown-notifier hook. - **Regenerated**: `openapi.json` for the new response types; block-docs for the updated `ToolName` Literal. - **Tool descriptions**: tightened the guide-gate hint across the four agent-builder tools to stay under the 32000-char schema budget. - **40+ tests** across sub_session, execute_block cap + billing races, stream_accumulator, agent_guide_gate, frontend helpers. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Unit suite green on the full copilot tree; `poetry run format` + `pyright` clean - [x] Schema character budget test passes (tool descriptions trimmed to stay under 32000) - [x] Native UI E2E (`poetry run app` + `pnpm dev`): `run_sub_session(wait_for_result=60)` returns `status="completed"` + `sub_autopilot_session_link` inline; `run_sub_session(wait_for_result=1)` returns `status="running"` + handle, `get_sub_session_result(wait_if_running=60)` observes `running → completed` transition - [x] AutoPilotBlock (graph) goes through `copilot_executor` queue end-to-end (verified via logs: ExecutionManager's AutoPilotBlock node spawned session `f6de335b-…`, a different `CoPilotExecutor` worker acquired its cluster lock and ran the SDK stream) - [x] Guide gate: `create_agent` without a prior `get_agent_building_guide` returns the refusal; agent reads the guide and retries successfully |
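For context, the guide-read gate described above amounts to scanning the session's prior messages for a `get_agent_building_guide` tool call before the agent-builder tools are allowed to run. A rough sketch, with a plain-dict message shape and a `PermissionError` standing in for the real refusal response:
```python
from typing import Any


def has_called_tool(messages: list[dict[str, Any]], tool_name: str) -> bool:
    """True if any prior assistant message carries a tool call with this name.

    Handles both an OpenAI-style nested shape ({"tool_calls": [{"function": {"name": ...}}]})
    and a flat {"tool_name": ...} shape.
    """
    for msg in messages:
        if msg.get("role") != "assistant":
            continue
        if msg.get("tool_name") == tool_name:
            return True
        for call in msg.get("tool_calls") or []:
            if call.get("function", {}).get("name") == tool_name:
                return True
    return False


def require_guide_read(messages: list[dict[str, Any]], tool_name: str) -> None:
    """Refuse agent-generation tools until the building guide was read this session."""
    if not has_called_tool(messages, "get_agent_building_guide"):
        raise PermissionError(
            f"{tool_name} requires get_agent_building_guide to be called first."
        )
```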
||
|
|
3a01874911 |
fix(frontend/builder): preserve agent name in AgentExecutor node title after reload (#12805)
## Summary
Fixes #11041
When an `AgentExecutorBlock` is placed in the builder, it initially displays the agent's name (e.g., "Researcher v2"). After saving and reloading the page, the title reverts to the generic "Agent Executor."
## Root Cause
The backend correctly persists `agent_name` and `graph_version` in `hardcodedValues` (via `input_default` in `AgentExecutorBlock`). However, `NodeHeader.tsx` always resolves the display title from `data.title` (the generic block name), ignoring the persisted agent name.
## Fix
Modified the title resolution chain in `NodeHeader.tsx` to check `data.hardcodedValues.agent_name` between the user's custom name and the generic block title:
1. `data.metadata.customized_name` (user's manual rename) — highest priority
2. `agent_name` + ` v{graph_version}` from `hardcodedValues` — **new**
3. `data.title` (generic block name) — fallback
This is a frontend-only change. No backend modifications needed.
## Files Changed
- `autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeHeader.tsx` (+11, -1)
## Test Plan
- [x] Place an AgentExecutorBlock, select an agent — title shows agent name
- [x] Save graph, reload page — title still shows agent name (was "Agent Executor" before)
- [x] Double-click to rename — custom name takes priority over agent name
- [x] Clear custom name — falls back to agent name
- [x] Non-AgentExecutor blocks — unaffected, show generic title as before
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
|
||
|
|
6d770d9917 |
fix(platform/copilot): revert forward pagination, add visibility guarantee for blank chat (#12831)
## Why / What / How
**Why:** PR #12796 changed completed copilot sessions to load messages from sequence 0 forward (ascending), which broke the standard chat UX — users now land at the beginning of the conversation instead of the most recent messages. Reported in Discord.
**What:** Reverts the forward pagination approach and replaces it with a visibility guarantee that ensures every page contains at least one user/assistant message.
**How:**
- **Backend**: Removed after_sequence, from_start, forward_paginated, newest_sequence — always use backward (newest-first) pagination. Added _expand_for_visibility() helper: after fetching, if the entire page is tool messages (invisible in UI), expand backward up to 200 messages until a visible user/assistant message is found.
- **Frontend**: Removed all forwardPaginated/newestSequence plumbing from hooks and components. Removed bottom LoadMoreSentinel. Simplified message merge to always prepend paged messages.
### Changes
- routes.py: Reverted to simple backward pagination, removed TOCTOU re-fetch logic
- db.py: Removed forward mode, extracted _expand_tool_boundary() and added _expand_for_visibility()
- SessionDetailResponse: Removed newest_sequence and forward_paginated fields
- openapi.json: Removed after_sequence param and forward pagination response fields
- Frontend hooks/components: Removed forward pagination props and logic (-1000 lines)
- Updated all tests (backend: 63 pass, frontend: 1517 pass)
### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] Backend unit tests: 63 pass
- [x] Frontend unit tests: 1517 pass
- [x] Frontend lint + types: clean
- [x] Backend format + pyright: clean
|
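A rough sketch of the visibility guarantee described above: after the normal backward page is fetched, keep expanding backward (up to the 200-message cap) until at least one user/assistant message is present. The fetch callback and message shape are simplified assumptions:
```python
from typing import Callable

VISIBLE_ROLES = {"user", "assistant"}
MAX_EXPANSION = 200  # hard cap on how far back the page may grow


def expand_for_visibility(
    page: list[dict],
    fetch_older: Callable[[int, int], list[dict]],  # (before_sequence, limit) -> older rows
) -> list[dict]:
    """Grow a tool-only page backward until it contains a UI-visible message."""
    messages = list(page)
    while (
        messages
        and len(messages) < MAX_EXPANSION
        and not any(m["role"] in VISIBLE_ROLES for m in messages)
    ):
        oldest_sequence = messages[0]["sequence"]
        older = fetch_older(oldest_sequence, min(50, MAX_EXPANSION - len(messages)))
        if not older:
            break  # nothing older exists; return the tool-only page as-is
        messages = older + messages
    return messages
```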
||
|
|
334ec18c31 |
docs: convert in-code comments to MkDocs admonitions in block-sdk-gui… (#12819)
### Why / What / How
This PR converts inline Python comments in code examples within `block-sdk-guide.md` into MkDocs `!!! note` admonitions. This makes code examples cleaner and more copy-paste friendly while preserving all explanatory content.
Converts inline comments in code blocks to admonitions following the pattern established in PR #12396 (new_blocks.md) and PR #12313.
- Wrapped code examples with `!!! note` admonitions
- Removed inline comments from code blocks for clean copy-paste
- Added explanatory admonitions after each code block
### Changes 🏗️
- Provider configuration examples (API key and OAuth)
- Block class Input/Output schema annotations
- Block initialization parameters
- Test configuration
- OAuth and webhook handler implementations
- Authentication types and file handling patterns
### Checklist 📋
#### For documentation changes:
- [x] Follows the admonition pattern from PR #12396
- [x] No code changes, documentation only
- [x] Admonition syntax verified correct
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
---
**Related Issues**: Closes #8946
Co-authored-by: slepybear <slepybear@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
|
||
|
|
ea5cfdfa2e |
fix(frontend): remove debug console.log statements (#12823)
## Why
Debug console.log statements were left in production code, which can leak sensitive information and pollute browser developer consoles.
## What
Removed console.log from 4 non-legacy frontend components:
- useNavbar.ts: isLoggedIn debug log
- WalletRefill.tsx: autoRefillForm debug log
- EditAgentForm.tsx: category field debug log
- TimezoneForm.tsx: currentTimezone debug log
## How
Simply deleted the console.log lines as they served no purpose other than debugging during development.
## Checklist
- [x] Code follows project conventions
- [x] Only frontend changes (4 files, 6 lines removed)
- [x] No functionality changes
Co-authored-by: slepybear <slepybear@users.noreply.github.com>
|
||
|
|
d13a85bef7 |
feat(frontend): surface scheduled agents in library & copilot briefings (#12818)
## Why
Scheduled agents weren't well-surfaced in the Library and Copilot
briefings:
- The Library fleet summary didn't count agents that are scheduled
purely via the scheduler (only those with a `recommended_schedule_cron`
set at the agent level).
- Sitrep items didn't distinguish scheduled or listening (trigger-based)
agents, so they often fell back to a generic "idle" state.
- Scheduled chips showed a generic message with no indication of when
the next run would happen.
- The Copilot Agent Briefing surfaced every scheduled agent regardless
of how far out the next run was — an agent scheduled a month away would
take a slot from something actually happening soon.
- Long sitrep messages overflowed the row.
## What
- Add `is_scheduled` to `LibraryAgent` (sourced from the scheduler) so
the frontend can reliably detect schedule-only agents.
- Count scheduled agents in `useLibraryFleetSummary`.
- Include scheduled and listening agents in sitrep items, with a
priority ordering (error → running → stale → success → listening →
scheduled → idle).
- Show a relative next-run time on scheduled sitrep chips (e.g.
"Scheduled to run in 2h" / "in 3d").
- Filter the Copilot Agent Briefing to scheduled agents whose next run
is within the next 3 days.
- Truncate long sitrep messages to 1 line with `OverflowText` and show
the full text in a tooltip on hover.
## How
- Scheduler → `LibraryAgent` mapping populates `is_scheduled` /
`next_scheduled_run`.
- `useSitrepItems` gains an optional `scheduledWithinMs` parameter.
Copilot's `usePulseChips` passes `3 * 24 * 60 * 60 * 1000`; the Library
briefing omits it to keep its existing (unbounded) behavior.
- Scheduled config-based sitrep items are skipped when
`next_scheduled_run` is missing or outside the window.
- `SitrepItem` wraps the message in `OverflowText` so a single-line
ellipsis + hover tooltip replaces raw overflow.
## Test plan
- [ ] `/library` — scheduled and listening agents appear in the sitrep
with accurate copy; fleet summary counts scheduled agents correctly;
long messages truncate with a tooltip on hover.
- [ ] `/copilot` — on an empty session with the `AGENT_BRIEFING` flag
on, the briefing only shows scheduled agents whose next run is within 3
days; agents scheduled further out no longer appear as "scheduled"
chips.
- [ ] Scheduled chip text reads "Scheduled to run in {Nm|Nh|Nd}"
matching `next_scheduled_run`.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
60b85640e7 |
fix(backend/copilot): replace dedup lock with idempotent append_and_save_message (#12814)
## Why
The Redis dedup lock (`chat:msg_dedup:{session}:{content_hash}`, 30s
TTL) was solving the wrong problem:
- Its purpose: block infra/nginx retries from calling
`append_and_save_message` twice after a client disconnect, writing a
duplicate user message to the DB.
- The approach: deliberately hold the lock for 30s on `GeneratorExit`.
- Why unnecessary: the executor's cluster lock already prevents
duplicate *execution*. The only real gap was duplicate *DB writes* in
the ~1s before the executor picks up the turn.
## What
- **Deleted** `message_dedup.py` and `message_dedup_test.py` (~150 lines
removed).
- **Removed** all dedup lock code from `routes.py` (~40 lines removed).
- **`append_and_save_message`** is now idempotent and self-contained:
- Uses redis-py's built-in `Lock(timeout=10, blocking_timeout=2)` —
Lua-script atomic acquire/release, no manual poll/sleep loop.
- Lock context manager yields `bool` (`True` = acquired, `False` =
degraded). When degraded (Redis down or 2s timeout), reads from DB
directly instead of cache to avoid stale-state duplicates.
- Idempotency check: if `session.messages[-1]` already matches the
incoming role+content, returns `None` instead of the session.
- Lock released explicitly as soon as the write completes; `try/except`
in `finally` so a cleanup error after a successful write never surfaces
a false 500.
- On cache-write failure, the stale cache entry is invalidated so future
reads fall back to the authoritative DB.
- **`routes.py`** uses the `None` signal: `is_duplicate_message = (await
append_and_save_message(...)) is None`
- Skips `create_session` and `enqueue_copilot_turn` for duplicates —
client re-attaches to the existing turn's Redis stream.
- `track_user_message` and `turn_id` generation only happen when
`is_duplicate_message` is false.
- **`subscribe_to_session`** retry window increased from 1×50ms to
3×100ms — covers the window where a duplicate request subscribes before
the original's `create_session` hset completes.
- **Cleaned up** `routes_test.py`: removed 5 dedup-specific tests and
the `mock_redis` setup from `_mock_stream_internals`; added
duplicate-skips-enqueue test.
## How
The idempotency guard distinguishes legit same-text messages from
retries via the **assistant turn between them**: if the user said "yes",
got a response, and says "yes" again, `session.messages[-1]` is the
assistant reply, so the role check fails and the second message goes
through. A retry (no response yet) sees the user message as the last
entry and is blocked.
```python
if (
session.messages
and session.messages[-1].role == message.role
and session.messages[-1].content == message.content
):
return None # duplicate — caller skips enqueue
```
The Redis lock ensures this check always sees authoritative state even
in multi-replica deployments. When the lock is unavailable (Redis down
or contention), reading from DB directly (bypassing potentially stale
cache) provides the same safety guarantee at the cost of a DB
round-trip.
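Condensed, the lock-then-check flow looks roughly like this (the lock key, the `load_cached` / `load_from_db` loaders, and the `session.append` / `session.save` calls are illustrative stand-ins, not the real API):
```python
import redis.asyncio as redis

r = redis.Redis()


async def append_and_save_message(session_id, role, content, *, load_cached, load_from_db):
    """Idempotently append a message; return None when the write is a duplicate retry."""
    lock = r.lock(f"chat:append:{session_id}", timeout=10, blocking_timeout=2)
    try:
        acquired = await lock.acquire()
    except Exception:
        acquired = False  # Redis unavailable: degraded mode
    try:
        # Degraded mode (Redis down or 2s contention): read the authoritative DB copy
        # instead of a possibly-stale cache so the duplicate check still holds.
        session = await (load_cached(session_id) if acquired else load_from_db(session_id))
        last = session.messages[-1] if session.messages else None
        if last and last.role == role and last.content == content:
            return None  # retry of the same message: caller skips enqueue
        session.append(role, content)
        await session.save()
        return session
    finally:
        if acquired:
            try:
                await lock.release()
            except Exception:
                pass  # cleanup failure after a successful write must not surface as a 500
```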
## Checklist
- [x] PR targets `dev`
- [x] Conventional commit title with scope
- [x] Tests added/updated (duplicate detection, lock degradation, DB
error, cache invalidation paths)
- [x] `poetry run format` and `poetry run pyright` pass clean
- [x] No new linter suppressors
|
||
|
|
87e4d42750 |
fix(backend/copilot): fix initial load missing messages + forward pagination for completed sessions (#12796)
### Why / What / How
**Why:** Completed copilot sessions with many messages showed a completely empty chat view. A user reported a 158-message session that appeared blank on reload.
**What:** Two bugs fixed:
1. **Backend** — initial page load always returned the newest 50 messages in DESC order. For sessions heavy in tool calls, the user's original messages (seq 0–5) were never included; all 50 slots consumed by mid-session tool outputs.
2. **Frontend** — convertChatSessionToUiMessages silently dropped user messages with null/empty content.
**How:** For completed sessions (no active stream), the backend now loads from sequence 0 in ASC order. Active/streaming sessions keep newest-first for streaming context. A new after_sequence forward cursor enables infinite-scroll for subsequent pages (sentinel moves to bottom). The frontend wires forward_paginated + newest_sequence end-to-end.
### Changes 🏗️
- db.py: added from_start (ASC) and after_sequence (forward cursor) modes; added newest_sequence to PaginatedMessages
- routes.py: detect completed vs active on initial load; pass from_start=True for completed; expose newest_sequence + forward_paginated; accept after_sequence param
- convertChatSessionToUiMessages.ts: never drop user messages with empty content
- useLoadMoreMessages.ts: forward pagination via after_sequence; append pages to end
- ChatMessagesContainer.tsx: LoadMoreSentinel at bottom for forward-paginated sessions
- Wire newestSequence + forwardPaginated end-to-end through useChatSession/useCopilotPage/ChatContainer
- openapi.json: add after_sequence + newest_sequence/forward_paginated; regenerate types
- db_test.py: 9 new unit tests for from_start and after_sequence modes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a completed session with many messages — first user message visible on initial load
- [x] Scroll to bottom of completed session — load more appends next page
- [x] Open active/streaming session — newest messages shown first, streaming unaffected
- [x] Backend unit tests: all 28 pass
- [x] Frontend lint/format: clean, no new type errors
---------
Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
|
||
|
|
0339d95d12 |
fix(frontend): small UI fixes, sort menu bg, name update auth, stats grid overflow, pulse chips (#12815)
## Summary
- **LibrarySortMenu / AgentFilterMenu**: Force `!bg-transparent` and neutralise legacy `SelectTrigger` styles (`m-0.5`, `ring-offset-white`, `shadow-sm`) that caused a white background around the trigger
- **EditNameDialog**: Replace client-side `supabase.auth.updateUser()` with server-side `PUT /api/auth/user` route — fixes "Auth session missing!" error caused by `httpOnly` cookies being inaccessible to browser JS
- **StatsGrid**: Swap label `Text` for `OverflowText` so tile labels truncate with `…` and show a tooltip instead of wrapping when the grid is squeezed
- **PulseChips**: Set fixed `15rem` chip width with `shrink-0`, horizontal scroll, and styled thin scrollbar
- **Tests**: Updated `EditNameDialog` tests to use MSW instead of mocking Supabase client; added 7 new `PulseChips` integration tests
## Test plan
- [x] `pnpm test:unit` — all 1495 tests pass (91 files)
- [x] `pnpm format && pnpm lint` — clean
- [x] `pnpm types` — no new errors (pre-existing only)
- [ ] QA `/library?sort=updatedAt` — sort menu trigger has no white bg
- [ ] QA `/library` — StatsGrid labels truncate with tooltip on narrow viewports
- [ ] QA `/copilot` — PulseChips scroll horizontally at fixed width
- [ ] QA `/copilot` — Edit name dialog saves successfully (no "Auth session missing!")
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
f410929560 |
feat(platform): Add xAI Grok 4.20 models from OpenRouter (#12620)
Requested by @Torantulino
Adds the 2 xAI Grok 4.20 models available on OpenRouter that are missing from the platform.
## Why
`x-ai/grok-4.20` and `x-ai/grok-4.20-multi-agent` are xAI's current flagship models (released March 2026) and are available via OpenRouter, but weren't accessible from the platform's LLM blocks.
## Changes
**`autogpt_platform/backend/backend/blocks/llm.py`**
- Added `GROK_4_20` and `GROK_4_20_MULTI_AGENT` enum members
- Added corresponding `MODEL_METADATA` entries (open_router provider, 2M context window, price tier 3)
**`autogpt_platform/backend/backend/data/block_cost_config.py`**
- Added `MODEL_COST` entries at 5 credits each (flagship tier, $2/M in)
**`docs/integrations/block-integrations/llm.md`**
- Added new model IDs to all LLM block tables

| Model | Pricing | Context |
|-------|---------|---------|
| `x-ai/grok-4.20` | $2/M in, $6/M out | 2M |
| `x-ai/grok-4.20-multi-agent` | $2/M in, $6/M out | 2M |

Both models use the standard OpenRouter chat completions API — no special handling needed.
Resolves: SECRT-2196
---------
Co-authored-by: Torantulino <22963551+Torantulino@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Otto (AGPT) <otto@agpt.co>
|
||
|
|
2bbec09e1a |
feat(platform): subscription tier billing via Stripe Checkout (#12727)
## Why Introducing paid subscription tiers (PRO, BUSINESS) so we can charge for AutoPilot capacity beyond the free tier. Without a billing integration, all users share the same rate limits regardless of their willingness to pay for additional capacity. ## What End-to-end subscription billing system using Stripe Checkout Sessions: **Backend:** - `SubscriptionTier` enum (`FREE`, `PRO`, `BUSINESS`, `ENTERPRISE`) on the `User` model - `POST /credits/subscription` — creates a Stripe Checkout Session for paid upgrades; for FREE tier or when `ENABLE_PLATFORM_PAYMENT` is off, sets tier directly - `GET /credits/subscription` — returns current tier, monthly cost (cents), and all tier costs - `POST /credits/stripe_webhook` — handles `customer.subscription.created/updated/deleted`, `checkout.session.completed`, `charge.dispute.*`, `refund.created` - `sync_subscription_from_stripe()` — keeps `User.subscriptionTier` in sync from webhook events; guards against out-of-order delivery (cancelled event after new sub created), ENTERPRISE overwrite, and duplicate webhook replay - Open-redirect protection on `success_url`/`cancel_url` via `_validate_checkout_redirect_url()` - `_cancel_customer_subscriptions()` — cancels both active and trialing subs; propagates errors so callers can avoid updating DB tier on Stripe failure - `_cleanup_stale_subscriptions()` — best-effort cancellation of old subs when a new one becomes active (paid-to-paid upgrade), to prevent double-billing - `get_stripe_customer_id()` with idempotency key to prevent duplicate Stripe customers on concurrent requests - `cache_none=False` sentinel fix in `@cached` decorator so Stripe price lookups retry on transient error instead of poisoning the cache with `None` - Stripe Price IDs read from LaunchDarkly (`stripe-price-id-pro`, `stripe-price-id-business`). If not configured, upgrade returns 422. 
**Frontend:** - `SubscriptionTierSection` component on billing page: tier cards (FREE/PRO/BUSINESS), upgrade/downgrade buttons, per-tier cost display, Stripe redirect on upgrade - Confirmation dialog for downgrades - ENTERPRISE users see a read-only admin-managed banner - Success toast on return from Stripe Checkout (`?subscription=success`) - Uses generated `useGetSubscriptionStatus` / `useUpdateSubscriptionTier` hooks ## How - Paid upgrades use Stripe Checkout Sessions (not server-side subscription creation) — Stripe handles PCI-compliant card collection and the subscription lifecycle - Tier is synced back via webhook on `customer.subscription.created/updated/deleted` - Downgrade to FREE cancels via Stripe API immediately; a `stripe.StripeError` during cancellation returns 502 with a generic message (no Stripe detail leakage) - LaunchDarkly flags: `stripe-price-id-pro` (string), `stripe-price-id-business` (string), `enable-platform-payment` (bool) - `ENABLE_PLATFORM_PAYMENT=false` bypasses Stripe for beta/internal access (sets tier directly) ## Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] `ENABLE_PLATFORM_PAYMENT=false` → tier change updates directly, no Stripe redirect - [x] `ENABLE_PLATFORM_PAYMENT=true` with price IDs configured → paid upgrade redirects to Stripe Checkout - [x] Stripe webhook `customer.subscription.created` → `User.subscriptionTier` updated - [x] Unrecognised price ID in webhook → logs warning, tier unchanged - [x] ENTERPRISE user webhook event → tier not overwritten - [x] Empty `STRIPE_WEBHOOK_SECRET` → 503 (prevents HMAC bypass) - [x] Open-redirect attack on `success_url`/`cancel_url` → 422 #### For configuration changes: - [x] No `.env` or `docker-compose.yml` changes - [x] LaunchDarkly flags to create: `stripe-price-id-pro` (string), `stripe-price-id-business` (string), `enable-platform-payment` (bool) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Co-authored-by: majdyz <majdy.zamil@gmail.com> |
||
|
|
31b88a6e56 |
feat(frontend): add Agent Briefing Panel (#12764)
## Summary <img width="800" height="772" alt="Screenshot_2026-04-13_at_18 29 19" src="https://github.com/user-attachments/assets/3da6eaf2-1485-4c08-9651-18f2f4220eba" /> <img width="800" height="285" alt="Screenshot_2026-04-13_at_18 29 24" src="https://github.com/user-attachments/assets/6a5f981a-1e1d-4d22-a33d-9e1b0e7555a7" /> <img width="800" height="288" alt="Screenshot_2026-04-13_at_18 29 27" src="https://github.com/user-attachments/assets/f97b4611-7c23-4fc9-a12d-edf6314a77ef" /> <img width="800" height="433" alt="Screenshot_2026-04-13_at_18 29 31" src="https://github.com/user-attachments/assets/e6d7241d-84f3-4936-b8cd-e0b12df392bb" /> <img width="700" height="554" alt="Screenshot_2026-04-13_at_18 29 40" src="https://github.com/user-attachments/assets/92c08f21-f950-45cd-8c1d-529905a6e85f" /> Implements the Agent Intelligence Layer — real-time agent awareness across the Library and Copilot pages. ### Core Features - **Agent Briefing Panel** — stats grid with fleet-wide counts (running, recently completed, needs attention, scheduled, idle, monthly spend) and tab-driven content below - **Enhanced Library Cards** — StatusBadge, run counts, contextual action buttons (See tasks, Start, Chat) with consistent icon-left styling - **Situation Report Items** — prioritized sitrep with error-first ranking, "See task" deep-links for completed runs, and "Ask AutoPilot" bridge - **Home Pulse Chips** — agent status chips on Copilot empty state with hover-reveal actions (slide-up animation + backdrop blur on desktop, always visible on touch) - **Edit Display Name** — pencil icon on Copilot greeting to update Supabase user metadata inline ### Backend - **Execution count API** — batch `COUNT(*)` query on `AgentGraphExecution` grouped by `agentGraphId` for the current user, avoiding loading full execution rows. 
Wired into `list_library_agents` and `list_favorite_library_agents` via `execution_count_override` on `LibraryAgent.from_db()` ### UI Polish - Subtler gradient on AgentBriefingPanel (reduced opacity on background + animated border) - Consistent button styles across all action buttons (icon-left, same sizing) - Removed duplicate "Open in builder" menu item (kept "Edit agent") - "Recently completed" tab replaces "Listening" in briefing panel, showing agents with completed runs in last 72h ## Changes ### Backend - `backend/api/features/library/db.py` — added `_fetch_execution_counts()` batch COUNT query, wired into list endpoints - `backend/api/features/library/model.py` — added `execution_count_override` param to `LibraryAgent.from_db()` ### Frontend — New files - `EditNameDialog/EditNameDialog.tsx` — modal to update display name via Supabase auth - `PulseChips/PulseChips.module.css` — hover-reveal animation + glass panel styles ### Frontend — Modified files - `EmptySession.tsx` — added EditNameDialog and PulseChips - `PulseChips.tsx` — redesigned with See/Ask buttons, hover overlay on desktop - `usePulseChips.ts` — added agentID for deep-linking - `AgentBriefingPanel.tsx` — subtler gradient, adjusted padding - `AgentBriefingPanel.module.css` — reduced conic gradient opacity - `BriefingTabContent.tsx` — added "completed" tab routing - `StatsGrid.tsx` — replaced Listening with Recently completed, reordered tabs - `SitrepItem.tsx` — consistent button styles, "See task" link for completed items, updated copilot prompt - `ContextualActionButton.tsx` — icon-left, smaller icon, renamed Run to Start - `LibraryAgentCard.tsx` — icon-left on all buttons, EyeIcon for See tasks - `AgentCardMenu.tsx` — removed duplicate "Open in builder" - `useAgentStatus.ts` — added completed count to FleetSummary - `useLibraryFleetSummary.ts` — added recent completion tracking - `types.ts` — added `completed` to FleetSummary and AgentStatusFilter ## Test plan - [ ] Library page renders Agent Briefing Panel with stats grid - [ ] "Recently completed" tab shows agents with completed runs in last 72h - [ ] Agent cards show real execution counts (not 0) - [ ] Action buttons have consistent styling with icon on the left - [ ] "See task" on completed items deep-links to agent page with execution selected - [ ] "Ask AutoPilot" generates last-run-specific prompt for completed items - [ ] Copilot empty state shows PulseChips with hover-reveal actions on desktop - [ ] PulseChips show See/Ask buttons always on touch screens - [ ] Pencil icon on greeting opens edit name dialog - [ ] Name update persists via Supabase and refreshes greeting - [ ] `pnpm format && pnpm lint && pnpm types` pass - [ ] `poetry run format` passes for backend changes 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: John Ababseh <jababseh7@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Bentlybro <Github@bentlybro.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Co-authored-by: CodeRabbit <noreply@coderabbit.ai> Co-authored-by: majdyz <zamil.majdy@agpt.co> |
||
|
|
d357956d98 |
refactor(backend/copilot): make session-file helper fns public to fix Pyright warnings (#12812)
## Why
After PR #12804 was squashed into dev, two module-level helper functions in `backend/copilot/sdk/service.py` remained private (`_`-prefixed) while being directly imported by name in `sdk/transcript_test.py`. Pyright reports `reportAttributeAccessIssue` when tests (even those excluded from CI lint) import private symbols from outside their defining module.
## What
Rename two helpers to remove the underscore prefix:
- `_process_cli_restore` → `process_cli_restore`
- `_read_cli_session_from_disk` → `read_cli_session_from_disk`
Update call sites in `service.py` and imports/calls/docstrings in `sdk/transcript_test.py`.
## How
Pure rename — no logic change. Both functions were already module-level helpers with no reason to be private; the underscore was convention carried over during the refactor but they are directly unit-tested and should be public. All 66 `sdk/transcript_test.py` tests pass after the rename.
## Checklist
- [x] Tests pass (`poetry run pytest backend/copilot/sdk/transcript_test.py`)
- [x] No `_`-prefixed symbols imported across module boundaries
- [x] No linter suppressors added
|
||
|
|
697ffa81f0 | fix(backend/copilot): update transcript_test to use strip_for_upload after upload_cli_session removal | ||
|
|
2b4727e8b2 |
chore: merge master into dev, resolve baseline/transcript conflicts
Conflicts in baseline/service.py, baseline/transcript_integration_test.py,
and transcript.py arose because dev-only commit
|
||
|
|
0d4b31e8a1 |
refactor(backend/copilot): unified transcript context — extract_context_messages, mode-gated --resume, compaction-aware gap-fill (#12804)
### Why / What / How
**Why:** The copilot had two separate GCS paths (`cli-sessions/` and
`chat-transcripts/`), redundant function names
(`upload_cli_session`/`restore_cli_session`), and no shared context
strategy between modes. When switching from baseline→SDK or
SDK→baseline, the receiving mode discarded the stored transcript and
fell back to full DB reconstruction — loading all raw messages instead
of the compacted form — causing inflated context, wasted tokens, and
loss of CLI compaction summaries.
**What:**
- Single GCS path (`cli-sessions/`) for both modes — `chat-transcripts/`
removed
- Unified public API: `upload_transcript` / `download_transcript` /
`TranscriptDownload`
- `TranscriptMode = Literal["sdk", "baseline"]` persisted in
`.meta.json` — SDK skips `--resume` when `mode != "sdk"`
(baseline-written JSONL has stripped fields / synthetic IDs)
- `extract_context_messages(download, session_messages)` — shared
context primitive used by **both SDK and baseline**: reads compacted
transcript content + fills only the DB gap (messages after watermark),
so CLI compaction summaries are preserved across mode switches
- Watermark fix: `_jsonl_covered = transcript_msg_count + 2` when a real
transcript is present, preventing false gap detection after `--resume`
- Baseline gap-fill: `_append_gap_to_builder` converts `ChatMessage` →
JSONL entries; no more silently discarded stale transcripts
**How:**
```
SDK turn (mode="sdk" transcript available):
──► --resume [full CLI session restored natively]
──► inject gap prefix if DB has messages after watermark
SDK turn (mode="baseline" transcript available):
──► cannot --resume (synthetic CLI IDs)
──► extract_context_messages(download, session_messages):
returns transcript JSONL (compacted, isCompactSummary preserved) + gap
excludes session_messages[-1] (current turn — caller injects it separately)
──► format as <conversation_history> + "Now, the user says: {current}"
Baseline turn (any transcript):
──► _load_prior_transcript → TranscriptDownload
──► extract_context_messages(download, session_messages) + session_messages[-1]
replaces full session.messages DB read
──► LLM messages: [compacted history + gap] + [current user turn]
Transcript unavailable — both SDK (use_resume=False) and baseline:
──► extract_context_messages(None, session_messages) returns session_messages[:-1]
(all prior DB messages except the current user turn at [-1])
──► graceful fallback — no crash, no empty context
──► covers: first turn, GCS error, corrupt JSONL, missing .meta.json
──► next successful response uploads a fresh transcript
```
`extract_context_messages` is the shared primitive — both modes call the
same function, which handles:
- `download=None` (first turn, GCS unavailable) → falls back to
`session_messages[:-1]`
- Empty/corrupt content → falls back to `session_messages[:-1]`
- `bytes` content (raw GCS) or `str` content (pre-decoded baseline path)
- `isCompactSummary=True` entries → preserved so CLI compaction survives
mode switches
- Missing/corrupt `.meta.json` → `message_count` defaults to `0`, `mode`
defaults to `"sdk"`
**Why `[:-1]` and not all messages?** `session_messages[-1]` is always
the current user turn being handled right now. Both callers inject it
separately — SDK wraps it as `"Now, the user says: ..."`, baseline
appends it as the final message in the LLM array. Returning it inside
`extract_context_messages` would double-inject it.
### Changes 🏗️
- **`transcript.py`**: `CliSessionRestore` → `TranscriptDownload` +
`mode` field; `upload_cli_session` → `upload_transcript`;
`restore_cli_session` → `download_transcript`; add `TranscriptMode`,
`detect_gap`, `extract_context_messages`; import `ChatMessage` via
relative path to match `service.py` style
- **`sdk/service.py`**: mode-check before `--resume`; `_RestoreResult`
carries `baseline_download` + `context_messages` + `transcript_content`;
`_build_query_message` accepts `prior_messages` override;
`_restore_cli_session_for_turn` populates `context_messages` via
`extract_context_messages` and sets `transcript_content` to prevent
duplicate DB reconstruction; watermark fix (`_jsonl_covered =
transcript_msg_count + 2`)
- **`baseline/service.py`**: `_load_prior_transcript` returns `(bool,
TranscriptDownload | None)`; LLM context replaced with
`extract_context_messages(download, messages)`; `_append_gap_to_builder`
+ `detect_gap` call; `upload_transcript(mode="baseline")`
- **`sdk/transcript.py`**: updated re-exports, old aliases removed
- **`scripts/download_transcripts.py`**: updated for `bytes | str`
content type
- **Test files**: 179 tests total; `transcript_test.py`,
`baseline/transcript_integration_test.py`,
`sdk/service_helpers_test.py`, `sdk/test_transcript_watermark.py`,
`test/copilot/test_transcript_watermark.py` all updated/added
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] 179 unit tests pass — `transcript_test`,
`baseline/transcript_integration_test`, `sdk/service_helpers_test`,
`sdk/test_transcript_watermark`
- [x] pyright 0 errors on all changed files
- [x] SDK `--resume` path still works when `mode="sdk"` transcript is
present
- [x] SDK fallback uses `extract_context_messages` (compacted baseline
content + gap) when `mode="baseline"` transcript is stored — no more
full DB reconstruction
- [x] Baseline uses `extract_context_messages` per turn instead of full
`session.messages` DB read
- [x] `isCompactSummary=True` entries preserved across mode switches
- [x] Watermark (`_jsonl_covered`) fix prevents false gap detection
after `--resume`
- [x] Baseline gap detection no longer silently discards stale
transcripts
- [x] `TranscriptDownload.content` accepts `bytes | str` — backward
compatible
- [x] Transcript unavailable (GCS error, first turn, corrupt file)
gracefully falls back to `session_messages[:-1]` without crash — applies
to both SDK and baseline paths
---------
Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
|
||
|
|
0cd0a76305 |
fix(backend/copilot): baseline always uploads when GCS has no transcript
_load_prior_transcript was returning False for missing/invalid transcripts, which caused should_upload_transcript to suppress the upload. The original intent was to protect against overwriting a *newer* GCS version — but a missing or corrupt file is not 'newer'. Only stale (watermark ahead) and download errors (unknown GCS state) should suppress upload. Also renames transcript_covers_prefix → transcript_upload_safe throughout to accurately describe what the flag means. |
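The rule reduces to a small predicate; a sketch with hypothetical parameter names:
```python
def transcript_upload_safe(
    download_failed: bool,
    transcript_found: bool,
    gcs_watermark: int,
    local_watermark: int,
) -> bool:
    """True when overwriting the stored transcript cannot lose newer data."""
    if download_failed:
        return False  # unknown GCS state: don't risk clobbering a newer version
    if not transcript_found:
        return True   # a missing or corrupt file is not "newer"; always upload
    return gcs_watermark <= local_watermark  # suppress only when GCS is ahead of us
```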
||
|
|
d01a51be0e |
Add check for GitHub account connection status (#12807)
Added an instruction to check GitHub authentication status before prompting the user. This prevents repeatedly and unnecessarily asking the user to add their GitHub credentials when they're already added, which is currently a prevalent bug.
### Changes 🏗️
- Added one line to `autogpt_platform/backend/backend/copilot/prompting.py` instructing AutoPilot to run `gh auth status` before prompting the user to connect their GitHub account.
Co-authored-by: Toran Bruce Richards <22963551+Torantulino@users.noreply.github.com>
|
||
|
|
bd2efed080 |
fix(frontend): allow zooming out more in the builder (#12690)
Reduced minZoom on the builder canvas from 0.1 to 0.05 to allow zooming out further when working with large agent graphs. Fixes #9325 Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
||
|
|
5fccd8a762 | Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev | ||
|
|
2740b2be3a |
fix(backend/copilot): disable fallback model to fix prod CLI rejection (#12802)
### Why / What / How
**Why:** `fffbe0aad8` changed both `ChatConfig.model` and `ChatConfig.claude_agent_fallback_model` to `claude-sonnet-4-6`. The Claude Code CLI rejects this with `Error: Fallback model cannot be the same as the main model`, causing every standard-mode copilot turn to fail with exit code 1 — the session "completes" in ~30s but produces no response and drops the transcript.
**What:** Set `claude_agent_fallback_model` default to `""`. `_resolve_fallback_model()` already returns `None` on empty string, which means the `--fallback-model` flag is simply not passed to the CLI. On 529 overload errors the turn will surface normally instead of silently retrying with a fallback.
**How:** One-line config change + test update.
### Changes 🏗️
- `ChatConfig.claude_agent_fallback_model` default: `"claude-sonnet-4-6"` → `""`
- Update `test_fallback_model_default` to assert the empty default
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
|
||
|
|
d27d22159d | Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev | ||
|
|
fffbe0aad8 |
fix(backend): default copilot sonnet to 4.6 (#12799)
### Why / What / How
Why: Copilot/Autopilot standard requests were still defaulting to Claude
Sonnet 4, while the expected default for this path is Sonnet 4.6.
What: This PR updates the backend Copilot defaults so the
standard/default path and fast path use Sonnet 4.6, and aligns the SDK
fallback model and related test expectations.
How: It changes `ChatConfig.model`, `ChatConfig.fast_model`, and
`ChatConfig.claude_agent_fallback_model` to Sonnet 4.6 values, then
updates backend tests that assert the default Sonnet model strings.
### Changes 🏗️
- Switch `ChatConfig.model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.fast_model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.claude_agent_fallback_model` from
`claude-sonnet-4-20250514` to `claude-sonnet-4-6`
- Update backend Copilot tests that assert the default Sonnet model
strings
- Configuration changes:
- No new environment variables or docker-compose changes are required
- Existing `.env.default` and compose files remain compatible because
this only changes backend default model values in code
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run pytest
backend/copilot/baseline/transcript_integration_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_helpers_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_test.py`
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Changes default/fallback LLM model identifiers for Copilot requests,
which can affect runtime behavior, cost, and availability
characteristics across both baseline and SDK paths. Risk is mitigated by
being a small, config-only change with updated tests.
>
> **Overview**
> Updates Copilot backend defaults so both the standard (`model`) and
fast (`fast_model`) paths use `anthropic/claude-sonnet-4-6`, and aligns
the Claude Agent SDK fallback model to `claude-sonnet-4-6`.
>
> Adjusts related test expectations in baseline transcript integration
and SDK helper tests to match the new Sonnet 4.6 model strings.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
|
||
|
|
df205b5444 |
fix(backend/copilot): strip CLI session file to prevent auto-compaction context loss
The Claude Code CLI auto-compacts its native session JSONL when the context approaches the model's token limit (~200K for Sonnet). After compaction the detailed conversation history is replaced by a ~27K-token summary, causing the silent context loss users see as memory failures in long sessions.
Root cause identified from production logs for session 93ecf7c9:
- T6 CLI session: 233KB / ~207K tokens (near Sonnet limit)
- T7 CLI compacted session -> ~167KB / ~47K tokens (PreCompact hook missed)
- T12 second compaction -> ~176KB / ~27K tokens (just system prompt + summary)
- T14-T21: cache_read=26714 constantly -- only system prompt visible to Claude
The same stripping we already apply to our transcript (stale thinking blocks, progress/metadata entries) now also runs on the CLI native session file. At ~2x the size of the stripped transcript, unstripped sessions routinely hit the compaction threshold within 6-10 turns of a heavy Opus/thinking session.
After stripping:
- same-pod turns reuse the stripped local file (no compaction trigger)
- cross-pod turns restore the stripped GCS file (same benefit)
|
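A hedged sketch of the stripping idea: filter the CLI session JSONL down to the entries worth keeping before it is reused or uploaded. The entry types in the drop set are illustrative, and the real filter also handles stale thinking blocks:
```python
import json

# Illustrative entry kinds to drop; the real filter targets stale thinking blocks
# and progress/metadata rows, per the description above.
_DROP_TYPES = {"progress", "metadata"}


def strip_cli_session(jsonl_text: str) -> str:
    """Return the session JSONL with droppable entries removed, preserving line order."""
    kept: list[str] = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            kept.append(line)  # leave lines we can't parse untouched
            continue
        if entry.get("type") in _DROP_TYPES:
            continue
        kept.append(line)
    return "\n".join(kept) + "\n"
```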
||
|
|
4efa1c4310 |
fix(copilot): set session_id on mode-switch T1 to enable --resume on subsequent turns
When a user switches from baseline (fast) mode to SDK (extended_thinking) mode mid-session, the first SDK turn has has_history=True (prior baseline messages in DB) but no CLI session file in storage. The old code gated session_id on `not has_history`, so mode-switch T1 never received a session_id — the CLI generated a random ID that wasn't uploaded under the expected key. Every subsequent SDK turn would fail to restore the CLI session and run without --resume, injecting the full compressed history on each turn, causing model confusion.
Fix: set session_id whenever not using --resume (the `else` branch), covering T1 fresh, mode-switch T1, and T2+ fallback turns. The retry path is updated to use `"session_id" in sdk_options_kwargs` as the discriminator (instead of `not has_history`) so mode-switch T1 retries also keep the session_id, while T2+ retries (where T1 restored a session file via restore_cli_session) still remove it to avoid "Session ID already in use".
|
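A minimal sketch of the new gating, with `restore_ok` standing in for "a CLI session file was restored for `--resume`" and a simplified options dict:
```python
def build_sdk_session_options(session_id: str, restore_ok: bool) -> dict:
    """Pin the session id whenever we are not resuming a restored CLI session file.

    Covers T1 fresh sessions, mode-switch T1 (history in DB but no CLI file),
    and T2+ fallback turns where the restore failed.
    """
    if restore_ok:
        return {"resume": session_id}   # CLI resumes its own stored session
    return {"session_id": session_id}   # CLI creates/uploads under the expected key


def keep_session_id_on_retry(sdk_options_kwargs: dict) -> bool:
    """Retry discriminator: keep the pinned id iff the first attempt set one.

    Mode-switch T1 retries keep it; T2+ retries after a successful restore drop it,
    avoiding "Session ID already in use".
    """
    return "session_id" in sdk_options_kwargs
```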
||
|
|
ab3221a251 |
feat(backend): MemoryEnvelope metadata model, scoped retrieval, and memory hardening (#12765)
### Why / What / How
**Why:** CoPilot's Graphiti memory system needed structured metadata to
distinguish memory types (rules, procedures, facts, preferences),
support scoped retrieval, enable targeted deletion, and track memory
costs under the AutoPilot billing account separately from the platform.
**What:** Adds the MemoryEnvelope metadata model, structured
rule/procedure memory types, a derived-finding lane for
assistant-distilled knowledge, two-step forget tools, scope-aware
retrieval filtering, AutoPilot-dedicated API key routing, and several
reliability fixes (streaming socket leaks, event-loop-scoped caches,
ingestion hardening).
**How:** MemoryEnvelope wraps every stored episode with typed metadata
(source_kind, memory_kind, scope, status, confidence) serialized as
JSON. Retrieval filters by scope at the context layer. The forget flow
uses a search-then-confirm two-step pattern. Ingestion queues and client
caches are scoped per event loop via WeakKeyDictionary to prevent
cross-loop RuntimeErrors in multi-worker deployments. API key resolution
falls back to AutoPilot-dedicated keys (CHAT_API_KEY,
CHAT_OPENAI_API_KEY) before platform-wide keys.
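As a rough illustration of the envelope shape (not the actual module), here is a Pydantic sketch assembled from the field and category names in this description; the defaults and the `content` field are assumptions.

```python
from enum import Enum

from pydantic import BaseModel


class SourceKind(str, Enum):
    user_asserted = "user_asserted"
    assistant_derived = "assistant_derived"
    tool_observed = "tool_observed"


class MemoryKind(str, Enum):
    fact = "fact"
    preference = "preference"
    rule = "rule"
    finding = "finding"
    plan = "plan"
    event = "event"
    procedure = "procedure"


class MemoryStatus(str, Enum):
    active = "active"
    tentative = "tentative"
    superseded = "superseded"
    contradicted = "contradicted"


class MemoryEnvelope(BaseModel):
    content: str                        # the remembered text itself (assumed field)
    source_kind: SourceKind
    memory_kind: MemoryKind
    scope: str = "real:global"          # e.g. "project:<name>", "session:<id>"
    status: MemoryStatus = MemoryStatus.active
    confidence: float = 1.0


# Serialized to JSON before ingestion so retrieval can filter on the metadata:
episode_body = MemoryEnvelope(
    content="User prefers weekly summaries on Mondays",
    source_kind=SourceKind.user_asserted,
    memory_kind=MemoryKind.preference,
).model_dump_json()
```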
### Changes 🏗️
**New: MemoryEnvelope metadata model** (`memory_model.py`)
- Typed memory categories: fact, preference, rule, finding, plan, event,
procedure
- Source tracking: user_asserted, assistant_derived, tool_observed
- Scope namespacing: `real:global`, `project:<name>`, `book:<title>`,
`session:<id>`
- Status lifecycle: active, tentative, superseded, contradicted
- Structured `RuleMemory` and `ProcedureMemory` models for complex
instructions
**New: Targeted forget tools** (`graphiti_forget.py`)
- `memory_forget_search`: returns candidate facts with UUIDs for user
confirmation
- `memory_forget_confirm`: deletes specific edges by UUID after
confirmation
**New: Architecture test** (`architecture_test.py`)
- Validates no new `@cached(...)` usage around event-loop-bound async
clients
- Allowlists pre-existing violations for future cleanup
**Enhanced: memory_store tool** (`graphiti_store.py`)
- Accepts MemoryEnvelope metadata fields (source_kind, scope,
memory_kind, rule, procedure)
- Wraps content in MemoryEnvelope before ingestion
**Enhanced: memory_search tool** (`graphiti_search.py`)
- Scope-aware retrieval with hard filtering on group_id
**Enhanced: Ingestion pipeline** (`ingest.py`)
- Derived-finding lane: distills substantive assistant responses into
tentative findings
- Event-loop-scoped queues and workers via WeakKeyDictionary (fixes
multi-worker RuntimeError)
- Improved error handling and dropped-episode reporting
**Enhanced: Client cache** (`client.py`)
- Per-loop client cache and lock via WeakKeyDictionary (fixes "Future
attached to a different loop"; see the sketch after this list)
**Enhanced: Warm context** (`context.py`)
- Filters out non-global-scope episodes from warm context
**Fix: Streaming socket leak** (`baseline/service.py`)
- try/finally around async stream iteration to release httpx connections
on early exit
**Config: AutoPilot key routing** (`config.py`, `.env.default`)
- LLM key fallback: GRAPHITI_LLM_API_KEY → CHAT_API_KEY →
OPEN_ROUTER_API_KEY
- Embedder key fallback: GRAPHITI_EMBEDDER_API_KEY → CHAT_OPENAI_API_KEY
→ OPENAI_API_KEY
- Backwards-compatible: existing behavior unchanged until new keys are
provisioned
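A generic sketch of the per-event-loop caching pattern referenced above for `client.py` and `ingest.py`; `make_client` is a placeholder for the real loop-bound async client constructor.

```python
import asyncio
from weakref import WeakKeyDictionary

# One cache and one lock per running event loop, so a client created on one
# loop is never awaited from another ("Future attached to a different loop").
_clients: WeakKeyDictionary = WeakKeyDictionary()
_locks: WeakKeyDictionary = WeakKeyDictionary()


async def make_client() -> object:
    # Placeholder for the real async, loop-bound client constructor.
    return object()


async def get_client() -> object:
    loop = asyncio.get_running_loop()
    lock = _locks.setdefault(loop, asyncio.Lock())
    async with lock:
        client = _clients.get(loop)
        if client is None:
            client = await make_client()
            _clients[loop] = client
        return client
```

Because the keys are weakly referenced, entries disappear on their own once a loop is torn down, which is what keeps multi-worker deployments from accumulating stale clients.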
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/graphiti/config_test.py` — 16
tests pass (key fallback priority)
- [x] `poetry run pytest backend/copilot/tools/graphiti_store_test.py` —
store envelope tests pass
- [x] `poetry run pytest backend/copilot/graphiti/ingest_test.py` —
ingestion tests pass
- [x] `poetry run pytest backend/util/architecture_test.py` — structural
validation passes
- [x] Verify memory store/retrieve/forget cycle via copilot chat
- [x] Run AgentProbe multi-session memory benchmark (31 scenarios x3
repeats)
- [x] Confirm no CLOSE_WAIT socket accumulation under sustained
streaming load
- [x] Verify multi-worker deployment doesn't produce loop-binding errors
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- Configuration changes:
- New optional env var `CHAT_OPENAI_API_KEY` — AutoPilot-dedicated
OpenAI key for Graphiti embeddings (falls back to `OPENAI_API_KEY` if
not set)
- `CHAT_API_KEY` now used as first fallback for Graphiti LLM calls (was
`OPEN_ROUTER_API_KEY`)
- Infra action needed: add `CHAT_OPENAI_API_KEY` sealed secret in
`autogpt-shared-config` values (dev + prod)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Touches Graphiti memory ingestion/retrieval and introduces hard-delete
capabilities plus event-loop–scoped caching/queues; failures could
affect memory correctness or delete the wrong edges. Also changes
streaming resource cleanup and key routing, which could surface as
connection or billing/cost attribution issues if misconfigured.
>
> **Overview**
> **Graphiti memory is upgraded from plain text episodes to a structured
JSON `MemoryEnvelope`.** `memory_store` now wraps content with typed
metadata (source, kind, scope, status) and optional structured
`rule`/`procedure` payloads, and ingestion supports JSON episodes.
>
> **Memory retrieval and lifecycle controls are expanded.**
`memory_search` adds optional scope hard-filtering to prevent
cross-scope leakage, warm-context formatting drops non-global scoped
episodes (and avoids empty wrappers), and new two-step tools
(`memory_forget_search` → `memory_forget_confirm`) enable targeted soft-
or hard-deletion of specific graph edges by UUID.
>
> **Reliability and multi-worker safety improvements.** Graphiti client
caching and ingestion worker registries are now per-event-loop (avoiding
cross-loop `Future` errors), streaming chat completions explicitly close
async streams to prevent `CLOSE_WAIT` socket leaks, warm-context is
injected into the first user message to keep the system prompt
cacheable, and a new `architecture_test.py` blocks future process-wide
caching of event-loop–bound async clients. Config updates route Graphiti
LLM/embedder keys to AutoPilot-specific env vars first, and OpenAPI
schema exports include the new memory response types.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
autogpt-platform-beta-v0.6.56
|
||
|
|
b2f7faabc7 |
fix(backend/copilot): pre-create assistant msg before first yield to prevent last_role=tool (#12797)
## Changes
**Root cause:** When a copilot session ends with a tool result as the last saved message (`last_role=tool`), the next assistant response is never persisted. This happens when:
1. An intermediate flush saves the session with `last_role=tool` (after a tool call completes)
2. The Claude Agent SDK generates a text response for the next turn
3. The client disconnects (`GeneratorExit`) at the `yield StreamStartStep` — the very first yield of the new turn
4. `_dispatch_response(StreamTextDelta)` is never called, so the assistant message is never appended to `ctx.session.messages`
5. The session `finally` block persists the session still with `last_role=tool`
**Fix:** In `_run_stream_attempt`, after `convert_message()` returns the full list of adapter responses but *before* entering the yield loop, pre-create the assistant message placeholder in `ctx.session.messages` when:
- `acc.has_tool_results` is True (there are pending tool results)
- `acc.has_appended_assistant` is True (at least one prior message exists)
- A `StreamTextDelta` is present in the batch (confirms this is a text response turn)
This ensures that even if `GeneratorExit` fires at the first `yield`, the placeholder assistant message is already in the session and will be persisted by the `finally` block.
**Tests:** Added `session_persistence_test.py` with 7 unit tests covering the pre-create condition logic and delta accumulation behavior.
**Confirmed:** Langfuse trace `e57ebd26` for session `465bf5cf-7219-4313-a1f6-5194d2a44ff8` showed the final assistant response was logged at 13:06:49 but never reached DB — session had 51 messages with `last_role=tool`.
## Checklist
- [x] My code follows the code style of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A)
- [x] My changes generate no new warnings (Pyright warnings are pre-existing)
- [x] I have added tests that prove my fix is effective
- [x] New and existing unit tests pass locally with my changes
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com> |
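To make the pre-create condition concrete, a simplified sketch with stand-in types; the real accumulator, event classes, and session model in the service differ.

```python
from dataclasses import dataclass, field


@dataclass
class StreamTextDelta:
    text: str = ""


@dataclass
class Accumulator:
    has_tool_results: bool = False
    has_appended_assistant: bool = False


@dataclass
class Session:
    messages: list = field(default_factory=list)


def maybe_precreate_assistant(session: Session, acc: Accumulator, responses: list) -> None:
    """Append an empty assistant placeholder before the first yield, so a client
    disconnect (GeneratorExit) can no longer leave the turn with last_role=tool."""
    is_text_turn = any(isinstance(r, StreamTextDelta) for r in responses)
    if acc.has_tool_results and acc.has_appended_assistant and is_text_turn:
        session.messages.append({"role": "assistant", "content": ""})
```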
||
|
|
c9fa6bcd62 |
fix(backend/copilot): make system prompt fully static for cross-user prompt caching (#12790)
### Why / What / How
**Why:** Anthropic prompt caching keys on exact system prompt content. Two sources of per-session dynamic data were leaking into the system prompt, making it unique per session/user — causing a full 28K-token cache write (~$0.10 on Sonnet) on *every* first message for *every* session instead of once globally per model.
**What:**
1. `get_sdk_supplement` was embedding the session-specific working directory (`/tmp/copilot-<uuid>`) in the system prompt text. Every session has a different UUID, making every session's system prompt unique, blocking cross-session cache hits.
2. Graphiti `warm_ctx` (user-personalised memory facts fetched on the first turn) was appended directly to the system prompt, making it unique per user per query.
**How:**
- `get_sdk_supplement` now uses the constant placeholder `/tmp/copilot-<session-id>` in the supplement text and memoizes the result. The actual `cwd` is still passed to `ClaudeAgentOptions.cwd` so the CLI subprocess uses the correct session directory.
- `warm_ctx` is now injected into the first user message as a trusted `<memory_context>` block (prepended before `inject_user_context` runs), following the same pattern already used for business understanding. It is persisted to DB and replayed correctly on `--resume`.
- `sanitize_user_supplied_context` now also strips user-supplied `<memory_context>` tags, preventing context-spoofing via the new tag.
After this change the system prompt is byte-for-byte identical across all users and sessions for a given model.
### Changes 🏗️
- `backend/copilot/prompting.py`: `get_sdk_supplement` ignores `cwd` and uses a constant working-directory placeholder; result is memoized in `_LOCAL_STORAGE_SUPPLEMENT`.
- `backend/copilot/sdk/service.py`: `warm_ctx` is saved to a local variable instead of appended to `system_prompt`; on the first turn it is prepended to `current_message` as a `<memory_context>` block before `inject_user_context` is called.
- `backend/copilot/service.py`: `sanitize_user_supplied_context` extended to strip `<memory_context>` blocks alongside `<user_context>`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/prompting_test.py backend/copilot/prompt_cache_test.py` — all passed
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)
---------
Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>
autogpt-platform-beta-v0.6.55 |
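As one concrete piece of this, the sanitizer extension could look roughly like the following regex-based sketch; the tag list and function body are assumptions, not the actual `sanitize_user_supplied_context` implementation.

```python
import re

# Trusted-context tags that user input must never be able to smuggle in.
_TRUSTED_TAGS = ("user_context", "memory_context")


def sanitize_user_supplied_context(text: str) -> str:
    for tag in _TRUSTED_TAGS:
        # Remove well-formed blocks first, then any stray opening/closing tags.
        text = re.sub(rf"<{tag}>.*?</{tag}>", "", text, flags=re.DOTALL)
        text = re.sub(rf"</?{tag}>", "", text)
    return text
```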
||
|
|
c955b3901c |
fix(frontend/copilot): load older chat messages reliably and preserve scrollback across turns (#12792)
### Why / What / How Fixes two SECRT-2226 bugs in copilot chat pagination. **Bug 1 — can't load older messages when the newest page fits on screen.** The `IntersectionObserver` in `LoadMoreSentinel` bailed when `scrollHeight <= clientHeight`, which happens routinely once reasoning + tool groups collapse. With no scrollbar and no button, users were stuck. Fix: remove the guard, cap auto-fill at 3 non-scrollable rounds (keeps the original anti-loop intent), and add a manual "Load older messages" button as the always-available escape hatch. **Bug 2 — older loaded pages vanish after a new turn, then reloading them produces duplicates.** After each stream `useCopilotStream` invalidates the session query; the refetch returns a shifted `oldest_sequence`, which `useLoadMoreMessages` used as a signal to wipe `olderRawMessages` and reset the local cursor. Scroll-back history was lost on every turn, and the next load fetched a page that overlapped with AI SDK's retained `currentMessages` — the "loops" users reported. Fix: once any older page is loaded, preserve `olderRawMessages` and the local cursor across same-session refetches. Only reset on session change. The gap between the new initial window and older pages is covered by AI SDK's retained state. ### Changes 🏗️ - `ChatMessagesContainer.tsx`: drop the scrollability guard; add `MAX_AUTO_FILL_ROUNDS = 3` counter; add "Load older messages" button (`ghost`/`small`); distinguish observer-triggered vs. button-triggered loads so the button bypasses the cap; export `LoadMoreSentinel` for testing. - `useLoadMoreMessages.ts`: remove the wipe-and-reset branch on `initialOldestSequence` change; preserve local state mid-session; still mirror parent's cursor while no older page is loaded. - New integration test `__tests__/LoadMoreSentinel.test.tsx`. No backend changes. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Short/collapsed newest page: "Load older messages" button loads older pages, preserves scroll - [x] Full-viewport newest page: scroll-to-top auto-pagination still works (no regression) - [x] `has_more_messages=false` hides the button; `isLoadingMore=true` shows spinner instead - [x] Bug 2 reproduced locally with temporary `limit=5`: before fix older page vanished and next load duplicated AI SDK messages; after fix older page stays and next load fetches cleanly further back - [x] `pnpm format`, `pnpm lint`, `pnpm types`, `pnpm test:unit` all pass (1208/1208) #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) — N/A --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |