mirror of
https://github.com/Significant-Gravitas/AutoGPT.git
synced 2026-04-30 03:00:41 -04:00
577b1de8351e1c0ca4e85875396783b877bd338a
8406 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
577b1de835 | test(frontend/builder): add NodeHeader component tests for title display and editing | ||
|
|
e47e04c1ac | fix(frontend/builder): guard handleTitleEdit no-op save, add getNodeDisplayName tests | ||
|
|
0887c7a858 |
refactor(frontend): DRY up getNodeDisplayName by delegating to getNodeDisplayTitle
getNodeDisplayName duplicated the 3-tier fallback logic already in getNodeDisplayTitle. Delegate to the canonical helper to keep one source of truth. |
||
|
|
5c72ee8225 |
fix(frontend/builder): address PR review — extract shared title helper, guard useEffect, add tests
- Extract getNodeDisplayTitle/formatNodeDisplayTitle into CustomNode/helpers.ts with 3-tier fallback: customized_name > agent_name+version > block title - Add isEditingTitle guard to useEffect so edits aren't reset mid-typing - Extract displayTitle const to remove duplicated ternary in text/tooltip - Update GraphContent, BuilderChatPanel/helpers, useGraphMenuSearchBar to use the agent_name fallback so all title consumers are consistent - Add Vitest tests covering all 3 tiers and formatting behavior (10 tests) - Add code comments explaining customized_name vs agent_name distinction |
||
|
|
a85925782b | Merge remote-tracking branch 'origin/dev' into fix/agent-executor-name-display | ||
|
|
6d770d9917 |
fix(platform/copilot): revert forward pagination, add visibility guarantee for blank chat (#12831)
## Why / What / How **Why:** PR #12796 changed completed copilot sessions to load messages from sequence 0 forward (ascending), which broke the standard chat UX — users now land at the beginning of the conversation instead of the most recent messages. Reported in Discord. **What:** Reverts the forward pagination approach and replaces it with a visibility guarantee that ensures every page contains at least one user/assistant message. **How:** - **Backend**: Removed after_sequence, from_start, forward_paginated, newest_sequence — always use backward (newest-first) pagination. Added _expand_for_visibility() helper: after fetching, if the entire page is tool messages (invisible in UI), expand backward up to 200 messages until a visible user/assistant message is found. - **Frontend**: Removed all forwardPaginated/newestSequence plumbing from hooks and components. Removed bottom LoadMoreSentinel. Simplified message merge to always prepend paged messages. ### Changes - routes.py: Reverted to simple backward pagination, removed TOCTOU re-fetch logic - db.py: Removed forward mode, extracted _expand_tool_boundary() and added _expand_for_visibility() - SessionDetailResponse: Removed newest_sequence and forward_paginated fields - openapi.json: Removed after_sequence param and forward pagination response fields - Frontend hooks/components: Removed forward pagination props and logic (-1000 lines) - Updated all tests (backend: 63 pass, frontend: 1517 pass) ### Checklist - [x] I have clearly listed my changes in the PR description - [x] Backend unit tests: 63 pass - [x] Frontend unit tests: 1517 pass - [x] Frontend lint + types: clean - [x] Backend format + pyright: clean |
||
|
|
334ec18c31 |
docs: convert in-code comments to MkDocs admonitions in block-sdk-gui… (#12819)
### Why / What / How <!-- Why: Why does this PR exist? What problem does it solve, or what's broken/missing without it? --> This PR converts inline Python comments in code examples within `block-sdk-guide.md` into MkDocs `!!! note` admonitions. This makes code examples cleaner and more copy-paste friendly while preserving all explanatory content. <!-- What: What does this PR change? Summarize the changes at a high level. --> Converts inline comments in code blocks to admonitions following the pattern established in PR #12396 (new_blocks.md) and PR #12313. <!-- How: How does it work? Describe the approach, key implementation details, or architecture decisions. --> - Wrapped code examples with `!!! note` admonitions - Removed inline comments from code blocks for clean copy-paste - Added explanatory admonitions after each code block ### Changes 🏗️ - Provider configuration examples (API key and OAuth) - Block class Input/Output schema annotations - Block initialization parameters - Test configuration - OAuth and webhook handler implementations - Authentication types and file handling patterns ### Checklist 📋 #### For documentation changes: - [x] Follows the admonition pattern from PR #12396 - [x] No code changes, documentation only - [x] Admonition syntax verified correct #### For configuration changes: - [ ] `.env.default` is updated or already compatible with my changes - [ ] `docker-compose.yml` is updated or already compatible with my changes --- **Related Issues**: Closes #8946 Co-authored-by: slepybear <slepybear@users.noreply.github.com> Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> |
||
|
|
ea5cfdfa2e |
fix(frontend): remove debug console.log statements (#12823)
## Why Debug console.log statements were left in production code, which can leak sensitive information and pollute browser developer consoles. ## What Removed console.log from 4 non-legacy frontend components: - useNavbar.ts: isLoggedIn debug log - WalletRefill.tsx: autoRefillForm debug log - EditAgentForm.tsx: category field debug log - TimezoneForm.tsx: currentTimezone debug log ## How Simply deleted the console.log lines as they served no purpose other than debugging during development. ## Checklist - [x] Code follows project conventions - [x] Only frontend changes (4 files, 6 lines removed) - [x] No functionality changes Co-authored-by: slepybear <slepybear@users.noreply.github.com> |
||
|
|
d13a85bef7 |
feat(frontend): surface scheduled agents in library & copilot briefings (#12818)
## Why
Scheduled agents weren't well-surfaced in the Library and Copilot
briefings:
- The Library fleet summary didn't count agents that are scheduled
purely via the scheduler (only those with a `recommended_schedule_cron`
set at the agent level).
- Sitrep items didn't distinguish scheduled or listening (trigger-based)
agents, so they often fell back to a generic "idle" state.
- Scheduled chips showed a generic message with no indication of when
the next run would happen.
- The Copilot Agent Briefing surfaced every scheduled agent regardless
of how far out the next run was — an agent scheduled a month away would
take a slot from something actually happening soon.
- Long sitrep messages overflowed the row.
## What
- Add `is_scheduled` to `LibraryAgent` (sourced from the scheduler) so
the frontend can reliably detect schedule-only agents.
- Count scheduled agents in `useLibraryFleetSummary`.
- Include scheduled and listening agents in sitrep items, with a
priority ordering (error → running → stale → success → listening →
scheduled → idle).
- Show a relative next-run time on scheduled sitrep chips (e.g.
"Scheduled to run in 2h" / "in 3d").
- Filter the Copilot Agent Briefing to scheduled agents whose next run
is within the next 3 days.
- Truncate long sitrep messages to 1 line with `OverflowText` and show
the full text in a tooltip on hover.
## How
- Scheduler → `LibraryAgent` mapping populates `is_scheduled` /
`next_scheduled_run`.
- `useSitrepItems` gains an optional `scheduledWithinMs` parameter.
Copilot's `usePulseChips` passes `3 * 24 * 60 * 60 * 1000`; the Library
briefing omits it to keep its existing (unbounded) behavior.
- Scheduled config-based sitrep items are skipped when
`next_scheduled_run` is missing or outside the window.
- `SitrepItem` wraps the message in `OverflowText` so a single-line
ellipsis + hover tooltip replaces raw overflow.
## Test plan
- [ ] `/library` — scheduled and listening agents appear in the sitrep
with accurate copy; fleet summary counts scheduled agents correctly;
long messages truncate with a tooltip on hover.
- [ ] `/copilot` — on an empty session with the `AGENT_BRIEFING` flag
on, the briefing only shows scheduled agents whose next run is within 3
days; agents scheduled further out no longer appear as "scheduled"
chips.
- [ ] Scheduled chip text reads "Scheduled to run in {Nm|Nh|Nd}"
matching `next_scheduled_run`.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
60b85640e7 |
fix(backend/copilot): replace dedup lock with idempotent append_and_save_message (#12814)
## Why
The Redis dedup lock (`chat:msg_dedup:{session}:{content_hash}`, 30s
TTL) was solving the wrong problem:
- Its purpose: block infra/nginx retries from calling
`append_and_save_message` twice after a client disconnect, writing a
duplicate user message to the DB.
- The approach: deliberately hold the lock for 30s on `GeneratorExit`.
- Why unnecessary: the executor's cluster lock already prevents
duplicate *execution*. The only real gap was duplicate *DB writes* in
the ~1s before the executor picks up the turn.
## What
- **Deleted** `message_dedup.py` and `message_dedup_test.py` (~150 lines
removed).
- **Removed** all dedup lock code from `routes.py` (~40 lines removed).
- **`append_and_save_message`** is now idempotent and self-contained:
- Uses redis-py's built-in `Lock(timeout=10, blocking_timeout=2)` —
Lua-script atomic acquire/release, no manual poll/sleep loop.
- Lock context manager yields `bool` (`True` = acquired, `False` =
degraded). When degraded (Redis down or 2s timeout), reads from DB
directly instead of cache to avoid stale-state duplicates.
- Idempotency check: if `session.messages[-1]` already matches the
incoming role+content, returns `None` instead of the session.
- Lock released explicitly as soon as the write completes; `try/except`
in `finally` so a cleanup error after a successful write never surfaces
a false 500.
- On cache-write failure, the stale cache entry is invalidated so future
reads fall back to the authoritative DB.
- **`routes.py`** uses the `None` signal: `is_duplicate_message = (await
append_and_save_message(...)) is None`
- Skips `create_session` and `enqueue_copilot_turn` for duplicates —
client re-attaches to the existing turn's Redis stream.
- `track_user_message` and `turn_id` generation only happen when
`is_duplicate_message` is false.
- **`subscribe_to_session`** retry window increased from 1×50ms to
3×100ms — covers the window where a duplicate request subscribes before
the original's `create_session` hset completes.
- **Cleaned up** `routes_test.py`: removed 5 dedup-specific tests and
the `mock_redis` setup from `_mock_stream_internals`; added
duplicate-skips-enqueue test.
## How
The idempotency guard distinguishes legit same-text messages from
retries via the **assistant turn between them**: if the user said "yes",
got a response, and says "yes" again, `session.messages[-1]` is the
assistant reply, so the role check fails and the second message goes
through. A retry (no response yet) sees the user message as the last
entry and is blocked.
```python
if (
session.messages
and session.messages[-1].role == message.role
and session.messages[-1].content == message.content
):
return None # duplicate — caller skips enqueue
```
The Redis lock ensures this check always sees authoritative state even
in multi-replica deployments. When the lock is unavailable (Redis down
or contention), reading from DB directly (bypassing potentially stale
cache) provides the same safety guarantee at the cost of a DB
round-trip.
## Checklist
- [x] PR targets `dev`
- [x] Conventional commit title with scope
- [x] Tests added/updated (duplicate detection, lock degradation, DB
error, cache invalidation paths)
- [x] `poetry run format` and `poetry run pyright` pass clean
- [x] No new linter suppressors
|
||
|
|
87e4d42750 |
fix(backend/copilot): fix initial load missing messages + forward pagination for completed sessions (#12796)
### Why / What / How **Why:** Completed copilot sessions with many messages showed a completely empty chat view. A user reported a 158-message session that appeared blank on reload. **What:** Two bugs fixed: 1. **Backend** — initial page load always returned the newest 50 messages in DESC order. For sessions heavy in tool calls, the user's original messages (seq 0–5) were never included; all 50 slots consumed by mid-session tool outputs. 2. **Frontend** — convertChatSessionToUiMessages silently dropped user messages with null/empty content. **How:** For completed sessions (no active stream), the backend now loads from sequence 0 in ASC order. Active/streaming sessions keep newest-first for streaming context. A new after_sequence forward cursor enables infinite-scroll for subsequent pages (sentinel moves to bottom). The frontend wires forward_paginated + newest_sequence end-to-end. ### Changes 🏗️ - db.py: added from_start (ASC) and after_sequence (forward cursor) modes; added newest_sequence to PaginatedMessages - routes.py: detect completed vs active on initial load; pass from_start=True for completed; expose newest_sequence + forward_paginated; accept after_sequence param - convertChatSessionToUiMessages.ts: never drop user messages with empty content - useLoadMoreMessages.ts: forward pagination via after_sequence; append pages to end - ChatMessagesContainer.tsx: LoadMoreSentinel at bottom for forward-paginated sessions - Wire newestSequence + forwardPaginated end-to-end through useChatSession/useCopilotPage/ChatContainer - openapi.json: add after_sequence + newest_sequence/forward_paginated; regenerate types - db_test.py: 9 new unit tests for from_start and after_sequence modes ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Open a completed session with many messages — first user message visible on initial load - [x] Scroll to bottom of completed session — load more appends next page - [x] Open active/streaming session — newest messages shown first, streaming unaffected - [x] Backend unit tests: all 28 pass - [x] Frontend lint/format: clean, no new type errors --------- Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com> Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
||
|
|
0339d95d12 |
fix(frontend): small UI fixes, sort menu bg, name update auth, stats grid overflow, pulse chips (#12815)
## Summary - **LibrarySortMenu / AgentFilterMenu**: Force `!bg-transparent` and neutralise legacy `SelectTrigger` styles (`m-0.5`, `ring-offset-white`, `shadow-sm`) that caused a white background around the trigger - **EditNameDialog**: Replace client-side `supabase.auth.updateUser()` with server-side `PUT /api/auth/user` route — fixes "Auth session missing!" error caused by `httpOnly` cookies being inaccessible to browser JS - **StatsGrid**: Swap label `Text` for `OverflowText` so tile labels truncate with `…` and show a tooltip instead of wrapping when the grid is squeezed - **PulseChips**: Set fixed `15rem` chip width with `shrink-0`, horizontal scroll, and styled thin scrollbar - **Tests**: Updated `EditNameDialog` tests to use MSW instead of mocking Supabase client; added 7 new `PulseChips` integration tests ## Test plan - [x] `pnpm test:unit` — all 1495 tests pass (91 files) - [x] `pnpm format && pnpm lint` — clean - [x] `pnpm types` — no new errors (pre-existing only) - [ ] QA `/library?sort=updatedAt` — sort menu trigger has no white bg - [ ] QA `/library` — StatsGrid labels truncate with tooltip on narrow viewports - [ ] QA `/copilot` — PulseChips scroll horizontally at fixed width - [ ] QA `/copilot` — Edit name dialog saves successfully (no "Auth session missing!") 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
f410929560 |
feat(platform): Add xAI Grok 4.20 models from OpenRouter (#12620)
Requested by @Torantulino Adds the 2 xAI Grok 4.20 models available on OpenRouter that are missing from the platform. ## Why `x-ai/grok-4.20` and `x-ai/grok-4.20-multi-agent` are xAI's current flagship models (released March 2026) and are available via OpenRouter, but weren't accessible from the platform's LLM blocks. ## Changes **`autogpt_platform/backend/backend/blocks/llm.py`** - Added `GROK_4_20` and `GROK_4_20_MULTI_AGENT` enum members - Added corresponding `MODEL_METADATA` entries (open_router provider, 2M context window, price tier 3) **`autogpt_platform/backend/backend/data/block_cost_config.py`** - Added `MODEL_COST` entries at 5 credits each (flagship tier, $2/M in) **`docs/integrations/block-integrations/llm.md`** - Added new model IDs to all LLM block tables | Model | Pricing | Context | |-------|---------|---------| | `x-ai/grok-4.20` | $2/M in, $6/M out | 2M | | `x-ai/grok-4.20-multi-agent` | $2/M in, $6/M out | 2M | Both models use the standard OpenRouter chat completions API — no special handling needed. Resolves: SECRT-2196 --------- Co-authored-by: Torantulino <22963551+Torantulino@users.noreply.github.com> Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com> Co-authored-by: Otto (AGPT) <otto@agpt.co> |
||
|
|
2bbec09e1a |
feat(platform): subscription tier billing via Stripe Checkout (#12727)
## Why Introducing paid subscription tiers (PRO, BUSINESS) so we can charge for AutoPilot capacity beyond the free tier. Without a billing integration, all users share the same rate limits regardless of their willingness to pay for additional capacity. ## What End-to-end subscription billing system using Stripe Checkout Sessions: **Backend:** - `SubscriptionTier` enum (`FREE`, `PRO`, `BUSINESS`, `ENTERPRISE`) on the `User` model - `POST /credits/subscription` — creates a Stripe Checkout Session for paid upgrades; for FREE tier or when `ENABLE_PLATFORM_PAYMENT` is off, sets tier directly - `GET /credits/subscription` — returns current tier, monthly cost (cents), and all tier costs - `POST /credits/stripe_webhook` — handles `customer.subscription.created/updated/deleted`, `checkout.session.completed`, `charge.dispute.*`, `refund.created` - `sync_subscription_from_stripe()` — keeps `User.subscriptionTier` in sync from webhook events; guards against out-of-order delivery (cancelled event after new sub created), ENTERPRISE overwrite, and duplicate webhook replay - Open-redirect protection on `success_url`/`cancel_url` via `_validate_checkout_redirect_url()` - `_cancel_customer_subscriptions()` — cancels both active and trialing subs; propagates errors so callers can avoid updating DB tier on Stripe failure - `_cleanup_stale_subscriptions()` — best-effort cancellation of old subs when a new one becomes active (paid-to-paid upgrade), to prevent double-billing - `get_stripe_customer_id()` with idempotency key to prevent duplicate Stripe customers on concurrent requests - `cache_none=False` sentinel fix in `@cached` decorator so Stripe price lookups retry on transient error instead of poisoning the cache with `None` - Stripe Price IDs read from LaunchDarkly (`stripe-price-id-pro`, `stripe-price-id-business`). If not configured, upgrade returns 422. **Frontend:** - `SubscriptionTierSection` component on billing page: tier cards (FREE/PRO/BUSINESS), upgrade/downgrade buttons, per-tier cost display, Stripe redirect on upgrade - Confirmation dialog for downgrades - ENTERPRISE users see a read-only admin-managed banner - Success toast on return from Stripe Checkout (`?subscription=success`) - Uses generated `useGetSubscriptionStatus` / `useUpdateSubscriptionTier` hooks ## How - Paid upgrades use Stripe Checkout Sessions (not server-side subscription creation) — Stripe handles PCI-compliant card collection and the subscription lifecycle - Tier is synced back via webhook on `customer.subscription.created/updated/deleted` - Downgrade to FREE cancels via Stripe API immediately; a `stripe.StripeError` during cancellation returns 502 with a generic message (no Stripe detail leakage) - LaunchDarkly flags: `stripe-price-id-pro` (string), `stripe-price-id-business` (string), `enable-platform-payment` (bool) - `ENABLE_PLATFORM_PAYMENT=false` bypasses Stripe for beta/internal access (sets tier directly) ## Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] `ENABLE_PLATFORM_PAYMENT=false` → tier change updates directly, no Stripe redirect - [x] `ENABLE_PLATFORM_PAYMENT=true` with price IDs configured → paid upgrade redirects to Stripe Checkout - [x] Stripe webhook `customer.subscription.created` → `User.subscriptionTier` updated - [x] Unrecognised price ID in webhook → logs warning, tier unchanged - [x] ENTERPRISE user webhook event → tier not overwritten - [x] Empty `STRIPE_WEBHOOK_SECRET` → 503 (prevents HMAC bypass) - [x] Open-redirect attack on `success_url`/`cancel_url` → 422 #### For configuration changes: - [x] No `.env` or `docker-compose.yml` changes - [x] LaunchDarkly flags to create: `stripe-price-id-pro` (string), `stripe-price-id-business` (string), `enable-platform-payment` (bool) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Co-authored-by: majdyz <majdy.zamil@gmail.com> |
||
|
|
31b88a6e56 |
feat(frontend): add Agent Briefing Panel (#12764)
## Summary <img width="800" height="772" alt="Screenshot_2026-04-13_at_18 29 19" src="https://github.com/user-attachments/assets/3da6eaf2-1485-4c08-9651-18f2f4220eba" /> <img width="800" height="285" alt="Screenshot_2026-04-13_at_18 29 24" src="https://github.com/user-attachments/assets/6a5f981a-1e1d-4d22-a33d-9e1b0e7555a7" /> <img width="800" height="288" alt="Screenshot_2026-04-13_at_18 29 27" src="https://github.com/user-attachments/assets/f97b4611-7c23-4fc9-a12d-edf6314a77ef" /> <img width="800" height="433" alt="Screenshot_2026-04-13_at_18 29 31" src="https://github.com/user-attachments/assets/e6d7241d-84f3-4936-b8cd-e0b12df392bb" /> <img width="700" height="554" alt="Screenshot_2026-04-13_at_18 29 40" src="https://github.com/user-attachments/assets/92c08f21-f950-45cd-8c1d-529905a6e85f" /> Implements the Agent Intelligence Layer — real-time agent awareness across the Library and Copilot pages. ### Core Features - **Agent Briefing Panel** — stats grid with fleet-wide counts (running, recently completed, needs attention, scheduled, idle, monthly spend) and tab-driven content below - **Enhanced Library Cards** — StatusBadge, run counts, contextual action buttons (See tasks, Start, Chat) with consistent icon-left styling - **Situation Report Items** — prioritized sitrep with error-first ranking, "See task" deep-links for completed runs, and "Ask AutoPilot" bridge - **Home Pulse Chips** — agent status chips on Copilot empty state with hover-reveal actions (slide-up animation + backdrop blur on desktop, always visible on touch) - **Edit Display Name** — pencil icon on Copilot greeting to update Supabase user metadata inline ### Backend - **Execution count API** — batch `COUNT(*)` query on `AgentGraphExecution` grouped by `agentGraphId` for the current user, avoiding loading full execution rows. Wired into `list_library_agents` and `list_favorite_library_agents` via `execution_count_override` on `LibraryAgent.from_db()` ### UI Polish - Subtler gradient on AgentBriefingPanel (reduced opacity on background + animated border) - Consistent button styles across all action buttons (icon-left, same sizing) - Removed duplicate "Open in builder" menu item (kept "Edit agent") - "Recently completed" tab replaces "Listening" in briefing panel, showing agents with completed runs in last 72h ## Changes ### Backend - `backend/api/features/library/db.py` — added `_fetch_execution_counts()` batch COUNT query, wired into list endpoints - `backend/api/features/library/model.py` — added `execution_count_override` param to `LibraryAgent.from_db()` ### Frontend — New files - `EditNameDialog/EditNameDialog.tsx` — modal to update display name via Supabase auth - `PulseChips/PulseChips.module.css` — hover-reveal animation + glass panel styles ### Frontend — Modified files - `EmptySession.tsx` — added EditNameDialog and PulseChips - `PulseChips.tsx` — redesigned with See/Ask buttons, hover overlay on desktop - `usePulseChips.ts` — added agentID for deep-linking - `AgentBriefingPanel.tsx` — subtler gradient, adjusted padding - `AgentBriefingPanel.module.css` — reduced conic gradient opacity - `BriefingTabContent.tsx` — added "completed" tab routing - `StatsGrid.tsx` — replaced Listening with Recently completed, reordered tabs - `SitrepItem.tsx` — consistent button styles, "See task" link for completed items, updated copilot prompt - `ContextualActionButton.tsx` — icon-left, smaller icon, renamed Run to Start - `LibraryAgentCard.tsx` — icon-left on all buttons, EyeIcon for See tasks - `AgentCardMenu.tsx` — removed duplicate "Open in builder" - `useAgentStatus.ts` — added completed count to FleetSummary - `useLibraryFleetSummary.ts` — added recent completion tracking - `types.ts` — added `completed` to FleetSummary and AgentStatusFilter ## Test plan - [ ] Library page renders Agent Briefing Panel with stats grid - [ ] "Recently completed" tab shows agents with completed runs in last 72h - [ ] Agent cards show real execution counts (not 0) - [ ] Action buttons have consistent styling with icon on the left - [ ] "See task" on completed items deep-links to agent page with execution selected - [ ] "Ask AutoPilot" generates last-run-specific prompt for completed items - [ ] Copilot empty state shows PulseChips with hover-reveal actions on desktop - [ ] PulseChips show See/Ask buttons always on touch screens - [ ] Pencil icon on greeting opens edit name dialog - [ ] Name update persists via Supabase and refreshes greeting - [ ] `pnpm format && pnpm lint && pnpm types` pass - [ ] `poetry run format` passes for backend changes 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: John Ababseh <jababseh7@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Bentlybro <Github@bentlybro.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Co-authored-by: CodeRabbit <noreply@coderabbit.ai> Co-authored-by: majdyz <zamil.majdy@agpt.co> |
||
|
|
d357956d98 |
refactor(backend/copilot): make session-file helper fns public to fix Pyright warnings (#12812)
## Why After PR #12804 was squashed into dev, two module-level helper functions in `backend/copilot/sdk/service.py` remained private (`_`-prefixed) while being directly imported by name in `sdk/transcript_test.py`. Pyright reports `reportAttributeAccessIssue` when tests (even those excluded from CI lint) import private symbols from outside their defining module. ## What Rename two helpers to remove the underscore prefix: - `_process_cli_restore` → `process_cli_restore` - `_read_cli_session_from_disk` → `read_cli_session_from_disk` Update call sites in `service.py` and imports/calls/docstrings in `sdk/transcript_test.py`. ## How Pure rename — no logic change. Both functions were already module-level helpers with no reason to be private; the underscore was convention carried over during the refactor but they are directly unit-tested and should be public. All 66 `sdk/transcript_test.py` tests pass after the rename. ## Checklist - [x] Tests pass (`poetry run pytest backend/copilot/sdk/transcript_test.py`) - [x] No `_`-prefixed symbols imported across module boundaries - [x] No linter suppressors added |
||
|
|
697ffa81f0 | fix(backend/copilot): update transcript_test to use strip_for_upload after upload_cli_session removal | ||
|
|
2b4727e8b2 |
chore: merge master into dev, resolve baseline/transcript conflicts
Conflicts in baseline/service.py, baseline/transcript_integration_test.py,
and transcript.py arose because dev-only commit
|
||
|
|
0d4b31e8a1 |
refactor(backend/copilot): unified transcript context — extract_context_messages, mode-gated --resume, compaction-aware gap-fill (#12804)
### Why / What / How
**Why:** The copilot had two separate GCS paths (`cli-sessions/` and
`chat-transcripts/`), redundant function names
(`upload_cli_session`/`restore_cli_session`), and no shared context
strategy between modes. When switching from baseline→SDK or
SDK→baseline, the receiving mode discarded the stored transcript and
fell back to full DB reconstruction — loading all raw messages instead
of the compacted form — causing inflated context, wasted tokens, and
loss of CLI compaction summaries.
**What:**
- Single GCS path (`cli-sessions/`) for both modes — `chat-transcripts/`
removed
- Unified public API: `upload_transcript` / `download_transcript` /
`TranscriptDownload`
- `TranscriptMode = Literal["sdk", "baseline"]` persisted in
`.meta.json` — SDK skips `--resume` when `mode != "sdk"`
(baseline-written JSONL has stripped fields / synthetic IDs)
- `extract_context_messages(download, session_messages)` — shared
context primitive used by **both SDK and baseline**: reads compacted
transcript content + fills only the DB gap (messages after watermark),
so CLI compaction summaries are preserved across mode switches
- Watermark fix: `_jsonl_covered = transcript_msg_count + 2` when a real
transcript is present, preventing false gap detection after `--resume`
- Baseline gap-fill: `_append_gap_to_builder` converts `ChatMessage` →
JSONL entries; no more silently discarded stale transcripts
**How:**
```
SDK turn (mode="sdk" transcript available):
──► --resume [full CLI session restored natively]
──► inject gap prefix if DB has messages after watermark
SDK turn (mode="baseline" transcript available):
──► cannot --resume (synthetic CLI IDs)
──► extract_context_messages(download, session_messages):
returns transcript JSONL (compacted, isCompactSummary preserved) + gap
excludes session_messages[-1] (current turn — caller injects it separately)
──► format as <conversation_history> + "Now, the user says: {current}"
Baseline turn (any transcript):
──► _load_prior_transcript → TranscriptDownload
──► extract_context_messages(download, session_messages) + session_messages[-1]
replaces full session.messages DB read
──► LLM messages: [compacted history + gap] + [current user turn]
Transcript unavailable — both SDK (use_resume=False) and baseline:
──► extract_context_messages(None, session_messages) returns session_messages[:-1]
(all prior DB messages except the current user turn at [-1])
──► graceful fallback — no crash, no empty context
──► covers: first turn, GCS error, corrupt JSONL, missing .meta.json
──► next successful response uploads a fresh transcript
```
`extract_context_messages` is the shared primitive — both modes call the
same function, which handles:
- `download=None` (first turn, GCS unavailable) → falls back to
`session_messages[:-1]`
- Empty/corrupt content → falls back to `session_messages[:-1]`
- `bytes` content (raw GCS) or `str` content (pre-decoded baseline path)
- `isCompactSummary=True` entries → preserved so CLI compaction survives
mode switches
- Missing/corrupt `.meta.json` → `message_count` defaults to `0`, `mode`
defaults to `"sdk"`
**Why `[:-1]` and not all messages?** `session_messages[-1]` is always
the current user turn being handled right now. Both callers inject it
separately — SDK wraps it as `"Now, the user says: ..."`, baseline
appends it as the final message in the LLM array. Returning it inside
`extract_context_messages` would double-inject it.
### Changes 🏗️
- **`transcript.py`**: `CliSessionRestore` → `TranscriptDownload` +
`mode` field; `upload_cli_session` → `upload_transcript`;
`restore_cli_session` → `download_transcript`; add `TranscriptMode`,
`detect_gap`, `extract_context_messages`; import `ChatMessage` via
relative path to match `service.py` style
- **`sdk/service.py`**: mode-check before `--resume`; `_RestoreResult`
carries `baseline_download` + `context_messages` + `transcript_content`;
`_build_query_message` accepts `prior_messages` override;
`_restore_cli_session_for_turn` populates `context_messages` via
`extract_context_messages` and sets `transcript_content` to prevent
duplicate DB reconstruction; watermark fix (`_jsonl_covered =
transcript_msg_count + 2`)
- **`baseline/service.py`**: `_load_prior_transcript` returns `(bool,
TranscriptDownload | None)`; LLM context replaced with
`extract_context_messages(download, messages)`; `_append_gap_to_builder`
+ `detect_gap` call; `upload_transcript(mode="baseline")`
- **`sdk/transcript.py`**: updated re-exports, old aliases removed
- **`scripts/download_transcripts.py`**: updated for `bytes | str`
content type
- **Test files**: 179 tests total; `transcript_test.py`,
`baseline/transcript_integration_test.py`,
`sdk/service_helpers_test.py`, `sdk/test_transcript_watermark.py`,
`test/copilot/test_transcript_watermark.py` all updated/added
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] 179 unit tests pass — `transcript_test`,
`baseline/transcript_integration_test`, `sdk/service_helpers_test`,
`sdk/test_transcript_watermark`
- [x] pyright 0 errors on all changed files
- [x] SDK `--resume` path still works when `mode="sdk"` transcript is
present
- [x] SDK fallback uses `extract_context_messages` (compacted baseline
content + gap) when `mode="baseline"` transcript is stored — no more
full DB reconstruction
- [x] Baseline uses `extract_context_messages` per turn instead of full
`session.messages` DB read
- [x] `isCompactSummary=True` entries preserved across mode switches
- [x] Watermark (`_jsonl_covered`) fix prevents false gap detection
after `--resume`
- [x] Baseline gap detection no longer silently discards stale
transcripts
- [x] `TranscriptDownload.content` accepts `bytes | str` — backward
compatible
- [x] Transcript unavailable (GCS error, first turn, corrupt file)
gracefully falls back to `session_messages[:-1]` without crash — applies
to both SDK and baseline paths
---------
Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
|
||
|
|
0cd0a76305 |
fix(backend/copilot): baseline always uploads when GCS has no transcript
_load_prior_transcript was returning False for missing/invalid transcripts, which caused should_upload_transcript to suppress the upload. The original intent was to protect against overwriting a *newer* GCS version — but a missing or corrupt file is not 'newer'. Only stale (watermark ahead) and download errors (unknown GCS state) should suppress upload. Also renames transcript_covers_prefix → transcript_upload_safe throughout to accurately describe what the flag means. |
||
|
|
d01a51be0e |
Add check for GitHub account connection status (#12807)
Added instruction to check GitHub authentication status before prompting user. This prevents repeated, unnecessary asking of the user to add their GitHub credentials when they're already added, which is currently a prevalent bug. ### Changes 🏗️ - Added one line to `autogpt_platform/backend/backend/copilot/prompting.py` instructing AutoPilot to run `gh auth status` before prompting the user to connect their GitHub account. Co-authored-by: Toran Bruce Richards <22963551+Torantulino@users.noreply.github.com> |
||
|
|
bd2efed080 |
fix(frontend): allow zooming out more in the builder (#12690)
Reduced minZoom on the builder canvas from 0.1 to 0.05 to allow zooming out further when working with large agent graphs. Fixes #9325 Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
||
|
|
5feee01450 | style(frontend/builder): fix Prettier formatting in NodeHeader | ||
|
|
3223cc1ed8 | fix(frontend/builder): address review — sync editedTitle state and scope Block suffix removal | ||
|
|
f5ef508334 |
fix(frontend/builder): preserve agent name in AgentExecutor node title after reload
When an AgentExecutorBlock node was saved and the page reloaded, the node title reverted to the generic "Agent Executor" because NodeHeader always used data.title (the block name). This reads agent_name and graph_version from hardcodedValues (persisted via input_default) and uses them as the display title, falling back to data.title for non-agent blocks. Closes #11041 |
||
|
|
5fccd8a762 | Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev | ||
|
|
2740b2be3a |
fix(backend/copilot): disable fallback model to fix prod CLI rejection (#12802)
### Why / What / How **Why:** `fffbe0aad8` changed both `ChatConfig.model` and `ChatConfig.claude_agent_fallback_model` to `claude-sonnet-4-6`. The Claude Code CLI rejects this with `Error: Fallback model cannot be the same as the main model`, causing every standard-mode copilot turn to fail with exit code 1 — the session "completes" in ~30s but produces no response and drops the transcript. **What:** Set `claude_agent_fallback_model` default to `""`. `_resolve_fallback_model()` already returns `None` on empty string, which means the `--fallback-model` flag is simply not passed to the CLI. On 529 overload errors the turn will surface normally instead of silently retrying with a fallback. **How:** One-line config change + test update. ### Changes 🏗️ - `ChatConfig.claude_agent_fallback_model` default: `"claude-sonnet-4-6"` → `""` - Update `test_fallback_model_default` to assert the empty default ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py` #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes |
||
|
|
d27d22159d | Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev | ||
|
|
fffbe0aad8 |
fix(backend): default copilot sonnet to 4.6 (#12799)
### Why / What / How
Why: Copilot/Autopilot standard requests were still defaulting to Claude
Sonnet 4, while the expected default for this path is Sonnet 4.6.
What: This PR updates the backend Copilot defaults so the
standard/default path and fast path use Sonnet 4.6, and aligns the SDK
fallback model and related test expectations.
How: It changes `ChatConfig.model`, `ChatConfig.fast_model`, and
`ChatConfig.claude_agent_fallback_model` to Sonnet 4.6 values, then
updates backend tests that assert the default Sonnet model strings.
### Changes 🏗️
- Switch `ChatConfig.model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.fast_model` from `anthropic/claude-sonnet-4` to
`anthropic/claude-sonnet-4-6`
- Switch `ChatConfig.claude_agent_fallback_model` from
`claude-sonnet-4-20250514` to `claude-sonnet-4-6`
- Update backend Copilot tests that assert the default Sonnet model
strings
- Configuration changes:
- No new environment variables or docker-compose changes are required
- Existing `.env.default` and compose files remain compatible because
this only changes backend default model values in code
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run pytest
backend/copilot/baseline/transcript_integration_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_helpers_test.py`
- [x] `poetry run pytest backend/copilot/sdk/service_test.py`
- [x] `poetry run pytest backend/copilot/sdk/p0_guardrails_test.py`
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Changes default/fallback LLM model identifiers for Copilot requests,
which can affect runtime behavior, cost, and availability
characteristics across both baseline and SDK paths. Risk is mitigated by
being a small, config-only change with updated tests.
>
> **Overview**
> Updates Copilot backend defaults so both the standard (`model`) and
fast (`fast_model`) paths use `anthropic/claude-sonnet-4-6`, and aligns
the Claude Agent SDK fallback model to `claude-sonnet-4-6`.
>
> Adjusts related test expectations in baseline transcript integration
and SDK helper tests to match the new Sonnet 4.6 model strings.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
|
||
|
|
df205b5444 |
fix(backend/copilot): strip CLI session file to prevent auto-compaction context loss
The Claude Code CLI auto-compacts its native session JSONL when the context approaches the model's token limit (~200K for Sonnet). After compaction the detailed conversation history is replaced by a ~27K-token summary, causing the silent context loss users see as memory failures in long sessions. Root cause identified from production logs for session 93ecf7c9: - T6 CLI session: 233KB / ~207K tokens (near Sonnet limit) - T7 CLI compacted session -> ~167KB / ~47K tokens (PreCompact hook missed) - T12 second compaction -> ~176KB / ~27K tokens (just system prompt + summary) - T14-T21: cache_read=26714 constantly -- only system prompt visible to Claude The same stripping we already apply to our transcript (stale thinking blocks, progress/metadata entries) now also runs on the CLI native session file. At ~2x the size of the stripped transcript, unstripped sessions routinely hit the compaction threshold within 6-10 turns of a heavy Opus/thinking session. After stripping: - same-pod turns reuse the stripped local file (no compaction trigger) - cross-pod turns restore the stripped GCS file (same benefit) |
||
|
|
4efa1c4310 |
fix(copilot): set session_id on mode-switch T1 to enable --resume on subsequent turns
When a user switches from baseline (fast) mode to SDK (extended_thinking) mode mid-session, the first SDK turn has has_history=True (prior baseline messages in DB) but no CLI session file in storage. The old code gated session_id on `not has_history`, so mode-switch T1 never received a session_id — the CLI generated a random ID that wasn't uploaded under the expected key. Every subsequent SDK turn would fail to restore the CLI session and run without --resume, injecting the full compressed history on each turn, causing model confusion. Fix: set session_id whenever not using --resume (the `else` branch), covering T1 fresh, mode-switch T1, and T2+ fallback turns. The retry path is updated to use `"session_id" in sdk_options_kwargs` as the discriminator (instead of `not has_history`) so mode-switch T1 retries also keep the session_id while T2+ retries (where T1 restored a session file via restore_cli_session) still remove it to avoid "Session ID already in use". |
||
|
|
ab3221a251 |
feat(backend): MemoryEnvelope metadata model, scoped retrieval, and memory hardening (#12765)
### Why / What / How
**Why:** CoPilot's Graphiti memory system needed structured metadata to
distinguish memory types (rules, procedures, facts, preferences),
support scoped retrieval, enable targeted deletion, and track memory
costs under the AutoPilot billing account separately from the platform.
**What:** Adds the MemoryEnvelope metadata model, structured
rule/procedure memory types, a derived-finding lane for
assistant-distilled knowledge, two-step forget tools, scope-aware
retrieval filtering, AutoPilot-dedicated API key routing, and several
reliability fixes (streaming socket leaks, event-loop-scoped caches,
ingestion hardening).
**How:** MemoryEnvelope wraps every stored episode with typed metadata
(source_kind, memory_kind, scope, status, confidence) serialized as
JSON. Retrieval filters by scope at the context layer. The forget flow
uses a search-then-confirm two-step pattern. Ingestion queues and client
caches are scoped per event loop via WeakKeyDictionary to prevent
cross-loop RuntimeErrors in multi-worker deployments. API key resolution
falls back to AutoPilot-dedicated keys (CHAT_API_KEY,
CHAT_OPENAI_API_KEY) before platform-wide keys.
### Changes 🏗️
**New: MemoryEnvelope metadata model** (`memory_model.py`)
- Typed memory categories: fact, preference, rule, finding, plan, event,
procedure
- Source tracking: user_asserted, assistant_derived, tool_observed
- Scope namespacing: `real:global`, `project:<name>`, `book:<title>`,
`session:<id>`
- Status lifecycle: active, tentative, superseded, contradicted
- Structured `RuleMemory` and `ProcedureMemory` models for complex
instructions
**New: Targeted forget tools** (`graphiti_forget.py`)
- `memory_forget_search`: returns candidate facts with UUIDs for user
confirmation
- `memory_forget_confirm`: deletes specific edges by UUID after
confirmation
**New: Architecture test** (`architecture_test.py`)
- Validates no new `@cached(...)` usage around event-loop-bound async
clients
- Allowlists pre-existing violations for future cleanup
**Enhanced: memory_store tool** (`graphiti_store.py`)
- Accepts MemoryEnvelope metadata fields (source_kind, scope,
memory_kind, rule, procedure)
- Wraps content in MemoryEnvelope before ingestion
**Enhanced: memory_search tool** (`graphiti_search.py`)
- Scope-aware retrieval with hard filtering on group_id
**Enhanced: Ingestion pipeline** (`ingest.py`)
- Derived-finding lane: distills substantive assistant responses into
tentative findings
- Event-loop-scoped queues and workers via WeakKeyDictionary (fixes
multi-worker RuntimeError)
- Improved error handling and dropped-episode reporting
**Enhanced: Client cache** (`client.py`)
- Per-loop client cache and lock via WeakKeyDictionary (fixes "Future
attached to a different loop")
**Enhanced: Warm context** (`context.py`)
- Filters out non-global-scope episodes from warm context
**Fix: Streaming socket leak** (`baseline/service.py`)
- try/finally around async stream iteration to release httpx connections
on early exit
**Config: AutoPilot key routing** (`config.py`, `.env.default`)
- LLM key fallback: GRAPHITI_LLM_API_KEY → CHAT_API_KEY →
OPEN_ROUTER_API_KEY
- Embedder key fallback: GRAPHITI_EMBEDDER_API_KEY → CHAT_OPENAI_API_KEY
→ OPENAI_API_KEY
- Backwards-compatible: existing behavior unchanged until new keys are
provisioned
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/graphiti/config_test.py` — 16
tests pass (key fallback priority)
- [x] `poetry run pytest backend/copilot/tools/graphiti_store_test.py` —
store envelope tests pass
- [x] `poetry run pytest backend/copilot/graphiti/ingest_test.py` —
ingestion tests pass
- [x] `poetry run pytest backend/util/architecture_test.py` — structural
validation passes
- [x] Verify memory store/retrieve/forget cycle via copilot chat
- [x] Run AgentProbe multi-session memory benchmark (31 scenarios x3
repeats)
- [x] Confirm no CLOSE_WAIT socket accumulation under sustained
streaming load
- [x] Verify multi-worker deployment doesn't produce loop-binding errors
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- Configuration changes:
- New optional env var `CHAT_OPENAI_API_KEY` — AutoPilot-dedicated
OpenAI key for Graphiti embeddings (falls back to `OPENAI_API_KEY` if
not set)
- `CHAT_API_KEY` now used as first fallback for Graphiti LLM calls (was
`OPEN_ROUTER_API_KEY`)
- Infra action needed: add `CHAT_OPENAI_API_KEY` sealed secret in
`autogpt-shared-config` values (dev + prod)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Touches Graphiti memory ingestion/retrieval and introduces hard-delete
capabilities plus event-loop–scoped caching/queues; failures could
affect memory correctness or delete the wrong edges. Also changes
streaming resource cleanup and key routing, which could surface as
connection or billing/cost attribution issues if misconfigured.
>
> **Overview**
> **Graphiti memory is upgraded from plain text episodes to a structured
JSON `MemoryEnvelope`.** `memory_store` now wraps content with typed
metadata (source, kind, scope, status) and optional structured
`rule`/`procedure` payloads, and ingestion supports JSON episodes.
>
> **Memory retrieval and lifecycle controls are expanded.**
`memory_search` adds optional scope hard-filtering to prevent
cross-scope leakage, warm-context formatting drops non-global scoped
episodes (and avoids empty wrappers), and new two-step tools
(`memory_forget_search` → `memory_forget_confirm`) enable targeted soft-
or hard-deletion of specific graph edges by UUID.
>
> **Reliability and multi-worker safety improvements.** Graphiti client
caching and ingestion worker registries are now per-event-loop (avoiding
cross-loop `Future` errors), streaming chat completions explicitly close
async streams to prevent `CLOSE_WAIT` socket leaks, warm-context is
injected into the first user message to keep the system prompt
cacheable, and a new `architecture_test.py` blocks future process-wide
caching of event-loop–bound async clients. Config updates route Graphiti
LLM/embedder keys to AutoPilot-specific env vars first, and OpenAPI
schema exports include the new memory response types.
>
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
autogpt-platform-beta-v0.6.56
|
||
|
|
b2f7faabc7 |
fix(backend/copilot): pre-create assistant msg before first yield to prevent last_role=tool (#12797)
## Changes **Root cause:** When a copilot session ends with a tool result as the last saved message (`last_role=tool`), the next assistant response is never persisted. This happens when: 1. An intermediate flush saves the session with `last_role=tool` (after a tool call completes) 2. The Claude Agent SDK generates a text response for the next turn 3. The client disconnects (`GeneratorExit`) at the `yield StreamStartStep` — the very first yield of the new turn 4. `_dispatch_response(StreamTextDelta)` is never called, so the assistant message is never appended to `ctx.session.messages` 5. The session `finally` block persists the session still with `last_role=tool` **Fix:** In `_run_stream_attempt`, after `convert_message()` returns the full list of adapter responses but *before* entering the yield loop, pre-create the assistant message placeholder in `ctx.session.messages` when: - `acc.has_tool_results` is True (there are pending tool results) - `acc.has_appended_assistant` is True (at least one prior message exists) - A `StreamTextDelta` is present in the batch (confirms this is a text response turn) This ensures that even if `GeneratorExit` fires at the first `yield`, the placeholder assistant message is already in the session and will be persisted by the `finally` block. **Tests:** Added `session_persistence_test.py` with 7 unit tests covering the pre-create condition logic and delta accumulation behavior. **Confirmed:** Langfuse trace `e57ebd26` for session `465bf5cf-7219-4313-a1f6-5194d2a44ff8` showed the final assistant response was logged at 13:06:49 but never reached DB — session had 51 messages with `last_role=tool`. ## Checklist - [x] My code follows the code style of this project - [x] I have performed a self-review of my own code - [x] I have commented my code, particularly in hard-to-understand areas - [x] I have made corresponding changes to the documentation (N/A) - [x] My changes generate no new warnings (Pyright warnings are pre-existing) - [x] I have added tests that prove my fix is effective - [x] New and existing unit tests pass locally with my changes --------- Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com> |
||
|
|
c9fa6bcd62 |
fix(backend/copilot): make system prompt fully static for cross-user prompt caching (#12790)
### Why / What / How **Why:** Anthropic prompt caching keys on exact system prompt content. Two sources of per-session dynamic data were leaking into the system prompt, making it unique per session/user — causing a full 28K-token cache write (~$0.10 on Sonnet) on *every* first message for *every* session instead of once globally per model. **What:** 1. `get_sdk_supplement` was embedding the session-specific working directory (`/tmp/copilot-<uuid>`) in the system prompt text. Every session has a different UUID, making every session's system prompt unique, blocking cross-session cache hits. 2. Graphiti `warm_ctx` (user-personalised memory facts fetched on the first turn) was appended directly to the system prompt, making it unique per user per query. **How:** - `get_sdk_supplement` now uses the constant placeholder `/tmp/copilot-<session-id>` in the supplement text and memoizes the result. The actual `cwd` is still passed to `ClaudeAgentOptions.cwd` so the CLI subprocess uses the correct session directory. - `warm_ctx` is now injected into the first user message as a trusted `<memory_context>` block (prepended before `inject_user_context` runs), following the same pattern already used for business understanding. It is persisted to DB and replayed correctly on `--resume`. - `sanitize_user_supplied_context` now also strips user-supplied `<memory_context>` tags, preventing context-spoofing via the new tag. After this change the system prompt is byte-for-byte identical across all users and sessions for a given model. ### Changes 🏗️ - `backend/copilot/prompting.py`: `get_sdk_supplement` ignores `cwd` and uses a constant working-directory placeholder; result is memoized in `_LOCAL_STORAGE_SUPPLEMENT`. - `backend/copilot/sdk/service.py`: `warm_ctx` is saved to a local variable instead of appended to `system_prompt`; on the first turn it is prepended to `current_message` as a `<memory_context>` block before `inject_user_context` is called. - `backend/copilot/service.py`: `sanitize_user_supplied_context` extended to strip `<memory_context>` blocks alongside `<user_context>`. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] `poetry run pytest backend/copilot/prompting_test.py backend/copilot/prompt_cache_test.py` — all passed #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) --------- Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com>autogpt-platform-beta-v0.6.55 |
||
|
|
c955b3901c |
fix(frontend/copilot): load older chat messages reliably and preserve scrollback across turns (#12792)
### Why / What / How Fixes two SECRT-2226 bugs in copilot chat pagination. **Bug 1 — can't load older messages when the newest page fits on screen.** The `IntersectionObserver` in `LoadMoreSentinel` bailed when `scrollHeight <= clientHeight`, which happens routinely once reasoning + tool groups collapse. With no scrollbar and no button, users were stuck. Fix: remove the guard, cap auto-fill at 3 non-scrollable rounds (keeps the original anti-loop intent), and add a manual "Load older messages" button as the always-available escape hatch. **Bug 2 — older loaded pages vanish after a new turn, then reloading them produces duplicates.** After each stream `useCopilotStream` invalidates the session query; the refetch returns a shifted `oldest_sequence`, which `useLoadMoreMessages` used as a signal to wipe `olderRawMessages` and reset the local cursor. Scroll-back history was lost on every turn, and the next load fetched a page that overlapped with AI SDK's retained `currentMessages` — the "loops" users reported. Fix: once any older page is loaded, preserve `olderRawMessages` and the local cursor across same-session refetches. Only reset on session change. The gap between the new initial window and older pages is covered by AI SDK's retained state. ### Changes 🏗️ - `ChatMessagesContainer.tsx`: drop the scrollability guard; add `MAX_AUTO_FILL_ROUNDS = 3` counter; add "Load older messages" button (`ghost`/`small`); distinguish observer-triggered vs. button-triggered loads so the button bypasses the cap; export `LoadMoreSentinel` for testing. - `useLoadMoreMessages.ts`: remove the wipe-and-reset branch on `initialOldestSequence` change; preserve local state mid-session; still mirror parent's cursor while no older page is loaded. - New integration test `__tests__/LoadMoreSentinel.test.tsx`. No backend changes. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Short/collapsed newest page: "Load older messages" button loads older pages, preserves scroll - [x] Full-viewport newest page: scroll-to-top auto-pagination still works (no regression) - [x] `has_more_messages=false` hides the button; `isLoadingMore=true` shows spinner instead - [x] Bug 2 reproduced locally with temporary `limit=5`: before fix older page vanished and next load duplicated AI SDK messages; after fix older page stays and next load fetches cleanly further back - [x] `pnpm format`, `pnpm lint`, `pnpm types`, `pnpm test:unit` all pass (1208/1208) #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) — N/A --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
56864aea87 |
fix(copilot/frontend): align ModelToggleButton styling + add execution ID filter to platform cost page (#12793)
## Why Two fixes bundled together: 1. **ModelToggleButton styling**: after merging the ModelToggleButton feature, the "Standard" state was invisible — no background, no label — while "Advanced" had a colored pill. This was inconsistent with `ModeToggleButton` where both states (Fast / Thinking) always show a colored background + label. 2. **Execution ID filter on platform cost admin page**: admins needed to look up cost rows for a specific agent run but had no way to filter by `graph_exec_id`. All other identifiers (user, model, provider, block, tracking type) were already filterable. ## What - **ModelToggleButton**: inactive (Standard) state now uses `bg-neutral-100 text-neutral-700 hover:bg-neutral-200` (same palette as ModeToggleButton inactive), always shows the "Standard" label. - **Platform cost admin page**: added `graph_exec_id` query filter across the full stack — backend service functions, FastAPI route handlers, generated TypeScript params types, `usePlatformCostContent` hook, and the filter UI in `PlatformCostContent`. ## How ### ModelToggleButton Changed the inactive-state class from hover-only transparent to always-visible neutral background, and added the "Standard" text label (was empty before — only the CPU icon showed). ### Execution ID filter Added `graph_exec_id: str | None = None` parameter to: - `_build_prisma_where` — applies `where["graphExecId"] = graph_exec_id` - `get_platform_cost_dashboard`, `get_platform_cost_logs`, `get_platform_cost_logs_for_export` - All three FastAPI route handlers (`/dashboard`, `/logs`, `/logs/export`) - Generated TypeScript params types - `usePlatformCostContent`: new `executionIDInput` / `setExecutionIDInput` state, wired into `filterParams`, `handleFilter`, and `handleClear` - `PlatformCostContent`: new Execution ID input field in the filter bar ## Changes - [x] I have explained why I made the changes, not just what I changed - [x] There are no unrelated changes in this PR - [x] I have run the relevant linters and tests before submitting --------- Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com> |
||
|
|
d23ca824ad |
fix(copilot): set session_id on mode-switch T1 to enable --resume on subsequent SDK turns (#12795)
## Why When a user switches from **baseline** (fast) mode to **SDK** (extended_thinking) mode mid-session, every subsequent SDK turn started fresh with no memory of prior conversation. Root cause: two complementary bugs on mode-switch T1 (first SDK turn after baseline turns): 1. `session_id` was gated on `not has_history`. On mode-switch T1, `has_history=True` (prior baseline turns in DB) so no `session_id` was set. The CLI generated a random ID and could not upload the session file under a predictable path → `--resume` failed on every following SDK turn. 2. Even if `session_id` were set, the upload guard `(not has_history or state.use_resume)` would block the session file upload on mode-switch T1 (`has_history=True`, `use_resume=False`), so the next turn still cannot `--resume`. Together these caused every SDK turn to re-inject the full compressed history, causing model confusion (proactive tool calls, forgetting context) observed in session `8237a27b-45d0-4688-af20-c185379e926f`. ## What - **`service.py`**: Change `elif not has_history:` → `else:` for the `session_id` assignment — set it whenever `--resume` is not active. Covers T1 fresh, mode-switch T1 (`has_history=True` but no CLI session exists), and T2+ fallback turns where restore failed. - **`service.py` retry path**: Replace `not has_history` with `"session_id" in sdk_options_kwargs` as the discriminator, so mode-switch T1 retries also keep `session_id` while T2+ retries (where `restore_cli_session` put a file on disk) correctly remove it to avoid "Session ID already in use". - **`service.py` upload guard**: Remove `and not skip_transcript_upload` and `and (not has_history or state.use_resume)` from the `upload_cli_session` guard. The CLI session file is independent of the JSONL transcript; and upload must run on mode-switch T1 so the next turn can `--resume`. `upload_cli_session` silently skips when the file is absent, so unconditional upload is always safe. ## How | Scenario | Before | After | |---|---|---| | T1 fresh (`has_history=False`) | `session_id` set ✓ | `session_id` set ✓ | | Mode-switch T1 (`has_history=True`, no CLI session) | ❌ not set — **bug** | `session_id` set ✓ | | T2+ with `--resume` | `resume` set ✓ | `resume` set ✓ | | T2+ retry after `--resume` failed | `session_id` removed ✓ | `session_id` removed ✓ | | Mode-switch T1 retry | `session_id` removed ❌ | `session_id` kept ✓ | | Upload on mode-switch T1 | ❌ blocked by guard — **bug** | uploaded ✓ | 7 new unit tests in `TestSdkSessionIdSelection` document all session_id cases. 6 new tests in `mode_switch_context_test.py` cover transcript bridging for both fast→SDK and SDK→fast switches. ## Checklist - [x] I have read the contributing guidelines - [x] My changes are covered by tests - [x] `poetry run format` passes --------- Co-authored-by: Zamil Majdy <zamilmajdy@gmail.com> |
||
|
|
227c60abd3 |
fix(backend/copilot): idempotency guard + frontend dedup fix for duplicate messages (#12788)
## Why After merging #12782 to dev, a k8s rolling deployment triggered infrastructure-level POST retries — nginx detected the old pod's connection reset mid-stream and resent the same POST to a new pod. Both pods independently saved the user message and ran the executor, producing duplicate entries in the DB (seq 159, 161, 163) and a duplicate response in the chat. The model saw the same question 3× in its context window and spent its response commenting on that instead of answering. Two compounding issues: 1. **No backend idempotency**: `append_and_save_message` saves unconditionally — k8s/nginx retries silently produce duplicate turns. 2. **Frontend dedup cleared after success**: `lastSubmittedMsgRef.current = null` after every completed turn wipes the dedup guard, so any rapid re-submit of the same text (from a stalled UI or user double-click) slips through. ## What **Backend** — Redis idempotency gate in `stream_chat_post`: - Before saving the user message, compute `sha256(session_id + message)[:16]` and `SET NX ex=30` in Redis - If key already exists → duplicate: return empty SSE (`StreamFinish + [DONE]`) immediately, skip save + executor enqueue - User messages only (`is_user_message=True`); system/assistant messages bypass the check **Frontend** — Keep `lastSubmittedMsgRef` populated after success: - Remove `lastSubmittedMsgRef.current = null` on stream complete - `getSendSuppressionReason` already has a two-condition check: `ref === text AND lastUserMsg === text` — so legitimate re-asks (after a different question was answered) still work; only rapid re-sends of the exact same text while it's still the last user message are blocked ## How - 30 s Redis TTL covers infrastructure retry windows (k8s SIGTERM → connection reset → ingress retry typically < 5 s) - Empty SSE response is well-formed (StreamFinish + [DONE]) — frontend AI SDK marks the turn complete without rendering a ghost message - Frontend ref kept live means: submit "foo" → success → submit "foo" again instantly → suppressed. Submit "foo" → success → submit "bar" → proceeds (different text updates the ref). ## Tests - 3 new backend route tests: duplicate blocked, first POST proceeds, non-user messages bypass - 5 new frontend `getSendSuppressionReason` unit tests: fresh ref, reconnecting, duplicate suppressed, different-turn re-ask allowed, different text allowed ## Checklist - [x] I have read the [AutoGPT Contributing Guide](https://github.com/Significant-Gravitas/AutoGPT/blob/master/CONTRIBUTING.md) - [x] I have performed a self-review of my code - [x] I have added tests that prove the fix is effective - [x] I have run `poetry run format` and `pnpm format` + `pnpm lint` |
||
|
|
0284614df0 |
fix(copilot): abort SSE stream and disconnect backend listeners on session switch (#12766)
## Summary
Fixes stream disconnection bugs where the UI shows "running" with no
output when users switch between copilot chat sessions. The root cause
is that the old SSE fetch is not aborted and backend XREAD listeners
keep running until timeout when switching sessions.
### Changes
**Frontend (`useCopilotStream.ts`, `helpers.ts`)**
- Call `sdkStop()` on session switch to abort the in-flight SSE fetch
from the old session's transport
- Fire-and-forget `DELETE` to new backend disconnect endpoint so
server-side listeners release immediately
- Store `resumeStream` and `sdkStop` in refs to fix stale closure bugs
in:
- Wake re-sync visibility handler (could call stale `resumeStream` after
tab sleep)
- Reconnect timer callback (could target wrong session's transport)
- Resume effect (captured stale `resumeStream` during rapid session
switches)
**Backend (`stream_registry.py`, `routes.py`)**
- Add `disconnect_all_listeners(session_id)` to stream registry —
iterates active listener tasks, cancels any matching the session
- Add `DELETE /sessions/{session_id}/stream` endpoint — auth-protected,
calls `disconnect_all_listeners`, returns 204
### Why
Reported by multiple team members: when using Autopilot for anything
serious, the frontend loses the SSE connection — particularly when
switching between conversations. The backend completes fine (refreshing
shows full output), but the UI gets stuck showing "running". This is the
worst UX bug we have right now because real users will never know to
refresh.
### How to test
1. Start a long-running autopilot task (e.g., "build a snake game")
2. While it's streaming, switch to a different chat session
3. Switch back — the UI should correctly show the completed output or
resume the stream
4. Verify no "stuck running" state
## Test plan
- [ ] Manual: switch sessions during active stream — no stuck "running"
state
- [ ] Manual: background tab for >30s during stream, return — wake
re-sync works
- [ ] Manual: trigger reconnect (kill network briefly) — reconnects to
correct session
- [ ] Verify: `pnpm lint`, `pnpm types`, `poetry run lint` all pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <zamil.majdy@agpt.co>
|
||
|
|
f835674498 |
feat(copilot): standard/advanced model toggle with Opus rate-limit multiplier (#12786)
## Why Users have different task complexity needs. Sonnet is fast and cheap for most queries; Opus is more capable for hard reasoning tasks. Exposing this as a simple toggle gives users control without requiring infrastructure complexity. Opus costs 5× more than Sonnet per Anthropic pricing ($15/$75 vs $3/$15 per M tokens). Rather than adding a separate entitlement gate, the rate-limit multiplier (5×) ensures Opus turns deplete the daily/weekly quota proportionally faster — users self-limit via their existing budget. ## What - **Standard/Advanced model toggle** in the chat input toolbar (sky-blue star icon, label only when active — matches the simulation DryRunToggleButton pattern but visually distinct) - **`CopilotLlmModel = Literal["standard", "advanced"]`** — model-agnostic tier names (not tied to Anthropic model names) - **Backend model resolution**: `"advanced"` → `claude-opus-4-6`, `"standard"` → `config.model` (currently Sonnet) - **Rate-limit multiplier**: Opus turns count as 5× in Redis token counters (daily + weekly limits). Does **not** affect `PlatformCostLog` or `cost_usd` — those use real API-reported values - **localStorage persistence** via `Key.COPILOT_MODEL` so preference survives page refresh - **`claude_agent_max_budget_usd`** reduced from $15 to $10 ## How ### Backend - `CopilotLlmModel` type added to `config.py`, imported in routes/executor/service - `stream_chat_completion_sdk` accepts `model: CopilotLlmModel | None` - Model tier resolved early in the SDK path; `_normalize_model_name` strips the OpenRouter provider prefix - `model_cost_multiplier` (1.0 or 5.0) computed from final resolved model name, passed to `persist_and_record_usage` → `record_token_usage` (Redis only) - No separate LD flag needed — rate limit is the gate ### Frontend - `ModelToggleButton` component: sky-blue, star icon, "Advanced" label when active - `copilotModel` state in `useCopilotUIStore` with localStorage hydration - `copilotModelRef` pattern in `useCopilotStream` (avoids recreating `DefaultChatTransport`) - Toggle gated behind `showModeToggle && !isStreaming` in `ChatInput` ## Checklist - [x] Tests added/updated (ModelToggleButton.test.tsx, service_helpers_test.py, token_tracking_test.py) - [x] Rate-limit multiplier only affects Redis counters, not cost tracking - [x] No new LD flag needed |
||
|
|
da18f372f7 |
feat(backend/copilot): add for_agent_generation flag to find_block (#12787)
## Why When the agent generator LLM builds a graph, it may need to look up schema details for graph-only blocks like `AgentInputBlock`, `AgentOutputBlock`, or `OrchestratorBlock`. These blocks are correctly hidden from regular CoPilot `find_block` results (they can't run standalone), but that same filter was also preventing the LLM from discovering them when composing an agent graph. ## What Added a `for_agent_generation: bool = False` parameter to `FindBlockTool`. ## How - `for_agent_generation=false` (default): existing behaviour unchanged — graph-only blocks are filtered from both UUID lookups and text search results. - `for_agent_generation=true`: bypasses `COPILOT_EXCLUDED_BLOCK_TYPES` / `COPILOT_EXCLUDED_BLOCK_IDS` so the LLM can find and inspect schemas for INPUT, OUTPUT, ORCHESTRATOR, WEBHOOK, etc. blocks when building agent JSON. - MCP_TOOL blocks are still excluded even with `for_agent_generation=true` (they go through `run_mcp_tool`, not `find_block`). ## Checklist - [x] No new dependencies - [x] Backward compatible (default `false` preserves existing behaviour) - [x] No frontend changes |
||
|
|
d82ecac363 |
fix(backend/copilot): null-safe token accumulation for OpenRouter null cache fields (#12789)
## Why OpenRouter occasionally returns `null` (not `0`) for `cache_read_input_tokens` and `cache_creation_input_tokens` on the initial streaming event, before real token counts are available. Python's `dict.get(key, 0)` only falls back to `0` when the key is **missing** — when the key exists with a `null` value, `.get(key, 0)` returns `None`. This causes `TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'` in the usage accumulator on the first streaming chunk from OpenRouter models. ## What - Replace `.get(key, 0)` with `.get(key) or 0` for all four token fields in `_run_stream_attempt` - Add `TestTokenUsageNullSafety` unit tests in `service_helpers_test.py` ## How Minimal targeted fix — only the four `+=` accumulation lines changed. No behaviour change for Anthropic-native models (they never emit null values). ## Checklist - [x] Tests cover null event, real event, absent keys, and multi-turn accumulation - [x] No behaviour change for Anthropic-native models - [x] No API changes |
||
|
|
8a2e2365f7 |
fix(backend/executor): charge per LLM iteration and per tool call in OrchestratorBlock (#12735)
### Why / What / How
**Why:** The OrchestratorBlock in agent mode makes multiple LLM calls in
a single node execution (one per iteration of the tool-calling loop),
but the executor was only charging the user once per run via
`_charge_usage`. Tools spawned by the orchestrator also bypassed
`_charge_usage` entirely — they execute via `on_node_execution()`
directly without going through the main execution queue, producing free
internal block executions.
**What:**
1. Charge `base_cost * (llm_call_count - 1)` extra credits after the
orchestrator block completes — covers the additional iterations beyond
the first (which is already paid for upfront).
2. Charge user credits for tools executed inside the orchestrator, the
same way queue-driven node executions are charged.
**How:**
**1. Per-iteration LLM charging**
- New `Block.extra_runtime_cost(execution_stats)` virtual method
(default returns `0`)
- `OrchestratorBlock` overrides it to return `max(0, llm_call_count -
1)`
- New `resolve_block_cost` free function in `billing.py` centralises the
block-lookup + cost-calculation pattern (used by both `charge_usage` and
`charge_extra_runtime_cost`)
- New `billing.charge_extra_runtime_cost(node_exec, extra_count)`
function that debits `base_cost * min(extra_count,
_MAX_EXTRA_RUNTIME_COST)` via `spend_credits()`, running synchronously
in a thread-pool worker
- After `_on_node_execution` completes with COMPLETED status,
`on_node_execution` calls `charge_extra_runtime_cost` if
`extra_runtime_cost > 0` and not a dry run
- `InsufficientBalanceError` from post-hoc charging is treated as a
billing leak: logged at ERROR with `billing_leak: True` structured
fields, user is notified via `_handle_insufficient_funds_notif`, but the
run status stays COMPLETED (work already done)
**2. Tool execution charging**
- New public async `ExecutionProcessor.charge_node_usage(node_exec)`
wrapper around `charge_usage` (with `execution_count=0` to avoid
inflating execution-tier counters); also calls `_handle_low_balance`
internally
- `OrchestratorBlock._execute_single_tool_with_manager` calls
`charge_node_usage` after successful tool execution (skipped for dry
runs and failed/cancelled tool runs)
- Tool cost is added to the orchestrator's `extra_cost` so it shows up
in graph stats display
- `InsufficientBalanceError` from tool charging is re-raised (not
downgraded to a tool error) in all three execution paths:
`_execute_single_tool_with_manager`, `_agent_mode_tool_executor`, and
`_execute_tools_sdk_mode`
**3. Billing module extraction**
- All billing logic extracted from `ExecutionProcessor` into
`backend/executor/billing.py` as free functions — keeps `manager.py` and
`service.py` focused on orchestration
- `ExecutionProcessor` retains thin delegation methods
(`charge_node_usage`, `charge_extra_runtime_cost`) for backward
compatibility with blocks that call them
**4. Structured error signalling**
- Tool error detection replaced brittle `text.startswith("Tool execution
failed:")` string check with a structured `_is_error` boolean field on
the tool response dict
### Changes
- `backend/blocks/_base.py`: Add
`Block.extra_runtime_cost(execution_stats) -> int` virtual method
(default `0`)
- `backend/blocks/orchestrator.py`: Override `extra_runtime_cost`; add
tool charging in `_execute_single_tool_with_manager`; add
`InsufficientBalanceError` re-raise carve-outs in all three execution
paths; replace string-prefix error detection with `_is_error` flag
- `backend/executor/billing.py` (new): Free functions
`resolve_block_cost`, `charge_usage`, `charge_extra_runtime_cost`,
`charge_node_usage`, `handle_post_execution_billing`,
`clear_insufficient_funds_notifications` — extracted from
`ExecutionProcessor`
- `backend/executor/manager.py`: Thin delegation to `billing.*`; remove
~500 lines of billing methods from `ExecutionProcessor`
- `backend/data/credit.py`: Update lazy import source from `manager` to
`billing`
- `backend/blocks/test/test_orchestrator.py`: Add `charge_node_usage`
mock + assertion
- `backend/blocks/test/test_orchestrator_dynamic_fields.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_responses_api.py`: Add
`charge_node_usage` async mock
- `backend/blocks/test/test_orchestrator_per_iteration_cost.py`: New
test file — `extra_runtime_cost` hook, `charge_extra_runtime_cost` math
(positive/zero/negative/capped/zero-cost/block-not-found/IBE),
`charge_node_usage` delegation, `on_node_execution` gate conditions
(COMPLETED/FAILED/zero-charges/dry-run/IBE), tool charging guards
(dry-run/failed/cancelled/IBE propagation)
### Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Run `poetry run pytest
backend/blocks/test/test_orchestrator_per_iteration_cost.py`
- [ ] Verify on dev: an OrchestratorBlock run with
`agent_mode_max_iterations=5` and 5 actual iterations is charged 5x the
base cost
- [ ] Verify tool executions inside the orchestrator are charged
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: majdyz <majdy.zamil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: majdyz <majdyz@users.noreply.github.com>
|
||
|
|
55869d3c75 |
fix(backend/copilot): robust context fallback — upload gate, gap-fill, token-budget compression (#12782)
## Why During a live production session, the copilot lost all conversation context mid-session. The model stated \"I don't see any implementation plan in our conversation\" despite 9 prior turns of context. Three compounding bugs: **Bug 1 — Self-perpetuating upload gate:** When `restore_cli_session` fails on a T2+ turn, `state.use_resume=False`. The old gate `and (not has_history or state.use_resume)` then skips the CLI session upload — even though the T1 file may exist. Each turn without `use_resume` skips upload → next turn can't restore → also skips → etc. **Bug 2 — Blunt message-count cap on retries:** On `prompt-too-long`, `_reduce_context` retried 3× but rebuilt the same oversized query each time (transcript was empty, so all 3 attempts were identical). The `max_fallback_messages` count-cap was a blunt instrument — it threw away middle turns blindly instead of letting the compressor summarize intelligently. **Bug 3 — Gap-empty path returned zero context:** When a transcript exists but no `--resume` (CLI session unavailable), and the gap is empty (transcript is current), the code fell through to `return current_message, False` — the model got no history at all. ## What 1. **Remove upload gate** — upload is attempted after every successful turn; `upload_cli_session` silently skips when the file is absent. 2. **`transcript_msg_count` set on `cli_restored=False`** — enables the gap path on the very next turn without waiting for a full upload cycle. 3. **Token-budget compression instead of message-count cap** — `_reduce_context` now returns `target_tokens` (50K → 15K across retries). `compress_context` decides what to drop via LLM summarize → content truncate → middle-out delete → first/last trim. More context preserved at any budget vs. blindly slicing the list. 4. **Fix gap-empty case** — when transcript is current but `--resume` unavailable, fall through to full-session compression with the token budget instead of returning no context. 5. **Transcript seeding after fallback** — after `use_resume=False` with no stored transcript, compress DB messages to 30K tokens and serialise as JSONL into `transcript_builder`. Next turn uses the gap path (inject only new messages) instead of re-compressing full history. Only fires once per broken session (`not transcript_content` guard). 6. **Seeding guard** — seeding skips when `skip_transcript_upload=True` (avoids wasted compression work when the result won't be saved). 7. **Structured logging** — INFO/WARNING at every branch of `_build_query_message` with path variables, context_bytes, compression results. ## How **Upload gate** (`sdk/service.py` finally-block): removed `and (not has_history or state.use_resume)`; added INFO log showing `use_resume`/`has_history` before upload. **`transcript_msg_count`**: set from `dl.message_count` in the `cli_restored=False` branch. **`_build_query_message`**: `max_fallback_messages: int | None` → `target_tokens: int | None`; gap-empty case falls through to full-session compression rather than returning bare message. **`_reduce_context`**: `_FALLBACK_MSG_LIMITS` → `_RETRY_TARGET_TOKENS = (50_000, 15_000)`; returns `ReducedContext.target_tokens`. **`_compress_messages` / `_run_compression`**: both now accept `target_tokens: int | None` and thread it through to `compress_context`. **Seeding block**: added `not skip_transcript_upload` guard; uses `_SEED_TARGET_TOKENS = 30_000` so the seeded JSONL is always compact enough to pass `validate_transcript`. ## Checklist - [x] `poetry run format` passes - [x] No new lint errors introduced (pre-existing pyright errors unrelated) - [x] Tests added for `attempt` parameter and `target_tokens` in `_reduce_context` |
||
|
|
142c5dbe99 |
fix(frontend): tighten artifact preview behavior (#12770)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
b06648de8c |
ci(frontend): add Playwright PR smoke suite with seeded QA accounts (#12682)
### Why / What / How This PR simplifies frontend PR validation to one Playwright E2E suite, moves redundant page-level browser coverage into Vitest integration tests, and switches Playwright auth to deterministic seeded QA accounts. It also folds in the follow-up fixes that came out of review and CI: lint cleanup, CodeQL feedback, PR-local type regressions, and the flaky Library run helper. The approach is: - keep Playwright focused on real browser and cross-page flows that integration tests cannot prove well - keep page-level render and mocked API behavior in Vitest - remove the old PR-vs-full Playwright split from CI and run one deterministic PR suite instead - seed reusable auth states for fixed QA users so the browser suite is less flaky and faster to bootstrap ### Changes 🏗️ - Removed the workflow indirection that selected different Playwright suites for PRs vs other events - Standardized frontend CI on a single command: `pnpm test:e2e:no-build` - Consolidated the PR-gating Playwright suite around these happy-path specs: - `auth-happy-path.spec.ts` - `settings-happy-path.spec.ts` - `api-keys-happy-path.spec.ts` - `builder-happy-path.spec.ts` - `library-happy-path.spec.ts` - `marketplace-happy-path.spec.ts` - `publish-happy-path.spec.ts` - `copilot-happy-path.spec.ts` - Added the missing browser-only confidence checks to the PR suite: - settings persistence across reload and re-login - API key create, copy, and revoke - schedule `Run now` from Library - activity dropdown visibility for a real run - creator dashboard verification after publish submission - Increased Playwright CI workers from `6` to `8` - Migrated redundant page-level browser coverage into Vitest integration/unit tests where appropriate, including marketplace, profile, settings, API keys, signup behavior, agent dashboard row behavior, agent activity, and utility/auth helpers - Seeded deterministic Playwright QA users in `backend/test/e2e_test_data.py` and reused auth states from `frontend/src/tests/credentials/` - Fixed CodeQL insecure randomness feedback by replacing insecure randomness in test auth utilities - Fixed frontend lint issues in marketplace image rendering - Fixed PR-local type regressions introduced during test migration - Stabilized the Library E2E run helper to support the current Library action states: `Setup your task`, `New task`, `Rerun task`, and `Run now` - Removed obsolete Playwright specs and the temporary migration planning doc once the consolidation was complete - Reverted unintended non-test backend source changes; only backend test fixture changes remain in scope ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] `pnpm lint` - [x] `pnpm types` - [x] `pnpm test:unit` - [x] `pnpm exec playwright test --list` - [x] `pnpm test:e2e:no-build` locally - [ ] PR CI green after the latest push #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) Notes: - Current local Playwright run on this branch: `28 passed`, `0 flaky`, `0 retries`, `3m 25s`. - Latest Codecov report on this PR showed overall coverage `63.14% -> 63.61%` (`+0.47%`), with frontend coverage up `+2.32%` and frontend E2E coverage up `+2.10%`. - The backend change in this PR is limited to deterministic E2E test data setup in `backend/test/e2e_test_data.py`. - Playwright retries remain enabled in CI; this branch does not add fail-on-flaky behavior. --------- Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> Co-authored-by: Zamil Majdy <majdy.zamil@gmail.com> |
||
|
|
7240dd4fb1 |
feat(platform/admin): enhance cost dashboard with token breakdown and averages (#12757)
## Summary - **Token breakdown in provider table**: Added separate Input Tokens and Output Tokens columns to the By Provider table, making it easy to see whether costs are driven by large contexts (input) or verbose responses/thinking (output) - **New summary cards (8 total)**: Added Avg Cost/Request, Avg Input Tokens, Avg Output Tokens, and Total Tokens (in/out split) cards plus P50/P75/P95/P99 cost percentile cards at the top of the dashboard for at-a-glance cost analysis - **Cost distribution histogram**: Added a cost distribution section showing request count across configurable price buckets ($0–0.50, $0.50–1, $1–2, $2–5, $5–10, $10+) - **Per-user avg cost**: Added Avg Cost/Req column to the By User table to identify users with unusually expensive requests - **Backend aggregations**: Extended `PlatformCostDashboard` model with `total_input_tokens`, `total_output_tokens`, `avg_input_tokens_per_request`, `avg_output_tokens_per_request`, `avg_cost_microdollars_per_request`, `cost_p50/p75/p95/p99_microdollars`, and `cost_buckets` fields - **Correct denominators**: Avg cost uses cost-bearing requests only; avg token stats use token-bearing requests only — no artificial dilution from non-cost/non-token rows ## Test plan - [x] Verify the admin cost dashboard loads without errors at `/admin/platform-costs` - [x] Check that the new summary cards display correct values - [x] Verify Input/Output Tokens columns appear in the By Provider table - [x] Verify Avg Cost/Req column appears in the By User table - [x] Confirm existing functionality (filters, export, rate overrides) still works - [x] Verify backward compatibility — new fields have defaults so old API responses still work |
||
|
|
b4cd00bea9 |
dx(frontend): untrack auto-generated API client model files (#12778)
## Why `src/app/api/__generated__/` is listed in `.gitignore` but 4 model files were committed before that rule existed, so git kept tracking them and they showed up in every PR that touched the API schema. ## What Run `git rm --cached` on all 4 tracked files so the existing gitignore rule takes effect. No gitignore content changes needed — the rule was already correct. ## How The `check API types` CI job only diffs `openapi.json` against the backend's exported schema — it does not diff the generated TypeScript models. So removing these from tracking does not break any CI check. After this merges, `pnpm generate:api` output will be gitignored everywhere and future API-touching PRs won't include generated model diffs. |
||
|
|
e17914d393 |
perf(backend): enable cross-user prompt caching via SystemPromptPreset (#12758)
## Summary - Use `SystemPromptPreset` with `exclude_dynamic_sections=True` in the SDK path so the Claude Code default prompt serves as a cacheable prefix shared across all users, reducing input token cost by ~90% - Add `claude_agent_cross_user_prompt_cache` config field (default `True`) to make this configurable, with fallback to raw string when disabled - Extract `_build_system_prompt_value()` helper for testability, with `_SystemPromptPreset` TypedDict for proper type annotation > **Depends on #12747** — requires SDK >=0.1.58 which adds `SystemPromptPreset` with `exclude_dynamic_sections`. Must be merged after #12747. ## Changes - **`config.py`**: New `claude_agent_cross_user_prompt_cache: bool = True` field on `ChatConfig` - **`sdk/service.py`**: `_SystemPromptPreset` TypedDict for type safety; `_build_system_prompt_value()` helper that constructs the preset dict or returns the raw string; call site uses the helper - **`sdk/service_test.py`**: Tests exercise the production `_build_system_prompt_value()` helper directly — verifying preset dict structure (enabled), raw string fallback (disabled), and default config value ## How it works The Claude Code CLI supports `SystemPromptPreset` which uses the built-in Claude Code default prompt as a static prefix. By setting `exclude_dynamic_sections=True`, per-user dynamic sections (working dir, git status, auto-memory) are stripped from that prefix so it stays identical across users and benefits from Anthropic's prompt caching. Our custom prompt (tool notes, supplements, graphiti context) is appended after the cacheable prefix. ## Test plan - [x] CI passes (formatting, linting, unit tests) - [x] Verify `_build_system_prompt_value()` returns correct preset dict when enabled - [x] Verify fallback to raw string when `CHAT_CLAUDE_AGENT_CROSS_USER_PROMPT_CACHE=false` |
||
|
|
b3a58389e5 |
fix(copilot): baseline cost tracking and cache token display (#12762)
## Why The baseline copilot path (OpenAI-compatible / OpenRouter) did not record any cost when the `x-total-cost` response header was absent, even though token counts were always available. The admin cost dashboard also lacked cache token columns. ## What - **`x-total-cost` header extraction**: Reads the OpenRouter cost header per LLM call in the `finally` block (so cost is captured even when the stream errors mid-way). Accumulated across multi-round tool-calling turns. - **Cache token extraction**: Extracts `prompt_tokens_details.cached_tokens` and `cache_creation_input_tokens` from streaming usage chunks and passes `cache_read_tokens`/`cache_creation_tokens` through to `persist_and_record_usage` for storage in `PlatformCostLog`. - **Dashboard cache token display**: Adds cache read/write columns to the Raw Logs and By User tables on the admin platform costs dashboard. Adds `total_cache_read_tokens` and `total_cache_creation_tokens` to `UserCostSummary`. - **No cost estimation**: When `x-total-cost` is absent, `cost_usd` is left as `None` and `persist_and_record_usage` records the entry under `tracking_type="tokens"`. Token-based cost estimation was removed — the platform dashboard already handles per-token cost display, and estimates would introduce inaccuracy in the reported figures. ## How - In `_baseline_llm_caller`: extract the `x-total-cost` header in the `finally` block; accumulate to `state.cost_usd`. - In `_BaselineStreamState`: add `turn_cache_read_tokens` / `turn_cache_creation_tokens` counters, populated from streaming usage chunks. - In `persist_and_record_usage` / `record_cost_log`: pass through `cache_read_tokens` and `cache_creation_tokens` to `PlatformCostEntry`. - Frontend: add `total_cache_read_tokens` / `total_cache_creation_tokens` fields to `UserCostSummary` and render them as columns in the cost dashboard. ## Test plan - [x] Verify baseline copilot sessions log cost when `x-total-cost` header is present - [x] Verify `cost_usd` stays `None` and token count is logged when header is absent - [x] Verify cache tokens appear in the dashboard logs table for sessions using prompt caching - [x] Verify the By User tab shows Cache Read and Cache Write columns - [x] Unit tests: `test_cost_usd_extracted_from_response_header`, `test_cost_usd_remains_none_when_header_missing`, `test_cache_tokens_extracted_from_usage_details` |