Compare commits

...

82 Commits

Author SHA1 Message Date
Zamil Majdy
d3954e2bbd Merge branch 'dev' into branch10 2026-04-30 12:24:45 +07:00
majdyz
c93dac9043 fix(backend/copilot): self-recovery hint on storage-full + close TOCTOU rollback blob leak
WriteWorkspaceFileTool: when write_file rejects with "Storage limit exceeded",
append a recovery hint pointing at list_workspace_files + delete_workspace_file.
The agent reads this on the next turn and can offer to clean up before falling
back to "ask the user to upgrade", so the user doesn't have to leave the chat.

Upload route TOCTOU rollback: route the over-quota cleanup through
WorkspaceManager.delete_file instead of soft_delete_workspace_file. The latter
only flips isDeleted on the DB row and renames the path — the storage backend
blob would survive, leaking storage on every concurrent-upload race. delete_file
removes the blob first, then soft-deletes the DB record.
2026-04-30 12:18:54 +07:00
majdyz
bb46059d2c refactor(backend): drop noisy 80% storage warning + document write_file raises
The 80% warning fired on every write inside the band, producing repeated log
entries during a single CoPilot turn that writes multiple files. The frontend
storage bar already conveys usage to the user, and ops alerting belongs on a
metric, not a log line that scales with write rate.

Also document `VirusDetectedError` / `VirusScanError` in `WorkspaceManager.write_file`
so callers can decide between explicit handling and generic `Exception` catches.
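
For example, a caller can now opt into explicit handling (a sketch — the import path is an assumption, not the real module layout):

```python
# Sketch of a caller choosing explicit handling over a blanket `except Exception`;
# the exception import path is assumed for illustration.
from backend.util.exceptions import VirusDetectedError, VirusScanError

async def save_upload(workspace, path: str, data: bytes) -> str:
    try:
        await workspace.write_file(path, data)
    except VirusDetectedError:
        return "Upload rejected: the file failed the virus scan."
    except VirusScanError:
        return "The file could not be scanned right now — please retry."
    return f"Saved {path}"
```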
2026-04-30 11:36:33 +07:00
Zamil Majdy
4a1741cc15 fix(platform): cancel-banner copy + clearer 422 on currency mismatch (#12947)
## Why

Two regressions surfaced after
[#12933](https://github.com/Significant-Gravitas/AutoGPT/pull/12933)
merged to `dev`:

1. **Cancel-pending banner shows wrong copy.** The merged PR moved
cancel-at-period-end from `BASIC` → `NO_TIER`, but
`PendingChangeBanner.isCancellation` was still keyed on `"BASIC"`. As a
result, a user who cancels their sub now sees *"Scheduled to downgrade
to No subscription on …"* instead of the intended *"Scheduled to cancel
your subscription on …"*. Caught by Sentry on the merged PR.

2. **Currency-mismatch downgrade returns 502 (looks like outage).** A
user with an existing GBP-active sub (Max Price has
`currency_options.gbp`) tried to downgrade to Pro and got 502. The
backend logs show:
   ```
   stripe._error.InvalidRequestError: The price specified only supports `usd`.
   This doesn't match the expected currency: `gbp`.
   ```
The Pro Price is USD-only; Stripe rejects `SubscriptionSchedule.modify`
because phases must share currency. Wrapping that in a generic 502 hid
the real cause and made it read like a Stripe outage.

## What

* Frontend: flip `PendingChangeBanner.isCancellation` from `pendingTier
=== "BASIC"` to `"NO_TIER"`. Update both component and page-level tests
that exercised the cancellation branch.
* Backend: catch `stripe.InvalidRequestError` whose message mentions
`currency` in `update_subscription_tier`, and return **422** with *"Tier
change unavailable for your current billing currency. Cancel your
subscription and re-subscribe at the target tier, or contact support."*
— so users see the actual reason, not a misleading outage message. Other
`StripeError` paths still return 502.
* New backend test asserts the currency-mismatch branch returns 422 with
the new copy.

## How

* `PendingChangeBanner.tsx` line 28: one-line change (`"BASIC"` → `"NO_TIER"`).
* `subscription_routes_test.py` and `PendingChangeBanner.test.tsx`
updated to use `NO_TIER` for the cancellation fixture.
* `v1.py` `update_subscription_tier` adds a typed `except
stripe.InvalidRequestError` branch ahead of the generic `StripeError`;
only currency-mismatch messages get the special 422, everything else
falls through to the existing 502.
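
Sketched out (not the exact `v1.py` code — the handler shape and the message check are paraphrased from the description above):

```python
import stripe
from fastapi import HTTPException

CURRENCY_MISMATCH_DETAIL = (
    "Tier change unavailable for your current billing currency. Cancel your "
    "subscription and re-subscribe at the target tier, or contact support."
)

def change_tier_or_raise(modify_subscription, user_id: str, target_tier: str) -> None:
    """Sketch: `modify_subscription` stands in for the real Stripe-modify helper."""
    try:
        modify_subscription(user_id, target_tier)
    except stripe.InvalidRequestError as e:
        if "currency" in str(e).lower():
            # Currency-mismatch gets an actionable 422 instead of a generic 502.
            raise HTTPException(status_code=422, detail=CURRENCY_MISMATCH_DETAIL)
        raise HTTPException(status_code=502, detail="Payment provider error")
    except stripe.StripeError:
        # Every other Stripe failure keeps the existing 502 behaviour.
        raise HTTPException(status_code=502, detail="Payment provider error")
```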

## The real fix lives in Stripe config

The defensive 422 here is just a clearer error surface. To actually
unblock GBP/EUR users from changing tiers, the per-tier Stripe Prices
(Pro, and Basic if priced) need `currency_options` for GBP added — Max
already has this, which is why Max checkout shows the £/$ toggle. Stripe
locks `currency_options` after a Price has been transacted, so the
procedure is: create a new Price with USD + GBP from the start → update
the `stripe-price-ids` LD flag to the new Price ID. No further code
change required; same Price ID stays per tier, multiple currencies
inside it.

## Checklist

- [x] Component test for new banner copy
- [x] Backend test for 422 currency-mismatch branch
- [x] Format / lint / types pass
- [x] No protected route added — N/A
2026-04-30 10:25:02 +07:00
Nicholas Tindle
ea4c617093 ci: lower frontend patch coverage target from 80% to 70%
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 17:52:38 -05:00
Nicholas Tindle
dbb4090c52 fix(backend): reject zero LD storage values, fix prettier formatting
- Change LD workspace storage validation from `< 0` to `<= 0` to prevent
  a zero value from silently disabling quota enforcement
- Fix prettier formatting in UsagePanelContentRender test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 16:59:24 -05:00
Nicholas Tindle
489ccf96f5 fix(frontend): isort fix and additional StorageBar test coverage
- Fix import sort order in rate_limit_test.py (isort)
- Add tests: zero limit hiding, 80%+ orange bar, singular file text,
  tiny usage display, header/tier label, showHeader=false

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-29 15:57:32 -05:00
Nicholas Tindle
db66f34cf4 feat(backend): pull workspace storage limits from LaunchDarkly
- Add _DEFAULT_TIER_WORKSPACE_STORAGE_MB with explicit NO_TIER entry (250 MB)
- Add _fetch_workspace_storage_limits_flag() and get_workspace_storage_limits_mb()
  mirroring the chat-limit LD pattern (a rough sketch follows this list)
- Add Flag.COPILOT_TIER_WORKSPACE_STORAGE_LIMITS enum entry
- Update get_workspace_storage_limit_bytes() to use LD-backed map
- Add tests: LD resolution, NO_TIER behavior, unsubscribe downgrade,
  upload rejection when over cap, frontend null-usage-windows rendering
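
A rough sketch of the LD-over-defaults merge (names and the flag plumbing are paraphrased from this commit message, not copied from the real module):

```python
# Rough sketch — the flag plumbing and per-tier defaults are paraphrased from
# this commit message, not copied from the real backend module.
import json

_DEFAULT_TIER_WORKSPACE_STORAGE_MB: dict[str, int] = {
    "NO_TIER": 250,  # explicit entry so users without a subscription get a real cap
    # ...other tier defaults (BASIC / PRO / MAX / BUSINESS) live here
}

def get_workspace_storage_limits_mb(ld_flag_json: str | None) -> dict[str, int]:
    """Merge the LaunchDarkly-served JSON map over the hardcoded defaults."""
    limits = dict(_DEFAULT_TIER_WORKSPACE_STORAGE_MB)
    if not ld_flag_json:
        return limits
    try:
        overrides = json.loads(ld_flag_json)
    except ValueError:
        return limits  # malformed flag value → fall back to defaults
    for tier, mb in overrides.items():
        if isinstance(mb, (int, float)) and mb > 0:  # zero/negative values are rejected
            limits[tier] = int(mb)
    return limits

def get_workspace_storage_limit_bytes(tier: str, ld_flag_json: str | None = None) -> int:
    limits = get_workspace_storage_limits_mb(ld_flag_json)
    return limits.get(tier, limits["NO_TIER"]) * 1024 * 1024
```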

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-29 15:34:47 -05:00
Nicholas Tindle
bf044a2634 Merge branch 'dev' into pr/12780/branch10
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-29 15:06:32 -05:00
Krzysztof Czerwinski
c08b9774dc fix(backend/push): skip OS push for onboarding payloads (#12944)
## Why

[#12723](https://github.com/Significant-Gravitas/AutoGPT/pull/12723)
wired Web Push fanout into `AsyncRedisNotificationEventBus.publish()` so
copilot completion events reach users with the tab closed. But the bus
is also used by `data/onboarding.py` for in-page step toasts, and those
started firing OS-level system notifications (`increment_runs`,
`step_completed`, etc.) — unwanted noise.

## What

Smallest possible patch: skip the OS push fanout when `payload.type ==
"onboarding"`. WebSocket delivery is unchanged.

## How

```python
async def publish(self, event: NotificationEvent) -> None:
    await self.publish_event(event, event.user_id)
    # Skip OS push for onboarding step toasts — those are in-page only.
    # TODO: remove once the onboarding/wallet rework lands.
    if event.payload.model_dump().get("type") == "onboarding":
        return
    ...
```

Five-line addition in `backend/data/notification_bus.py`. Marked `TODO`
to remove once the upcoming onboarding/wallet rework decides per-event
whether a system notification is desired.

Tests: added `test_publish_skips_web_push_for_onboarding`; existing
fanout tests continue to validate the happy path with non-onboarding
payloads.

## Test plan

- [x] `poetry run format` (ruff + isort + black + pyright)
- [ ] CI: `poetry run pytest backend/data/notification_bus_test.py`
- [ ] Manual on dev: trigger onboarding step → confirm no OS
notification; finish copilot session → confirm OS notification still
fires.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 16:20:53 +00:00
Zamil Majdy
fe3d6fb118 feat(platform): subscription credit grants + paywall gate + dialog UX + cross-pod cache (#12933)
## Why

Started as a regression fix for admin-granted user downgrades hitting
Stripe Checkout, broadened to close the surrounding gaps in the Stripe
billing flow that surfaced during testing. Three concrete user-facing
problems the PR resolves:

1. **Admin-granted users couldn't change tier in-app** when their
current tier had no `stripe-price-id-*` LD configured — clicking
Downgrade silently routed to a paid-signup Stripe Checkout instead of
just changing the tier.
2. **Subscription payments granted nothing visible to users** — paying
£20–£320/mo gave higher rate-limit multipliers but no AutoPilot credits
in the user's balance, despite a dialog promising "credit to your next
Stripe invoice" (which users naturally read as AutoGPT credits).
3. **Tier oscillated across page refreshes** — `get_user_by_id` was
process-local cached, so dev's 4 server pods each held their own copy.
Tier could read MAX on one pod and BASIC on another for ~5 min after a
webhook update, depending on which pod the request landed on.

Plus three structural improvements caught during review:

4. **No paywall enforcement for paid-cohort users without subscription**
— non-beta users on `BASIC` (no Stripe sub) could freely use AutoPilot.
5. **Upgrade/downgrade dialog copy was misleading** — implied a Stripe
redirect that doesn't happen for existing-sub modifications, used
"credit" ambiguously, and didn't surface the next-invoice date.
6. **Top-up Checkout created an ephemeral Stripe Product per session** —
no canonical Product for dashboard reporting, no way to scope coupons to
top-ups.

## What

### 1. Admin-granted downgrades skip Checkout (price-id-pruning regression)

`update_subscription_tier()` used to gate its modify-or-DB-flip block on
`current_tier_price_id is not None`. When a tier was pruned from
`stripe-price-ids` LD, that gate skipped the inner DB-flip branch and
the request fell through to Checkout — sending admin-granted users to a
paid-signup flow when they were trying to *reduce* their tier. Drop the
gate and call `modify_stripe_subscription_for_tier()` unconditionally —
the function self-reports `False` when there's no Stripe sub. One
uniform path for everyone now.

### 2. Subscription credit grant on every paid Stripe invoice

New `invoice.payment_succeeded` webhook handler at
[`credit.py:handle_subscription_payment_success`](autogpt_platform/backend/backend/data/credit.py)
adds a `GRANT` transaction equal to `invoice.amount_paid`, keyed by
`INVOICE-{id}` for idempotency (Stripe webhook retries cannot
double-grant). Initial signup, monthly renewal, and prorated upgrade
charges all surface as AutoGPT balance bumps the moment Stripe confirms
the charge. Skipped: non-subscription invoices, $0 invoices, ENTERPRISE
users.

### 3. Cross-pod user cache

[`user.py:31`](autogpt_platform/backend/backend/data/user.py#L31)
`cache_user_lookup = cached(maxsize=1000, ttl_seconds=300,
shared_cache=True)`. Single line — moves the cache to Redis so all
server pods read/write the same key. The existing
`get_user_by_id.cache_delete(user_id)` invalidations now propagate
cross-pod.

### 4. PaywallGate

New
[`PaywallGate`](autogpt_platform/frontend/src/app/(platform)/PaywallGate/PaywallGate.tsx)
wraps the `(platform)/layout.tsx` route group. When
`ENABLE_PLATFORM_PAYMENT === true` (paid cohort) AND `subscription.tier
=== "BASIC"`, redirects to `/profile/credits` where the credits page
shows a "Pick a plan to continue using AutoGPT" banner above the tier
picker.

Notes:
- **Beta cohort skips entirely** (flag off → `useGetSubscriptionStatus`
query disabled, no redirect).
- **Gates on DB tier, not `has_active_stripe_subscription`** — Sentry
caught that a transient Stripe API error in
`get_active_subscription_period_end()` would set `has_active=false` for
paying users, locking them out. The DB tier is set by webhooks and
persists locally; Stripe API hiccups don't flip it.
- **Exempt routes**: `/profile`, `/admin`, `/auth`, `/login`, `/signup`,
`/reset-password`, `/error`, `/unauthorized`, `/health`. Onboarding
lives in the sibling `(no-navbar)` group, so this gate doesn't conflict
with the in-flight onboarding-paywall integration.

### 5. Upgrade/downgrade dialog clarity

`SubscriptionStatusResponse` now exposes
`has_active_stripe_subscription: bool` and `current_period_end: int |
None`, computed via a new
[`get_active_subscription_period_end`](autogpt_platform/backend/backend/data/credit.py)
helper. Frontend dialogs branch on those:

**Upgrade — modify-in-place** (existing sub):
> Your subscription is upgraded to MAX immediately. On your next invoice
on May 21, 2026, your saved card is charged for the upgrade proration
since today plus the next month at the new rate, with the unused portion
of your current plan automatically deducted. Credits matching the paid
amount are added to your AutoGPT balance once Stripe confirms the
charge.

**Upgrade — Checkout** (no sub):
> You'll be redirected to Stripe to enter payment details and start your
MAX subscription. The first invoice's amount is added to your AutoGPT
balance once Stripe confirms the charge.

**Downgrade (paid → paid)**:
> Switching to PRO takes effect at the end of your current billing
period on May 21, 2026 — no charge today. You keep your current plan
until then. From that date your saved card is billed at the PRO rate,
and matching credits are added to your AutoGPT balance with each paid
invoice.

Toast wording on success matches dialog. Tier labels run through
`getTierLabel()` so we render "Pro/Max/Business" not "PRO/MAX/BUSINESS"
(Sentry-flagged in review).

### 6. Top-up Stripe Product ID via LD flag

New `STRIPE_PRODUCT_ID_TOPUP` LD flag. **Unset (default)** → legacy
inline `product_data` (Stripe creates an ephemeral product per Checkout
— backward-compatible with current behavior). **Set to a Stripe Product
ID** → line item references that Product so all top-ups group under one
entity in Stripe Dashboard reporting; per-session amount stays dynamic
via `price_data.unit_amount`. The two paths are mutually exclusive
(Stripe rejects `product` + `product_data` together).
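
The line-item split, roughly (a sketch — the helper and display name are illustrative; the `product` vs `product_data` exclusivity inside Checkout `price_data` is Stripe's rule):

```python
# Sketch — helper name and display name are illustrative; `product` and
# `product_data` are mutually exclusive inside Checkout `price_data`.
def build_topup_line_item(amount_cents: int, topup_product_id: str | None) -> dict:
    price_data: dict = {"currency": "usd", "unit_amount": amount_cents}
    if topup_product_id:
        # Flag set: all top-ups reference one canonical Product for reporting.
        price_data["product"] = topup_product_id
    else:
        # Flag unset: legacy path, Stripe creates an ephemeral product per session.
        price_data["product_data"] = {"name": "AutoGPT credit top-up"}
    return {"price_data": price_data, "quantity": 1}
```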

## How

- Backend changes confined to
[`v1.py`](autogpt_platform/backend/backend/api/features/v1.py),
[`credit.py`](autogpt_platform/backend/backend/data/credit.py),
[`user.py`](autogpt_platform/backend/backend/data/user.py),
[`feature_flag.py`](autogpt_platform/backend/backend/util/feature_flag.py).
- Frontend changes: new
[`PaywallGate`](autogpt_platform/frontend/src/app/(platform)/PaywallGate/PaywallGate.tsx)
component + small edits to
[`(platform)/layout.tsx`](autogpt_platform/frontend/src/app/(platform)/layout.tsx),
`SubscriptionTierSection.tsx`, `useSubscriptionTierSection.ts`,
`helpers.ts`.
- Both backend and frontend pass `user.id` to LD context (verified in
[`feature_flag.py:_fetch_user_context_data`](autogpt_platform/backend/backend/util/feature_flag.py)
and
[`feature-flag-provider.tsx`](autogpt_platform/frontend/src/services/feature-flags/feature-flag-provider.tsx))
for proper per-user targeting.

### Out of scope (follow-ups)

- Hard-paywall onboarding integration (Lluis's work — coordinated;
PaywallGate wraps `(platform)/layout.tsx` and onboarding lives in
`(no-navbar)`, so they don't conflict).
- Beta-users-as-Stripe-trial migration.
- Max-cap usage alerting + "Contact us" routing.
- "No Active Subscription" state rename.
- "Your credits" → "Automation Credits" rename + helper tooltip.
- BASIC tier resurface as a free / cancel-subscription option
(deliberately deferred per current product direction).

## Test plan

### Backend (all green in CI)

- [x] `poetry run pytest
backend/api/features/subscription_routes_test.py` — 41 passed.
- [x] `poetry run pytest backend/data/credit_subscription_test.py`
covering: `handle_subscription_payment_success` (grants credits, skips
non-sub/zero/missing-customer/unknown-user/ENTERPRISE, idempotent on
retry), `get_active_subscription_period_end` (happy path, no-customer
short-circuit, Stripe error swallow), top-up Product ID flag both
branches.
- [x] Type-check (3.11/3.12/3.13) — green after explicit
`list[stripe.checkout.Session.CreateParamsLineItem]` typing on top-up
`line_items`.
- [x] Codecov patch — both backend + frontend green.

### Frontend (all green in CI)

- [x] `pnpm test:unit` — 2154/2154 pass, including 5 new PaywallGate
tests (beta-cohort skip, paid-cohort BASIC redirect, no-redirect for
PRO/MAX/BUSINESS, exempt-prefix matrix, loading-state guard) and updated
`formatCost`/dialog-copy assertions.
- [x] `pnpm types`, `pnpm format`, `pnpm lint` — clean.

### Live verification on `dev-builder.agpt.co` (5/5 pass — see PR comments)

- [x] Login + credits page renders correctly with Pro + Max cards, BASIC
+ BUSINESS hidden, no paywall banner for active subscriber.
- [x] Downgrade dialog shows new copy with concrete date + "no charge
today" + credit-grant explanation.
- [x] PaywallGate does NOT redirect paying users (MAX tier with active
sub).
- [x] PaywallGate REDIRECTS BASIC user (DB-flipped via `kubectl exec`
for testing, restored after) → `/build` redirects to `/profile/credits`,
violet "Pick a plan to continue using AutoGPT" banner displayed.
- [x] Upgrade dialog (modify-in-place) shows the corrected proration
phrasing.
- [ ] Manual: real production-like test of `invoice.payment_succeeded`
granting credits — fires on next billing cycle (2026-05-21 for the dev
test user); not testable today without manipulating Stripe webhook.
2026-04-29 23:15:29 +07:00
Ubbe
c6d31f8252 feat(frontend): gate onboarding SubscriptionStep behind ENABLE_PLATFORM_PAYMENT (#12943)
### Why / What / How

**Why:** The onboarding `SubscriptionStep` (added in #12935) is
currently shown to every new user, but the platform payment system is
rolled out behind the `ENABLE_PLATFORM_PAYMENT` LaunchDarkly flag. We
need the onboarding plan-selection step to honor the same flag so users
in flag-off cohorts don't hit a payment surface that the rest of the
product won't support.

**What:** Conditionally render the `SubscriptionStep` based on
`ENABLE_PLATFORM_PAYMENT`. When the flag is off the wizard runs `Welcome
→ Role → PainPoints → Preparing` (3 user-interactive steps +
transition); when on, behavior is unchanged (`Welcome → Role →
PainPoints → Subscription → Preparing`).

**How:**
- `page.tsx` reads the flag, computes `totalSteps` (3 vs. 4) and
`preparingStep` (4 vs. 5), and only renders `SubscriptionStep` when the
flag is on.
- `useOnboardingPage.ts` threads the same `preparingStep` into the URL
`parseStep` clamp and into the "submit profile when entering Preparing"
effect, so both adapt to the flag state.
- The Zustand store is left unchanged — its hard `Math.min(5, …)` clamp
is unreachable in flag-off flow because PainPointsStep advances 3 → 4
(Preparing) and that's the terminal step.
- `playwright/utils/onboarding.ts`: with `NEXT_PUBLIC_PW_TEST=true`
LaunchDarkly returns `defaultFlags` (`ENABLE_PLATFORM_PAYMENT: false`),
so the helper now waits up to 2s for the Subscription header and only
clicks a plan CTA if the step is actually rendered.

### Changes 🏗️

- `autogpt_platform/frontend/src/app/(no-navbar)/onboarding/page.tsx` —
gate `SubscriptionStep` on `ENABLE_PLATFORM_PAYMENT`; derive
`totalSteps`/`preparingStep` from the flag.
-
`autogpt_platform/frontend/src/app/(no-navbar)/onboarding/useOnboardingPage.ts`
— make `parseStep` and the profile-submission effect respect the
flag-derived `preparingStep`.
- `autogpt_platform/frontend/src/playwright/utils/onboarding.ts` — make
the Subscription step optional in `completeOnboardingWizard` so E2E
works in both flag states.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] Existing onboarding unit tests pass (`pnpm test:unit` — 2447
passed, including `PainPointsStep`, `RoleStep`, `SubscriptionStep`,
store)
  - [x] `pnpm format`, `pnpm lint`, `pnpm types` clean
- [ ] Manual: with flag **off**, walk onboarding and confirm wizard goes
Welcome → Role → PainPoints → Preparing → /copilot, progress bar shows 3
steps
- [ ] Manual: with flag **on** (LD or
`NEXT_PUBLIC_FORCE_FLAG_ENABLE_PLATFORM_PAYMENT=true`), walk onboarding
and confirm SubscriptionStep is present at step 4, progress bar shows 4
steps
- [ ] Manual: with flag **off**, hit `/onboarding?step=5` directly and
confirm it clamps back to step 1 (no orphan Subscription state)
- [ ] Playwright: `completeOnboardingWizard` E2E flow continues to pass
under default `NEXT_PUBLIC_PW_TEST=true` (flag off path)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
(no config changes — flag already exists in LaunchDarkly +
`defaultFlags`)
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (none needed)

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 23:12:04 +07:00
John Ababseh
28ae7ebac8 feat(onboarding): add subscription plan selection step (#12935)
## Summary

Adds a new **Subscription Step** (Step 4) to the onboarding wizard,
allowing users to choose a plan (Pro, Max, or Team) before reaching the
"Preparing" step.

## Changes

### New files
- **`steps/SubscriptionStep.tsx`** — Full subscription UI with:
  - Three plan cards (Pro $50/mo, Max $320/mo, Team — coming soon)
- Monthly / yearly billing toggle (yearly shows annual total with 20%
discount, plus monthly equivalent)
- Country selector (28 Stripe-supported countries) that opens upward as
a search modal
  - Localized pricing using live exchange rates
- **`steps/countries.ts`** — Currency data module with exchange rates,
`formatPrice()` helper, and zero-decimal currency handling (JPY, KRW,
HUF, CLP)

### Modified files
- **`store.ts`** — Extended `Step` type to `1 | 2 | 3 | 4 | 5`, added
`selectedPlan` and `selectedBilling` state/actions
- **`page.tsx`** — Wired `SubscriptionStep` as Step 4, moved
`PreparingStep` to Step 5, adjusted progress bar and dot indicators
- **`useOnboardingPage.ts`** — Updated `parseStep` range to 1–5, profile
submission now triggers at Step 5

## Design decisions
- Follows existing component patterns: uses `FadeIn`, `Text`, `Button`
atoms, `cn()` utility, Phosphor icons
- Country selector opens **upward** to avoid clipping below the viewport
- Plan selection advances to Step 5 immediately (Stripe integration is
TODO)
- Exchange rates are hardcoded for now — should be fetched from an API
in production

## TODO
- [ ] Integrate with Stripe checkout / backend subscription API
- [ ] Fetch live exchange rates instead of hardcoded values
- [ ] Add responsive layout for mobile viewports

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2026-04-29 21:26:18 +07:00
Krzysztof Czerwinski
e0f9146d54 feat(platform): add Web Push notifications via VAPID for background delivery (#12723)
### Why / What / How

**Why:** When a user kicks off an AutoPilot task and leaves the platform
(closes the tab, switches to another page, or minimizes the browser),
they have no way of knowing when it completes unless they come back and
check. This breaks the "set it and forget it" promise of automation.

**What:** Adds Web Push notifications using the standard Push API
(VAPID). Push notifications are delivered through free browser vendor
services (Google FCM, Apple APNs, Mozilla Push) to a service worker —
even when all AutoGPT tabs are closed, as long as the browser process is
running. The system is generic and extensible to all notification types,
with copilot session completion as the first integration.

**How:**
- **Backend:** A new `PushSubscription` Prisma model stores per-user
push subscriptions. When a `NotificationEvent` is published to the Redis
notification bus, the existing `notification_worker` in `ws_api.py`
fires a tracked background `send_push_for_user()` task. This uses
`pywebpush` to call the browser push services with VAPID authentication.
Includes per-user TTL-bounded debounce (5s), per-user subscription cap
(20), 410/404 auto-cleanup, periodic scheduler-driven cleanup of
high-failure rows, and route-level SSRF rejection of untrusted
endpoints (a delivery sketch follows this list).
- **Frontend:** A `push-sw.js` service worker handles `push` events and
shows OS notifications via `self.registration.showNotification()`, with
click-to-navigate. A `PushNotificationProvider` mounted at the platform
layout registers the SW and subscription on all pages, posts the current
URL to the SW on every Next.js navigation (since Chrome's
`WindowClient.url` is stale for SPA routing), forwards the user's
notifications-toggle setting to the SW, and tears down on logout. The
copilot in-page notification path defers to the SW when a push
subscription is active so users don't get duplicate alerts.
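
Per-subscription delivery, roughly (a sketch around `pywebpush.webpush()`; the record shapes and cleanup hook are illustrative, not the real `push_sender` module):

```python
# Sketch around pywebpush's webpush(); the subscription shape and the cleanup
# hook are illustrative — the real push_sender adds debounce, re-validation,
# and regex-based status extraction for pywebpush versions without e.response.
import json
from pywebpush import webpush, WebPushException

def send_push(subscription_info: dict, payload: dict, vapid_private_key: str,
              claim_email: str, delete_subscription) -> None:
    try:
        webpush(
            subscription_info=subscription_info,  # endpoint + p256dh/auth keys
            data=json.dumps(payload),
            vapid_private_key=vapid_private_key,
            vapid_claims={"sub": f"mailto:{claim_email}"},
        )
    except WebPushException as exc:
        status = getattr(exc.response, "status_code", None)
        if status in (404, 410):
            # The push service says this subscription is gone — drop the row so
            # future fanouts stop retrying it.
            delete_subscription(subscription_info["endpoint"])
```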

### Behavior — when does an OS notification fire?

| Where the user is focused | Notifications toggle | OS notification? |
|---|---|---|
| Any `/copilot` page (any session, tab visible + browser focused) | on | suppressed — sidebar green check + title badge handle it |
| `/library` (or any non-`/copilot` route) | on | **fires** |
| `/copilot` but tab hidden (Cmd-Tab away, minimized, different tab) | on | **fires** |
| All AutoGPT tabs closed (browser process still running) | on | **fires** |
| Any state | off | suppressed |
| Anywhere | permission not granted / no push subscription | falls back to in-page `Notification()` if user is away on `/copilot`; nothing otherwise |

Click any OS notification → focuses an existing tab and navigates it to
`/copilot?sessionId=<id>`, or opens a new window if no AutoGPT tab is
open.

### Test plan

#### Setup
- [ ] Generate VAPID keys via the snippet in `backend/.env.default` and
set `VAPID_PRIVATE_KEY`, `VAPID_PUBLIC_KEY`, `VAPID_CLAIM_EMAIL` in
`backend/.env`
- [ ] Leave `NEXT_PUBLIC_VAPID_PUBLIC_KEY` unset on the frontend (single
source of truth via `/api/push/vapid-key`)
- [ ] Start backend + frontend, grant notification permission on the
copilot page
- [ ] Verify `push-sw.js` is "activated and is running" in DevTools →
Application → Service Workers
- [ ] Verify `POST /api/push/subscribe` created exactly one DB row in
`PushSubscription` for your user

#### Notification show / suppress matrix
- [ ] Trigger completion **on `/copilot` viewing the same session**, tab
visible + focused → no OS notification (sidebar green check appears)
- [ ] Trigger completion **on `/copilot` viewing a different session**,
tab visible + focused → no OS notification (still considered "in the
feature")
- [ ] Trigger completion **on `/library`**, tab visible + focused → OS
notification fires
- [ ] Trigger completion **on `/copilot`** but with the tab hidden
(Cmd-Tab to another app) → OS notification fires
- [ ] Trigger completion with all AutoGPT tabs closed → OS notification
fires (browser must still be running)
- [ ] Toggle notifications **off** in the copilot UI → trigger
completion → no OS notification
- [ ] Toggle notifications **back on** → trigger completion → OS
notification fires

#### Click behavior
- [ ] OS notification → click → focuses an existing AutoGPT tab and
navigates to `/copilot?sessionId=<id>`
- [ ] OS notification with no AutoGPT tab open → click → opens a new tab
on `/copilot?sessionId=<id>`

#### Lifecycle
- [ ] Logout → DB row removed, browser unsubscribed; no further OS
notifications until login + re-subscribe
- [ ] Stale subscription (e.g. unsubscribed externally) → backend gets
410 from FCM → row auto-deleted; second push attempts no longer fan out
to it

### Changes 🏗️

**Backend — New files:**
- `backend/data/push_subscription.py` — CRUD for push subscriptions:
`upsert` (with `MAX_SUBSCRIPTIONS_PER_USER` cap), `find_many`, `delete`,
`increment_fail_count`, `cleanup_failed_subscriptions`,
`validate_push_endpoint` (HTTPS + push-service hostname allowlist for
SSRF prevention)
- `backend/data/push_sender.py` — Fire-and-forget push delivery with
`cachetools.TTLCache`-bounded debounce, defense-in-depth re-validation
at send time, 410/404 auto-cleanup with regex-based status extraction
(covers pywebpush versions where `e.response` is unset)
- `backend/api/features/push/routes.py` — 3 endpoints: `GET
/api/push/vapid-key`, `POST /api/push/subscribe`, `POST
/api/push/unsubscribe` (all with `requires_user` auth and 400 on invalid
endpoints)
- `backend/api/features/push/model.py` — Pydantic models with
`min_length`/`max_length` constraints on endpoint and crypto keys

**Backend — Modified files:**
- `schema.prisma` — Added `PushSubscription` model + `User` relation
- `pyproject.toml` — Added `pywebpush ^2.3` dependency
- `backend/util/settings.py` — VAPID key fields on `Secrets`;
`push_subscription_cleanup_interval_hours` config
- `backend/api/rest_api.py` — Registered push router at `/api/push`
- `backend/api/ws_api.py` — Notification worker now fires
`send_push_for_user()` as a tracked background task (strong-ref set +
done callback so asyncio doesn't GC it mid-run)
- `backend/data/db_manager.py` — Exposed push subscription RPC methods
on the DB manager async client
- `backend/executor/scheduler.py` — Periodic
`cleanup_failed_push_subscriptions` job (default 24h)
- `backend/.env.default` — VAPID env vars with key generation snippet

**Frontend — New files:**
- `public/push-sw.js` — Service worker: routes pushes via
`NOTIFICATION_MAP`, suppresses when user is on `/copilot`, accepts
`CLIENT_URL` and `NOTIFICATIONS_ENABLED` postMessages so SW logic stays
in sync with SPA navigation and the toggle, click handler with focus →
navigate → openWindow fallback, `pushsubscriptionchange` re-subscribe
with `credentials: include`
- `src/services/push-notifications/registration.ts`, `api.ts`,
`helpers.ts` — SW registration / Push API subscription / backend API
helpers
- `src/services/push-notifications/usePushNotifications.ts` — Hook that
auto-subscribes on login and tears down on logout
- `src/services/push-notifications/useReportClientUrl.ts` — Posts
current pathname+search to SW on every Next.js route change (works
around stale `WindowClient.url`)
- `src/services/push-notifications/useReportNotificationsEnabled.ts` —
Forwards the user's notifications toggle to the SW
- `src/services/push-notifications/PushNotificationProvider.tsx` —
Mounts all three hooks at the platform layout level

**Frontend — Modified files:**
- `src/app/(platform)/layout.tsx` — Mounted `<PushNotificationProvider
/>`
- `src/app/(platform)/copilot/useCopilotNotifications.ts` — Skips
in-page `Notification()` when a SW push subscription is active (avoids
duplicate alerts)
- `src/services/storage/local-storage.ts` — Added
`PUSH_SUBSCRIPTION_REGISTERED` key
- `frontend/.env.default` — Optional `NEXT_PUBLIC_VAPID_PUBLIC_KEY`
(left unset by default to keep `/api/push/vapid-key` as the single
source of truth)

**Configuration changes:**
- New env vars: `VAPID_PRIVATE_KEY`, `VAPID_PUBLIC_KEY`,
`VAPID_CLAIM_EMAIL` (backend); optional `NEXT_PUBLIC_VAPID_PUBLIC_KEY`
(frontend)
- New `push_subscription_cleanup_interval_hours` setting (default 24,
range 1–168)
- New DB migration: `PushSubscription` table
(`20260420120000_add_push_subscription`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] All blockers and should-fixes from the autogpt-pr-reviewer review
have been addressed (see PR thread)
- [x] All inline review threads resolved (49 threads addressed)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 13:28:21 +00:00
Bently
c3c2737c42 feat(platform): copilot-bot (Python / discord.py) (#12618)
## Why

AutoPilot needs to reach users on chat platforms — Discord first,
Telegram / Slack / Teams / WhatsApp next. This PR adds the bot service
that bridges those platforms to the AutoPilot backend via the
`PlatformLinkingManager` AppService introduced in #12615.

Two independent linking flows (see #12615 for the rationale):

- **SERVER links**: first person to run `/setup` in a guild claims it.
Anyone in the server can mention the bot; all usage bills to the owner.
- **USER links**: an individual DMs the bot, links their personal
account, DMs bill to their own AutoPilot. A server owner still has to
link their DMs separately.

## What

A Python service using `discord.py`, living alongside the rest of the
backend. Connects to the platform_linking service via cluster-internal
RPC (no shared bearer token) and subscribes to copilot streams directly
on Redis (no HTTP SSE proxy).

Originally prototyped in Node.js with Vercel's Chat SDK — rewritten in
Python after team feedback: the rest of the platform is Python,
`discord.py` was already a dependency, and the Chat SDK's streaming-UI
abstractions don't apply to a headless chat bot.

### Deployment

- **Shares the existing backend Docker image** — no separate Dockerfile,
no separate Artifact Registry. A `copilot-bot` poetry script entry lets
the same image run with `command: ["copilot-bot"]` in the Helm chart.
- **Auto-starts with `poetry run app`** when
`AUTOPILOT_BOT_DISCORD_TOKEN` is set, so the full local dev stack
includes the bot without extra setup.
- **Runs standalone** via `poetry run copilot-bot` for the production
pod.

Infra PR:
[AutoGPT_cloud_infrastructure#310](https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/310).

### File layout

```
backend/copilot/bot/
├── app.py              # CoPilotChatBridge(AppService) + adapter factory + outbound @expose
├── config.py           # Shared (platform-agnostic) config
├── handler.py          # Core logic: routing, linking, batched streaming
├── platform_api.py     # Thin facade over PlatformLinkingManagerClient + stream_registry
├── platform_api_test.py
├── text.py             # split_at_boundary + format_batch
├── threads.py          # Redis-backed thread subscription tracking
├── README.md
└── adapters/
    ├── base.py         # PlatformAdapter ABC + MessageContext
    └── discord/
        ├── adapter.py  # Gateway connection, events, thread creation, buttons
        ├── commands.py # /setup, /help, /unlink
        └── config.py   # Discord token + message limits
```

**Locality rule:** anything platform-specific lives under
`adapters/<platform>/`. `app.py` is the only file that names specific
platforms — it's the factory that picks adapters based on which tokens
are set. Adding Telegram later = drop in `adapters/telegram/` with the
same shape.

### `CoPilotChatBridge` — now an `AppService`

Previously `AppProcess`. Now inherits `AppService`, runs its RPC server
on `Config.copilot_chat_bridge_port=8010`, and exposes two scaffolding
`@expose` methods for the backend→chat-platform direction:

- `send_message_to_channel(platform, channel_id, content)` — stub
- `send_dm(platform, platform_user_id, content)` — stub

Both currently raise `NotImplementedError` — they unlock the
architecture for future features (scheduled agent outputs piped to
Discord, etc.) without another structural change. A matching
`CoPilotChatBridgeClient` + `get_copilot_chat_bridge_client()` factory
lets other services call the bot by the same `AppServiceClient` pattern
used for `NotificationManager` and `PlatformLinkingManager`.

### Bot behaviour

- `/setup` — server only, ephemeral, returns a "Link Server" button.
Rejects DM invocations up front.
- `/help` — ephemeral usage info.
- `/unlink` — ephemeral, opens a "Settings" button pointing at
`AUTOGPT_FRONTEND_URL/profile/settings` (real unlinking needs JWT auth).
- **Thread per conversation**: @mentioning the bot in a channel creates
a thread and routes the reply there. Subsequent messages in that thread
don't need another @mention — thread subscriptions are tracked in Redis
with a 7-day TTL.
- **Batched follow-ups**: messages arriving mid-stream append to a
per-thread pending list; drained as a single follow-up turn when the
current stream ends.
- **Persistent typing indicator**: 8-second re-fire loop.
- **Per-user identity prefix**: every forwarded message tagged `[Message
sent by {name} (Discord user ID: ...)]`.
- **Platform-aware chunking**: long responses split at paragraph → line
→ sentence → word boundaries (1900 chars for Discord; see the sketch after this list).
- **Link buttons** for DM link prompts and `/setup` / `/unlink`
responses.
- **Duplicate message guard**: on `DuplicateChatMessageError` the bot
stays quiet — no double response.
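
A minimal sketch of the boundary-preferring split (illustrative — the real `text.py` implementation may differ):

```python
# Illustrative sketch of boundary-preferring chunking; the real split_at_boundary
# in text.py may differ. 1900 is the per-message budget the bot uses for Discord.
def split_at_boundary(text: str, limit: int = 1900) -> list[str]:
    separators = ["\n\n", "\n", ". ", " "]  # paragraph → line → sentence → word
    chunks: list[str] = []
    rest = text
    while len(rest) > limit:
        window = rest[:limit]
        cut = -1
        for sep in separators:
            cut = window.rfind(sep)
            if cut > 0:
                cut += len(sep)  # keep the separator with the leading chunk
                break
        if cut <= 0:
            cut = limit  # no natural boundary — hard cut
        chunks.append(rest[:cut].rstrip())
        rest = rest[cut:].lstrip()
    if rest:
        chunks.append(rest)
    return chunks
```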

### Env vars

| Variable | Purpose |
|----------|---------|
| `AUTOPILOT_BOT_DISCORD_TOKEN` | Discord bot token — enables the Discord adapter |
| `AUTOGPT_FRONTEND_URL` | Frontend base URL for link confirmation pages |
| `REDIS_HOST` / `REDIS_PORT` | Shared with backend — session + thread-subscription state + direct copilot stream subscription |
| `PLATFORMLINKINGMANAGER_HOST` | Cluster DNS name of the `PlatformLinkingManager` service (RPC target) |

Gone vs. the previous REST design: `AUTOGPT_API_URL`,
`PLATFORM_BOT_API_KEY`, `SSE_IDLE_TIMEOUT`.

## How

- **Adapter pattern**: `PlatformAdapter` ABC defines `start`, `stop`,
`send_message`, `send_link`, `start_typing`, `create_thread`,
`max_message_length`, `chunk_flush_at`, etc. Each platform implements
the interface; the shared `MessageHandler` calls through it.
- **Control plane over RPC**: `PlatformAPI` (~180 lines) is a thin
facade over `PlatformLinkingManagerClient` — `resolve_server`,
`resolve_user`, `create_link_token`, `create_user_link_token`,
`stream_chat`. The bot never constructs HTTP requests or handles an API
key.
- **Streaming over Redis Streams**: `stream_chat` calls
`start_chat_turn` (backend `@expose`), receives a
`ChatTurnHandle(session_id, turn_id, user_id, subscribe_from="0-0")`,
then subscribes directly via
`stream_registry.subscribe_to_session(...)`. Yields text from
`StreamTextDelta`, terminates on `StreamFinish`, surfaces
`StreamError.errorText` to the user. No SSE parsing, no X-Session-Id
header dance.
- **Error model**: backend domain exceptions (`NotFoundError`,
`LinkAlreadyExistsError`, `DuplicateChatMessageError`) cross the RPC
boundary cleanly (all `ValueError`-based, registered in
`backend.util.exceptions`). The bot catches them by type instead of
inspecting HTTP status codes.
- **Cooperative batching**: `TargetState.processing` flag + per-target
`pending` list. Messages arriving while `processing=True` append; the
running stream's finally block loops to drain the list before releasing.
- **Typing helper for `endpoint_to_async`**: added an `@overload` so
`async def` `@expose` methods on the server type-check correctly on the
client side (the scheduler pattern avoids this by using sync `@expose`,
but the new managers are async).

## Tests

- `backend/copilot/bot/platform_api_test.py` — new. Covers resolve
(server + user), create link tokens (success + `LinkAlreadyExistsError`
propagation), stream chat (yields deltas, terminates on `StreamFinish`,
surfaces `StreamError`, propagates `DuplicateChatMessageError` and
`NotFoundError`, handles `subscribe_to_session` returning `None`).
- `poetry run pyright backend/copilot/bot/` — clean.
- `poetry run ruff check backend/copilot/bot/` — clean.
- `poetry run copilot-bot` starts and connects to Discord Gateway, syncs
slash commands.
- `/setup` in a guild → confirm on frontend → mention bot → AutoPilot
streams back in a created thread.
- Thread follow-ups work without re-mentioning.
- Spamming messages mid-stream produces one batched follow-up.
- Long responses chunk at natural boundaries.
- DM to unlinked user → "Link Account" button → confirm → DMs stream as
that user's AutoPilot.

## Stack

- Backend API: #12615 — merge first
- Frontend link page: #12624
- Infra:
[AutoGPT_cloud_infrastructure#310](https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/310)

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
2026-04-29 08:12:15 +00:00
Abhimanyu Yadav
37f247c795 feat(frontend): creator dashboard page for settings v2 (SECRT-2281) (#12934)
### Why / What / How

**Why:** The creator dashboard route under settings v2 currently shows a
"Coming soon" placeholder. SECRT-2281 fills it in so creators can manage
their store submissions from one place.

**What:** Implements the full creator dashboard at
`/settings/creator-dashboard` — stats overview, desktop submissions
table, mobile submissions list, filtering/sorting, selection bar, edit
modal, and empty/loading/error states.

**How:** Page logic lives in `useCreatorDashboardPage.ts` (data fetch,
filter state, modal state, CRUD callbacks); pure transforms in
`helpers.ts`; UI broken into colocated `components/*` (one folder per
component, each ~200–400 lines). Reuses generated API hooks,
`ErrorCard`, and `EditAgentModal` from the design system. Mobile/desktop
split via Tailwind `md:` breakpoints rather than runtime detection.

### Changes 🏗️

- Replace placeholder `page.tsx` with the real dashboard, wired to
`useCreatorDashboardPage`
- Add `useCreatorDashboardPage.ts` (page-level state + handlers) and
`helpers.ts` (filter/sort/stat utilities)
- Add components: `DashboardHeader`, `DashboardSkeleton`, `EmptyState`,
`StatsOverview`, `SubmissionsList` (+ `columns/*`,
`useSubmissionSelection`), `SubmissionItem` (+ `useSubmissionItem`),
`SubmissionSelectionBar`, `MobileSubmissionsList` (+
`MobileSelectionBar`), `MobileSubmissionItem`, `ColumnFilter`
- Set document title to "Creator dashboard – AutoGPT Platform"
- Surface fetch errors via `ErrorCard` with retry; show
`DashboardSkeleton` while loading; show `EmptyState` when there are no
submissions

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Loading state renders skeleton until submissions load
  - [ ] Empty state renders when the creator has no submissions
  - [ ] Error state renders `ErrorCard` and retry refetches the list
  - [ ] Stats overview reflects approved/pending/rejected/draft counts
- [ ] Desktop list: sort/filter by status and other columns updates the
visible rows
- [ ] Desktop list: selection bar appears on row select and clears on
reset
- [ ] Mobile list (≤ md breakpoint): renders mobile items + selection
bar
- [ ] Edit modal opens for a submission, saves, and refreshes the list
on success
  - [ ] Delete action removes the submission and updates stats
  - [ ] View action navigates to the submission's public detail
  - [ ] Submit/publish entry point opens the publish modal
  - [ ] Document title shows "Creator dashboard – AutoGPT Platform"
2026-04-28 16:51:52 +00:00
Abhimanyu Yadav
ae4a421620 fix(platform): small fixes and stagger animations on settings pages (#12937)
## Why

The new Settings v2 surfaces (preferences, api-keys, integrations,
profile) shipped with a few rough edges spotted in self-review:

- **Timezone saves silently dropped on refresh.** Backend `GET
/auth/user/timezone` resolved the user via
`get_or_create_user(user_data)` (a 5-min in-process cache keyed by the
JWT-payload dict). `update_user_timezone` only invalidates
`get_user_by_id`'s cache, so the GET kept returning the pre-save tz
until TTL expired — looked exactly like "save did nothing."
- **Confusing "Looks like you're in X" CTA on the Time zone card** that
did nothing in the common case (server tz already matched the browser
tz, so clicking it produced no dirty state).
- **Save was disabled out of the gate when server tz was `"not-set"`** —
the hook substituted the browser tz into both `formState` and
`savedState`, so they were equal and `dirty` was false.
- **Lists felt static.** No motion when API keys / integrations mount,
and the loading skeletons popped in all at once instead of handing off
cleanly to the loaded rows.
- **Profile bio textarea** corner clipped against the rounded-3xl border
and the scrollbar overflowed the rounded container.

## What

### Bug fixes
- `GET /auth/user/timezone` now reads via `get_user_by_id(user_id)` —
the same cache `update_user_timezone` already invalidates — so a save
followed by refresh shows the new tz immediately.
- `usePreferencesPage` now treats the raw server tz (`"not-set"`
included) as the saved baseline, while `formState` uses the browser tz
only as a *display* fallback. Effect: when the user has never set a tz,
Save is enabled on first paint and a single click persists the detected
tz.
- Frontend save flow swapped `setQueryData` for `invalidateQueries`,
mirroring the older `/profile/(user)/settings` page so we always re-read
the persisted value.
- Removed the auto-detect "Looks like you're in X" button + its dead
helpers.

### Animations (per Emil Kowalski's guidelines)
Added orchestrated stagger animations that run on both the loaded list
**and** its skeleton, so the loading→loaded handoff is continuous
in-position:

- **API keys list + skeleton:** 280ms ease-out `cubic-bezier(0.16, 1,
0.3, 1)`, 40ms stagger, opacity + 6px translate.
- **Integrations list + skeleton:** 300ms ease-out, 80ms stagger,
opacity + 16px translate (rows are bigger / fewer).
- Both honor `prefers-reduced-motion` via `useReducedMotion`; only
`opacity` and `transform` are animated.

### Misc polish
- Profile bio textarea: `!rounded-tr-md` so the top-right corner doesn't
fight the surrounding `rounded-3xl`, plus a thin styled scrollbar
(`scrollbar-thin scrollbar-thumb-zinc-200
hover:scrollbar-thumb-zinc-300`) that lives inside the rounded container
instead of breaking out of it.

## How

| File | Change |
| --- | --- |
| `backend/api/features/v1.py` | `get_user_timezone_route` now uses `get_user_by_id` + `Security(get_user_id)` instead of `get_or_create_user(user_data)` |
| `frontend/.../preferences/usePreferencesPage.ts` | Split init into `initialFormState` (browser-fallback display) vs `initialSavedState` (raw server value); swap optimistic `setQueryData` for `invalidateQueries` after tz mutate |
| `frontend/.../preferences/components/TimezoneCard/TimezoneCard.tsx` | Drop `initialValue` prop, remove auto-detect button + unused imports |
| `frontend/.../preferences/page.tsx` | Drop `savedState`/`initialValue` wiring |
| `frontend/.../api-keys/components/APIKeyList/APIKeyList.tsx` | Wrap rows in container `motion.div` with `staggerChildren`; per-row `motion.div` with opacity + y variants |
| `frontend/.../api-keys/components/APIKeyListSkeleton/APIKeyListSkeleton.tsx` | Same stagger config so loading→loaded matches |
| `frontend/.../integrations/components/IntegrationsList/IntegrationsList.tsx` + `IntegrationsListSkeleton.tsx` | Same pattern for the providers list |
| `frontend/.../profile/components/ProfileForm/ProfileForm.tsx` | Tailwind classes only — `!rounded-tr-md` + `scrollbar-thin scrollbar-thumb-zinc-200 hover:scrollbar-thumb-zinc-300` |

## Test plan

- [ ] On `/settings/preferences`: pick a different tz → Save →
hard-refresh → new tz still selected.
- [ ] First-time user (server tz = `not-set`): land on page, Save button
should already be enabled; click Save → toast confirms; refresh → tz
persists.
- [ ] No "Looks like you're in X" button visible.
- [ ] On `/settings/api-keys`: rows fade/slide in staggered on first
mount; loading skeleton uses the same motion.
- [ ] On `/settings/integrations`: provider groups fade/slide in
staggered; skeleton matches.
- [ ] OS "Reduce motion" enabled → no transforms, content appears
instantly on all four surfaces.
- [ ] On `/settings/profile`: bio textarea top-right corner is no longer
hard-cornered against the card; scrollbar fits inside the rounded shape.
- [ ] Existing unit tests still pass: `pnpm test:unit
src/app/\(platform\)/settings/preferences` and `.../api-keys`.
2026-04-28 16:51:40 +00:00
Zamil Majdy
2879528308 feat(backend): Redis Cluster client support (#12900)
## Why

Pre-launch scaling. Redis is currently a single-master pod — a real
SPOF, and not scalable horizontally. To move it to a sharded Redis
Cluster (via KubeBlocks in GKE), the backend has to speak the cluster
protocol.

Keeping both "standalone" and "cluster" code paths would have local dev
not reflect prod. Going **cluster-only**.

## What

- `backend.data.redis_client` now always constructs `RedisCluster`
(sync) / `redis.asyncio.cluster.RedisCluster` (async). Type aliases
`RedisClient` / `AsyncRedisClient` point at the cluster classes.
- `RedisCluster` uses the existing `REDIS_HOST` / `REDIS_PORT` as a
startup node and auto-discovers peers via `CLUSTER SLOTS`.
- Classic Redis pub/sub is broadcast cluster-wide and redis-py's async
`RedisCluster` has no `.pubsub()`; dedicated `get_redis_pubsub[_async]`
helpers return plain `(Async)Redis` clients to the seed node. All
pub/sub callers (`event_bus`, `notification_bus`,
`copilot.pending_messages`) route through these helpers.
- `rate_limit.py` MULTI/EXEC pipelines are split per-counter — daily and
weekly counters hash to different slots, which `RedisCluster` correctly
rejects as `CrossSlotTransactionError`. Per-counter `INCRBY + EXPIRE`
atomicity is preserved; the counters are logically independent budgets (see the sketch after this list).
- `util/cache.py` shared-cache client is also `RedisCluster` now.
- Pre-existing mock-based unit tests updated; new `redis_client_test.py`
covers the swap.
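
The per-counter split, roughly (a sketch with illustrative key names and TTLs — not the real `rate_limit.py`):

```python
# Sketch of the per-counter split; key names and TTLs are illustrative.
# Each counter keeps its own INCRBY + EXPIRE transaction, so no single
# MULTI/EXEC ever touches keys from two hash slots.
def increment_usage(redis_client, user_id: str, amount: int = 1) -> list[int]:
    counters = [
        (f"rate:daily:{user_id}", 60 * 60 * 24),
        (f"rate:weekly:{user_id}", 60 * 60 * 24 * 7),
    ]
    totals: list[int] = []
    for key, ttl in counters:
        # One pipeline per counter — atomic for that counter, no cross-slot keys.
        pipe = redis_client.pipeline(transaction=True)
        pipe.incrby(key, amount)
        pipe.expire(key, ttl)
        total, _ = pipe.execute()
        totals.append(total)
    return totals
```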

## Local dev

`docker-compose.platform.yml` now runs **2-master Redis Cluster**
(`redis` + `redis-2`, 16384 slots split 0-8191 / 8192-16383). A one-shot
`redis-init` sidecar bootstraps it on first boot via raw `CLUSTER MEET`
+ `CLUSTER ADDSLOTSRANGE` (bundled `redis-cli --cluster create` enforces
a 3-node minimum).

This deliberately catches cross-slot bugs on a laptop rather than in
prod:

```
>>> ALL SMOKE TESTS PASS <<<
[sync] class: RedisCluster
[sync] 20 keys across slots: OK
[sync] colocated MULTI/EXEC: OK [5, 12, 1]
[sync] cross-slot MULTI/EXEC rejected as expected: CrossSlotTransactionError
[sync] EVAL single-key: OK
[sync] pub/sub (classic, broadcast): OK
[async] class: RedisCluster
[async] 15 keys across slots: OK
[async] colocated pipeline: OK
[async] pub/sub: OK
```

`rest_server` `/health` → 200, both shards have connected clients + keys
distributed 19/19 under the smoke run. `executor` boots + connects to
RabbitMQ + Redis cleanly.

For a 3-shard override (6 pods, with replicas) when you want to test
real KubeBlocks topology:
```
docker compose -f docker-compose.yml -f docker-compose.redis-cluster.yml up -d
```

## Deploy order (companion infra PR: [cloud_infrastructure#312](https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/312))

The existing `helm/redis` chart is updated in that PR to run as a
1-shard cluster (backwards-compatible toggle, default on). That rollout
must land before this PR's image goes live so the backend's
`RedisCluster` client has something to discover.

Sequence:
1. Infra: `helm upgrade redis` (1-shard cluster-enabled)
2. Infra: `helm upgrade rabbit-mq` (3-node cluster)
3. Backend: merge + deploy this PR
4. Follow-up: swap to KubeBlocks `redis-cluster` chart (3-shard sharded,
already staged in infra PR)

## Caveats / follow-ups

- Classic pub/sub via seed node means every node in the cluster sees
every message (broadcast). Fine at current volume; if it becomes hot,
migrate to `SPUBLISH`/`SSUBSCRIBE` (Redis 7+ sharded pub/sub).
- Per-user rate-limit counters (daily vs weekly) lost cross-counter
transactionality, but per-counter atomicity is preserved — the two
counters are independent budgets so no correctness regression.
- Local 2-master cluster crashes lose the cluster state; `redis-init`
idempotently rebootstraps.

## Checklist

- [x] Lint + format pass (`poetry run format` + `poetry run lint`)
- [x] Unit tests pass — `redis_client_test`, `redis_helpers_test`,
`event_bus_test`, `pending_messages_test`, `rate_limit_test`,
`cluster_lock_test`
- [x] Live smoke against 2-master cluster — sync + async; MULTI/EXEC;
EVAL; pub/sub; cross-slot rejection
- [x] Full stack smoke — `rest_server` /health, `executor` boot, keys
distributed across both shards
- [ ] Dev deploy (pending infra PR merge + manual validation)
2026-04-28 22:21:23 +07:00
Ubbe
1974ec6260 fix(frontend/copilot): fix streaming reconnect races, hydration ordering, and reasoning split (#12813)
## Summary

Improves Copilot/AutoPilot streaming reliability across frontend and
backend. The diff now covers the original streaming investigation issues
plus follow-up CI and review fixes from the latest merge with `dev`.

Addresses [SECRT-2240](https://linear.app/autogpt/issue/SECRT-2240),
[SECRT-2241](https://linear.app/autogpt/issue/SECRT-2241), and
[SECRT-2242](https://linear.app/autogpt/issue/SECRT-2242).

## Changes

- Fixes reasoning vs response rendering so action tools such as
`run_block` and `run_agent` do not cause assistant response text to be
hidden inside the collapsed reasoning section.
- Reworks Copilot session lifecycle handling: active-stream hydration,
resume ordering, reconnect timeout recovery, wake resync, session
deletion, title polling, stop handling, and session-switch stale
callback guards.
- Adds a per-session Copilot stream store/registry and transport helpers
to prevent duplicate resumes, duplicate sends, and cross-session
contamination during reconnect or reload flows.
- Adds pending follow-up message support and backend pending-message
safeguards, including sanitization of queued user content and
requeue-on-persist-failure behavior.
- Improves backend stream and executor robustness: active stream
registry checks, bounded cancellation drain with sync fail-close
fallback, Redis helper coverage, and updated SDK response adapter
expectations for post-tool status events.
- Adds and polishes usage-limit UI, including reset gate behavior,
backdrop blending behind the usage-limit card, and usage panel/card
coverage.
- Fixes a chat input Enter-submit race where Playwright and fast users
could fill the textarea and press Enter before React had re-enabled the
submit button, causing the visible message not to send.
- Refactors the Copilot page into smaller hooks/components and adds
focused tests around stream recovery, hydration, pending queueing,
rate-limit gates, and message rendering.

## Test plan

- [x] `poetry run format`
- [x] `poetry run pytest backend/copilot/sdk/response_adapter_test.py
backend/copilot/executor/processor_test.py`
- [x] `pnpm prettier --write` on touched frontend files
- [x] `pnpm vitest run
src/app/(platform)/copilot/components/ChatInput/__tests__/useChatInput.test.ts`
- [x] `pnpm types`
- [x] `pnpm lint` (passes with existing unrelated `next/no-img-element`
warnings)
- [ ] Full GitHub CI after latest push

## Review notes

- The Sentry review thread about unbounded cancellation cleanup is
addressed in `375ec9d5f`: cancellation now waits for normal async
cleanup but exits after `_CANCEL_GRACE_SECONDS` and falls through to the
sync fail-close path.
- The previous backend CI failures were stale test expectations around
the new `StreamStatus("Analyzing result…")` event after tool output;
tests now assert that event explicitly.
- The previous full-stack E2E failure was the Copilot input Enter race;
the input now submits from the live form value instead of depending on a
possibly stale disabled button state.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-04-28 15:40:37 +07:00
Zamil Majdy
932ecd3a07 fix(backend/copilot): normalize model name based on actual transport, not config shape (#12932)
## Summary

When `CHAT_USE_CLAUDE_CODE_SUBSCRIPTION=true` is paired with a populated
`CHAT_BASE_URL=https://openrouter.ai/api/v1` (e.g. left over from an
earlier OpenRouter setup), the SDK was passing the OpenRouter slug
`anthropic/claude-opus-4.7` straight through to the Claude Code CLI
subprocess. The CLI uses OAuth and ignores
`CHAT_BASE_URL`/`CHAT_API_KEY`, so it rejects the slug:

> There's an issue with the selected model (anthropic/claude-opus-4.7).
It may not exist or you may not have access to it.

The bug was in `_normalize_model_name`, which gated on
`config.openrouter_active` (config-shape check) instead of the transport
the CLI actually uses for the turn.

## Changes

- Add `ChatConfig.effective_transport` property returning `subscription`
| `openrouter` | `direct_anthropic`, detected in that priority order.
Subscription wins over OpenRouter config because the CLI subprocess uses
OAuth and ignores the OpenRouter env vars (see `build_sdk_env` mode 1).
- Switch `_normalize_model_name` to gate on `effective_transport`.
Subscription and direct-Anthropic transports both produce the
CLI-friendly hyphenated form (`claude-opus-4-7`) and reject
non-Anthropic vendors loudly.
- `_resolve_sdk_model_for_request` already routes any LD-served override
through `_normalize_model_name`, so a per-user advanced-tier override
under subscription now correctly becomes `claude-opus-4-7` instead of
the OpenRouter slug. The standard-tier "no LD override → return None"
behaviour is preserved.
- Update two existing service tests to assert the corrected behaviour
(Kimi LD override under subscription falls back to tier default
normalised for the CLI; Opus advanced override returns hyphenated form).

## Test plan

- [x] `poetry run pytest backend/copilot/sdk/service_helpers_test.py
backend/copilot/sdk/service_test.py backend/copilot/config_test.py -v` —
165 passed.
- [x] `poetry run pytest backend/copilot/sdk/env_test.py
backend/copilot/sdk/p0_guardrails_test.py` — 136 passed (other call
sites of `openrouter_active` unchanged).
- [x] `poetry run ruff format` + `ruff check` clean on touched files.

### New tests added (service_helpers_test.py)

- Subscription transport with OpenRouter base URL set + advanced-tier LD
override → returns `claude-opus-4-7` (not the OpenRouter slug, not
None).
- Subscription transport with OpenRouter base URL set + standard-tier no
override → returns None (existing behaviour preserved).
- Subscription transport rejects non-Anthropic vendor (`moonshotai/...`)
→ ValueError.
- `effective_transport` returns `subscription` when subscription is on
regardless of OpenRouter config; returns `openrouter` when subscription
is off and OpenRouter is fully configured; returns `direct_anthropic`
otherwise.
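
For illustration, a minimal sketch of the detection and normalization described above; the `ChatConfig` field names and the "fully configured" OpenRouter check are assumptions, not the real config surface:

```python
from dataclasses import dataclass


@dataclass
class ChatConfig:
    # Field names are illustrative; the real ChatConfig lives in backend/copilot/config.py.
    use_claude_code_subscription: bool = False
    chat_base_url: str = ""
    chat_api_key: str = ""

    @property
    def effective_transport(self) -> str:
        # Priority order from the PR: subscription wins even when OpenRouter config
        # is populated, because the CLI subprocess uses OAuth and ignores
        # CHAT_BASE_URL / CHAT_API_KEY.
        if self.use_claude_code_subscription:
            return "subscription"
        if self.chat_base_url and self.chat_api_key:  # "fully configured" check is assumed
            return "openrouter"
        return "direct_anthropic"


def _normalize_model_name(model: str, config: ChatConfig) -> str:
    # Gate on the transport actually used for this turn, not on config shape.
    if config.effective_transport == "openrouter":
        return model  # OpenRouter accepts the vendor-prefixed slug as-is
    if "/" in model:
        vendor, _, model = model.partition("/")
        if vendor != "anthropic":
            raise ValueError(f"{vendor}/{model} is not usable via the Claude Code CLI")
    # anthropic/claude-opus-4.7 -> claude-opus-4-7 (CLI-friendly hyphenated form)
    return model.replace(".", "-")
```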
2026-04-28 11:40:31 +07:00
Zamil Majdy
4a567a55a4 fix(backend/copilot): pause idle timer during pending tools (#12927)
## Summary

Pause the SDK idle timer while a tool call is pending, with a 2-hour
hung-tool cap as backstop. Fixes SECRT-2239 — long-running tools (10+
min, e.g. sub-agent execution) were being silently aborted by the
10-minute idle timeout introduced in #12660.

## What changed (backend only)

- `_IDLE_TIMEOUT_SECONDS = 1800` (30 min) — soft cap when no tool
pending (raised from 10 min)
- `_HUNG_TOOL_CAP_SECONDS = 7200` (2 h) — hard cap when a tool is
pending; protects against truly hung tool calls without false-aborting
legitimate long-running ones
- `_idle_timeout_threshold(adapter)` — returns the appropriate threshold
based on whether any tool is currently pending in the adapter

Backed by 7 regression tests in
`service_test.py::TestIdleTimeoutThreshold`.
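
A minimal sketch of the threshold selection, with the constants taken from above and the pending-tool check on the adapter assumed:

```python
_IDLE_TIMEOUT_SECONDS = 1800    # 30 min soft cap when no tool call is pending
_HUNG_TOOL_CAP_SECONDS = 7200   # 2 h hard cap while a tool call is pending


def _idle_timeout_threshold(adapter) -> int:
    """Pick the idle-timeout threshold for the current stream.

    While a tool call is pending, the longer hung-tool cap applies so legitimate
    long-running tools (e.g. sub-agent execution) are not aborted; otherwise the
    normal idle timeout applies. `pending_tool_calls` is an assumed attribute name.
    """
    has_pending_tool = bool(getattr(adapter, "pending_tool_calls", None))
    return _HUNG_TOOL_CAP_SECONDS if has_pending_tool else _IDLE_TIMEOUT_SECONDS
```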

## Frontend coordination

The original cherry-pick batch included a `useStreamActivityWatchdog`
hook for client-side wire-silence detection. That hook is dropped from
this PR because it overlaps with Lluis's #12813, which ships the same
component as part of a comprehensive copilot streaming refactor. End
state on dev: his PR contributes the watchdog, this PR contributes the
backend pause + cap.

## Test plan

- 7/7 unit tests in
`backend/copilot/sdk/service_test.py::TestIdleTimeoutThreshold` pass
- pyright clean on `service.py` + `service_test.py`
- /pr-test --fix posted with native-stack run + screenshots:
https://github.com/Significant-Gravitas/AutoGPT/pull/12927#issuecomment-4328320714

## Linear

SECRT-2239
2026-04-28 09:07:16 +07:00
John Ababseh
2b28434786 feat(platform/backend): Filter store creators with approved agents (#10014)
Filtering store creators to only show profiles with an approved agent
keeps the marketplace focused on usable inventory and prevents empty
creator cards.
 
### Changes 🏗️
 
- add a `num_agents > 0` filter to `get_store_creators`
- add a regression test ensuring we only return creators with approved
agents
- keep the existing SQL injection regression tests intact after rebasing
onto `dev`
 
### Checklist 📋
 
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] python3 -m pytest
autogpt_platform/backend/backend/server/v2/store/db_test.py -k
get_store_creators_only_returns_approved *(blocked: repo environment
lacks pytest and related deps)*
 
<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Filter `get_store_creators` to creators with `num_agents > 0` and add
a test to validate the behavior.
> 
> - **Store backend**:
> - Update `get_store_creators` in `backend/server/v2/store/db.py` to
filter creators by `num_agents > 0`.
> - **Tests**:
> - Add `test_get_store_creators_only_returns_approved` in
`backend/server/v2/store/db_test.py` to verify filtering and pagination
count calls.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
c2fca584cce5a8c26dbdadd68696a0033642f193. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-27 22:01:24 +00:00
Zamil Majdy
5d1cdc2bad fix(backend/copilot): surface empty-success ResultMessage as stream error (SECRT-2252) (#12926)
## Summary

- Detect ghost-finished sessions where the SDK returns a `ResultMessage`
with `subtype="success"`, empty `result`, no produced content, and
`output_tokens == 0`.
- Emit `StreamError(code="empty_completion")` instead of silently
calling `StreamFinish`, so the caller (and the user) sees the failure.

## Background

Linear: [SECRT-2252](https://linear.app/agpt/issue/SECRT-2252) — SDK
silent empty completion not retried, leaving the user with a blank
stream (`start -> start-step -> finish-step -> finish`).

## Changes

- `response_adapter.py::convert_message`: in the `ResultMessage` branch,
check `_is_empty_completion()` before falling through to the existing
success path. When matched, close any open step, emit `StreamError`, and
skip `StreamFinish`.
- `response_adapter.py::_is_empty_completion`: new helper that returns
`True` only when `result` is falsy, no text/reasoning was emitted, no
tool calls were registered, no tool results were seen, and
`usage["output_tokens"]` is `0`.
- `response_adapter_test.py`: 4 new unit tests covering empty-success
(None and empty-string variants), non-empty success, and the
non-empty-tokens-but-empty-result fallthrough.
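
A minimal sketch of that check, assuming illustrative attribute names for the adapter state:

```python
def _is_empty_completion(result, state, usage: dict) -> bool:
    """Return True for a ghost-finished turn: success subtype but nothing produced.

    All conditions from this PR must hold; the attribute names on `state` are
    illustrative, not the real adapter fields.
    """
    return (
        not result                            # empty / None result payload
        and not state.emitted_text            # no assistant text streamed
        and not state.emitted_reasoning       # no reasoning streamed
        and not state.registered_tool_calls   # no tool calls registered
        and not state.seen_tool_results       # no tool results seen
        and usage.get("output_tokens", 0) == 0
    )
```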

## Out of scope (per ticket)

- Retry-once behavior. This PR only surfaces the error; the caller
decides retry semantics. Follow-up work can wire automatic retry on
`code="empty_completion"`.

## Test plan

- [x] `poetry run pytest backend/copilot/sdk/response_adapter_test.py` —
all 58 tests pass (4 new + 54 existing).
- [x] `poetry run pyright backend/copilot/sdk/response_adapter.py
backend/copilot/sdk/response_adapter_test.py` — clean.

## Checklist

- [x] My code follows the style of this project.
- [x] I have added tests covering my changes.
- [x] I have updated the documentation accordingly. (N/A — internal
adapter behavior)
2026-04-27 17:24:57 +00:00
Abhimanyu Yadav
3c08b90500 feat(frontend): preferences v2 page (SECRT-2279) (#12925)
### Why / What / How

**Why** — Settings v2 needs a dedicated Preferences page covering
account info (email, password reset), time zone, and notification
preferences with a single, predictable save flow. The existing legacy
page at `/profile/(user)/settings` mixes concerns and uses `__legacy__`
UI primitives we are migrating away from. SECRT-2279.

**What** — A new preferences page at `/settings/preferences` built from
atomic / molecular design-system components. Three cards (Account, Time
zone, Notifications) share one inline Save / Discard bar at the bottom.
The page replaces the legacy settings page from a UX standpoint while
keeping the same backend mutations.

<img width="1511" height="899" alt="Screenshot 2026-04-27 at 6 43 09 PM"
src="https://github.com/user-attachments/assets/5762fc41-1654-4764-8fbf-d5dd262e031a"
/>

**How**
- **Page composition (`page.tsx`)** uses a single `usePreferencesPage`
hook that owns dirty/saved/form state. Renders `PreferencesHeader`,
`AccountCard`, `TimezoneCard`, `NotificationsCard`, and the inline
`SaveBar` below the cards.
- **Account card** — Email row shows the current address with a compact
pencil button that opens a width-constrained `Dialog` for editing
(Cancel / Update). Password row is a `NextLink` button that routes to
`/reset-password`.
- **Time zone card** — Single row inside a card: label + info-icon
tooltip on the left; small-size `Select` and the GMT offset chip on the
right. The "auto-detect" prompt shows up only when the saved tz differs
from the browser tz, rendered as a pill on the right side of the card.
- **Notifications card** — Tabs for Agents / Marketplace / Credits;
toggling a switch flips a flag in formState and enables Save.
- **Save flow** — `usePreferencesPage` keeps a `savedState` snapshot
(mirrors the old `react-hook-form` `defaultValues` capture-once
semantics) so the dirty check is fully decoupled from any backend GET
refetch. After a successful mutation, `savedState ← formState`, and the
timezone query cache gets an optimistic `setQueryData` write so the
value isn't snapped back by the (cached) GET endpoint.
- **Skeleton** — `PreferencesSkeleton` mirrors the real layout — header,
Account card with the two row shapes, Time zone single row,
Notifications tabs + toggle rows, and the Save / Discard buttons.
- **Sidebar** — Renames the entry "Settings" → "Preferences" with
`SlidersHorizontalIcon`. Profile and Creator Dashboard get clearer
affordances (`UserIcon`, `ChartLineUpIcon`).

### Changes 🏗️

- New page: `src/app/(platform)/settings/preferences/page.tsx` and
`usePreferencesPage.ts`
- New components: `AccountCard`, `TimezoneCard`, `NotificationsCard`,
`PreferencesHeader`, `PreferencesSkeleton`, `SaveBar`
- Helpers: `helpers.ts` (timezones list, GMT-offset formatter, dirty
utilities, notification-group definitions)
- Sidebar: rename "Settings" → "Preferences", swap to
`SlidersHorizontalIcon` + cleaner Profile / Creator Dashboard icons
(`SettingsSidebar/helpers.ts`)
- Tests:
- `preferences/__tests__/main.test.tsx` — page render, edit-email dialog
open/cancel, time zone info trigger, Save/Discard disabled-on-clean,
notification toggle → save submission, discard revert
- Updated `SettingsSidebar.test.tsx` and `SettingsMobileNav.test.tsx`
for the "Preferences" label

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Sidebar shows "Preferences" with the new icon; clicking routes to
`/settings/preferences`
- [ ] Account card: pencil button opens a 420px-wide dialog; Update
button stays disabled until the email differs and is valid; Cancel
closes the dialog
- [ ] Password row: "Reset password" button navigates to
`/reset-password`
- [ ] Time zone: changing the select enables Save; clicking Save
persists the value and the form keeps showing the saved value (does not
snap back), the inline GMT offset chip updates, info tooltip appears on
hover
- [ ] Auto-detect prompt appears only when saved tz ≠ browser tz;
clicking it sets the select to the browser tz
- [ ] Notifications: toggling any switch enables Save; saving flips the
flag in the request body; switching tabs preserves toggles; Discard
reverts unsaved toggles
- [ ] Save / Discard render below the cards on the right and stay
disabled until any field is dirty
  - [ ] Loading state shows the new skeleton that mirrors the layout
- [ ] `pnpm test:unit` passes (covers the page-level integration tests
above)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-04-27 15:13:51 +00:00
Abhimanyu Yadav
599f370206 feat(frontend): add settings v2 profile page (#12924)
### Why / What / How

**Why:** SECRT-2278 — settings v2 needs a Profile page so users can
manage how they appear on the marketplace (display name, handle, bio,
links, avatar) without hopping into the legacy settings UI.

**What:** Adds the `/settings/profile` page end-to-end:
- Form fields for display name, handle, bio, avatar, and up to 5 links —
wired to the `getV2GetUserProfile`, `postV2UpdateUserProfile`, and
`postV2UploadSubmissionMedia` endpoints.
- Bio editor gets a markdown toolbar (bold / italic / strikethrough /
link / bulleted list) with a live preview toggle that renders via
`react-markdown` + `remark-gfm`.
- Save/Discard bar with full validation (handle regex, bio length, dirty
tracking) and toast feedback for success and failure paths.
- Forward refs through the `Input` atom so consumers can target the
underlying `<textarea>` / `<input>` (needed for the toolbar's
selection/cursor manipulation).
- Comprehensive integration tests (Vitest + RTL + MSW) at the page level
plus pure-helper unit tests, in line with the project's
"integration-first" testing strategy. Coverage is reported via
`cobertura` for Codecov.

**How:**
- The toolbar applies markdown syntax by reading `selectionStart` /
`selectionEnd` from a forwarded textarea ref. To avoid the textarea
jumping to the top on click: buttons `preventDefault` on `mousedown` (so
the textarea keeps focus), and the handler captures `scrollTop` before
mutation and restores it (with `focus({ preventScroll: true })`) after
React commits the new value in the next animation frame.
- The preview pane styles markdown elements via Tailwind arbitrary child
selectors (`[&_ul]:list-disc` etc.) instead of pulling in
`@tailwindcss/typography`, since the plugin isn't installed and the
project's `prose` usage was a no-op.
- Profile data hydration tolerates nullish API fields by mapping through
`profileToFormState`, padding `links` to 3 slots so the UI always has
the initial layout.
- Tests use Orval-generated MSW handlers from `store.msw.ts`, mock
`useSupabase` to inject an authenticated user, and assert UI behavior
via Testing Library queries.

### Changes 🏗️

- New: `app/(platform)/settings/profile/__tests__/helpers.test.ts`,
`__tests__/main.test.tsx`
- Updated: `settings/profile/page.tsx`, `useProfilePage.ts`,
`helpers.ts`, plus `ProfileForm`, `ProfileHeader`, `ProfileSkeleton`,
`LinksSection`, `SaveBar`
- Updated: `settings/layout.tsx` (settings v2 chrome adjustments to host
the profile page)
- Atom change: `components/atoms/Input/Input.tsx` now forwards refs
(`HTMLInputElement | HTMLTextAreaElement`) — backward-compatible for
existing consumers

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open `/settings/profile` and confirm the page hydrates the
existing display name, handle, bio, avatar, and links
- [x] Edit each field and verify validation messages (empty name,
invalid handle with spaces, bio over 280 chars)
- [x] Bio markdown toolbar: select text and click Bold / Italic / Strike
— selection wraps, cursor stays in place, textarea does not scroll to
top
- [x] Bio toolbar with no selection: each button inserts the markdown
placeholder template
- [x] Click Bulleted list twice on the same line — line is prefixed with
`- ` only once
- [x] Toggle Preview — bio renders bullets, bold, italic, strikethrough,
links correctly; toolbar buttons dim and become inert
  - [x] Toggle Edit — textarea returns with the same content
- [x] Add link → 4th and 5th slots appear; the 6th attempt is blocked by
the "Limit of 5 reached" button label
  - [x] Remove link 1 — the rest reorder correctly
- [x] Avatar upload — happy path replaces the avatar; failure path
surfaces a destructive toast
- [x] Save with valid data → success toast, query invalidates, save
button disables until next edit
  - [x] Save with a server 422 → destructive toast, no state corruption
  - [x] Discard reverts every field back to the loaded profile
- [x] `pnpm test:unit` passes locally; `coverage/cobertura-coverage.xml`
shows ≥ 80% line coverage for `src/app/(platform)/settings/profile/**`
2026-04-27 10:34:09 +00:00
Otto
8786c00f9c feat(blocks): add Claude Opus 4.7 model support (#12826)
Requested by @Bentlybro

Anthropic released [Claude Opus
4.7](https://www.anthropic.com/news/claude-opus-4-7) today. This PR adds
it to the platform's supported model list.

## Why

Users and developers need access to `claude-opus-4-7` via the platform's
LLM block and API. The model is available on Anthropic's API today.

## What

- Adds `CLAUDE_4_7_OPUS = "claude-opus-4-7"` to the `LlmModel` enum
- Adds corresponding `ModelMetadata` entry: 200k context, 128k output,
price tier 3 ($5/M input, $25/M output — same as Opus 4.6)

## How

Two lines added to `llm.py`, following the exact same pattern as all
other Anthropic model additions. No migrations, no frontend changes
needed — the frontend reads model metadata from the backend's JSON
schema endpoint automatically.
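
Roughly what those two lines amount to, sketched with an assumed `ModelMetadata` shape rather than the real one in `llm.py`:

```python
from enum import Enum
from typing import NamedTuple


class ModelMetadata(NamedTuple):
    # Illustrative shape only; the real ModelMetadata in llm.py differs.
    provider: str
    context_window: int
    max_output_tokens: int


class LlmModel(str, Enum):
    # existing entries elided; the new entry:
    CLAUDE_4_7_OPUS = "claude-opus-4-7"


MODEL_METADATA: dict[LlmModel, ModelMetadata] = {
    # 200k context, 128k output; billed at price tier 3 ($5/M in, $25/M out),
    # same as Opus 4.6.
    LlmModel.CLAUDE_4_7_OPUS: ModelMetadata("anthropic", 200_000, 128_000),
}
```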

Closes SECRT-2248

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
2026-04-27 07:00:00 +00:00
SymbolStar
384cbd3ccd fix(frontend): redirect www to non-www with 308 to preserve request method (#9188) (#12920)
## Summary
Fixes #9188 — redirects `www.` to non-www to prevent cookie/auth domain
mismatch.

Uses **308** (Permanent Redirect) instead of 301, which preserves the
HTTP method and request body. This is important because the middleware
matcher runs on `/auth/authorize` and `/auth/integrations/*`, where
OAuth callbacks may use POST.

Split from #12895 per reviewer request.

## Changes
- Added www→non-www redirect in Next.js middleware using
`NextResponse.redirect(url, 308)`

---------

Co-authored-by: majdyz <zamil.majdy@agpt.co>
2026-04-27 06:35:56 +00:00
SymbolStar
8be9cf70af fix(frontend): filter null query params in buildUrlWithQuery (#11237) (#12921)
## Summary
Fixes #11237 — `buildUrlWithQuery` now filters out `null` values in
addition to `undefined`, preventing them from being serialized as
literal `"null"` strings in URL query parameters.

Split from #12895 per reviewer request.

## Changes
- Added `value !== null` check alongside existing `value !== undefined`
in `buildUrlWithQuery`

---------

Co-authored-by: majdyz <zamil.majdy@agpt.co>
2026-04-27 06:31:34 +00:00
Abhimanyu Yadav
a723966e0b feat(platform): settings v2 integrations page + provider description SDK (#12911)
### Why / What / How

**Why:** The settings v2 integrations surface renders from a hardcoded
`MOCK_PROVIDERS` array — no real user credentials, no delete, no way to
connect a new service. Provider display metadata (descriptions +
supported auth types) was scattered across frontend maps with no backend
source of truth, leaving each new provider to be manually registered in
two places.

**What:** Full-featured settings v2 integrations page driven by live
backend data, plus a backend SDK extension so every provider carries a
description **and** declares its supported auth types. Settings UI uses
both to render the connect-a-service dialog: descriptions in the list,
auth types to pick the right tabs in the detail view.

**How:**
- **Credentials list** — single-fetch via `useGetV1ListCredentials`,
grouped client-side by provider, debounced (250 ms) Unicode-normalized
in-memory search (no roundtrip per keystroke), managed/system creds
filtered via the shared `filterSystemCredentials` helper from
`CredentialsInput`. Loading → skeletons that mirror the real accordion
shape, error → `ErrorCard` with retry, empty → `IntegrationsListEmpty`
with custom marquee illustration.
- **Delete flow** — `useDeleteIntegration` returns per-target `succeeded
/ failed / needsConfirmation` so the UI can name failed items and keep
them selected for one-click retry. Single + bulk both gated by
`DeleteConfirmDialog`. Per-row delete button disables + shows a spinner
via `isDeletingId` so double-clicks can't fire two requests. Success
toast names the credential ("Removed GitHub key").
- **Connect-a-service dialog** — backend-driven (`useGetV1ListProviders`
returns `ProviderMetadata[]` with description + supported_auth_types),
Emil-spec animations (150 ms ease-out step swap, 200 ms ease-out height
resize, 180 ms entry fade+slide+blur on tab swap, all respecting
`prefers-reduced-motion`). Detail view picks tab order via deterministic
`TAB_PRIORITY` (oauth → api_key → user_password → host_scoped) and
remembers last-selected tab per provider for the session.
- **OAuth tab** → `openOAuthPopup` + `getV1InitiateOauthFlow` →
`postV1ExchangeOauthCodeForTokens`
- **API key tab** → zod-validated form with
`autoComplete="new-password"` + `spellCheck=false` so browsers don't
autofill the wrong stored key → `postV1CreateCredentials`
- **Provider metadata SDK** — chainable
`ProviderBuilder.with_description(...)` +
`.with_supported_auth_types(...)` (the latter populated automatically by
`with_oauth` / `with_api_key` / `with_managed_api_key` /
`with_user_password`; explicit form reserved for legacy providers whose
auth lives outside the builder chain). `GET /integrations/providers`
upgraded from `List[str]` → `List[ProviderMetadata]` carrying both
fields.
- **Backward compat** — `BackendAPI.listProviders()` maps the new
`ProviderMetadata[]` shape down to `string[]` so the deprecated
`CredentialsProvider` (used by the builder/library credential pickers)
keeps working without ripple changes.
- **Routing** — page lives at `/settings/integrations` directly. No
feature flag gate (settings v2 layout is already on dev).

### Changes 🏗️

**Backend**
- `backend/sdk/builder.py` — `with_description()` +
`with_supported_auth_types()` chain methods; the latter is
auto-populated by every existing auth-method chain call so explicit
declaration is only needed for legacy providers.
- `backend/sdk/provider.py` — `description` + `supported_auth_types`
fields on `Provider`.
- `backend/api/features/integrations/router.py` — `GET /providers` now
returns `List[ProviderMetadata]`; calls `load_all_blocks()` (cached
`@cached(ttl_seconds=3600)`) before reading `AutoRegistry`.
- `backend/api/features/integrations/models.py` — `ProviderMetadata`
with `name + description + supported_auth_types`;
`get_provider_description` + `get_supported_auth_types` helpers reading
from `AutoRegistry`.
- 13 existing `_config.py` files updated with `.with_description(...)`:
agent_mail, airtable, ayrshare, baas, bannerbear, dataforseo, exa,
firecrawl, linear, stagehand, wordpress, wolfram, generic_webhook.
- 20 new `_config.py` files (one per provider block dir): apollo,
compass, discord, elevenlabs, enrichlayer, fal, github, google, hubspot,
jina, mcp, notion, nvidia, replicate, slant3d, smartlead, telegram,
todoist, twitter, zerobounce. Each declares
`with_supported_auth_types(...)` because their auth handlers live in
`backend/integrations/oauth/` (legacy) or are block-level
`CredentialsMetaInput` declarations — outside the builder chain.
- 1 new `backend/blocks/_static_provider_configs.py` registering
description + auth types for ~24 providers that live in shared files
(openai, anthropic, groq, ollama, open_router, v0, aiml_api, llama_api,
reddit, medium, d_id, e2b, http, ideogram, openweathermap, pinecone,
revid, screenshotone, unreal_speech, webshare_proxy, google_maps, mem0,
smtp, database). Comment documents the migration path (each entry
retires when the provider graduates to its own `_config.py`).
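
For reference, the new list-entry shape sketched as a standalone Pydantic model; the field names come from this PR, the example values are made up:

```python
from pydantic import BaseModel


class ProviderMetadata(BaseModel):
    # Field names from the PR; a sketch, not the real class in
    # backend/api/features/integrations/models.py.
    name: str
    description: str
    supported_auth_types: list[str]


# Illustrative shape of one GET /integrations/providers entry after this PR:
example = ProviderMetadata(
    name="linear",
    description="Issue tracking and project management.",
    supported_auth_types=["oauth2", "api_key"],
)
```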

**Frontend**
- `src/app/(platform)/settings/integrations/page.tsx` — replaces mock
page; composes header + list + connect dialog.
-
`src/app/(platform)/settings/integrations/components/IntegrationsList/`
— list + skeleton + selection (Record<string, true> instead of Set) +
delete orchestration hook.
-
`src/app/(platform)/settings/integrations/components/ConnectServiceDialog/`
— split per the 200-line house rule into `ConnectServiceDialog`,
`ListView`, `ProviderRow`, `useMeasuredHeight`. DetailView's nested
helpers extracted to siblings: `MethodPanel`, `UnsupportedNotice`,
`ProviderAvatar`. Tabs render in deterministic priority order;
last-selected tab persisted per provider in module-scope.
-
`src/app/(platform)/settings/integrations/components/DeleteConfirmDialog/`
— new confirm dialog gating single + bulk deletes (shows up to 3 names +
remaining count for bulk).
-
`src/app/(platform)/settings/integrations/components/IntegrationsListEmpty/components/IntegrationsMarquee.tsx`
— switched from `next/image unoptimized` to plain `<img loading="lazy"
decoding="async">` for decorative logos (no LCP candidate, avoids Next
Image runtime overhead).
-
`src/app/(platform)/settings/integrations/components/hooks/useDeleteIntegration.ts`
— bulk delete now returns per-target
`succeeded/failed/needsConfirmation`; failed items stay selected for
retry; per-id pending tracking via `isDeletingId`; toast names the
credential.
-
`src/app/(platform)/settings/integrations/components/hooks/useDebouncedValue.ts`
— small reusable debounce hook (250 ms, used by both list + dialog
search).
- `src/app/(platform)/settings/integrations/helpers.ts` —
`formatProviderName` guarded against non-string input; `filterProviders`
now Unicode-normalized (NFKD + strip combining marks) so accented
queries match.
- `src/providers/agent-credentials/helper.ts` — `toDisplayName` same
`typeof string` guard.
- `src/components/contextual/CredentialsInput/helpers.ts` — loosened
`filterSystemCredentials` / `getSystemCredentials` generic constraint to
accept `title?: string | null` so it consumes `CredentialsMetaResponse`
directly.
- `src/lib/autogpt-server-api/client.ts` — `listProviders()` maps the
new shape to `string[]` for backward compat.
- `src/app/api/openapi.json` — regenerated spec includes
`ProviderMetadata` with `supported_auth_types`.

### PR feedback addressed 🛠️

Round of fixes after the first review pass:
- Bulk delete: per-item names in failure toast, failed items kept
selected for retry.
- Confirmation dialog before any delete (single or bulk) —
`DeleteConfirmDialog`.
- Per-row delete button disabled + spinner while pending (no
double-click double-fire).
- Toast names the credential ("Removed GitHub key") instead of generic
copy.
- API key input: `autoComplete="new-password"` + `spellCheck=false`;
title field `autoComplete="off"`.
- Search debounced 250 ms on both list + dialog; Unicode-normalized so
"Açai" matches "acai".
- `toDisplayName` / `formatProviderName` guarded against non-string
input (`provider.split is not a function` was reproducible).
- Skeleton mirrors the real accordion shape — no layout shift on data
load.
- Selection bar sticky position fixed for <375 px (`top-2 sm:top-0`).
- Last-selected auth tab persisted per provider for the session.
- Tabs ordered deterministically (oauth → api_key → user_password →
host_scoped) instead of insertion order.
- `useMemo` removed from `useIntegrationsList` per project rule (no
measured perf need).
- Selection state migrated from `Set<string>` to `Record<string, true>`
(idiomatic React state shape).
- ConnectServiceDialog 288 LoC → ~130 (extracted `ListView`,
`ProviderRow`, `useMeasuredHeight`); DetailView helpers → siblings.
- `next/image unoptimized` → plain `<img>` for decorative logos in
marquee + provider rows + avatar.
- `with_supported_auth_types(...)` pruned in 11 `_config.py` files where
it was redundant with `with_oauth` / `with_api_key` /
`with_managed_api_key` / `with_user_password`. Kept in legacy ones
(github, discord, google, notion, ...) where the docstring says it's
required because auth handlers live outside the builder chain.
- Tab swap + dialog step animations re-tuned vs Emil Kowalski's
animation rules: ease-out default, under 300 ms,
transform+opacity+filter only, blur-bridge to soften swap, willChange to
dodge the 1px shift, reduced-motion fallbacks via `useReducedMotion`.
- Merged latest `dev` (api-keys SECRT-2273 took dev's version, no
api-keys diff in this PR; settings layout took dev's version, no
`SETTINGS_V2` feature flag in this PR; `scroll-area` took dev's
version).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Navigate to `/settings/integrations` — loading skeletons appear,
then user's real credentials grouped by provider.
- [ ] Type in the search box — filters client-side after 250 ms, no new
network requests (DevTools → Network stays quiet). Try "açai" with
diacritics — matches "acai" providers.
- [ ] Connect a service → dialog loads provider list with backend
descriptions; search matches name, slug, and description.
- [ ] Click a provider → dialog tweens height smoothly (no jump); header
shows provider avatar + name + description; tabs render in oauth →
api_key priority; last-used tab restored on reopen.
- [ ] Open `linear` (oauth + api_key) — switching tabs animates with a
quick fade+slide+blur entry, no flash.
- [ ] OAuth tab → "Continue with X" opens popup, completes consent,
popup closes, dialog closes, new credential appears with success toast.
- [ ] API key tab → paste a key (browser does NOT offer to autofill any
stored password), Save → toast names the credential, dialog closes.
- [ ] Delete (single) via trash icon → confirmation dialog → button
disables with spinner during the request → toast names the credential.
- [ ] Delete (bulk) via selection bar → confirmation lists up to 3 names
→ if some fail, failed ones stay selected for retry; toast lists which
failed.
  - [ ] Double-click a delete button rapidly — only one request fires.
- [ ] Managed credentials (e.g. "Use Credits for AI/ML API") do **not**
appear in the list.
- [ ] Test on a fresh account (no credentials) — `IntegrationsMarquee`
empty state renders.
- [ ] Throttle network to Slow 3G — skeleton (mirroring real shape)
visible, then list slides in.
  - [ ] Block `/api/integrations/credentials` → `ErrorCard` with retry.
- [ ] `curl /api/integrations/providers` returns `[{ name, description,
supported_auth_types }, ...]` with every provider carrying both fields.
- [ ] `prefers-reduced-motion: reduce` set → all motion collapses to
opacity-only fades.
  - [ ] On <375 px viewport — selection bar clears the mobile nav.
- [ ] `pnpm format && pnpm lint && pnpm types && pnpm test:unit` all
pass.

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**) — no flag changes in this PR.
2026-04-27 06:28:31 +00:00
Zamil Majdy
5b1d9763ed fix(backend/copilot): preserve interrupted SDK partial work on final-failure exit (#12918)
## Background

[SECRT-2275](https://linear.app/autogpt/issue/SECRT-2275). User report:
when a copilot ("autopilot") turn is interrupted by a usage-limit,
tool-call-limit, or other run interruption, the user's recent work
disappears. User described it as: "my initial message was lost 3 times
and it disappeared, then when I would say 'continue' it would do a
random old task."

Investigation surfaced two distinct failure modes. This PR addresses
both.

- **Mode 1** — rate-limit (or other pre-stream rejection) at turn start:
the user's text only ever lives in the optimistic `useChat` bubble; the
backend rejects before the message is persisted, so the bubble is a lie
and a refresh / retry would lose the text.
- **Mode 2** — long-running turn interrupted mid-stream: the entire
turn's progress (assistant text, tool calls, reasoning) vanishes on
interruption — what users describe as "the turn is gone."

## Mode 1 — frontend: restore unsent text on 429

Backend can't recover this on its own: `check_rate_limit` raises before
`append_and_save_message`, so by the time the 429 surfaces there is no
DB row to roll forward. See
`autogpt_platform/backend/backend/api/features/chat/routes.py:916-922`
(rate-limit check) and `routes.py:945` (later append-and-save).

Frontend fix in
`autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotStream.ts`:
when `useChat`'s `onError` reports a usage-limit error, we

- drop the optimistic user bubble (DB has no record of it, so leaving it
would be a phantom),
- push `lastSubmittedMsgRef.current` back into the composer via the
existing `setInitialPrompt` slot — the same slot URL pre-fills use, so
`useChatInput`'s `consumeInitialPrompt` effect picks it up
automatically,
- clear `lastSubmittedMsgRef` so the dedup guard doesn't block re-send.

In-memory only; surviving a hard refresh while rate-limited is a
separate follow-up (would need localStorage persistence with TTL).

Test:
`autogpt_platform/frontend/src/app/(platform)/copilot/__tests__/useCopilotStream.test.ts`
— verifies the composer is repopulated and the optimistic bubble is
dropped on a 429.

## Mode 2 — backend: preserve interrupted partial in DB

### Root cause

The SDK retry loop in `stream_chat_completion_sdk` always rolls back
`session.messages` to the pre-attempt watermark on any exception. That
rollback is correct **before a retry** so attempt #2 doesn't duplicate
attempt #1's content. But it runs **before the retry decision is made**,
so when retries are exhausted (or no retry is attempted) the partial
work is discarded too.

Three branches of the retry loop ended in a final-failure state with
side effects worse than just losing the partial:
- `_HandledStreamError` non-transient: rollback then add error marker —
partial gone
- `Exception` with `events_yielded > 0`: rollback then break — **no
error marker added either**, so on refresh the chat looks like nothing
happened even though the user just watched tokens stream live
- `Exception` non-context-non-transient + the while-`else:` exhaustion
path: same, no marker
- Outer except (cancellation, GeneratorExit cleanup): didn't restore
captured partial

### Fix

`autogpt_platform/backend/backend/copilot/sdk/service.py`:

1. **`_InterruptedAttempt` dataclass** — holds the rolled-back `partial:
list[ChatMessage]` + optional `handled_error: _HandledErrorInfo`. Three
methods drive the contract:
- `capture(session, transcript_builder, transcript_snap,
pre_attempt_msg_count)` — slices `session.messages`, restores the
transcript, strips trailing error markers to prevent duplicate markers
after restore.
- `clear()` — drops captured state on a successful retry so outer
cleanup paths don't replay pre-retry content.
- `finalize(session, state, display_msg, retryable=...) ->
list[StreamBaseResponse]` — re-attaches partial, synthesizes
`tool_result` rows for orphan `tool_use` blocks, appends the canonical
error marker, and returns the flushed events so the caller can yield
them to the client (no double-flush).
2. **`_flush_orphan_tool_uses_to_session(session, state) ->
list[StreamBaseResponse]`** — synthesizes `tool_result` rows for any
`tool_use` that never resolved before the error so the next turn's LLM
context stays API-valid (Anthropic rejects orphan tool_use). Uses the
public `adapter.flush_unresolved_tool_calls` and returns the events for
the caller to yield.
3. **`_classify_final_failure(...) -> _FinalFailure | None`** — picks
the display message + stream code + retryable flag for the final-failure
exit. One source of truth for the in-history error marker and the
client-facing `StreamError` SSE yield so they can't drift.
4. **Consolidated post-loop emit**: the former three scattered blocks
(partial restore + redundant re-flush + two separate `yield StreamError`
sites) collapsed to one block driven by `_classify_final_failure` →
`_FinalFailure` → `finalize()` → yield events + single `StreamError`.
5. **Adapter `flush_unresolved_tool_calls`** (renamed from
`_flush_unresolved_tool_calls` to drop the `# noqa: SLF001` suppressors
on cross-module callers).

Each retry-loop rollback site calls `interrupted.capture(...)`; the
success break calls `interrupted.clear()`; the post-loop failure block
calls `interrupted.finalize(...)` exactly once.
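
A heavily simplified sketch of the capture / clear / finalize contract; everything except the three method names is illustrative:

```python
from dataclasses import dataclass, field


@dataclass
class _InterruptedAttempt:
    # Simplified: the real class also restores the transcript, strips stale error
    # markers, and synthesizes tool_result rows for orphan tool_use blocks.
    partial: list = field(default_factory=list)

    def capture(self, session, pre_attempt_msg_count: int) -> None:
        # Keep a copy of what this attempt produced before the caller rolls
        # session.messages back to the pre-attempt watermark.
        self.partial = list(session.messages[pre_attempt_msg_count:])

    def clear(self) -> None:
        # A later attempt succeeded; never replay pre-retry content.
        self.partial = []

    def finalize(self, session, error_marker) -> list:
        # Retries exhausted: re-attach the partial work, then the error marker,
        # and return events for the caller to yield (single flush).
        session.messages.extend(self.partial)
        session.messages.append(error_marker)
        return [error_marker]


# Contract inside the retry loop (pseudo-structure):
#   failed attempt:      interrupted.capture(session, watermark); rollback; maybe retry
#   successful attempt:  interrupted.clear(); break
#   after the loop, if still failed: yield from interrupted.finalize(session, marker)
```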

The baseline service already preserves partial work via its existing
finally block — no change needed there.

## Tests

Backend (`backend/copilot/sdk/interrupted_partial_test.py`, new, 18
tests):

- `TestInterruptedAttemptCapture` — slice semantics + stale-marker
stripping
- `TestInterruptedAttemptFinalize` — appends partial then marker,
handles empty partial, no-op on `None` session, flushes unresolved tools
between partial and marker, returns flushed events for caller to yield
- `TestFlushOrphanToolUses` — synthesizes `tool_result` rows, returns
events, no-op on None state / no unresolved
- `TestClassifyFinalFailure` — handled_error wins, attempts_exhausted,
transient_exhausted, stream_err fallback, returns None on success path
- `TestRetryRollbackContract` — end-to-end: capture + finalize yields
the exact content the user saw streaming live plus the error marker

1022 total SDK tests pass (baseline + new).

Frontend (`useCopilotStream.test.ts`): 1 new test — `restores the unsent
text and drops the optimistic user bubble on 429 usage-limit`.

## Out of scope

- Frontend rendering tweaks for the interrupted-turn marker (existing
error-marker rendering already works).
- Refresh-survival of the unsent text in Mode 1 (would require
localStorage persistence with TTL) — separate follow-up.
- Hard process-kill / OOM where Python `finally` doesn't run — needs a
different mechanism (pod-level checkpoint sweeper).

## Checklist

- [x] My code follows the style guidelines of this project
(black/isort/ruff via `poetry run format`)
- [x] I have performed a self-review of my own code
- [x] I have added relevant unit tests
- [x] I have run lint and tests locally (1022 SDK tests pass)

## Test plan

- [ ] Verify a long-running turn that hits transient-retry exhaustion
preserves partial assistant text + tool results in chat history after
refresh
- [ ] Verify the next user message after an interrupted turn carries
enough context that the model can continue the prior task instead of
inventing a new one
- [ ] Verify a successful retry (attempt #1 fails, attempt #2 succeeds)
shows ONLY attempt #2's content (no leaked partial from #1)
- [ ] Verify hitting daily usage limit at turn start re-populates the
composer with the unsent text and removes the optimistic user bubble

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-25 14:21:18 +07:00
Zamil Majdy
10ea46663f fix(backend/notifications): atomic upsert + drop eager include (#12919)
## Problem

`create_or_add_to_user_notification_batch` does:
1. `find_unique(..., include={"Notifications": True})` — loads ALL
notifications for the batch (thousands for heavy AGENT_RUN users),
causing Postgres `statement_timeout` in dev.
2. Find-then-update is non-atomic — concurrent invocations either hit
`@@unique([userId, type])` violations or drop notifications.

Real Sentry: `canceling statement due to statement timeout` on this
exact query, traced to `database-manager` pod.

## Fix

Use Prisma `upsert` (atomic) and skip the eager include. Only load
notifications if the caller actually needs them (audited — the sole
caller `NotificationManager._should_batch` ignores the returned DTO and
separately fetches the oldest message via
`get_user_notification_oldest_message_in_batch`).
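
A hedged sketch of the atomic path, assuming illustrative Prisma model and field names (only the `@@unique([userId, type])` key and the `Notifications` relation are taken from this description):

```python
from prisma import Prisma


async def create_or_add_to_user_notification_batch(
    db: Prisma, user_id: str, batch_type: str, notification_id: str
) -> None:
    """One upsert keyed on the (userId, type) compound unique, with no eager
    include of the batch's notifications. Model/field names are illustrative.
    """
    await db.usernotificationbatch.upsert(
        where={"userId_type": {"userId": user_id, "type": batch_type}},
        data={
            "create": {
                "userId": user_id,
                "type": batch_type,
                "Notifications": {"connect": [{"id": notification_id}]},
            },
            "update": {
                # Attach only the new notification; never load the existing ones.
                "Notifications": {"connect": [{"id": notification_id}]},
            },
        },
    )
```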

## Tests

- Single + existing-batch upsert paths
- Concurrent race regression

## Out of scope

Unrelated to PR #12900 (Redis Cluster migration). Separate change.
2026-04-25 13:28:28 +07:00
Zamil Majdy
06188a86a6 refactor(platform/copilot): consolidate 4 model-routing LD flags into 1 JSON flag (#12917)
## What

Replaces 4 string-valued LaunchDarkly flags with a single JSON-valued
flag for copilot model routing:

- ~~`copilot-fast-standard-model`~~
- ~~`copilot-fast-advanced-model`~~
- ~~`copilot-thinking-standard-model`~~
- ~~`copilot-thinking-advanced-model`~~

**New:** `copilot-model-routing` (JSON), keyed `{mode: {tier: model}}`:
```json
{
  "fast":     { "standard": "anthropic/claude-sonnet-4-6", "advanced": "anthropic/claude-opus-4-6" },
  "thinking": { "standard": "moonshotai/kimi-k2.6",         "advanced": "anthropic/claude-opus-4-6" }
}
```

## Why

Same pattern as the sibling consolidation in #12915 (pricing /
cost-limits flags) and the merged #12910 (tier-multipliers):

- One flag per config domain — less LD UI clutter, easier audit trail.
- Atomic updates — rotating fast.standard + thinking.standard is a
single save.
- Fewer LD entities to name, version, target, explain.
- Mirrors the now-uniform copilot-* JSON-flag shape.

## How

- `backend/util/feature_flag.py`: drop the four `COPILOT_*_MODEL` enum
values, add `COPILOT_MODEL_ROUTING`.
- `backend/copilot/model_router.py`: rewrite `resolve_model` to fetch
the JSON flag once per call and walk `payload[mode][tier]`. Missing
mode, missing tier-within-mode, non-string cell value, non-dict payload,
or LD failure all fall back to the corresponding `ChatConfig` default
(same user-visible semantics as before). `_FLAG_BY_CELL` removed
entirely; `_config_default` / `ModelMode` / `ModelTier` unchanged.
- Per-user LD targeting preserved — cohorts can still receive different
routing.
- No caching added (preserves existing uncached behaviour).
- Docstring references in `copilot/config.py` + `copilot/sdk/service.py`
updated to point at the new nested key path; one docstring in
`service_test.py` likewise.
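
A minimal sketch of that lookup-with-fallback, with the LD fetch and config default passed in for simplicity:

```python
import logging

logger = logging.getLogger(__name__)


def resolve_model(payload, mode: str, tier: str, config_default: str) -> str:
    """Walk a {mode: {tier: model}} payload, falling back to the ChatConfig default
    whenever the payload, mode, tier, or cell value is unusable.

    `payload` stands in for the single LD fetch of `copilot-model-routing`; the real
    function also handles LD errors and per-user targeting.
    """
    if not isinstance(payload, dict):
        if payload is not None:
            logger.warning("copilot-model-routing payload is not a dict; using default")
        return config_default
    cell = payload.get(mode, {})
    if not isinstance(cell, dict):
        return config_default
    model = cell.get(tier)
    if not isinstance(model, str) or not model:
        return config_default
    return model


# resolve_model({"fast": {"standard": "anthropic/claude-sonnet-4-6"}},
#               "thinking", "advanced", "anthropic/claude-opus-4-6")
# falls back to the default because the "thinking" mode is missing.
```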

## Operator action required BEFORE merging

This PR removes 4 LD flags and introduces 1 replacement.

1. In LaunchDarkly, create `copilot-model-routing` (type: **JSON**,
server-side only). Default variation = union of the current four string
flags, shaped as:
   ```json
   {
"fast": { "standard": "<current copilot-fast-standard-model>",
"advanced": "<current copilot-fast-advanced-model>" },
"thinking": { "standard": "<current copilot-thinking-standard-model>",
"advanced": "<current copilot-thinking-advanced-model>" }
   }
   ```
Omit any cell that's currently unset (its `ChatConfig` default will be
used).

2. Merge this PR.

3. After deploy + smoke, delete the four legacy flags:
   - `copilot-fast-standard-model`
   - `copilot-fast-advanced-model`
   - `copilot-thinking-standard-model`
   - `copilot-thinking-advanced-model`

## Testing

- `backend/copilot/model_router_test.py` rewritten — 27 tests pass:
  - LD unset / `None` payload → fallback for every cell.
  - Full JSON → each cell maps to its value (parametrized).
- Partial JSON (missing mode, missing tier-within-mode, mode value not a
dict).
  - Non-dict payloads (str / list / int / bool) → fallback + warning.
- Non-string cell values (number, list, bool, dict) → fallback +
'non-string' warning.
- Empty-string cell → fallback + 'empty string' warning (not
'non-string').
  - LD raises → fallback + warning with `exc_info`.
  - `user_id=None` → skip LD entirely.
- Single-LD-call regression guard against re-introducing per-cell flag
fan-out.
- `backend/copilot/sdk/service_test.py`: 61 tests still pass (it mocks
`_resolve_thinking_model_for_user`, so the inner flag change is
transparent).
- `black --check` / `ruff check` / `isort --check` all clean.

## Sibling

- #12915 — same consolidation pattern for stripe-price / cost-limits
flags.

## Checklist

- [x] I have read the project's contributing guide.
- [x] I have clearly described what this PR changes and why.
- [x] My code follows the style guidelines of this project.
- [x] I have added tests that prove my fix is effective or that my
feature works.
- [ ] New and existing unit tests pass locally with my changes (CI will
confirm).
2026-04-25 08:15:36 +07:00
Zamil Majdy
2deac2073e fix(block_cost_config): audit + correct stale LLM/block rates + migrate generic ReplicateModelBlock to COST_USD (#12912)
## Why

PR #12909's pricing refresh was sourced from aggregators (pricepertoken,
blog mirrors) instead of provider pricing pages. Follow-up audit against
**official provider docs** caught **22 stale entries** — 9 LLM token
rates + 12 non-LLM block rates + 1 block that needed a code refactor to
bill dynamically. Also flagged by Sentry: Mistral models were sitting on
the wrong provider's rate table.

Cross-verified JS-rendered pages (docs.x.ai, DeepSeek, Kimi) via
agent-browser.

## Corrections applied

### LLM TOKEN_COST (9 entries)

| Model | Old | New | Reason |
|---|---|---|---|
| `GPT5` | 94/750 | **188/1500** | Was OpenAI Batch API rate; Standard
is $1.25/$10 |
| `DEEPSEEK_CHAT` | 42/63 | **21/42** | Unified to deepseek-v4-flash at
$0.14/$0.28 (Sept 2025) |
| `DEEPSEEK_R1_0528` | 82/329 | **21/42** | Same v4-flash routing |
| `MISTRAL_LARGE_3` | 300/900 | **300/900** (restored after brief 75/225
detour) | Routes via OpenRouter ($2/$6), not Mistral direct |
| `MISTRAL_NEMO` | 3/6 → 23/23 | **5/5** | Routes via OpenRouter
($0.035/$0.035); Mistral-direct $0.15 doesn't apply |
| `KIMI_K2_0905` | 82/330 | **90/375** | Matches K2 family $0.60/$2.50 |
| `KIMI_K2_5` | 90/450 | **66/300** | OpenRouter pass-through $0.44/$2 |
| `KIMI_K2_6` | 143/600 | **112/698** | OpenRouter pass-through
$0.7448/$4.655 |
| `META_LLAMA_4_MAVERICK` | 30/90 | **75/116** | Groq $0.50/$0.77
(deprecated 2026-02-20) |

### Non-LLM BLOCK_COSTS — rate corrections (11 entries)

Under-billing fixes:
- `AIVideoGeneratorBlock` (FAL) SECOND 3 → **15 cr/s**
- `CreateTalkingAvatarVideoBlock` (D-ID) RUN 15 → **100 cr**
- Nano Banana Pro/2 across 3 blocks: RUN 14 → **21 cr**
- `UnrealTextToSpeechBlock` RUN 5 → **COST_USD 150 cr/$** (block now
emits `chars × $0.000016`)

Over-billing fixes:
- `IdeogramModelBlock` default 16 → **12**, V_3 18 → **14**
- `AIImageEditorBlock` FLUX_KONTEXT_MAX 20 → **12**
- `ValidateEmailsBlock` 250 → **150 cr/$**
- `SearchTheWebBlock` 100 → **150 cr/$**
- `GetLinkedinProfilePictureBlock` 3 → **1 cr**

### Non-LLM BLOCK_COSTS — block refactored for dynamic billing (1 entry)

- **`ReplicateModelBlock`** (the generic "run any Replicate model"
wrapper) migrated from flat RUN 10 cr → **COST_USD 150 cr/$**. Block now
uses `client.predictions.async_create + async_wait` instead of
`async_run(wait=False)` so it can read `prediction.metrics.predict_time`
and bill `predict_time × $0.0014/s` (Nvidia L40S mid-tier, where most
popular public models run).

Additionally (addressing CodeRabbit's critical review on this refactor):
`async_wait()` returns normally regardless of terminal status — it
doesn't raise on `failed`/`canceled` like the old `async_run` did. The
block now explicitly checks `prediction.status` after `async_wait()` and
raises `RuntimeError` on `failed` (with `prediction.error` as context)
or `canceled` **before** `merge_stats`, so failed runs are never billed
for partial compute time.
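
A sketch of the refactored flow under the calls named above (`async_create`, `async_wait`, `metrics.predict_time`); exact argument shapes and the stats plumbing are assumptions:

```python
L40S_USD_PER_SECOND = 0.0014  # Nvidia L40S mid-tier rate used for billing


async def run_and_bill(client, model_ref: str, inputs: dict) -> tuple[object, float]:
    """Create the prediction, wait, check status, then bill on predict_time."""
    prediction = await client.predictions.async_create(model=model_ref, input=inputs)
    await prediction.async_wait()

    # async_wait() returns normally even for failed/canceled predictions, so the
    # status check must happen before any cost is merged.
    if prediction.status == "failed":
        raise RuntimeError(f"Replicate prediction failed: {prediction.error}")
    if prediction.status == "canceled":
        raise RuntimeError("Replicate prediction was canceled")

    predict_time = float(getattr(prediction.metrics, "predict_time", 0) or 0)
    provider_cost_usd = predict_time * L40S_USD_PER_SECOND
    # The real block reports this via merge_stats(NodeExecutionStats(provider_cost=...,
    # provider_cost_type="cost_usd")) and is registered at COST_USD 150 cr/$.
    return prediction.output, provider_cost_usd
```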

**Why this matters:** flat 10 cr was 10–500× under-billing long
video/LLM runs (users could wire in a $50/hr A100 Llama inference and
pay us $0.10). It was also 20× over-billing trivial SDXL runs. Now
scales with real compute time AND no longer bills failed predictions.

### Documentation-only

- **Grok legacy models** (grok-3, grok-4-0709, grok-4-fast,
grok-code-fast-1): dropped from docs.x.ai's public pricing page but
still callable via the API. Added inline comment noting this; rates kept
at their verified launch pricing.
- **Mistral routing**: added comment explaining why TOKEN_COST for
MISTRAL_* is the OpenRouter safety floor (not Mistral-direct) since
`ModelMetadata.provider = "open_router"` for all Mistral entries.

## How

- For each entry, opened the **official provider pricing page** directly
and computed `our_cr = round(1.5 × provider_usd × 100)`.
- For JS-rendered pages (docs.x.ai, api-docs.deepseek.com), used
agent-browser headless to render + extract rates from the DOM.
- Migrated 2 blocks (`UnrealTextToSpeechBlock`, `ReplicateModelBlock`)
from flat RUN to COST_USD — the Replicate migration touched the block's
SDK interaction.
- Updated 2 FAL-video unit tests that asserted the old `3 cr/s` rate.
- Updated 3 stale test assertions: 2 for Unreal TTS (still on
`characters` cost_type) + 1 for ZeroBounce (old 250 cr).
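
As a worked example of the formula above: the `GPT5` Standard rate of $1.25/M input and $10/M output gives round(1.5 × 1.25 × 100) = 188 and round(1.5 × 10 × 100) = 1500, i.e. the **188/1500** shown in the LLM table.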

## Known remaining risk (explicitly out of scope)

- **`ReplicateFluxAdvancedModelBlock`** not migrated — bounded to Flux
models ($0.04–$0.08), flat 10 cr stays within 1.25–2.5× margin. Separate
PR if desired.
- **AgentMail** on free tier (1 RUN). When paid pricing publishes,
revisit.
- **Live Replicate API verification**: mitigated via 9 unit tests
covering the refactored path (`async_create` version-vs-model branching,
metrics-based billing emission, failed/canceled raises,
zero/missing-metrics no-emission, `async_wait` ordering), and SDK
signature confirmed via `inspect.signature` — but no real API call
executed. A smoke test on a cheap model before merge is still
recommended.

## Test plan

- [x] `poetry run pytest backend/data/block_cost_config_test.py
backend/executor/block_usage_cost_test.py
backend/blocks/claude_code_cost_test.py
backend/blocks/cost_leak_fixes_test.py
backend/blocks/block_cost_tracking_test.py
backend/copilot/tools/helpers_test.py
backend/blocks/replicate/replicate_block_cost_test.py -q` — all passing
(80+ tests).
- [x] Sources: openai.com/api/pricing, claude.com/pricing,
api-docs.deepseek.com, mistral.ai/pricing,
platform.kimi.ai/docs/pricing, docs.x.ai, groq.com/pricing,
replicate.com, fal.ai, d-id.com, ideogram.ai, zerobounce.net, jina.ai,
unrealspeech.com, enrichlayer.com.
- [ ] Live Replicate API call to verify `predictions.async_create +
async_wait + metrics.predict_time` path.
2026-04-25 06:10:16 +07:00
Zamil Majdy
24406dfcec refactor(platform): consolidate 6 LD flags into 2 JSON flags (#12915)
## What

Consolidates two groups of LaunchDarkly flags into single JSON-valued
flags, matching the pattern established by `copilot-tier-multipliers`
(merged in #12910):

**Stripe prices** — 4 string flags → 1 JSON flag:
- ~~`stripe-price-id-basic`~~ / ~~`-pro`~~ / ~~`-max`~~ /
~~`-business`~~
- **New:** `copilot-tier-stripe-prices` (JSON)
  ```json
  { "PRO": "price_xxx", "MAX": "price_yyy" }
  ```

**Cost limits** — 2 number flags → 1 JSON flag:
- ~~`copilot-daily-cost-limit-microdollars`~~ /
~~`copilot-weekly-cost-limit-microdollars`~~
- **New:** `copilot-cost-limits` (JSON)
  ```json
  { "daily": 625000, "weekly": 3125000 }
  ```

## Why

- One flag to manage per config domain (LD UI less cluttered, easier
audit trail).
- Atomic updates — e.g., rotating Pro + Max prices happens in a single
save.
- Fewer LD entities to name, version, target, and explain.
- Mirrors the just-merged `copilot-tier-multipliers` shape so the whole
pricing/limits config is uniform.

## How

- `get_subscription_price_id(tier)` now parses
`copilot-tier-stripe-prices` and looks up `tier.value` — returns `None`
when the flag is unset, non-dict, tier key missing, or value isn't a
non-empty string.
- `get_global_rate_limits` uses a new sibling
`_fetch_cost_limits_flag()` helper (60s cache, `cache_none=False`) that
extracts `daily` / `weekly` int keys independently and falls back to the
existing `ChatConfig` defaults when any key is missing / non-int /
negative. A broken `daily` doesn't wipe out `weekly` (or vice versa).
- Tests rewritten to mock the new JSON shapes + cover partial / invalid
/ missing-key fallbacks.
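
A minimal sketch of the per-key fallback described above; the helper name and payload handling here are illustrative:

```python
def _extract_cost_limit(payload, key: str, default: int) -> int:
    """Per-key fallback: a broken `daily` must not wipe out `weekly` (or vice versa).

    `payload` stands in for the cached `copilot-cost-limits` JSON flag value; the
    real validation lives in _fetch_cost_limits_flag().
    """
    if not isinstance(payload, dict):
        return default
    value = payload.get(key)
    # Reject bools explicitly since bool is a subclass of int in Python.
    if isinstance(value, bool) or not isinstance(value, int) or value < 0:
        return default
    return value


# {"daily": "oops", "weekly": 3125000}: daily falls back to the ChatConfig default
# while weekly still comes from the flag.
```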

## ⚠️ Operator action required BEFORE merging

This PR **removes 6 LD flags** and introduces 2 replacements. To avoid a
pricing/rate-limit outage, do this in LaunchDarkly first:

1. Create `copilot-tier-stripe-prices` (type: **JSON**). Default
variation = union of the current `stripe-price-id-*` values:
   ```json
{ "PRO": "<current stripe-price-id-pro>", "MAX": "<current
stripe-price-id-max>" }
   ```
   Omit BASIC / BUSINESS if those flags are unset today.

2. Create `copilot-cost-limits` (type: **JSON**). Default variation =
the current two flags' values:
   ```json
{ "daily": <current daily microdollars>, "weekly": <current weekly
microdollars> }
   ```

3. Merge this PR.

4. After deploy + smoke test, delete the six legacy flags:
   - `stripe-price-id-{basic,pro,max,business}`
   - `copilot-daily-cost-limit-microdollars`
   - `copilot-weekly-cost-limit-microdollars`

## Testing

- Backend unit tests: `pytest backend/copilot/rate_limit_test.py
backend/data/credit_subscription_test.py
backend/api/features/subscription_routes_test.py` — rewritten to
exercise the JSON flag shapes + fallback paths; passes locally.
- `black --check` / `ruff check` / `isort --check` — all clean.

## Checklist

- [x] I have read the project's contributing guide.
- [x] I have clearly described what this PR changes and why.
- [x] My code follows the style guidelines of this project.
- [x] I have added tests that prove my fix is effective or that my
feature works.
- [ ] New and existing unit tests pass locally with my changes (CI will
confirm).
2026-04-25 06:08:09 +07:00
Nicholas Tindle
33eb9e9ad9 fix(platform): address review — format_bytes rollup, storage bar visibility
- Rename _format_bytes → format_bytes (public API, used cross-module)
- Add unit boundary rollup (1024 KB → 1.0 MB) matching frontend formatBytes
- Show WorkspaceStorageSection even when token limits are null
- Move import to top level in routes.py
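
A small sketch of that rollup behaviour, assuming the usual 1024-based unit ladder:

```python
def format_bytes(num_bytes: int) -> str:
    """Format a byte count, rolling up at unit boundaries so 1024 KB renders as
    "1.0 MB" rather than "1024.0 KB" (matching the frontend formatBytes).
    A sketch; the real helper lives in the backend.
    """
    value = float(num_bytes)
    for unit in ("B", "KB", "MB", "GB"):
        if value < 1024:
            return f"{int(value)} B" if unit == "B" else f"{value:.1f} {unit}"
        value /= 1024
    return f"{value:.1f} TB"
```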

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:32:24 -05:00
Nicholas Tindle
938cf9ea5f test(backend): add guard test for tier multiplier enum completeness
Ensures every SubscriptionTier has an entry in _DEFAULT_TIER_MULTIPLIERS.
Matches the existing storage limit guard — both fail immediately if
someone adds a new tier without updating the mappings.
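
Roughly the shape of such a guard, sketched with a stand-in enum and illustrative multiplier values:

```python
from enum import Enum


class SubscriptionTier(str, Enum):
    # Stand-in for the real enum; member names are illustrative.
    BASIC = "BASIC"
    PRO = "PRO"
    MAX = "MAX"
    BUSINESS = "BUSINESS"


# Placeholder values; the real defaults live in the rate-limit module.
_DEFAULT_TIER_MULTIPLIERS = {tier: 1.0 for tier in SubscriptionTier}


def test_every_subscription_tier_has_a_multiplier():
    # Fails as soon as a new tier is added without a matching multiplier entry.
    missing = [t for t in SubscriptionTier if t not in _DEFAULT_TIER_MULTIPLIERS]
    assert not missing, f"tiers missing a rate-limit multiplier: {missing}"
```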

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:17:06 -05:00
Nicholas Tindle
1fed970a88 fix(backend): move mid-file import to top, add tier coverage guard test
- Move _format_bytes import to top of routes.py (was mid-function)
- Add test_every_subscription_tier_has_storage_limit that iterates all
  SubscriptionTier enum values and asserts each has a TIER_WORKSPACE_STORAGE_MB
  entry — fails immediately if someone adds a new tier without a storage limit

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:14:21 -05:00
Nicholas Tindle
4d85a68d9a fix(platform): resolve merge conflicts with dev, update tier names
- Resolve conflicts in test files and SubscriptionTierSection
- Update TIER_WORKSPACE_STORAGE_MB for renamed tiers: FREE→BASIC, add MAX
- Update tier descriptions in helpers.ts with storage limits
- Update rate_limit_test.py for new tier names

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:05:45 -05:00
Zamil Majdy
000ddb007a dx: use $REPO_ROOT in pr-test skill instead of hardcoded absolute path (#12914)
## Summary
- `.claude/skills/pr-test/SKILL.md` referenced
`/Users/majdyz/Code/AutoGPT/.ign.testing.{lock,log}` in 5 places, which
breaks the skill for anyone else who clones the repo.
- Replaced with `$REPO_ROOT`, which is already defined in Step 0 as `git
-C "$WORKTREE_PATH" worktree list | head -1 | awk '{print $1}'`. That
resolves to the main/primary worktree from any sibling worktree,
preserving the original "always pin the lock to the root checkout so all
siblings see the same file" semantics.
- No behavior change for the existing user; repo becomes portable for
everyone else.

## Test plan
- [x] `grep -n "/Users/majdyz" .claude/skills/pr-test/SKILL.md` returns
only the two intentional mentions in the "never paste absolute paths
into PR comments" warning.
- [x] `$REPO_ROOT` is defined in Step 0 before any Step 3.0 usage.
2026-04-24 23:10:20 +07:00
Zamil Majdy
408b205515 feat(platform): LD-configurable rate-limit multipliers + relative UI display (#12910)
## Summary

- **Backend (`copilot/rate_limit`)** — ``TIER_MULTIPLIERS`` is now
float-typed and resolvable through a new LaunchDarkly flag
``copilot-tier-multipliers``. The integer defaults live on as
``_DEFAULT_TIER_MULTIPLIERS`` and are merged with whatever LD returns
(missing / invalid keys inherit defaults; LD failures fall back to
defaults without raising). ``get_global_rate_limits`` now honours the
flag per-user and casts ``int(base * multiplier)`` so downstream
microdollar math stays integer even when LD hands back a fractional
multiplier (e.g. 8.5×). Cached for 60 s via ``@cached(ttl_seconds=60,
maxsize=8, cache_none=False)`` to match the pattern in
``get_subscription_price_id``.
- **Backend (`api/features/v1`)** — ``SubscriptionStatusResponse`` gains
``tier_multipliers: dict[str, float]``, populated for the same set of
tiers that make it into ``tier_costs`` so hidden tiers never get a
rendered badge.
- **Frontend (`SubscriptionTierSection`)** — drops the hard-coded ``"5x"
/ "20x"`` strings from ``TIERS`` and introduces
``formatRelativeMultiplier(tierKey, tierMultipliers)``: the lowest
*visible* multiplier becomes the baseline (no badge), every other tier
renders ``"N.Nx rate limits"`` relative to it. Fractional LD values like
8.5× round to one decimal.

The admin rate-limit page (``/admin/rate-limits``) keeps the static
``TIER_MULTIPLIERS`` defaults — it's admin-facing, infrequently viewed,
and fine to lag the LD value until next deploy (noted in-code).

Related upstream: this PR stacks logically after #12903 (which added the
``MAX`` tier + LD-configurable prices) but does **not** require it —
each PR can merge in either order. No schema changes, no migration.
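
A minimal sketch of the defaults-merge behaviour described in the first
bullet above, assuming the LD flag value arrives as an already-parsed
dict (helper names here are illustrative, not the real module API):

```python
_DEFAULT_TIER_MULTIPLIERS = {"FREE": 1, "PRO": 5, "MAX": 20}  # truncated for illustration


def resolve_tier_multipliers(ld_value: dict | None) -> dict[str, float]:
    # Integer defaults survive as floats; LD entries only win when they name a
    # known tier and carry a positive numeric value. LD failures pass None here
    # and fall straight back to the defaults.
    multipliers = {tier: float(m) for tier, m in _DEFAULT_TIER_MULTIPLIERS.items()}
    for tier, value in (ld_value or {}).items():
        if tier in multipliers and isinstance(value, (int, float)) and value > 0:
            multipliers[tier] = float(value)
    return multipliers


def scaled_limit(base_microdollars: int, multiplier: float) -> int:
    # int() cast keeps downstream microdollar math integral even for
    # fractional multipliers such as 8.5x.
    return int(base_microdollars * multiplier)
```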

## Test plan

- [x] ``poetry run black backend/... --check`` + ``poetry run ruff check
backend/...`` pass
- [x] ``pnpm format`` pass (modified files unchanged)
- [x] New backend tests: ``TestGetTierMultipliers`` (defaults, LD
override, invalid JSON, unknown tier / non-positive values, LD failure)
— **5 / 5 pass**
- [x] New backend test:
``TestGetGlobalRateLimitsWithTiers::test_ld_override_applies_fractional_multiplier``
— **pass**
- [x] ``backend/copilot/rate_limit_test.py`` — non-DB subset **72 / 72
pass**; ``TestGetUserTier`` / ``TestSetUserTier`` require the full
test-server fixture (Redis + Prisma) and are not run in this worktree —
same behaviour on clean ``dev``
- [x] ``backend/api/features/subscription_routes_test.py`` — **40 / 40
pass** (includes new
``test_get_subscription_status_tier_multipliers_ld_override``)
- [x] Frontend vitest targeted suite — **51 / 51 pass**
- ``helpers.test.ts`` — new ``formatRelativeMultiplier`` cases
(lowest-tier null, integer ratio, fractional ratio, hidden-tier null,
fractional LD)
- ``SubscriptionTierSection.test.tsx`` — three new cases for relative
badges, rebasing when the lowest tier is hidden, fractional LD overrides
2026-04-24 22:05:55 +07:00
Zamil Majdy
f8c123a8c3 feat(blocks): dynamic COST_USD billing + close 8 cost-leak surfaces (#12909)
## Why

`ClaudeCodeBlock` was a flat `RUN, 100 cr/run` entry when real cost is
**$0.02–$1.50/run**. Plugging that leak surfaced the question "are other
blocks doing the same?" — an audit found **7 more cost-leak surfaces**.
This PR closes all of them atomically so the cost pipeline is uniform
post-#12894.

## What

### 1. ClaudeCodeBlock → COST_USD 150 cr/$ (the headline)

Claude Code CLI's `--output-format json` already returns
`total_cost_usd` on every call, rolling up Anthropic LLM + internal
tool-call spend. Block now emits it via `merge_stats`:
```python
total_cost_usd = output_data.get("total_cost_usd")
if total_cost_usd is not None:
    self.merge_stats(NodeExecutionStats(
        provider_cost=float(total_cost_usd),
        provider_cost_type="cost_usd",
    ))
```
Registered as `COST_USD, 150 cr/$` — matches the 1.5× margin baked into
every `TOKEN_COST` entry.

### 2. Exa websets — ~40 blocks instrumented

Registered as `COST_USD 100 cr/$` but **never emitted `provider_cost`**
→ ran wallet-free. Added `extract_exa_cost_usd` + `merge_exa_cost`
helpers in `exa/helpers.py` and threaded `merge_exa_cost(self,
response)` through every Exa SDK call across 14 files (59 call sites).
Future-proof: lights up as soon as `exa_py` surfaces `cost_dollars` on
webset response types.

### 3. AIConditionBlock — registered under LLM_COST

Full LLM block with token-count instrumentation already in place, but
**no `BLOCK_COSTS` entry at all** → wallet-free. One-line fix: added to
the LLM_COST group next to AIConversationBlock.

### 4. Pinecone × 3 — added BLOCK_COSTS

- `PineconeInitBlock` + `PineconeQueryBlock`: 1 cr/run RUN (platform
overhead; user pays Pinecone directly).
- `PineconeInsertBlock`: ITEMS scaling with `len(vectors)` emitted via
`merge_stats`.

### 5. Perplexity Sonar (all 3 tiers) → COST_USD 150 cr/$

Block already extracted OpenRouter's `x-total-cost` header into
`execution_stats.provider_cost`; just tagged it `cost_usd` and flipped
the registry. **Deep Research was under-billing up to 30×** ($0.20–$2.00
real vs flat 10 cr).

### 6. CodeGenerationBlock (Codex / GPT-5.1-Codex) → COST_USD 150 cr/$

Block computes USD from `response.usage.input_tokens / output_tokens`
using GPT-5.1-Codex rates ($1.25/M in + $10/M out) and emits `cost_usd`.
Was flat 5 cr for arbitrary-length generations.
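
A sketch of the arithmetic (rates as quoted above; the `usage` field
names are taken from the description):

```python
# GPT-5.1-Codex rates quoted above: $1.25 per 1M input tokens, $10 per 1M output.
INPUT_USD_PER_TOKEN = 1.25 / 1_000_000
OUTPUT_USD_PER_TOKEN = 10.0 / 1_000_000


def codex_cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_USD_PER_TOKEN + output_tokens * OUTPUT_USD_PER_TOKEN


# A 20K-in / 4K-out generation is ~$0.065, i.e. ~10 credits at 150 cr/$ —
# versus the old flat 5 cr regardless of length.
assert abs(codex_cost_usd(20_000, 4_000) - 0.065) < 1e-9
```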

### 7. VideoNarrationBlock (ElevenLabs) → COST_USD 150 cr/$

Block computes USD from `len(script) × $0.000167` (Starter tier per-char
price) and emits `cost_usd`. **Was under-billing ~25–30× on long
scripts** (5K-char narration: flat 5 cr vs ~$0.83 real = 125 cr).

### 8. Meeting BaaS FetchMeetingData → COST_USD 150 cr/$

Join block keeps its flat 30 cr commit. FetchMeetingData now extracts
`duration_seconds` from the response metadata, computes USD via
`duration × $0.000192/sec`, and emits `cost_usd`. Long meetings (hours)
are no longer billed at just the flat 30 cr deposit.

## Why 150 cr/$

Matches the **1.5× margin already baked into `TOKEN_COST` for every
direct LLM block**:

| Model | Real | Our rate (per 1M) | Markup |
|---|---|---|---|
| Claude Sonnet 4 | $3/$15 | 450/2250 cr | 1.5× |
| GPT-5 | $2.50/$10 | 375/1500 cr | 1.5× |
| Gemini 2.5 Pro | $1.25/$5 | 187/750 cr | 1.5× |

Applying the same ratio to `total_cost_usd` gives `cost_amount=150`: at
1 cr ≈ $0.01, straight pass-through is 100 cr/$, and 100 × 1.5 = 150.

## Test plan

- [x] **Unit**: new `claude_code_cost_test.py` (9 tests) + existing
`exa/cost_tracking_test.py` (16 tests) + full cost pipeline. **119/119
pass**.
- [x] `poetry run ruff format` + `poetry run ruff check backend/` —
clean.
- [ ] Live E2E: real ClaudeCode / Perplexity Deep Research / Codex run
with balance delta verification (post-merge).

## Follow-ups (not in this PR)

- `exa_py` SDK update to surface `cost_dollars` on Webset response types
(upstream) — unlocks real billing for the 40 webset blocks.
- Replicate suite: migrate per-model RUN entries to COST_USD via
`prediction.metrics["predict_time"] × per-model $/sec`.
2026-04-24 22:05:42 +07:00
Abhimanyu Yadav
34374dfd55 feat(frontend): Settings v2 API keys page (SECRT-2273) (#12907)
### Why / What / How

**Why:** The Settings v2 API keys page was a UI-only stub with 100 mock
rows, a noop "Create Key" button, noop delete buttons, and no
empty/loading states. Users couldn't actually manage their keys from the
new Settings UI. Ships SECRT-2273.

**What:** Replaces the mock with a working page: paginated list
(15/page) with infinite scroll, create flow with one-time plaintext
reveal, single + batch revoke with confirmation dialogs, per-key details
dialog, skeleton loader, animated empty state, toast + mutation-loading
feedback, and responsive header.



https://github.com/user-attachments/assets/bc576de3-0369-4e73-b945-c66c142ebfe5

<img width="397" height="860" alt="Screenshot 2026-04-24 at 11 26 53 AM"
src="https://github.com/user-attachments/assets/ed8681ea-7d16-40cc-96f7-72d798857229"
/>


**How:**
- **Backend** adds a new `GET /api/api-keys/paginated` route returning
`{ items, total_count, page, page_size, has_more }`. The legacy `GET
/api/api-keys` is untouched so the existing profile page keeps working.
The list fn runs `find_many` + `count` in parallel and filters to
`ACTIVE` status by default so revoked keys stay hidden (a sketch of the
response shape follows this list).
- **Frontend** fetches via TanStack Query. Right now the hook consumes
the legacy endpoint with client-side slicing (15/page) so the page works
against staging today; once the paginated route ships we swap to the
generated `useGetV1ListUserApiKeysPaginatedInfinite` hook that's already
in the regenerated client.
- All new UI lives in `src/app/(platform)/settings/api-keys/components/`
— no legacy components reused. Shared primitives (Dialog, Form, Toast,
Skeleton, InfiniteScroll, BaseTooltip) come from the atoms/molecules
design system.
- Empty state uses a vertical marquee of ghost key-cards (framer-motion,
translateY 0→-50% on a duplicated stack, linear easing, symmetric mask
fade). Respects `prefers-reduced-motion`.
- Settings layout ScrollArea switched to `h-full` on mobile and
`md:h-[calc(100vh-60px)]` on desktop to remove a double scrollbar that
appeared when the mobile nav took space above the fixed-height scroll
region.
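
A sketch of the new response shape (the model name matches the Changes
list below; the item type is stubbed for illustration):

```python
from pydantic import BaseModel


class ListAPIKeysPaginatedResponse(BaseModel):
    items: list[dict]  # stub — the real item model carries masked key, scopes, etc.
    total_count: int
    page: int
    page_size: int
    has_more: bool  # roughly: page * page_size < total_count
```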

### Changes 🏗️

**Backend**
- `GET /api/api-keys/paginated` — new route, page + page_size query
params, `ListAPIKeysPaginatedResponse`.
- `list_user_api_keys_paginated` — new data fn, gathers find_many +
count, default ACTIVE-only filter.
- Existing `/api/api-keys` routes untouched.

**Frontend (settings/api-keys)**
- `page.tsx` + `components/APIKeyList/`, `APIKeyRow/`, `APIKeysHeader/`,
`APIKeySelectionBar/` — real-data wiring, drop mock array.
- `components/hooks/` — `useAPIKeysList`, `useCreateAPIKey`,
`useRevokeAPIKey`.
- `components/CreateAPIKeyDialog/` — zod-validated form + success view
with copy.
- `components/DeleteAPIKeyDialog/` — confirm with loading state; single
+ batch.
- `components/APIKeyInfoDialog/` — shows masked key, scopes,
description, created/last_used.
- `components/APIKeyListEmpty/` +
`APIKeyListEmpty/components/APIKeyMarquee.tsx` — animated empty state.
- `components/APIKeyListSkeleton/` — 6-row skeleton.

**Other**
- `settings/layout.tsx` — responsive ScrollArea height (fixes
double-scrollbar on mobile).
- `components/ui/scroll-area.tsx` — optional `showScrollToTop` FAB.
- `__tests__/placeholder-pages.test.tsx` — drop api-keys from
placeholder list.
- `AGENTS.md` — Phosphor `-Icon` suffix convention note.
- `api/openapi.json` — regenerated with new paginated endpoint.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Page loads → skeleton → list with real keys
- [ ] Empty state renders with the vertical marquee (and stays static
with `prefers-reduced-motion`)
- [ ] Create key dialog: name + description + permissions validates;
success view shows plaintext once + copy works; closing resets state
- [ ] Revoke single key via row trash icon → confirm dialog → toast on
success → row disappears
  - [ ] Batch-revoke via selection bar → confirm dialog → all revoked
- [ ] Info icon next to each key opens the details dialog (scopes,
timestamps, masked key)
- [ ] Infinite scroll loads more rows when scrolling past page 1 (≥16
keys)
- [ ] Mobile (<640px): single scrollbar, Create Key button below title
at size=small
- [ ] Desktop (md+): same layout as before, scroll-to-top FAB appears
after scrolling

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-04-24 14:08:18 +00:00
Abhimanyu Yadav
2cb52e5d19 feat(frontend): add Settings v2 page layout behind SETTINGS_V2 flag (SECRT-2272) (#12885)
### Why / What / How

**Why:** The Settings area is getting a redesign (per Figma
[Settings-Page](https://www.figma.com/design/YGck0Hb0GEgFzwbX47kSNs/Settings-Page?node-id=1-2)).
Ticket SECRT-2272 covers just the shell so content/forms for each
section can land in follow-up PRs without blocking on the nav
restructure. v1 at `/profile/settings` must stay intact for end users
during the rollout.

**What:** Adds a new parallel Settings hub at `/settings` (dedicated
sidebar + 7 placeholder sub-routes) behind a new `SETTINGS_V2`
LaunchDarkly flag. Default `false` so nothing changes for users until
the flag flips. Backend is untouched.


https://github.com/user-attachments/assets/dd680eaf-3d41-4a9a-87f3-d06d536a2503


**How:**
- New `Flag.SETTINGS_V2 = "settings-v2"` added to `use-get-flag.ts` with
`defaultFlags[Flag.SETTINGS_V2] = false`. Gate the whole route group at
`layout.tsx` via existing `FeatureFlagPage` HOC which redirects to
`/profile/settings` when the flag is off.
- `SettingsSidebar` replicates the Figma spec (237px, 7 items at 217×38,
`gap-[7px]`, rounded-[8px], active `bg-[#EFEFF0]` + text `#1F1F20` Geist
Medium, inactive text `#505057` Geist Regular, icon 16px Phosphor
light/regular at `#1F1F20`). Colors + typography use the canonical
tokens exported by Figma (zinc-50 `#F9F9FA`, zinc-200 `#DADADC` for the
right-border, etc.).
- `SettingsNavItem` is extracted as its own component and owns its
per-item entrance variant.
- Per-link loading indicator uses Next.js 15's `useLinkStatus()` hook —
spinner appears on the right of the clicked item and clears
automatically once the target page renders.
- `SettingsMobileNav` (< md breakpoint): sidebar hides; a pill trigger
with the current section's icon + label opens a Radix Popover listing
all 7 sections.
- Entrance animations via framer-motion, tuned to Emil Kowalski's
guidelines — `cubic-bezier(0, 0, 0.2, 1)` ease-out, all durations ≤
280ms, only `transform` and `opacity`, `useReducedMotion` disables
movement but keeps fade. Sidebar items stagger in (40ms offset). Main
content re-animates on every route change via `key={pathname}`.
- All 7 placeholder pages render the section title (Poppins Medium 22/28
via `variant="h4"`, `#1F1F20`) + "Coming soon" copy; they are
intentionally client components to avoid hook-order issues with the
client-side flag gate in the layout.

### Changes 🏗️

- `src/services/feature-flags/use-get-flag.ts`: register
`Flag.SETTINGS_V2` + default `false`
- `src/app/(platform)/settings/layout.tsx`: flag gate + responsive shell
+ route-keyed content animation
- `src/app/(platform)/settings/page.tsx`: client-side redirect to
`/settings/profile`
- `src/app/(platform)/settings/components/SettingsSidebar/`:
  - `SettingsSidebar.tsx` — aside with staggered entrance
- `SettingsNavItem.tsx` — per-item Link + icon + label + loader
(extracted)
- `useSettingsSidebar.ts` — hook mapping nav items with `isActive` from
`usePathname`
- `helpers.ts` — typed nav item config (label / href / Phosphor icon) ×
7
-
`src/app/(platform)/settings/components/SettingsMobileNav/SettingsMobileNav.tsx`:
mobile Popover trigger
- 7 placeholder pages: `profile`, `creator-dashboard`, `billing`,
`integrations`, `preferences`, `api-keys`, `oauth-apps`

**Follow-up PRs will migrate real content into each tab.** LaunchDarkly
flag key `settings-v2` must be created in the LD dashboard before
enabling for users.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `NEXT_PUBLIC_FORCE_FLAG_SETTINGS_V2=true` → `/settings` redirects
to `/settings/profile`, sidebar renders 7 items with "Profile" active
- [x] Click each nav item → URL changes, active item highlights, content
pane re-animates, per-link spinner shows during navigation
- [x] Viewport < 768px → sidebar hides, mobile pill trigger opens
Popover with all 7 items; selecting one navigates and closes
- [x] Without the flag env override, `/settings` redirects to
`/profile/settings` (v1 unchanged)
  - [x] `pnpm types` clean; prettier clean on touched files
- [x] Manual a11y pass with `prefers-reduced-motion` enabled — fade
remains, translations disabled

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
*(no new env vars required; existing `NEXT_PUBLIC_FORCE_FLAG_*` pattern
covers local override)*
- [x] `docker-compose.yml` is updated or already compatible with my
changes *(no docker changes)*
- [x] I have included a list of my configuration changes in the PR
description *(LaunchDarkly dashboard must have `settings-v2` flag
created before enabling; no other config changes)*
2026-04-24 10:39:54 +00:00
Zamil Majdy
ab88d03b13 refactor(backend/integrations): clearer naming + docs for managed-cred sweep (#12908)
## Why

Review comments on #12883 (thanks @Pwuts) surfaced a few spots where the
managed-credential plumbing's names and docstrings didn't match what the
code actually does:

- `_read_or_create_profile_key` suggests "read from any source or create
new", but only migrates the legacy
`managed_credentials.ayrshare_profile_key` side-channel — it doesn't
read an existing managed credential. (That check lives in the outer
`_provision_under_lock`.)
- Docstrings refer to "the startup sweep" in several places — there's no
startup hook; the sweep runs on `/credentials` fetches.
- `is_available` / `auto_provision` relationship wasn't explicit;
readers couldn't tell whether `is_available` was a config check or a
liveness check, or which of the two gates the sweep checks first.

## What

Naming + docstring cleanup. **Zero behavior changes.**

- Rename `_read_or_create_profile_key` →
`_migrate_legacy_or_create_profile_key` with docstring explaining why it
doesn't re-check the managed cred.
- Replace "startup sweep" → "credentials sweep" everywhere.
- `ManagedCredentialProvider` class docstring now names the two gates:
1. `auto_provision` — does this provider participate in the sweep at
all?
  2. `is_available` — are the required env vars / secrets set?
- `is_available` docstring now spells out: what it checks (env vars),
what it does NOT check (upstream health), and that it's only consulted
when `auto_provision=True`.
- `ensure_managed_credentials` docstring defines "credentials sweep",
when it fires, how the per-user in-memory cache works.
- Module-level docstring drops the stale "non-blocking background task"
wording (#12883 made the sweep bounded-await).

## How

4 files, all backend:
- `backend/integrations/managed_credentials.py`
- `backend/integrations/managed_providers/ayrshare.py`
- `backend/integrations/managed_providers/ayrshare_test.py`
- `backend/api/features/integrations/router.py`

Tests: 13/13 Ayrshare tests pass against the rename.

## Checklist

- [x] Follows style guide
- [x] Existing tests still pass (no functional change)
- [x] No new tests needed — pure rename + docstring change
2026-04-24 16:22:09 +07:00
An Vy Le
3aa72b4245 feat(backend/copilot): inline picker-backed inputs via run_block + accept AgentInputBlock subclasses (#12880)
### Why / What / How

**Why:** Resolves #12875. CoPilot's agent-builder was hardcoding Google
Drive file IDs into consuming blocks' `input_default` instead of wiring
an `AgentGoogleDriveFileInputBlock`. A beta user hit this across **13
saved versions** of one agent. Root causes:

1. `validate_io_blocks` only accepted the literal base `AgentInputBlock`
/ `AgentOutputBlock` IDs, so even when CoPilot used a specialized
subclass like `AgentGoogleDriveFileInputBlock` as the only input, the
validator forced it to keep a throwaway base alongside — entrenching the
anti-pattern.
2. Running a Drive consumer directly via CoPilot's `run_block` silently
failed because the auto-credentials flow (picker attaches
`_credentials_id`) existed only in the graph executor, never in
CoPilot's direct-execution path.
3. Drive picker guidance lived in `agent_generation_guide.md` instead of
on the blocks themselves, so it duplicated and drifted from the code.
4. Observed in a live session: when asked to read a private sheet,
CoPilot refused with "share publicly or use the builder" instead of
calling `run_block` and letting the picker render — the prompt rule was
buried and the fallback path (omitted required picker field) returned a
generic schema preview.

**What:** Four coordinated platform + CoPilot improvements. No
block-specific validator rules, no Drive-specific code in UI or prompt.

**How:**

#### 1. `validate_io_blocks` subclass support

Accepts any block with `uiType == "Input"` / `"Output"` (populated from
`Block.block_type` at registration). `AgentGoogleDriveFileInputBlock`,
`AgentDropdownInputBlock`, `AgentTableInputBlock`, etc. stand alone.
Base-ID fallback preserved for call sites that pass a minimal blocks
list.
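
A minimal sketch of the acceptance check, assuming block metadata is
available as a mapping keyed by block ID with a `uiType` field (the
helpers below are illustrative, not the actual validator code):

```python
# Placeholder IDs — the real base-block UUIDs differ; they are kept only as a
# fallback for call sites that pass a minimal blocks list without uiType.
BASE_INPUT_ID = "agent-input-block"
BASE_OUTPUT_ID = "agent-output-block"


def io_block_ids(blocks: dict[str, dict]) -> tuple[set[str], set[str]]:
    inputs = {bid for bid, meta in blocks.items() if meta.get("uiType") == "Input"}
    outputs = {bid for bid, meta in blocks.items() if meta.get("uiType") == "Output"}
    return inputs | {BASE_INPUT_ID}, outputs | {BASE_OUTPUT_ID}


def validate_io_blocks(graph_block_ids: set[str], blocks: dict[str, dict]) -> list[str]:
    inputs, outputs = io_block_ids(blocks)
    errors = []
    if not graph_block_ids & inputs:
        errors.append("Graph has no input block (any Input-typed subclass counts).")
    if not graph_block_ids & outputs:
        errors.append("Graph has no output block (any Output-typed subclass counts).")
    return errors
```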

#### 2. Inline picker via `run_block`

- Extracted `_acquire_auto_credentials` from
`backend/executor/manager.py` into shared
`backend/executor/auto_credentials.py` (exports
`acquire_auto_credentials` + `MissingAutoCredentialsError`).
- Wired it into `backend/copilot/tools/helpers.py::execute_block`. When
`_credentials_id` is present, the block executes with creds injected
(chained flows work). When missing/null, `execute_block` returns the
existing `SetupRequirementsResponse` — frontend's `FormRenderer` renders
the picker inline via the existing
`GoogleDrivePickerField`/`GoogleDrivePickerInput`. On pick, the LLM
re-invokes `run_block` with the populated input — same continuation
pattern as OAuth-missing-credentials. No new response types, no new
continuation tool, no new frontend component.
- `run_block` now short-circuits to `SetupRequirementsResponse` when
missing required fields include a picker-backed field, skipping the
schema-preview round trip the LLM would otherwise take.
- `get_inputs_from_schema` spreads the full property schema (`**schema`)
instead of whitelisting — any `format` / `json_schema_extra` / custom
widget config flows through to the generic custom-field dispatch on the
frontend. Future picker formats (date pickers, file pickers, etc.) work
without backend changes.
- Frontend `SetupRequirementsCard/helpers.ts` uses index-signature
passthrough for arbitrary schema keys — no widget-specific code in that
layer.

#### 3. `validate_only` parameter on `run_block`

`run_block(id, {})` is not always a safe probe — for blocks with zero
required inputs, it executes. New `validate_only: true` parameter
returns `BlockDetailsResponse` (schema + missing-input list) without
executing, rendering picker cards, or charging credits. Same response
shape as the existing schema preview — no new branch, just an extra
condition on the existing one. LLM uses this for pre-flight when it's
unsure whether a block has required inputs.
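
A sketch of the added branch (only `validate_only` and the response names
come from this PR; everything else below is a simplified stand-in):

```python
def plan_run_block(
    missing_required: list[str],
    picker_fields: set[str],
    validate_only: bool = False,
) -> str:
    # validate_only: schema + missing-input list, no execution, no credits.
    if validate_only:
        return "BlockDetailsResponse"
    # A missing picker-backed field short-circuits so the frontend renders the
    # picker inline instead of a generic schema preview.
    if any(field in picker_fields for field in missing_required):
        return "SetupRequirementsResponse"
    return "execute"
```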

#### 4. Block-local picker guidance

Agent-generation picker guidance relocated from the guide onto the
blocks themselves — surfaced at `find_block` time, exactly when the LLM
decides to wire a picker-backed consumer:

- `GoogleDriveFileField` (shared factory for every Drive field on
Sheets/Docs/etc.) appends a standard hint to the caller's description
covering: feed from the specialized input block, never hardcode (even
one parsed from a URL), picker is the only credential source.
- `AgentGoogleDriveFileInputBlock`'s block description now covers when
it's required, the `allowed_views` mapping, wiring direction, and a
concrete link-shape example.
- `agent_generation_guide.md` loses the dedicated 71-line Drive section.
The IO-blocks section now tells the LLM specialized subclasses satisfy
the requirement and carry their own usage guidance in block/field
descriptions — read them when `find_block` surfaces a match.
- New "Picker-backed inputs via `run_block`" section in the CoPilot
prompt, written generically (picker fields detected via `format` /
`auto_credentials` schema hints, no provider names hardcoded) — covers:
don't ask the user for URLs/IDs, don't refuse private-resource asks,
chained picker objects pass through as-is.
- Sharpened `MissingAutoCredentialsError` message so when a bare ID
reaches execution, the error explicitly tells the LLM the picker renders
inline (not "ask the user for something").

### Changes 🏗️

- `backend/copilot/tools/agent_generator/validator.py` —
`_collect_io_block_ids` + subclass-aware `validate_io_blocks`.
- `backend/executor/auto_credentials.py` (new) — shared
`acquire_auto_credentials` + `MissingAutoCredentialsError`.
- `backend/executor/manager.py` — imports from the shared module, drops
the local copy.
- `backend/copilot/tools/helpers.py` — `execute_block` calls
`acquire_auto_credentials`, merges kwargs, releases locks in `finally`,
returns `SetupRequirementsResponse` on missing creds.
`get_inputs_from_schema` spreads the full property schema.
- `backend/copilot/tools/run_block.py` — picker-field short-circuit +
`validate_only` parameter.
- `backend/copilot/prompting.py` — "Picker-backed inputs via
`run_block`" + "Pre-flight with `validate_only`" sections.
- `backend/blocks/google/_drive.py` — `GoogleDriveFileField` appends the
agent-builder hint to every Drive consumer's description.
- `backend/blocks/io.py` — `AgentGoogleDriveFileInputBlock` description
expanded.
- `backend/copilot/sdk/agent_generation_guide.md` — Drive section
removed, IO-blocks subclass note expanded.
- `frontend/.../SetupRequirementsCard/helpers.ts` — index-signature
passthrough for arbitrary schema keys; schema fields propagate into the
generated RJSF schema.
- Tests: new `TestExecuteBlockAutoCredentials` (4 cases) +
`validate_only` + picker-short-circuit cases in `run_block_test.py`;
`manager_auto_credentials_test.py` moved to new import path; 6 new
frontend cases in `SetupRequirementsCard/__tests__/helpers.test.ts`
covering schema passthrough.
- Also: one-line hoist of `import secrets` in
`backend/integrations/managed_providers/ayrshare.py` — ruff E402
introduced by #12883 was blocking our lint post-merge.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Backend unit suites: validator_test (48), helpers_test (40),
run_block_test (19), manager_auto_credentials_test (15) — **all green**
- [x] Frontend `SetupRequirementsCard` helpers — **75/75 pass**
(including 6 new passthrough cases)
- [x] `poetry run format` (ruff + isort + black) clean on touched files
(pre-existing pyright errors in unrelated `graphiti_core` /
`StreamEvent` / etc. files not introduced by this PR)
- [x] Live CoPilot chat on dev-builder confirmed the setup card renders
`custom/google_drive_picker_field` for a Drive consumer block called via
`run_block`
- [x] Live agent-generation confirmed CoPilot creates a subclass-only
agent (`AgentGoogleDriveFileInputBlock` → `GoogleSheetsReadBlock` →
`AgentOutputBlock`) with no throwaway base `AgentInputBlock`

#### For configuration changes:
- [x] N/A — no config changes

---------

Co-authored-by: majdyz <zamil.majdy@agpt.co>
2026-04-24 13:05:11 +07:00
Zamil Majdy
cc1f692fec feat(platform): add MAX tier + LD-configurable pricing + hide unconfigured tiers (#12903)
## What

Introduces a new `MAX` tier slot between `PRO` and `BUSINESS`
(self-service $320/mo at 20× capacity), routes every self-service tier's
Stripe price ID through LaunchDarkly, and hides tiers from the UI when
their price isn't configured. `BUSINESS` stays in the enum at 60× as a
reserved/future self-service slot (hidden by default until its LD price
flag is set). ENTERPRISE stays admin-managed.

## Tier shape after this PR

| Enum | UI label | Multiplier | LD price flag | Surfaced in UI by
default |
|---|---|---|---|---|
| `FREE` | Basic | 1× | `stripe-price-id-basic` | no (flag unset) |
| `PRO` | Pro | 5× | `stripe-price-id-pro` | yes (already live) |
| `MAX` **(new)** | Max | 20× | `stripe-price-id-max` | no (flag unset
until $320 price ready) |
| `BUSINESS` | Business | 60× | `stripe-price-id-business` | no
(reserved / future) |
| `ENTERPRISE` | — | 60× | — (admin-managed) | no (Contact-Us only) |

## Prisma

- Added `MAX` between `PRO` and `BUSINESS` in `SubscriptionTier`.
- Migration `add_subscription_tier_max/migration.sql` uses `ALTER TYPE
... ADD VALUE IF NOT EXISTS 'MAX' BEFORE 'BUSINESS'` (transactional
since PG 12). No data migration — no rows currently on BUSINESS via
self-service flows.

## Backend

- `get_subscription_price_id` flag map covers
`FREE`/`PRO`/`MAX`/`BUSINESS`. ENTERPRISE returns `None`. (A sketch of
the flag routing follows this list.)
- `GET /credits/subscription.tier_costs` only includes tiers whose LD
price ID is set. Current tier always present as a safety net.
- `POST /credits/subscription` routes by LD-resolved prices instead of
hard-coding `tier == FREE`:
- Target `FREE` + `stripe-price-id-basic` unset → legacy
cancel-at-period-end (unchanged behaviour).
- Target has LD price → modify in-place when user has an active sub,
else Checkout Session.
- Priced-FREE users with no sub fall through to Checkout (admin-granted
DB-flip shortcut gated on `current_tier != FREE`).
- `sync_subscription_from_stripe` + `get_pending_subscription_change`
cover FREE/PRO/MAX/BUSINESS in the price-to-tier map so every tier's
Stripe webhook reconciles cleanly.
- Pending-tier mapping collapsed into a single membership check.
- `TIER_MULTIPLIERS`: `FREE=1, PRO=5, MAX=20, BUSINESS=60,
ENTERPRISE=60`.
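
A sketch of the flag routing (flag keys as listed in the operator section
below; the lookup signature is simplified and illustrative):

```python
# Illustrative only: each self-service tier resolves to an LD price flag;
# ENTERPRISE has none and always returns None (admin-managed).
PRICE_FLAG_BY_TIER = {
    "FREE": "stripe-price-id-basic",
    "PRO": "stripe-price-id-pro",
    "MAX": "stripe-price-id-max",
    "BUSINESS": "stripe-price-id-business",
}


def get_subscription_price_id(tier: str, ld_values: dict[str, str]) -> str | None:
    flag = PRICE_FLAG_BY_TIER.get(tier)
    if flag is None:
        return None
    # An unset flag means the tier is hidden from tier_costs in the UI.
    return ld_values.get(flag) or None
```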

## Frontend

- UI labels: FREE→"Basic", MAX→"Max", BUSINESS→"Business" (PRO
unchanged). `TIER_ORDER` now `[FREE, PRO, MAX, BUSINESS, ENTERPRISE]`.
- `SubscriptionTierSection` filters by `tier_costs` — any tier without a
backend-provided price is hidden (current tier always visible).
- `formatCost` surfaces "Free" only when `FREE` is actually `$0`;
non-zero `stripe-price-id-basic` renders `$X.XX/mo`.
- Admin rate-limit display lists all five tiers with multiplier badges.

## LaunchDarkly flag actions (operator)

- **New:** `stripe-price-id-basic` → FREE tier. Set to `""` or a `$0`
Stripe price.
- **New:** `stripe-price-id-max` → MAX tier. Point at the `$320` Stripe
price when you launch the Max tier.
- **Unchanged:** `stripe-price-id-pro` (PRO), `stripe-price-id-business`
(BUSINESS — leave unset until you're ready for the 60× Business tier).
- Base rate limits stay on `copilot-daily-cost-limit-microdollars` /
`copilot-weekly-cost-limit-microdollars` (Basic's limit; everything else
= × tier multiplier).

## Out of scope

- Subscription-required onboarding screen / middleware gating (separate
PR).
- "Pricing available soon" vs Stripe-failure disambiguation in the UI
(follow-up).

## Testing

- Backend: 213 tests across `subscription_routes_test.py`,
`credit_subscription_test.py`, `rate_limit_test.py`,
`admin/rate_limit_admin_routes_test.py` — all passing.
- Frontend: 91 tests across `credits/` + `admin/rate-limits/` — all
passing.
- Fresh-backend manual E2E on the pre-MAX commit confirmed tier-hiding
works (`tier_costs` returns only the current tier when LD flags are
unset).

## Checklist

- [x] I have read the project's contributing guide.
- [x] I have clearly described what this PR changes and why.
- [x] My code follows the style guidelines of this project.
- [x] I have added tests that prove my fix is effective or that my
feature works.
- [ ] New and existing unit tests pass locally with my changes (CI will
confirm).
2026-04-24 11:11:33 +07:00
Zamil Majdy
be61dc4304 fix(backend): use {schema_prefix} in raw SQL migrations instead of hardcoded 'platform.' (#12905)
### Why / What / How

**Why.** Backend CI was failing at startup with `relation
"platform.AgentNode" does not exist`. Prisma's `migrate deploy` uses the
`schema.prisma` datasource, which doesn't declare a schema, so when
`DATABASE_URL` has no `?schema=platform` query param (as in CI / raw
Supabase), Prisma creates tables in `public` — but the lifespan
migration `backend.data.graph.migrate_llm_models` hardcoded
`platform."AgentNode"` in its raw SQL and crashed the boot.

**What.** Switched `migrate_llm_models` to use the
`execute_raw_with_schema` helper and the `{schema_prefix}` placeholder —
the same pattern already used by the sibling
`fix_llm_provider_credentials` migration in the same file. The helper in
`backend/data/db.py` reads the schema from `DATABASE_URL` at runtime and
substitutes `"platform".` or an empty prefix, so the query works in both
dev (schema=platform) and CI / raw Supabase (public).
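
A minimal sketch of the pattern (the SET clause and values are
illustrative, not the real migration SQL):

```python
old_model, new_model = "gpt-4", "gpt-5"  # illustrative values

# Double braces in the f-string keep a literal {schema_prefix} in the template,
# which execute_raw_with_schema later fills with '"platform".' or '' at runtime.
query = (
    f'UPDATE {{schema_prefix}}"AgentNode" SET "constantInput" = '
    f"REPLACE(\"constantInput\", '{old_model}', '{new_model}')"
)

assert "{schema_prefix}" in query  # survives until .format() inside the helper
assert '"platform"."AgentNode"' in query.format(schema_prefix='"platform".')
assert 'UPDATE "AgentNode"' in query.format(schema_prefix="")
```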

**How.**
- Template change: `UPDATE platform."AgentNode"` → `UPDATE
{{schema_prefix}}"AgentNode"` (f-string double-brace escape so
`{schema_prefix}` survives to `.format()` inside
`execute_raw_with_schema`).
- Replace `db.execute_raw(...)` with `execute_raw_with_schema(...)`;
drop the now-unused `prisma as db` import.
- Regression test: mocks `execute_raw_with_schema` and asserts every
emitted query contains `{schema_prefix}` and no longer contains
`platform."AgentNode"`.

### Audit

Audited the other three lifespan migrations in
`backend/api/rest_api.py::lifespan_context`:
- `backend.data.user.migrate_and_encrypt_user_integrations` — uses
Prisma ORM, no raw SQL. OK.
- `backend.data.graph.fix_llm_provider_credentials` — already uses
`query_raw_with_schema` + `{schema_prefix}`. OK.
- `backend.integrations.webhooks.utils.migrate_legacy_triggered_graphs`
— uses Prisma ORM, no raw SQL. OK.

Also grepped the whole backend for `platform."` in Python files —
`migrate_llm_models` was the only offender; the other hits were
unrelated string content (docstrings, error messages, test data).

### Changes

- `autogpt_platform/backend/backend/data/graph.py`: `migrate_llm_models`
now uses `execute_raw_with_schema` with the `{schema_prefix}`
placeholder; unused `prisma as db` import dropped.
- `autogpt_platform/backend/backend/data/graph_test.py`: added
`test_migrate_llm_models_uses_schema_prefix_placeholder` regression
test.

### Checklist

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran `migrate_llm_models` under mocked `execute_raw_with_schema` —
all 7 emitted UPDATE queries contain `{schema_prefix}` and none hardcode
`platform."AgentNode"`.
- [x] Verified the f-string double-brace escape by evaluating the
template and running `.format(schema_prefix=...)` — substitution is
correct for both `"platform".` and empty-prefix (public-schema) cases.
- [x] `poetry run pyright backend/data/graph.py` clean (pre-existing
pyright error on `backend/api/features/v1.py:834` on `origin/dev` is
unrelated).
- [x] Grepped the whole backend for other hardcoded `platform."..."`
raw-SQL occurrences — none found.

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
(N/A — no config changes)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (N/A — no config changes)
2026-04-24 10:00:22 +07:00
Zamil Majdy
575f75edf4 refactor(platform): migrate Ayrshare to standard managed-credential flow (#12883)
## Why

Beta user report: AutoPilot told them to sign up for Ayrshare themselves
— which AutoGPT actually manages — because AutoPilot inferred the
requirement from the block description string rather than any structured
schema. Root cause: Ayrshare was the only block family whose
"credential" lived in a bespoke
`UserIntegrations.managed_credentials.ayrshare_profile_key` side channel
and whose blocks declared **no** `credentials` field. `find_block` /
`resolve_block_credentials` had nothing to show the LLM, so the LLM
guessed.

(An initial commit added a runtime `gh` CLI bootstrap for a separate "gh
isn't installed in the sandbox" report — that work was empirically
verified unnecessary and reverted; see the commit history for the bench
results.)

## What

**Ayrshare now goes through the standard managed-credential flow:**

- New `AyrshareManagedProvider` alongside the existing
`AgentMailManagedProvider`. Provisions the per-user profile as
`APIKeyCredentials(provider="ayrshare", is_managed=True)` via the shared
`add_managed_credential` path. Reuses any legacy
`managed_credentials.ayrshare_profile_key` value on first provision so
existing users keep their linked social accounts.
- `AyrshareManagedProvider.is_available()` returns `False` so the
`ensure_managed_credentials` startup sweep **never** auto-provisions
Ayrshare (profile quota is a real per-user subscription cost). New
public `ensure_managed_credential(user_id, store, provider)` helper lets
the `/api/integrations/ayrshare/sso_url` route provision on demand,
reusing the same distributed Redis lock + upsert path as AgentMail.
- New `ProviderBuilder.with_managed_api_key()` method registers
`api_key` as a supported auth type without the env-var-backed default
credential that `with_api_key()` creates — so the org-level Ayrshare
admin key cannot leak to blocks as a "profile key".
- `BaseAyrshareInput` gains a shared `credentials` field; all 13 social
blocks inherit it. Each `run()` now takes `credentials:
APIKeyCredentials`; the inline `get_profile_key` guard + "please link a
social account" error is gone. Standard `resolve_block_credentials`
pre-run check owns the "not connected" path, returning a normal
`SetupRequirementsResponse`.
- **Migration-ordering safety:** `post_provision` hook on
`ManagedCredentialProvider` clears the legacy `ayrshare_profile_key`
field **only after** `add_managed_credential` has durably stored the
managed credential. If persistence fails, the legacy key stays intact so
a retry can reuse it — covered by `TestMigrationOrderingSafety`.
- New public `IntegrationCredentialsStore.get_user_integrations()` —
reads no longer have to reach past the `_get_user_integrations` privacy
fence or abuse `edit_user_integrations` as a pseudo-read.
- `/api/integrations/ayrshare/sso_url` collapses from a 60-line
provision-then-sign dance to: pre-flight `settings_available()`,
`ensure_managed_credential`, fetch the credential, sign a JWT.
- `IntegrationCredentialsStore.set_ayrshare_profile_key` removed — the
managed credential is now the only write path.
- Legacy `UserIntegrations.ManagedCredentials.ayrshare_profile_key`
field is retained so the managed provider can migrate existing users on
first provision; removing the field is a follow-up once rollout has
propagated.

## How

After this PR, `find_block` returns Ayrshare blocks with a structured
`credentials_provider: ['ayrshare']`. AutoPilot sees the credential
requirement the same way it sees GitHub's or AgentMail's, calls
`run_block`, and gets a plain `SetupRequirementsResponse` when the
managed credential has not been provisioned yet. No more
description-string speculation; the whole Ayrshare flow is the normal
flow.

The Builder's `AyrshareConnectButton` (`BlockType.AYRSHARE`) still works
— it hits the same endpoint, now a thin wrapper over the managed
provider — so users still get the "Connect Social Accounts" popup for
OAuth'ing individual social networks.

## Test plan

- [x] `poetry run pytest backend/blocks/test/test_block.py -k "ayrshare
or PostTo"` — 26/26 pass.
- [x] `poetry run pytest
backend/integrations/managed_providers/ayrshare_test.py` — 10/10 pass.
- [x] `poetry run pytest
backend/api/features/integrations/router_test.py` — 21/21 pass.
- [x] `poetry run pyright` on all touched backend files — 0 errors.
- [x] Runtime sanity: `find_block` on `PostToXBlock` lists
`credentials_provider: ['ayrshare']` in the JSON schema.
- [ ] Manual QA in preview: connect social account via Builder's
"Connect Social Accounts" button → post to X via CoPilot end-to-end.
- [ ] Verify existing users with
`managed_credentials.ayrshare_profile_key` continue to work without
re-linking.
2026-04-24 09:37:38 +07:00
Zamil Majdy
0f6eea06c4 feat(platform/backend): dynamic BlockCostType (SECOND/ITEMS/COST_USD/TOKENS) + E2B/FAL migration (#12894)
## Why

PR #12893 shipped flat-floor credit charges so no provider sits
wallet-free. This PR is the next step: make dynamic pricing actually
dynamic. Blocks that scale with walltime, item count, provider-reported
USD, or token volume now get billed based on captured execution stats
instead of a fixed floor.

Before this PR `BlockCostType` only had `RUN` / `BYTE` / `SECOND`, and
`SECOND` was dead code — no caller ever passed `run_time > 0`, so every
per-second entry evaluated to 0. This PR wires the stats plumbing
through, adds the cost-type variants that cover the real billing models
our providers charge on, and migrates blocks across the codebase to use
them.

## What

### Machinery

- `BlockCostType` gains `ITEMS`, `COST_USD`, `TOKENS`. `BlockCost` gains
`cost_divisor: int = 1` so SECOND/ITEMS/TOKENS can express "1 credit per
N units" without fractional amounts.
- `block_usage_cost(..., stats: NodeExecutionStats | None = None)` —
pre-flight (no stats) dynamic types return 0 so the balance check isn't
blocked on unknown-future cost; post-flight (stats populated) they
consume captured execution stats.
- `TokenRate` model + `TOKEN_COST` table (~60 models: Claude family,
GPT-5 family, Gemini 2.5, Groq/Llama, Mistral, Cohere, DeepSeek, Grok,
Kimi, Perplexity Sonar). Rates are credits per 1M tokens with input /
output / cache-read / cache-creation split.
- `compute_token_credits(input_data, stats)` — reads
`stats.input_token_count / output_token_count / cache_read_token_count /
cache_creation_token_count`, multiplies by `TOKEN_COST[model]`, ceils to
integer credits. Falls back to flat `MODEL_COST[model]` for unmapped
models (no silent under-billing).
- `billing.charge_reconciled_usage(node_exec, stats)` — runs
post-flight, charges positive delta / refunds negative delta. RUN-only
blocks produce zero delta (no-op). Swallows `InsufficientBalanceError` +
unexpected errors so reconciliation never poisons the success path.
- Pre-flight balance guard — dynamic-cost blocks (0 pre-flight charge)
are blocked when the wallet is non-positive. Closes Sentry `r3132206798`
(HIGH).
- Reconciliation fires `handle_low_balance` on positive delta so users
still get alerted after post-flight reconciliation.

### Block migrations — cost-type changes

| Provider / block family | Old | New | Cost type |
|---|---|---|---|
| All LLM blocks (Anthropic / OpenAI / Groq / Open Router / Llama API /
v0 / AIML, via `LLM_COST` list) | RUN, flat per-model from `MODEL_COST`
| `TOKEN_COST` per-token rate table (input / output / cache-read /
cache-creation) | **TOKENS** |
| Jina `SearchTheWebBlock` | RUN, 1 cr | 100 cr / $ (≈ 1 cr per $0.01
call) | **COST_USD** |
| ZeroBounce `ValidateEmailsBlock` | RUN, 2 cr | 250 cr / $ (≈ 2 cr per
$0.008 validation) | **COST_USD** |
| Apollo `SearchOrganizationsBlock` | RUN, 2 cr flat | 1 cr / 2 orgs
(divisor=2) | **ITEMS** |
| Apollo `SearchPeopleBlock` (no enrich) | RUN, 10 cr flat | 1 cr /
person | **ITEMS** |
| Apollo `SearchPeopleBlock` (enrich_info=true) | RUN, 20 cr flat | 2 cr
/ person | **ITEMS** |
| Firecrawl (all blocks — Crawl, MapWebsite, Search, Extract, Scrape,
via `ProviderBuilder.with_base_cost`) | RUN, 1 cr | 1000 cr / $ (1 cr
per Firecrawl credit ≈ $0.001) | **COST_USD** |
| DataForSEO (KeywordSuggestions, RelatedKeywords, via `with_base_cost`)
| RUN, 1 cr | 1000 cr / $ | **COST_USD** |
| Exa (~45 blocks, via `with_base_cost`) | RUN, 1 cr | 100 cr / $ (Deep
Research $0.20 → 20 cr) | **COST_USD** |
| E2B `ExecuteCodeBlock` / `InstantiateCodeSandboxBlock` /
`ExecuteCodeStepBlock` | RUN, 2 cr flat | 1 cr / 10 s walltime
(divisor=10) | **SECOND** |
| FAL `AIVideoGeneratorBlock` | RUN, 10 cr flat | 3 cr / walltime s |
**SECOND** |

### Cost-leak fixes — interim values (flagged 🔴 CONSERVATIVE INTERIM in
Notion)

Separate from the type migrations above, these 3 providers had real API
costs but were under-billed (or wallet-free):

| Provider / block | Old | New | Cost type | Plan for proper fix |
|---|---|---|---|---|
| Stagehand (`StagehandObserve` / `Act` / `Extract`, via
`with_base_cost`) | RUN, 1 cr | 1 cr / 3 walltime s (divisor=3) |
**SECOND** | Have blocks emit `provider_cost` USD (session_seconds ×
$0.00028 + real LLM USD) → migrate to `COST_USD 100 cr/$`. |
| Meeting BaaS `BaasBotJoinMeetingBlock` (via `@cost` decorator
override) | RUN, 5 cr | RUN, 30 cr | RUN | Surface meeting duration on
`FetchMeetingData` response → migrate Join to `SECOND` or `COST_USD`
post-flight. |
| AgentMail (~37 blocks, via `with_base_cost`) | **0 cr (unbilled)** |
RUN, 1 cr | RUN | Revisit when AgentMail publishes paid-tier pricing
(currently beta). |

### UI

- `NodeCost.tsx` dynamic labels: RUN → `N /run`, SECOND → `~N /sec` (or
`~N / Xs` with divisor), ITEMS → `~N /item` (or `/ X items`), COST_USD →
`~N · by USD`, TOKENS → `~N · by tokens` (tooltip explains cache
discount).
- Floor amounts prefixed with `~` for dynamic types so users see an
estimate, not a hard guarantee.

## How

The resolver split is the key design decision. Instead of charging the
"true" cost entirely post-flight (which would let a user burn credits
they don't have), pre-flight returns a safe estimate:
- RUN: full `cost_amount` (same as before — backwards compatible).
- SECOND/ITEMS/COST_USD: `0` when stats aren't populated yet.
- TOKENS: `MODEL_COST[model]` as a flat floor from the existing rate
table.

Post-flight, the executor calls `charge_reconciled_usage`, which
evaluates the same resolver with stats and charges the positive delta
(or refunds the negative delta). RUN blocks get a 0-delta no-op; dynamic
blocks get their actual charge. Failure modes are bounded: insufficient
balance is logged (not raised; reconciliation must never poison a
success), unexpected errors are swallowed and alerted via Discord.
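
A sketch of the split (signatures simplified; stats are passed as a plain
dict here rather than NodeExecutionStats, and TOKENS is omitted because it
routes through `compute_token_credits` as described above):

```python
import math


def post_flight_credits(cost_type: str, cost_amount: int, divisor: int, stats: dict) -> int:
    # Pre-flight, dynamic types (SECOND/ITEMS/COST_USD) were charged 0; this is
    # the post-flight evaluation with captured stats.
    if cost_type == "RUN":
        return cost_amount  # same as pre-flight -> zero delta below, a no-op
    if cost_type == "COST_USD":
        return math.ceil(stats.get("provider_cost", 0.0) * cost_amount)
    if cost_type == "SECOND":
        return math.ceil(stats.get("walltime", 0.0) * cost_amount / divisor)
    if cost_type == "ITEMS":
        return math.ceil(stats.get("items", 0) * cost_amount / divisor)
    return 0


def reconciliation_delta(preflight_charge: int, postflight_credits: int) -> int:
    # Positive -> charge the difference (and run the low-balance check);
    # negative -> refund; zero -> nothing to do.
    return postflight_credits - preflight_charge
```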

TOKENS routes through a dedicated `compute_token_credits` helper so the
rate table (`TOKEN_COST`) can grow organically without touching resolver
logic. Models not yet in `TOKEN_COST` fall back to the flat `MODEL_COST`
tier.

Migration for providers with a real USD spend (Exa, Firecrawl,
DataForSEO, Jina Search, ZeroBounce) is a one-line `_config.py` change
via the extended `ProviderBuilder.with_base_cost`. Each block's `run()`
populates `provider_cost` from the response (Exa's `cost_dollars.total`,
Firecrawl's `credits_used`, etc.) via `merge_stats`, and the post-flight
resolver multiplies by `cost_amount` credits/$.

## Test plan

- [x] 92/92 cost-pipeline tests pass — `block_usage_cost_test.py`,
`billing_reconciliation_test.py`, `manager_cost_tracking_test.py`,
`block_cost_config_test.py`.
- [x] Deep E2E against live stack (real DB, `database_manager` RPC): 8/8
scenarios pass — RUN pre-flight, dry-run no-charge, TOKENS refund, ITEMS
scaling, ITEMS zero-items short-circuit, COST_USD exact + ceil
semantics, pre-flight balance guard. Report:
https://github.com/Significant-Gravitas/AutoGPT/pull/12894#issuecomment-4307672357
- [x] `poetry run ruff check` / `ruff format` / `pnpm format` / `pnpm
lint` / `pnpm types` — clean.
- [x] Manual UI: `NodeCost.tsx` renders `~N · by tokens` for
AITextGeneratorBlock, `~N · by USD` for Jina/Exa/Firecrawl.

## Follow-ups (not in this PR)

- Stagehand / Meeting BaaS / Ayrshare: expose provider-side unit cost
(session-seconds, meeting duration, platform analytics credits) to
migrate from interim flat/walltime to fully dynamic `COST_USD`.
- Replicate / Revid: walltime-based billing once response cost is piped
through.
- AgentMail: final rate once paid tier is published.
2026-04-24 08:45:39 +07:00
Zamil Majdy
43b38f6989 fix(backend/copilot): surface non-zero E2B exits as real results, not sandbox errors (#12904)
## Why

`gh auth status` looked flaky in the E2B sandbox. Not actually flaky: it
fails deterministically when the user has not connected GitHub (or the
token is missing/expired), and our wrapper disguises that legitimate
exit-1 as a sandbox infrastructure failure.

Root cause: E2B's `sandbox.commands.run()` raises `CommandExitException`
for **any** non-zero exit. We caught it as a generic `Exception` and
returned an `ErrorResponse` with message:

```
E2B execution failed: Command exited with code 1 and error:
{stderr}
```

When the model runs `gh auth status 2>&1`, stderr is redirected to
stdout — so `exc.stderr` is empty **and** `exc.stdout` (which carries
the real info, e.g. "You are not logged into any GitHub hosts") is
discarded. The model sees a generic infra failure, can't tell it's an
auth-check signal, and prompts the user with broken-looking errors
instead of calling `connect_integration(provider="github")`.

Compare: the local bubblewrap path already handles non-zero exits
correctly by returning a `BashExecResponse` with `exit_code` set. The
E2B path was asymmetric.

## What

- Import `CommandExitException` and catch it explicitly in
`_execute_on_e2b` before the generic handler.
- Return a `BashExecResponse` with the real `exit_code`, `stdout`, and
`stderr` from the exception (scrubbed of injected secret values, same as
the success path).
- Extract shared scrub/build logic into `_build_response` to avoid
duplicating it across the success and exit-exception branches.
- Keep `TimeoutException` and the catch-all `except Exception` for real
infra failures.

## How

Result shape now matches bubblewrap: non-zero exit is a valid result,
not an error. The model sees:

```
message: "Command executed with status code 1"
exit_code: 1
stdout: "You are not logged into any GitHub hosts. ..."
stderr: ""
```

instead of the prior cryptic "E2B execution failed" message.
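
A sketch of the new branch's result shape (`BashExecResponse` fields per
the description; the `_build_response` scrubbing step is elided):

```python
from dataclasses import dataclass


@dataclass
class BashExecResponse:  # stand-in for the real response model
    message: str
    exit_code: int
    stdout: str
    stderr: str


def response_from_command_exit(exc) -> BashExecResponse:
    # Non-zero exit is a valid result, mirroring the bubblewrap path — the model
    # gets the real exit code and output instead of a generic infra error.
    return BashExecResponse(
        message=f"Command executed with status code {exc.exit_code}",
        exit_code=exc.exit_code,
        stdout=exc.stdout,  # real code scrubs injected secret values first
        stderr=exc.stderr,
    )
```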

## Test plan

- [x] New unit test `test_nonzero_exit_returned_as_bash_exec_response`
in `bash_exec_test.py` — mocks `sandbox.commands.run` to raise
`CommandExitException`, asserts `BashExecResponse` with correct
`exit_code`, and verifies secret scrubbing on both `stdout` and
`stderr`.
- [x] `poetry run pytest backend/copilot/tools/bash_exec_test.py` — 5
passed.
- [x] `poetry run pyright` on changed files — 0 errors.
- [x] `poetry run ruff` — clean.
2026-04-24 07:49:57 +07:00
Nicholas Tindle
f3b4b0e1d9 fix(platform): human-readable storage quota error messages
Backend: Replace raw byte counts ("262,144,000 bytes") with formatted
sizes ("250 MB of your 1 GB quota") and include upgrade guidance.

Frontend: Parse 413 JSON response in direct-upload.ts to extract the
human-readable detail message instead of dumping raw JSON to the user.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 15:23:24 -05:00
Nicholas Tindle
10e421cd3e fix(platform): resolve autopilot beta blockers (SECRT-2266/2267/2268/2269) (#12874)
### Why / What / How

**Why:** A beta user spent significant time trying to build and run
agents that read Google Sheets. Four separate failures compounded on
their session — all already open in Linear as SECRT-2266 through
SECRT-2269. Three in-flight PRs each addressed a piece but conflicted on
the same files (`backend/data/model.py`, `backend/blocks/_base.py`,
`autogpt_libs/.../types.py`), so landing them individually would have
been churn. One of the four reported issues (the credential-delete
crash) is also the top unresolved Sentry issue `AUTOGPT-SERVER-6HB` with
100+ events going back to 2025-10-20 — it was archived as "ignored" but
is a real regression. Bug #4 required new work; the others we got by
adopting the existing open PRs and addressing a pending review comment.

**What:** This PR consolidates the three in-flight PRs, adds the two
pieces of new work needed to fully close the beta blockers, and
addresses the pending review on one of the three PRs so it doesn't
require a second round.

- **Closes PR #12004** — Google Drive auto-credentials handling (merged
in)
- **Closes PR #12748** — Incremental OAuth for scope upgrades (merged
in)
- **Closes PR #12588** — superseded by the systemic None-guard here (see
"How" below)
- **Adds Bug 2 fix** — Google credential deletion no longer crashes on
`revoke_tokens`
- **Adds Bug 4 validator** — the agent builder can no longer save a
graph with a hardcoded Drive file ID

**How:**

1. **Adopt PR #12004 (Bug 1 — auto-credentials resolution).** Tags
Drive-file fields as `is_auto_credential` on `CredentialsFieldInfo`,
exposes `BlockSchema.get_auto_credentials_fields()` and
`Graph.regular_credentials_inputs` / `auto_credentials_inputs`, extracts
`_acquire_auto_credentials()` in the executor to resolve embedded
`_credentials_id` at run time, clears `_credentials_id` on agent fork so
cloned agents don't inherit the original author's credential, and fixes
the Firefox referrer policy on the Google Drive picker script load.

2. **Adopt PR #12748 (Bug 3 — credential accumulation).** OAuth callback
now merges scopes into an existing credential (explicit via
`credential_id` in OAuth state, or implicit via `provider + username`
match) instead of appending a new row on every reconnect. GitHub's
non-incremental OAuth path requests the union of existing + new scopes
at login so the upgrade path works there too.

3. **Replace PR #12588 with a systemic None-guard (addresses reviewer
feedback).** The original PR added a per-block `credentials:
GoogleCredentials | None = None` + early guard pattern that would need
to be repeated across 50+ blocks with `GoogleDriveFileField`. Per the
reviewer's ask, we moved the guard into `Block._execute()` once: after
the `setdefault` loop, if `kwargs[kwarg_name] is None` we raise
`BlockExecutionError` with a clean user-facing message. The per-block
change in `sheets.py` is dropped so `credentials: GoogleCredentials`
stays non-`Optional`. Dry-run path skips the guard (executor
intentionally runs blocks without resolved creds for schema validation).

4. **Fix Bug 2 — Google revoke_tokens (SECRT-2267,
AUTOGPT-SERVER-6HB).** `revoke_tokens()` was handing our Pydantic
`OAuth2Credentials` into google-auth's `AuthorizedSession`, which calls
`self.credentials.before_request(...)` on the object and crashes with
`AttributeError: 'OAuth2Credentials' object has no attribute
'before_request'`. Google's token revoke endpoint doesn't need any auth
header — just `token=<token>` in the form body per [Google's
docs](https://developers.google.com/identity/protocols/oauth2/web-server#tokenrevoke).
Switched to the platform's async `Requests` helper, matching how
`reddit.py` / `github.py` / `todoist.py` / other providers do
revocation. No google-auth objects involved. (A sketch of the call shape
follows at the end of this section.)

5. **Fix Bug 4 — hardcoded Drive file IDs in agent graphs
(SECRT-2269).** Evidence from the beta user's session: CoPilot's
agent-builder produced 13 saved graph versions in one session where each
one stuffed either a bare string (`"1KAv…"`) or a partial object
(`{"id": "1KAv…"}`) into
`GoogleSheetsReadBlock.constantInput.spreadsheet`, never wiring an
`AgentGoogleDriveFileInputBlock` as the intended input. Bare-string
versions failed pydantic validation with `is not of type 'object'`;
object-with-only-`id` versions would have crashed at run time because
`_acquire_auto_credentials` has no `_credentials_id` to resolve. Added a
validator in `GraphModel._validate_graph_get_errors` that flags any
auto-credentials field whose `input_default.<field>` is a bare string OR
a dict missing `_credentials_id`, when there's no upstream link feeding
the field. Remediation text is format-aware: when
`field_schema["format"] == "google-drive-picker"` it names
`AgentGoogleDriveFileInputBlock` specifically; for any other future
auto-credentials format (OneDrive / Dropbox / etc.) the remediation is
generic, so we don't ship a stale Google-specific hint that doesn't
apply.

A companion handoff for the CoPilot agent-builder team is drafted at
`/tmp/agent-builder-ticket-drive-file-input.md` (to be filed in their
tracker). The validator here is a safety net so reviewers and the LLM
both get a clear error with the correct remediation; the agent-builder
itself still needs to learn the correct pattern so it stops trying to
hardcode Drive files in the first place.
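
A sketch of the revised revoke call (the HTTP client is injected here for
illustration — the real code uses the platform's async `Requests` helper;
the endpoint and form body follow Google's docs linked above):

```python
GOOGLE_REVOKE_URL = "https://oauth2.googleapis.com/revoke"


async def revoke_google_token(http_post, access_token: str) -> bool:
    # No Authorization header and no google-auth session object — Google's
    # revoke endpoint only wants the token itself as form data.
    response = await http_post(
        GOOGLE_REVOKE_URL,
        data={"token": access_token},
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    return 200 <= response.status_code < 300  # assumes a requests-like response
```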

### Changes 🏗️

**Backend**

- `backend/data/model.py` — merged `is_auto_credential` +
`input_field_name` (#12004) with `OAuthState.credential_id` (#12748);
kept HEAD's defensive `set()` copy on `discriminator_values`.
- `backend/blocks/_base.py` — `_execute()` runs the auto-credentials
setdefault loop + raises `BlockExecutionError` when a resolved value is
`None`.
- `backend/blocks/google/sheets_test.py` — 2 new tests (systemic
None-guard behaviour).
- `backend/blocks/google/_drive.py`, `_drive_test.py` — unchanged on
this branch (earlier bare-string validator was reverted after feedback;
see "Out of scope" below).
- `backend/data/graph.py` — auto-credentials anti-pattern validator in
`_validate_graph_get_errors`.
- `backend/data/graph_test.py` — 11 new tests for the validator.
- `backend/integrations/oauth/google.py` — `revoke_tokens` swapped to
`Requests().post`, removed `AuthorizedSession` misuse.
- `backend/integrations/oauth/google_test.py` — 3 new tests covering the
revoke happy path, no-access-token, and non-2xx-response.
- `backend/integrations/credentials_store.py` — from #12748.
- `backend/api/features/integrations/router.py` — incremental-OAuth
callback + scope upgrade helpers (from #12748).
- `backend/api/features/integrations/incremental_oauth_test.py` — 15
tests (from #12748).
- `backend/api/features/chat/tools/utils.py` → renamed to
`backend/copilot/tools/utils.py` during merge; now uses
`regular_credentials_inputs` for missing-creds + matching (from #12004).
- `backend/copilot/tools/utils_test.py` — moved from
`api/features/chat/tools/`, import paths updated.
- `backend/api/features/library/db.py` — library preset guard uses
`regular_credentials_inputs` (from #12004).
- `backend/data/graph.py` — `regular_credentials_inputs` /
`auto_credentials_inputs` properties + `_reassign_ids` clears
`_credentials_id` on fork (from #12004).
- `backend/executor/manager.py` — `_acquire_auto_credentials()`
extracted + validation (from #12004).
- `backend/executor/utils.py`, `utils_test.py`,
`manager_auto_credentials_test.py` — auto-credentials tests (from
#12004).

**Frontend**

- `frontend/src/components/contextual/GoogleDrivePicker/helpers.ts` —
Firefox referrer fix (from #12004).
-
`frontend/src/components/contextual/CredentialsInput/useCredentialsInput.ts`,
`src/hooks/useCredentials.ts`, `src/lib/autogpt-server-api/client.ts`,
`src/providers/agent-credentials/credentials-provider.tsx`,
`src/app/api/openapi.json` — incremental-OAuth scope upgrade UI (from
#12748).

**Shared libs**

- `autogpt_libs/supabase_integration_credentials_store/types.py` —
merged additions from both #12004 and #12748.

### Test plan 📋

- [x] `poetry run lint` — clean
- [x] `poetry run pytest backend/data/graph_test.py` — 55 passed
including 11 new validator tests
- [x] `poetry run pytest backend/integrations/oauth/google_test.py` — 3
new tests passing
- [x] `poetry run pytest backend/blocks/google/sheets_test.py` — 2 new
tests passing
- [x] `poetry run pytest backend/blocks/google/
backend/integrations/oauth/ backend/executor/ backend/data/graph_test.py
backend/api/features/integrations/ backend/copilot/tools/utils_test.py`
— 250 passed, 6 pre-existing failures that require the docker stack
(RabbitMQ/Redis/Postgres) and fail identically on `origin/dev`
- [x] `pnpm format` — clean
- [x] `pnpm lint` — 3 pre-existing `<img>` warnings on files I didn't
touch, no errors
- [x] `pnpm types` — pre-existing errors on `AgentActivityDropdown` that
also fail on `origin/dev` (unrelated to this PR; needs a separate fix on
dev)
- [x] Live repro on dev verified Bug 2 fires against current prod code —
two fresh Sentry events in `AUTOGPT-SERVER-6HB` at 2026-04-21T21:35:54Z
on `app:dev-behave:cloud` matching the exact `DELETE
/api/integrations/google/credentials/{cred_id}` path. Airtable OAuth2
delete as a control worked cleanly, confirming the failure is
Google-specific.
- [x] Live repro on dev verified Bug 4 (CoPilot direct-run variant) —
`{"spreadsheet": {"id": "..."}}` → `Cannot use file 'None' (type: None)`
from `_validate_spreadsheet_file` mimeType check, as expected.

Reviewer post-merge verification:
- [ ] Delete a Google OAuth credential via the Integrations UI —
succeeds cleanly, no Sentry event fires
- [ ] Connect Google twice (same account, same scopes) — credential
count stays at 1 (dedup)
- [ ] Save an agent graph with
`GoogleSheetsReadBlock.constantInput.spreadsheet = "bare-id"` via API —
graph validator rejects with `AgentGoogleDriveFileInputBlock`
remediation
- [ ] Save an agent graph with `GoogleSheetsReadBlock` whose
`spreadsheet` is fed by an upstream
`AgentGoogleDriveFileInputBlock.result` — validator accepts, agent runs

### Out of scope (for follow-ups)

- **Bug 1 — "Failed to retrieve Google OAuth credentials"** in
`frontend/src/components/contextual/GoogleDrivePicker/useGoogleDrivePicker.ts:163`.
Zero hits for this string in the beta user's Langfuse traces and we
weren't able to reproduce it from a clean flow. Most likely a
stale-credential race condition (delete in another tab, picker queries a
stale React-Query cache). Tracked as a separate task; not blocking.
- **CoPilot first-attempt mimeType retry loop.** Observed on dev:
CoPilot's first call to `GoogleSheetsReadBlock` sends `{"spreadsheet":
{"id": "..."}}` without `mimeType`, hits `_validate_spreadsheet_file`,
retries with mimeType. Costs a round-trip. Two possible fixes (relax
`_validate_spreadsheet_file` to skip when mimeType is `None` and let
Google's API surface the real error; OR extend
`get_auto_credentials_fields` metadata so CoPilot's tool description
prompts it to always include mimeType). Deliberately deferred — fixing
only one of "API caller sends a bare string" or "CoPilot sends an
incomplete object" risked the same auth-ambiguity the bare-string commit
in this branch history hit.
- **CoPilot agent-builder prompt/guide update.** The validator here
produces the correct error message, but the agent-builder model still
needs to learn to use `AgentGoogleDriveFileInputBlock` upfront rather
than discover it through validator retries. Separate handoff ticket
filed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Touches OAuth credential issuance/upgrade paths and introduces a new
endpoint that returns raw access tokens (scope-gated), plus broad
changes to execution-time credential resolution/validation; mistakes
could impact auth/security or break integrations.
> 
> **Overview**
> Fixes several Google/Drive agent-builder blockers by **supporting
incremental OAuth scope upgrades** and by hardening how
credential-bearing file inputs (“auto-credentials”) are validated,
resolved, and cleared on graph fork.
> 
> On the integrations API, `/{provider}/login` now accepts
`credential_id` and persists it in `OAuthState` to upgrade an existing
OAuth2 credential on callback (explicit upgrade), with an implicit merge
path for same `provider+username`. The callback path now merges
scopes/metadata, preserves ID/title, preserves existing
`refresh_token`/`username` when missing from incremental responses,
blocks upgrades for managed/system credentials, and adds a **new
`/{provider}/credentials/{cred_id}/picker-token` endpoint** to return a
short-lived access token for provider-hosted pickers (currently
allowlisted to Google Drive scopes).
> 
> For auto-credentials, `CredentialsFieldInfo` gains
`is_auto_credential` + `input_field_name`, graphs now expose
`regular_credentials_inputs` vs `auto_credentials_inputs`, and multiple
callers switch from `aggregate_credentials_inputs()` to
`regular_credentials_inputs` so embedded picker credentials aren’t
treated as user-mapped inputs. Execution-time auto-credential
acquisition is extracted into `_acquire_auto_credentials()` with clearer
error handling and lock cleanup; block execution adds a systemic guard
to surface a clean `Missing credentials` error when auto-credentials are
absent.
> 
> Separately fixes Google credential deletion by rewriting
`GoogleOAuthHandler.revoke_tokens()` to use the platform `Requests`
helper (bounded retries) instead of `AuthorizedSession`, and expands
test coverage across these flows (incremental OAuth, picker-token,
auto-credential validation/acquisition, graph validator, and frontend
diagnostics test stubs).
> 
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
cac36eae9f. Bugbot is set up for automated
code reviews on this repo. Configure
[here](https://www.cursor.com/dashboard/bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-23 17:16:30 +00:00
Zamil Majdy
80bfde1ca6 feat(blocks): charge Ayrshare per-post + align Bannerbear/Jina floors (#12893)
## Why

The cost-tracking audit on 2026-04-23 ([Platform System
Credentials](https://www.notion.so/auto-gpt/4d251f343fe146bcb91b6a037d1bfc3c))
surfaced three gaps where the user wallet was silently subsidising
third-party spend:

1. **Ayrshare (13 blocks)** — zero charge on every social post. No
`BLOCK_COSTS` entry, no SDK `.with_base_cost` registration. Platform
absorbs the entire ~$149/mo Business plan.
2. **Bannerbear** — flat 1 credit/call below the ~$0.025/image unit cost
on the Starter tier ($49/mo / 2K images).
3. **JinaChunkingBlock** — wallet-free; siblings (`JinaEmbeddingBlock`,
`SearchTheWebBlock`) are charged.

## What

- New `backend/blocks/ayrshare/_cost.py` with two-tier
`AYRSHARE_POST_COSTS` (5 credits when `is_video=True`, 2 credits
otherwise — first-match wins in `block_usage_cost`).
- All 13 `PostTo*Block` classes decorated with
`@cost(*AYRSHARE_POST_COSTS)`.
- `BannerbearTextOverlayBlock` floor: 1 → 3 credits in
`bannerbear/_config.py`.
- `JinaChunkingBlock` added to `BLOCK_COSTS` with a flat 1-credit floor.
- `cost(...)` decorator generic-ized via `TypeVar`, so pyright retains
`PostToXBlock.Input/Output` narrowing.

## How

Ayrshare uses a decorator-based registration (not a direct `BLOCK_COSTS`
entry) because each `post_to_*.py` block imports from `backend.sdk`, and
`backend.sdk.cost_integration` imports `BLOCK_COSTS` — listing the
blocks in `block_cost_config.py` would create a circular import. The
`@cost` decorator defined in `sdk/cost_integration.py` was already the
approved escape hatch for this exact shape.

`cost_filter` in `block_usage_cost` already supports boolean-field
matching (see Apollo's `enrich_info` tier), so `{"is_video": True}` and
`{"is_video": False}` select the right tier at execution time.
`is_video` defaults to `False` on `BaseAyrshareInput`, so posts that
omit the field still land on the 2-credit default.
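
A runnable stand-in for the tier selection, with `BlockCost` and
`block_usage_cost` reduced to local sketches (the real models live in the
platform's cost config and carry more fields):

```python
from dataclasses import dataclass, field

@dataclass
class BlockCost:  # stand-in for the platform's cost entry model
    cost_amount: int
    cost_filter: dict = field(default_factory=dict)

AYRSHARE_POST_COSTS = [
    # First match wins, so the pricier video tier is listed first.
    BlockCost(cost_amount=5, cost_filter={"is_video": True}),
    BlockCost(cost_amount=2, cost_filter={"is_video": False}),
]

def block_usage_cost(input_data: dict) -> int:
    """Stand-in for the first-match filter walk described above."""
    for entry in AYRSHARE_POST_COSTS:
        if all(input_data.get(k) == v for k, v in entry.cost_filter.items()):
            return entry.cost_amount
    return 0

# is_video defaults to False on BaseAyrshareInput, so an omitted field
# still lands on the 2-credit tier once the input model fills the default.
assert block_usage_cost({"is_video": True}) == 5
assert block_usage_cost({"is_video": False}) == 2
```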

## Test plan

- [x] `poetry run pytest backend/data/block_cost_config_test.py` — new
6-test suite covers Ayrshare video/non-video/default tiers, the
Bannerbear floor, and the Jina chunking floor
- [x] `poetry run pytest backend/executor/manager_cost_tracking_test.py`
— no regressions (45 pre-existing tests still pass)
- [x] `poetry run ruff format` + `poetry run isort` + `poetry run ruff
check --fix`
- [x] `poetry run pyright` on touched files — 0 errors, 0 warnings
(pre-existing `LlmModel.KIMI_K2_*` errors are on dev and unrelated)
- [ ] Manual: run an Ayrshare post through the builder and confirm 2cr
(text/image) vs 5cr (video) charge
2026-04-23 20:39:35 +07:00
Zamil Majdy
81d6e91f37 feat(platform/copilot): message timestamps + accurate thought-for time (#12890)
## Why

The "Thought for 1m 46s" label under assistant replies has been
misleading
because the backend persists the whole-turn wall clock (from turn start
to
stream end) — which includes tool execution, browser sessions, graph
runs,
etc. Users also had no way to see when a message was actually sent /
received.

## What

- **Per-message timestamps** — `ChatMessage.created_at` (already on the
  DB row) is now serialised through the pydantic model and the
  `SessionDetailResponse`, then plumbed into the UI. Hovering the
  "Thought for X" label now shows the absolute local date/time via a
  tooltip.
- **Accurate reasoning duration** — new `ChatMessage.reasoningDurationMs`
  column. Backend accumulates time between `reasoning-start` and
  `reasoning-end` SSE events inside `publish_chunk` (via the session meta
  hash). `mark_session_completed` reads the total and persists it
  alongside the existing `durationMs`. Frontend prefers
  `reasoning_duration_ms` when present, falls back to `duration_ms` for
  legacy rows.

## How

- `schema.prisma` gains `reasoningDurationMs Int?`; migration
  `20260423120000_add_reasoning_duration_ms` adds the column.
- `publish_chunk` gains a side-effect that writes `reasoning_started_at` /
  `reasoning_ms_total` into the existing per-session Redis meta hash when
  reasoning events pass through. No extra IO path, no extra Redis key (see
  the sketch after this list).
- `set_turn_duration` accepts an optional `reasoning_duration_ms` arg and
  patches both the DB row and the cached session in place, mirroring the
  existing behaviour for `duration_ms`.
- Frontend: `convertChatSessionMessagesToUiMessages` now returns
  `durations`, `reasoningDurations`, and `timestamps` maps. `TurnStatsBar`
  picks the best available value and wraps the label in the design-system
  `BaseTooltip` so hover reveals the local timestamp.
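
A sketch of the accumulation inside `publish_chunk`, assuming a
redis.asyncio-style client; only the `reasoning_started_at` /
`reasoning_ms_total` field names come from this PR, the helper itself is
illustrative:

```python
import time

async def track_reasoning_duration(redis, meta_key: str, event_type: str) -> None:
    """Accumulate reasoning time in the existing per-session meta hash."""
    now_ms = int(time.time() * 1000)
    if event_type == "reasoning-start":
        await redis.hset(meta_key, "reasoning_started_at", now_ms)
    elif event_type == "reasoning-end":
        started = await redis.hget(meta_key, "reasoning_started_at")
        if started is not None:
            # Add this reasoning phase to the running total; mark_session_completed
            # later reads the total and persists it as reasoningDurationMs.
            await redis.hincrby(meta_key, "reasoning_ms_total", now_ms - int(started))
            await redis.hdel(meta_key, "reasoning_started_at")
```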

## Test plan

- [x] `poetry run pytest
backend/copilot/db_test.py::test_set_turn_duration_*`
- [x] `poetry run pytest backend/copilot/stream_registry_test.py`
- [x] `pnpm format` / `pnpm lint` / `pnpm types` (copilot area)
- [x] `pnpm test:unit src/app/\(platform\)/copilot` — 705 tests pass (4
pre-existing `jszip` module resolution failures unrelated to this
change)
- [ ] Manual: open a session with a long tool run and confirm the new
"Thought for X" reflects only reasoning time (falls back for old rows)
      and the tooltip surfaces the local timestamp.
2026-04-23 18:55:34 +07:00
Zamil Majdy
39cdc0a5e0 fix(backend/copilot): tame Kimi compaction storm + tunable threshold + Langfuse cost backfill (#12889)
## Why

Investigation of two reported sessions
([85804387](https://dev-builder.agpt.co/copilot?sessionId=85804387-7708-4fdc-8ec9-64283cdd902d),
[19d69dec](https://dev-builder.agpt.co/copilot?sessionId=19d69dec-210f-4439-a94b-2d7d443b9909))
where Kimi K2.6 via OpenRouter was running ~30 min per turn with no
actions completed (Discord report from Toran). Langfuse traces showed:

- 31 generation calls per turn at p90 = 151s, max = 415s
- 2.57M uncached tokens, `cache_create=0`, ~4% cache_read — Moonshot's
OpenRouter endpoint silently drops Anthropic-style cache writes
- **3 SDK-internal compactions per turn** — each compaction is itself a
slow LLM round-trip
- Reconciled OpenRouter cost was being recorded to a DB row but never
surfaced on the Langfuse trace, leaving operators to grep pod logs

## What

Four commits, split by concern.

### 1. `fix(backend/copilot): skip CLAUDE_AUTOCOMPACT_PCT_OVERRIDE for
Moonshot/Kimi` (`5fd9c5aa`)

`env.py` was unconditionally setting
`CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=50` (introduced in #12747 to cap
cache-creation cost on Anthropic where context >200K = 54% of total
cost). On Kimi where `cache_create=0` silently, the cache-cost rationale
doesn't apply — but the 50% threshold still made the bundled CLI
auto-compact at ~100K tokens, triggering 3+ compactions per turn against
Kimi's larger effective window. Each compaction added a slow LLM
round-trip (one in our test ran 166s and burned the budget cap before
the user got any output).

Threads the resolved `sdk_model` (and `fallback_model`) into
`build_sdk_env` and skips the env var when the model matches
`is_moonshot_model(...)`. The CLI then uses its default ~93% threshold,
cutting compaction passes to 0–1.
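
A reduced sketch of the gate; the real logic lives in `build_sdk_env` and
consults `is_moonshot_model`, here collapsed into an inline prefix check:

```python
def autocompact_env(
    sdk_model: str, fallback_model: str | None, pct_override: int
) -> dict[str, str]:
    """Return the extra CLI env vars for the resolved model pair (sketch)."""
    # Inline stand-in for is_moonshot_model() from backend/copilot/moonshot.py.
    moonshot = any(
        m and m.startswith("moonshotai/") for m in (sdk_model, fallback_model)
    )
    if pct_override and not moonshot:
        # Anthropic routes keep the aggressive threshold to cap cache-creation
        # cost; Moonshot/Kimi routes omit it so the CLI uses its native ~93%.
        return {"CLAUDE_AUTOCOMPACT_PCT_OVERRIDE": str(pct_override)}
    return {}
```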

### 2. `feat(backend/copilot): backfill OpenRouter reconciled cost to
Langfuse trace` (`f3de3624` + follow-ups `5ce3d038`, `d2c1a2cd`,
`d8e08525`, `d243bf6c9`)

`record_turn_cost_from_openrouter` runs as a fire-and-forget task after
the OTel span closes, so the Langfuse trace UI showed the SDK CLI's
rate-card estimate only — for non-Anthropic OpenRouter routes that
estimate is Sonnet pricing on Kimi tokens (~5x too high).

The backfill captures `langfuse.get_current_trace_id()` and threads it
into the reconcile task, which emits an `openrouter-cost-reconcile`
child event with the authoritative cost + token usage. **Bug caught
during /pr-test:** `propagate_attributes` only annotates an existing
OTel span, it doesn't create one — by the time the `finally` block runs,
SDK-emitted spans have ended and `get_current_trace_id()` returns None.
Fixed in `d8e08525` by wrapping the turn in
`langfuse.start_as_current_span(name="copilot-sdk-turn")`. Also tags
fallback-path events with `cost_source` so operators can distinguish
reconciled vs estimated turns.

### 3. `feat(backend/copilot): expose CLAUDE_AUTOCOMPACT_PCT_OVERRIDE as
a config knob` (`72416f73`)

The previously-hardcoded `50` is now
`claude_agent_autocompact_pct_override` (default 50, env
`CHAT_CLAUDE_AGENT_AUTOCOMPACT_PCT_OVERRIDE`). Setting to 0 omits the
env var entirely so the CLI uses its native ~93% threshold — useful when
the post-compact floor (system prompt + tool defs ≈ 65–110K) sits close
to an aggressive trigger and operators see back-to-back compaction
cascades. Moonshot routes still skip the env var unconditionally
regardless of config.

### 4. `fix(backend/copilot): align SDK retry compaction target with CLI
autocompact threshold` (`730ad256`)

`_reduce_context` was calling `compact_transcript` without an explicit
`target_tokens`, so it fell back to `get_compression_target(model) =
context_window - 60K`. For Sonnet 200K that's 140K — well above the
CLI's PCT=50 trigger of 90K — and for Kimi 256K it's 196K, above the
CLI's default 167K trigger. Result: a successful retry compaction landed
at 140K/196K and the CLI immediately re-compacted on the next call →
**two compactions per recovered turn**.

New `_compaction_target_tokens(model)` mirrors the CLI's `i6_()` formula
(`min(window * pct/100, window - 13K)`) with a 20K safety buffer so the
post-compact context sits comfortably below the CLI's trigger.
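
An illustrative version of the target calculation; the 13K headroom and 20K
buffer are the figures quoted above, the constant names are not from the
real file:

```python
CLI_MIN_HEADROOM_TOKENS = 13_000   # the "window - 13K" term in the CLI formula
SAFETY_BUFFER_TOKENS = 20_000      # keeps the retry result below the CLI trigger

def compaction_target_tokens(context_window: int, autocompact_pct: int) -> int:
    # Mirror the CLI's trigger: min(window * pct/100, window - 13K) ...
    cli_trigger = min(
        context_window * autocompact_pct // 100,
        context_window - CLI_MIN_HEADROOM_TOKENS,
    )
    # ... then aim the retry compaction comfortably below it.
    return cli_trigger - SAFETY_BUFFER_TOKENS
```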

## How — empirical validation against the actual long Kimi transcript

Replayed the 199-message transcript from session 85804387 through the
bundled CLI in two configurations:

| | Post-fix (no override) | Pre-fix (`PCT_OVERRIDE=50`) |
|---|---|---|
| `autocompact: tokens=` | 126,312 | 126,341 |
| `threshold=` | **167,000** | **90,000** |
| Decision | 126K < 167K → **skip** | 126K > 90K → **COMPACTION FIRES** |
| Duration | 21s | **166s** (8x slower) |
| Cost | $0.34 | **$0.82** (2.4x more) |
| Output | PONG (success) | empty (hit $0.50 budget cap, exit 1) |

The pre-fix configuration burned $0.82 of compaction work over 166s and
never produced a user response — exactly the failure mode reported.

**Why cascade happens at 50%, not at 93%:** post-compaction context is
`summary (~5–10K) + system_prompt + tool_definitions + skills + active
TodoWrite + memory ≈ 65–110K floor`. With trigger at 90K, post-compact
floor sits AT or above the trigger → next assistant message tips over →
immediate re-compaction → cascade until the CLI's rapid-refill breaker
trips at 3 attempts. With trigger at 167K, the same floor sits
comfortably below trigger → no cascade.

## Considered but not done

- **Force `cache_control` markers to reach Moonshot**: bundled CLI sends
them by default; Moonshot silently drops them per their own docs (uses
`X-Msh-Context-Cache` headers, not body markers). Real fix needs
bypassing OpenRouter — out of scope.
- **Slim the system prompt + tool definitions** to lower the
post-compact floor: real win but separate refactor with tool-use
accuracy A/B.
- **LD-driven auto-fallback to Sonnet on Kimi degradation**:
`claude_agent_fallback_model` already wires `--fallback-model` for
overload (529); auto-flipping on slowness needs latency aggregation
infra that doesn't exist yet.

## Test plan

- [x] `poetry run pytest backend/copilot/sdk/env_test.py
backend/copilot/sdk/openrouter_cost_test.py
backend/copilot/sdk/service_helpers_test.py` — 111 passed (37 env + 23
cost + 51 helpers, including 6 new env tests, 3 backfill tests, 6 new
compaction-target tests)
- [x] `poetry run pytest backend/copilot/sdk/` — 970+ passed
- [x] `poetry run pyright .` — 0 errors
- [x] `poetry run format` — clean
- [x] /pr-test --fix end-to-end against dev — 5/5 scenarios PASS,
including Anthropic route ($0.0174 cost +0.0% delta) and Moonshot route
($0.028 vs $0.018 → +58.2% delta validates reconcile rationale)
- [x] Transcript replay validation: pre-fix vs post-fix on real
126K-token transcript → 8x slower / 2.4x more expensive / fails entirely
on pre-fix; clean PONG on post-fix
2026-04-23 18:46:35 +07:00
Zamil Majdy
4242da79f0 fix(backend/copilot): raise baseline tool-round limit to 100 + graceful finish hint (#12892)
## Why

On prod, longer copilot runs (complex feature implementations, multi-bug
fix chains) error out with `Exceeded 30 tool-call rounds without a final
response`, lose mid-stream assistant output, and the UI appears to
re-dispatch an older prompt. Reported by @itsababseh in #breakage for
session `661ba0cc-a905-4c66-bf11-61eb5423d775`.

Langfuse trace of that session shows 52 turns / 344 LLM calls; **two
turns hit exactly 30 rounds** (Turn 38: implementing kill-cam/headshot
juice pass; Turn 42: fixing multi-bug list). Both were legitimate,
non-looping work that simply needed more rounds to complete. Round 30
fired `bash_exec`, the loop cut off cold, no summary was ever produced,
and the stream surfaced `baseline_tool_round_limit`. Frontend
subsequently re-dispatched the same user message several times (turns
39–41 × 3, turns 43–47 × 5 with identical prompt), which is what the
user perceives as "falling back into acting on an older command."

Root cause: [`_MAX_TOOL_ROUNDS =
30`](https://github.com/Significant-Gravitas/AutoGPT/blob/cf6d7034f/autogpt_platform/backend/backend/copilot/baseline/service.py#L125)
has been unchanged since the baseline path was introduced (#12276).
Modern agent turns with Claude Code / Kimi / Sonnet routinely need more.

## What

- Raise `_MAX_TOOL_ROUNDS` from 30 → 100.
- Pass `last_iteration_message` to `tool_call_loop` so the final round
receives a "stop calling tools, wrap up" system hint. The model now
produces a graceful summary on the last round instead of being cut off
mid-tool.

## How

Two-line change in
[`backend/copilot/baseline/service.py`](https://github.com/Significant-Gravitas/AutoGPT/blob/fix/copilot-baseline-tool-round-limit/autogpt_platform/backend/backend/copilot/baseline/service.py):
- Bump the module-level constant.
- Define `_LAST_ITERATION_HINT` and wire it via the existing
`last_iteration_message` kwarg on
[`tool_call_loop`](https://github.com/Significant-Gravitas/AutoGPT/blob/cf6d7034f/autogpt_platform/backend/backend/util/tool_call_loop.py#L188).
The shared loop already handles appending it only on the final iteration
(see `tool_call_loop_test.py::test_last_iteration_message_appended`).
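
A minimal sketch of the two deltas; the constant names and the
`last_iteration_message` kwarg come from this PR, the hint wording below is
illustrative:

```python
_MAX_TOOL_ROUNDS = 100  # raised from 30

_LAST_ITERATION_HINT = (
    "This is your final tool round. Do not call any more tools; "
    "summarize what you completed and what still remains."
)

# tool_call_loop appends the hint only on the final iteration, so the model
# wraps up gracefully instead of being cut off mid-tool:
#
#     await tool_call_loop(..., last_iteration_message=_LAST_ITERATION_HINT)
```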

Frontend retry cascade on `baseline_tool_round_limit` is a separate UX
issue — logging it as a follow-up.

## Checklist

- [x] My code follows the project's style guidelines
- [x] I have performed a self-review
- [x] Existing `tool_call_loop_test.py` covers `last_iteration_message`
behavior (10/10 passing)
- [x] No new migrations
- [x] No breaking changes (constant/kwarg only)
2026-04-23 18:38:52 +07:00
Zamil Majdy
cf6d7034fa fix(backend/copilot): sync safety net for Redis-induced zombie sessions (#12886)
## Why

A 25-min-old copilot turn ended up a zombie in Redis (`status=running`
for 60+ min, queued user messages never drained) after a rolling deploy
of `autogpt-copilot-executor`. Root cause:

1. Cluster churn during the rollout broke a Redis call mid-turn.
2. `_execute_async`'s `finally` tried to publish the failure via
`mark_session_completed` on the same (now-broken) event loop +
thread-local Redis client.
3. That Redis call *also* failed; the exception was caught and logged
but never reached Redis — so the session meta stayed `running`.
4. `on_run_done` then completed the future normally, `active_tasks`
drained, the pod exited.
5. The zombie persisted until the 65-min stale-session watchdog reaped
it. While it was live, queued-message pushes succeeded (HTTP only checks
`status=running`), so the UI showed "Queued" bubbles that never drained.

## What

The fix is **one small addition** in the per-turn lifecycle:

### `sync_fail_close_session` — last line of defense in
`processor.execute`'s `finally`

Invoked from `CoPilotProcessor.execute()`'s `finally` on every turn
exit. Submits the CAS coroutine to the processor's long-lived
`self.execution_loop` via `asyncio.run_coroutine_threadsafe` — the same
pattern `ExecutionProcessor.on_graph_execution` uses at
[executor/manager.py:881-892](autogpt_platform/backend/backend/executor/manager.py#L881-L892)
to bridge sync→async through `node_execution_loop`.

- Calls `mark_session_completed(session_id,
error_message=SHUTDOWN_ERROR_MESSAGE)`, which is a CAS on `status ==
"running"`. If the async path already wrote a terminal state the CAS
no-ops; otherwise we mark `failed` and the UI transitions cleanly.
- Bounded by inner `asyncio.wait_for(timeout=10s)` and outer
`future.result(timeout=12s)` so a genuinely unreachable Redis can't hang
the safety net.
- Reuses the long-lived execution loop (no per-turn TCP connect, no
`@thread_cached` thrashing).

The outer `future.result()` in `_execute()` is bounded by
`_CANCEL_GRACE_SECONDS` (5s) so a wedged event loop can't trap the flow
before the safety net fires.
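
A hedged sketch of the safety net; `mark_session_completed` and the real
shutdown message live in the copilot modules, so they are injected / stubbed
here to keep the sketch self-contained:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)
SHUTDOWN_ERROR_MESSAGE = "Session interrupted during executor shutdown"  # stand-in text

def sync_fail_close_session(
    execution_loop: asyncio.AbstractEventLoop,
    session_id: str,
    mark_session_completed,  # async CAS: only flips sessions still marked "running"
) -> None:
    async def _fail_close() -> None:
        await asyncio.wait_for(
            mark_session_completed(session_id, error_message=SHUTDOWN_ERROR_MESSAGE),
            timeout=10,  # inner bound: an unreachable Redis can't hang the net
        )

    try:
        future = asyncio.run_coroutine_threadsafe(_fail_close(), execution_loop)
        future.result(timeout=12)  # outer bound, matching the numbers above
    except Exception:
        # Last line of defense: never let the safety net mask the original error.
        logger.exception("Fail-close could not mark session %s failed", session_id)
```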

### `cleanup()` stays aligned with agent-executor

Mirrors the pattern from `backend.executor.manager.cleanup` — a single
method that:

1. Flags + tells the broker to stop consuming.
2. Passively waits for `active_tasks` to drain (up to
`GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS`).
3. Worker / executor / lock teardown.

No pre-emptive cancellation of healthy turns, no fail-close step for
stuck turns. Same proven shape agent-executor uses.

### Timeout alignment

Raised both `COPILOT_CONSUMER_TIMEOUT_SECONDS` and
`GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS` to 6h so a rolling deploy can let
the longest legitimate turn finish via its own lifecycle path. Matched
in infra at `terminationGracePeriodSeconds: 21600`
(Significant-Gravitas/AutoGPT_cloud_infrastructure#311).

### RabbitMQ policy — deploy prep

The `x-consumer-timeout` queue argument is changing from 1h → 6h. Tested
empirically on dev's RabbitMQ 4.1.4: `queue_declare` is tolerant of
`x-consumer-timeout` mismatches, so no queue delete is needed. To make
the new timeout **immediately effective for running consumers** (so pods
mid-shutdown don't have their consumer cancelled at the old 1h limit),
apply a policy before deploying:

```bash
rabbitmqctl set_policy copilot-consumer-timeout \
  "^copilot_execution_queue$" \
  '{"consumer-timeout": 21600000}' \
  --apply-to queues
```

Already applied on dev. Apply on prod before the PR's prod deploy.

### Incidental rename

- `_clear_pending_messages_unsafe` → `clear_pending_messages_unsafe`
(keeps the `_unsafe` warning suffix; importable without the
leading-underscore private marker).

## How

Before: transient Redis failure → async finally silently fails → zombie
session → queued messages never drain.
After: transient Redis failure → `execute()`'s sync finally runs
`mark_session_completed` on the processor's long-lived loop → session
correctly marked failed → UI sees terminal state immediately.

SIGTERM path unchanged from the "let in-flight work finish" design: old
pod stops taking new work, existing turns complete naturally.

## Test plan

- [x] `TestSyncFailCloseSession` unit tests — invokes
`mark_session_completed` with the shutdown error, swallows Redis
failures, bounded timeout fires when Redis hangs.
- [x] `TestExecuteSafetyNet` — verifies the `finally` always fires,
including SIGTERM-interrupted and zombie-Redis scenarios.
- [x] Existing `TestExecuteAsyncAclose` + pending_messages tests still
pass (18 passed).
- [x] `pyright` on touched files: 0 errors.
- [x] Manual E2E on native dev stack: sent a `sleep 300 && echo hewwo`
task, SIGTERMed mid-turn at +40s, observed:
   - `[CoPilotExecutor] [cleanup N] Starting graceful shutdown...`
   - Drain-wait ran for ~4.5 min ("1 tasks still active, waiting...")
- Turn finished with `result=Done! The command finished after 5 minutes
and printed: hewwo`
   - `Cleaned up completed session` → `Graceful shutdown completed`
   - No zombie.
- [x] `poetry run format` applied.
- [x] RabbitMQ policy verified on dev. Apply on prod before prod deploy.
- [ ] Verified behavior on next production rolling deploy.
2026-04-23 06:49:06 +07:00
Zamil Majdy
c56c1e5dd6 fix(backend/copilot): disable ask_question tool pending UX rework (#12887)
### Why / What / How

**Why:** The in-conversation Question GUI is unreliable in production —
users submitting answers can get their messages dropped and the agent
gets stuck on the auto-generated "please proceed" step with no way to
make progress. Discord report:
https://discord.com/channels/1126875755960336515/1496474512966029472/1496537943287005365
(see attached video). Pause/queue semantics still need a rework; until
then, the right call is to stop the model from reaching for this tool.

**What:** Removes `ask_question` from the copilot tool registry so the
model never sees or calls it. Historical sessions that already contain
`ask_question` tool calls still render (frontend renderers + response
model untouched), so this is non-destructive to existing chats.
Re-enabling once UX is reworked is a small revert.

**How:**
- Drop the `AskQuestionTool` import + registry entry from
`backend/copilot/tools/__init__.py`.
- Drop `"ask_question"` from the `ToolName` literal in
`backend/copilot/permissions.py` — required because a runtime
consistency check asserts the literal matches `TOOL_REGISTRY.keys()`.
- Delete the "Clarifying — Before or During Building" section from
`backend/copilot/sdk/agent_generation_guide.md` so the SDK-mode system
prompt no longer instructs the model to call `ask_question`.
- Drop the three `prompting_test.py` tests that asserted the guide
mentions that section.
- Keep `ask_question.py`, its unit test, `ClarificationNeededResponse`,
and the frontend `AskQuestion`/`ClarificationQuestionsCard` components
untouched so old sessions still render and re-enabling is a small
revert.

### Changes 🏗️

- `backend/copilot/tools/__init__.py` — remove `AskQuestionTool` import
and `"ask_question"` entry in `TOOL_REGISTRY`.
- `backend/copilot/permissions.py` — remove `"ask_question"` from the
`ToolName` literal.
- `backend/copilot/sdk/agent_generation_guide.md` — remove the
"Clarifying — Before or During Building" section.
- `backend/copilot/prompting_test.py` — remove
`TestAgentGenerationGuideContainsClarifySection` and the now-unused
`Path` import.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] `poetry run pytest backend/copilot/tools/
backend/copilot/permissions_test.py backend/copilot/prompting_test.py` —
805+78 tests pass, consistency check between `ToolName` literal and
`TOOL_REGISTRY` still holds.
- [ ] Smoke-test in dev: start a copilot session and confirm the model
no longer lists/calls `ask_question` (its OpenAI tool schema is gone
from `get_available_tools()` and from the SDK `allowed_tools`).
- [ ] Load a historical session that contains an `ask_question` tool
call in its transcript — confirm the frontend still renders the question
card (no regression on legacy sessions).
2026-04-22 23:34:04 +07:00
Bentlybro
6fcbe95645 Merge branch 'master' into dev 2026-04-22 15:36:37 +01:00
Zamil Majdy
9703da3dfd refactor(backend/copilot): Moonshot module + cache_control widening + partial-messages default-on + title cost (#12882)
## Why

Several loose ends from the Kimi SDK-default merge (#12878), plus
follow-ups surfaced during review + E2E testing:

1. **Kimi-specific pricing lived inline in `sdk/service.py`** alongside
unrelated SDK plumbing — any future non-Anthropic vendor would have
piled onto the same file.
2. **Moonshot's Anthropic-compat endpoint honours `cache_control: {type:
ephemeral}`**, but the baseline cache-marking gate
(`_is_anthropic_model`) was narrow enough to exclude it → Moonshot fell
back to automatic prefix caching, which drifts readily between turns.
3. **Kimi reasoning rendered AFTER the answer text** on dev because the
summary-walk hoist only reorders within one `AssistantMessage.content`
list, and Moonshot splits each turn into multiple sequential
AssistantMessages (text-only, then thinking-only).
4. **Title generation's LLM call bypassed cost tracking** — admin
dashboard under-reported total provider spend by the aggregate of those
per-session calls.
5. **Cost override** was using the requested primary model, not the
actually-executed model — when the SDK fallback activates the override
mis-routes pricing.

## What

### Moonshot module
New `backend/copilot/moonshot.py`:
- `is_moonshot_model(model)` — prefix check against `moonshotai/`
- `rate_card_usd(model)` — published Moonshot rates, default `(0.60,
2.80)` per MTok with per-slug override slot
- `override_cost_usd(...)` — moved from `sdk/service.py`, replaces CLI's
Sonnet-rate estimate with real rate card
- `moonshot_supports_cache_control(model)` — narrow gate for cache
markers

Rate card is **not canonical** — authoritative cost comes from the
OpenRouter `/generation` reconcile; this module only improves the
in-turn estimate and the reconcile's lookup-fail fallback. Signal
authority: reconcile >> rate card >> CLI.
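
A hedged sketch of the module surface; the function names and default rate
come from the description above, exact signatures and per-slug overrides in
the real file may differ:

```python
MOONSHOT_PREFIX = "moonshotai/"
_DEFAULT_RATE_USD_PER_MTOK = (0.60, 2.80)  # (input, output)

def is_moonshot_model(model: str | None) -> bool:
    return bool(model) and model.startswith(MOONSHOT_PREFIX)

def rate_card_usd(model: str) -> tuple[float, float]:
    # A per-slug override slot would live here; the default covers future Kimi SKUs.
    return _DEFAULT_RATE_USD_PER_MTOK

def override_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    # Replaces the CLI's Sonnet-rate estimate; the authoritative number still
    # comes from the OpenRouter /generation reconcile (reconcile >> rate card >> CLI).
    in_rate, out_rate = rate_card_usd(model)
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```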

### Baseline cache-control widened to Moonshot
- New `_supports_prompt_cache_markers` = `_is_anthropic_model OR
is_moonshot_model`
- Both call sites (system-message cache dict, last-tool cache marker)
switched to the wider gate
- OpenAI / Grok / Gemini still return `false` — those endpoints 400 on
the unknown field

**Measured impact in /pr-test:** baseline Kimi continuation turns jumped
to ~98% cache hit (334 uncached + 12.8K cache_read on a 13.1K prompt).

### SDK partial-messages default-on (fixes the reasoning-order bug)
- `CHAT_SDK_INCLUDE_PARTIAL_MESSAGES` flipped from `default=False` →
`default=True`
- Kimi stream now emits `reasoning-start → reasoning-delta* →
reasoning-end → text-start → text-delta*` in the correct order —
verified in /pr-test
- Kill-switch: set `CHAT_SDK_INCLUDE_PARTIAL_MESSAGES=false` to fall
back to summary-only emission

### SDK cost override scoped to Moonshot
- Call site now explicitly gates `if _is_moonshot_model(active_model)` —
Anthropic turns trust CLI's number directly
- Added `_RetryState.observed_model` populated from
`AssistantMessage.model`, preferred over `state.options.model` so
fallback-model turns bill correctly (addresses CodeRabbit review)

### Title cost capture
- `_generate_session_title` now returns `(title, ChatCompletion)` so the
caller controls cost persistence
- `_update_title_async` runs title-persist and cost-record as
independent best-effort steps
- `_title_usage_from_response` helper reads `prompt_tokens /
completion_tokens / cost_usd` (OR's `usage.cost` off `model_extra`)
- Provider label derived from `ChatConfig.base_url` (`open_router` /
`openai`)
- No exception suppressors — `isinstance(cost_raw, (int, float))` check
replaces the inner `float()` try/except

### Misc
- Kimi tool-name whitespace strip in the response adapter — Kimi
occasionally emits tool names with leading spaces the CLI dispatcher
can't resolve
- TODO marker on the rate-card for post-prod-soak removal

## How

- Detection is **prefix-based** (`moonshotai/`) — future Kimi SKUs
transparently inherit rate card + cache-control gate
- Baseline cache-marking was already structured; only the gate changes
- Partial-messages default-on relies on the adapter's diff-based
reconcile (shipped in #12878) which has soaked stable
- Title cost path mirrors `tools/web_search.py`'s pattern for reading
OR's `usage.cost`

## Test plan

- [x] `pytest backend/copilot/moonshot_test.py` — 21 tests
- [x] `pytest backend/copilot/baseline/service_unit_test.py` — updated
for widened gate
- [x] `pytest backend/copilot/sdk/*_test.py
backend/copilot/service_test.py` — no regressions
- [x] Full E2E on local native stack — 10/10 scenarios pass (see
test-report comment)
- [x] Measured: baseline Kimi ~98% cache hit on continuation, SDK Kimi
~62% (capped by Moonshot's prefix ceiling)

## Deferred

SDK-path Moonshot cache hit rate stays at ~62% on long prompts.
`native_tokens_cached=18432` regardless of turn/session suggests a
Moonshot-side cap on cached prefix size. Not fixable by our code —
requires proxy rewriting requests or upstream Moonshot change.
2026-04-22 20:42:47 +07:00
Zamil Majdy
ebb0d3b95b feat(backend/copilot): LaunchDarkly per-user model routing (#12881)
## Summary

Per-user model routing for the copilot via LaunchDarkly. Replaces the
pure-env-var pick on every `(mode, tier)` cell of the model matrix with
an LD-first resolver that falls back to the `ChatConfig` default. Lets
us roll out non-default routes (e.g. Kimi K2.6 on baseline standard) to
a user cohort without shipping a deploy.

| | standard | advanced |
|----------|-----------------------------------|-----------------------------------|
| fast | `copilot-fast-standard-model` | `copilot-fast-advanced-model` |
| thinking | `copilot-thinking-standard-model` | `copilot-thinking-advanced-model` |

All four flags are **string-valued** — the value IS the model identifier
(e.g. `"anthropic/claude-sonnet-4-6"` or `"moonshotai/kimi-k2.6"`).

## What ships

- **New module `backend/copilot/model_router.py`** with a single
`resolve_model(mode, tier, user_id, *, config)` coroutine. That's the
one place both paths consult.
- **4 new `Flag` enum values** in `backend/util/feature_flag.py`
(reusing the existing `get_feature_flag_value` helper which already
supports arbitrary return types).
- **`baseline/service.py::_resolve_baseline_model`** → async, takes
`user_id`.
- **`sdk/service.py::_resolve_sdk_model_for_request`** → takes
`user_id`, consults LD for both standard and advanced thinking cells.
- **Default flip**: `fast_standard_model` default goes back to
`anthropic/claude-sonnet-4-6`. Non-Anthropic routes now ship via LD
targeting — safer rollback, per-user cohort control, no redeploy
required to flip.
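
A sketch of the resolver shape; the flag keys and the LD-first /
config-fallback behaviour come from this PR, while the lookup is injected
here (the real code calls `get_feature_flag_value`) and `_config_default` is
a stand-in for reading the matching `ChatConfig` field:

```python
import logging

logger = logging.getLogger(__name__)

_FLAG_BY_CELL = {
    ("fast", "standard"): "copilot-fast-standard-model",
    ("fast", "advanced"): "copilot-fast-advanced-model",
    ("thinking", "standard"): "copilot-thinking-standard-model",
    ("thinking", "advanced"): "copilot-thinking-advanced-model",
}

def _config_default(mode: str, tier: str, config) -> str:
    # Stand-in: the real helper maps the cell to its ChatConfig field.
    return getattr(config, f"{mode}_{tier}_model")

async def resolve_model(mode: str, tier: str, user_id: str | None, *, config, ld_lookup) -> str:
    default = _config_default(mode, tier, config)
    if not user_id:
        return default
    try:
        value = await ld_lookup(_FLAG_BY_CELL[(mode, tier)], user_id)
    except Exception:
        logger.warning("LD flag lookup failed; falling back to %s", default)
        return default
    # Non-string or empty flag values fall back silently to the config default.
    if isinstance(value, str) and value.strip():
        return value.strip()
    return default
```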

## Behavior preserved

- `config.claude_agent_model` explicit override still wins
unconditionally (existing escape hatch for ops).
- `use_claude_code_subscription=true` on the standard thinking tier
still returns `None` so the CLI picks the model tied to the user's
Claude Code subscription.
- All legacy env var aliases (`CHAT_MODEL`, `CHAT_ADVANCED_MODEL`,
`CHAT_FAST_MODEL`) still bind to their cells.
- LD client exceptions / misconfigured (non-string) flag values fall
back silently to config default with a single warning log — never fails
the request.

## Files

| File | Change |
|---|---|
| `backend/copilot/model_router.py` | new — `resolve_model` + `_config_default` + `_FLAG_BY_CELL` map |
| `backend/copilot/model_router_test.py` | new — 11 cases |
| `backend/util/feature_flag.py` | add 4 string-valued `Flag` entries |
| `backend/copilot/config.py` | flip `fast_standard_model` default to Sonnet |
| `backend/copilot/baseline/service.py` | `_resolve_baseline_model` → async + LD resolver |
| `backend/copilot/sdk/service.py` | `_resolve_sdk_model_for_request` → LD resolver + user_id |
| `backend/copilot/baseline/transcript_integration_test.py` | update tests for new signature + default |

## Test plan

- [x] `poetry run pytest backend/copilot/model_router_test.py
backend/copilot/baseline/transcript_integration_test.py
backend/copilot/sdk/service_test.py backend/copilot/config_test.py` —
**112 passing**
- [x] 11 resolver cases: missing user → fallback, LD string wins,
whitespace stripped, non-string value → fallback, empty string →
fallback, LD exception → fallback + warn, each of 4 cells routes to its
distinct flag
- [x] Legacy env aliases still bind to their new fields
- [ ] Manual dev-env smoke: flip `copilot-fast-standard-model` LD
targeting to `moonshotai/kimi-k2.6` for one user and confirm baseline
uses Kimi while other users stay on Sonnet
- [ ] Confirm SDK path still honors subscription mode (LD not consulted
when `use_claude_code_subscription=true` + standard tier)

## Rollout

1. Merge this PR → default stays Sonnet / Opus across the matrix, no
behavior change.
2. Create the 4 LD flags as string-typed in the LaunchDarkly console
(defaults matching config, so no drift if targeting empty).
3. Add per-user / per-cohort targeting in LD for the routes we want to
roll out (Kimi on baseline standard for a percentage, etc.).
2026-04-22 20:08:37 +07:00
Zamil Majdy
b98bcf31c8 feat(backend/copilot): SDK fast tier defaults to Kimi K2.6 via OpenRouter + vendor-aware cost + cross-model fix (#12878)
## Summary

Make Kimi K2.6 the default for the SDK (extended-thinking) copilot path,
mirroring the baseline default landed in #12871. The SDK already routes
through OpenRouter (see
[`build_sdk_env`](autogpt_platform/backend/backend/copilot/sdk/env.py) —
`ANTHROPIC_BASE_URL` is set to OpenRouter's Anthropic-compatible
`/v1/messages` endpoint), but the model resolver was unconditionally
stripping the vendor prefix, which prevented routing to anything except
Anthropic models. This PR unblocks Kimi (and any other non-Anthropic
OpenRouter vendor) on the SDK fast tier and flips the default to match
the baseline path.

## Why

After #12871 the baseline (`fast_*`) path runs Kimi K2.6 by default —
~5x cheaper than Sonnet at SWE-Bench parity — but the SDK (`thinking_*`)
path was still pinned to Sonnet because:

1. **Model name normalization stripped the vendor prefix.**
`_normalize_model_name("moonshotai/kimi-k2.6")` returned `"kimi-k2.6"`,
which OpenRouter cannot route — the unprefixed form only resolves for
Anthropic models. The docstring on `thinking_standard_model` claimed
"the Claude Agent SDK CLI only speaks to Anthropic endpoints", but the
env builder shows the CLI happily talks to OpenRouter's `/messages`
endpoint, which routes to any vendor in the catalog.
2. **The default was `anthropic/claude-sonnet-4-6`.** Same model on a
more expensive route.
3. **Cost label was hardcoded to `provider="anthropic"`** on the SDK
path's `persist_and_record_usage` call, making cost-analytics rows
misleading once Kimi runs.

## What

1. **`_normalize_model_name`**
([sdk/service.py](autogpt_platform/backend/backend/copilot/sdk/service.py))
— when `config.openrouter_active` is True, the canonical `vendor/model`
slug is preserved unchanged so OpenRouter can route to the correct
provider. Direct-Anthropic mode keeps the existing strip-prefix +
dot-to-hyphen conversion (Anthropic API requires both) and now **raises
`ValueError`** when paired with a non-Anthropic vendor slug — silent
strip would have sent `kimi-k2.6` to the Anthropic API and produced an
opaque `model_not_found`.
2. **`thinking_standard_model`**
([config.py](autogpt_platform/backend/backend/copilot/config.py)) —
default flipped from `anthropic/claude-sonnet-4-6` to
`moonshotai/kimi-k2.6`. Field description rewritten; rollback to Sonnet
is one env var
(`CHAT_THINKING_STANDARD_MODEL=anthropic/claude-sonnet-4.6`).
3. **`@model_validator(mode="after")`** on `ChatConfig`
([config.py:_validate_sdk_model_vendor_compatibility](autogpt_platform/backend/backend/copilot/config.py))
— fail at config load when `use_openrouter=False` is paired with a
non-Anthropic SDK slug. The runtime guard in `_normalize_model_name` is
kept as defence-in-depth, but the validator turns a per-request 500 into
a boot-time error message the operator sees once, before any traffic
lands. Covers `thinking_standard_model`, `thinking_advanced_model`, and
`claude_agent_fallback_model`. Subscription mode is exempt (resolver
returns `None` and never normalizes). The credential-missing case
(`use_openrouter=True` + no `api_key`) is intentionally NOT a boot-time
error so CI builds and OpenAPI-schema export jobs that construct
`ChatConfig()` without secrets keep working — the runtime guard still
catches it on the first SDK turn.
4. **Cost provider attribution**
([sdk/service.py:stream_chat_completion_sdk](autogpt_platform/backend/backend/copilot/sdk/service.py))
— `persist_and_record_usage` now passes `provider="open_router" if
config.openrouter_active else "anthropic"` instead of hardcoded
`"anthropic"`. The dollar value still comes from
`ResultMessage.total_cost_usd`; this just fixes the analytics label.
5. **Baseline rollback example** ([config.py:fast_standard_model
description](autogpt_platform/backend/backend/copilot/config.py)) — same
dot-vs-hyphen footgun fix (CodeRabbit catch).
6. **Tests** — `TestNormalizeModelName` (sdk/) monkeypatches a
deterministic config per case (the helper-test variants were passing
accidentally based on ambient env). New
`TestSdkModelVendorCompatibility` class in `config_test.py` covers all
five validator shapes (default-Kimi + direct-Anthropic raises, anthropic
override succeeds, openrouter mode succeeds, subscription mode skips
check, advanced+fallback tier also validated, empty fallback skipped).
`_ENV_VARS_TO_CLEAR` extended to all model/SDK/subscription env aliases
so a leftover dev `.env` value can't mask validator behaviour. New
`_make_direct_safe_config` helper for direct-Anthropic tests.
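
A sketch of the normalization rule from item 1; the error message and
keyword-only flag are illustrative, the branch behaviour is as described
above:

```python
def normalize_model_name(model: str, *, openrouter_active: bool) -> str:
    if openrouter_active:
        # OpenRouter routes the canonical vendor/model slug as-is.
        return model
    vendor, sep, short_name = model.partition("/")
    if sep and vendor != "anthropic":
        # Silently stripping would send e.g. "kimi-k2.6" to the Anthropic API
        # and surface an opaque model_not_found; fail loudly instead.
        raise ValueError(f"Direct Anthropic mode cannot route {model!r}")
    # The Anthropic API wants the unprefixed name with dots converted to hyphens.
    return (short_name if sep else model).replace(".", "-")
```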

## Test plan

- [x] `poetry run pytest backend/copilot/config_test.py
backend/copilot/sdk/service_test.py
backend/copilot/sdk/service_helpers_test.py
backend/copilot/sdk/env_test.py
backend/copilot/sdk/p0_guardrails_test.py` — 238 pass
- [x] `poetry run pytest backend/copilot/` — 2560 pass + 5 pre-existing
integration failures (need real API keys / browser env, unrelated)
- [x] CI green on `feat/copilot-sdk-kimi-default` (35 pass / 0 fail / 1
neutral)
- [x] Manual: SDK extended_thinking turn against Kimi K2.6 via
OpenRouter on the native dev stack — request lands with
`model=moonshotai/kimi-k2.6`, response streams back, multi-turn
`--resume` recalls facts across turns. Backend log: `[SDK] Per-request
model override: standard (moonshotai/kimi-k2.6)`.
- [x] Manual: rollback path —
`CHAT_THINKING_STANDARD_MODEL=anthropic/claude-sonnet-4.6` resumes
Sonnet routing.

## Known follow-ups (not in this PR)

These surfaced during manual testing and will need separate PRs:

- **SDK CLI cost is wrong for non-Anthropic models.**
`ResultMessage.total_cost_usd` comes from a static Anthropic pricing
table baked into the CLI binary; for Kimi K2.6 it falls back to Sonnet
rates, **over-billing ~5x** ($0.089 vs the real ~$0.018 for ~30K prompt
+ ~80 completion). The `provider` label is now correct but the dollar
value isn't. Needs either a per-model rate card override on our side or
a CLI patch upstream.
- **Mid-session model switch (Kimi → Opus) breaks.** Kimi's
`ThinkingBlock`s have no Anthropic `signature` field; when the user
toggles standard → advanced after a Kimi turn, Opus rejects the replayed
transcript with `Invalid signature in thinking block`. Needs transcript
scrubbing on model switch (similar to existing
`TestStripStaleThinkingBlocks` pattern).
- **Reasoning UI ordering on Kimi.** Moonshot/OpenRouter places
`reasoning` AFTER text in the response; the SDK's
`AssistantMessage.content` reflects that order, and `response_adapter`
emits SSE events in the same order — so reasoning lands BELOW the answer
in the UI instead of above. Needs `ThinkingBlock` hoisting in
`response_adapter.py`.
2026-04-22 18:35:01 +07:00
Zamil Majdy
4f11867d92 feat(backend/copilot): TodoWrite for baseline copilot (#12879)
## Summary

Add `TodoWrite` to baseline copilot so the "task checklist" UI works on
non-Claude models (Kimi, GPT, Grok, etc.) the same way it works on the
SDK path. Baseline previously had no `TodoWrite` tool at all — only SDK
mode did via the Claude Code CLI's built-in — so models on baseline just
couldn't reach for a planning checklist.

This closes the last clear feature gap blocking baseline from being the
primary copilot path without giving up model flexibility.

## What ships

- **New MCP tool `TodoWrite`** in `TOOL_REGISTRY`, schema matching the
one the frontend's `GenericTool.helpers.ts` (`getToolCategory → "todo"`)
already renders as the **Steps** accordion. The tool is a stateless echo
— the canonical list lives in the model's latest tool-call args and
replays from transcript on subsequent turns.
- **Prompt guidance** in `SHARED_TOOL_NOTES` teaching the model when to
use it (3+ step tasks; always send the full list; exactly one
`in_progress` at a time).
- **Sharpened `run_sub_session` guidance** in the same prompt section —
framed explicitly as the context-isolation primitive for baseline.
Clearer for the model, no dual-primitive confusion.

## How the SDK path stays untouched

- SDK mode keeps using the CLI-native `TodoWrite` built-in.
- `BASELINE_ONLY_MCP_TOOLS = {"TodoWrite"}` in `sdk/tool_adapter.py`
filters the baseline MCP wrapper out of SDK's `allowed_tools` — no name
shadowing.
- `SDK_BUILTIN_TOOL_NAMES` is now an explicit allowlist (not
auto-derived from capitalization) so the classification stays coherent
when a capitalized tool is platform-owned.

## Files

| File | Change |
|---|---|
| `backend/copilot/tools/todo_write.py` | new — `TodoWriteTool` |
| `backend/copilot/tools/__init__.py` | register in `TOOL_REGISTRY` |
| `backend/copilot/tools/models.py` | add `TodoItem` +
`TodoWriteResponse` + `ResponseType.TODO_WRITE` |
| `backend/copilot/permissions.py` | explicit `SDK_BUILTIN_TOOL_NAMES`;
`apply_tool_permissions` maps baseline-only tools to CLI name for SDK |
| `backend/copilot/sdk/tool_adapter.py` | `BASELINE_ONLY_MCP_TOOLS`
filter |
| `backend/copilot/prompting.py` | `TodoWrite` + sharpened
`run_sub_session` guidance |
| `backend/api/features/chat/routes.py` | add `TodoWriteResponse` to
`ToolResponseUnion` |
| `backend/copilot/tools/todo_write_test.py` | new — schema + execute
tests |
| `frontend/src/app/api/openapi.json` | regenerated |
| `tools/tool_schema_test.py` | budget bumped `32_800 → 34_000` (actual
33_865, +1_065 headroom) |

## Test plan

- [x] `poetry run pytest backend/copilot/
backend/api/features/chat/routes_test.py` — **1010 passing**
- [x] Tool schema char budget regression gate passes
- [x] `_assert_tool_names_consistent` passes
- [x] **E2E on local native stack (Kimi K2.6 via OpenRouter,
`CHAT_USE_CLAUDE_AGENT_SDK=false`)**: baseline called `TodoWrite` on a
3-step prompt, SSE stream carried the exact `{content, activeForm,
status}` shape the UI expects, "Steps" dialog renders `Task list — 0/3
completed` with all three items (see test-report comment below).
- [x] Negative cases covered: two `in_progress` → rejected, missing
`activeForm` → rejected, non-list `todos` → rejected.
2026-04-22 17:28:15 +07:00
Zamil Majdy
33a608ec78 feat(platform/copilot): live baseline streaming + render flag + Sonar web_search + simulator cost tracking + reconnect fixes (#12873)
### Why / What / How

**Why.** Several problems on the baseline copilot path compound:

- Extended-thinking turns froze the UI for minutes because Kimi K2.6
events were buffered in `state.pending_events: list` until the full
`tool_call_loop` iteration finished (reasoning arrived in one lump at
the end).
- The SSE stream replayed 1000 events on every reconnect, and the
frontend opened multiple SSE streams in quick succession on tab-focus
thrash (reconnect storm → UI flickers, tab freezes).
- The `web_search` tool hit Anthropic's server-side beta directly via a
dispatch-model round-trip that fed entire page contents back through the
model for a second inference pass (observed $0.072 on a 74K-token call).
- The simulator dry-run path ran on Gemini Flash without any cost
tracking at all, so every dry-run was free on the platform's microdollar
ledger.

**What.** Grouped deltas, all targeting reliability, cost, and UX of the
copilot live-answer pipeline:

- **Live per-token baseline streaming.** `state.pending_events` is now
an `asyncio.Queue` drained concurrently by the outer async generator.
The tool-call loop runs as a background task; reasoning / text / tool
events reach the SSE wire during the upstream OpenRouter stream, not
after it. `None` is the close sentinel; inner-task exceptions are
re-raised via `await loop_task` once the sentinel arrives. An
`emitted_events: list` mirror preserves post-hoc test inspection.
Coalescing widened 32/40 → 64/50 ms to halve the React re-render rate on
extended-thinking turns while staying under the ~100 ms perceptual
threshold.
- **Reasoning render flag** — `ChatConfig.render_reasoning_in_ui: bool =
True` wired through both `BaselineReasoningEmitter` and
`SDKResponseAdapter`. When False the wire `StreamReasoning*` events are
suppressed while the persisted `ChatMessage(role='reasoning')` rows
always survive (decoupled from the render flag so audit/replay is
unaffected); the service-layer yield filter does the gating. Tokens are
still billed upstream; operator kill-switch for UI-level flicker
investigations.
- **Reconnect storm mitigations** — `ChatConfig.stream_replay_count: int
= 200` (was hard-coded 1000) caps `stream_registry.subscribe_to_session`
XREAD size. Frontend `useCopilotStream::handleReconnect` adds a 1500 ms
debounce via `lastReconnectResumeAtRef`, so tab-focus thrash doesn't fan
out into 5–6 parallel replays in the same second.
- **web_search rewritten to Perplexity Sonar via OpenRouter** — single
unified credential, real `usage.cost` flows through
`persist_and_record_usage(provider='open_router')`. Two tiers via a
`deep` param: `perplexity/sonar` (~$0.005/call quick) and
`perplexity/sonar-deep-research` (~$0.50–$1.30/call multi-step
research). Replaces the Anthropic-native + server-tool dispatches; drops
the hardcoded pricing constants entirely.
- **Synthesised answer surfaced end-to-end** — Sonar already writes a
web-grounded answer on the same call we pay for; the new
`WebSearchResponse.answer` field passes it through and the accordion UI
renders it above citations so the agent doesn't re-fetch URLs that are
usually bot-protected anyway.
- **Deep-tier cost warning + UI affordances** — `deep` param description
is explicit that it's ~100× pricier; UI labels read "Researching /
Researched / N research sources" when `deep=true` so users know what's
running.
- **Simulator cost tracking + cheaper default** —
`google/gemini-2.5-flash` → `google/gemini-2.5-flash-lite` (3× cheaper
tokens) and every dry-run now hits
`persist_and_record_usage(provider='open_router')` with real
`usage.cost`. Previously each sim was free against the user's
microdollar budget.
- **Typed access everywhere** — cost extractors now use
`openai.types.CompletionUsage.model_extra["cost"]` and
`openai.types.chat.ChatCompletion` / `Annotation` /
`AnnotationURLCitation` with no `getattr` / duck typing. Mirrors the
baseline service's `_extract_usage_cost` pattern; keep in sync.
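
A minimal sketch of the queue-drain pattern from the first bullet, with
generic names; `run_tool_loop` is expected to `put` events and a final
`None` onto the queue, and the real `_BaselineStreamState` carries more
fields than shown here:

```python
import asyncio
from collections.abc import AsyncIterator

async def stream_events(run_tool_loop, events: asyncio.Queue) -> AsyncIterator[object]:
    # Run the tool-call loop in the background so its events hit the SSE wire
    # while the upstream OpenRouter stream is still open.
    loop_task = asyncio.create_task(run_tool_loop())
    while True:
        event = await events.get()
        if event is None:   # close sentinel pushed by the loop when it finishes
            break
        yield event
    await loop_task         # re-raise any inner-task exception after the sentinel
```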

**How.** Key file touches:

1. `copilot/config.py` — `render_reasoning_in_ui`,
`stream_replay_count`, `simulation_model` default.
2. `copilot/baseline/service.py` — `_BaselineStreamState.pending_events:
asyncio.Queue`, `_emit` / `_emit_all` helpers, outer generator runs
`tool_call_loop` as a background task + yields from queue concurrently.
3. `copilot/baseline/reasoning.py` —
`BaselineReasoningEmitter(render_in_ui=...)`, coalescing bumped to 64
chars / 50 ms.
4. `copilot/sdk/service.py` — `state.adapter.render_reasoning_in_ui`
threaded through every adapter construction.
5. `copilot/sdk/response_adapter.py` — `render_reasoning_in_ui` wiring +
service-layer yield filter gating for wire suppression while persistence
stays intact.
6. `copilot/stream_registry.py` — `count=config.stream_replay_count`.
7. `frontend/.../useCopilotStream.ts::handleReconnect` — 1500 ms
debounce.
8. `copilot/tools/web_search.py` + `models.py` — Sonar quick/deep paths,
`WebSearchResponse.answer` + typed extractors.
9. `frontend/.../GenericTool/*` — `answer` render + deep-aware labels /
accordion titles.
10. `executor/simulator.py` + `executor/manager.py` +
`copilot/config.py` — cost tracking + model swap + `user_id` threading.

### Changes

- `copilot/config.py` — new `render_reasoning_in_ui`,
`stream_replay_count`; `simulation_model` default flipped to Flash-Lite.
- `copilot/baseline/service.py` — `pending_events: asyncio.Queue`
refactor; outer gen runs loop as task, yields from queue live.
- `copilot/baseline/reasoning.py` —
`BaselineReasoningEmitter(render_in_ui=...)` + 64/50 coalesce.
- `copilot/sdk/service.py` + `response_adapter.py` —
`render_reasoning_in_ui` wire suppression (persistence preserved).
- `copilot/stream_registry.py` — replay cap from config.
- `copilot/tools/web_search.py` + `models.py` — Sonar quick/deep +
`answer` field + typed extractors.
- `copilot/tools/helpers.py` — tool description tightens `deep=true`
cost warning.
- `frontend/.../useCopilotStream.ts` — reconnect debounce.
- `frontend/.../GenericTool/GenericTool.tsx` + `helpers.ts` + tests —
render `answer`, deep-aware verbs / titles.
- `executor/simulator.py` + `simulator_test.py` + `executor/manager.py`
— cost tracking + model swap + user_id plumbing.

### Follow-up (deferred to a separate PR)

SDK per-token streaming via `include_partial_messages=True` was
attempted (commits `599e83543` + `530fa8f95`) and reverted here. The
two-signal model (StreamEvent partial deltas + AssistantMessage summary)
needs proper per-block diff tracking — when the partial stream delivers
a subset of the final block content, emit only
`summary.text[len(already_emitted):]` from the summary rather than
gating on a binary flag. Binary gating truncated replies in the field
when the partial stream delivered less than the summary (observed: "The
analysis template you" cut off mid-sentence because partial had streamed
that much and the rest only lived in the summary). SDK reasoning still
renders end-of-phase (as today); this PR's baseline per-token streaming
is unaffected.

### Checklist

For code changes:
- [x] Changes listed above
- [x] Test plan below
- [x] Tested according to the test plan:
- [x] `poetry run pytest backend/copilot/baseline/ backend/copilot/sdk/
backend/copilot/tools/web_search_test.py
backend/executor/simulator_test.py` — all pass (155 baseline + 927 SDK +
web_search + simulator)
- [x] `pnpm types && pnpm vitest run
src/app/(platform)/copilot/tools/GenericTool/` — pass
- [x] Manual: baseline live-streaming — Kimi K2.6 reasoning arrives
token-by-token, coalesced (no end-of-stream burst).
- [x] Manual: quick web_search via copilot UI — ~$0.005/call, answer +
citations rendered, cost logged as `provider=open_router`.
- [x] Manual: deep web_search — dispatched only on explicit research
phrasing; `sonar-deep-research` billed, UI labels say "Researched" / "N
research sources".
- [x] Manual: simulator dry-run — Gemini Flash-Lite, `[simulator] Turn
usage` log entry, PlatformCostLog row visible.
- [x] Manual: reconnect debounce — tab-focus thrash no longer produces
parallel XREADs in backend log.
- [ ] Manual: `CHAT_RENDER_REASONING_IN_UI=false` smoke-check —
reasoning collapse absent, no persisted reasoning row on reload.

For configuration changes:
- [x] `.env.default` — new config knobs fall back to pydantic defaults;
existing `CHAT_MODEL`/`CHAT_FAST_MODEL`/`CHAT_ADVANCED_MODEL` legacy
envs still honored upstream (unchanged by this PR).

### Companion PR

PR #12876 closes the `run_block`-via-copilot cost-leak gap (registers
`PerplexityBlock` / `FactCheckerBlock` in `BLOCK_COSTS`; documents the
credit/microdollar wallet boundary). Separate because the credit-wallet
side is orthogonal to the copilot microdollar / rate-limit surface this
PR ships.
2026-04-22 13:52:18 +07:00
Zamil Majdy
e3f6d36759 feat(backend/blocks): register 13 paid blocks + document credit/microdollar wallet boundary (#12876)
### Why / What / How

**Why.** Audit of `BLOCK_COSTS` against `credentials_store.py` system
credentials revealed **13 paid blocks** running for free from the credit
wallet's perspective — `BLOCK_COSTS.get(type(block))` returned `None`,
`cost = 0`, no `spend_credits` deduction. Users without their own API
key consumed system credentials with zero credit drain. Separately, the
credit wallet (user-facing prepaid balance) and the copilot microdollar
counter (operator-side meter that gates `daily_cost_limit_microdollars`)
were never documented as separate systems, so future readers kept
tripping on the "why isn't this block charging my limit?" question.

**What.** Three deltas, all credit-wallet-side:

- **Register the 13 paid blocks in `BLOCK_COSTS`** with reasonable
per-call credit prices (1 credit = $0.01). Pricing researched against
the providers' published rates with ~2-3x markup.
- **Document the credit/microdollar boundary** in
`copilot/rate_limit.py`: credits = user-facing prepaid wallet with
marketplace-creator charging; microdollars = operator-side meter that
only ticks on copilot LLM turns (baseline / SDK / web_search /
simulator). Block execution bills credits, not microdollars — explicit
contract.
- **Populate `provider_cost`** on PerplexityBlock so PlatformCostLog
rows carry the real OpenRouter `x-total-cost` value via the existing
`executor/cost_tracking.log_system_credential_cost` path (separate flow
from credit deduction).

### Block costs registered

| Provider | Block | Credits | Raw cost / markup |
|---|---|---|---|
| Perplexity (OpenRouter) | PerplexityBlock — Sonar | 1 | $0.001-0.005 /
call |
| | PerplexityBlock — Sonar Pro | 5 | $0.025 / call |
| | PerplexityBlock — Sonar Deep Research | 10 | up to $0.05 / call |
| Jina | FactCheckerBlock | 1 | $0.005 / call |
| Mem0 | AddMemoryBlock | 1 | $0.0004 / call (1c floor) |
| | SearchMemoryBlock | 1 | $0.004 / call |
| | GetAllMemoriesBlock | 1 | $0.004 / call |
| | GetLatestMemoryBlock | 1 | $0.004 / call |
| ScreenshotOne | ScreenshotWebPageBlock | 2 | $0.0085 / call (2.4x) |
| Nvidia | NvidiaDeepfakeDetectBlock | 2 | est $0.005 (no public SKU) |
| Smartlead | CreateCampaignBlock | 2 | $0.0065 send-equivalent (3x) |
| | AddLeadToCampaignBlock | 1 | $0.0065 (1.5x) |
| | SaveCampaignSequencesBlock | 1 | config-only |
| ZeroBounce | ValidateEmailsBlock | 2 | $0.008 / email (2.5x) |
| E2B + Anthropic | ClaudeCodeBlock | **100** | $0.50-$2 / typical
session (E2B sandbox + in-sandbox Claude) |
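
The credit prices above follow the stated rule of thumb (1 credit = $0.01, roughly 2-3x markup, 1-credit floor). A worked sketch of that conversion; the helper is illustrative only, and the real prices in `block_cost_config.py` are hand-picked per row:

```python
CREDIT_USD = 0.01  # 1 credit = $0.01

def credits_for(raw_cost_usd: float, markup: float = 2.5) -> int:
    """Rough conversion: provider cost x markup, rounded, with a 1-credit floor."""
    return max(1, round(raw_cost_usd * markup / CREDIT_USD))

# Spot checks against rows above (per-row markup is approximate, not a fixed rule):
print(credits_for(0.0085, markup=2.4))  # ScreenshotOne -> 2
print(credits_for(0.008, markup=2.5))   # ZeroBounce    -> 2
print(credits_for(0.0004, markup=2.5))  # Mem0 add      -> 1 (floor)
```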

**Not in scope** — already covered via the SDK
`ProviderBuilder.with_base_cost()` pattern in their respective
`_config.py`: Exa, Linear, Airtable, Bannerbear, Wolfram, Firecrawl,
Wordpress, Baas, Stagehand, Dataforseo.

### How

1. `backend/data/block_cost_config.py` — new `BlockCost` entries covering the
13 blocks (Perplexity across 3 model SKUs + Fact Checker + 11 more from this round).
2. `backend/copilot/rate_limit.py` — boundary docstring.
3. `backend/blocks/perplexity.py` — populate
`NodeExecutionStats.provider_cost` so PlatformCostLog rows carry the
real OpenRouter `x-total-cost` value.
4. Tests — `TestUnregisteredBlockRunsFree` regression +
`TestNewlyRegisteredBlockCosts` pinning every new entry by `cost_amount`
so a future refactor can't quietly drop one.

The companion Notion "Platform System Credentials" database has been
updated with a new `Platform Credit Cost` column populated across all 30
provider rows.

### Scope trim

An earlier revision piped block execution cost into the **copilot
microdollar counter** via `_record_block_microdollar_cost` in
`copilot/tools/helpers.py::execute_block`. That was reverted in
`16ae0f7b5` — the microdollar counter stays scoped to copilot LLM turns
only, credit wallet handles block execution. The pipe-through crossed a
boundary we explicitly want to keep.

### Changes

- `backend/data/block_cost_config.py` — 13 × `BlockCost` entries across
7 providers.
- `backend/blocks/perplexity.py` — populate `provider_cost` on the
execution stats (feeds PlatformCostLog).
- `backend/copilot/rate_limit.py` — boundary docstring only (no
behaviour change).
- `backend/copilot/tools/helpers_test.py` —
`TestUnregisteredBlockRunsFree` + `TestNewlyRegisteredBlockCosts` (8 new
regression tests).
- `backend/blocks/block_cost_tracking_test.py` — provider-cost
extraction pins.

### Checklist

For code changes:
- [x] Changes listed above
- [x] Test plan below
- [x] Tested according to the test plan:
- [x] `poetry run pytest backend/copilot/tools/helpers_test.py
backend/copilot/tools/run_block_test.py
backend/copilot/tools/continue_run_block_test.py
backend/blocks/block_cost_tracking_test.py
backend/blocks/test/test_perplexity.py` — passes
- [x] `poetry run pytest backend/executor/manager_cost_tracking_test.py
backend/copilot/rate_limit_test.py
backend/copilot/token_tracking_test.py` — passes (confirms docstring
edits didn't regress the LLM-turn microdollar path)
  - [x] Pyright clean on all touched files
- [ ] Manual: run PerplexityBlock via copilot `run_block` — credits
deduct, PlatformCostLog row visible with `provider_cost`, no
microdollar-counter tick.
- [ ] Manual: run an unregistered block via copilot — no error, no
credit drain, no silent billing.
- [ ] Manual: run ClaudeCodeBlock via builder — 100 credits deducted
from wallet.

### Companion PR

PR #12873 ships the copilot microdollar / rate-limit work (web_search
cost, simulator cost, reasoning / reconnect fixes). This PR is
credit-wallet only.
2026-04-22 12:03:02 +07:00
Nicholas Tindle
c1b9ed1f5e fix(backend/copilot): allow multiple compactions per turn (#12834)
### Why / What / How

**Why:** The old `CompactionTracker` set a `_done` flag after the first
completion and short-circuited every subsequent compaction in the same
turn. That blocked the SDK-internal compaction from running after a
pre-query compaction had already fired, so prompt-too-long errors
couldn't actually recover — retries saw the flag, bailed, and we re-hit
the context limit.

**What:** Drop the `_done` flag, track attempts and completions as
separate lists, and expose counters + an observability metadata builder
so callers can record compaction activity per turn.

**How:**
- Remove `_done` and `_compact_start` short-circuits.
- Track `_attempted_sources` / `_completed_sources` /
`_completed_count`.
- Expose `attempt_count`, `completed_count`, and
`get_observability_metadata()` / `get_log_summary()` for downstream
instrumentation (no caller change required in this PR).
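
A minimal sketch of the reshaped tracker (recording method names are illustrative; the real `CompactionTracker` in `sdk/compaction.py` also manages the pending transcript-path deque and hook wiring):

```python
class CompactionTracker:
    """Per-turn compaction bookkeeping without a single-shot `_done` flag."""

    def __init__(self) -> None:
        self._attempted_sources: list[str] = []
        self._completed_sources: list[str] = []

    def record_attempt(self, source: str) -> None:
        self._attempted_sources.append(source)

    def record_completion(self, source: str) -> None:
        self._completed_sources.append(source)

    @property
    def attempt_count(self) -> int:
        return len(self._attempted_sources)

    @property
    def completed_count(self) -> int:
        return len(self._completed_sources)

    def get_log_summary(self) -> str:
        by_source: dict[str, int] = {}
        for source in self._completed_sources:
            by_source[source] = by_source.get(source, 0) + 1
        sources = ", ".join(f"{k}:{v}" for k, v in by_source.items())
        # e.g. "attempts=3 completed=2 (sdk_internal:2)"
        return f"attempts={self.attempt_count} completed={self.completed_count} ({sources})"
```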

### Changes 🏗️

- `backend/copilot/sdk/compaction.py` — rewritten `CompactionTracker`
internals; adds properties + observability helpers.
- `backend/copilot/sdk/compaction_test.py` — tests for multi-compaction
flow + new counters.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] `poetry run pytest backend/copilot/sdk/compaction_test.py -xvs`
passes
- [ ] Local chat that hits prompt-too-long now recovers via SDK
compaction instead of failing the turn

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Changes core streaming compaction state transitions and persistence
timing, which could affect UI event sequencing or compaction completion
behavior under concurrency; coverage is improved with new
multi-compaction tests.
> 
> **Overview**
> Fixes `CompactionTracker` so compaction is no longer single-shot per
turn: removes the `_done`/event-gate behavior, queues multiple
`on_compact()` hook firings via a pending transcript-path deque, and
allows subsequent SDK-internal compactions after a pre-query compaction
within the same query.
> 
> Adds lightweight instrumentation by tracking attempt/completion
sources and counts, plus `get_observability_metadata()` and
`get_log_summary()` (including source summaries like `sdk_internal:2`).
Updates/expands tests to cover multi-compaction flows, transcript-path
handling, and the new counters/metadata.
> 
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
9bf8cdd367. Bugbot is set up for automated
code reviews on this repo. Configure
[here](https://www.cursor.com/dashboard/bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: majdyz <zamil.majdy@agpt.co>
2026-04-22 02:02:03 +00:00
Zamil Majdy
45bc167184 feat(backend/copilot): Kimi K2.6 fast default + 4-config matrix + coalesced reasoning + web_search tool (#12871)
### Why / What / How

**Why.** Four unrelated but interlocking problems on the baseline
(OpenRouter) copilot path, all blocking us from making Kimi K2.6 the
default fast model:

1. **Cost / capability gap on the default.** Kimi K2.6 prices at $0.60 /
$2.80 per MTok — ~5x cheaper input and ~5.4x cheaper output than Sonnet
4.6 — while tying Opus on SWE-Bench Verified (80.2% vs 80.8%) and
beating it on SWE-Bench Pro (58.6% vs 53.4%). OpenRouter exposes the
same `reasoning` / `include_reasoning` extension on Moonshot endpoints
that #12870 plumbed for Anthropic, so the reasoning collapse lights up
end-to-end without per-provider code.
2. **Kimi reasoning deltas freeze the UI.** K2.6 emits ~4,700
reasoning-delta SSE events per turn vs ~28 on Sonnet — the AI SDK v6
Reasoning UIMessagePart can't keep up and the tab locks. Needs a
coalescing buffer upstream.
3. **Kimi loops on `require_guide_read`.** The guide-guard checks
`session.messages` for a prior `agent_building_guide` call, but tool
calls aren't flushed to `session.messages` until the end of the turn —
mid-turn the check keeps returning False and Kimi calls the guide-load
tool repeatedly in the same turn. Needs an in-flight tracker that lives
on `ChatSession`.
4. **No `web_search` tool on either path.** Kimi doesn't have a native
web-search equivalent and the SDK path's native `WebSearch` (the Claude
Code CLI's built-in) doesn't carry cost accounting. We need one
implementation that both paths share and that reports cost through the
same tracker as every other tool call.

**What.** Five grouped deltas on the baseline service, tool layer, and
config:

- **Kimi K2.6 default.** `fast_standard_model` defaults to
`moonshotai/kimi-k2.6`. Full 2×2 model matrix below. Rollback is one env
var.
- **4-config model matrix.** `fast_standard_model` /
`fast_advanced_model` / `thinking_standard_model` /
`thinking_advanced_model`. Each cell independent so baseline can run a
cheap provider at the standard tier without leaking into the SDK path
(which is Anthropic-only by CLI contract). Legacy env vars
(`CHAT_MODEL`, `CHAT_FAST_MODEL`, `CHAT_ADVANCED_MODEL`) stay aliased
via `validation_alias` so live deployments keep resolving to the same
effective cell.
- **Reasoning delta coalescing.** `BaselineReasoningEmitter` buffers
deltas and flushes on a char-count OR time-interval threshold (32 chars
/ 40 ms). ~4,700 → ~150 SSE events per turn on Kimi; no perceptible
change on Sonnet (which was already well under the threshold).
- **In-flight tool-call tracker.** `ChatSession._inflight_tool_calls`
PrivateAttr is populated when a tool-call block is emitted and cleared
at turn end. `session.has_tool_been_called_this_turn(name)` now returns
True mid-turn, not just after the tool-result lands in
`session.messages` — which is what `require_guide_read` needs to cut the
loop (sketched after this list).
- **New `web_search` copilot tool.** Wraps Anthropic's server-side
`web_search_20250305` beta via `AsyncAnthropic` (direct — OpenRouter
can't proxy server-side tool execution). Dispatches through
`claude-haiku-4-5` with `max_uses=1`. Cost estimated from published
rates ($0.010 per search + Haiku tokens) since the Anthropic Messages
API doesn't report cost on the response; reported to
`persist_and_record_usage(provider='anthropic')` on both paths. SDK
native `WebSearch` moved from `_SDK_BUILTIN_ALWAYS` into
`SDK_DISALLOWED_TOOLS` so both paths now dispatch through
`mcp__copilot__web_search`.
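
A minimal sketch of the in-flight tracker from the fourth bullet; the field and helper names come from this PR, while the `messages` list and its scan are simplified for illustration:

```python
from pydantic import BaseModel, PrivateAttr

class ChatSession(BaseModel):
    messages: list[dict] = []                 # flushed only at end of turn
    _inflight_tool_calls: set[str] = PrivateAttr(default_factory=set)

    def announce_inflight_tool_call(self, tool_name: str) -> None:
        self._inflight_tool_calls.add(tool_name)

    def clear_inflight_tool_calls(self) -> None:
        self._inflight_tool_calls.clear()

    def has_tool_been_called_this_turn(self, tool_name: str) -> bool:
        # True mid-turn via the in-flight set, or after the tool result has
        # landed in session.messages at turn end (message shape simplified).
        if tool_name in self._inflight_tool_calls:
            return True
        return any(m.get("tool_name") == tool_name for m in self.messages)
```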

**How.**

1. `copilot/config.py` — 2×2 model fields with `AliasChoices` preserving
legacy env var names. `populate_by_name = True` so
`ChatConfig(fast_standard_model=...)` works in tests.
2. `copilot/baseline/service.py::_resolve_baseline_model` — resolves the
active baseline cell from `mode` + `tier`, no longer delegates to the
SDK resolver.
3. `copilot/baseline/reasoning.py` — `BaselineReasoningEmitter` gains
`_pending_delta` / `_last_flush_monotonic` and flushes on
`len(_pending_delta) >= _COALESCE_MIN_CHARS` OR `monotonic() -
_last_flush_monotonic >= _COALESCE_MAX_INTERVAL_MS / 1000` (see the
coalescing sketch after this list).
`_is_reasoning_route` rewritten as an anchored prefix match covering
`anthropic/`, `anthropic.`, `moonshotai/`, and `openrouter/kimi-` —
split from the narrower `_is_anthropic_model` gate that still governs
`cache_control` markers (which Kimi doesn't support).
4. `copilot/model.py::ChatSession` — `_inflight_tool_calls: set[str] =
PrivateAttr(default_factory=set)` plus `announce_inflight_tool_call` /
`clear_inflight_tool_calls` / `has_tool_been_called_this_turn`.
5. `copilot/tools/helpers.py::require_guide_read` — check
`session.has_tool_been_called_this_turn(_AGENT_GUIDE_TOOL_NAME)` before
falling back to scanning `session.messages`.
6. `copilot/tools/web_search.py` — new `WebSearchTool` +
`_extract_results` + `_estimate_cost_usd`. `is_available` gated on
`Settings().secrets.anthropic_api_key` so the deployment can roll back
just by unsetting the key.
7. `copilot/tools/__init__.py` — registers `web_search` in
`TOOL_REGISTRY` so it becomes `mcp__copilot__web_search` in the SDK
path.
8. `copilot/sdk/tool_adapter.py` — `WebSearch` moves to
`SDK_DISALLOWED_TOOLS`.
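
The coalescing condition from item 3, sketched with this PR's thresholds (32 chars / 40 ms); the emit callback and the rest of the class internals are simplified:

```python
import time
from collections.abc import Callable

_COALESCE_MIN_CHARS = 32
_COALESCE_MAX_INTERVAL_MS = 40

class BaselineReasoningEmitter:
    """Sketch of the delta-coalescing buffer only; emission is a plain callback here."""

    def __init__(self, emit: Callable[[str], None]) -> None:
        self._emit = emit
        self._pending_delta = ""
        self._last_flush_monotonic = time.monotonic()

    def on_reasoning_delta(self, delta: str) -> None:
        self._pending_delta += delta
        elapsed = time.monotonic() - self._last_flush_monotonic
        if (
            len(self._pending_delta) >= _COALESCE_MIN_CHARS
            or elapsed >= _COALESCE_MAX_INTERVAL_MS / 1000
        ):
            self.flush()

    def flush(self) -> None:
        if self._pending_delta:
            self._emit(self._pending_delta)     # one SSE event per flush
            self._pending_delta = ""
        self._last_flush_monotonic = time.monotonic()
```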

### Changes

- `copilot/config.py` — 2×2 model matrix with legacy env alias
preservation; `populate_by_name=True`.
- `copilot/baseline/service.py::_resolve_baseline_model` — resolves
against the new matrix.
- `copilot/baseline/reasoning.py` — `BaselineReasoningEmitter`
coalescing buffer; `_is_reasoning_route` rewritten as anchored prefix
match (covers `anthropic/`, `anthropic.`, `moonshotai/`,
`openrouter/kimi-`).
- `copilot/model.py::ChatSession` — `_inflight_tool_calls` PrivateAttr +
helpers.
- `copilot/baseline/service.py::_baseline_tool_executor` — calls
`announce_inflight_tool_call` after emitting `StreamToolInputAvailable`;
`clear_inflight_tool_calls` in the outer `finally` before persist.
- `copilot/tools/helpers.py::require_guide_read` — reads the new tracker
first.
- `copilot/tools/web_search.py` (new) — Anthropic `web_search_20250305`
wrapper + cost estimator.
- `copilot/tools/web_search_test.py` (new) — extractor / cost / dispatch
/ registry tests (12 total).
- `copilot/tools/models.py` — `WebSearchResponse` + `WebSearchResult` +
`ResponseType.WEB_SEARCH`.
- `copilot/tools/__init__.py` — registers `web_search`.
- `copilot/sdk/tool_adapter.py` — moves native `WebSearch` to
`SDK_DISALLOWED_TOOLS`.

### Checklist

For code changes:
- [x] Changes listed above
- [x] Test plan below
- [ ] Tested according to the test plan:
  - [x] `poetry run pytest backend/copilot/baseline/` — all pass
- [x] `poetry run pytest backend/copilot/sdk/` — all pass (SDK resolver
untouched)
- [x] `poetry run pytest backend/copilot/tools/web_search_test.py` — 12
pass
- [ ] Manual: send a multi-step prompt on fast mode with default config;
confirm backend routes to `moonshotai/kimi-k2.6`, SSE stream carries
`reasoning-start/delta/end` (coalesced), Reasoning collapse renders +
survives hard reload.
- [ ] Manual: 43-tool payload reliability on Kimi — watch for malformed
tool-call JSON or wrong-tool selection.
- [ ] Manual: `CHAT_FAST_STANDARD_MODEL=anthropic/claude-sonnet-4-6`
restarts confirm Sonnet routing (rollback path works).
- [ ] Manual: SDK path (`CHAT_USE_CLAUDE_AGENT_SDK=true`) still selects
the SDK service and uses `thinking_standard_model` = Sonnet (no Kimi
leaked into extended thinking).
- [ ] Manual: prompt that forces `web_search` — confirm results render,
`persist_and_record_usage(provider='anthropic')` runs, cost lands in the
per-user ledger.
- [ ] Manual: ask Kimi a question that would require
`agent_building_guide` — confirm the guide loads exactly once per turn
(no loop).

For configuration changes:
- [x] `.env.default` — all four model fields fall back to the pydantic
defaults; legacy `CHAT_MODEL` / `CHAT_FAST_MODEL` /
`CHAT_ADVANCED_MODEL` remain honored via `AliasChoices`.
2026-04-22 08:47:08 +07:00
Nicholas Tindle
64b7560291 Merge branch 'branch10' of https://github.com/Significant-Gravitas/AutoGPT into branch10 2026-04-20 12:47:46 -05:00
Nicholas Tindle
7351fba17a fix(backend): address autogpt-pr-reviewer-in-dev blockers
- Add WriteWorkspaceFileTool tests for quota, virus, scan, and conflict
  error handling (the PR's stated motivation was untested)
- Add ZeroDivisionError guard on storage_limit in quota check
- Assert scan_content_safe not called on quota rejection (fast-path)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-20 12:47:38 -05:00
Nicholas Tindle
5c5dd3733e Merge branch 'dev' into branch10 2026-04-20 11:58:43 -05:00
Nicholas Tindle
6f80a109ef fix(frontend): fix lint, add storage bar test coverage
- Fix prettier formatting in SubscriptionTierSection
- Add tests for WorkspaceStorageSection rendering (shown/hidden states)
- Mock useWorkspaceStorage in UsagePanelContent test suite

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 14:49:59 -05:00
Nicholas Tindle
81e4403216 fix(platform): resolve merge conflicts with dev, gather sequential awaits
- Resolve merge conflict in SubscriptionTierSection tier descriptions
  (merge dev's improved copy with our storage limit additions)
- Use asyncio.gather for sequential awaits in get_storage_usage endpoint

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 13:56:27 -05:00
Nicholas Tindle
6905038081 dx: remove accidentally committed scheduled_tasks.lock
Ephemeral lock file from Claude Code — should not be in the repo.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 12:44:31 -05:00
Nicholas Tindle
0af8f239f0 feat(frontend): update credits page copy for tier-based file storage
- Add file storage limits to subscription tier cards (250 MB / 1 GB / 5 GB)
- Update usage section header to "AutoPilot Usage & Storage"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 12:42:26 -05:00
Toran Bruce Richards
1c0c7a6b44 fix(copilot): add gh auth status check to Tool Discovery Priority section (#12832)
## Problem

The CoPilot system prompt contains a `gh auth status` instruction in the
E2B-specific `GitHub CLI` section, but models pattern-match to
`connect_integration` from the **Tool Discovery Priority** section —
which is where the actual decision to call an external service is made.

Because the GitHub auth check lives in a separate, later section, it's
not salient at the point of decision-making. This causes the model to
call `connect_integration(provider='github')` even when `gh` is already
authenticated via `GH_TOKEN`, unnecessarily prompting the user.

## Fix

Add a 3-line callout directly inside the **Tool Discovery Priority**
section:

```
> 🔑 **GitHub exception:** Before calling `connect_integration` for GitHub,
> always run `gh auth status` first. If it shows `Logged in`, proceed
> directly with `gh`/`git` — no integration connection needed.
```

This places the rule at the exact location where the model decides which
tool path to take, preventing the miss.

## Why this works

- **Placement over repetition**: The existing instruction isn't wrong —
it's just in the wrong spot relative to where the decision is made
- **Negative framing**: Explicitly says "before calling
`connect_integration`" which directly intercepts the incorrect reflex
- **Minimal change**: 4 lines added, zero removed

Co-authored-by: Toran Bruce Richards <22963551+Torantulino@users.noreply.github.com>
2026-04-17 15:22:10 +00:00
Nicholas Tindle
6794908692 fix(platform): address human review — asyncio.gather, formatBytes tests
- Use asyncio.gather for independent storage_limit + current_usage lookups
- Fix formatBytes unit boundary rollup to use 1024 threshold (binary units)
- Add parametrized formatBytes tests covering all unit boundaries

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 15:04:05 -05:00
Nicholas Tindle
9aa72fb46d fix(platform): address third-round review comments
- Use explicit except clauses for VirusDetectedError and VirusScanError
  in CoPilot WriteWorkspaceFileTool instead of isinstance check
- Log VirusDetectedError as security event (warning level)
- Fix formatBytes rounding at unit boundaries (1024 KB → 1.0 MB)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 21:42:59 -05:00
Nicholas Tindle
c06880412d refactor(frontend): use generated hook for workspace storage usage
Replace custom fetch with generated useGetWorkspaceStorageUsage hook
and StorageUsageResponse type from Orval-generated API client.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 21:27:19 -05:00
Nicholas Tindle
ad971698a0 feat(frontend): show workspace storage usage in usage limits panel
Add file storage bar to the CoPilot usage limits popover showing
used/limit bytes, percentage, and file count. Fetches from the
existing GET /workspace/storage/usage endpoint.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 21:23:28 -05:00
Nicholas Tindle
a81f305a74 fix(backend): address second-round review comments
- Move duplicate-path check before virus scan for non-overwrite writes
  (avoids slow scan on already-doomed requests)
- Add VirusScanError logging in CoPilot WriteWorkspaceFileTool so
  infrastructure failures (ClamAV outage) are logged at ERROR level

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 21:05:50 -05:00
Nicholas Tindle
5df958d302 fix(backend): address PR review — scan bug, quota overwrite, test coverage
- Fix VirusScanError (ClamAV outage) returning 409 instead of 500
- Fix duplicate virus scan in CoPilot WriteWorkspaceFileTool
- Fix overwrite quota miscalculation (subtract old file size from usage)
- Add parametrized tests for get_workspace_storage_limit_bytes (all tiers)
- Add test for quota rejection path in WorkspaceManager.write_file()
- Add test for 80% warning log
- Add test for VirusScanError → 500 mapping
- Add test for get_storage_usage endpoint with tier-based limit
- Extract projected_usage variable, remove redundant truthiness guard

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 14:59:11 -05:00
Nicholas Tindle
406ea4213d feat(backend): tier-based workspace file storage limits
Replace the global `max_workspace_storage_mb` config with per-tier limits:
- FREE: 250 MB, PRO: 1 GB, BUSINESS: 5 GB, ENTERPRISE: 15 GB

Move quota enforcement into WorkspaceManager.write_file() so both REST
uploads and CoPilot agent writes are covered (previously only REST had it).
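
Roughly, the check that moves into `write_file` looks like the sketch below; the tier limits are from this commit, while the helper name, error type, and the `old_file_size` overwrite handling (added in a later review-fix commit) are illustrative:

```python
STORAGE_LIMIT_BYTES = {
    "FREE": 250 * 1024**2,
    "PRO": 1 * 1024**3,
    "BUSINESS": 5 * 1024**3,
    "ENTERPRISE": 15 * 1024**3,
}

def check_quota(tier: str, current_usage: int, incoming_size: int,
                old_file_size: int = 0) -> None:
    """Reject the write if it would push the user past their tier limit.

    old_file_size covers overwrites: the replaced file's bytes are freed,
    so they are subtracted before comparing against the limit.
    """
    projected_usage = current_usage - old_file_size + incoming_size
    if projected_usage > STORAGE_LIMIT_BYTES[tier]:
        raise ValueError("Storage limit exceeded")  # real code raises a domain error
```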

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 13:31:53 -05:00
634 changed files with 63193 additions and 6642 deletions

View File

@@ -0,0 +1,245 @@
---
name: pr-polish
description: Alternate /pr-review and /pr-address on a PR until the PR is truly mergeable — no new review findings, zero unresolved inline threads, zero unaddressed top-level reviews or issue comments, all CI checks green, and two consecutive quiet polls after CI settles. Use when the user wants a PR polished to merge-ready without setting a fixed number of rounds.
user-invocable: true
argument-hint: "[PR number or URL] — if omitted, finds PR for current branch."
metadata:
author: autogpt-team
version: "1.0.0"
---
# PR Polish
**Goal.** Drive a PR to merge-ready by alternating `/pr-review` and `/pr-address` until **all** of the following hold:
1. The most recent `/pr-review` produces **zero new findings** (no new inline comments, no new top-level reviews with a non-empty body).
2. Every inline review thread reachable via GraphQL reports `isResolved: true`.
3. Every non-bot, non-author top-level review has been acknowledged (replied-to) OR resolved via a thread it spawned.
4. Every non-bot, non-author issue comment has been acknowledged (replied-to).
5. Every CI check is `conclusion: "success"` or `"skipped"` / `"neutral"` — none `"failure"` or still pending.
6. **Two consecutive post-CI polls** (≥60s apart) stay clean — no new threads, no new non-empty reviews, no new issue comments. Bots (coderabbitai, sentry, autogpt-reviewer) frequently post late after CI settles; a single green snapshot is not sufficient.
**Do not stop at a fixed number of rounds.** If round N introduces new comments, round N+1 is required. Cap at `_MAX_ROUNDS = 10` as a safety valve, but expect 2-5 in practice.
## TodoWrite
Before starting, write two todos so the user can see the loop progression:
- `Round {current}: /pr-review + /pr-address on PR #{N}` — current iteration.
- `Final polish polling: 2 consecutive clean polls, CI green, 0 unresolved` — runs after the last non-empty review round.
Update the `current` round counter at the start of each iteration; mark `completed` only when the round's address step finishes (all new threads addressed + resolved).
## Find the PR
```bash
ARG_PR="${ARG:-}"
# Normalize URL → numeric ID if the skill arg is a pull-request URL.
if [[ "$ARG_PR" =~ ^https?://github\.com/[^/]+/[^/]+/pull/([0-9]+) ]]; then
ARG_PR="${BASH_REMATCH[1]}"
fi
PR="${ARG_PR:-$(gh pr list --head "$(git branch --show-current)" --repo Significant-Gravitas/AutoGPT --json number --jq '.[0].number')}"
if [ -z "$PR" ] || [ "$PR" = "null" ]; then
echo "No PR found for current branch. Provide a PR number or URL as the skill arg."
exit 1
fi
echo "Polishing PR #$PR"
```
## The outer loop
```text
round = 0
while round < _MAX_ROUNDS:
round += 1
baseline = snapshot_state(PR) # see "Snapshotting state" below
invoke_skill("pr-review", PR) # posts findings as inline comments / top-level review
findings = diff_state(PR, baseline)
if findings.total == 0:
break # no new findings → go to polish polling
invoke_skill("pr-address", PR) # resolves every unresolved thread + CI failure
# Post-loop: polish polling (see below).
polish_polling(PR)
```
### Snapshotting state
Before each `/pr-review`, capture a baseline so the diff after the review reflects **only** what the review just added (not pre-existing threads):
```bash
# Inline threads — total count + latest databaseId per thread
gh api graphql -f query="
{
repository(owner: \"Significant-Gravitas\", name: \"AutoGPT\") {
pullRequest(number: ${PR}) {
reviewThreads(first: 100) {
totalCount
nodes {
id
isResolved
comments(last: 1) { nodes { databaseId } }
}
}
}
}
}" > /tmp/baseline_threads.json
# Top-level reviews — count + latest id per non-empty review
gh api "repos/Significant-Gravitas/AutoGPT/pulls/${PR}/reviews" --paginate \
--jq '[.[] | select((.body // "") != "") | {id, user: .user.login, state, submitted_at}]' \
> /tmp/baseline_reviews.json
# Issue comments — count + latest id per non-bot, non-author comment.
# Bots are filtered by User.type == "Bot" (GitHub sets this for app/bot
# accounts like coderabbitai, github-actions, sentry-io). The author is
# filtered by comparing login to the PR author — export it so jq can see it.
AUTHOR=$(gh api "repos/Significant-Gravitas/AutoGPT/pulls/${PR}" --jq '.user.login')
gh api "repos/Significant-Gravitas/AutoGPT/issues/${PR}/comments" --paginate \
--jq --arg author "$AUTHOR" \
'[.[] | select(.user.type != "Bot" and .user.login != $author)
| {id, user: .user.login, created_at}]' \
> /tmp/baseline_issue_comments.json
```
### Diffing after a review
After `/pr-review` runs, any of the following counts as a "new finding" and triggers another address round:
- New inline thread `id` not in the baseline.
- An existing thread whose latest comment `databaseId` is higher than the baseline's (new reply on an old thread).
- A new top-level review `id` with a non-empty body.
- A new issue comment `id` from a non-bot, non-author user.
If any of the four buckets is non-empty → not done; invoke `/pr-address` and loop.
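A minimal sketch of that bucket diff, assuming the baseline and current snapshots have been parsed into plain dicts (the shapes are simplified relative to the raw GraphQL/REST JSON written above):
```python
def diff_findings(baseline: dict, current: dict) -> dict[str, list]:
    """Return the four 'new finding' buckets; all empty means the round is clean."""
    base_threads = {t["id"]: t for t in baseline["threads"]}
    new_threads, updated_threads = [], []
    for thread in current["threads"]:
        prev = base_threads.get(thread["id"])
        if prev is None:
            new_threads.append(thread)                     # brand-new inline thread
        elif thread["latest_comment_id"] > prev["latest_comment_id"]:
            updated_threads.append(thread)                 # new reply on an old thread
    base_review_ids = {r["id"] for r in baseline["reviews"]}
    new_reviews = [r for r in current["reviews"]
                   if r["id"] not in base_review_ids and r.get("body")]
    base_comment_ids = {c["id"] for c in baseline["issue_comments"]}
    new_comments = [c for c in current["issue_comments"]
                    if c["id"] not in base_comment_ids]
    return {
        "new_threads": new_threads,
        "updated_threads": updated_threads,
        "new_reviews": new_reviews,
        "new_issue_comments": new_comments,
    }
```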
## Polish polling
Once `/pr-review` produces zero new findings, do **not** exit yet. Bots (coderabbitai, sentry, autogpt-reviewer) commonly post late reviews after CI settles — 30-90 seconds after the final push. Poll at 60-second intervals:
```text
NON_SUCCESS_TERMINAL = {"failure", "cancelled", "timed_out", "action_required", "startup_failure"}
clean_polls = 0
required_clean = 2
while clean_polls < required_clean:
# 1. CI gate — any terminal non-success conclusion (not just "failure")
# must trigger /pr-address. "success", "skipped", "neutral" are clean;
# anything else (including cancelled, timed_out, action_required) is a
# blocker that won't self-resolve.
ci = fetch_check_runs(PR)
if any ci.conclusion in NON_SUCCESS_TERMINAL:
invoke_skill("pr-address", PR) # address failures + any new comments
baseline = snapshot_state(PR) # reset — push during address invalidates old baseline
clean_polls = 0
continue
if any ci.conclusion is None (still in_progress):
sleep 60; continue # wait without counting this as clean
# 2. Comment / thread gate
threads = fetch_unresolved_threads(PR)
new_issue_comments = diff_against_baseline(issue_comments)
new_reviews = diff_against_baseline(reviews)
if threads or new_issue_comments or new_reviews:
invoke_skill("pr-address", PR)
baseline = snapshot_state(PR) # reset — the address loop just dealt with these,
# otherwise they stay "new" relative to the old baseline forever
clean_polls = 0
continue
# 3. Mergeability gate
mergeable = gh api repos/.../pulls/${PR} --jq '.mergeable'
if mergeable == false (CONFLICTING):
resolve_conflicts(PR) # see pr-address skill
clean_polls = 0
continue
if mergeable is null (UNKNOWN):
sleep 60; continue
clean_polls += 1
sleep 60
```
Only after `clean_polls == 2` do you report `ORCHESTRATOR:DONE`.
### Why 2 clean polls, not 1
A single green snapshot can be misleading — the final CI check often completes ~30s before a bot posts its delayed review. One quiet cycle does not prove the PR is stable; two consecutive cycles with no new threads, reviews, or issue comments arriving gives high confidence nothing else is incoming.
### Why check every source on each poll
`/pr-address` polling inside a single round already re-checks its own comments, but `/pr-polish` sits a level above and must also catch:
- New top-level reviews (autogpt-reviewer sometimes posts structured feedback only after several CI green cycles).
- Issue comments from human reviewers (not caught by inline thread polling).
- Sentry bug predictions that land on new line numbers post-push.
- Merge conflicts introduced by a race between your push and a merge to `dev`.
## Invocation pattern
Delegate to existing skills with the `Skill` tool; do not re-implement the review or address logic inline. This keeps the polish loop focused on orchestration and lets the child skills evolve independently.
```python
Skill(skill="pr-review", args=pr_url)
Skill(skill="pr-address", args=pr_url)
```
After each child invocation, re-query GitHub state directly — never trust a summary for the stop condition. The orchestrator's `ORCHESTRATOR:DONE` is verified against actual GraphQL / REST responses per the rules in `pr-address`'s "Verify actual count before outputting ORCHESTRATOR:DONE" section.
### **Auto-continue: do NOT end your response between child skills**
`/pr-polish` is a single orchestration task — one invocation drives the PR all the way to merge-ready. When a child `Skill()` call returns control to you:
- Do NOT summarize and stop.
- Do NOT wait for user confirmation to continue.
- Immediately, in the same response, perform the next loop step: state diff → decide next action → next `Skill()` call or polling sleep.
The child skill returning is a **loop iteration boundary**, not a conversation turn boundary. You are expected to keep going until one of the exit conditions in the opening section is met (2 consecutive clean polls, `_MAX_ROUNDS` hit, or an unrecoverable error).
If the user needs to approve a risky action mid-loop (e.g., a force-push or a destructive git operation), pause there — but not at the routine "round N finished, round N+1 needed" boundary. Those are silent transitions.
## GitHub rate limits
This skill issues many GraphQL calls (one review-thread query per outer iteration plus per-poll queries inside polish polling). Expect the GraphQL budget to be tight on large PRs. When `gh api rate_limit --jq .resources.graphql.remaining` drops below ~200, back off:
- Fall back to REST for reads (flat `/pulls/{N}/comments`, `/pulls/{N}/reviews`, `/issues/{N}/comments`) per the `pr-address` skill's GraphQL-fallback section.
- Queue thread resolutions (GraphQL-only) until the budget resets; keep making progress on fixes + REST replies meanwhile.
- `sleep 5` between any batch of ≥20 writes to avoid secondary rate limits.
## Safety valves
- `_MAX_ROUNDS = 10` — if review+address rounds exceed this, stop and escalate to the user with a summary of what's still unresolved. A PR that cannot converge in 10 rounds has systemic issues that need human judgment.
- After each commit, run `poetry run format` / `pnpm format && pnpm lint && pnpm types` per the target codebase's conventions. A failing format check is CI `failure` that will never self-resolve.
- Every `/pr-review` round checks for **duplicate** concerns first (via `pr-review`'s own "Fetch existing review comments" step) so the loop does not re-post the same finding that a prior round already resolved.
## Reporting
When the skill finishes (either via two clean polls or hitting `_MAX_ROUNDS`), produce a compact summary:
```
PR #{N} polish complete ({rounds_completed} rounds):
- {X} inline threads opened and resolved
- {Y} CI failures fixed
- {Z} new commits pushed
Final state: CI green, {total} threads all resolved, mergeable.
```
If exiting via `_MAX_ROUNDS`, flag explicitly:
```
PR #{N} polish stopped at {_MAX_ROUNDS} rounds — NOT merge-ready:
- {N} threads still unresolved: {titles}
- CI status: {summary}
Needs human review.
```
## When to use this skill
Use when the user says any of:
- "polish this PR"
- "keep reviewing and addressing until it's mergeable"
- "loop /pr-review + /pr-address until done"
- "make sure the PR is actually merge-ready"
Do **not** use when:
- User wants just one review pass (→ `/pr-review`).
- User wants to address already-posted comments without further self-review (→ `/pr-address`).
- A fixed round count is explicitly requested (e.g., "do 3 rounds") — honour the count instead of converging.

View File

@@ -186,7 +186,7 @@ Multiple worktrees share the same host — Docker infra (postgres, redis, clamav
### Lock file contract
Path (**always** the root worktree so all siblings see it): `/Users/majdyz/Code/AutoGPT/.ign.testing.lock`
Path (**always** the root worktree so all siblings see it): `$REPO_ROOT/.ign.testing.lock`
Body (one `key=value` per line):
```
@@ -202,7 +202,7 @@ intent=<one-line description + rough duration>
### Claim
```bash
LOCK=/Users/majdyz/Code/AutoGPT/.ign.testing.lock
LOCK=$REPO_ROOT/.ign.testing.lock
NOW=$(date -u +%Y-%m-%dT%H:%MZ)
STALE_AFTER_MIN=5
@@ -252,7 +252,7 @@ echo "$HEARTBEAT_PID" > /tmp/pr-test-heartbeat.pid
kill "$HEARTBEAT_PID" 2>/dev/null
rm -f "$LOCK" /tmp/pr-test-heartbeat.pid
echo "$(date -u +%Y-%m-%dT%H:%MZ) [pr-${PR_NUMBER}] released lock" \
>> /Users/majdyz/Code/AutoGPT/.ign.testing.log
>> $REPO_ROOT/.ign.testing.log
```
Use a `trap` so release runs even on `exit 1`:
@@ -260,12 +260,38 @@ Use a `trap` so release runs even on `exit 1`:
trap 'kill "$HEARTBEAT_PID" 2>/dev/null; rm -f "$LOCK"' EXIT INT TERM
```
### **Release the lock AS SOON AS the test run is done**
The lock guards **test execution**, not **app lifecycle**. Once Step 5 (record results) and Step 6 (post PR comment) are complete, release the lock IMMEDIATELY — even if:
- The native `poetry run app` / `pnpm dev` processes are still running so the user can keep poking at the app manually.
- You're leaving docker containers up.
- You're tailing logs for a minute or two.
Keeping the lock held past the test run is the single most common way `/pr-test` stalls other agents. **The app staying up is orthogonal to the lock; don't conflate them.** Sibling worktrees running their own `/pr-test` will kill the stray processes and free the ports themselves (Step 3c/3e-native handle that) — they just need the lock file gone.
Concretely, the sequence at the end of every `/pr-test` run (success or failure) is:
```bash
# 1. Write the final report + post PR comment — done above in Step 5/6.
# 2. Release the lock right now, even if the app is still up.
kill "$HEARTBEAT_PID" 2>/dev/null
rm -f "$LOCK" /tmp/pr-test-heartbeat.pid
echo "$(date -u +%Y-%m-%dT%H:%MZ) [pr-${PR_NUMBER}] released lock (app may still be running)" \
>> $REPO_ROOT/.ign.testing.log
# 3. Optionally leave the app running and note it so the user knows:
echo "Native stack still running on :3000 / :8006 for manual poking. Kill with:"
echo " pkill -9 -f 'poetry run app'; pkill -9 -f 'next-server|next dev'"
```
If a sibling agent's `/pr-test` needs to take over, it'll do the kill+rebuild dance from Step 3c/3e-native on its own — your only job is to not hold the lock file past the end of your test.
### Shared status log
`/Users/majdyz/Code/AutoGPT/.ign.testing.log` is an append-only channel any agent can read/write. Use it for "I'm waiting", "I'm done, resources free", or post-run notes:
`$REPO_ROOT/.ign.testing.log` is an append-only channel any agent can read/write. Use it for "I'm waiting", "I'm done, resources free", or post-run notes:
```bash
echo "$(date -u +%Y-%m-%dT%H:%MZ) [pr-${PR_NUMBER}] <message>" \
>> /Users/majdyz/Code/AutoGPT/.ign.testing.log
>> $REPO_ROOT/.ign.testing.log
```
## Step 3: Environment setup
@@ -755,6 +781,19 @@ Upload screenshots to the PR using the GitHub Git API (no local git operations
**CRITICAL — NEVER post a bare directory link like `https://github.com/.../tree/...`.** Every screenshot MUST appear as `![name](raw_url)` inline in the PR comment so reviewers can see them without clicking any links. After posting, the verification step below greps the comment for `![` tags and exits 1 if none are found — the test run is considered incomplete until this passes.
**CRITICAL — NEVER paste absolute local paths into the PR comment.** Strings like `/Users/…`, `/home/…`, `C:\…` are useless to every reviewer except you. Before posting, grep the final body for `/Users/`, `/home/`, `/tmp/`, `/private/`, `C:\`, `~/` and either drop those lines entirely or rewrite them as repo-relative paths (`autogpt_platform/backend/…`). The PR comment is an artifact reviewers on GitHub read — it must be self-contained on github.com. Keep local paths in `$RESULTS_DIR/test-report.md` for yourself; only copy the *content* they reference (excerpts, test names, log lines) into the PR comment, not the path.
**Pre-post sanity check** (paste after building the comment body, before `gh api ... comments`):
```bash
# Reject any local-looking absolute path or home-dir shortcut in the body
if grep -nE '(^|[^A-Za-z])(/Users/|/home/|/tmp/|/private/|C:\\|~/)[A-Za-z0-9]' "$COMMENT_FILE" ; then
echo "ABORT: local filesystem paths detected in PR comment body."
echo "Remove or rewrite as repo-relative (autogpt_platform/...) before posting."
exit 1
fi
```
```bash
# Upload screenshots via GitHub Git API (creates blobs, tree, commit, and ref remotely)
REPO="Significant-Gravitas/AutoGPT"

View File

@@ -119,10 +119,12 @@ jobs:
runs-on: ubuntu-latest
services:
redis:
image: redis:latest
ports:
- 6379:6379
# Redis is provisioned as a real 3-shard cluster below via docker
# run (see the "Start Redis Cluster" step). GHA services can't
# override the image CMD or stand up multi-container clusters, so
# that setup is inlined — it mirrors the topology of the local dev
# compose stack (autogpt_platform/docker-compose.platform.yml) and
# prod helm chart.
rabbitmq:
image: rabbitmq:4.1.4
ports:
@@ -166,6 +168,68 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
- name: Start Redis Cluster (3 shards)
run: |
# 3-master Redis Cluster matching the local compose stack
# (autogpt_platform/docker-compose.platform.yml) and prod. Each
# shard runs in its own container on a dedicated bridge network,
# announces its compose-style hostname for intra-network clients,
# and publishes 1700N on the GHA host so tests can reach every
# shard via localhost. The backend's ``_address_remap`` rewrites
# every CLUSTER SLOTS reply to localhost:<announced-port>, which
# picks the right published port per shard.
#
# Not reusing docker-compose.platform.yml directly because compose
# validates the full file even when only some services are ``up``,
# and that file references services (db/kong/...) defined in a
# sibling compose file — pulling both in would needlessly couple
# CI to the full local-dev stack.
docker network create redis-cluster-ci
for i in 0 1 2; do
port=$((17000 + i))
bus=$((27000 + i))
docker run -d --name redis-$i --network redis-cluster-ci \
--network-alias redis-$i \
-p $port:$port \
redis:7 \
redis-server --port $port \
--cluster-enabled yes \
--cluster-config-file nodes.conf \
--cluster-node-timeout 5000 \
--cluster-require-full-coverage no \
--cluster-announce-hostname redis-$i \
--cluster-announce-port $port \
--cluster-announce-bus-port $bus \
--cluster-preferred-endpoint-type hostname
done
# Wait for each shard to accept commands.
for i in 0 1 2; do
port=$((17000 + i))
for _ in $(seq 1 30); do
docker exec redis-$i redis-cli -p $port ping 2>/dev/null | grep -q PONG && break
sleep 1
done
done
# Form the cluster from an init container on the same network so
# --cluster-preferred-endpoint-type hostname resolves redis-0/1/2.
docker run --rm --network redis-cluster-ci redis:7 \
redis-cli --cluster create \
redis-0:17000 redis-1:17001 redis-2:17002 \
--cluster-replicas 0 --cluster-yes
# Confirm convergence.
for _ in $(seq 1 30); do
state=$(docker exec redis-0 redis-cli -p 17000 cluster info | awk -F: '/^cluster_state:/ {print $2}' | tr -d '[:cntrl:]')
if [ "$state" = "ok" ]; then
echo "Redis Cluster ready (3 shards, state=ok)"
docker exec redis-0 redis-cli -p 17000 cluster nodes
exit 0
fi
sleep 1
done
echo "Redis Cluster failed to reach ok state" >&2
docker exec redis-0 redis-cli -p 17000 cluster info >&2 || true
exit 1
- name: Setup Supabase
uses: supabase/setup-cli@v1
with:
@@ -286,8 +350,13 @@ jobs:
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PORT: "17000"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!
# Opt-in: lets backend/data/e2e_redis_restart_test.py spin up its
# own isolated 3-shard cluster (ports 27110-27112) and exercise
# ``docker restart <shard>`` mid-stream. Off locally so a
# contributor's ``poetry run test`` doesn't pay the ~15s cost.
E2E_RESTART_ISOLATED: "1"
- name: Upload coverage reports to Codecov
if: ${{ !cancelled() }}

.gitignore vendored
View File

@@ -196,3 +196,7 @@ test.db
plans/
.claude/worktrees/
test-results/
# Playwright MCP / local browser-testing artifacts
.playwright-mcp/
copilot-session-switch-qa/

View File

@@ -267,7 +267,7 @@
"filename": "autogpt_platform/backend/backend/blocks/replicate/replicate_block.py",
"hashed_secret": "8bbdd6f26368f58ea4011d13d7f763cb662e66f0",
"is_verified": false,
"line_number": 55
"line_number": 67
}
],
"autogpt_platform/backend/backend/blocks/slant3d/webhook.py": [
@@ -467,5 +467,5 @@
}
]
},
"generated_at": "2026-04-09T14:20:23Z"
"generated_at": "2026-04-24T16:42:44Z"
}

View File

@@ -1,33 +0,0 @@
from typing import Optional
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
class RateLimitSettings(BaseSettings):
redis_host: str = Field(
default="redis://localhost:6379",
description="Redis host",
validation_alias="REDIS_HOST",
)
redis_port: str = Field(
default="6379", description="Redis port", validation_alias="REDIS_PORT"
)
redis_password: Optional[str] = Field(
default=None,
description="Redis password",
validation_alias="REDIS_PASSWORD",
)
requests_per_minute: int = Field(
default=60,
description="Maximum number of requests allowed per minute per API key",
validation_alias="RATE_LIMIT_REQUESTS_PER_MINUTE",
)
model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")
RATE_LIMIT_SETTINGS = RateLimitSettings()

View File

@@ -1,51 +0,0 @@
import time
from typing import Tuple
from redis import Redis
from .config import RATE_LIMIT_SETTINGS
class RateLimiter:
def __init__(
self,
redis_host: str = RATE_LIMIT_SETTINGS.redis_host,
redis_port: str = RATE_LIMIT_SETTINGS.redis_port,
redis_password: str | None = RATE_LIMIT_SETTINGS.redis_password,
requests_per_minute: int = RATE_LIMIT_SETTINGS.requests_per_minute,
):
self.redis = Redis(
host=redis_host,
port=int(redis_port),
password=redis_password,
decode_responses=True,
)
self.window = 60
self.max_requests = requests_per_minute
async def check_rate_limit(self, api_key_id: str) -> Tuple[bool, int, int]:
"""
Check if request is within rate limits.
Args:
api_key_id: The API key identifier to check
Returns:
Tuple of (is_allowed, remaining_requests, reset_time)
"""
now = time.time()
window_start = now - self.window
key = f"ratelimit:{api_key_id}:1min"
pipe = self.redis.pipeline()
pipe.zremrangebyscore(key, 0, window_start)
pipe.zadd(key, {str(now): now})
pipe.zcount(key, window_start, now)
pipe.expire(key, self.window)
_, _, request_count, _ = pipe.execute()
remaining = max(0, self.max_requests - request_count)
reset_time = int(now + self.window)
return request_count <= self.max_requests, remaining, reset_time

View File

@@ -1,32 +0,0 @@
from fastapi import HTTPException, Request
from starlette.middleware.base import RequestResponseEndpoint
from .limiter import RateLimiter
async def rate_limit_middleware(request: Request, call_next: RequestResponseEndpoint):
"""FastAPI middleware for rate limiting API requests."""
limiter = RateLimiter()
if not request.url.path.startswith("/api"):
return await call_next(request)
api_key = request.headers.get("Authorization")
if not api_key:
return await call_next(request)
api_key = api_key.replace("Bearer ", "")
is_allowed, remaining, reset_time = await limiter.check_rate_limit(api_key)
if not is_allowed:
raise HTTPException(
status_code=429, detail="Rate limit exceeded. Please try again later."
)
response = await call_next(request)
response.headers["X-RateLimit-Limit"] = str(limiter.max_requests)
response.headers["X-RateLimit-Remaining"] = str(remaining)
response.headers["X-RateLimit-Reset"] = str(reset_time)
return response

View File

@@ -59,6 +59,8 @@ class OAuthState(BaseModel):
code_verifier: Optional[str] = None
scopes: list[str]
"""Unix timestamp (seconds) indicating when this OAuth state expires"""
credential_id: Optional[str] = None
"""If set, this OAuth flow upgrades an existing credential's scopes."""
class UserMetadata(BaseModel):

View File

@@ -1,13 +1,16 @@
import asyncio
from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING, Any, Union
from expiringdict import ExpiringDict
if TYPE_CHECKING:
from redis.asyncio import Redis as AsyncRedis
from redis.asyncio.cluster import RedisCluster as AsyncRedisCluster
from redis.asyncio.lock import Lock as AsyncRedisLock
AsyncRedisLike = Union[AsyncRedis, AsyncRedisCluster]
class AsyncRedisKeyedMutex:
"""
@@ -17,7 +20,7 @@ class AsyncRedisKeyedMutex:
in case the key is not unlocked for a specified duration, to prevent memory leaks.
"""
def __init__(self, redis: "AsyncRedis", timeout: int | None = 60):
def __init__(self, redis: "AsyncRedisLike", timeout: int | None = 60):
self.redis = redis
self.timeout = timeout
self.locks: dict[Any, "AsyncRedisLock"] = ExpiringDict(

View File

@@ -37,6 +37,23 @@ JWT_VERIFY_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=
UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio=
# Web Push (VAPID) — generate with: poetry run python -c "
# from py_vapid import Vapid; import base64
# from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
# v = Vapid(); v.generate_keys()
# raw_priv = v.private_key.private_numbers().private_value.to_bytes(32, 'big')
# print('VAPID_PRIVATE_KEY=' + base64.urlsafe_b64encode(raw_priv).rstrip(b'=').decode())
# raw_pub = v.public_key.public_bytes(Encoding.X962, PublicFormat.UncompressedPoint)
# print('VAPID_PUBLIC_KEY=' + base64.urlsafe_b64encode(raw_pub).rstrip(b'=').decode())
# "
# Dev-only keypair below — DO NOT use in staging/production. Regenerate
# your own with the snippet above before any non-local deployment.
VAPID_PRIVATE_KEY=17hBPdSdn6TR_yAgQxA0TjTcvRj3Lf6znHnASZ4rOKc
VAPID_PUBLIC_KEY=BBg49iVTWthVbRYphwmZNvZyiSJDqtSO4nmLxDzLKe3Oo9jbtu0Usa14xX4HQQNLUeiEfzD42zWSlrvY1PR12bs
# Per RFC 8292 push services use this in 410 Gone reports; set to a real
# mailbox in production. Defaults to a placeholder for local dev.
VAPID_CLAIM_EMAIL=mailto:dev@example.com
## ===== IMPORTANT OPTIONAL CONFIGURATION ===== ##
# Platform URLs (set these for webhooks and OAuth to work)
PLATFORM_BASE_URL=http://localhost:8000
@@ -182,6 +199,10 @@ GOOGLE_MAPS_API_KEY=
# Platform Bot Linking
PLATFORM_LINK_BASE_URL=http://localhost:3000/link
# CoPilot chat-platform bridge (Discord/Telegram/Slack)
# Uses FRONTEND_BASE_URL (above) for link confirmation pages.
AUTOPILOT_BOT_DISCORD_TOKEN=
# Communication Services
DISCORD_BOT_TOKEN=
MEDIUM_API_KEY=

View File

@@ -1,14 +1,44 @@
import asyncio
from typing import Dict, Set
import json
import logging
import time
from typing import Awaitable, Callable, Dict, Optional, Set
from fastapi import WebSocket
from fastapi import WebSocket, WebSocketDisconnect
from redis.asyncio import Redis as AsyncRedis
from redis.asyncio.client import PubSub as AsyncPubSub
from redis.exceptions import MovedError, RedisError, ResponseError
from starlette.websockets import WebSocketState
from backend.api.model import NotificationPayload, WSMessage, WSMethod
from backend.api.model import WSMessage, WSMethod
from backend.data import redis_client as redis
from backend.data.event_bus import _assert_no_wildcard
from backend.data.execution import (
ExecutionEventType,
GraphExecutionEvent,
NodeExecutionEvent,
exec_channel,
get_graph_execution_meta,
graph_all_channel,
)
from backend.data.notification_bus import NotificationEvent
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
_settings = Settings()
def _is_ws_close_race(exc: BaseException, websocket: WebSocket) -> bool:
"""A SPUBLISH→WS send racing with WS close — benign, drop quietly."""
if isinstance(exc, WebSocketDisconnect):
return True
if (
getattr(websocket, "application_state", None) == WebSocketState.DISCONNECTED
or getattr(websocket, "client_state", None) == WebSocketState.DISCONNECTED
):
return True
if isinstance(exc, RuntimeError) and "close message has been sent" in str(exc):
return True
return False
_EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = {
ExecutionEventType.GRAPH_EXEC_UPDATE: WSMethod.GRAPH_EXECUTION_EVENT,
@@ -16,128 +46,379 @@ _EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = {
}
def event_bus_channel(channel_key: str) -> str:
"""Prefix a channel key with the execution event bus name."""
return f"{_settings.config.execution_event_bus_name}/{channel_key}"
def _notification_bus_channel(user_id: str) -> str:
"""Return the full sharded channel name for a user's notifications."""
return f"{_settings.config.notification_event_bus_name}/{user_id}"
MessageHandler = Callable[[Optional[bytes | str]], Awaitable[None]]
def _is_moved_error(exc: BaseException) -> bool:
"""A MOVED redirect — slot migration mid-stream; pump should reconnect."""
if isinstance(exc, MovedError):
return True
if isinstance(exc, ResponseError) and str(exc).startswith("MOVED "):
return True
return False
# Reconnect tunables for shard-failover during pubsub.listen().
_PUMP_RECONNECT_DEADLINE_S = 60.0
_PUMP_RECONNECT_BACKOFF_INITIAL_S = 0.5
_PUMP_RECONNECT_BACKOFF_MAX_S = 8.0
class _Subscription:
"""One SSUBSCRIBE lifecycle bound to a WebSocket, pinned to the owning shard."""
def __init__(self, full_channel: str) -> None:
_assert_no_wildcard(full_channel)
self.full_channel = full_channel
self._client: AsyncRedis | None = None
self._pubsub: AsyncPubSub | None = None
self._task: asyncio.Task | None = None
async def start(self, on_message: MessageHandler) -> None:
await self._open_pubsub()
self._task = asyncio.create_task(self._pump(on_message))
async def _open_pubsub(self) -> None:
"""(Re)establish the sharded pubsub connection + SSUBSCRIBE."""
self._client = await redis.connect_sharded_pubsub_async(self.full_channel)
self._pubsub = self._client.pubsub()
await self._pubsub.execute_command("SSUBSCRIBE", self.full_channel)
# redis-py 6.x async PubSub.listen() exits when ``channels`` is
# empty; raw SSUBSCRIBE doesn't populate it, so do it ourselves.
self._pubsub.channels[self.full_channel] = None # type: ignore[index]
async def _close_pubsub_quietly(self) -> None:
"""Best-effort teardown before reconnect — never raises."""
if self._pubsub is not None:
try:
await self._pubsub.aclose()
except Exception:
pass
self._pubsub = None
if self._client is not None:
try:
await self._client.aclose()
except Exception:
pass
self._client = None
async def _pump(self, on_message: MessageHandler) -> None:
if self._pubsub is None:
return
backoff = _PUMP_RECONNECT_BACKOFF_INITIAL_S
deadline = time.monotonic() + _PUMP_RECONNECT_DEADLINE_S
while True:
pubsub = self._pubsub
if pubsub is None:
return
needs_reconnect = False
try:
async for message in pubsub.listen():
msg_type = message.get("type")
# Server-pushed sunsubscribe: slot ownership changed and
# Redis revoked our SSUBSCRIBE without dropping the TCP.
# Treat as a reconnect trigger so we re-resolve the shard.
if msg_type == "sunsubscribe":
needs_reconnect = True
break
if msg_type not in ("smessage", "message", "pmessage"):
continue
# Successful read resets the reconnect budget.
backoff = _PUMP_RECONNECT_BACKOFF_INITIAL_S
deadline = time.monotonic() + _PUMP_RECONNECT_DEADLINE_S
try:
await on_message(message.get("data"))
except Exception:
logger.exception(
"Websocket message-handler failed for channel %s",
self.full_channel,
)
if not needs_reconnect:
# listen() exited cleanly (channels emptied) — pump is done.
return
except asyncio.CancelledError:
raise
except (ConnectionError, RedisError) as exc:
if isinstance(exc, ResponseError) and not _is_moved_error(exc):
logger.exception(
"Pubsub pump crashed on non-retryable ResponseError for %s",
self.full_channel,
)
return
if time.monotonic() > deadline:
logger.exception(
"Pubsub pump giving up after reconnect deadline for %s",
self.full_channel,
)
return
logger.warning(
"Pubsub pump reconnecting for %s after %s: %s",
self.full_channel,
type(exc).__name__,
exc,
)
except Exception:
logger.exception("Pubsub pump crashed for %s", self.full_channel)
return
# Either a retryable error was raised, or the server pushed a
# sunsubscribe — close the stale pubsub and reopen against the
# (possibly migrated) shard.
await self._close_pubsub_quietly()
await asyncio.sleep(backoff)
backoff = min(backoff * 2, _PUMP_RECONNECT_BACKOFF_MAX_S)
try:
await self._open_pubsub()
except (ConnectionError, RedisError) as reopen_exc:
logger.warning(
"Pubsub pump reopen failed for %s: %s",
self.full_channel,
reopen_exc,
)
# Loop again — deadline check will eventually exit.
continue
async def stop(self) -> None:
if self._task is not None:
self._task.cancel()
try:
await self._task
except (asyncio.CancelledError, Exception):
pass
self._task = None
if self._pubsub is not None:
try:
await self._pubsub.execute_command("SUNSUBSCRIBE", self.full_channel)
except Exception:
logger.warning(
"SUNSUBSCRIBE failed for %s", self.full_channel, exc_info=True
)
try:
await self._pubsub.aclose()
except Exception:
pass
self._pubsub = None
if self._client is not None:
try:
await self._client.aclose()
except Exception:
pass
self._client = None
class ConnectionManager:
def __init__(self):
self.active_connections: Set[WebSocket] = set()
# channel_key → sockets subscribed (public channel keys, not raw Redis channels)
self.subscriptions: Dict[str, Set[WebSocket]] = {}
self.user_connections: Dict[str, Set[WebSocket]] = {}
# websocket → {channel_key: _Subscription}
self._ws_subs: Dict[WebSocket, Dict[str, _Subscription]] = {}
# websocket → notification subscription
self._ws_notifications: Dict[WebSocket, _Subscription] = {}
async def connect_socket(self, websocket: WebSocket, *, user_id: str):
await websocket.accept()
self.active_connections.add(websocket)
if user_id not in self.user_connections:
self.user_connections[user_id] = set()
self.user_connections[user_id].add(websocket)
self._ws_subs.setdefault(websocket, {})
await self._start_notification_subscription(websocket, user_id=user_id)
def disconnect_socket(self, websocket: WebSocket, *, user_id: str):
async def disconnect_socket(self, websocket: WebSocket, *, user_id: str):
self.active_connections.discard(websocket)
for subscribers in self.subscriptions.values():
# Stop SSUBSCRIBE pumps before dropping bookkeeping to avoid leaks.
subs = self._ws_subs.pop(websocket, {})
for sub in subs.values():
await sub.stop()
notif_sub = self._ws_notifications.pop(websocket, None)
if notif_sub is not None:
await notif_sub.stop()
for channel_key, subscribers in list(self.subscriptions.items()):
subscribers.discard(websocket)
user_conns = self.user_connections.get(user_id)
if user_conns is not None:
user_conns.discard(websocket)
if not user_conns:
self.user_connections.pop(user_id, None)
if not subscribers:
self.subscriptions.pop(channel_key, None)
async def subscribe_graph_exec(
self, *, user_id: str, graph_exec_id: str, websocket: WebSocket
) -> str:
return await self._subscribe(
_graph_exec_channel_key(user_id, graph_exec_id=graph_exec_id), websocket
# Hash-tagged channel needs graph_id; resolve once per subscribe.
meta = await get_graph_execution_meta(user_id, graph_exec_id)
if meta is None:
raise ValueError(
f"graph_exec #{graph_exec_id} not found for user #{user_id}"
)
channel_key = graph_exec_channel_key(user_id, graph_exec_id=graph_exec_id)
full_channel = event_bus_channel(
exec_channel(user_id, meta.graph_id, graph_exec_id)
)
await self._open_subscription(websocket, channel_key, full_channel)
return channel_key
async def subscribe_graph_execs(
self, *, user_id: str, graph_id: str, websocket: WebSocket
) -> str:
return await self._subscribe(
_graph_execs_channel_key(user_id, graph_id=graph_id), websocket
)
channel_key = _graph_execs_channel_key(user_id, graph_id=graph_id)
full_channel = event_bus_channel(graph_all_channel(user_id, graph_id))
await self._open_subscription(websocket, channel_key, full_channel)
return channel_key
async def unsubscribe_graph_exec(
self, *, user_id: str, graph_exec_id: str, websocket: WebSocket
) -> str | None:
return await self._unsubscribe(
_graph_exec_channel_key(user_id, graph_exec_id=graph_exec_id), websocket
)
channel_key = graph_exec_channel_key(user_id, graph_exec_id=graph_exec_id)
return await self._close_subscription(websocket, channel_key)
async def unsubscribe_graph_execs(
self, *, user_id: str, graph_id: str, websocket: WebSocket
) -> str | None:
return await self._unsubscribe(
_graph_execs_channel_key(user_id, graph_id=graph_id), websocket
)
channel_key = _graph_execs_channel_key(user_id, graph_id=graph_id)
return await self._close_subscription(websocket, channel_key)
async def send_execution_update(
self, exec_event: GraphExecutionEvent | NodeExecutionEvent
) -> int:
graph_exec_id = (
exec_event.id
if isinstance(exec_event, GraphExecutionEvent)
else exec_event.graph_exec_id
)
n_sent = 0
channels: set[str] = {
# Send update to listeners for this graph execution
_graph_exec_channel_key(exec_event.user_id, graph_exec_id=graph_exec_id)
}
if isinstance(exec_event, GraphExecutionEvent):
# Send update to listeners for all executions of this graph
channels.add(
_graph_execs_channel_key(
exec_event.user_id, graph_id=exec_event.graph_id
)
)
for channel in channels.intersection(self.subscriptions.keys()):
message = WSMessage(
method=_EVENT_TYPE_TO_METHOD_MAP[exec_event.event_type],
channel=channel,
data=exec_event.model_dump(),
).model_dump_json()
for connection in self.subscriptions[channel]:
await connection.send_text(message)
n_sent += 1
return n_sent
async def _open_subscription(
self, websocket: WebSocket, channel_key: str, full_channel: str
) -> None:
self.subscriptions.setdefault(channel_key, set()).add(websocket)
per_ws = self._ws_subs.setdefault(websocket, {})
if channel_key in per_ws:
return
sub = _Subscription(full_channel)
async def on_message(data: Optional[bytes | str]) -> None:
await self._forward_exec_event(websocket, channel_key, data)
await sub.start(on_message)
per_ws[channel_key] = sub
async def send_notification(
self, *, user_id: str, payload: NotificationPayload
) -> int:
"""Send a notification to all websocket connections belonging to a user."""
message = WSMessage(
method=WSMethod.NOTIFICATION,
data=payload.model_dump(),
).model_dump_json()
connections = tuple(self.user_connections.get(user_id, set()))
if not connections:
return 0
await asyncio.gather(
*(connection.send_text(message) for connection in connections),
return_exceptions=True,
)
return len(connections)
async def _subscribe(self, channel_key: str, websocket: WebSocket) -> str:
if channel_key not in self.subscriptions:
self.subscriptions[channel_key] = set()
self.subscriptions[channel_key].add(websocket)
async def _close_subscription(
self, websocket: WebSocket, channel_key: str
) -> str | None:
subscribers = self.subscriptions.get(channel_key)
if subscribers is None:
return None
subscribers.discard(websocket)
if not subscribers:
self.subscriptions.pop(channel_key, None)
per_ws = self._ws_subs.get(websocket)
if per_ws and channel_key in per_ws:
sub = per_ws.pop(channel_key)
await sub.stop()
return channel_key
async def _unsubscribe(self, channel_key: str, websocket: WebSocket) -> str | None:
if channel_key in self.subscriptions:
self.subscriptions[channel_key].discard(websocket)
if not self.subscriptions[channel_key]:
del self.subscriptions[channel_key]
return channel_key
return None
async def _forward_exec_event(
self,
websocket: WebSocket,
channel_key: str,
raw_payload: Optional[bytes | str],
) -> None:
if raw_payload is None:
return
# Unwrap the `_EventPayloadWrapper` envelope, then re-wrap as a WS message.
try:
wrapper = (
raw_payload.decode()
if isinstance(raw_payload, (bytes, bytearray))
else raw_payload
)
except Exception:
logger.warning(
"Failed to decode pubsub payload on %s", channel_key, exc_info=True
)
return
try:
parsed = json.loads(wrapper)
event_data = parsed.get("payload")
if not isinstance(event_data, dict):
return
event_type = event_data.get("event_type")
method = _EVENT_TYPE_TO_METHOD_MAP.get(ExecutionEventType(event_type))
if method is None:
return
message = WSMessage(
method=method,
channel=channel_key,
data=event_data,
).model_dump_json()
await websocket.send_text(message)
except Exception as e:
if _is_ws_close_race(e, websocket):
logger.debug("Dropped exec event on closed WS for %s", channel_key)
return
logger.exception("Failed to forward exec event on %s", channel_key)
async def _start_notification_subscription(
self, websocket: WebSocket, *, user_id: str
) -> None:
full_channel = _notification_bus_channel(user_id)
sub = _Subscription(full_channel)
async def on_message(data: Optional[bytes | str]) -> None:
await self._forward_notification(websocket, user_id, data)
try:
await sub.start(on_message)
except Exception:
logger.exception(
"Failed to open notification SSUBSCRIBE for user=%s", user_id
)
return
self._ws_notifications[websocket] = sub
async def _forward_notification(
self,
websocket: WebSocket,
user_id: str,
raw_payload: Optional[bytes | str],
) -> None:
if raw_payload is None:
return
try:
wrapper_json = (
raw_payload.decode()
if isinstance(raw_payload, (bytes, bytearray))
else raw_payload
)
parsed = json.loads(wrapper_json)
inner = parsed.get("payload") if isinstance(parsed, dict) else None
if not isinstance(inner, dict):
return
event = NotificationEvent.model_validate(inner)
except Exception:
logger.warning(
"Failed to parse notification payload for user=%s",
user_id,
exc_info=True,
)
return
# Defense in depth against cross-user payloads.
if event.user_id != user_id:
return
message = WSMessage(
method=WSMethod.NOTIFICATION,
data=event.payload.model_dump(),
).model_dump_json()
try:
await websocket.send_text(message)
except Exception as e:
if _is_ws_close_race(e, websocket):
logger.debug("Dropped notification on closed WS for user=%s", user_id)
return
logger.warning(
"Failed to deliver notification to WS for user=%s",
user_id,
exc_info=True,
)
def _graph_exec_channel_key(user_id: str, *, graph_exec_id: str) -> str:
def graph_exec_channel_key(user_id: str, *, graph_exec_id: str) -> str:
return f"{user_id}|graph_exec#{graph_exec_id}"
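To ground how the pieces above fit together, here is a minimal, hypothetical sketch of a FastAPI websocket route driving ConnectionManager. The route path, frame shape, and auth plumbing are assumptions for illustration only — the real route module is not part of this diff:

```
# Hypothetical wiring sketch; only the ConnectionManager import is taken from
# the code under review.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

from backend.api.conn_manager import ConnectionManager

app = FastAPI()
manager = ConnectionManager()  # one instance per API process

@app.websocket("/ws")
async def ws_endpoint(websocket: WebSocket, user_id: str):
    # connect_socket accepts the socket and opens the per-user notification
    # SSUBSCRIBE.
    await manager.connect_socket(websocket, user_id=user_id)
    try:
        while True:
            frame = await websocket.receive_json()
            if frame.get("method") == "subscribe_graph_execution":
                await manager.subscribe_graph_exec(
                    user_id=user_id,
                    graph_exec_id=frame["graph_exec_id"],
                    websocket=websocket,
                )
    except WebSocketDisconnect:
        pass
    finally:
        # Tears down every SSUBSCRIBE pump this socket owned.
        await manager.disconnect_socket(websocket, user_id=user_id)
```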

View File

@@ -0,0 +1,386 @@
"""ConnectionManager integration over the live 3-shard Redis cluster:
SSUBSCRIBE → SPUBLISH → WebSocket forwarding with no Redis mocks. Skips
when the cluster is unreachable."""
import asyncio
import json
from datetime import datetime, timezone
from unittest.mock import AsyncMock
from uuid import uuid4
import pytest
from fastapi import WebSocket
import backend.data.redis_client as redis_client
from backend.api.conn_manager import (
ConnectionManager,
_graph_execs_channel_key,
event_bus_channel,
graph_exec_channel_key,
)
from backend.api.model import WSMethod
from backend.data.execution import (
ExecutionStatus,
GraphExecutionEvent,
GraphExecutionMeta,
NodeExecutionEvent,
exec_channel,
graph_all_channel,
)
def _has_live_cluster() -> bool:
try:
c = redis_client.connect()
except Exception: # noqa: BLE001 — any connect failure → skip
return False
try:
c.close()
except Exception:
pass
return True
pytestmark = pytest.mark.skipif(
not _has_live_cluster(),
reason="local redis cluster not reachable; skip conn_manager integration",
)
def _meta(user_id: str, graph_id: str, graph_exec_id: str) -> GraphExecutionMeta:
"""Build a minimal GraphExecutionMeta for ``subscribe_graph_exec`` to use."""
return GraphExecutionMeta(
id=graph_exec_id,
user_id=user_id,
graph_id=graph_id,
graph_version=1,
inputs=None,
credential_inputs=None,
nodes_input_masks=None,
preset_id=None,
status=ExecutionStatus.RUNNING,
started_at=datetime.now(tz=timezone.utc),
ended_at=None,
stats=GraphExecutionMeta.Stats(),
)
def _node_event_payload(
*, user_id: str, graph_id: str, graph_exec_id: str, marker: str
) -> bytes:
"""Wire-format a NodeExecutionEvent the way RedisExecutionEventBus would."""
inner = NodeExecutionEvent(
user_id=user_id,
graph_id=graph_id,
graph_version=1,
graph_exec_id=graph_exec_id,
node_exec_id=f"node-exec-{marker}",
node_id="node-1",
block_id="block-1",
status=ExecutionStatus.COMPLETED,
input_data={"in": marker},
output_data={"out": [marker]},
add_time=datetime.now(tz=timezone.utc),
queue_time=None,
start_time=datetime.now(tz=timezone.utc),
end_time=datetime.now(tz=timezone.utc),
).model_dump(mode="json")
return json.dumps({"payload": inner}).encode()
def _graph_event_payload(
*, user_id: str, graph_id: str, graph_exec_id: str, marker: str
) -> bytes:
inner = GraphExecutionEvent(
id=graph_exec_id,
user_id=user_id,
graph_id=graph_id,
graph_version=1,
preset_id=None,
status=ExecutionStatus.COMPLETED,
started_at=datetime.now(tz=timezone.utc),
ended_at=datetime.now(tz=timezone.utc),
stats=GraphExecutionEvent.Stats(
cost=0,
duration=1.0,
node_exec_time=0.5,
node_exec_count=1,
),
inputs={"x": marker},
credential_inputs=None,
nodes_input_masks=None,
outputs={"y": [marker]},
).model_dump(mode="json")
return json.dumps({"payload": inner}).encode()
async def _wait_until(predicate, timeout: float = 5.0, interval: float = 0.05) -> bool:
"""Poll ``predicate()`` until truthy or timeout — used to wait for pubsub."""
deadline = asyncio.get_event_loop().time() + timeout
while asyncio.get_event_loop().time() < deadline:
if predicate():
return True
await asyncio.sleep(interval)
return False
@pytest.mark.asyncio
async def test_two_clients_get_independent_ssubscribes_on_right_shards(
monkeypatch,
) -> None:
"""Two WS clients on different graph_exec_ids each receive ONLY their
own publish, even when the channels land on different shards."""
user_id = "user-conn-int-1"
graph_a = f"graph-a-{uuid4().hex[:8]}"
graph_b = f"graph-b-{uuid4().hex[:8]}"
exec_a = f"exec-a-{uuid4().hex[:8]}"
exec_b = f"exec-b-{uuid4().hex[:8]}"
# Stub Prisma lookup so tests don't need a DB.
async def _fake_meta(_uid, gex_id):
return _meta(user_id, graph_a if gex_id == exec_a else graph_b, gex_id)
monkeypatch.setattr("backend.api.conn_manager.get_graph_execution_meta", _fake_meta)
cm = ConnectionManager()
ws_a: AsyncMock = AsyncMock(spec=WebSocket)
ws_b: AsyncMock = AsyncMock(spec=WebSocket)
sent_a: list[str] = []
sent_b: list[str] = []
ws_a.send_text = AsyncMock(side_effect=lambda m: sent_a.append(m))
ws_b.send_text = AsyncMock(side_effect=lambda m: sent_b.append(m))
redis_client.get_redis.cache_clear()
cluster = redis_client.get_redis()
try:
await cm.subscribe_graph_exec(
user_id=user_id, graph_exec_id=exec_a, websocket=ws_a
)
await cm.subscribe_graph_exec(
user_id=user_id, graph_exec_id=exec_b, websocket=ws_b
)
# Let SSUBSCRIBE settle on each shard.
await asyncio.sleep(0.2)
# Publish to each per-exec channel.
chan_a = event_bus_channel(exec_channel(user_id, graph_a, exec_a))
chan_b = event_bus_channel(exec_channel(user_id, graph_b, exec_b))
cluster.spublish(
chan_a,
_node_event_payload(
user_id=user_id,
graph_id=graph_a,
graph_exec_id=exec_a,
marker="A",
).decode(),
)
cluster.spublish(
chan_b,
_node_event_payload(
user_id=user_id,
graph_id=graph_b,
graph_exec_id=exec_b,
marker="B",
).decode(),
)
delivered = await _wait_until(lambda: sent_a and sent_b, timeout=5.0)
assert delivered, f"timeout: sent_a={sent_a!r} sent_b={sent_b!r}"
msg_a = json.loads(sent_a[0])
msg_b = json.loads(sent_b[0])
assert msg_a["channel"] == graph_exec_channel_key(user_id, graph_exec_id=exec_a)
assert msg_b["channel"] == graph_exec_channel_key(user_id, graph_exec_id=exec_b)
assert msg_a["data"]["graph_exec_id"] == exec_a
assert msg_b["data"]["graph_exec_id"] == exec_b
# No cross-talk: each socket got exactly one message.
assert len(sent_a) == 1 and len(sent_b) == 1
finally:
await cm.disconnect_socket(ws_a, user_id=user_id)
await cm.disconnect_socket(ws_b, user_id=user_id)
redis_client.disconnect()
@pytest.mark.asyncio
async def test_aggregate_channel_receives_per_exec_publishes(monkeypatch) -> None:
"""A subscriber on the ``graph_execs`` aggregate channel must receive the
GraphExecutionEvent published to the ``/all`` channel — even though
per-exec events go to a different channel."""
user_id = "user-conn-int-2"
graph_id = f"graph-{uuid4().hex[:8]}"
exec_id = f"exec-{uuid4().hex[:8]}"
async def _fake_meta(_uid, gex_id):
return _meta(user_id, graph_id, gex_id)
monkeypatch.setattr("backend.api.conn_manager.get_graph_execution_meta", _fake_meta)
cm = ConnectionManager()
ws_agg: AsyncMock = AsyncMock(spec=WebSocket)
ws_per: AsyncMock = AsyncMock(spec=WebSocket)
sent_agg: list[str] = []
sent_per: list[str] = []
ws_agg.send_text = AsyncMock(side_effect=lambda m: sent_agg.append(m))
ws_per.send_text = AsyncMock(side_effect=lambda m: sent_per.append(m))
redis_client.get_redis.cache_clear()
cluster = redis_client.get_redis()
try:
await cm.subscribe_graph_execs(
user_id=user_id, graph_id=graph_id, websocket=ws_agg
)
await cm.subscribe_graph_exec(
user_id=user_id, graph_exec_id=exec_id, websocket=ws_per
)
await asyncio.sleep(0.2)
# The eventbus publishes the same event to both channels — replicate.
chan_per = event_bus_channel(exec_channel(user_id, graph_id, exec_id))
chan_all = event_bus_channel(graph_all_channel(user_id, graph_id))
payload = _graph_event_payload(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=exec_id,
marker="agg",
).decode()
cluster.spublish(chan_per, payload)
cluster.spublish(chan_all, payload)
delivered = await _wait_until(lambda: sent_agg and sent_per, timeout=5.0)
assert delivered, f"sent_agg={sent_agg!r} sent_per={sent_per!r}"
agg_msg = json.loads(sent_agg[0])
per_msg = json.loads(sent_per[0])
# Aggregate subscriber's channel key is the per-graph executions key.
assert agg_msg["channel"] == _graph_execs_channel_key(
user_id, graph_id=graph_id
)
assert per_msg["channel"] == graph_exec_channel_key(
user_id, graph_exec_id=exec_id
)
assert agg_msg["method"] == WSMethod.GRAPH_EXECUTION_EVENT.value
finally:
await cm.disconnect_socket(ws_agg, user_id=user_id)
await cm.disconnect_socket(ws_per, user_id=user_id)
redis_client.disconnect()
@pytest.mark.asyncio
async def test_disconnect_unsubscribes_and_drops_future_publishes(monkeypatch) -> None:
"""After ``disconnect_socket`` runs, a subsequent SPUBLISH must NOT reach
the dead websocket — exercises the SUNSUBSCRIBE + bookkeeping cleanup."""
user_id = "user-conn-int-3"
graph_id = f"graph-{uuid4().hex[:8]}"
exec_id = f"exec-{uuid4().hex[:8]}"
async def _fake_meta(_uid, gex_id):
return _meta(user_id, graph_id, gex_id)
monkeypatch.setattr("backend.api.conn_manager.get_graph_execution_meta", _fake_meta)
cm = ConnectionManager()
ws: AsyncMock = AsyncMock(spec=WebSocket)
sent: list[str] = []
ws.send_text = AsyncMock(side_effect=lambda m: sent.append(m))
redis_client.get_redis.cache_clear()
cluster = redis_client.get_redis()
chan = event_bus_channel(exec_channel(user_id, graph_id, exec_id))
payload = _node_event_payload(
user_id=user_id, graph_id=graph_id, graph_exec_id=exec_id, marker="live"
).decode()
try:
await cm.subscribe_graph_exec(
user_id=user_id, graph_exec_id=exec_id, websocket=ws
)
await asyncio.sleep(0.15)
# First publish — must reach the socket.
cluster.spublish(chan, payload)
delivered = await _wait_until(lambda: bool(sent), timeout=5.0)
assert delivered
assert len(sent) == 1
# Disconnect → SUNSUBSCRIBE + bookkeeping cleared.
await cm.disconnect_socket(ws, user_id=user_id)
# Pump cancellation may drain in-flight messages; wait for it.
await asyncio.sleep(0.2)
# Channel bookkeeping must be gone.
assert (
graph_exec_channel_key(user_id, graph_exec_id=exec_id)
not in cm.subscriptions
)
assert ws not in cm._ws_subs
# Second publish — must NOT reach the (already-disconnected) socket.
cluster.spublish(
chan,
_node_event_payload(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=exec_id,
marker="post-disconnect",
).decode(),
)
await asyncio.sleep(0.5)
# Still only the one pre-disconnect message.
assert len(sent) == 1
finally:
redis_client.disconnect()
@pytest.mark.asyncio
async def test_slow_consumer_receives_all_events_without_loss(monkeypatch) -> None:
"""Burst-publish many SPUBLISHes; assert every one reaches the subscriber
in order — guards against drops/reorderings in the pubsub pump."""
user_id = "user-conn-int-4"
graph_id = f"graph-{uuid4().hex[:8]}"
exec_id = f"exec-{uuid4().hex[:8]}"
n_events = 100
async def _fake_meta(_uid, gex_id):
return _meta(user_id, graph_id, gex_id)
monkeypatch.setattr("backend.api.conn_manager.get_graph_execution_meta", _fake_meta)
cm = ConnectionManager()
ws: AsyncMock = AsyncMock(spec=WebSocket)
sent: list[str] = []
ws.send_text = AsyncMock(side_effect=lambda m: sent.append(m))
redis_client.get_redis.cache_clear()
cluster = redis_client.get_redis()
chan = event_bus_channel(exec_channel(user_id, graph_id, exec_id))
try:
await cm.subscribe_graph_exec(
user_id=user_id, graph_exec_id=exec_id, websocket=ws
)
await asyncio.sleep(0.2)
# Burst-publish n_events without yielding to the pump.
for i in range(n_events):
cluster.spublish(
chan,
_node_event_payload(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=exec_id,
marker=f"m{i}",
).decode(),
)
delivered = await _wait_until(
lambda: len(sent) >= n_events, timeout=15.0, interval=0.1
)
assert delivered, f"only delivered {len(sent)}/{n_events}"
# Validate ordering — Redis pub/sub is FIFO per channel.
markers = [json.loads(m)["data"]["input_data"]["in"] for m in sent[:n_events]]
assert markers == [f"m{i}" for i in range(n_events)]
finally:
await cm.disconnect_socket(ws, user_id=user_id)
redis_client.disconnect()

File diff suppressed because it is too large

View File

@@ -57,7 +57,7 @@ def _patch_rate_limit_deps(
mocker.patch(
f"{_MOCK_MODULE}.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(2_500_000, 12_500_000, SubscriptionTier.FREE),
return_value=(2_500_000, 12_500_000, SubscriptionTier.BASIC),
)
mocker.patch(
f"{_MOCK_MODULE}.get_usage_status",
@@ -89,7 +89,7 @@ def test_get_rate_limit(
assert data["weekly_cost_limit_microdollars"] == 12_500_000
assert data["daily_cost_used_microdollars"] == 500_000
assert data["weekly_cost_used_microdollars"] == 3_000_000
assert data["tier"] == "FREE"
assert data["tier"] == "BASIC"
configured_snapshot.assert_match(
json.dumps(data, indent=2, sort_keys=True) + "\n",
@@ -163,7 +163,7 @@ def test_reset_user_usage_daily_only(
assert data["daily_cost_used_microdollars"] == 0
# Weekly is untouched
assert data["weekly_cost_used_microdollars"] == 3_000_000
assert data["tier"] == "FREE"
assert data["tier"] == "BASIC"
mock_reset.assert_awaited_once_with(target_user_id, reset_weekly=False)
@@ -194,7 +194,7 @@ def test_reset_user_usage_daily_and_weekly(
data = response.json()
assert data["daily_cost_used_microdollars"] == 0
assert data["weekly_cost_used_microdollars"] == 0
assert data["tier"] == "FREE"
assert data["tier"] == "BASIC"
mock_reset.assert_awaited_once_with(target_user_id, reset_weekly=True)
@@ -231,7 +231,7 @@ def test_get_rate_limit_email_lookup_failure(
mocker.patch(
f"{_MOCK_MODULE}.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(2_500_000, 12_500_000, SubscriptionTier.FREE),
return_value=(2_500_000, 12_500_000, SubscriptionTier.BASIC),
)
mocker.patch(
f"{_MOCK_MODULE}.get_usage_status",
@@ -324,7 +324,7 @@ def test_set_user_tier(
mocker.patch(
f"{_MOCK_MODULE}.get_user_tier",
new_callable=AsyncMock,
return_value=SubscriptionTier.FREE,
return_value=SubscriptionTier.BASIC,
)
mock_set = mocker.patch(
f"{_MOCK_MODULE}.set_user_tier",
@@ -347,7 +347,7 @@ def test_set_user_tier_downgrade(
mocker: pytest_mock.MockerFixture,
target_user_id: str,
) -> None:
"""Test downgrading a user's tier from PRO to FREE."""
"""Test downgrading a user's tier from PRO to BASIC."""
mocker.patch(
f"{_MOCK_MODULE}.get_user_email_by_id",
new_callable=AsyncMock,
@@ -365,14 +365,14 @@ def test_set_user_tier_downgrade(
response = client.post(
"/admin/rate_limit/tier",
json={"user_id": target_user_id, "tier": "FREE"},
json={"user_id": target_user_id, "tier": "BASIC"},
)
assert response.status_code == 200
data = response.json()
assert data["user_id"] == target_user_id
assert data["tier"] == "FREE"
mock_set.assert_awaited_once_with(target_user_id, SubscriptionTier.FREE)
assert data["tier"] == "BASIC"
mock_set.assert_awaited_once_with(target_user_id, SubscriptionTier.BASIC)
def test_set_user_tier_invalid_tier(
@@ -456,7 +456,7 @@ def test_set_user_tier_db_failure(
mocker.patch(
f"{_MOCK_MODULE}.get_user_tier",
new_callable=AsyncMock,
return_value=SubscriptionTier.FREE,
return_value=SubscriptionTier.BASIC,
)
mocker.patch(
f"{_MOCK_MODULE}.set_user_tier",

View File

@@ -8,7 +8,7 @@ from uuid import uuid4
from autogpt_libs import auth
from fastapi import APIRouter, HTTPException, Query, Response, Security
from fastapi.responses import JSONResponse, StreamingResponse
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, ConfigDict, Field, field_validator
from backend.copilot import service as chat_service
@@ -47,7 +47,14 @@ from backend.copilot.rate_limit import (
release_reset_lock,
reset_daily_usage,
)
from backend.copilot.response_model import StreamError, StreamFinish, StreamHeartbeat
from backend.copilot.response_model import (
StreamError,
StreamFinish,
StreamFinishStep,
StreamHeartbeat,
StreamStart,
StreamStartStep,
)
from backend.copilot.service import strip_injected_context_for_display
from backend.copilot.tools.e2b_sandbox import kill_sandbox
from backend.copilot.tools.models import (
@@ -75,6 +82,7 @@ from backend.copilot.tools.models import (
NoResultsResponse,
SetupRequirementsResponse,
SuggestedGoalResponse,
TodoWriteResponse,
UnderstandingUpdatedResponse,
)
from backend.copilot.tracking import track_user_message
@@ -153,6 +161,14 @@ class StreamChatRequest(BaseModel):
)
class QueuePendingMessageRequest(BaseModel):
"""Request model for queueing a follow-up while a turn is running."""
message: str = Field(max_length=64_000)
context: dict[str, str] | None = None
file_ids: list[str] | None = Field(default=None, max_length=20)
class PeekPendingMessagesResponse(BaseModel):
"""Response for the pending-message peek (GET) endpoint.
@@ -208,6 +224,11 @@ class ActiveStreamInfo(BaseModel):
turn_id: str
last_message_id: str # Redis Stream message ID for resumption
# ISO-8601 timestamp (UTC) marking when the backend registered the turn
# as running. Lets the frontend seed its elapsed-time counter so restored
# turns show honest "time since turn started" instead of the misleading
# "time since this mount resumed the SSE".
started_at: str | None = None
class SessionDetailResponse(BaseModel):
@@ -299,8 +320,11 @@ async def list_sessions(
redis = await get_redis_async()
pipe = redis.pipeline(transaction=False)
for session in sessions:
# Use the canonical helper so the hash-tag braces match every
# other writer; building the key inline drops the braces and
# silently misses every running session in cluster mode.
pipe.hget(
f"{config.session_meta_prefix}{session.session_id}",
stream_registry.get_session_meta_key(session.session_id),
"status",
)
statuses = await pipe.execute()
@@ -528,6 +552,7 @@ async def get_session(
active_stream_info = ActiveStreamInfo(
turn_id=active_session.turn_id,
last_message_id=last_message_id,
started_at=active_session.created_at.isoformat(),
)
# Skip session metadata on "load more" — frontend only needs messages
@@ -815,17 +840,45 @@ async def cancel_session_task(
return CancelSessionResponse(cancelled=True)
def _ui_message_stream_headers() -> dict[str, str]:
return {
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
}
def _empty_ui_message_stream_response() -> StreamingResponse:
# Stable placeholder messageId for the empty queued-mid-turn stream.
# Real turns generate per-message UUIDs via the executor; this stream
# has no message to attach to, but the AI SDK parser still requires a
# non-empty ``messageId`` field on ``StreamStart``.
message_id = uuid4().hex
async def event_generator() -> AsyncGenerator[str, None]:
# Vercel AI SDK's UI-message-stream parser expects symmetric
# start/finish framing at both stream and step level — every
# non-empty turn emits the pair. Without an opener, today's parser
# tolerates the closer (no active parts to flush) but a future SDK
# tightening would silently break the queue-mid-turn UX. Emit the
# full empty pair so the contract stays correct.
yield StreamStart(messageId=message_id).to_sse()
yield StreamStartStep().to_sse()
yield StreamFinishStep().to_sse()
yield StreamFinish().to_sse()
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers=_ui_message_stream_headers(),
)
@router.post(
"/sessions/{session_id}/stream",
responses={
202: {
"model": QueuePendingMessageResponse,
"description": (
"Session has a turn in flight — message queued into the pending "
"buffer and will be picked up between tool-call rounds by the "
"executor currently processing the turn."
),
},
404: {"description": "Session not found or access denied"},
429: {"description": "Cost rate-limit or call-frequency cap exceeded"},
},
@@ -835,19 +888,18 @@ async def stream_chat_post(
request: StreamChatRequest,
user_id: str = Security(auth.get_user_id),
):
"""Start a new turn OR queue a follow-up — decided server-side.
"""Start a new turn and return an AI SDK UI message stream.
- **Session idle**: starts a turn. Returns an SSE stream (``text/event-stream``)
with Vercel AI SDK chunks (text fragments, tool-call UI, tool results).
The generation runs in a background task that survives client disconnects;
reconnect via ``GET /sessions/{session_id}/stream`` to resume.
Returns an SSE stream (``text/event-stream``) with Vercel AI SDK chunks
(text fragments, tool-call UI, tool results). The generation runs in a
background task that survives client disconnects; reconnect via
``GET /sessions/{session_id}/stream`` to resume.
- **Session has a turn in flight**: pushes the message into the per-session
pending buffer and returns ``202 application/json`` with
``QueuePendingMessageResponse``. The executor running the current turn
drains the buffer between tool-call rounds (baseline) or at the start of
the next turn (SDK). Clients should detect the 202 and surface the
message as a queued-chip in the UI.
Follow-up messages typed while a turn is already running should use
``POST /sessions/{session_id}/messages/pending``. If an older client still
posts that follow-up here, we queue it defensively but still return a valid
empty UI-message stream so AI SDK transports never receive a JSON body from
the stream endpoint.
Args:
session_id: The chat session identifier.
@@ -871,26 +923,29 @@ async def stream_chat_post(
extra={"json_fields": log_meta},
)
session = await _validate_and_get_session(session_id, user_id)
builder_permissions = resolve_session_permissions(session)
# Self-defensive queue-fallback: if a turn is already running, don't race
# it on the cluster lock — drop the message into the pending buffer and
# return 202 so the caller can render a chip. Both UI chips and autopilot
# block follow-ups route through this path; keeping the decision on the
# server means every caller gets uniform behaviour.
if (
request.is_user_message
and request.message
and await is_turn_in_flight(session_id)
):
response = await queue_pending_for_http(
session_id=session_id,
user_id=user_id,
message=request.message,
context=request.context,
file_ids=request.file_ids,
)
return JSONResponse(status_code=202, content=response.model_dump())
try:
await queue_pending_for_http(
session_id=session_id,
user_id=user_id,
message=request.message,
context=request.context,
file_ids=request.file_ids,
)
return _empty_ui_message_stream_response()
except HTTPException as exc:
if exc.status_code != 409:
raise
# Permission resolution is only needed below for the actual turn — keep
# it after the queue fall-through so a queued mid-turn request returns
# without doing that work.
builder_permissions = resolve_session_permissions(session)
logger.info(
f"[TIMING] session validated in {(time.perf_counter() - stream_start_time) * 1000:.1f}ms",
@@ -1129,12 +1184,37 @@ async def stream_chat_post(
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
"x-vercel-ai-ui-message-stream": "v1", # AI SDK protocol header
},
headers=_ui_message_stream_headers(),
)
@router.post(
"/sessions/{session_id}/messages/pending",
response_model=QueuePendingMessageResponse,
responses={
404: {"description": "Session not found or access denied"},
409: {"description": "Session has no active turn to receive pending messages"},
429: {"description": "Call-frequency cap exceeded"},
},
)
async def queue_pending_message(
session_id: str,
request: QueuePendingMessageRequest,
user_id: str = Security(auth.get_user_id),
):
"""Queue a follow-up message while the session has an active turn."""
await _validate_and_get_session(session_id, user_id)
if not await is_turn_in_flight(session_id):
raise HTTPException(
status_code=409,
detail="Session has no active turn. Start a new turn with POST /stream.",
)
return await queue_pending_for_http(
session_id=session_id,
user_id=user_id,
message=request.message,
context=request.context,
file_ids=request.file_ids,
)
@@ -1168,6 +1248,7 @@ async def get_pending_messages(
)
async def resume_session_stream(
session_id: str,
last_chunk_id: str | None = Query(default=None, include_in_schema=False),
user_id: str = Security(auth.get_user_id),
):
"""
@@ -1177,27 +1258,26 @@ async def resume_session_stream(
Checks for an active (in-progress) task on the session and either replays
the full SSE stream or returns 204 No Content if nothing is running.
Args:
session_id: The chat session identifier.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse (SSE) when an active stream exists,
or 204 No Content when there is nothing to resume.
Always replays the active turn from ``0-0``. The AI SDK UI-message parser
keeps text/reasoning part state inside a single parser instance; resuming
from a Redis cursor can skip the ``*-start`` events required by later
``*-delta`` chunks.
"""
import asyncio
active_session, last_message_id = await stream_registry.get_active_session(
active_session, _latest_backend_id = await stream_registry.get_active_session(
session_id, user_id
)
if not active_session:
return Response(status_code=204)
# Always replay from the beginning ("0-0") on resume.
# We can't use last_message_id because it's the latest ID in the backend
# stream, not the latest the frontend received — the gap causes lost
# messages. The frontend deduplicates replayed content.
if last_chunk_id:
logger.info(
"Ignoring deprecated last_chunk_id on stream resume",
extra={"session_id": session_id, "last_chunk_id": last_chunk_id},
)
subscriber_queue = await stream_registry.subscribe_to_session(
session_id=session_id,
user_id=user_id,
@@ -1258,12 +1338,7 @@ async def resume_session_stream(
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
headers=_ui_message_stream_headers(),
)
@@ -1419,6 +1494,7 @@ ToolResponseUnion = (
| MemorySearchResponse
| MemoryForgetCandidatesResponse
| MemoryForgetConfirmResponse
| TodoWriteResponse
)
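A rough client-side sketch of the contract this change establishes (base URL, auth header, and session id are placeholders, not taken from the diff): follow-ups go to the pending endpoint while a turn is running, and a 409 means the turn has ended and a normal /stream turn should be started instead.

```
# Hypothetical client sketch; the paths mirror the routes above, everything
# else is a placeholder.
import httpx

BASE = "https://api.example.test/chat"          # placeholder mount point
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder auth

def send_follow_up(session_id: str, message: str) -> None:
    resp = httpx.post(
        f"{BASE}/sessions/{session_id}/messages/pending",
        json={"message": message},
        headers=HEADERS,
    )
    if resp.status_code == 200:
        return  # queued; the running turn drains it between tool-call rounds
    if resp.status_code == 409:
        # No active turn any more — start a fresh turn; the response is an
        # SSE stream, consumed by whatever drives the UI-message parser.
        with httpx.stream(
            "POST",
            f"{BASE}/sessions/{session_id}/stream",
            json={"message": message, "is_user_message": True},
            headers=HEADERS,
        ) as stream:
            for _line in stream.iter_lines():
                pass  # hand off to the stream parser in a real client
    else:
        resp.raise_for_status()
```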

View File

@@ -157,6 +157,11 @@ def _mock_stream_internals(mocker: pytest_mock.MockerFixture):
"backend.api.features.chat.routes._validate_and_get_session",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.is_turn_in_flight",
new_callable=AsyncMock,
return_value=False,
)
mock_save = mocker.patch(
"backend.api.features.chat.routes.append_and_save_message",
return_value=MagicMock(), # non-None = message was saved (not a duplicate)
@@ -380,7 +385,7 @@ def _mock_usage(
weekly_used: int = 2000,
daily_limit: int = 10000,
weekly_limit: int = 50000,
tier: "SubscriptionTier" = SubscriptionTier.FREE,
tier: "SubscriptionTier" = SubscriptionTier.BASIC,
) -> AsyncMock:
"""Mock get_usage_status and get_global_rate_limits for usage endpoint tests.
@@ -440,7 +445,7 @@ def test_usage_returns_daily_and_weekly(
daily_cost_limit=10000,
weekly_cost_limit=50000,
rate_limit_reset_cost=chat_routes.config.rate_limit_reset_cost,
tier=SubscriptionTier.FREE,
tier=SubscriptionTier.BASIC,
)
@@ -461,7 +466,7 @@ def test_usage_uses_config_limits(
daily_cost_limit=99999,
weekly_cost_limit=77777,
rate_limit_reset_cost=500,
tier=SubscriptionTier.FREE,
tier=SubscriptionTier.BASIC,
)
@@ -637,7 +642,7 @@ class TestStreamChatRequestModeValidation:
assert req.mode is None
# ─── POST /stream queue-fallback (when a turn is already in flight) ──
# ─── Pending message queue (when a turn is already in flight) ─────────
def _mock_stream_queue_internals(
@@ -646,11 +651,9 @@ def _mock_stream_queue_internals(
session_exists: bool = True,
turn_in_flight: bool = True,
call_count: int = 1,
push_length: int | None = 1,
):
"""Mock dependencies for the POST /stream queue-fallback path.
When ``turn_in_flight`` is True the handler takes the 202 queue branch.
"""
"""Mock dependencies for the pending-message queue path."""
if session_exists:
mock_session = mocker.MagicMock()
mock_session.id = "sess-1"
@@ -692,12 +695,10 @@ def _mock_stream_queue_internals(
return_value=call_count,
)
mocker.patch(
"backend.copilot.pending_message_helpers.push_pending_message",
"backend.copilot.pending_message_helpers.push_pending_message_if_session_running",
new_callable=AsyncMock,
return_value=1,
return_value=push_length,
)
# queue_user_message re-runs is_turn_in_flight via the helper module —
# stub that path out too so we don't need a fake stream_registry.
mocker.patch(
"backend.copilot.pending_message_helpers.get_active_session_meta",
new_callable=AsyncMock,
@@ -705,37 +706,65 @@ def _mock_stream_queue_internals(
)
def test_stream_queue_returns_202_when_turn_in_flight(
def test_queue_pending_message_returns_200_when_turn_in_flight(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Happy path: POST /stream to a session with a live turn → 202 queue."""
"""Happy path: POST /messages/pending to a live turn queues the message."""
_mock_stream_queue_internals(mocker)
response = client.post(
"/sessions/sess-1/stream",
json={"message": "follow-up", "is_user_message": True},
"/sessions/sess-1/messages/pending",
json={"message": "follow-up"},
)
assert response.status_code == 202
assert response.status_code == 200
data = response.json()
assert data["buffer_length"] == 1
assert "turn_in_flight" in data
def test_stream_queue_session_not_found_returns_404(
def test_queue_pending_message_session_not_found_returns_404(
mocker: pytest_mock.MockerFixture,
) -> None:
"""If the session doesn't exist or belong to the user, returns 404."""
_mock_stream_queue_internals(mocker, session_exists=False)
response = client.post(
"/sessions/bad-sess/stream",
json={"message": "hi", "is_user_message": True},
"/sessions/bad-sess/messages/pending",
json={"message": "hi"},
)
assert response.status_code == 404
def test_stream_queue_call_frequency_limit_returns_429(
def test_queue_pending_message_without_active_turn_returns_409(
mocker: pytest_mock.MockerFixture,
) -> None:
"""A pending-message push needs an active turn to consume it."""
_mock_stream_queue_internals(mocker, turn_in_flight=False)
response = client.post(
"/sessions/sess-1/messages/pending",
json={"message": "hi"},
)
assert response.status_code == 409
def test_queue_pending_message_race_after_active_check_returns_409(
mocker: pytest_mock.MockerFixture,
) -> None:
"""If the active turn ends before the atomic push, the message is not queued."""
_mock_stream_queue_internals(mocker, push_length=None)
response = client.post(
"/sessions/sess-1/messages/pending",
json={"message": "hi"},
)
assert response.status_code == 409
def test_queue_pending_message_call_frequency_limit_returns_429(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Per-user call-frequency cap rejects rapid-fire queued pushes."""
@@ -744,14 +773,14 @@ def test_stream_queue_call_frequency_limit_returns_429(
_mock_stream_queue_internals(mocker, call_count=PENDING_CALL_LIMIT + 1)
response = client.post(
"/sessions/sess-1/stream",
json={"message": "hi", "is_user_message": True},
"/sessions/sess-1/messages/pending",
json={"message": "hi"},
)
assert response.status_code == 429
assert "Too many queued message requests this minute" in response.json()["detail"]
def test_stream_queue_converts_context_dict_to_pending_context(
def test_queue_pending_message_converts_context_dict_to_pending_context(
mocker: pytest_mock.MockerFixture,
) -> None:
"""StreamChatRequest.context is a raw dict; must be coerced to the
@@ -768,15 +797,14 @@ def test_stream_queue_converts_context_dict_to_pending_context(
)
response = client.post(
"/sessions/sess-1/stream",
"/sessions/sess-1/messages/pending",
json={
"message": "hi",
"is_user_message": True,
"context": {"url": "https://example.test", "content": "body"},
},
)
assert response.status_code == 202
assert response.status_code == 200
queue_spy.assert_awaited_once()
kwargs = queue_spy.await_args.kwargs
from backend.copilot.pending_messages import PendingMessageContext
@@ -786,7 +814,7 @@ def test_stream_queue_converts_context_dict_to_pending_context(
assert kwargs["context"].content == "body"
def test_stream_queue_passes_none_context_when_omitted(
def test_queue_pending_message_passes_none_context_when_omitted(
mocker: pytest_mock.MockerFixture,
) -> None:
"""When request.context is omitted, the queue call receives context=None."""
@@ -802,15 +830,31 @@ def test_stream_queue_passes_none_context_when_omitted(
)
response = client.post(
"/sessions/sess-1/stream",
json={"message": "hi", "is_user_message": True},
"/sessions/sess-1/messages/pending",
json={"message": "hi"},
)
assert response.status_code == 202
assert response.status_code == 200
queue_spy.assert_awaited_once()
assert queue_spy.await_args.kwargs["context"] is None
def test_stream_chat_queues_legacy_inflight_post_but_returns_sse(
mocker: pytest_mock.MockerFixture,
) -> None:
"""POST /stream must not return JSON to an AI SDK transport."""
_mock_stream_queue_internals(mocker)
response = client.post(
"/sessions/sess-1/stream",
json={"message": "follow-up", "is_user_message": True},
)
assert response.status_code == 200
assert response.headers["content-type"].startswith("text/event-stream")
assert '"type":"finish"' in response.text
# ─── get_pending_messages (GET /sessions/{session_id}/messages/pending) ─────
@@ -996,7 +1040,7 @@ def test_stream_chat_accepts_exactly_max_length_message(
mocker.patch(
"backend.api.features.chat.routes.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(0, 0, SubscriptionTier.FREE),
return_value=(0, 0, SubscriptionTier.BASIC),
)
response = client.post(
@@ -1267,7 +1311,7 @@ def _mock_reset_internals(
enable_credit: bool = True,
daily_limit: int = 10_000,
weekly_limit: int = 50_000,
tier: "SubscriptionTier" = SubscriptionTier.FREE,
tier: "SubscriptionTier" = SubscriptionTier.BASIC,
daily_used: int = 10_001,
weekly_used: int = 1_000,
reset_count: int | None = 0,
@@ -1366,7 +1410,7 @@ def test_reset_usage_returns_400_when_no_daily_limit(
mocker.patch(
"backend.api.features.chat.routes.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(0, 50_000, SubscriptionTier.FREE),
return_value=(0, 50_000, SubscriptionTier.BASIC),
)
mocker.patch(
"backend.api.features.chat.routes.get_daily_reset_count",
@@ -1389,7 +1433,7 @@ def test_reset_usage_returns_503_when_redis_unavailable(
mocker.patch(
"backend.api.features.chat.routes.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(10_000, 50_000, SubscriptionTier.FREE),
return_value=(10_000, 50_000, SubscriptionTier.BASIC),
)
mocker.patch(
"backend.api.features.chat.routes.get_daily_reset_count",
@@ -1412,7 +1456,7 @@ def test_reset_usage_returns_429_when_max_resets_reached(
mocker.patch(
"backend.api.features.chat.routes.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(10_000, 50_000, SubscriptionTier.FREE),
return_value=(10_000, 50_000, SubscriptionTier.BASIC),
)
mocker.patch(
"backend.api.features.chat.routes.get_daily_reset_count",
@@ -1436,7 +1480,7 @@ def test_reset_usage_returns_429_when_lock_not_acquired(
mocker.patch(
"backend.api.features.chat.routes.get_global_rate_limits",
new_callable=AsyncMock,
return_value=(10_000, 50_000, SubscriptionTier.FREE),
return_value=(10_000, 50_000, SubscriptionTier.BASIC),
)
mocker.patch(
"backend.api.features.chat.routes.get_daily_reset_count",
@@ -1581,9 +1625,14 @@ def test_resume_session_stream_no_subscriber_queue(
mock_registry.subscribe_to_session = AsyncMock(return_value=None)
mocker.patch("backend.api.features.chat.routes.stream_registry", mock_registry)
response = client.get("/sessions/sess-1/stream")
response = client.get("/sessions/sess-1/stream?last_chunk_id=9999-9")
assert response.status_code == 204
mock_registry.subscribe_to_session.assert_awaited_once_with(
session_id="sess-1",
user_id=TEST_USER_ID,
last_message_id="0-0",
)
# ─── DELETE /sessions/{id}/stream — disconnect listeners ──────────────

View File

@@ -7,6 +7,7 @@ allowing frontend code generators like Orval to create corresponding TypeScript
from pydantic import BaseModel, Field
from backend.data.model import CredentialsType
from backend.integrations.providers import ProviderName
from backend.sdk.registry import AutoRegistry
@@ -47,6 +48,57 @@ class ProviderNamesResponse(BaseModel):
)
class ProviderMetadata(BaseModel):
"""Display metadata for a provider, shown in the settings integrations UI."""
name: str = Field(description="Provider slug (e.g. ``github``)")
description: str | None = Field(
default=None,
description=(
"One-line human-readable summary of what the provider does. "
"Declared via ``ProviderBuilder.with_description(...)`` in the "
"provider's ``_config.py``. ``None`` if not set."
),
)
supported_auth_types: list[CredentialsType] = Field(
default_factory=list,
description=(
"Credential types this provider accepts. Drives which connection "
"tabs the settings UI renders for the provider. Empty list means "
"no auth types declared."
),
)
def get_supported_auth_types(name: str) -> list[CredentialsType]:
"""Return the provider's supported credential types from :class:`AutoRegistry`.
Populated by :meth:`ProviderBuilder.with_supported_auth_types` (or by
``with_oauth`` / ``with_api_key`` / ``with_user_password`` when the provider
uses the full builder chain). Returns an empty list for providers with no
auth types declared.
"""
provider = AutoRegistry.get_provider(name)
if provider is None:
return []
return sorted(provider.supported_auth_types)
def get_provider_description(name: str) -> str | None:
"""Return the provider's description from :class:`AutoRegistry`.
Descriptions are declared via ``ProviderBuilder.with_description(...)`` in
the provider's ``_config.py`` (SDK path) or in
``blocks/_static_provider_configs.py`` (for providers that don't yet have
their own directory). Returns ``None`` for providers with no registered
description.
"""
provider = AutoRegistry.get_provider(name)
if provider is None:
return None
return provider.description
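As an illustration of how the two helpers and the model above are meant to compose (the endpoint that actually does this is not part of this hunk, so the function name is hypothetical):

```
# Hypothetical assembly helper, shown only to illustrate the intended use of
# get_provider_description / get_supported_auth_types with ProviderMetadata.
def build_provider_metadata(name: str) -> ProviderMetadata:
    return ProviderMetadata(
        name=name,
        description=get_provider_description(name),
        supported_auth_types=get_supported_auth_types(name),
    )
```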
class ProviderConstants(BaseModel):
"""
Model that exposes all provider names as a constant in the OpenAPI schema.

View File

@@ -14,7 +14,7 @@ from fastapi import (
Security,
status,
)
from pydantic import BaseModel, Field, SecretStr, model_validator
from pydantic import BaseModel, Field, model_validator
from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY
from backend.api.features.library.db import set_preset_webhook, update_preset
@@ -29,15 +29,14 @@ from backend.data.integrations import (
wait_for_webhook_event,
)
from backend.data.model import (
APIKeyCredentials,
Credentials,
CredentialsType,
HostScopedCredentials,
OAuth2Credentials,
UserIntegrations,
is_sdk_default,
)
from backend.data.onboarding import OnboardingStep, complete_onboarding_step
from backend.data.user import get_user_integrations
from backend.executor.utils import add_graph_execution
from backend.integrations.ayrshare import AyrshareClient, SocialPlatform
from backend.integrations.credentials_store import (
@@ -48,7 +47,14 @@ from backend.integrations.creds_manager import (
IntegrationCredentialsManager,
create_mcp_oauth_handler,
)
from backend.integrations.managed_credentials import ensure_managed_credentials
from backend.integrations.managed_credentials import (
ensure_managed_credential,
ensure_managed_credentials,
)
from backend.integrations.managed_providers.ayrshare import AyrshareManagedProvider
from backend.integrations.managed_providers.ayrshare import (
settings_available as ayrshare_settings_available,
)
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
@@ -60,7 +66,14 @@ from backend.util.exceptions import (
)
from backend.util.settings import Settings
from .models import ProviderConstants, ProviderNamesResponse, get_all_provider_names
from .models import (
ProviderConstants,
ProviderMetadata,
ProviderNamesResponse,
get_all_provider_names,
get_provider_description,
get_supported_auth_types,
)
if TYPE_CHECKING:
from backend.integrations.oauth import BaseOAuthHandler
@@ -87,14 +100,23 @@ async def login(
scopes: Annotated[
str, Query(title="Comma-separated list of authorization scopes")
] = "",
credential_id: Annotated[
str | None,
Query(title="ID of existing credential to upgrade scopes for"),
] = None,
) -> LoginResponse:
handler = _get_provider_oauth_handler(request, provider)
requested_scopes = scopes.split(",") if scopes else []
if credential_id:
requested_scopes = await _prepare_scope_upgrade(
user_id, provider, credential_id, requested_scopes
)
# Generate and store a secure random state token along with the scopes
state_token, code_challenge = await creds_manager.store.store_state_token(
user_id, provider, requested_scopes
user_id, provider, requested_scopes, credential_id=credential_id
)
login_url = handler.get_login_url(
requested_scopes, state_token, code_challenge=code_challenge
@@ -216,7 +238,9 @@ async def callback(
)
# TODO: Allow specifying `title` to set on `credentials`
await creds_manager.create(user_id, credentials)
credentials = await _merge_or_create_credential(
user_id, provider, credentials, valid_state.credential_id
)
logger.debug(
f"Successfully processed OAuth callback for user {user_id} "
@@ -226,13 +250,38 @@ async def callback(
return to_meta_response(credentials)
# Bound the first-time sweep so a slow upstream (e.g. Ayrshare) can't hang
# the credential-list endpoint. On timeout we still kick off a fire-and-
# forget sweep so provisioning eventually completes; the user just won't
# see the managed cred until the next refresh.
_MANAGED_PROVISION_TIMEOUT_S = 10.0
async def _ensure_managed_credentials_bounded(user_id: str) -> None:
try:
await asyncio.wait_for(
ensure_managed_credentials(user_id, creds_manager.store),
timeout=_MANAGED_PROVISION_TIMEOUT_S,
)
except asyncio.TimeoutError:
logger.warning(
"Managed credential sweep exceeded %.1fs for user=%s; "
"continuing without it — provisioning will complete in background",
_MANAGED_PROVISION_TIMEOUT_S,
user_id,
)
asyncio.create_task(ensure_managed_credentials(user_id, creds_manager.store))
@router.get("/credentials", summary="List Credentials")
async def list_credentials(
user_id: Annotated[str, Security(get_user_id)],
) -> list[CredentialsMetaResponse]:
# Fire-and-forget: provision missing managed credentials in the background.
# The credential appears on the next page load; listing is never blocked.
asyncio.create_task(ensure_managed_credentials(user_id, creds_manager.store))
# Block on provisioning so managed credentials appear on the first load
# instead of after a refresh, but with a timeout so a slow upstream
# can't hang the endpoint. `_provisioned_users` short-circuits on
# repeat calls.
await _ensure_managed_credentials_bounded(user_id)
credentials = await creds_manager.store.get_all_creds(user_id)
return [
@@ -247,7 +296,7 @@ async def list_credentials_by_provider(
],
user_id: Annotated[str, Security(get_user_id)],
) -> list[CredentialsMetaResponse]:
asyncio.create_task(ensure_managed_credentials(user_id, creds_manager.store))
await _ensure_managed_credentials_bounded(user_id)
credentials = await creds_manager.store.get_creds_by_provider(user_id, provider)
return [
@@ -281,6 +330,115 @@ async def get_credential(
return to_meta_response(credential)
class PickerTokenResponse(BaseModel):
"""Short-lived OAuth access token shipped to the browser for rendering a
provider-hosted picker UI (e.g. Google Drive Picker). Deliberately narrow:
only the fields the client needs to initialize the picker widget. Issued
from the user's own stored credential so ownership and scope gating are
enforced by the credential lookup."""
access_token: str = Field(
description="OAuth access token suitable for the picker SDK call."
)
access_token_expires_at: int | None = Field(
default=None,
description="Unix timestamp at which the access token expires, if known.",
)
# Allowlist of (provider, scopes) tuples that may mint picker tokens. Only
# Drive-picker-capable scopes qualify so a caller can't use this endpoint to
# extract a GitHub / other-provider OAuth token for unrelated purposes. If a
# future provider integrates a hosted picker that needs a raw access token,
# add its specific picker-relevant scopes here.
_PICKER_TOKEN_ALLOWED_SCOPES: dict[ProviderName, frozenset[str]] = {
ProviderName.GOOGLE: frozenset(
[
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.readonly",
"https://www.googleapis.com/auth/drive",
]
),
}
@router.post(
"/{provider}/credentials/{cred_id}/picker-token",
summary="Issue a short-lived access token for a provider-hosted picker",
operation_id="postV1GetPickerToken",
)
async def get_picker_token(
provider: Annotated[
ProviderName, Path(title="The provider that owns the credentials")
],
cred_id: Annotated[
str, Path(title="The ID of the OAuth2 credentials to mint a token from")
],
user_id: Annotated[str, Security(get_user_id)],
) -> PickerTokenResponse:
"""Return the raw access token for an OAuth2 credential so the frontend
can initialize a provider-hosted picker (e.g. Google Drive Picker).
`GET /{provider}/credentials/{cred_id}` deliberately strips secrets (see
`CredentialsMetaResponse` + `TestGetCredentialReturnsMetaOnly` in
`router_test.py`). That hardening broke the Drive picker, which needs the
raw access token to call `google.picker.Builder.setOAuthToken(...)`. This
endpoint carves a narrow, explicit hole: the caller must own the
credential, it must be OAuth2, and the endpoint returns only the access
token + its expiry — nothing else about the credential. SDK-default
credentials are excluded for the same reason as `get_credential`.
"""
if is_sdk_default(cred_id):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found"
)
credential = await creds_manager.get(user_id, cred_id)
if not credential:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found"
)
if not provider_matches(credential.provider, provider):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found"
)
if not isinstance(credential, OAuth2Credentials):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Picker tokens are only available for OAuth2 credentials",
)
if not credential.access_token:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Credential has no access token; reconnect the account",
)
# Gate on provider+scope: only credentials that actually grant access to
# a provider-hosted picker flow may mint a token through this endpoint.
# Prevents using this path to extract bearer tokens for unrelated OAuth
# integrations (e.g. GitHub) that happen to be stored under the same user.
allowed_scopes = _PICKER_TOKEN_ALLOWED_SCOPES.get(provider)
if not allowed_scopes:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=(f"Picker tokens are not available for provider '{provider.value}'"),
)
cred_scopes = set(credential.scopes or [])
if cred_scopes.isdisjoint(allowed_scopes):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=(
"Credential does not grant any scope eligible for the picker. "
"Reconnect with the appropriate scope."
),
)
return PickerTokenResponse(
access_token=credential.access_token.get_secret_value(),
access_token_expires_at=credential.access_token_expires_at,
)
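A worked example of the scope gate above, with illustrative credential scope sets (the allowlist values are the ones registered for Google in `_PICKER_TOKEN_ALLOWED_SCOPES`):

```
# Worked example of the isdisjoint() gate; credential scopes are illustrative.
allowed = {
    "https://www.googleapis.com/auth/drive.file",
    "https://www.googleapis.com/auth/drive.readonly",
    "https://www.googleapis.com/auth/drive",
}
drive_cred = {"openid", "https://www.googleapis.com/auth/drive.file"}
mail_only_cred = {"openid", "email"}

print(drive_cred.isdisjoint(allowed))      # False → picker token may be issued
print(mail_only_cred.isdisjoint(allowed))  # True  → 400, no eligible scope
```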
@router.post("/{provider}/credentials", status_code=201, summary="Create Credentials")
async def create_credentials(
user_id: Annotated[str, Security(get_user_id)],
@@ -574,6 +732,186 @@ async def _execute_webhook_preset_trigger(
# Continue processing - webhook should be resilient to individual failures
# -------------------- INCREMENTAL AUTH HELPERS -------------------- #
async def _prepare_scope_upgrade(
user_id: str,
provider: ProviderName,
credential_id: str,
requested_scopes: list[str],
) -> list[str]:
"""Validate an existing credential for scope upgrade and compute scopes.
For providers without native incremental auth (e.g. GitHub), returns the
union of existing + requested scopes. For providers that handle merging
server-side (e.g. Google with ``include_granted_scopes``), returns the
requested scopes unchanged.
Raises HTTPException on validation failure.
"""
# Platform-owned system credentials must never be upgraded — scope
# changes here would leak across every user that shares them.
if is_system_credential(credential_id):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="System credentials cannot be upgraded",
)
existing = await creds_manager.store.get_creds_by_id(user_id, credential_id)
if not existing:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Credential to upgrade not found",
)
if not isinstance(existing, OAuth2Credentials):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Only OAuth2 credentials can be upgraded",
)
if not provider_matches(existing.provider, provider.value):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Credential provider does not match the requested provider",
)
if existing.is_managed:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Managed credentials cannot be upgraded",
)
# Google handles scope merging via include_granted_scopes; others need
# the union of existing + new scopes in the login URL.
if provider != ProviderName.GOOGLE:
requested_scopes = list(set(requested_scopes) | set(existing.scopes))
return requested_scopes
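    # Worked example of the branch above, with hypothetical scopes: a non-Google
    # provider gets the union so the re-issued token keeps everything it already
    # had, while Google receives only the new scopes and merges grants
    # server-side via include_granted_scopes.
    existing_github_scopes = {"repo"}
    requested_github_scopes = ["read:org"]
    merged_for_login_url = list(set(requested_github_scopes) | existing_github_scopes)
    # -> contains both "repo" and "read:org" (order is unspecified)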
async def _merge_or_create_credential(
user_id: str,
provider: ProviderName,
credentials: OAuth2Credentials,
credential_id: str | None,
) -> OAuth2Credentials:
"""Either upgrade an existing credential or create a new one.
When *credential_id* is set (explicit upgrade), merges scopes and updates
the existing credential. Otherwise, checks for an implicit merge (same
provider + username) before falling back to creating a new credential.
"""
if credential_id:
return await _upgrade_existing_credential(user_id, credential_id, credentials)
# Implicit merge: check for existing credential with same provider+username.
# Skip managed/system credentials and require a non-None username on both
# sides so we never accidentally merge unrelated credentials.
if credentials.username is None:
await creds_manager.create(user_id, credentials)
return credentials
existing_creds = await creds_manager.store.get_creds_by_provider(user_id, provider)
matching = next(
(
c
for c in existing_creds
if isinstance(c, OAuth2Credentials)
and not c.is_managed
and not is_system_credential(c.id)
and c.username is not None
and c.username == credentials.username
),
None,
)
if matching:
# Only merge into the existing credential when the new token
# already covers every scope we're about to advertise on it.
# Without this guard we'd overwrite ``matching.access_token`` with
# a narrower token while storing a wider ``scopes`` list — the
# record would claim authorizations the token does not grant, and
    # blocks that rely on the lost scopes would fail with opaque 401/403s
# until the user hits re-auth. On a narrowing login, keep the
# two credentials separate instead.
if set(credentials.scopes).issuperset(set(matching.scopes)):
return await _upgrade_existing_credential(user_id, matching.id, credentials)
await creds_manager.create(user_id, credentials)
return credentials
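    # Small sketch of the superset guard above, with hypothetical scope sets: a
    # wider incoming token upgrades the stored credential, a narrower one is
    # kept as a separate credential instead of overwriting the stored token.
    stored_scopes = {"https://www.googleapis.com/auth/drive.file"}
    wider_login = {
        "https://www.googleapis.com/auth/drive.file",
        "https://www.googleapis.com/auth/gmail.readonly",
    }
    narrower_login = {"https://www.googleapis.com/auth/gmail.readonly"}
    assert wider_login.issuperset(stored_scopes)  # merge into the existing record
    assert not narrower_login.issuperset(stored_scopes)  # keep credentials separate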
async def _upgrade_existing_credential(
user_id: str,
existing_cred_id: str,
new_credentials: OAuth2Credentials,
) -> OAuth2Credentials:
"""Merge scopes from *new_credentials* into an existing credential."""
# Defense-in-depth: re-check system and provider invariants right before
# the write. The login-time check in `_prepare_scope_upgrade` can go stale
# by the time the callback runs, and the implicit-merge path bypasses
    # login-time validation entirely, so every write path must enforce these
# on its own.
if is_system_credential(existing_cred_id):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="System credentials cannot be upgraded",
)
existing = await creds_manager.store.get_creds_by_id(user_id, existing_cred_id)
if not existing or not isinstance(existing, OAuth2Credentials):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Credential to upgrade not found",
)
if existing.is_managed:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Managed credentials cannot be upgraded",
)
if not provider_matches(existing.provider, new_credentials.provider):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Credential provider does not match the requested provider",
)
if (
existing.username
and new_credentials.username
and existing.username != new_credentials.username
):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Username mismatch: authenticated as a different user",
)
# Operate on a copy so the caller's ``new_credentials`` object is not
# mutated out from under them. Every caller today immediately discards
# or replaces its reference, but the implicit-merge path in
# ``_merge_or_create_credential`` reads ``credentials.scopes`` before
    # calling into us — any future code that reads it after the call would
    # otherwise silently see the overwritten values.
merged = new_credentials.model_copy(deep=True)
merged.id = existing.id
merged.title = existing.title
merged.scopes = list(set(existing.scopes) | set(new_credentials.scopes))
merged.metadata = {
**(existing.metadata or {}),
**(new_credentials.metadata or {}),
}
# Preserve the existing refresh_token and username if the incremental
# response doesn't carry them. Providers like Google only return a
# refresh_token on first authorization — dropping it here would orphan
# the credential on the next access-token expiry, forcing the user to
# re-auth from scratch. Username is similarly sticky: if we've already
# resolved it for this credential, keep it rather than silently
# blanking it on an incremental upgrade.
if not merged.refresh_token and existing.refresh_token:
merged.refresh_token = existing.refresh_token
merged.refresh_token_expires_at = existing.refresh_token_expires_at
if not merged.username and existing.username:
merged.username = existing.username
await creds_manager.update(user_id, merged)
return merged
# --------------------------- UTILITIES ---------------------------- #
@@ -784,12 +1122,21 @@ def _get_provider_oauth_handler(
async def get_ayrshare_sso_url(
user_id: Annotated[str, Security(get_user_id)],
) -> AyrshareSSOResponse:
"""
Generate an SSO URL for Ayrshare social media integration.
"""Generate a JWT SSO URL so the user can link their social accounts.
Returns:
dict: Contains the SSO URL for Ayrshare integration
The per-user Ayrshare profile key is provisioned and persisted as a
standard ``is_managed=True`` credential by
:class:`~backend.integrations.managed_providers.ayrshare.AyrshareManagedProvider`.
This endpoint only signs a short-lived JWT pointing at the Ayrshare-
hosted social-linking page; all profile lifecycle logic lives with the
managed provider.
"""
if not ayrshare_settings_available():
raise HTTPException(
status_code=HTTP_500_INTERNAL_SERVER_ERROR,
detail="Ayrshare integration is not configured",
)
try:
client = AyrshareClient()
except MissingConfigError:
@@ -798,66 +1145,63 @@ async def get_ayrshare_sso_url(
detail="Ayrshare integration is not configured",
)
# Ayrshare profile key is stored in the credentials store
# It is generated when creating a new profile, if there is no profile key,
# we create a new profile and store the profile key in the credentials store
user_integrations: UserIntegrations = await get_user_integrations(user_id)
profile_key = user_integrations.managed_credentials.ayrshare_profile_key
if not profile_key:
logger.debug(f"Creating new Ayrshare profile for user {user_id}")
try:
profile = await client.create_profile(
title=f"User {user_id}", messaging_active=True
)
profile_key = profile.profileKey
await creds_manager.store.set_ayrshare_profile_key(user_id, profile_key)
except Exception as e:
logger.error(f"Error creating Ayrshare profile for user {user_id}: {e}")
raise HTTPException(
status_code=HTTP_502_BAD_GATEWAY,
detail="Failed to create Ayrshare profile",
)
else:
logger.debug(f"Using existing Ayrshare profile for user {user_id}")
profile_key_str = (
profile_key.get_secret_value()
if isinstance(profile_key, SecretStr)
else str(profile_key)
# On-demand provisioning: AyrshareManagedProvider opts out of the
# credentials sweep (profile quota is per-user subscription-bound). This
# endpoint is the only trigger that provisions a profile — one Ayrshare
    # profile per user who actually opens the connect flow, not one for
    # every authenticated user.
provisioned = await ensure_managed_credential(
user_id, creds_manager.store, AyrshareManagedProvider()
)
if not provisioned:
raise HTTPException(
status_code=HTTP_502_BAD_GATEWAY,
detail="Failed to provision Ayrshare profile",
)
ayrshare_creds = [
c
for c in await creds_manager.store.get_creds_by_provider(user_id, "ayrshare")
if c.is_managed and isinstance(c, APIKeyCredentials)
]
if not ayrshare_creds:
logger.error(
"Ayrshare credential provisioning did not produce a credential "
"for user %s",
user_id,
)
raise HTTPException(
status_code=HTTP_502_BAD_GATEWAY,
detail="Failed to provision Ayrshare profile",
)
profile_key_str = ayrshare_creds[0].api_key.get_secret_value()
private_key = settings.secrets.ayrshare_jwt_key
# Ayrshare JWT expiry is 2880 minutes (48 hours)
# Ayrshare JWT max lifetime is 2880 minutes (48 h).
max_expiry_minutes = 2880
try:
logger.debug(f"Generating Ayrshare JWT for user {user_id}")
jwt_response = await client.generate_jwt(
private_key=private_key,
profile_key=profile_key_str,
# `allowed_social` is the set of networks the Ayrshare-hosted
# social-linking page will *offer* the user to connect. Blocks
# exist for more platforms than are listed here; the list is
# deliberately narrower so the rollout can verify each network
# end-to-end before widening the user-visible surface. Keep
# in sync with tested platforms — extend as each is verified
# against the block + Ayrshare's network-specific quirks.
allowed_social=[
# NOTE: We are enabling platforms one at a time
# to speed up the development process
# SocialPlatform.FACEBOOK,
SocialPlatform.TWITTER,
SocialPlatform.LINKEDIN,
SocialPlatform.INSTAGRAM,
SocialPlatform.YOUTUBE,
# SocialPlatform.REDDIT,
# SocialPlatform.TELEGRAM,
# SocialPlatform.GOOGLE_MY_BUSINESS,
# SocialPlatform.PINTEREST,
SocialPlatform.TIKTOK,
# SocialPlatform.BLUESKY,
# SocialPlatform.SNAPCHAT,
# SocialPlatform.THREADS,
],
expires_in=max_expiry_minutes,
verify=True,
)
except Exception as e:
logger.error(f"Error generating Ayrshare JWT for user {user_id}: {e}")
except Exception as exc:
logger.error("Error generating Ayrshare JWT for user %s: %s", user_id, exc)
raise HTTPException(
status_code=HTTP_502_BAD_GATEWAY, detail="Failed to generate JWT"
)
@@ -867,20 +1211,37 @@ async def get_ayrshare_sso_url(
# === PROVIDER DISCOVERY ENDPOINTS ===
@router.get("/providers", response_model=List[str])
async def list_providers() -> List[str]:
@router.get("/providers", response_model=List[ProviderMetadata])
async def list_providers() -> List[ProviderMetadata]:
"""
Get a list of all available provider names.
Get metadata for every available provider.
Returns both statically defined providers (from ProviderName enum)
and dynamically registered providers (from SDK decorators).
Returns both statically defined providers (from ``ProviderName`` enum) and
dynamically registered providers (from SDK decorators). Each entry includes
a ``description`` declared via ``ProviderBuilder.with_description(...)`` in
the provider's ``_config.py``.
Note: The complete list of provider names is also available as a constant
in the generated TypeScript client via PROVIDER_NAMES.
"""
# Get all providers at runtime
# Ensure all block modules (and therefore every provider's _config.py) are
# imported before we read from AutoRegistry. Cached on first call.
try:
from backend.blocks import load_all_blocks
load_all_blocks()
except Exception as e:
logger.warning(f"Failed to load blocks for provider metadata: {e}")
all_providers = get_all_provider_names()
return all_providers
return [
ProviderMetadata(
name=name,
description=get_provider_description(name),
supported_auth_types=get_supported_auth_types(name),
)
for name in all_providers
]
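    # Illustrative single entry of the response above; the values are
    # hypothetical, and only the three fields come from ProviderMetadata as
    # constructed here.
    _example_provider_entry = {
        "name": "github",
        "description": "Connect GitHub repositories and issues",
        "supported_auth_types": ["oauth2", "api_key"],
    }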
@router.get("/providers/system", response_model=List[str])

View File

@@ -393,7 +393,7 @@ class TestEnsureManagedCredentials:
_PROVIDERS.update(saved)
_provisioned_users.pop("user-1", None)
provider.provision.assert_awaited_once_with("user-1")
provider.provision.assert_awaited_once_with("user-1", store)
store.add_managed_credential.assert_awaited_once_with("user-1", cred)
@pytest.mark.asyncio
@@ -568,3 +568,181 @@ class TestCleanupManagedCredentials:
_PROVIDERS.update(saved)
# No exception raised — cleanup failure is swallowed.
class TestGetPickerToken:
"""POST /{provider}/credentials/{cred_id}/picker-token must:
1. Return the access token for OAuth2 creds the caller owns.
2. 404 for non-owned, non-existent, or wrong-provider creds.
3. 400 for non-OAuth2 creds (API key, host-scoped, user/password).
4. 404 for SDK default creds (same hardening as get_credential).
5. Preserve the `TestGetCredentialReturnsMetaOnly` contract — the
existing meta-only endpoint must still strip secrets even after
this picker-token endpoint exists."""
def test_oauth2_owner_gets_access_token(self):
# Use a Google cred with a drive.file scope — only picker-eligible
# (provider, scope) pairs can mint a token. GitHub-style creds are
# explicitly rejected; see `test_non_picker_provider_rejected_as_400`.
cred = _make_oauth2_cred(
cred_id="cred-gdrive",
provider="google",
)
cred.scopes = ["https://www.googleapis.com/auth/drive.file"]
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/google/credentials/cred-gdrive/picker-token")
assert resp.status_code == 200
data = resp.json()
# The whole point of this endpoint: the access token IS returned here.
assert data["access_token"] == "ghp_secret_token"
# Only the two declared fields come back — nothing else leaks.
assert set(data.keys()) <= {"access_token", "access_token_expires_at"}
def test_non_picker_provider_rejected_as_400(self):
"""Provider allowlist: even with a valid OAuth2 credential, a
non-picker provider (GitHub, etc.) cannot mint a picker token.
Stops this endpoint from being used as a generic bearer-token
extraction path for any stored OAuth cred under the same user."""
cred = _make_oauth2_cred(provider="github")
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/github/credentials/cred-456/picker-token")
assert resp.status_code == 400
assert "not available for provider" in resp.json()["detail"]
assert "ghp_secret_token" not in str(resp.json())
def test_google_oauth_without_drive_scope_rejected(self):
"""Scope allowlist: a Google OAuth2 cred that only carries non-picker
scopes (e.g. gmail.readonly, calendar) cannot mint a picker token.
Forces the frontend to reconnect with a Drive scope before the
picker is available."""
cred = _make_oauth2_cred(provider="google")
cred.scopes = [
"https://www.googleapis.com/auth/gmail.readonly",
"https://www.googleapis.com/auth/calendar",
]
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/google/credentials/cred-456/picker-token")
assert resp.status_code == 400
assert "picker" in resp.json()["detail"].lower()
def test_api_key_credential_rejected_as_400(self):
cred = _make_api_key_cred()
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/openai/credentials/cred-123/picker-token")
assert resp.status_code == 400
        # API keys must not silently fall through to a 200 response of some
        # other shape — the client should see an explicit rejection of the
        # credential type.
body = str(resp.json())
assert "sk-secret-key-value" not in body
def test_user_password_credential_rejected_as_400(self):
cred = _make_user_password_cred()
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/openai/credentials/cred-789/picker-token")
assert resp.status_code == 400
body = str(resp.json())
assert "s3cret-pass" not in body
assert "admin" not in body
def test_host_scoped_credential_rejected_as_400(self):
cred = _make_host_scoped_cred()
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/openai/credentials/cred-host/picker-token")
assert resp.status_code == 400
assert "top-secret" not in str(resp.json())
def test_missing_credential_returns_404(self):
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=None)
resp = client.post("/github/credentials/nonexistent/picker-token")
assert resp.status_code == 404
assert resp.json()["detail"] == "Credentials not found"
def test_wrong_provider_returns_404(self):
"""Symmetric with get_credential: provider mismatch is a generic
404, not a 400, so we don't leak existence of a credential the
caller doesn't own on that provider."""
cred = _make_oauth2_cred(provider="github")
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/google/credentials/cred-456/picker-token")
assert resp.status_code == 404
assert resp.json()["detail"] == "Credentials not found"
def test_sdk_default_returns_404(self):
"""SDK defaults are invisible to the user-facing API — picker-token
must not mint a token for them either."""
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock()
resp = client.post("/openai/credentials/openai-default/picker-token")
assert resp.status_code == 404
mock_mgr.get.assert_not_called()
def test_oauth2_without_access_token_returns_400(self):
"""A stored OAuth2 cred whose access_token is missing can't satisfy
a picker init. Surface a clear reconnect instruction rather than
returning an empty string."""
cred = _make_oauth2_cred()
# Simulate a cred that lost its access token
object.__setattr__(cred, "access_token", None)
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.post("/github/credentials/cred-456/picker-token")
assert resp.status_code == 400
assert "reconnect" in resp.json()["detail"].lower()
def test_meta_only_endpoint_still_strips_access_token(self):
"""Regression guard for the coexistence contract: the new
picker-token endpoint must NOT accidentally leak the token through
the meta-only GET endpoint. TestGetCredentialReturnsMetaOnly
covers this more broadly; this is a fast sanity check co-located
with the new endpoint's tests."""
cred = _make_oauth2_cred()
with patch(
"backend.api.features.integrations.router.creds_manager"
) as mock_mgr:
mock_mgr.get = AsyncMock(return_value=cred)
resp = client.get("/github/credentials/cred-456")
assert resp.status_code == 200
body = resp.json()
assert "access_token" not in body
assert "refresh_token" not in body
assert "ghp_secret_token" not in str(body)

View File

@@ -1804,7 +1804,7 @@ async def create_preset_from_graph_execution(
raise NotFoundError(
f"Graph #{graph_execution.graph_id} not found or accessible"
)
elif len(graph.aggregate_credentials_inputs()) > 0:
elif len(graph.regular_credentials_inputs) > 0:
raise ValueError(
f"Graph execution #{graph_exec_id} can't be turned into a preset "
"because it was run before this feature existed "

View File

@@ -0,0 +1,20 @@
import pydantic
class PushSubscriptionKeys(pydantic.BaseModel):
p256dh: str = pydantic.Field(min_length=1, max_length=512)
auth: str = pydantic.Field(min_length=1, max_length=512)
class PushSubscribeRequest(pydantic.BaseModel):
endpoint: str = pydantic.Field(min_length=1, max_length=2048)
keys: PushSubscriptionKeys
user_agent: str | None = pydantic.Field(default=None, max_length=512)
class PushUnsubscribeRequest(pydantic.BaseModel):
endpoint: str = pydantic.Field(min_length=1, max_length=2048)
class VapidPublicKeyResponse(pydantic.BaseModel):
public_key: str

View File

@@ -0,0 +1,64 @@
from typing import Annotated
from autogpt_libs.auth import get_user_id, requires_user
from fastapi import APIRouter, HTTPException, Security
from starlette.status import HTTP_204_NO_CONTENT, HTTP_400_BAD_REQUEST
from backend.api.features.push.model import (
PushSubscribeRequest,
PushUnsubscribeRequest,
VapidPublicKeyResponse,
)
from backend.data.push_subscription import (
delete_push_subscription,
upsert_push_subscription,
validate_push_endpoint,
)
from backend.util.settings import Settings
router = APIRouter()
_settings = Settings()
@router.get(
"/vapid-key",
summary="Get VAPID public key for push subscription",
)
async def get_vapid_public_key() -> VapidPublicKeyResponse:
return VapidPublicKeyResponse(public_key=_settings.secrets.vapid_public_key)
@router.post(
"/subscribe",
summary="Register a push subscription for the current user",
status_code=HTTP_204_NO_CONTENT,
dependencies=[Security(requires_user)],
)
async def subscribe_push(
user_id: Annotated[str, Security(get_user_id)],
body: PushSubscribeRequest,
) -> None:
try:
await validate_push_endpoint(body.endpoint)
await upsert_push_subscription(
user_id=user_id,
endpoint=body.endpoint,
p256dh=body.keys.p256dh,
auth=body.keys.auth,
user_agent=body.user_agent,
)
except ValueError as e:
raise HTTPException(status_code=HTTP_400_BAD_REQUEST, detail=str(e))
@router.post(
"/unsubscribe",
summary="Remove a push subscription",
status_code=HTTP_204_NO_CONTENT,
dependencies=[Security(requires_user)],
)
async def unsubscribe_push(
user_id: Annotated[str, Security(get_user_id)],
body: PushUnsubscribeRequest,
) -> None:
await delete_push_subscription(user_id, body.endpoint)
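# Sketch of the kind of check `validate_push_endpoint` is expected to perform,
# inferred from the route tests in this change; the real implementation lives
# in backend.data.push_subscription and the trusted host list here is an
# assumption.
from urllib.parse import urlparse

_ASSUMED_TRUSTED_PUSH_HOSTS = (
    "fcm.googleapis.com",
    "updates.push.services.mozilla.com",
)


def _looks_like_trusted_push_endpoint(endpoint: str) -> bool:
    parsed = urlparse(endpoint)
    host = parsed.hostname or ""
    # Only https endpoints on known push services qualify; localhost, raw IPs,
    # internal hostnames, and non-https schemes raise ValueError upstream,
    # which subscribe_push converts into a 400.
    return parsed.scheme == "https" and any(
        host == trusted or host.endswith("." + trusted)
        for trusted in _ASSUMED_TRUSTED_PUSH_HOSTS
    )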

View File

@@ -0,0 +1,240 @@
"""Tests for push notification routes."""
from unittest.mock import AsyncMock, MagicMock
import fastapi
import fastapi.testclient
import pytest
from backend.api.features.push.routes import router
app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
def test_get_vapid_public_key(mocker):
mock_settings = MagicMock()
mock_settings.secrets.vapid_public_key = "test-vapid-public-key-base64url"
mocker.patch(
"backend.api.features.push.routes._settings",
mock_settings,
)
response = client.get("/vapid-key")
assert response.status_code == 200
data = response.json()
assert data["public_key"] == "test-vapid-public-key-base64url"
def test_get_vapid_public_key_empty(mocker):
mock_settings = MagicMock()
mock_settings.secrets.vapid_public_key = ""
mocker.patch(
"backend.api.features.push.routes._settings",
mock_settings,
)
response = client.get("/vapid-key")
assert response.status_code == 200
data = response.json()
assert data["public_key"] == ""
def test_subscribe_push(mocker, test_user_id):
mock_upsert = mocker.patch(
"backend.api.features.push.routes.upsert_push_subscription",
new_callable=AsyncMock,
)
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
"keys": {
"p256dh": "test-p256dh-key",
"auth": "test-auth-key",
},
"user_agent": "Mozilla/5.0 Test",
},
)
assert response.status_code == 204
mock_upsert.assert_awaited_once_with(
user_id=test_user_id,
endpoint="https://fcm.googleapis.com/fcm/send/abc123",
p256dh="test-p256dh-key",
auth="test-auth-key",
user_agent="Mozilla/5.0 Test",
)
def test_subscribe_push_without_user_agent(mocker, test_user_id):
mock_upsert = mocker.patch(
"backend.api.features.push.routes.upsert_push_subscription",
new_callable=AsyncMock,
)
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
"keys": {
"p256dh": "test-p256dh-key",
"auth": "test-auth-key",
},
},
)
assert response.status_code == 204
mock_upsert.assert_awaited_once_with(
user_id=test_user_id,
endpoint="https://fcm.googleapis.com/fcm/send/abc123",
p256dh="test-p256dh-key",
auth="test-auth-key",
user_agent=None,
)
def test_subscribe_push_missing_keys():
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
},
)
assert response.status_code == 422
def test_subscribe_push_missing_endpoint():
response = client.post(
"/subscribe",
json={
"keys": {
"p256dh": "test-p256dh-key",
"auth": "test-auth-key",
},
},
)
assert response.status_code == 422
def test_subscribe_push_rejects_empty_crypto_keys():
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
"keys": {"p256dh": "", "auth": ""},
},
)
assert response.status_code == 422
def test_subscribe_push_rejects_oversized_endpoint():
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/" + "x" * 3000,
"keys": {"p256dh": "k", "auth": "a"},
},
)
assert response.status_code == 422
def test_unsubscribe_push(mocker, test_user_id):
mock_delete = mocker.patch(
"backend.api.features.push.routes.delete_push_subscription",
new_callable=AsyncMock,
)
response = client.post(
"/unsubscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
},
)
assert response.status_code == 204
mock_delete.assert_awaited_once_with(
test_user_id,
"https://fcm.googleapis.com/fcm/send/abc123",
)
def test_unsubscribe_push_missing_endpoint():
response = client.post(
"/unsubscribe",
json={},
)
assert response.status_code == 422
@pytest.mark.parametrize(
"untrusted_endpoint",
[
"https://localhost/evil",
"https://127.0.0.1/evil",
"https://169.254.169.254/latest/meta-data/",
"https://internal-service.local/api",
"https://attacker.example.com/push",
"http://fcm.googleapis.com/fcm/send/abc",
"file:///etc/passwd",
],
)
def test_subscribe_push_rejects_untrusted_endpoints(mocker, untrusted_endpoint):
mock_upsert = mocker.patch(
"backend.api.features.push.routes.upsert_push_subscription",
new_callable=AsyncMock,
)
response = client.post(
"/subscribe",
json={
"endpoint": untrusted_endpoint,
"keys": {
"p256dh": "test-p256dh-key",
"auth": "test-auth-key",
},
},
)
assert response.status_code == 400
mock_upsert.assert_not_awaited()
def test_subscribe_push_surfaces_cap_as_400(mocker):
mocker.patch(
"backend.api.features.push.routes.upsert_push_subscription",
new_callable=AsyncMock,
side_effect=ValueError("Subscription limit of 20 per user reached"),
)
response = client.post(
"/subscribe",
json={
"endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
"keys": {
"p256dh": "test-p256dh-key",
"auth": "test-auth-key",
},
},
)
assert response.status_code == 400
assert "Subscription limit" in response.json()["detail"]

View File

@@ -490,6 +490,9 @@ async def get_store_creators(
# Build where clause with sanitized inputs
where = {}
# Only return creators with approved agents
where["num_agents"] = {"gt": 0}
if featured:
where["is_featured"] = featured

View File

@@ -1,4 +1,5 @@
from datetime import datetime
from unittest.mock import AsyncMock
import prisma.enums
import prisma.errors
@@ -50,8 +51,8 @@ async def test_get_store_agents(mocker):
# Mock prisma calls
mock_store_agent = mocker.patch("prisma.models.StoreAgent.prisma")
mock_store_agent.return_value.find_many = mocker.AsyncMock(return_value=mock_agents)
mock_store_agent.return_value.count = mocker.AsyncMock(return_value=1)
mock_store_agent.return_value.find_many = AsyncMock(return_value=mock_agents)
mock_store_agent.return_value.count = AsyncMock(return_value=1)
# Call function
result = await db.get_store_agents()
@@ -94,7 +95,7 @@ async def test_get_store_agent_details(mocker):
# Mock StoreAgent prisma call
mock_store_agent = mocker.patch("prisma.models.StoreAgent.prisma")
mock_store_agent.return_value.find_first = mocker.AsyncMock(return_value=mock_agent)
mock_store_agent.return_value.find_first = AsyncMock(return_value=mock_agent)
# Call function
result = await db.get_store_agent_details("creator", "test-agent")
@@ -133,7 +134,7 @@ async def test_get_store_creator(mocker):
# Mock prisma call
mock_creator = mocker.patch("prisma.models.Creator.prisma")
mock_creator.return_value.find_unique = mocker.AsyncMock()
mock_creator.return_value.find_unique = AsyncMock()
# Configure the mock to return values that will pass validation
mock_creator.return_value.find_unique.return_value = mock_creator_data
@@ -189,7 +190,7 @@ async def test_create_store_submission(mocker):
notifyOnAgentApproved=True,
notifyOnAgentRejected=True,
timezone="Europe/Delft",
subscriptionTier=prisma.enums.SubscriptionTier.FREE, # type: ignore[reportCallIssue,reportAttributeAccessIssue]
subscriptionTier=prisma.enums.SubscriptionTier.BASIC, # type: ignore[reportCallIssue,reportAttributeAccessIssue]
)
mock_agent = prisma.models.AgentGraph(
id="agent-id",
@@ -236,23 +237,23 @@ async def test_create_store_submission(mocker):
# Mock prisma calls
mock_agent_graph = mocker.patch("prisma.models.AgentGraph.prisma")
mock_agent_graph.return_value.find_first = mocker.AsyncMock(return_value=mock_agent)
mock_agent_graph.return_value.find_first = AsyncMock(return_value=mock_agent)
# Mock transaction context manager
mock_tx = mocker.MagicMock()
mocker.patch(
"backend.api.features.store.db.transaction",
return_value=mocker.AsyncMock(
__aenter__=mocker.AsyncMock(return_value=mock_tx),
__aexit__=mocker.AsyncMock(return_value=False),
return_value=AsyncMock(
__aenter__=AsyncMock(return_value=mock_tx),
__aexit__=AsyncMock(return_value=False),
),
)
mock_sl = mocker.patch("prisma.models.StoreListing.prisma")
mock_sl.return_value.find_unique = mocker.AsyncMock(return_value=None)
mock_sl.return_value.find_unique = AsyncMock(return_value=None)
mock_slv = mocker.patch("prisma.models.StoreListingVersion.prisma")
mock_slv.return_value.create = mocker.AsyncMock(return_value=mock_version)
mock_slv.return_value.create = AsyncMock(return_value=mock_version)
# Call function
result = await db.create_store_submission(
@@ -292,10 +293,8 @@ async def test_update_profile(mocker):
# Mock prisma calls
mock_profile_db = mocker.patch("prisma.models.Profile.prisma")
mock_profile_db.return_value.find_first = mocker.AsyncMock(
return_value=mock_profile
)
mock_profile_db.return_value.update = mocker.AsyncMock(return_value=mock_profile)
mock_profile_db.return_value.find_first = AsyncMock(return_value=mock_profile)
mock_profile_db.return_value.update = AsyncMock(return_value=mock_profile)
# Test data
profile = Profile(
@@ -336,9 +335,7 @@ async def test_get_user_profile(mocker):
# Mock prisma calls
mock_profile_db = mocker.patch("prisma.models.Profile.prisma")
mock_profile_db.return_value.find_first = mocker.AsyncMock(
return_value=mock_profile
)
mock_profile_db.return_value.find_first = AsyncMock(return_value=mock_profile)
# Call function
result = await db.get_user_profile("user-id")
@@ -396,3 +393,38 @@ async def test_get_store_agents_search_category_array_injection():
# Verify the query executed without error
# Category should be parameterized, preventing SQL injection
assert isinstance(result.agents, list)
@pytest.mark.asyncio(loop_scope="session")
async def test_get_store_creators_only_returns_approved(mocker):
mock_creators = [
prisma.models.Creator(
name="Creator One",
username="creator1",
description="desc",
links=["link1"],
avatar_url="avatar.jpg",
num_agents=1,
agent_rating=4.5,
agent_runs=10,
top_categories=["test"],
is_featured=False,
)
]
mock_creator = mocker.patch("prisma.models.Creator.prisma")
mock_creator.return_value.find_many = AsyncMock(return_value=mock_creators)
mock_creator.return_value.count = AsyncMock(return_value=1)
result = await db.get_store_creators()
assert len(result.creators) == 1
assert result.creators[0].username == "creator1"
mock_creator.return_value.find_many.assert_called_once()
mock_creator.return_value.count.assert_called_once()
_, find_kwargs = mock_creator.return_value.find_many.call_args
_, count_kwargs = mock_creator.return_value.count.call_args
assert find_kwargs["where"]["num_agents"] == {"gt": 0}
assert count_kwargs["where"]["num_agents"] == {"gt": 0}

View File

@@ -60,25 +60,47 @@ def _stub_pending_subscription_change(mocker: pytest_mock.MockFixture) -> None:
)
_DEFAULT_TIER_PRICES: dict[SubscriptionTier, str | None] = {
SubscriptionTier.BASIC: None, # Legacy: stripe-price-id-basic unset by default.
SubscriptionTier.PRO: "price_pro",
SubscriptionTier.MAX: "price_max",
SubscriptionTier.BUSINESS: None, # Reserved: Business card hidden by default.
}
@pytest.fixture(autouse=True)
def _stub_subscription_status_lookups(mocker: pytest_mock.MockFixture) -> None:
"""Stub Stripe price + proration lookups used by get_subscription_status.
"""Stub Stripe price + proration + tier-multiplier lookups used by
get_subscription_status.
The POST /credits/subscription handler now returns the full subscription
status payload from every branch (same-tier, FREE downgrade, paid→paid
status payload from every branch (same-tier, BASIC downgrade, paid→paid
modify, checkout creation), so every POST test implicitly hits these
helpers. Individual tests can override via their own mocker.patch call.
"""
async def default_price_id(tier: SubscriptionTier) -> str | None:
return _DEFAULT_TIER_PRICES.get(tier)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
new_callable=AsyncMock,
return_value=None,
side_effect=default_price_id,
)
mocker.patch(
"backend.api.features.v1.get_proration_credit_cents",
new_callable=AsyncMock,
return_value=0,
)
# Default tier-multiplier resolver to the backend defaults so the endpoint
# never reaches LaunchDarkly during tests. Individual tests override for
# LD-override scenarios.
from backend.copilot.rate_limit import _DEFAULT_TIER_MULTIPLIERS
mocker.patch(
"backend.api.features.v1.get_tier_multipliers",
new_callable=AsyncMock,
return_value=dict(_DEFAULT_TIER_MULTIPLIERS),
)
@pytest.mark.parametrize(
@@ -122,15 +144,28 @@ def test_get_subscription_status_pro(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""GET /credits/subscription returns PRO tier with Stripe price for a PRO user."""
"""GET /credits/subscription returns PRO tier with Stripe prices for all priced tiers."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
prices = {
SubscriptionTier.BASIC: "price_basic",
SubscriptionTier.PRO: "price_pro",
SubscriptionTier.MAX: "price_max",
SubscriptionTier.BUSINESS: "price_business",
}
amounts = {
"price_basic": 0,
"price_pro": 1999,
"price_max": 4999,
"price_business": 14999,
}
async def mock_price_id(tier: SubscriptionTier) -> str | None:
return "price_pro" if tier == SubscriptionTier.PRO else None
return prices.get(tier)
async def mock_stripe_price_amount(price_id: str) -> int:
return 1999 if price_id == "price_pro" else 0
return amounts.get(price_id, 0)
mocker.patch(
"backend.api.features.v1.get_user_by_id",
@@ -158,16 +193,64 @@ def test_get_subscription_status_pro(
assert data["tier"] == "PRO"
assert data["monthly_cost"] == 1999
assert data["tier_costs"]["PRO"] == 1999
assert data["tier_costs"]["BUSINESS"] == 0
assert data["tier_costs"]["FREE"] == 0
assert data["tier_costs"]["MAX"] == 4999
assert data["tier_costs"]["BUSINESS"] == 14999
assert data["tier_costs"]["BASIC"] == 0
assert "ENTERPRISE" not in data["tier_costs"]
assert data["proration_credit_cents"] == 500
# tier_multipliers mirrors the same set of tiers that land in tier_costs,
# so the frontend never renders a multiplier badge for a hidden row.
assert set(data["tier_multipliers"].keys()) == set(data["tier_costs"].keys())
assert data["tier_multipliers"]["BASIC"] == 1.0
assert data["tier_multipliers"]["PRO"] == 5.0
assert data["tier_multipliers"]["MAX"] == 20.0
assert data["tier_multipliers"]["BUSINESS"] == 60.0
def test_get_subscription_status_defaults_to_free(
def test_get_subscription_status_tier_multipliers_ld_override(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""GET /credits/subscription when subscription_tier is None defaults to FREE."""
"""A LaunchDarkly-overridden tier multiplier flows through the response."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.BASIC
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
# LD says PRO is 7.5× (instead of the 5× default); other tiers unchanged.
mocker.patch(
"backend.api.features.v1.get_tier_multipliers",
new_callable=AsyncMock,
return_value={
SubscriptionTier.BASIC: 1.0,
SubscriptionTier.PRO: 7.5,
SubscriptionTier.MAX: 20.0,
SubscriptionTier.BUSINESS: 60.0,
SubscriptionTier.ENTERPRISE: 60.0,
},
)
response = client.get("/credits/subscription")
assert response.status_code == 200
data = response.json()
# Only tiers that made it into tier_costs get a multiplier (default stub
# exposes PRO + MAX via _DEFAULT_TIER_PRICES).
assert data["tier_multipliers"]["PRO"] == 7.5
assert data["tier_multipliers"]["MAX"] == 20.0
# BUSINESS has no price configured → hidden from both maps.
assert "BUSINESS" not in data["tier_multipliers"]
def test_get_subscription_status_defaults_to_no_tier(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""When user has no subscription_tier, defaults to NO_TIER (the explicit
no-active-subscription state)."""
mock_user = Mock()
mock_user.subscription_tier = None
@@ -191,14 +274,9 @@ def test_get_subscription_status_defaults_to_free(
assert response.status_code == 200
data = response.json()
assert data["tier"] == SubscriptionTier.FREE.value
assert data["tier"] == SubscriptionTier.NO_TIER.value
assert data["monthly_cost"] == 0
assert data["tier_costs"] == {
"FREE": 0,
"PRO": 0,
"BUSINESS": 0,
"ENTERPRISE": 0,
}
assert data["tier_costs"] == {}
assert data["proration_credit_cents"] == 0
@@ -249,11 +327,11 @@ def test_get_subscription_status_stripe_error_falls_back_to_zero(
assert data["tier_costs"]["PRO"] == 0
def test_update_subscription_tier_free_no_payment(
def test_update_subscription_tier_no_tier_no_payment(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""POST /credits/subscription to FREE tier when payment disabled skips Stripe."""
"""POST /credits/subscription to NO_TIER (cancel) when payment disabled skips Stripe."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
@@ -274,7 +352,7 @@ def test_update_subscription_tier_free_no_payment(
new_callable=AsyncMock,
)
response = client.post("/credits/subscription", json={"tier": "FREE"})
response = client.post("/credits/subscription", json={"tier": "NO_TIER"})
assert response.status_code == 200
assert response.json()["url"] == ""
@@ -286,7 +364,7 @@ def test_update_subscription_tier_paid_beta_user(
) -> None:
"""POST /credits/subscription for paid tier when payment disabled returns 422."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.FREE
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_feature_disabled(*args, **kwargs):
return False
@@ -313,7 +391,7 @@ def test_update_subscription_tier_paid_requires_urls(
) -> None:
"""POST /credits/subscription for paid tier without success/cancel URLs returns 422."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.FREE
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_feature_enabled(*args, **kwargs):
return True
@@ -327,19 +405,116 @@ def test_update_subscription_tier_paid_requires_urls(
"backend.api.features.v1.is_feature_enabled",
side_effect=mock_feature_enabled,
)
mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
return_value=False,
)
response = client.post("/credits/subscription", json={"tier": "PRO"})
assert response.status_code == 422
def test_update_subscription_tier_currency_mismatch_returns_422(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Stripe rejects a SubscriptionSchedule whose phases mix currencies (e.g.
GBP-checkout sub trying to schedule a USD-only target Price). The handler
must convert that into a specific 422 instead of the generic 502 so the
caller can tell the difference between a currency-config bug and a Stripe
outage."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.MAX
async def mock_feature_enabled(*args, **kwargs):
return True
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
mocker.patch(
"backend.api.features.v1.is_feature_enabled",
side_effect=mock_feature_enabled,
)
mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
side_effect=stripe.InvalidRequestError(
"The price specified only supports `usd`. This doesn't match the"
" expected currency: `gbp`.",
param="phases",
),
)
response = client.post(
"/credits/subscription",
json={
"tier": "PRO",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 422
detail = response.json()["detail"]
assert "billing currency" in detail.lower()
assert "contact support" in detail.lower()
def test_update_subscription_tier_non_currency_invalid_request_returns_502(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Locks the contract that *only* currency-mismatch InvalidRequestErrors
translate to 422 — every other Stripe InvalidRequestError must still
surface as the generic 502 so that widening the conditional later is
caught by the suite."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.MAX
async def mock_feature_enabled(*args, **kwargs):
return True
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
mocker.patch(
"backend.api.features.v1.is_feature_enabled",
side_effect=mock_feature_enabled,
)
mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
side_effect=stripe.InvalidRequestError(
"No such price: 'price_does_not_exist'",
param="items[0][price]",
),
)
response = client.post(
"/credits/subscription",
json={
"tier": "PRO",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 502
assert "billing currency" not in response.json()["detail"].lower()
def test_update_subscription_tier_creates_checkout(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""POST /credits/subscription creates Stripe Checkout Session for paid upgrade."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.FREE
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_feature_enabled(*args, **kwargs):
return True
@@ -353,6 +528,11 @@ def test_update_subscription_tier_creates_checkout(
"backend.api.features.v1.is_feature_enabled",
side_effect=mock_feature_enabled,
)
mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
return_value=False,
)
mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
@@ -378,7 +558,7 @@ def test_update_subscription_tier_rejects_open_redirect(
) -> None:
"""POST /credits/subscription rejects success/cancel URLs outside the frontend origin."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.FREE
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_feature_enabled(*args, **kwargs):
return True
@@ -392,6 +572,11 @@ def test_update_subscription_tier_rejects_open_redirect(
"backend.api.features.v1.is_feature_enabled",
side_effect=mock_feature_enabled,
)
mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
return_value=False,
)
checkout_mock = mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
@@ -572,14 +757,14 @@ def test_update_subscription_tier_same_tier_stripe_error_returns_502(
assert "contact support" in response.json()["detail"].lower()
def test_update_subscription_tier_free_with_payment_schedules_cancel_and_does_not_update_db(
def test_update_subscription_tier_no_tier_with_payment_schedules_cancel_and_does_not_update_db(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Downgrading to FREE schedules Stripe cancellation at period end.
"""Cancelling to NO_TIER schedules Stripe cancellation at period end.
The DB tier must NOT be updated immediately — the customer.subscription.deleted
webhook fires at period end and downgrades to FREE then.
webhook fires at period end and downgrades to NO_TIER then.
"""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
@@ -605,18 +790,18 @@ def test_update_subscription_tier_free_with_payment_schedules_cancel_and_does_no
side_effect=mock_feature_enabled,
)
response = client.post("/credits/subscription", json={"tier": "FREE"})
response = client.post("/credits/subscription", json={"tier": "NO_TIER"})
assert response.status_code == 200
mock_cancel.assert_awaited_once()
mock_set_tier.assert_not_awaited()
def test_update_subscription_tier_free_cancel_failure_returns_502(
def test_update_subscription_tier_no_tier_cancel_failure_returns_502(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Downgrading to FREE returns 502 with a generic error (no Stripe detail leakage)."""
"""Cancelling to NO_TIER returns 502 with a generic error (no Stripe detail leakage)."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
@@ -639,7 +824,7 @@ def test_update_subscription_tier_free_cancel_failure_returns_502(
side_effect=mock_feature_enabled,
)
response = client.post("/credits/subscription", json={"tier": "FREE"})
response = client.post("/credits/subscription", json={"tier": "NO_TIER"})
assert response.status_code == 502
detail = response.json()["detail"]
@@ -756,6 +941,16 @@ def test_update_subscription_tier_paid_to_paid_modifies_subscription(
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
async def price_id_with_business(tier: SubscriptionTier) -> str | None:
return {
**_DEFAULT_TIER_PRICES,
SubscriptionTier.BUSINESS: "price_business",
}.get(tier)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
side_effect=price_id_with_business,
)
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
@@ -791,15 +986,59 @@ def test_update_subscription_tier_paid_to_paid_modifies_subscription(
checkout_mock.assert_not_awaited()
def test_update_subscription_tier_admin_granted_paid_to_paid_updates_db_directly(
def test_update_subscription_tier_max_checkout(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Admin-granted paid tier users are NOT sent to Stripe checkout for paid→paid changes.
"""POST /credits/subscription from PRO→MAX modifies the existing subscription."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
When modify_stripe_subscription_for_tier returns False (no Stripe subscription
found — admin-granted tier), the endpoint must update the DB tier directly and
return 200 with url="", rather than falling through to Checkout Session creation.
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
mocker.patch(
"backend.api.features.v1.is_feature_enabled",
new_callable=AsyncMock,
return_value=True,
)
modify_mock = mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
return_value=True,
)
checkout_mock = mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
)
response = client.post(
"/credits/subscription",
json={
"tier": "MAX",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 200
assert response.json()["url"] == ""
modify_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.MAX)
checkout_mock.assert_not_awaited()
def test_update_subscription_tier_no_active_sub_falls_through_to_checkout(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Any tier change from a user with no active Stripe sub goes through Checkout.
Admin-granted users (no Stripe sub yet) and never-paid users follow the
exact same path: modify returns False → Checkout to set up payment. The
endpoint has no admin-specific branch — admin tier grants happen out-of-band
via the admin portal, not this user-facing route.
"""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
@@ -814,7 +1053,6 @@ def test_update_subscription_tier_admin_granted_paid_to_paid_updates_db_directly
new_callable=AsyncMock,
return_value=True,
)
# Return False = no Stripe subscription (admin-granted tier)
modify_mock = mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
@@ -827,23 +1065,146 @@ def test_update_subscription_tier_admin_granted_paid_to_paid_updates_db_directly
checkout_mock = mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
return_value="https://checkout.stripe.com/pay/cs_test_no_sub",
)
response = client.post(
"/credits/subscription",
json={
"tier": "BUSINESS",
"tier": "MAX",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 200
assert response.json()["url"] == ""
modify_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.BUSINESS)
# DB tier updated directly — no Stripe Checkout Session created
set_tier_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.BUSINESS)
assert response.json()["url"] == "https://checkout.stripe.com/pay/cs_test_no_sub"
modify_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.MAX)
# No DB-flip — payment must be collected via Checkout regardless of direction.
set_tier_mock.assert_not_awaited()
checkout_mock.assert_awaited_once()
def test_update_subscription_tier_priced_basic_no_sub_falls_through_to_checkout(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Once stripe-price-id-basic is configured, a BASIC user without an active sub
must hit Stripe Checkout rather than being silently set_subscription_tier'd."""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_price_id(tier: SubscriptionTier) -> str | None:
return {
SubscriptionTier.BASIC: "price_basic",
SubscriptionTier.PRO: "price_pro",
SubscriptionTier.MAX: "price_max",
SubscriptionTier.BUSINESS: "price_business",
}.get(tier)
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
side_effect=mock_price_id,
)
mocker.patch(
"backend.api.features.v1.is_feature_enabled",
new_callable=AsyncMock,
return_value=True,
)
modify_mock = mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
return_value=False,
)
set_tier_mock = mocker.patch(
"backend.api.features.v1.set_subscription_tier",
new_callable=AsyncMock,
)
checkout_mock = mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
return_value="https://checkout.stripe.com/pay/cs_test_priced_basic",
)
response = client.post(
"/credits/subscription",
json={
"tier": "PRO",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 200
assert (
response.json()["url"] == "https://checkout.stripe.com/pay/cs_test_priced_basic"
)
# Priced-BASIC user without an active sub: must NOT silently flip DB tier —
# they need to set up payment via Checkout.
set_tier_mock.assert_not_awaited()
checkout_mock.assert_awaited_once()
# modify is still called first; returning False just means "no active sub".
modify_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.PRO)
def test_update_subscription_tier_target_without_ld_price_returns_422(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Paid target with no LD-configured Stripe price must fail fast with 422.
Matches the UI hiding: if `stripe-price-id-pro` resolves to None we can't
start a Checkout Session anyway, and we don't want to surface an opaque
Stripe error mid-flow. The handler rejects the request before touching
Stripe at all.
"""
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.BASIC
async def mock_price_id(tier: SubscriptionTier) -> str | None:
return None # Neither BASIC nor PRO have an LD price.
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
return_value=mock_user,
)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
side_effect=mock_price_id,
)
mocker.patch(
"backend.api.features.v1.is_feature_enabled",
new_callable=AsyncMock,
return_value=True,
)
checkout_mock = mocker.patch(
"backend.api.features.v1.create_subscription_checkout",
new_callable=AsyncMock,
)
modify_mock = mocker.patch(
"backend.api.features.v1.modify_stripe_subscription_for_tier",
new_callable=AsyncMock,
)
response = client.post(
"/credits/subscription",
json={
"tier": "PRO",
"success_url": f"{TEST_FRONTEND_ORIGIN}/success",
"cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
},
)
assert response.status_code == 422
assert "not available" in response.json()["detail"].lower()
checkout_mock.assert_not_awaited()
modify_mock.assert_not_awaited()
def test_update_subscription_tier_paid_to_paid_stripe_error_returns_502(
@@ -854,6 +1215,16 @@ def test_update_subscription_tier_paid_to_paid_stripe_error_returns_502(
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.PRO
async def price_id_with_business(tier: SubscriptionTier) -> str | None:
return {
**_DEFAULT_TIER_PRICES,
SubscriptionTier.BUSINESS: "price_business",
}.get(tier)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
side_effect=price_id_with_business,
)
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,
@@ -882,14 +1253,14 @@ def test_update_subscription_tier_paid_to_paid_stripe_error_returns_502(
assert response.status_code == 502
def test_update_subscription_tier_free_no_stripe_subscription(
def test_update_subscription_tier_no_tier_no_stripe_subscription(
client: fastapi.testclient.TestClient,
mocker: pytest_mock.MockFixture,
) -> None:
"""Downgrading to FREE when no Stripe subscription exists updates DB tier directly.
"""Cancelling to NO_TIER when no Stripe subscription exists updates DB tier directly.
Admin-granted paid tiers have no associated Stripe subscription. When such a
user requests a self-service downgrade, cancel_stripe_subscription returns False
user requests a self-service cancel, cancel_stripe_subscription returns False
(nothing to cancel), so the endpoint must immediately call set_subscription_tier
rather than waiting for a webhook that will never arrive.
"""
@@ -917,13 +1288,13 @@ def test_update_subscription_tier_free_no_stripe_subscription(
new_callable=AsyncMock,
)
response = client.post("/credits/subscription", json={"tier": "FREE"})
response = client.post("/credits/subscription", json={"tier": "NO_TIER"})
assert response.status_code == 200
assert response.json()["url"] == ""
cancel_mock.assert_awaited_once_with(TEST_USER_ID)
# DB tier must be updated immediately — no webhook will fire for a missing sub
set_tier_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.FREE)
set_tier_mock.assert_awaited_once_with(TEST_USER_ID, SubscriptionTier.NO_TIER)
def test_get_subscription_status_includes_pending_tier(
@@ -1014,6 +1385,16 @@ def test_update_subscription_tier_downgrade_paid_to_paid_schedules(
mock_user = Mock()
mock_user.subscription_tier = SubscriptionTier.BUSINESS
async def price_id_with_business(tier: SubscriptionTier) -> str | None:
return {
**_DEFAULT_TIER_PRICES,
SubscriptionTier.BUSINESS: "price_business",
}.get(tier)
mocker.patch(
"backend.api.features.v1.get_subscription_price_id",
side_effect=price_id_with_business,
)
mocker.patch(
"backend.api.features.v1.get_user_by_id",
new_callable=AsyncMock,

View File

@@ -44,6 +44,7 @@ from backend.api.model import (
UploadFileResponse,
)
from backend.blocks import get_block, get_blocks
from backend.copilot.rate_limit import get_tier_multipliers
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.auth import api_key as api_key_db
@@ -56,12 +57,14 @@ from backend.data.credit import (
UserCredit,
cancel_stripe_subscription,
create_subscription_checkout,
get_active_subscription_period_end,
get_auto_top_up,
get_pending_subscription_change,
get_proration_credit_cents,
get_subscription_price_id,
get_user_credit_model,
handle_subscription_payment_failure,
handle_subscription_payment_success,
modify_stripe_subscription_for_tier,
release_pending_subscription_schedule,
set_auto_top_up,
@@ -699,23 +702,48 @@ async def get_user_auto_top_up(
class SubscriptionTierRequest(BaseModel):
tier: Literal["FREE", "PRO", "BUSINESS"]
tier: Literal["NO_TIER", "BASIC", "PRO", "MAX", "BUSINESS"]
success_url: str = ""
cancel_url: str = ""
class SubscriptionStatusResponse(BaseModel):
tier: Literal["FREE", "PRO", "BUSINESS", "ENTERPRISE"]
tier: Literal["NO_TIER", "BASIC", "PRO", "MAX", "BUSINESS", "ENTERPRISE"]
monthly_cost: int # amount in cents (Stripe convention)
tier_costs: dict[str, int] # tier name -> amount in cents
tier_multipliers: dict[str, float] = Field(
default_factory=dict,
description=(
"Tier → rate-limit multiplier. Covers the same tiers listed in"
" ``tier_costs`` so the frontend can render rate-limit badges"
" relative to the lowest visible tier without knowing backend"
" defaults."
),
)
proration_credit_cents: int # unused portion of current sub to convert on upgrade
pending_tier: Optional[Literal["FREE", "PRO", "BUSINESS"]] = None
has_active_stripe_subscription: bool = Field(
default=False,
description=(
"True when the user has an active/trialing Stripe subscription. The"
" frontend uses this to branch upgrade UX: modify-in-place + saved-card"
" auto-charge when True, redirect to Stripe Checkout when False."
),
)
current_period_end: Optional[int] = Field(
default=None,
description=(
"Unix timestamp of the active subscription's current_period_end. Used"
" to show the date Stripe will issue the next invoice (with prorated"
" upgrade charges, if any). None when no active sub."
),
)
pending_tier: Optional[Literal["NO_TIER", "BASIC", "PRO", "MAX", "BUSINESS"]] = None
pending_tier_effective_at: Optional[datetime] = None
url: str = Field(
default="",
description=(
"Populated only when POST /credits/subscription starts a Stripe Checkout"
" Session (FREE → paid upgrade). Empty string in all other branches —"
" Session (BASIC → paid upgrade). Empty string in all other branches —"
" the client redirects to this URL when non-empty."
),
)
@@ -794,27 +822,48 @@ async def get_subscription_status(
user_id: Annotated[str, Security(get_user_id)],
) -> SubscriptionStatusResponse:
user = await get_user_by_id(user_id)
tier = user.subscription_tier or SubscriptionTier.FREE
tier = user.subscription_tier or SubscriptionTier.NO_TIER
paid_tiers = [SubscriptionTier.PRO, SubscriptionTier.BUSINESS]
# Tiers that *can* have a Stripe price configured (and therefore appear
# in the tier picker if the LD flag exposes a price-id). NO_TIER is not
# priceable — it's the implicit "no active subscription" state.
priceable_tiers = [
SubscriptionTier.BASIC,
SubscriptionTier.PRO,
SubscriptionTier.MAX,
SubscriptionTier.BUSINESS,
]
price_ids = await asyncio.gather(
*[get_subscription_price_id(t) for t in paid_tiers]
*[get_subscription_price_id(t) for t in priceable_tiers]
)
tier_costs: dict[str, int] = {
SubscriptionTier.FREE.value: 0,
SubscriptionTier.ENTERPRISE.value: 0,
}
async def _cost(pid: str | None) -> int:
return (await _get_stripe_price_amount(pid) or 0) if pid else 0
costs = await asyncio.gather(*[_cost(pid) for pid in price_ids])
for t, cost in zip(paid_tiers, costs):
tier_costs[t.value] = cost
tier_costs: dict[str, int] = {}
for t, pid, cost in zip(priceable_tiers, price_ids, costs):
if pid:
tier_costs[t.value] = cost
# Expose the effective rate-limit multipliers alongside prices so the
# frontend can render "Nx rate limits" relative to the lowest visible
# tier without hard-coding backend defaults. Only emit entries for tiers
# that land in ``tier_costs`` — rows hidden at the price layer must stay
# hidden in the multiplier layer too.
multipliers = await get_tier_multipliers()
tier_multipliers: dict[str, float] = {
t.value: multipliers.get(t, 1.0)
for t in priceable_tiers
if t.value in tier_costs
}
current_monthly_cost = tier_costs.get(tier.value, 0)
proration_credit = await get_proration_credit_cents(user_id, current_monthly_cost)
proration_credit, current_period_end = await asyncio.gather(
get_proration_credit_cents(user_id, current_monthly_cost),
get_active_subscription_period_end(user_id),
)
try:
pending = await get_pending_subscription_change(user_id)
@@ -834,17 +883,21 @@ async def get_subscription_status(
tier=tier.value,
monthly_cost=current_monthly_cost,
tier_costs=tier_costs,
tier_multipliers=tier_multipliers,
proration_credit_cents=proration_credit,
has_active_stripe_subscription=current_period_end is not None,
current_period_end=current_period_end,
)
if pending is not None:
pending_tier_enum, pending_effective_at = pending
if pending_tier_enum == SubscriptionTier.FREE:
response.pending_tier = "FREE"
elif pending_tier_enum == SubscriptionTier.PRO:
response.pending_tier = "PRO"
elif pending_tier_enum == SubscriptionTier.BUSINESS:
response.pending_tier = "BUSINESS"
if response.pending_tier is not None:
if pending_tier_enum in (
SubscriptionTier.NO_TIER,
SubscriptionTier.BASIC,
SubscriptionTier.PRO,
SubscriptionTier.MAX,
SubscriptionTier.BUSINESS,
):
response.pending_tier = pending_tier_enum.value
response.pending_tier_effective_at = pending_effective_at
return response
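
A minimal sketch (hypothetical helper, not part of this change) of how a client could turn the `tier_multipliers` map above into relative "Nx rate limits" badges; it assumes multipliers arrive as floats keyed by tier name and treats the lowest visible tier as the baseline.

```python
def relative_rate_limit_badges(tier_multipliers: dict[str, float]) -> dict[str, str]:
    """Render each tier's multiplier relative to the lowest visible tier."""
    if not tier_multipliers:
        return {}
    baseline = min(tier_multipliers.values()) or 1.0
    return {tier: f"{mult / baseline:g}x" for tier, mult in tier_multipliers.items()}

# e.g. {"BASIC": 1.0, "PRO": 2.0, "MAX": 4.0} -> {"BASIC": "1x", "PRO": "2x", "MAX": "4x"}
```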
@@ -860,23 +913,25 @@ async def update_subscription_tier(
request: SubscriptionTierRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> SubscriptionStatusResponse:
# Pydantic validates tier is one of FREE/PRO/BUSINESS via Literal type.
# Pydantic validates tier is one of NO_TIER/BASIC/PRO/MAX/BUSINESS via Literal type.
tier = SubscriptionTier(request.tier)
# ENTERPRISE tier is admin-managed — block self-service changes from ENTERPRISE users.
user = await get_user_by_id(user_id)
if (user.subscription_tier or SubscriptionTier.FREE) == SubscriptionTier.ENTERPRISE:
if (
user.subscription_tier or SubscriptionTier.NO_TIER
) == SubscriptionTier.ENTERPRISE:
raise HTTPException(
status_code=403,
detail="ENTERPRISE subscription changes must be managed by an administrator",
)
# Same-tier request = "stay on my current tier" = cancel any pending
# scheduled change (paid→paid downgrade or paid→FREE cancel). This is the
# scheduled change (paid→paid downgrade or paid→NO_TIER cancel). This is the
# collapsed behaviour that replaces the old /credits/subscription/cancel-pending
# route. Safe when no pending change exists: release_pending_subscription_schedule
# returns False and we simply return the current status.
if (user.subscription_tier or SubscriptionTier.FREE) == tier:
if (user.subscription_tier or SubscriptionTier.NO_TIER) == tier:
try:
await release_pending_subscription_schedule(user_id)
except stripe.StripeError as e:
@@ -898,22 +953,18 @@ async def update_subscription_tier(
Flag.ENABLE_PLATFORM_PAYMENT, user_id, default=False
)
# Downgrade to FREE: schedule Stripe cancellation at period end so the user
# keeps their tier for the time they already paid for. The DB tier is NOT
# updated here when a subscription exists — the customer.subscription.deleted
# webhook fires at period end and downgrades to FREE then.
# Exception: if the user has no active Stripe subscription (e.g. admin-granted
# tier), cancel_stripe_subscription returns False and we update the DB tier
# immediately since no webhook will ever fire.
# When payment is disabled entirely, update the DB tier directly.
if tier == SubscriptionTier.FREE:
target_price_id = await get_subscription_price_id(tier)
# Cancel: target NO_TIER. Schedule Stripe cancellation at period end;
# cancel_at_period_end=True lets the webhook flip the DB tier. No active
# sub (admin-granted or never-paid) or payment disabled → DB flip.
# NO_TIER is never priceable, so this branch always fires for cancel
# requests regardless of LD config.
if tier == SubscriptionTier.NO_TIER:
if payment_enabled:
try:
had_subscription = await cancel_stripe_subscription(user_id)
except stripe.StripeError as e:
# Log full Stripe error server-side but return a generic message
# to the client — raw Stripe errors can leak customer/sub IDs and
# infrastructure config details.
logger.exception(
"Stripe error cancelling subscription for user %s: %s",
user_id,
@@ -927,54 +978,73 @@ async def update_subscription_tier(
),
)
if not had_subscription:
# No active Stripe subscription found — the user was on an
# admin-granted tier. Update DB immediately since the
# subscription.deleted webhook will never fire.
await set_subscription_tier(user_id, tier)
return await get_subscription_status(user_id)
await set_subscription_tier(user_id, tier)
return await get_subscription_status(user_id)
# Paid tier changes require payment to be enabled — block self-service upgrades
# when the flag is off. Admins use the /api/admin/ routes to set tiers directly.
if not payment_enabled:
raise HTTPException(
status_code=422,
detail=f"Subscription not available for tier {tier}",
detail=f"Subscription not available for tier {tier.value}",
)
# Paid→paid tier change: if the user already has a Stripe subscription,
# modify it in-place with proration instead of creating a new Checkout
# Session. This preserves remaining paid time and avoids double-charging.
# The customer.subscription.updated webhook fires and updates the DB tier.
current_tier = user.subscription_tier or SubscriptionTier.FREE
if current_tier in (SubscriptionTier.PRO, SubscriptionTier.BUSINESS):
try:
modified = await modify_stripe_subscription_for_tier(user_id, tier)
if modified:
return await get_subscription_status(user_id)
# modify_stripe_subscription_for_tier returns False when no active
# Stripe subscription exists — i.e. the user has an admin-granted
# paid tier with no Stripe record. In that case, update the DB
# tier directly (same as the FREE-downgrade path for admin-granted
# users) rather than sending them through a new Checkout Session.
await set_subscription_tier(user_id, tier)
# Target tier has no LD price — not provisionable (mirrors how GET hides unpriced tiers).
if target_price_id is None:
raise HTTPException(
status_code=422,
detail=f"Subscription not available for tier {tier.value}",
)
# Modify in place if there's a sub; else fall through to Checkout below.
try:
modified = await modify_stripe_subscription_for_tier(user_id, tier)
if modified:
return await get_subscription_status(user_id)
except ValueError as e:
raise HTTPException(status_code=422, detail=str(e))
except stripe.StripeError as e:
logger.exception(
"Stripe error modifying subscription for user %s: %s", user_id, e
except ValueError as e:
raise HTTPException(status_code=422, detail=str(e))
except stripe.InvalidRequestError as e:
# Stripe rejects schedule modify when phases mix currencies, e.g. the
# active sub was checked out in GBP but the target tier's Price is
# USD-only. A 502 reads as an outage; surface a 422 with a specific message
# so the user/admin can see what to fix in Stripe.
msg = str(e)
if "currency" in msg.lower():
logger.warning(
"Currency mismatch on tier change for user %s: %s", user_id, msg
)
raise HTTPException(
status_code=502,
status_code=422,
detail=(
"Unable to update your subscription right now. "
"Please try again or contact support."
"Tier change unavailable for your current billing currency."
" Please contact support — the target tier needs to be"
" configured for your currency in Stripe before this"
" change can go through."
),
)
logger.exception(
"Stripe error modifying subscription for user %s: %s", user_id, e
)
raise HTTPException(
status_code=502,
detail=(
"Unable to update your subscription right now. "
"Please try again or contact support."
),
)
except stripe.StripeError as e:
logger.exception(
"Stripe error modifying subscription for user %s: %s", user_id, e
)
raise HTTPException(
status_code=502,
detail=(
"Unable to update your subscription right now. "
"Please try again or contact support."
),
)
# Paid upgrade from FREE → create Stripe Checkout Session.
# No active Stripe subscription → create Stripe Checkout Session.
if not request.success_url or not request.cancel_url:
raise HTTPException(
status_code=422,
@@ -1108,6 +1178,9 @@ async def stripe_webhook(request: Request):
):
await sync_subscription_schedule_from_stripe(data_object)
if event_type == "invoice.payment_succeeded":
await handle_subscription_payment_success(data_object)
if event_type == "invoice.payment_failed":
await handle_subscription_payment_failure(data_object)
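
For orientation, a hedged usage sketch of the collapsed `/credits/subscription` contract described above, written against the same `client` / `TEST_FRONTEND_ORIGIN` fixtures the tests in this diff use; which branch runs (webhook-driven cancel, modify-in-place, Checkout, or pending-change release) depends on the user's current Stripe state.

```python
# Cancel: schedules cancellation at period end, or flips the DB tier directly
# when no Stripe subscription exists / payment is disabled.
client.post("/credits/subscription", json={"tier": "NO_TIER"})

# Paid tier change: modify-in-place when an active sub exists; otherwise a
# Checkout Session is created and its URL is returned in `url`.
client.post(
    "/credits/subscription",
    json={
        "tier": "PRO",
        "success_url": f"{TEST_FRONTEND_ORIGIN}/success",
        "cancel_url": f"{TEST_FRONTEND_ORIGIN}/cancel",
    },
)

# Same-tier request: release any pending scheduled change, return current status.
client.post("/credits/subscription", json={"tier": "BUSINESS"})
```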

View File

@@ -2,6 +2,7 @@
Workspace API routes for managing user file storage.
"""
import asyncio
import logging
import os
import re
@@ -14,6 +15,8 @@ from fastapi import Query, UploadFile
from fastapi.responses import Response
from pydantic import BaseModel, Field
from backend.api.features.store.exceptions import VirusDetectedError, VirusScanError
from backend.copilot.rate_limit import get_workspace_storage_limit_bytes
from backend.data.workspace import (
WorkspaceFile,
count_workspace_files,
@@ -21,11 +24,9 @@ from backend.data.workspace import (
get_workspace,
get_workspace_file,
get_workspace_total_size,
soft_delete_workspace_file,
)
from backend.util.settings import Config
from backend.util.virus_scanner import scan_content_safe
from backend.util.workspace import WorkspaceManager
from backend.util.workspace import WorkspaceManager, format_bytes
from backend.util.workspace_storage import get_workspace_storage
@@ -249,66 +250,47 @@ async def upload_file(
# Get or create workspace
workspace = await get_or_create_workspace(user_id)
# Pre-write storage cap check (soft check — final enforcement is post-write)
storage_limit_bytes = config.max_workspace_storage_mb * 1024 * 1024
current_usage = await get_workspace_total_size(workspace.id)
if storage_limit_bytes and current_usage + len(content) > storage_limit_bytes:
used_percent = (current_usage / storage_limit_bytes) * 100
raise fastapi.HTTPException(
status_code=413,
detail={
"message": "Storage limit exceeded",
"used_bytes": current_usage,
"limit_bytes": storage_limit_bytes,
"used_percent": round(used_percent, 1),
},
)
# Warn at 80% usage
if (
storage_limit_bytes
and (usage_ratio := (current_usage + len(content)) / storage_limit_bytes) >= 0.8
):
logger.warning(
f"User {user_id} workspace storage at {usage_ratio * 100:.1f}% "
f"({current_usage + len(content)} / {storage_limit_bytes} bytes)"
)
# Virus scan
await scan_content_safe(content, filename=filename)
# Write file via WorkspaceManager
# Write file via WorkspaceManager (handles virus scan, per-file size,
# and per-user tier-based storage quota internally).
manager = WorkspaceManager(user_id, workspace.id, session_id)
try:
workspace_file = await manager.write_file(
content, filename, overwrite=overwrite, metadata={"origin": "user-upload"}
)
except VirusDetectedError as e:
raise fastapi.HTTPException(status_code=400, detail=str(e)) from e
except VirusScanError as e:
raise fastapi.HTTPException(status_code=500, detail=str(e)) from e
except ValueError as e:
# write_file raises ValueError for both path-conflict and size-limit
# cases; map each to its correct HTTP status.
# write_file raises ValueError for path-conflict, size-limit, and
# storage-quota cases; map each to its correct HTTP status.
message = str(e)
if message.startswith("File too large"):
if message.startswith(("File too large", "Storage limit exceeded")):
raise fastapi.HTTPException(status_code=413, detail=message) from e
raise fastapi.HTTPException(status_code=409, detail=message) from e
# Post-write storage check — eliminates TOCTOU race on the quota.
# If a concurrent upload pushed us over the limit, undo this write.
storage_limit_bytes = await get_workspace_storage_limit_bytes(user_id)
new_total = await get_workspace_total_size(workspace.id)
if storage_limit_bytes and new_total > storage_limit_bytes:
try:
await soft_delete_workspace_file(workspace_file.id, workspace.id)
# Route through WorkspaceManager so the storage backend blob is
# removed too — soft_delete_workspace_file alone leaks the blob.
await manager.delete_file(workspace_file.id)
except Exception as e:
logger.warning(
f"Failed to soft-delete over-quota file {workspace_file.id} "
f"Failed to delete over-quota file {workspace_file.id} "
f"in workspace {workspace.id}: {e}"
)
raise fastapi.HTTPException(
status_code=413,
detail={
"message": "Storage limit exceeded (concurrent upload)",
"used_bytes": new_total,
"limit_bytes": storage_limit_bytes,
},
detail=(
f"Storage limit exceeded. "
f"You've used {format_bytes(new_total)} of your "
f"{format_bytes(storage_limit_bytes)} quota. "
f"Delete some files or upgrade your plan for more storage."
),
)
return UploadFileResponse(
@@ -331,12 +313,13 @@ async def get_storage_usage(
"""
Get storage usage information for the user's workspace.
"""
config = Config()
workspace = await get_or_create_workspace(user_id)
used_bytes = await get_workspace_total_size(workspace.id)
file_count = await count_workspace_files(workspace.id)
limit_bytes = config.max_workspace_storage_mb * 1024 * 1024
used_bytes, file_count, limit_bytes = await asyncio.gather(
get_workspace_total_size(workspace.id),
count_workspace_files(workspace.id),
get_workspace_storage_limit_bytes(user_id),
)
return StorageUsageResponse(
used_bytes=used_bytes,

View File

@@ -151,15 +151,16 @@ def test_list_files_null_metadata_coerced_to_empty_dict(
# -- upload_file metadata tests --
@patch("backend.api.features.workspace.routes.get_workspace_storage_limit_bytes")
@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.get_workspace_total_size")
@patch("backend.api.features.workspace.routes.scan_content_safe")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_upload_passes_user_upload_origin_metadata(
mock_manager_cls, mock_scan, mock_total_size, mock_get_workspace
mock_manager_cls, mock_total_size, mock_get_workspace, mock_storage_limit
):
mock_get_workspace.return_value = _make_workspace()
mock_total_size.return_value = 100
mock_storage_limit.return_value = 250 * 1024 * 1024
written = _make_file(id="new-file", name="doc.pdf")
mock_instance = AsyncMock()
mock_instance.write_file.return_value = written
@@ -178,10 +179,9 @@ def test_upload_passes_user_upload_origin_metadata(
@patch("backend.api.features.workspace.routes.get_or_create_workspace")
@patch("backend.api.features.workspace.routes.get_workspace_total_size")
@patch("backend.api.features.workspace.routes.scan_content_safe")
@patch("backend.api.features.workspace.routes.WorkspaceManager")
def test_upload_returns_409_on_file_conflict(
mock_manager_cls, mock_scan, mock_total_size, mock_get_workspace
mock_manager_cls, mock_total_size, mock_get_workspace
):
mock_get_workspace.return_value = _make_workspace()
mock_total_size.return_value = 100
@@ -234,8 +234,8 @@ def test_upload_happy_path(mocker):
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=250 * 1024 * 1024,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
@@ -256,20 +256,24 @@ def test_upload_exceeds_max_file_size(mocker):
"""Files larger than max_file_size_mb should be rejected with 413."""
cfg = mocker.patch("backend.api.features.workspace.routes.Config")
cfg.return_value.max_file_size_mb = 0 # 0 MB → any content is too big
cfg.return_value.max_workspace_storage_mb = 500
response = _upload(content=b"x" * 1024)
assert response.status_code == 413
def test_upload_storage_quota_exceeded(mocker):
"""WorkspaceManager.write_file raises ValueError when quota exceeded → 413."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(
side_effect=ValueError("Storage limit exceeded: 500 MB used of 250 MB (200.0%)")
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=500 * 1024 * 1024,
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload()
@@ -278,33 +282,33 @@ def test_upload_storage_quota_exceeded(mocker):
def test_upload_post_write_quota_race(mocker):
"""Concurrent upload tipping over limit after write should soft-delete + 413."""
"""Concurrent upload tipping over limit after write should delete + 413."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
# Post-write total exceeds the tier-based limit (250 MB for FREE).
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
side_effect=[0, 600 * 1024 * 1024],
return_value=600 * 1024 * 1024,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=250 * 1024 * 1024,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
mock_manager.delete_file = mocker.AsyncMock(return_value=True)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
mock_delete = mocker.patch(
"backend.api.features.workspace.routes.soft_delete_workspace_file",
return_value=None,
)
response = _upload()
assert response.status_code == 413
mock_delete.assert_called_once_with("file-aaa-bbb", "ws-001")
# Rollback must go through manager.delete_file so the storage blob is removed,
# not just soft_delete_workspace_file (which would orphan the blob).
mock_manager.delete_file.assert_awaited_once_with("file-aaa-bbb")
def test_upload_any_extension(mocker):
@@ -318,8 +322,8 @@ def test_upload_any_extension(mocker):
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=250 * 1024 * 1024,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
@@ -333,23 +337,17 @@ def test_upload_any_extension(mocker):
def test_upload_blocked_by_virus_scan(mocker):
"""Files flagged by ClamAV should be rejected and never written to storage."""
"""Files flagged by ClamAV should be rejected via WorkspaceManager."""
from backend.api.features.store.exceptions import VirusDetectedError
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(
side_effect=VirusDetectedError("Eicar-Test-Signature"),
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
@@ -357,7 +355,6 @@ def test_upload_blocked_by_virus_scan(mocker):
response = _upload(filename="evil.exe", content=b"X5O!P%@AP...")
assert response.status_code == 400
mock_manager.write_file.assert_not_called()
def test_upload_file_without_extension(mocker):
@@ -371,8 +368,8 @@ def test_upload_file_without_extension(mocker):
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=250 * 1024 * 1024,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
@@ -402,8 +399,8 @@ def test_upload_strips_path_components(mocker):
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=250 * 1024 * 1024,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=_MOCK_FILE)
@@ -488,14 +485,6 @@ def test_upload_write_file_too_large_returns_413(mocker):
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(
side_effect=ValueError("File too large: 900 bytes exceeds 1MB limit")
@@ -516,14 +505,6 @@ def test_upload_write_file_conflict_returns_409(mocker):
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(
side_effect=ValueError("File already exists at path: /sessions/x/a.txt")
@@ -602,6 +583,54 @@ def test_list_files_offset_is_echoed_back(mock_manager_cls, mock_get_workspace):
)
def test_upload_virus_scan_infrastructure_error_returns_500(mocker):
"""VirusScanError (ClamAV outage) should return 500, not 409."""
from backend.api.features.store.exceptions import VirusScanError
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(
side_effect=VirusScanError("ClamAV connection refused"),
)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload()
assert response.status_code == 500
def test_get_storage_usage_returns_tier_based_limit(mocker):
"""get_storage_usage should return the user's tier-based limit, not a static config."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=_make_workspace(),
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=100 * 1024 * 1024, # 100 MB used
)
mocker.patch(
"backend.api.features.workspace.routes.count_workspace_files",
return_value=5,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_storage_limit_bytes",
return_value=1024 * 1024 * 1024, # 1 GB (PRO tier)
)
response = client.get("/storage/usage")
assert response.status_code == 200
data = response.json()
assert data["limit_bytes"] == 1024 * 1024 * 1024
assert data["used_bytes"] == 100 * 1024 * 1024
assert data["file_count"] == 5
# -- _sanitize_filename_for_header tests --

View File

@@ -34,6 +34,7 @@ import backend.api.features.oauth
import backend.api.features.otto.routes
import backend.api.features.platform_linking.routes
import backend.api.features.postmark.postmark
import backend.api.features.push.routes as push_routes
import backend.api.features.store.model
import backend.api.features.store.routes
import backend.api.features.v1
@@ -41,6 +42,7 @@ import backend.api.features.workspace.routes as workspace_routes
import backend.data.block
import backend.data.db
import backend.data.graph
import backend.data.redis_client
import backend.data.user
import backend.integrations.webhooks.utils
import backend.util.service
@@ -95,6 +97,8 @@ async def lifespan_context(app: fastapi.FastAPI):
verify_auth_settings()
await backend.data.db.connect()
# Eager connect to fail-fast if Redis is unreachable.
await backend.data.redis_client.get_redis_async()
# Configure thread pool for FastAPI sync operation performance
# CRITICAL: FastAPI automatically runs ALL sync functions in this thread pool:
@@ -146,7 +150,18 @@ async def lifespan_context(app: fastapi.FastAPI):
except Exception as e:
logger.warning(f"Error shutting down workspace storage: {e}")
await backend.data.db.disconnect()
# Each cleanup is wrapped so one failure doesn't block the rest. The
# Redis close in particular silences asyncio's "Unclosed ClusterNode"
# GC warning at interpreter shutdown.
try:
await backend.data.redis_client.disconnect_async()
except Exception:
logger.warning("redis_client.disconnect_async failed", exc_info=True)
try:
await backend.data.db.disconnect()
except Exception:
logger.warning("db.disconnect failed", exc_info=True)
def custom_generate_unique_id(route: APIRoute):
@@ -379,6 +394,11 @@ app.include_router(
tags=["oauth"],
prefix="/api/oauth",
)
app.include_router(
push_routes.router,
tags=["push"],
prefix="/api/push",
)
app.include_router(
backend.api.features.platform_linking.routes.router,
tags=["platform-linking"],

View File

@@ -1,4 +1,3 @@
import asyncio
import logging
from contextlib import asynccontextmanager
from typing import Protocol
@@ -17,14 +16,12 @@ from backend.api.model import (
WSSubscribeGraphExecutionsRequest,
)
from backend.api.utils.cors import build_cors_params
from backend.data.execution import AsyncRedisExecutionEventBus
from backend.data.notification_bus import AsyncRedisNotificationEventBus
from backend.data import db, redis_client
from backend.data.user import DEFAULT_USER_ID
from backend.monitoring.instrumentation import (
instrument_fastapi,
update_websocket_connections,
)
from backend.util.retry import continuous_retry
from backend.util.service import AppProcess
from backend.util.settings import AppEnvironment, Config, Settings
@@ -34,10 +31,24 @@ settings = Settings()
@asynccontextmanager
async def lifespan(app: FastAPI):
manager = get_connection_manager()
fut = asyncio.create_task(event_broadcaster(manager))
fut.add_done_callback(lambda _: logger.info("Event broadcaster stopped"))
yield
# Prisma is needed to resolve graph_id from graph_exec_id on subscribe.
await db.connect()
# Eager connect to fail-fast if Redis is unreachable.
await redis_client.get_redis_async()
try:
yield
finally:
# Each cleanup is wrapped so one failure doesn't block the rest. The
# Redis close silences asyncio's "Unclosed ClusterNode" GC warning at
# interpreter shutdown.
try:
await redis_client.disconnect_async()
except Exception:
logger.warning("redis_client.disconnect_async failed", exc_info=True)
try:
await db.disconnect()
except Exception:
logger.warning("db.disconnect failed", exc_info=True)
docs_url = "/docs" if settings.config.app_env == AppEnvironment.LOCAL else None
@@ -61,31 +72,6 @@ def get_connection_manager():
return _connection_manager
@continuous_retry()
async def event_broadcaster(manager: ConnectionManager):
execution_bus = AsyncRedisExecutionEventBus()
notification_bus = AsyncRedisNotificationEventBus()
try:
async def execution_worker():
async for event in execution_bus.listen("*"):
await manager.send_execution_update(event)
async def notification_worker():
async for notification in notification_bus.listen("*"):
await manager.send_notification(
user_id=notification.user_id,
payload=notification.payload,
)
await asyncio.gather(execution_worker(), notification_worker())
finally:
# Ensure PubSub connections are closed on any exit to prevent leaks
await execution_bus.close()
await notification_bus.close()
async def authenticate_websocket(websocket: WebSocket) -> str:
if not settings.config.enable_auth:
return DEFAULT_USER_ID
@@ -297,6 +283,21 @@ async def websocket_router(
).model_dump_json()
)
continue
except ValueError as e:
logger.warning(
"Subscription rejected for user #%s on '%s': %s",
user_id,
message.method.value,
e,
)
await websocket.send_text(
WSMessage(
method=WSMethod.ERROR,
success=False,
error=str(e),
).model_dump_json()
)
continue
except Exception as e:
logger.error(
f"Error while handling '{message.method.value}' message "
@@ -321,9 +322,13 @@ async def websocket_router(
)
except WebSocketDisconnect:
manager.disconnect_socket(websocket, user_id=user_id)
logger.debug("WebSocket client disconnected")
except Exception:
logger.exception(f"Unexpected error in websocket_router for user #{user_id}")
finally:
# Always release subscription pumps + Redis connections, regardless of how
# the loop exited — otherwise non-WebSocketDisconnect failures leak both.
await manager.disconnect_socket(websocket, user_id=user_id)
update_websocket_connections(user_id, -1)

View File

@@ -44,9 +44,12 @@ def test_websocket_server_uses_cors_helper(mocker) -> None:
"backend.api.ws_api.build_cors_params", return_value=cors_params
)
with override_config(
settings, "backend_cors_allow_origins", cors_params["allow_origins"]
), override_config(settings, "app_env", AppEnvironment.LOCAL):
with (
override_config(
settings, "backend_cors_allow_origins", cors_params["allow_origins"]
),
override_config(settings, "app_env", AppEnvironment.LOCAL),
):
WebsocketServer().run()
build_cors.assert_called_once_with(
@@ -65,9 +68,12 @@ def test_websocket_server_uses_cors_helper(mocker) -> None:
def test_websocket_server_blocks_localhost_in_production(mocker) -> None:
mocker.patch("backend.api.ws_api.uvicorn.run")
with override_config(
settings, "backend_cors_allow_origins", ["http://localhost:3000"]
), override_config(settings, "app_env", AppEnvironment.PRODUCTION):
with (
override_config(
settings, "backend_cors_allow_origins", ["http://localhost:3000"]
),
override_config(settings, "app_env", AppEnvironment.PRODUCTION),
):
with pytest.raises(ValueError):
WebsocketServer().run()
@@ -290,7 +296,232 @@ async def test_handle_unsubscribe_missing_data(
message=message,
)
mock_manager._unsubscribe.assert_not_called()
mock_manager.unsubscribe_graph_exec.assert_not_called()
mock_websocket.send_text.assert_called_once()
assert '"method":"error"' in mock_websocket.send_text.call_args[0][0]
assert '"success":false' in mock_websocket.send_text.call_args[0][0]
# ---------- Per-graph subscribe branch ----------
@pytest.mark.asyncio
async def test_handle_subscribe_graph_execs_branch(
mock_websocket: AsyncMock, mock_manager: AsyncMock
) -> None:
"""The SUBSCRIBE_GRAPH_EXECS branch must route to subscribe_graph_execs,
not subscribe_graph_exec — regression guard for the aggregate channel."""
message = WSMessage(
method=WSMethod.SUBSCRIBE_GRAPH_EXECS,
data={"graph_id": "graph-abc"},
)
mock_manager.subscribe_graph_execs.return_value = (
"user-1|graph#graph-abc|executions"
)
await handle_subscribe(
connection_manager=cast(ConnectionManager, mock_manager),
websocket=cast(WebSocket, mock_websocket),
user_id="user-1",
message=message,
)
mock_manager.subscribe_graph_execs.assert_called_once_with(
user_id="user-1",
graph_id="graph-abc",
websocket=mock_websocket,
)
mock_manager.subscribe_graph_exec.assert_not_called()
mock_websocket.send_text.assert_called_once()
assert (
'"method":"subscribe_graph_executions"'
in mock_websocket.send_text.call_args[0][0]
)
assert '"success":true' in mock_websocket.send_text.call_args[0][0]
@pytest.mark.asyncio
async def test_handle_subscribe_rejects_unrelated_method(
mock_websocket: AsyncMock, mock_manager: AsyncMock
) -> None:
"""handle_subscribe must raise for methods that aren't SUBSCRIBE_*."""
import pytest as _pytest
message = WSMessage(
method=WSMethod.HEARTBEAT,
data={"graph_exec_id": "x"},
)
with _pytest.raises(ValueError):
await handle_subscribe(
connection_manager=cast(ConnectionManager, mock_manager),
websocket=cast(WebSocket, mock_websocket),
user_id="user-1",
message=message,
)
# ---------- authenticate_websocket branches ----------
@pytest.mark.asyncio
async def test_authenticate_websocket_missing_token_closes_4001(mocker) -> None:
from backend.api.ws_api import authenticate_websocket
mocker.patch.object(settings.config, "enable_auth", True)
ws = AsyncMock(spec=WebSocket)
ws.query_params = {}
user_id = await authenticate_websocket(ws)
ws.close.assert_awaited_once()
assert ws.close.call_args.kwargs["code"] == 4001
assert user_id == ""
@pytest.mark.asyncio
async def test_authenticate_websocket_invalid_token_closes_4003(mocker) -> None:
from backend.api.ws_api import authenticate_websocket
mocker.patch.object(settings.config, "enable_auth", True)
mocker.patch(
"backend.api.ws_api.parse_jwt_token", side_effect=ValueError("bad token")
)
ws = AsyncMock(spec=WebSocket)
ws.query_params = {"token": "abc"}
user_id = await authenticate_websocket(ws)
ws.close.assert_awaited_once()
assert ws.close.call_args.kwargs["code"] == 4003
assert user_id == ""
@pytest.mark.asyncio
async def test_authenticate_websocket_missing_sub_closes_4002(mocker) -> None:
from backend.api.ws_api import authenticate_websocket
mocker.patch.object(settings.config, "enable_auth", True)
mocker.patch("backend.api.ws_api.parse_jwt_token", return_value={"not_sub": "x"})
ws = AsyncMock(spec=WebSocket)
ws.query_params = {"token": "abc"}
user_id = await authenticate_websocket(ws)
ws.close.assert_awaited_once()
assert ws.close.call_args.kwargs["code"] == 4002
assert user_id == ""
@pytest.mark.asyncio
async def test_authenticate_websocket_happy_path_returns_sub(mocker) -> None:
from backend.api.ws_api import authenticate_websocket
mocker.patch.object(settings.config, "enable_auth", True)
mocker.patch("backend.api.ws_api.parse_jwt_token", return_value={"sub": "user-X"})
ws = AsyncMock(spec=WebSocket)
ws.query_params = {"token": "abc"}
user_id = await authenticate_websocket(ws)
assert user_id == "user-X"
@pytest.mark.asyncio
async def test_authenticate_websocket_auth_disabled_returns_default(mocker) -> None:
from backend.api.ws_api import authenticate_websocket
mocker.patch.object(settings.config, "enable_auth", False)
ws = AsyncMock(spec=WebSocket)
ws.query_params = {}
user_id = await authenticate_websocket(ws)
assert user_id == DEFAULT_USER_ID
# ---------- get_connection_manager singleton ----------
def test_get_connection_manager_singleton() -> None:
"""Repeated calls must return the same ConnectionManager — the WS router
depends on a single process-wide subscription table."""
import backend.api.ws_api as ws_api
ws_api._connection_manager = None
a = ws_api.get_connection_manager()
b = ws_api.get_connection_manager()
assert a is b
assert isinstance(a, ConnectionManager)
# ---------- Lifespan: Prisma connect/disconnect ----------
@pytest.mark.asyncio
async def test_lifespan_connects_and_disconnects_prisma(mocker) -> None:
"""Lifespan must both connect() and disconnect() db — the subscribe path
resolves graph_id via Prisma so a missing connect() is the regression bug."""
from fastapi import FastAPI
from backend.api.ws_api import lifespan
mock_db = mocker.patch("backend.api.ws_api.db")
mock_db.connect = AsyncMock()
mock_db.disconnect = AsyncMock()
dummy_app = FastAPI()
async with lifespan(dummy_app):
mock_db.connect.assert_awaited_once()
mock_db.disconnect.assert_not_called()
mock_db.disconnect.assert_awaited_once()
@pytest.mark.asyncio
async def test_lifespan_still_disconnects_on_exception(mocker) -> None:
"""If the app raises inside the yield, Prisma must still disconnect."""
from fastapi import FastAPI
from backend.api.ws_api import lifespan
mock_db = mocker.patch("backend.api.ws_api.db")
mock_db.connect = AsyncMock()
mock_db.disconnect = AsyncMock()
dummy_app = FastAPI()
class _Boom(Exception):
pass
with pytest.raises(_Boom):
async with lifespan(dummy_app):
raise _Boom()
mock_db.disconnect.assert_awaited_once()
# ---------- Health endpoint ----------
def test_health_endpoint_returns_ok() -> None:
# TestClient triggers lifespan — stub it out so Prisma isn't hit.
from contextlib import asynccontextmanager
from fastapi.testclient import TestClient
import backend.api.ws_api as ws_api
@asynccontextmanager
async def _noop_lifespan(app):
yield
# Replace the app-level lifespan temporarily.
real_router_lifespan = ws_api.app.router.lifespan_context
ws_api.app.router.lifespan_context = _noop_lifespan
try:
with TestClient(ws_api.app) as client:
r = client.get("/")
assert r.status_code == 200
assert r.json() == {"status": "healthy"}
finally:
ws_api.app.router.lifespan_context = real_router_lifespan

View File

@@ -38,6 +38,7 @@ def main(**kwargs):
from backend.api.rest_api import AgentServer
from backend.api.ws_api import WebsocketServer
from backend.copilot.bot.app import CoPilotChatBridge
from backend.copilot.executor.manager import CoPilotExecutor
from backend.data.db_manager import DatabaseManager
from backend.executor import ExecutionManager, Scheduler
@@ -53,6 +54,7 @@ def main(**kwargs):
AgentServer(),
ExecutionManager(),
CoPilotExecutor(),
CoPilotChatBridge(),
**kwargs,
)

View File

@@ -96,27 +96,64 @@ class BlockCategory(Enum):
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
# RUN : cost_amount credits per run.
# BYTE : cost_amount credits per byte of input data.
# SECOND : cost_amount credits per cost_divisor walltime seconds.
# ITEMS : cost_amount credits per cost_divisor items (from stats).
# COST_USD : cost_amount credits per USD of stats.provider_cost.
# TOKENS : per-(model, provider) rate table lookup; see TOKEN_COST.
RUN = "run"
BYTE = "byte"
SECOND = "second"
ITEMS = "items"
COST_USD = "cost_usd"
TOKENS = "tokens"
@property
def is_dynamic(self) -> bool:
"""Real charge is computed post-flight from stats.
Dynamic types (SECOND/ITEMS/COST_USD/TOKENS) return 0 pre-flight and
settle against stats via charge_reconciled_usage once the block runs.
"""
return self in _DYNAMIC_COST_TYPES
_DYNAMIC_COST_TYPES: frozenset[BlockCostType] = frozenset(
{
BlockCostType.SECOND,
BlockCostType.ITEMS,
BlockCostType.COST_USD,
BlockCostType.TOKENS,
}
)
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
# cost_divisor: interpret cost_amount as "credits per cost_divisor units".
# Only meaningful for SECOND / ITEMS. TOKENS routes through TOKEN_COST
# rate tables (per-model input/output/cache pricing) and ignores
# cost_divisor entirely. Defaults to 1 so existing RUN/BYTE entries stay
# point-wise. Example: cost_amount=1, cost_divisor=10 under SECOND means
# "1 credit per 10 seconds of walltime".
cost_divisor: int = 1
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
cost_divisor: int = 1,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
cost_divisor=max(1, cost_divisor),
**data,
)
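
A minimal sketch of the unit arithmetic the comments above describe, not the real `charge_reconciled_usage` implementation; in particular, whether settlement rounds fractional units up, down, or pro-rates them is an assumption here (ceiling is used purely for illustration).

```python
import math

def sketch_dynamic_charge(cost_amount: int, cost_divisor: int, units: float) -> int:
    """Credits for a SECOND/ITEMS-style cost: cost_amount per cost_divisor units."""
    return cost_amount * math.ceil(units / max(1, cost_divisor))

# cost_amount=1, cost_divisor=10 under SECOND means "1 credit per 10 seconds":
assert sketch_dynamic_charge(1, 10, 25.0) == 3  # 25 s of walltime -> 3 credits
assert sketch_dynamic_charge(1, 10, 9.0) == 1
```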
@@ -333,6 +370,8 @@ class BlockSchema(BaseModel):
"credentials_provider": [config.get("provider", "google")],
"credentials_types": [config.get("type", "oauth2")],
"credentials_scopes": config.get("scopes"),
"is_auto_credential": True,
"input_field_name": info["field_name"],
}
result[kwarg_name] = CredentialsFieldInfo.model_validate(
auto_schema, by_alias=True
@@ -443,19 +482,6 @@ class BlockWebhookConfig(BlockManualWebhookConfig):
class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
_optimized_description: ClassVar[str | None] = None
def extra_runtime_cost(self, execution_stats: NodeExecutionStats) -> int:
"""Return extra runtime cost to charge after this block run completes.
Called by the executor after a block finishes with COMPLETED status.
The return value is the number of additional base-cost credits to
charge beyond the single credit already collected by charge_usage
at the start of execution. Defaults to 0 (no extra charges).
Override in blocks (e.g. OrchestratorBlock) that make multiple LLM
calls within one run and should be billed per call.
"""
return 0
def __init__(
self,
id: str = "",
@@ -762,6 +788,61 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
block_id=self.id,
)
# Ensure auto-credential kwargs are present before we hand off to
# run(). A missing auto-credential means the upstream field (e.g.
# a Google Drive picker) didn't embed a _credentials_id, or the
# executor couldn't resolve it. Without this guard, run() would
# crash with a TypeError (missing required kwarg) or an opaque
# AttributeError deep inside the provider SDK.
#
# Only raise when the field is ALSO not populated in input_data.
# ``_acquire_auto_credentials`` intentionally skips setting the
# kwarg in two legitimate cases — ``_credentials_id`` is ``None``
# (chained from upstream) or the field is missing from
# ``input_data`` at prep time (connected from upstream block).
# In both cases the upstream block is expected to populate the
# field value by execute time; raising here would break the
# documented ``AgentGoogleDriveFileInputBlock`` chaining pattern.
# Dry-run skips because the executor intentionally runs blocks
# without resolved creds for schema validation.
if not is_dry_run:
for (
kwarg_name,
info,
) in self.input_schema.get_auto_credentials_fields().items():
kwargs.setdefault(kwarg_name, None)
if kwargs[kwarg_name] is not None:
continue
# Upstream-chained pattern: the field was populated by a
# prior node (e.g. AgentGoogleDriveFileInputBlock) whose
# output carries a resolved ``_credentials_id``.
# ``_acquire_auto_credentials`` deliberately doesn't set
# the kwarg in that case because the value isn't available
# at prep time; the executor fills it in before we reach
# ``_execute``. Trust it if the ``_credentials_id`` KEY
# is present — its value may be explicitly ``None`` in
# the chained case (see sentry thread
# PRRT_kwDOJKSTjM58sJfA). Checking truthiness here would
# falsely preempt run() for every valid chained graph
# that ships ``_credentials_id=None`` in the picker
# object. Mirror ``_acquire_auto_credentials``'s own
# skip rule, which treats ``cred_id is None`` as a
# chained-skip signal.
field_name = info["field_name"]
field_value = input_data.get(field_name)
if isinstance(field_value, dict) and "_credentials_id" in field_value:
continue
raise BlockExecutionError(
message=(
f"Missing credentials for '{kwarg_name}'. "
"Select a file via the picker (which carries "
"its credentials), or connect credentials for "
"this block."
),
block_name=self.name,
block_id=self.id,
)
# Use the validated input data
async for output_name, output_data in self.run(
self.input_schema(**{k: v for k, v in input_data.items() if v is not None}),

View File

@@ -0,0 +1,56 @@
"""Provider descriptions for services that don't yet have their own ``_config.py``.
Every provider in ``_STATIC_PROVIDER_CONFIGS`` below is declared here because
its block code currently lives either in a single shared file (e.g. the 8 LLM
providers in ``blocks/llm.py``) or in a single-file block that has no dedicated
directory (e.g. ``blocks/reddit.py``).
This file gets loaded by the block auto-loader in ``blocks/__init__.py``
(``rglob("*.py")`` picks it up) so the ``ProviderBuilder(...).build()`` calls
run at startup and populate ``AutoRegistry`` before the first API request.
**Migration path:** when a provider graduates into its own directory with a
proper ``_config.py`` (following the SDK pattern, e.g. ``blocks/linear/_config.py``),
delete its entry here. The metadata will still be served by
``GET /integrations/providers`` — it just moves to live next to the provider's
auth and webhook config.
"""
from backend.data.model import CredentialsType
from backend.sdk import ProviderBuilder
_STATIC_PROVIDER_CONFIGS: dict[str, tuple[str, tuple[CredentialsType, ...]]] = {
# LLM providers that share blocks/llm.py
"aiml_api": ("Unified access to 100+ AI models", ("api_key",)),
"anthropic": ("Claude language models", ("api_key",)),
"groq": ("Fast LLM inference", ("api_key",)),
"llama_api": ("Llama model hosting", ("api_key",)),
"ollama": ("Run open-source LLMs locally", ("api_key",)),
"open_router": ("One API for every LLM", ("api_key",)),
"openai": ("GPT models and embeddings", ("api_key",)),
"v0": ("AI-generated UI components", ("api_key",)),
# Single-file providers (one provider per standalone blocks/*.py file)
"d_id": ("AI avatar and video generation", ("api_key",)),
"e2b": ("Sandboxed code execution", ("api_key",)),
"google_maps": ("Places, directions, geocoding", ("api_key",)),
"http": ("Generic HTTP requests", ("api_key", "host_scoped")),
"ideogram": ("Text-to-image generation", ("api_key",)),
"medium": ("Publish stories and posts", ("api_key",)),
"mem0": ("Long-term memory for agents", ("api_key",)),
"openweathermap": ("Weather data and forecasts", ("api_key",)),
"pinecone": ("Managed vector database", ("api_key",)),
"reddit": ("Subreddits, posts, and comments", ("oauth2",)),
"revid": ("AI-generated short-form video", ("api_key",)),
"screenshotone": ("Automated website screenshots", ("api_key",)),
"smtp": ("Send email via SMTP", ("user_password",)),
"unreal_speech": ("Low-cost text-to-speech", ("api_key",)),
"webshare_proxy": ("Rotating proxies for scraping", ("api_key",)),
}
for _name, (_description, _auth_types) in _STATIC_PROVIDER_CONFIGS.items():
(
ProviderBuilder(_name)
.with_description(_description)
.with_supported_auth_types(*_auth_types)
.build()
)
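
As a hedged illustration of the migration path the docstring describes, a graduated provider would drop its row above and register from its own `_config.py`, shaped like the Apollo and Airtable configs elsewhere in this diff (the file below is hypothetical, not one this change adds):

```python
# blocks/reddit/_config.py (illustrative); would replace the "reddit" row above.
from backend.sdk import ProviderBuilder

reddit = (
    ProviderBuilder("reddit")
    .with_description("Subreddits, posts, and comments")
    .with_supported_auth_types("oauth2")
    .build()
)
```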

View File

@@ -171,7 +171,10 @@ class AgentExecutorBlock(Block):
)
self.merge_stats(
NodeExecutionStats(
extra_cost=event.stats.cost if event.stats else 0,
# Sub-graph already debited each of its own nodes; we
# roll up its total so graph_stats.cost reflects the
# full sub-graph spend.
reconciled_cost_delta=(event.stats.cost if event.stats else 0),
extra_steps=event.stats.node_exec_count if event.stats else 0,
)
)

View File

@@ -4,11 +4,17 @@ Shared configuration for all AgentMail blocks.
from agentmail import AsyncAgentMail
from backend.sdk import APIKeyCredentials, ProviderBuilder, SecretStr
from backend.sdk import APIKeyCredentials, BlockCostType, ProviderBuilder, SecretStr
# AgentMail is in beta with no published paid tier yet, but its ~37 blocks
# have no BLOCK_COSTS entry, so they currently execute wallet-free.
# 1 cr/call is a conservative interim floor so no AgentMail work leaks
# past billing. Revisit once AgentMail publishes usage-based pricing.
agent_mail = (
ProviderBuilder("agent_mail")
.with_description("Managed email accounts for agents")
.with_api_key("AGENTMAIL_API_KEY", "AgentMail API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -10,6 +10,7 @@ from ._webhook import AirtableWebhookManager
# Configure the Airtable provider with API key authentication
airtable = (
ProviderBuilder("airtable")
.with_description("Bases, tables, and records")
.with_api_key("AIRTABLE_API_KEY", "Airtable Personal Access Token")
.with_webhook_manager(AirtableWebhookManager)
.with_base_cost(1, BlockCostType.RUN)

View File

@@ -0,0 +1,15 @@
"""Provider registration for Apollo.
Registers the provider description shown in the settings integrations UI.
Apollo doesn't use a full :class:`ProviderBuilder` chain (auth is set up in
``_auth.py``), so this file only declares metadata.
"""
from backend.sdk import ProviderBuilder
apollo = (
ProviderBuilder("apollo")
.with_description("Sales intelligence and prospecting")
.with_supported_auth_types("api_key")
.build()
)

View File

@@ -7,6 +7,7 @@ import logging
import uuid
from typing import TYPE_CHECKING, Any
from pydantic import field_validator
from typing_extensions import TypedDict # Needed for Python <3.12 compatibility
from backend.blocks._base import (
@@ -17,6 +18,7 @@ from backend.blocks._base import (
BlockSchemaOutput,
)
from backend.copilot.permissions import (
DISABLED_LEGACY_TOOL_NAMES,
CopilotPermissions,
ToolName,
all_known_tool_names,
@@ -198,6 +200,13 @@ class AutoPilotBlock(Block):
# timeouts internally; wrapping with asyncio.timeout corrupts the
# SDK's internal stream (see service.py CRITICAL comment).
@field_validator("tools", mode="before")
@classmethod
def strip_disabled_legacy_tools(cls, tools: Any) -> Any:
if not isinstance(tools, list):
return tools
return [tool for tool in tools if tool not in DISABLED_LEGACY_TOOL_NAMES]
class Output(BlockSchemaOutput):
"""Output schema for the AutoPilot block."""

View File

@@ -62,6 +62,14 @@ class TestBuildAndValidatePermissions:
with pytest.raises(ValidationError, match="not_a_real_tool"):
_make_input(tools=["not_a_real_tool"])
async def test_disabled_legacy_tool_is_accepted_and_removed(self):
inp = _make_input(tools=["ask_question", "run_block"])
result = await _build_and_validate_permissions(inp)
assert inp.tools == ["run_block"]
assert isinstance(result, CopilotPermissions)
assert result.tools == ["run_block"]
async def test_valid_block_name_accepted(self):
mock_block_cls = MagicMock()
mock_block_cls.return_value.name = "HTTP Request"

View File

@@ -0,0 +1,26 @@
"""Shared provider config for Ayrshare social-media blocks.
The "credential" exposed to blocks is the **per-user Ayrshare profile key**,
not the org-level ``AYRSHARE_API_KEY``. Profile keys are provisioned per
user by :class:`~backend.integrations.managed_providers.ayrshare.AyrshareManagedProvider`
and stored in the normal credentials list with ``is_managed=True``, so every
Ayrshare block fits the standard credential flow:
credentials: CredentialsMetaInput = ayrshare.credentials_field(...)
``run_block`` / ``resolve_block_credentials`` take care of the rest.
``with_managed_api_key()`` registers ``api_key`` as a supported auth type
without the env-var-backed default credential that ``with_api_key()`` would
create — the org-level ``AYRSHARE_API_KEY`` is the admin key and must never
reach a block as a "profile key".
"""
from backend.sdk import ProviderBuilder
ayrshare = (
ProviderBuilder("ayrshare")
.with_description("Post to every social network")
.with_managed_api_key()
.build()
)

View File

@@ -0,0 +1,18 @@
from backend.sdk import BlockCost, BlockCostType
# Ayrshare is a subscription proxy ($149/mo Business). Per-post credit charges
# keep a single heavy user from consuming a disproportionate share of that fixed
# cost, and align charges with the upload cost of each post variant.
# cost_filter matches on input_data.is_video BEFORE run() executes, so the flag
# has to be correct at input-eval time. Video-only platforms (YouTube, Snapchat)
# override the base default to True; platforms that accept both (TikTok, etc.)
# rely on the caller setting is_video explicitly for accurate billing.
# First match wins in block_usage_cost, so list the video tier first.
AYRSHARE_POST_COSTS = (
BlockCost(
cost_amount=5, cost_type=BlockCostType.RUN, cost_filter={"is_video": True}
),
BlockCost(
cost_amount=2, cost_type=BlockCostType.RUN, cost_filter={"is_video": False}
),
)
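
A hedged sketch of the first-match-wins selection the ordering comment relies on; this is not the real `block_usage_cost` matcher (its exact filter semantics aren't shown in this diff), just an illustration of why the video tier must be listed first.

```python
def sketch_pick_post_cost(costs, input_data: dict) -> int:
    """Return cost_amount of the first BlockCost whose cost_filter matches input_data."""
    for cost in costs:
        if all(input_data.get(k) == v for k, v in cost.cost_filter.items()):
            return cost.cost_amount
    return 0

# With AYRSHARE_POST_COSTS ordered as above:
#   {"is_video": True}  -> 5 credits (video tier matches first)
#   {"is_video": False} -> 2 credits
```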

View File

@@ -4,22 +4,25 @@ from typing import Optional
from pydantic import BaseModel, Field
from backend.blocks._base import BlockSchemaInput
from backend.data.model import SchemaField, UserIntegrations
from backend.data.model import CredentialsMetaInput, SchemaField
from backend.integrations.ayrshare import AyrshareClient
from backend.util.clients import get_database_manager_async_client
from backend.util.exceptions import MissingConfigError
async def get_profile_key(user_id: str):
user_integrations: UserIntegrations = (
await get_database_manager_async_client().get_user_integrations(user_id)
)
return user_integrations.managed_credentials.ayrshare_profile_key
from ._config import ayrshare
class BaseAyrshareInput(BlockSchemaInput):
"""Base input model for Ayrshare social media posts with common fields."""
credentials: CredentialsMetaInput = ayrshare.credentials_field(
description=(
"Ayrshare profile credential. AutoGPT provisions this managed "
"credential automatically — the user does not create it. After "
"it's in place, the user links each social account via the "
"Ayrshare SSO popup in the Builder."
),
)
post: str = SchemaField(
description="The post text to be published", default="", advanced=False
)
@@ -29,7 +32,9 @@ class BaseAyrshareInput(BlockSchemaInput):
advanced=False,
)
is_video: bool = SchemaField(
description="Whether the media is a video", default=False, advanced=True
description="Whether the media is a video. Set to True when uploading a video so billing applies the video tier.",
default=False,
advanced=True,
)
schedule_date: Optional[datetime] = SchemaField(
description="UTC datetime for scheduling (YYYY-MM-DDThh:mm:ssZ)",

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToBlueskyBlock(Block):
"""Block for posting to Bluesky with Bluesky-specific options."""
@@ -57,16 +61,10 @@ class PostToBlueskyBlock(Block):
self,
input_data: "PostToBlueskyBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Bluesky with Bluesky-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -106,7 +104,7 @@ class PostToBlueskyBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
bluesky_options=bluesky_options if bluesky_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,21 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import (
BaseAyrshareInput,
CarouselItem,
create_ayrshare_client,
get_profile_key,
)
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, CarouselItem, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToFacebookBlock(Block):
"""Block for posting to Facebook with Facebook-specific options."""
@@ -120,15 +119,10 @@ class PostToFacebookBlock(Block):
self,
input_data: "PostToFacebookBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Facebook with Facebook-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -204,7 +198,7 @@ class PostToFacebookBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
facebook_options=facebook_options if facebook_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToGMBBlock(Block):
"""Block for posting to Google My Business with GMB-specific options."""
@@ -110,14 +114,13 @@ class PostToGMBBlock(Block):
)
async def run(
self, input_data: "PostToGMBBlock.Input", *, user_id: str, **kwargs
self,
input_data: "PostToGMBBlock.Input",
*,
credentials: APIKeyCredentials,
**kwargs
) -> BlockOutput:
"""Post to Google My Business with GMB-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -202,7 +205,7 @@ class PostToGMBBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
gmb_options=gmb_options if gmb_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -2,22 +2,21 @@ from typing import Any
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import (
BaseAyrshareInput,
InstagramUserTag,
create_ayrshare_client,
get_profile_key,
)
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, InstagramUserTag, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToInstagramBlock(Block):
"""Block for posting to Instagram with Instagram-specific options."""
@@ -112,15 +111,10 @@ class PostToInstagramBlock(Block):
self,
input_data: "PostToInstagramBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Instagram with Instagram-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -241,7 +235,7 @@ class PostToInstagramBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
instagram_options=instagram_options if instagram_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToLinkedInBlock(Block):
"""Block for posting to LinkedIn with LinkedIn-specific options."""
@@ -112,15 +116,10 @@ class PostToLinkedInBlock(Block):
self,
input_data: "PostToLinkedInBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to LinkedIn with LinkedIn-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -214,7 +213,7 @@ class PostToLinkedInBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
linkedin_options=linkedin_options if linkedin_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,21 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import (
BaseAyrshareInput,
PinterestCarouselOption,
create_ayrshare_client,
get_profile_key,
)
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, PinterestCarouselOption, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToPinterestBlock(Block):
"""Block for posting to Pinterest with Pinterest-specific options."""
@@ -92,15 +91,10 @@ class PostToPinterestBlock(Block):
self,
input_data: "PostToPinterestBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Pinterest with Pinterest-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -206,7 +200,7 @@ class PostToPinterestBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
pinterest_options=pinterest_options if pinterest_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToRedditBlock(Block):
"""Block for posting to Reddit."""
@@ -35,12 +39,12 @@ class PostToRedditBlock(Block):
)
async def run(
self, input_data: "PostToRedditBlock.Input", *, user_id: str, **kwargs
self,
input_data: "PostToRedditBlock.Input",
*,
credentials: APIKeyCredentials,
**kwargs
) -> BlockOutput:
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured."
@@ -61,7 +65,7 @@ class PostToRedditBlock(Block):
random_post=input_data.random_post,
random_media_url=input_data.random_media_url,
notes=input_data.notes,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToSnapchatBlock(Block):
"""Block for posting to Snapchat with Snapchat-specific options."""
@@ -31,6 +35,14 @@ class PostToSnapchatBlock(Block):
advanced=False,
)
# Snapchat is video-only; override the base default so the @cost filter
# selects the 5-credit video tier instead of the 2-credit image tier.
is_video: bool = SchemaField(
description="Whether the media is a video (always True for Snapchat)",
default=True,
advanced=True,
)
# Snapchat-specific options
story_type: str = SchemaField(
description="Type of Snapchat content: 'story' (24-hour Stories), 'saved_story' (Saved Stories), or 'spotlight' (Spotlight posts)",
@@ -62,15 +74,10 @@ class PostToSnapchatBlock(Block):
self,
input_data: "PostToSnapchatBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Snapchat with Snapchat-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -121,7 +128,7 @@ class PostToSnapchatBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
snapchat_options=snapchat_options if snapchat_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToTelegramBlock(Block):
"""Block for posting to Telegram with Telegram-specific options."""
@@ -57,15 +61,10 @@ class PostToTelegramBlock(Block):
self,
input_data: "PostToTelegramBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Telegram with Telegram-specific validation."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -108,7 +107,7 @@ class PostToTelegramBlock(Block):
random_post=input_data.random_post,
random_media_url=input_data.random_media_url,
notes=input_data.notes,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToThreadsBlock(Block):
"""Block for posting to Threads with Threads-specific options."""
@@ -50,15 +54,10 @@ class PostToThreadsBlock(Block):
self,
input_data: "PostToThreadsBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to Threads with Threads-specific validation."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -103,7 +102,7 @@ class PostToThreadsBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
threads_options=threads_options if threads_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -2,15 +2,18 @@ from enum import Enum
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
class TikTokVisibility(str, Enum):
@@ -19,6 +22,7 @@ class TikTokVisibility(str, Enum):
FOLLOWERS = "followers"
@cost(*AYRSHARE_POST_COSTS)
class PostToTikTokBlock(Block):
"""Block for posting to TikTok with TikTok-specific options."""
@@ -113,14 +117,13 @@ class PostToTikTokBlock(Block):
)
async def run(
self, input_data: "PostToTikTokBlock.Input", *, user_id: str, **kwargs
self,
input_data: "PostToTikTokBlock.Input",
*,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to TikTok with TikTok-specific validation and options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -235,7 +238,7 @@ class PostToTikTokBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
tiktok_options=tiktok_options if tiktok_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -1,16 +1,20 @@
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
@cost(*AYRSHARE_POST_COSTS)
class PostToXBlock(Block):
"""Block for posting to X / Twitter with Twitter-specific options."""
@@ -115,15 +119,10 @@ class PostToXBlock(Block):
self,
input_data: "PostToXBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to X / Twitter with enhanced X-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -233,7 +232,7 @@ class PostToXBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
twitter_options=twitter_options if twitter_options else None,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -3,15 +3,18 @@ from typing import Any
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchemaOutput,
BlockType,
SchemaField,
cost,
)
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
from ._cost import AYRSHARE_POST_COSTS
from ._util import BaseAyrshareInput, create_ayrshare_client
class YouTubeVisibility(str, Enum):
@@ -20,6 +23,7 @@ class YouTubeVisibility(str, Enum):
UNLISTED = "unlisted"
@cost(*AYRSHARE_POST_COSTS)
class PostToYouTubeBlock(Block):
"""Block for posting to YouTube with YouTube-specific options."""
@@ -39,6 +43,14 @@ class PostToYouTubeBlock(Block):
advanced=False,
)
# YouTube is video-only; override the base default so the @cost filter
# selects the 5-credit video tier instead of the 2-credit image tier.
is_video: bool = SchemaField(
description="Whether the media is a video (always True for YouTube)",
default=True,
advanced=True,
)
# YouTube-specific required options
title: str = SchemaField(
description="Video title (max 100 chars, required). Cannot contain < or > characters.",
@@ -137,16 +149,10 @@ class PostToYouTubeBlock(Block):
self,
input_data: "PostToYouTubeBlock.Input",
*,
user_id: str,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Post to YouTube with YouTube-specific validation and options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
@@ -302,7 +308,7 @@ class PostToYouTubeBlock(Block):
random_media_url=input_data.random_media_url,
notes=input_data.notes,
youtube_options=youtube_options,
profile_key=profile_key.get_secret_value(),
profile_key=credentials.api_key.get_secret_value(),
)
yield "post_result", response
if response.postIds:

View File

@@ -7,6 +7,7 @@ from backend.sdk import BlockCostType, ProviderBuilder
# Configure the Meeting BaaS provider with API key authentication
baas = (
ProviderBuilder("baas")
.with_description("Meeting recording and transcription")
.with_api_key("MEETING_BAAS_API_KEY", "Meeting BaaS API Key")
.with_base_cost(5, BlockCostType.RUN) # Higher cost for meeting recording service
.build()

View File

@@ -4,21 +4,34 @@ Meeting BaaS bot (recording) blocks.
from typing import Optional
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockCost,
BlockCostType,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
cost,
)
from ._api import MeetingBaasAPI
from ._config import baas
# Meeting BaaS recording rate: $0.69 per hour.
_MEETING_BAAS_USD_PER_SECOND = 0.69 / 3600
# Join bills a flat 30 cr commit (covers median short meeting);
# FetchMeetingData bills the duration-scaled remainder from the
# `duration_seconds` field on the API response. Long meetings no
# longer under-bill.
@cost(BlockCost(cost_type=BlockCostType.RUN, cost_amount=30))
class BaasBotJoinMeetingBlock(Block):
"""
Deploy a bot immediately or at a scheduled start_time to join and record a meeting.
@@ -134,6 +147,7 @@ class BaasBotLeaveMeetingBlock(Block):
yield "left", left
@cost(BlockCost(cost_type=BlockCostType.COST_USD, cost_amount=150))
class BaasBotFetchMeetingDataBlock(Block):
"""
Pull MP4 URL, transcript & metadata for a completed meeting.
@@ -176,9 +190,21 @@ class BaasBotFetchMeetingDataBlock(Block):
include_transcripts=input_data.include_transcripts,
)
bot_meta = data.get("bot_data", {}).get("bot", {}) or {}
# Bill recording duration via COST_USD so multi-hour meetings
# scale past the Join block's flat 30 cr deposit.
duration_seconds = float(bot_meta.get("duration_seconds") or 0)
if duration_seconds > 0:
self.merge_stats(
NodeExecutionStats(
provider_cost=duration_seconds * _MEETING_BAAS_USD_PER_SECOND,
provider_cost_type="cost_usd",
)
)
yield "mp4_url", data.get("mp4", "")
yield "transcript", data.get("bot_data", {}).get("transcripts", [])
yield "metadata", data.get("bot_data", {}).get("bot", {})
yield "metadata", bot_meta
class BaasBotDeleteRecordingBlock(Block):
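
A rough worked example of the split billing scheme described in the comments above. This is a sketch under assumptions: the 150-credits-per-USD figure comes from the `@cost(BlockCost(cost_type=BlockCostType.COST_USD, cost_amount=150))` registration on FetchMeetingData, and the round-up conversion mirrors the `ceil(usd * cost_amount)` behaviour the COST_USD resolver is described as applying for the Claude Code block later in this diff.

```python
import math

USD_PER_SECOND = 0.69 / 3600     # mirrors _MEETING_BAAS_USD_PER_SECOND
duration_seconds = 90 * 60       # a 90-minute recording

fetch_usd = duration_seconds * USD_PER_SECOND   # 1.5 h * $0.69/h = $1.035
fetch_credits = math.ceil(fetch_usd * 150)      # 156 credits on FetchMeetingData
total_credits = 30 + fetch_credits              # plus Join's flat 30 cr deposit = 186
```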

View File

@@ -0,0 +1,86 @@
"""Unit tests for Meeting BaaS duration-based cost emission."""
from unittest.mock import AsyncMock, patch
import pytest
from pydantic import SecretStr
from backend.blocks.baas.bots import (
_MEETING_BAAS_USD_PER_SECOND,
BaasBotFetchMeetingDataBlock,
)
from backend.data.model import APIKeyCredentials, NodeExecutionStats
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="baas",
title="Mock BaaS API Key",
api_key=SecretStr("mock-baas-api-key"),
expires_at=None,
)
def test_usd_per_second_derives_from_published_rate():
"""$0.69/hour published rate → ~$0.000192/second."""
assert _MEETING_BAAS_USD_PER_SECOND == pytest.approx(0.69 / 3600)
@pytest.mark.asyncio
@pytest.mark.parametrize(
"duration_seconds, expected_usd",
[
(3600, 0.69), # 1 hour
(1800, 0.345), # 30 min
(0, None), # no recording → no emission
(None, None), # missing duration field → no emission
],
)
async def test_fetch_meeting_data_emits_duration_cost_usd(
duration_seconds, expected_usd
):
"""FetchMeetingData extracts duration_seconds from bot metadata and
emits provider_cost / cost_usd scaled by the published $0.69/hr rate.
Emission is skipped when duration is 0 or missing.
"""
block = BaasBotFetchMeetingDataBlock()
bot_meta = {"id": "bot-xyz"}
if duration_seconds is not None:
bot_meta["duration_seconds"] = duration_seconds
mock_api = AsyncMock()
mock_api.get_meeting_data.return_value = {
"mp4": "https://example/recording.mp4",
"bot_data": {"bot": bot_meta, "transcripts": []},
}
captured: list[NodeExecutionStats] = []
with (
patch("backend.blocks.baas.bots.MeetingBaasAPI", return_value=mock_api),
patch.object(block, "merge_stats", side_effect=captured.append),
):
outputs = []
async for name, val in block.run(
block.input_schema(
credentials={
"id": TEST_CREDENTIALS.id,
"provider": TEST_CREDENTIALS.provider,
"type": TEST_CREDENTIALS.type,
},
bot_id="bot-xyz",
include_transcripts=False,
),
credentials=TEST_CREDENTIALS,
):
outputs.append((name, val))
# Always yields the 3 outputs regardless of duration.
names = [n for n, _ in outputs]
assert "mp4_url" in names and "metadata" in names
if expected_usd is None:
assert captured == []
else:
assert len(captured) == 1
assert captured[0].provider_cost == pytest.approx(expected_usd)
assert captured[0].provider_cost_type == "cost_usd"

View File

@@ -2,7 +2,8 @@ from backend.sdk import BlockCostType, ProviderBuilder
bannerbear = (
ProviderBuilder("bannerbear")
.with_description("Auto-generate images and videos")
.with_api_key("BANNERBEAR_API_KEY", "Bannerbear API Key")
.with_base_cost(1, BlockCostType.RUN)
.with_base_cost(3, BlockCostType.RUN)
.build()
)

View File

@@ -433,7 +433,7 @@ class TestJinaEmbeddingBlockCostTracking:
class TestUnrealTextToSpeechBlockCostTracking:
@pytest.mark.asyncio
async def test_merge_stats_called_with_character_count(self):
"""provider_cost equals len(text) with type='characters'."""
"""provider_cost = len(text) * $0.000016 with type='cost_usd'."""
from backend.blocks.text_to_speech_block import TEST_CREDENTIALS as TTS_CREDS
from backend.blocks.text_to_speech_block import (
TEST_CREDENTIALS_INPUT as TTS_CREDS_INPUT,
@@ -461,12 +461,12 @@ class TestUnrealTextToSpeechBlockCostTracking:
mock_merge.assert_called_once()
stats = mock_merge.call_args[0][0]
assert stats.provider_cost == float(len(test_text))
assert stats.provider_cost_type == "characters"
assert stats.provider_cost == pytest.approx(len(test_text) * 0.000016)
assert stats.provider_cost_type == "cost_usd"
@pytest.mark.asyncio
async def test_empty_text_gives_zero_characters(self):
"""An empty text string results in provider_cost=0.0."""
"""An empty text string results in provider_cost=0.0 (cost_usd)."""
from backend.blocks.text_to_speech_block import TEST_CREDENTIALS as TTS_CREDS
from backend.blocks.text_to_speech_block import (
TEST_CREDENTIALS_INPUT as TTS_CREDS_INPUT,
@@ -494,7 +494,7 @@ class TestUnrealTextToSpeechBlockCostTracking:
mock_merge.assert_called_once()
stats = mock_merge.call_args[0][0]
assert stats.provider_cost == 0.0
assert stats.provider_cost_type == "characters"
assert stats.provider_cost_type == "cost_usd"
# ---------------------------------------------------------------------------

View File

@@ -17,6 +17,7 @@ from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
NodeExecutionStats,
SchemaField,
)
from backend.integrations.providers import ProviderName
@@ -431,6 +432,7 @@ class ClaudeCodeBlock(Block):
# The JSON output contains the result
output_data = json.loads(raw_output)
response = output_data.get("result", raw_output)
self._record_cli_cost(output_data)
# Build conversation history entry
turn_entry = f"User: {prompt}\nClaude: {response}"
@@ -484,6 +486,23 @@ class ClaudeCodeBlock(Block):
escaped = prompt.replace("'", "'\"'\"'")
return f"'{escaped}'"
def _record_cli_cost(self, output_data: dict) -> None:
"""Feed Claude Code CLI's `total_cost_usd` to the COST_USD resolver.
The CLI rolls up Anthropic LLM + internal tool-call spend into
``total_cost_usd`` on its JSON response; piping it through
``merge_stats`` lets the wallet reflect real spend.
"""
total_cost_usd = output_data.get("total_cost_usd")
if total_cost_usd is None:
return
self.merge_stats(
NodeExecutionStats(
provider_cost=float(total_cost_usd),
provider_cost_type="cost_usd",
)
)
async def run(
self,
input_data: Input,

View File

@@ -0,0 +1,106 @@
"""Unit tests for ClaudeCodeBlock COST_USD billing migration.
Verifies:
- Block emits provider_cost / cost_usd when Claude Code CLI returns
total_cost_usd.
- block_usage_cost resolves the COST_USD entry to the expected ceil(usd *
cost_amount) credit charge.
- Missing total_cost_usd gracefully produces provider_cost=None (no bill).
"""
from unittest.mock import MagicMock, patch
import pytest
from backend.blocks._base import BlockCostType
from backend.blocks.claude_code import ClaudeCodeBlock
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.model import NodeExecutionStats
from backend.executor.utils import block_usage_cost
def test_claude_code_registered_as_cost_usd_150():
"""Sanity: BLOCK_COSTS holds the COST_USD, 150 cr/$ entry."""
entries = BLOCK_COSTS[ClaudeCodeBlock]
assert len(entries) == 1
entry = entries[0]
assert entry.cost_type == BlockCostType.COST_USD
assert entry.cost_amount == 150
@pytest.mark.parametrize(
"total_cost_usd, expected_credits",
[
(0.50, 75), # $0.50 × 150 = 75 cr
(1.00, 150), # $1.00 × 150 = 150 cr
(0.0134, 3), # ceil(0.0134 × 150) = ceil(2.01) = 3
(2.00, 300), # $2 × 150 = 300 cr
(0.001, 1), # ceil(0.001 × 150) = ceil(0.15) = 1 — no 0-cr leak on
# sub-cent runs
],
)
def test_cost_usd_resolver_applies_150_multiplier(total_cost_usd, expected_credits):
"""block_usage_cost with cost_usd stats returns ceil(usd * 150)."""
block = ClaudeCodeBlock()
# cost_filter requires matching e2b_credentials; supply the ones the
# registration uses so _is_cost_filter_match accepts the input.
entry = BLOCK_COSTS[ClaudeCodeBlock][0]
input_data = {"e2b_credentials": entry.cost_filter["e2b_credentials"]}
stats = NodeExecutionStats(
provider_cost=total_cost_usd,
provider_cost_type="cost_usd",
)
cost, matching_filter = block_usage_cost(
block=block, input_data=input_data, stats=stats
)
assert cost == expected_credits
assert matching_filter == entry.cost_filter
def test_cost_usd_resolver_returns_zero_when_stats_missing_cost():
"""Pre-flight (no stats) or unbilled run (provider_cost None) → 0."""
block = ClaudeCodeBlock()
entry = BLOCK_COSTS[ClaudeCodeBlock][0]
input_data = {"e2b_credentials": entry.cost_filter["e2b_credentials"]}
# No stats at all → pre-flight path, returns 0.
pre_cost, _ = block_usage_cost(block=block, input_data=input_data)
assert pre_cost == 0
# Stats present but no provider_cost → resolver can't bill.
stats = NodeExecutionStats()
post_cost, _ = block_usage_cost(block=block, input_data=input_data, stats=stats)
assert post_cost == 0
def test_record_cli_cost_emits_provider_cost_when_total_cost_present():
"""``_record_cli_cost`` (the helper called from ``execute_claude_code``)
must emit a single ``merge_stats`` with provider_cost + cost_usd tag
when the CLI JSON payload carries ``total_cost_usd``.
"""
block = ClaudeCodeBlock()
captured: list[NodeExecutionStats] = []
with patch.object(block, "merge_stats", side_effect=captured.append):
block._record_cli_cost(
{
"result": "hello from claude",
"total_cost_usd": 0.0421,
"usage": {"input_tokens": 1234, "output_tokens": 56},
}
)
assert len(captured) == 1
stats = captured[0]
assert stats.provider_cost == pytest.approx(0.0421)
assert stats.provider_cost_type == "cost_usd"
def test_record_cli_cost_skips_merge_when_total_cost_absent():
"""If the CLI payload lacks ``total_cost_usd`` (legacy / non-JSON
output), ``_record_cli_cost`` must not call ``merge_stats`` — otherwise
we'd pollute telemetry with a ``cost_usd`` emission that has no real
cost attached.
"""
block = ClaudeCodeBlock()
mock = MagicMock()
with patch.object(block, "merge_stats", mock):
block._record_cli_cost({"result": "hello"})
mock.assert_not_called()

View File

@@ -151,6 +151,17 @@ class CodeGenerationBlock(Block):
)
self.execution_stats = NodeExecutionStats()
# GPT-5.1-Codex published pricing: $1.25 / 1M input, $10 / 1M output.
_INPUT_USD_PER_1M = 1.25
_OUTPUT_USD_PER_1M = 10.0
@staticmethod
def _compute_token_usd(input_tokens: int, output_tokens: int) -> float:
return (
input_tokens * CodeGenerationBlock._INPUT_USD_PER_1M
+ output_tokens * CodeGenerationBlock._OUTPUT_USD_PER_1M
) / 1_000_000
async def call_codex(
self,
*,
@@ -189,13 +200,15 @@ class CodeGenerationBlock(Block):
response_id = response.id or ""
# Update usage stats
self.execution_stats.input_token_count = (
response.usage.input_tokens if response.usage else 0
)
self.execution_stats.output_token_count = (
response.usage.output_tokens if response.usage else 0
)
input_tokens = response.usage.input_tokens if response.usage else 0
output_tokens = response.usage.output_tokens if response.usage else 0
self.execution_stats.input_token_count = input_tokens
self.execution_stats.output_token_count = output_tokens
self.execution_stats.llm_call_count += 1
self.execution_stats.provider_cost = self._compute_token_usd(
input_tokens, output_tokens
)
self.execution_stats.provider_cost_type = "cost_usd"
return CodexCallResult(
response=text_output,
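
A quick usage sketch of the helper above; the module path is taken from the tests later in this diff, and the expected value is just the published rates applied to example token counts.

```python
from backend.blocks.codex import CodeGenerationBlock

# 100k input tokens at $1.25/1M plus 10k output tokens at $10/1M.
usd = CodeGenerationBlock._compute_token_usd(100_000, 10_000)
assert round(usd, 6) == 0.225  # $0.125 input + $0.100 output
```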

View File

@@ -0,0 +1,10 @@
"""Provider registration for Compass — metadata only (auth lives elsewhere)."""
from backend.sdk import ProviderBuilder
compass = (
ProviderBuilder("compass")
.with_description("Geospatial context for agents")
.with_supported_auth_types("api_key")
.build()
)

View File

@@ -0,0 +1,226 @@
"""Coverage tests for the cost-leak fixes in this PR.
Each block's ``run()`` / helper emits provider_cost + cost_usd (or items)
via merge_stats so the post-flight resolver bills real provider spend.
Tests here drive that emission path directly so a regression on any one
block surfaces immediately.
"""
from unittest.mock import patch
import pytest
from pydantic import SecretStr
from backend.blocks._base import BlockCostType
from backend.blocks.ai_condition import AIConditionBlock
from backend.data.block_cost_config import BLOCK_COSTS, LLM_COST
from backend.data.model import APIKeyCredentials, NodeExecutionStats
# -------- AIConditionBlock registration --------
def test_ai_condition_registered_under_llm_cost():
"""AIConditionBlock was running wallet-free before this PR; verify it
now resolves through the same per-model LLM_COST table as every other
LLM block.
"""
assert BLOCK_COSTS[AIConditionBlock] is LLM_COST
# -------- Pinecone insert ITEMS emission --------
@pytest.mark.asyncio
async def test_pinecone_insert_emits_items_provider_cost():
from backend.blocks.pinecone import PineconeInsertBlock
block = PineconeInsertBlock()
captured: list[NodeExecutionStats] = []
class _FakeIndex:
def upsert(self, **_):
return None
class _FakePinecone:
def __init__(self, *_, **__):
pass
def Index(self, _name):
return _FakeIndex()
with (
patch("backend.blocks.pinecone.Pinecone", _FakePinecone),
patch.object(block, "merge_stats", side_effect=captured.append),
):
input_data = block.input_schema(
credentials={
"id": "00000000-0000-0000-0000-000000000000",
"provider": "pinecone",
"type": "api_key",
},
index="my-index",
chunks=["alpha", "beta", "gamma"],
embeddings=[[0.1] * 4, [0.2] * 4, [0.3] * 4],
namespace="",
metadata={},
)
creds = APIKeyCredentials(
id="00000000-0000-0000-0000-000000000000",
provider="pinecone",
title="mock",
api_key=SecretStr("mock-key"),
expires_at=None,
)
outputs = [(n, v) async for n, v in block.run(input_data, credentials=creds)]
assert any(name == "upsert_response" for name, _ in outputs)
assert len(captured) == 1
stats = captured[0]
assert stats.provider_cost == pytest.approx(3.0)
assert stats.provider_cost_type == "items"
# -------- Narration model-aware per-char rate --------
@pytest.mark.parametrize(
"model_id, expected_rate_per_char",
[
("eleven_flash_v2_5", 0.000167 * 0.5),
("eleven_turbo_v2_5", 0.000167 * 0.5),
("eleven_multilingual_v2", 0.000167 * 1.0),
("eleven_turbo_v2", 0.000167 * 1.0),
],
)
def test_narration_per_char_rate_scales_with_model(model_id, expected_rate_per_char):
"""Drive VideoNarrationBlock._record_script_cost directly so a regression
that drops the model-aware branching (e.g. hardcoding 1.0 cr/char for
all models) makes this test fail.
"""
from backend.blocks.video.narration import VideoNarrationBlock
block = VideoNarrationBlock()
captured: list[NodeExecutionStats] = []
with patch.object(block, "merge_stats", side_effect=captured.append):
block._record_script_cost("x" * 5000, model_id)
assert len(captured) == 1
stats = captured[0]
assert stats.provider_cost == pytest.approx(5000 * expected_rate_per_char)
assert stats.provider_cost_type == "cost_usd"
# -------- Perplexity None-guard on x-total-cost --------
@pytest.mark.parametrize(
"openrouter_cost, expect_type",
[
(0.0421, "cost_usd"), # concrete positive USD → tagged
(None, None), # header missing → no tag (keeps gap observable)
(0.0, None), # zero → no tag (wouldn't bill anything anyway)
],
)
def test_perplexity_record_openrouter_cost_tags_only_on_concrete_value(
openrouter_cost, expect_type
):
"""Drive PerplexityBlock._record_openrouter_cost directly to verify the
None/0 guard. A regression that tags cost_usd unconditionally would
silently floor the user's bill to 0 via the resolver — this test
would catch it.
"""
from backend.blocks.perplexity import PerplexityBlock
block = PerplexityBlock()
with patch(
"backend.blocks.perplexity.extract_openrouter_cost",
return_value=openrouter_cost,
):
block._record_openrouter_cost(response=object())
assert block.execution_stats.provider_cost == openrouter_cost
assert block.execution_stats.provider_cost_type == expect_type
# -------- Codex COST_USD registration --------
def test_codex_registered_as_cost_usd_150():
from backend.blocks.codex import CodeGenerationBlock
entries = BLOCK_COSTS[CodeGenerationBlock]
assert len(entries) == 1
entry = entries[0]
assert entry.cost_type == BlockCostType.COST_USD
assert entry.cost_amount == 150
@pytest.mark.parametrize(
"input_tokens, output_tokens, expected_usd",
[
# GPT-5.1-Codex: $1.25 / 1M input, $10 / 1M output.
(1_000_000, 0, 1.25),
(0, 1_000_000, 10.0),
(100_000, 10_000, 0.225), # 0.125 + 0.100
(0, 0, 0.0),
],
)
def test_codex_computes_provider_cost_usd_from_token_counts(
input_tokens, output_tokens, expected_usd
):
"""Drive CodeGenerationBlock._compute_token_usd directly. A regression
to the wrong rate constants (e.g. swapping the $1.25 input rate for
GPT-4o's $2.50) would fail this test.
"""
from backend.blocks.codex import CodeGenerationBlock
assert CodeGenerationBlock._compute_token_usd(
input_tokens, output_tokens
) == pytest.approx(expected_usd)
# -------- ClaudeCode COST_USD registration sanity (already tested in claude_code_cost_test.py) --------
# -------- Perplexity COST_USD registration for all 3 tiers --------
def test_perplexity_sonar_all_tiers_registered_as_cost_usd_150():
from backend.blocks.perplexity import PerplexityBlock
entries = BLOCK_COSTS[PerplexityBlock]
# 3 tiers (SONAR, SONAR_PRO, SONAR_DEEP_RESEARCH) all COST_USD 150.
assert len(entries) == 3
for entry in entries:
assert entry.cost_type == BlockCostType.COST_USD
assert entry.cost_amount == 150
# -------- Narration COST_USD registration --------
def test_narration_registered_as_cost_usd_150():
from backend.blocks.video.narration import VideoNarrationBlock
entries = BLOCK_COSTS[VideoNarrationBlock]
assert len(entries) == 1
assert entries[0].cost_type == BlockCostType.COST_USD
assert entries[0].cost_amount == 150
# -------- Pinecone registrations --------
def test_pinecone_registrations():
from backend.blocks.pinecone import (
PineconeInitBlock,
PineconeInsertBlock,
PineconeQueryBlock,
)
assert BLOCK_COSTS[PineconeInitBlock][0].cost_type == BlockCostType.RUN
assert BLOCK_COSTS[PineconeQueryBlock][0].cost_type == BlockCostType.RUN
# Insert scales with item count.
assert BLOCK_COSTS[PineconeInsertBlock][0].cost_type == BlockCostType.ITEMS
assert BLOCK_COSTS[PineconeInsertBlock][0].cost_amount == 1

View File

@@ -19,6 +19,10 @@ class DataForSeoClient:
trusted_origins=["https://api.dataforseo.com"],
raise_for_status=False,
)
# USD cost reported by DataForSEO on the most recent successful call.
# Populated by keyword_suggestions / related_keywords so the caller
# can surface it via NodeExecutionStats.provider_cost for billing.
self.last_cost_usd: float = 0.0
def _get_headers(self) -> Dict[str, str]:
"""Generate the authorization header using Basic Auth."""
@@ -97,6 +101,9 @@ class DataForSeoClient:
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
# DataForSEO reports per-task USD cost; stash it so callers
# can populate NodeExecutionStats.provider_cost.
self.last_cost_usd = float(task.get("cost") or 0.0)
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")
@@ -174,6 +181,9 @@ class DataForSeoClient:
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
# DataForSEO reports per-task USD cost; stash it so callers
# can populate NodeExecutionStats.provider_cost.
self.last_cost_usd = float(task.get("cost") or 0.0)
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")

View File

@@ -7,11 +7,17 @@ from backend.sdk import BlockCostType, ProviderBuilder
# Build the DataForSEO provider with username/password authentication
dataforseo = (
ProviderBuilder("dataforseo")
.with_description("SEO and SERP data")
.with_user_password(
username_env_var="DATAFORSEO_USERNAME",
password_env_var="DATAFORSEO_PASSWORD",
title="DataForSEO Credentials",
)
.with_base_cost(1, BlockCostType.RUN)
# DataForSEO reports USD cost per task (e.g. $0.001/keyword returned).
# DataForSeoClient stashes it on last_cost_usd; each block emits it via
# merge_stats so the COST_USD resolver bills against real spend.
# 1000 platform credits per USD → 1 credit per $0.001 (≈ 1 credit/
# returned keyword on the standard tier).
.with_base_cost(1000, BlockCostType.COST_USD)
.build()
)
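
A worked example of how the new COST_USD base cost turns a task's reported spend into platform credits. This is illustrative only, assuming the same `ceil(usd * cost_amount)` conversion used for the other COST_USD blocks in this diff.

```python
import math

CREDITS_PER_USD = 1000        # the with_base_cost(1000, BlockCostType.COST_USD) setting above
last_cost_usd = 0.0123        # e.g. what DataForSeoClient.last_cost_usd holds after a task
credits = math.ceil(last_cost_usd * CREDITS_PER_USD)   # 13 credits billed
```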

View File

@@ -4,6 +4,7 @@ DataForSEO Google Keyword Suggestions block.
from typing import Any, Dict, List, Optional
from backend.data.model import NodeExecutionStats
from backend.sdk import (
Block,
BlockCategory,
@@ -110,8 +111,10 @@ class DataForSeoKeywordSuggestionsBlock(Block):
test_output=[
(
"suggestion",
lambda x: hasattr(x, "keyword")
and x.keyword == "digital marketing strategy",
lambda x: (
hasattr(x, "keyword")
and x.keyword == "digital marketing strategy"
),
),
("suggestions", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),
@@ -167,6 +170,16 @@ class DataForSeoKeywordSuggestionsBlock(Block):
results = await self._fetch_keyword_suggestions(client, input_data)
# DataForSEO reports per-task USD cost on the response. Feed it
# into NodeExecutionStats so the COST_USD resolver bills the
# real provider spend at reconciliation time.
self.merge_stats(
NodeExecutionStats(
provider_cost=client.last_cost_usd,
provider_cost_type="cost_usd",
)
)
# Process and format the results
suggestions = []
if results and len(results) > 0:

View File

@@ -4,6 +4,7 @@ DataForSEO Google Related Keywords block.
from typing import Any, Dict, List, Optional
from backend.data.model import NodeExecutionStats
from backend.sdk import (
Block,
BlockCategory,
@@ -177,6 +178,16 @@ class DataForSeoRelatedKeywordsBlock(Block):
results = await self._fetch_related_keywords(client, input_data)
# DataForSEO reports per-task USD cost on the response. Feed it
# into NodeExecutionStats so the COST_USD resolver bills the
# real provider spend at reconciliation time.
self.merge_stats(
NodeExecutionStats(
provider_cost=client.last_cost_usd,
provider_cost_type="cost_usd",
)
)
# Process and format the results
related_keywords = []
if results and len(results) > 0:

View File

@@ -0,0 +1,10 @@
"""Provider registration for Discord — metadata only (auth lives in ``_auth.py``)."""
from backend.sdk import ProviderBuilder
discord = (
ProviderBuilder("discord")
.with_description("Messages, channels, and servers")
.with_supported_auth_types("api_key", "oauth2")
.build()
)

View File

@@ -0,0 +1,10 @@
"""Provider registration for ElevenLabs — metadata only (auth lives in ``_auth.py``)."""
from backend.sdk import ProviderBuilder
elevenlabs = (
ProviderBuilder("elevenlabs")
.with_description("Realistic AI voice synthesis")
.with_supported_auth_types("api_key")
.build()
)

View File

@@ -0,0 +1,10 @@
"""Provider registration for Enrichlayer — metadata only (auth lives in ``_auth.py``)."""
from backend.sdk import ProviderBuilder
enrichlayer = (
ProviderBuilder("enrichlayer")
.with_description("Enrich leads with company data")
.with_supported_auth_types("api_key")
.build()
)

View File

@@ -9,8 +9,14 @@ from ._webhook import ExaWebhookManager
# Configure the Exa provider once for all blocks
exa = (
ProviderBuilder("exa")
.with_description("Neural web search")
.with_api_key("EXA_API_KEY", "Exa API Key")
.with_webhook_manager(ExaWebhookManager)
.with_base_cost(1, BlockCostType.RUN)
# Exa returns `cost_dollars.total` on every response and ExaSearchBlock
# (plus ~45 sibling blocks that share this provider config) already
# populates NodeExecutionStats.provider_cost with it. Bill 100 credits
# per USD (~$0.01/credit): cheap searches stay at 12 credits, a Deep
# Research run at $0.20 lands at 20 credits, matching provider spend.
.with_base_cost(100, BlockCostType.COST_USD)
.build()
)
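
The same conversion for Exa at 100 credits per USD; the dollar figures below are the ones mentioned in the comment above, and the helper is illustrative (the rounding to six decimals only exists to keep the round-up behaviour stable against floating-point noise).

```python
import math

CREDITS_PER_USD = 100  # from with_base_cost(100, BlockCostType.COST_USD) above

def exa_usd_to_credits(cost_usd: float) -> int:
    """Illustrative only; rounds up so sub-cent calls still bill at least 1 credit."""
    return math.ceil(round(cost_usd * CREDITS_PER_USD, 6))

assert exa_usd_to_credits(0.12) == 12    # a typical search
assert exa_usd_to_credits(0.20) == 20    # a Deep Research run
assert exa_usd_to_credits(0.005) == 1    # a half-cent call still bills one credit
```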

View File

@@ -17,6 +17,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
class AnswerCitation(BaseModel):
@@ -111,3 +112,7 @@ class ExaAnswerBlock(Block):
yield "citations", citations
for citation in citations:
yield "citation", citation
# Current SDK AnswerResponse dataclass omits cost_dollars; helper
# no-ops today, but keeps billing wired when exa_py adds the field.
merge_exa_cost(self, response)

View File

@@ -9,7 +9,6 @@ from typing import Union
from pydantic import BaseModel
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -23,6 +22,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
class CodeContextResponse(BaseModel):
@@ -118,9 +118,5 @@ class ExaCodeContextBlock(Block):
yield "search_time", context.search_time
yield "output_tokens", context.output_tokens
# Parse cost_dollars (API returns as string, e.g. "0.005")
try:
cost_usd = float(context.cost_dollars)
self.merge_stats(NodeExecutionStats(provider_cost=cost_usd))
except (ValueError, TypeError):
pass
# API returns costDollars as a bare numeric string like "0.005".
merge_exa_cost(self, data)

View File

@@ -4,7 +4,6 @@ from typing import Optional
from exa_py import AsyncExa
from pydantic import BaseModel
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -24,6 +23,7 @@ from .helpers import (
HighlightSettings,
LivecrawlTypes,
SummarySettings,
merge_exa_cost,
)
@@ -224,6 +224,4 @@ class ExaContentsBlock(Block):
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars
self.merge_stats(
NodeExecutionStats(provider_cost=response.cost_dollars.total)
)
merge_exa_cost(self, response)

View File

@@ -143,7 +143,9 @@ class TestExaContentsCostTracking:
mock_exa_cls.return_value = mock_exa
async for _ in block.run(
block.Input(urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT), # type: ignore[arg-type]
block.Input(
urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT
), # type: ignore[arg-type]
credentials=TEST_CREDENTIALS,
):
pass
@@ -172,7 +174,9 @@ class TestExaContentsCostTracking:
mock_exa_cls.return_value = mock_exa
async for _ in block.run(
block.Input(urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT), # type: ignore[arg-type]
block.Input(
urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT
), # type: ignore[arg-type]
credentials=TEST_CREDENTIALS,
):
pass
@@ -201,7 +205,9 @@ class TestExaContentsCostTracking:
mock_exa_cls.return_value = mock_exa
async for _ in block.run(
block.Input(urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT), # type: ignore[arg-type]
block.Input(
urls=["https://example.com"], credentials=TEST_CREDENTIALS_INPUT
), # type: ignore[arg-type]
credentials=TEST_CREDENTIALS,
):
pass
@@ -297,7 +303,9 @@ class TestExaSimilarCostTracking:
mock_exa_cls.return_value = mock_exa
async for _ in block.run(
block.Input(url="https://example.com", credentials=TEST_CREDENTIALS_INPUT), # type: ignore[arg-type]
block.Input(
url="https://example.com", credentials=TEST_CREDENTIALS_INPUT
), # type: ignore[arg-type]
credentials=TEST_CREDENTIALS,
):
pass
@@ -326,7 +334,9 @@ class TestExaSimilarCostTracking:
mock_exa_cls.return_value = mock_exa
async for _ in block.run(
block.Input(url="https://example.com", credentials=TEST_CREDENTIALS_INPUT), # type: ignore[arg-type]
block.Input(
url="https://example.com", credentials=TEST_CREDENTIALS_INPUT
), # type: ignore[arg-type]
credentials=TEST_CREDENTIALS,
):
pass

View File

@@ -1,7 +1,8 @@
from enum import Enum
from typing import Any, Dict, Literal, Optional, Union
from backend.sdk import BaseModel, MediaFileType, SchemaField
from backend.data.model import NodeExecutionStats
from backend.sdk import BaseModel, Block, MediaFileType, SchemaField
class LivecrawlTypes(str, Enum):
@@ -319,7 +320,7 @@ class CostDollars(BaseModel):
# Helper functions for payload processing
def process_text_field(
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None]
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None],
) -> Optional[Union[bool, Dict[str, Any]]]:
"""Process text field for API payload."""
if text is None:
@@ -400,7 +401,7 @@ def process_contents_settings(contents: Optional[ContentSettings]) -> Dict[str,
def process_context_field(
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None]
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None],
) -> Optional[Union[bool, Dict[str, int]]]:
"""Process context field for API payload."""
if context is None:
@@ -448,3 +449,65 @@ def add_optional_fields(
payload[api_field] = value.value
else:
payload[api_field] = value
def extract_exa_cost_usd(response: Any) -> Optional[float]:
"""Return ``cost_dollars.total`` (USD) from an Exa SDK response, or None.
Handles dataclass/pydantic responses (``response.cost_dollars.total``),
dicts with camelCase keys (``response["costDollars"]["total"]``), dicts
with snake_case keys, and bare numeric strings. Returns None whenever the
shape is missing cost info — the caller then skips merge_stats.
"""
if response is None:
return None
# Dataclass / pydantic: response.cost_dollars
cost_obj = getattr(response, "cost_dollars", None)
# Dict payloads: try both camelCase and snake_case
if cost_obj is None and isinstance(response, dict):
cost_obj = response.get("costDollars") or response.get("cost_dollars")
if cost_obj is None:
return None
# Already a scalar (code_context endpoint returns a string)
if isinstance(cost_obj, (int, float)):
return max(0.0, float(cost_obj))
if isinstance(cost_obj, str):
try:
return max(0.0, float(cost_obj))
except ValueError:
return None
# Nested object/dict: grab the `total` field
total = getattr(cost_obj, "total", None)
if total is None and isinstance(cost_obj, dict):
total = cost_obj.get("total")
if total is None:
return None
try:
return max(0.0, float(total))
except (TypeError, ValueError):
return None
def merge_exa_cost(block: Block, response: Any) -> None:
"""Pull ``cost_dollars.total`` off an Exa response and merge it into stats.
No-op when the response shape has no cost info (e.g. webset CRUD where
the SDK does not expose per-call pricing) — emission happens only when
Exa actually reports a USD amount.
"""
cost_usd = extract_exa_cost_usd(response)
if cost_usd is None:
return
block.merge_stats(
NodeExecutionStats(
provider_cost=cost_usd,
provider_cost_type="cost_usd",
)
)

View File

@@ -0,0 +1,65 @@
"""Unit tests for exa/helpers cost-extraction + merge helpers."""
from types import SimpleNamespace
from unittest.mock import MagicMock
import pytest
from backend.blocks.exa.helpers import extract_exa_cost_usd, merge_exa_cost
from backend.data.model import NodeExecutionStats
@pytest.mark.parametrize(
"response, expected",
[
# Dataclass / SimpleNamespace with cost_dollars.total
(SimpleNamespace(cost_dollars=SimpleNamespace(total=0.05)), 0.05),
# Dict camelCase
({"costDollars": {"total": 0.10}}, 0.10),
# Dict snake_case
({"cost_dollars": {"total": 0.07}}, 0.07),
# code_context endpoint shape: plain numeric string
(SimpleNamespace(cost_dollars="0.005"), 0.005),
# Scalar float on cost_dollars directly
(SimpleNamespace(cost_dollars=0.02), 0.02),
# Scalar int on cost_dollars
(SimpleNamespace(cost_dollars=3), 3.0),
# Missing cost info — returns None
({}, None),
(SimpleNamespace(other="foo"), None),
(None, None),
# Nested total=None
(SimpleNamespace(cost_dollars=SimpleNamespace(total=None)), None),
# Invalid numeric string
(SimpleNamespace(cost_dollars="not-a-number"), None),
# Negative values clamp to 0
(SimpleNamespace(cost_dollars=SimpleNamespace(total=-1.0)), 0.0),
],
)
def test_extract_exa_cost_usd_handles_all_shapes(response, expected):
assert extract_exa_cost_usd(response) == expected
def test_merge_exa_cost_emits_stats_when_cost_present():
block = MagicMock()
response = SimpleNamespace(cost_dollars=SimpleNamespace(total=0.0421))
merge_exa_cost(block, response)
block.merge_stats.assert_called_once()
stats: NodeExecutionStats = block.merge_stats.call_args.args[0]
assert stats.provider_cost == pytest.approx(0.0421)
assert stats.provider_cost_type == "cost_usd"
def test_merge_exa_cost_noops_when_no_cost():
"""Webset CRUD endpoints don't surface cost_dollars today — the helper
must silently skip instead of emitting a 0-cost telemetry record."""
block = MagicMock()
merge_exa_cost(block, SimpleNamespace(other_field="nothing"))
block.merge_stats.assert_not_called()
def test_merge_exa_cost_noops_when_response_is_none():
block = MagicMock()
merge_exa_cost(block, None)
block.merge_stats.assert_not_called()

View File

@@ -12,7 +12,6 @@ from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -26,6 +25,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
class ResearchModel(str, Enum):
@@ -233,11 +233,7 @@ class ExaCreateResearchBlock(Block):
if research.cost_dollars:
yield "cost_total", research.cost_dollars.total
self.merge_stats(
NodeExecutionStats(
provider_cost=research.cost_dollars.total
)
)
merge_exa_cost(self, research)
return
await asyncio.sleep(check_interval)
@@ -352,9 +348,7 @@ class ExaGetResearchBlock(Block):
yield "cost_searches", research.cost_dollars.num_searches
yield "cost_pages", research.cost_dollars.num_pages
yield "cost_reasoning_tokens", research.cost_dollars.reasoning_tokens
self.merge_stats(
NodeExecutionStats(provider_cost=research.cost_dollars.total)
)
merge_exa_cost(self, research)
yield "error_message", research.error
@@ -441,9 +435,7 @@ class ExaWaitForResearchBlock(Block):
if research.cost_dollars:
yield "cost_total", research.cost_dollars.total
self.merge_stats(
NodeExecutionStats(provider_cost=research.cost_dollars.total)
)
merge_exa_cost(self, research)
return

View File

@@ -4,7 +4,6 @@ from typing import Optional
from exa_py import AsyncExa
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -21,6 +20,7 @@ from .helpers import (
ContentSettings,
CostDollars,
ExaSearchResults,
merge_exa_cost,
process_contents_settings,
)
@@ -207,6 +207,4 @@ class ExaSearchBlock(Block):
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars
self.merge_stats(
NodeExecutionStats(provider_cost=response.cost_dollars.total)
)
merge_exa_cost(self, response)

View File

@@ -3,7 +3,6 @@ from typing import Optional
from exa_py import AsyncExa
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -20,6 +19,7 @@ from .helpers import (
ContentSettings,
CostDollars,
ExaSearchResults,
merge_exa_cost,
process_contents_settings,
)
@@ -168,6 +168,4 @@ class ExaFindSimilarBlock(Block):
if response.cost_dollars:
yield "cost_dollars", response.cost_dollars
self.merge_stats(
NodeExecutionStats(provider_cost=response.cost_dollars.total)
)
merge_exa_cost(self, response)

View File

@@ -39,6 +39,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
class SearchEntityType(str, Enum):
@@ -394,6 +395,7 @@ class ExaCreateWebsetBlock(Block):
metadata=input_data.metadata,
)
)
merge_exa_cost(self, webset)
webset_result = Webset.model_validate(webset.model_dump(by_alias=True))
@@ -404,6 +406,7 @@ class ExaCreateWebsetBlock(Block):
timeout=input_data.polling_timeout,
poll_interval=5,
)
merge_exa_cost(self, final_webset)
completion_time = time.time() - start_time
item_count = 0
@@ -479,6 +482,7 @@ class ExaCreateOrFindWebsetBlock(Block):
try:
webset = await aexa.websets.get(id=input_data.external_id)
merge_exa_cost(self, webset)
webset_result = Webset.model_validate(webset.model_dump(by_alias=True))
yield "webset", webset_result
@@ -501,6 +505,7 @@ class ExaCreateOrFindWebsetBlock(Block):
metadata=input_data.metadata,
)
)
merge_exa_cost(self, webset)
webset_result = Webset.model_validate(webset.model_dump(by_alias=True))
@@ -555,6 +560,7 @@ class ExaUpdateWebsetBlock(Block):
payload["metadata"] = input_data.metadata
sdk_webset = await aexa.websets.update(id=input_data.webset_id, params=payload)
merge_exa_cost(self, sdk_webset)
status_str = (
sdk_webset.status.value
@@ -566,8 +572,9 @@ class ExaUpdateWebsetBlock(Block):
yield "status", status_str
yield "external_id", sdk_webset.external_id
yield "metadata", sdk_webset.metadata or {}
yield "updated_at", (
sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""
yield (
"updated_at",
(sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""),
)
@@ -621,6 +628,7 @@ class ExaListWebsetsBlock(Block):
cursor=input_data.cursor,
limit=input_data.limit,
)
merge_exa_cost(self, response)
websets_data = [
w.model_dump(by_alias=True, exclude_none=True) for w in response.data
@@ -679,6 +687,7 @@ class ExaGetWebsetBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, sdk_webset)
status_str = (
sdk_webset.status.value
@@ -706,11 +715,13 @@ class ExaGetWebsetBlock(Block):
yield "enrichments", enrichments_data
yield "monitors", monitors_data
yield "metadata", sdk_webset.metadata or {}
yield "created_at", (
sdk_webset.created_at.isoformat() if sdk_webset.created_at else ""
yield (
"created_at",
(sdk_webset.created_at.isoformat() if sdk_webset.created_at else ""),
)
yield "updated_at", (
sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""
yield (
"updated_at",
(sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""),
)
@@ -749,6 +760,7 @@ class ExaDeleteWebsetBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
deleted_webset = await aexa.websets.delete(id=input_data.webset_id)
merge_exa_cost(self, deleted_webset)
status_str = (
deleted_webset.status.value
@@ -799,6 +811,7 @@ class ExaCancelWebsetBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
canceled_webset = await aexa.websets.cancel(id=input_data.webset_id)
merge_exa_cost(self, canceled_webset)
status_str = (
canceled_webset.status.value
@@ -969,6 +982,7 @@ class ExaPreviewWebsetBlock(Block):
payload["entity"] = entity
sdk_preview = await aexa.websets.preview(params=payload)
merge_exa_cost(self, sdk_preview)
preview = PreviewWebsetModel.from_sdk(sdk_preview)
@@ -1052,6 +1066,7 @@ class ExaWebsetStatusBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
status = (
webset.status.value
@@ -1186,6 +1201,7 @@ class ExaWebsetSummaryBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
# Extract basic info
webset_id = webset.id
@@ -1214,6 +1230,7 @@ class ExaWebsetSummaryBlock(Block):
items_response = await aexa.websets.items.list(
webset_id=input_data.webset_id, limit=input_data.sample_size
)
merge_exa_cost(self, items_response)
sample_items_data = [
item.model_dump(by_alias=True, exclude_none=True)
for item in items_response.data
@@ -1363,6 +1380,7 @@ class ExaWebsetReadyCheckBlock(Block):
# Get webset details
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
status = (
webset.status.value

View File

@@ -25,6 +25,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
# Mirrored model for stability
@@ -205,6 +206,7 @@ class ExaCreateEnrichmentBlock(Block):
sdk_enrichment = await aexa.websets.enrichments.create(
webset_id=input_data.webset_id, params=payload
)
merge_exa_cost(self, sdk_enrichment)
enrichment_id = sdk_enrichment.id
status = (
@@ -226,6 +228,7 @@ class ExaCreateEnrichmentBlock(Block):
current_enrich = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=enrichment_id
)
merge_exa_cost(self, current_enrich)
current_status = (
current_enrich.status.value
if hasattr(current_enrich.status, "value")
@@ -235,6 +238,7 @@ class ExaCreateEnrichmentBlock(Block):
if current_status in ["completed", "failed", "cancelled"]:
# Estimate items from webset searches
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
if webset.searches:
for search in webset.searches:
if search.progress:
@@ -332,6 +336,7 @@ class ExaGetEnrichmentBlock(Block):
sdk_enrichment = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
merge_exa_cost(self, sdk_enrichment)
enrichment = WebsetEnrichmentModel.from_sdk(sdk_enrichment)
@@ -425,6 +430,7 @@ class ExaUpdateEnrichmentBlock(Block):
try:
response = await Requests().patch(url, headers=headers, json=payload)
data = response.json()
# PATCH /websets/{id}/enrichments/{id} doesn't return costDollars.
yield "enrichment_id", data.get("id", "")
yield "status", data.get("status", "")
@@ -477,6 +483,7 @@ class ExaDeleteEnrichmentBlock(Block):
deleted_enrichment = await aexa.websets.enrichments.delete(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
merge_exa_cost(self, deleted_enrichment)
yield "enrichment_id", deleted_enrichment.id
yield "success", "true"
@@ -528,12 +535,14 @@ class ExaCancelEnrichmentBlock(Block):
canceled_enrichment = await aexa.websets.enrichments.cancel(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
merge_exa_cost(self, canceled_enrichment)
# Try to estimate how many items were enriched before cancellation
items_enriched = 0
items_response = await aexa.websets.items.list(
webset_id=input_data.webset_id, limit=100
)
merge_exa_cost(self, items_response)
for sdk_item in items_response.data:
# Check if this enrichment is present

View File

@@ -29,6 +29,7 @@ from backend.sdk import (
from ._config import exa
from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT
from .helpers import merge_exa_cost
# Mirrored model for stability - don't use SDK types directly in block outputs
@@ -297,6 +298,7 @@ class ExaCreateImportBlock(Block):
sdk_import = await aexa.websets.imports.create(
params=payload, csv_data=input_data.csv_data
)
merge_exa_cost(self, sdk_import)
import_obj = ImportModel.from_sdk(sdk_import)
@@ -361,6 +363,7 @@ class ExaGetImportBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_import = await aexa.websets.imports.get(import_id=input_data.import_id)
merge_exa_cost(self, sdk_import)
import_obj = ImportModel.from_sdk(sdk_import)
@@ -430,6 +433,7 @@ class ExaListImportsBlock(Block):
cursor=input_data.cursor,
limit=input_data.limit,
)
merge_exa_cost(self, response)
# Convert SDK imports to our stable models
imports = [ImportModel.from_sdk(i) for i in response.data]
@@ -477,6 +481,7 @@ class ExaDeleteImportBlock(Block):
deleted_import = await aexa.websets.imports.delete(
import_id=input_data.import_id
)
merge_exa_cost(self, deleted_import)
yield "import_id", deleted_import.id
yield "success", "true"
@@ -599,7 +604,7 @@ class ExaExportWebsetBlock(Block):
try:
all_items = []
# Use SDK's list_all iterator to fetch items
# list_all paginates internally; cost_dollars is not surfaced per-page
item_iterator = aexa.websets.items.list_all(
webset_id=input_data.webset_id, limit=input_data.max_items
)

View File

@@ -30,6 +30,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
# Mirrored model for enrichment results
@@ -181,6 +182,7 @@ class ExaGetWebsetItemBlock(Block):
sdk_item = await aexa.websets.items.get(
webset_id=input_data.webset_id, id=input_data.item_id
)
merge_exa_cost(self, sdk_item)
item = WebsetItemModel.from_sdk(sdk_item)
@@ -293,6 +295,7 @@ class ExaListWebsetItemsBlock(Block):
cursor=input_data.cursor,
limit=input_data.limit,
)
merge_exa_cost(self, response)
items = [WebsetItemModel.from_sdk(item) for item in response.data]
@@ -343,6 +346,7 @@ class ExaDeleteWebsetItemBlock(Block):
deleted_item = await aexa.websets.items.delete(
webset_id=input_data.webset_id, id=input_data.item_id
)
merge_exa_cost(self, deleted_item)
yield "item_id", deleted_item.id
yield "success", "true"
@@ -404,6 +408,7 @@ class ExaBulkWebsetItemsBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
all_items: List[WebsetItemModel] = []
# list_all paginates internally; cost_dollars is not surfaced per-page
item_iterator = aexa.websets.items.list_all(
webset_id=input_data.webset_id, limit=input_data.max_items
)
@@ -476,6 +481,7 @@ class ExaWebsetItemsSummaryBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
entity_type = "unknown"
if webset.searches:
@@ -498,6 +504,7 @@ class ExaWebsetItemsSummaryBlock(Block):
items_response = await aexa.websets.items.list(
webset_id=input_data.webset_id, limit=input_data.sample_size
)
merge_exa_cost(self, items_response)
# Convert to our stable models
sample_items = [
WebsetItemModel.from_sdk(item) for item in items_response.data
@@ -574,6 +581,7 @@ class ExaGetNewItemsBlock(Block):
cursor=input_data.since_cursor,
limit=input_data.max_items,
)
merge_exa_cost(self, response)
# Convert SDK items to our stable models
new_items = [WebsetItemModel.from_sdk(item) for item in response.data]

View File

@@ -25,6 +25,7 @@ from backend.sdk import (
from ._config import exa
from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT
from .helpers import merge_exa_cost
# Mirrored model for stability - don't use SDK types directly in block outputs
@@ -321,6 +322,7 @@ class ExaCreateMonitorBlock(Block):
payload["metadata"] = input_data.metadata
sdk_monitor = await aexa.websets.monitors.create(params=payload)
merge_exa_cost(self, sdk_monitor)
monitor = MonitorModel.from_sdk(sdk_monitor)
@@ -385,6 +387,7 @@ class ExaGetMonitorBlock(Block):
aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
sdk_monitor = await aexa.websets.monitors.get(monitor_id=input_data.monitor_id)
merge_exa_cost(self, sdk_monitor)
monitor = MonitorModel.from_sdk(sdk_monitor)
@@ -479,6 +482,7 @@ class ExaUpdateMonitorBlock(Block):
sdk_monitor = await aexa.websets.monitors.update(
monitor_id=input_data.monitor_id, params=payload
)
merge_exa_cost(self, sdk_monitor)
# Convert to our stable model
monitor = MonitorModel.from_sdk(sdk_monitor)
@@ -525,6 +529,7 @@ class ExaDeleteMonitorBlock(Block):
deleted_monitor = await aexa.websets.monitors.delete(
monitor_id=input_data.monitor_id
)
merge_exa_cost(self, deleted_monitor)
yield "monitor_id", deleted_monitor.id
yield "success", "true"
@@ -586,6 +591,7 @@ class ExaListMonitorsBlock(Block):
limit=input_data.limit,
webset_id=input_data.webset_id,
)
merge_exa_cost(self, response)
# Convert SDK monitors to our stable models
monitors = [MonitorModel.from_sdk(m) for m in response.data]

View File

@@ -25,6 +25,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
# Import WebsetItemModel for use in enrichment samples
# This is safe as websets_items doesn't import from websets_polling
@@ -126,6 +127,7 @@ class ExaWaitForWebsetBlock(Block):
timeout=input_data.timeout,
poll_interval=input_data.check_interval,
)
merge_exa_cost(self, final_webset)
elapsed = time.time() - start_time
@@ -165,6 +167,7 @@ class ExaWaitForWebsetBlock(Block):
while time.time() - start_time < input_data.timeout:
# Get current webset status
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
current_status = (
webset.status.value
if hasattr(webset.status, "value")
@@ -210,6 +213,7 @@ class ExaWaitForWebsetBlock(Block):
# Timeout reached
elapsed = time.time() - start_time
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
final_status = (
webset.status.value
if hasattr(webset.status, "value")
@@ -348,6 +352,7 @@ class ExaWaitForSearchBlock(Block):
search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
merge_exa_cost(self, search)
# Extract status
status = (
@@ -404,6 +409,7 @@ class ExaWaitForSearchBlock(Block):
search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
merge_exa_cost(self, search)
final_status = (
search.status.value
if hasattr(search.status, "value")
@@ -506,6 +512,7 @@ class ExaWaitForEnrichmentBlock(Block):
enrichment = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
merge_exa_cost(self, enrichment)
# Extract status
status = (
@@ -523,16 +530,20 @@ class ExaWaitForEnrichmentBlock(Block):
items_enriched = 0
if input_data.sample_results and status == "completed":
sample_data, items_enriched = (
await self._get_sample_enrichments(
input_data.webset_id, input_data.enrichment_id, aexa
)
(
sample_data,
items_enriched,
) = await self._get_sample_enrichments(
input_data.webset_id, input_data.enrichment_id, aexa
)
yield "enrichment_id", input_data.enrichment_id
yield "final_status", status
yield "items_enriched", items_enriched
yield "enrichment_title", enrichment.title or enrichment.description or ""
yield (
"enrichment_title",
enrichment.title or enrichment.description or "",
)
yield "elapsed_time", elapsed
if input_data.sample_results:
yield "sample_data", sample_data
@@ -551,6 +562,7 @@ class ExaWaitForEnrichmentBlock(Block):
enrichment = await aexa.websets.enrichments.get(
webset_id=input_data.webset_id, id=input_data.enrichment_id
)
merge_exa_cost(self, enrichment)
final_status = (
enrichment.status.value
if hasattr(enrichment.status, "value")
@@ -576,6 +588,7 @@ class ExaWaitForEnrichmentBlock(Block):
"""Get sample enriched data and count."""
# Get a few items to see enrichment results using SDK
response = await aexa.websets.items.list(webset_id=webset_id, limit=5)
merge_exa_cost(self, response)
sample_data: list[SampleEnrichmentModel] = []
enriched_count = 0

View File

@@ -24,6 +24,7 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import merge_exa_cost
# Mirrored model for stability
@@ -320,6 +321,7 @@ class ExaCreateWebsetSearchBlock(Block):
sdk_search = await aexa.websets.searches.create(
webset_id=input_data.webset_id, params=payload
)
merge_exa_cost(self, sdk_search)
search_id = sdk_search.id
status = (
@@ -353,6 +355,7 @@ class ExaCreateWebsetSearchBlock(Block):
current_search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=search_id
)
merge_exa_cost(self, current_search)
current_status = (
current_search.status.value
if hasattr(current_search.status, "value")
@@ -445,6 +448,7 @@ class ExaGetWebsetSearchBlock(Block):
sdk_search = await aexa.websets.searches.get(
webset_id=input_data.webset_id, id=input_data.search_id
)
merge_exa_cost(self, sdk_search)
search = WebsetSearchModel.from_sdk(sdk_search)
@@ -526,6 +530,7 @@ class ExaCancelWebsetSearchBlock(Block):
canceled_search = await aexa.websets.searches.cancel(
webset_id=input_data.webset_id, id=input_data.search_id
)
merge_exa_cost(self, canceled_search)
# Extract items found before cancellation
items_found = 0
@@ -605,6 +610,7 @@ class ExaFindOrCreateSearchBlock(Block):
# Get webset to check existing searches
webset = await aexa.websets.get(id=input_data.webset_id)
merge_exa_cost(self, webset)
# Look for existing search with same query
existing_search = None
@@ -639,6 +645,7 @@ class ExaFindOrCreateSearchBlock(Block):
sdk_search = await aexa.websets.searches.create(
webset_id=input_data.webset_id, params=payload
)
merge_exa_cost(self, sdk_search)
search = WebsetSearchModel.from_sdk(sdk_search)

View File

@@ -0,0 +1,10 @@
"""Provider registration for fal — metadata only (auth lives in ``_auth.py``)."""
from backend.sdk import ProviderBuilder
fal = (
ProviderBuilder("fal")
.with_description("Hosted model inference")
.with_supported_auth_types("api_key")
.build()
)
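For reference, a block under that provider would attach these credentials the same way the Firecrawl blocks later in this diff attach theirs; a hypothetical sketch (the block and field names are invented for illustration, and `credentials_field()` is assumed to behave as it does for the other providers in this diff):

```python
from backend.sdk import Block, BlockSchemaInput, CredentialsMetaInput, SchemaField

from ._config import fal


class FalExampleBlock(Block):
    class Input(BlockSchemaInput):
        # Invented field names; only the credentials wiring mirrors the
        # firecrawl.credentials_field() pattern used elsewhere in this diff.
        credentials: CredentialsMetaInput = fal.credentials_field()
        prompt: str = SchemaField(description="Prompt forwarded to the hosted model")
```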

View File

@@ -1,8 +1,15 @@
from backend.sdk import BlockCostType, ProviderBuilder
# Firecrawl bills in its own credits (1 credit ≈ $0.001). Each block's
# run() estimates USD spend from the operation (pages scraped, limit,
# credits_used on ExtractResponse) and merge_stats populates
# NodeExecutionStats.provider_cost before billing reconciliation. 1000
# platform credits per USD means 1 platform credit per Firecrawl credit
# — roughly matches our existing per-call tier for single-page scrape.
firecrawl = (
ProviderBuilder("firecrawl")
.with_description("Web scraping and crawling")
.with_api_key("FIRECRAWL_API_KEY", "Firecrawl API Key")
.with_base_cost(1, BlockCostType.RUN)
.with_base_cost(1000, BlockCostType.COST_USD)
.build()
)
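A worked example of the credit-to-USD conversion the comment above describes; the constants are the ones assumed in that comment, not values read from the codebase:

```python
# Assumed in the _config.py comment: 1 Firecrawl credit is roughly $0.001,
# and the platform bills 1000 platform credits per USD of provider cost.
FIRECRAWL_CREDIT_USD = 0.001
PLATFORM_CREDITS_PER_USD = 1000


def platform_credits(firecrawl_credits: int) -> float:
    """Estimate platform credits billed for a given Firecrawl credit spend."""
    usd = firecrawl_credits * FIRECRAWL_CREDIT_USD
    return usd * PLATFORM_CREDITS_PER_USD


# One Firecrawl credit maps to roughly one platform credit, so a 50-page
# crawl (about 50 Firecrawl credits) costs around $0.05, i.e. ~50 platform credits.
print(platform_credits(1))   # ~1.0
print(platform_credits(50))  # ~50.0
```

This is why the `COST_USD` base cost of 1000 effectively prices a single-page scrape at about one platform credit, in line with the previous per-run tier.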

View File

@@ -4,6 +4,7 @@ from firecrawl import FirecrawlApp
from firecrawl.v2.types import ScrapeOptions
from backend.blocks.firecrawl._api import ScrapeFormat
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -86,6 +87,14 @@ class FirecrawlCrawlBlock(Block):
wait_for=input_data.wait_for,
),
)
# Firecrawl bills 1 credit (~$0.001) per crawled page. crawl_result.data
# is the list of scraped pages actually returned.
pages = len(crawl_result.data) if crawl_result.data else 0
self.merge_stats(
NodeExecutionStats(
provider_cost=pages * 0.001, provider_cost_type="cost_usd"
)
)
yield "data", crawl_result.data
for data in crawl_result.data:

View File

@@ -2,25 +2,22 @@ from typing import Any
from firecrawl import FirecrawlApp
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockCost,
BlockCostType,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
cost,
)
from backend.util.exceptions import BlockExecutionError
from ._config import firecrawl
@cost(BlockCost(2, BlockCostType.RUN))
class FirecrawlExtractBlock(Block):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
@@ -74,4 +71,13 @@ class FirecrawlExtractBlock(Block):
block_id=self.id,
) from e
# Firecrawl surfaces actual credit spend on extract responses
# (credits_used). 1 Firecrawl credit ≈ $0.001.
credits_used = getattr(extract_result, "credits_used", None) or 0
self.merge_stats(
NodeExecutionStats(
provider_cost=credits_used * 0.001,
provider_cost_type="cost_usd",
)
)
yield "data", extract_result.data

View File

@@ -2,6 +2,7 @@ from typing import Any
from firecrawl import FirecrawlApp
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -50,6 +51,10 @@ class FirecrawlMapWebsiteBlock(Block):
map_result = app.map(
url=input_data.url,
)
# Firecrawl bills 1 credit (~$0.001) per map request.
self.merge_stats(
NodeExecutionStats(provider_cost=0.001, provider_cost_type="cost_usd")
)
# Convert SearchResult objects to dicts
results_data = [

View File

@@ -3,6 +3,7 @@ from typing import Any
from firecrawl import FirecrawlApp
from backend.blocks.firecrawl._api import ScrapeFormat
from backend.data.model import NodeExecutionStats
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -81,6 +82,11 @@ class FirecrawlScrapeBlock(Block):
max_age=input_data.max_age,
wait_for=input_data.wait_for,
)
# Firecrawl bills 1 credit (~$0.001) per scraped page; scrape is a
# single-page operation.
self.merge_stats(
NodeExecutionStats(provider_cost=0.001, provider_cost_type="cost_usd")
)
yield "data", scrape_result
for f in input_data.formats:

Some files were not shown because too many files have changed in this diff.