Compare commits


46 Commits

Author SHA1 Message Date
Nicholas Tindle
e688f4003e fix(backend): handle malformed emails in PII masking
Prevents IndexError when email has empty local part (e.g., "@example.com").

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 21:56:49 -06:00
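A minimal sketch of the masking rule this commit (and the related f91edde32a masking commit below) describes, assuming a standalone helper; the real backend function may differ. The point is that taking local[0] from an empty local part is what raised the IndexError:

def mask_email(email: str) -> str:
    local, sep, domain = email.partition("@")
    if not sep or not domain:
        return "<invalid email>"  # not a parseable address; never log it raw
    if not local:
        return f"<empty>@{domain}"  # "@example.com": no local[0] to take
    return f"{local[0]}***@{domain}"

print(mask_email("nick@example.com"))  # n***@example.com
print(mask_email("@example.com"))      # <empty>@example.com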
Nicholas Tindle
3b6f1a4591 fix(frontend): check response status in delete mutation
Consistent with other mutations, now checks response.status === 200
before showing success toast.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 20:55:10 -06:00
Nicholas Tindle
6b1432d59e fix(backend): add null fallback for categories field
Prevents validation error when DB returns None for categories.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 20:42:00 -06:00
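A sketch of the fallback, assuming a Pydantic response model with a required list field (illustrative names, not the repository's real model): coalescing the database's None to an empty list before validation avoids the error.

from pydantic import BaseModel

class StoreWaitlist(BaseModel):
    categories: list[str]

def from_row(raw_categories: list[str] | None) -> StoreWaitlist:
    # The column can come back NULL; `or []` keeps validation happy.
    return StoreWaitlist(categories=raw_categories or [])

print(from_row(None))  # categories=[]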
Nicholas Tindle
f91edde32a fix(backend): mask email PII in waitlist logging
Avoid logging raw email addresses by masking to first char + domain.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 20:35:13 -06:00
Nicholas Tindle
4ba6c44f61 fix(frontend): regenerate openapi.json with correct structure
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 20:22:11 -06:00
Nicholas Tindle
b4e16e7246 Merge branch 'dev' into ntindle/waitlist 2026-02-08 20:00:02 -06:00
Nicholas Tindle
adeeba76d1 fix(backend): remove unused pytest import
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 19:07:00 -06:00
Nicholas Tindle
c88918af4f Merge remote-tracking branch 'origin/dev' into ntindle/waitlist 2026-02-08 18:42:10 -06:00
Nicholas Tindle
69618a5e05 Merge branch 'dev' into ntindle/waitlist 2026-02-04 19:02:00 -06:00
Nicholas Tindle
3610be3e83 Merge branch 'dev' into ntindle/waitlist 2026-01-20 17:47:02 -06:00
Nicholas Tindle
9e1f7c9415 Merge branch 'dev' into ntindle/waitlist 2026-01-19 01:12:14 -06:00
Nicholas Tindle
0d03ebb43c fix: lint 2026-01-16 11:34:00 -06:00
Nicholas Tindle
1b37bd6da9 Merge branch 'dev' into ntindle/waitlist 2026-01-16 11:32:05 -06:00
Nicholas Tindle
db989a5eed fix: lint 2026-01-15 15:58:33 -06:00
Nicholas Tindle
e3a8c57a35 Merge branch 'dev' into ntindle/waitlist
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:54:38 -06:00
Nicholas Tindle
dfc8e53386 fix(backend): add assertions to fix type errors in waitlist admin functions
Prisma's update() returns T | None but we verify existence before updating,
so assert the result is not None to satisfy the type checker.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:48:30 -06:00
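A generic sketch of that type-narrowing trick (the repository uses inline asserts; the helper form here is just for illustration):

from typing import Optional, TypeVar

T = TypeVar("T")

def assert_found(record: Optional[T]) -> T:
    # Existence was verified with find_unique just before the update, so a
    # None here would mean a race or a bug; assert narrows Optional[T] to T.
    assert record is not None, "record disappeared between check and update"
    return record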
Nicholas Tindle
b5b7e5da92 fix(backend): don't mark waitlist DONE if email-only users pending
The notify_waitlist_users_on_launch function was marking waitlists as
DONE after notifying registered users, but ignoring unaffiliatedEmailUsers
who haven't been notified yet. Since DONE waitlists are excluded from
future notification queries, those email users would never receive
notifications when that functionality is implemented.

Now the waitlist remains in an active state if there are pending
email-only signups that still need notifications.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:41:31 -06:00
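The completion rule, restated as a minimal sketch with invented names:

def may_mark_done(registered_users_notified: bool, pending_email_signups: int) -> bool:
    # Email-only signups keep the waitlist in an active state so a future
    # notifier can still find and notify them.
    return registered_users_notified and pending_email_signups == 0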
Nicholas Tindle
07ea2c2ab7 fix(backend): check waitlist existence before update in update_waitlist_admin
Added find_unique check before update() call to properly return 404 when
waitlist doesn't exist, following the established pattern used in other
waitlist admin functions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:37:19 -06:00
Nicholas Tindle
9c873a0158 fix(backend): add exception handling to add_self_to_waitlist route
The public waitlist join route was missing exception handling, causing
500 errors for all failures. Now properly returns:
- 404 for waitlist not found
- 400 for closed/unavailable waitlists
- 500 for unexpected errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:30:54 -06:00
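A hedged sketch of that mapping in a FastAPI handler; `repo` and the exception types are stand-ins for the real data layer, not the repository's actual API:

from fastapi import HTTPException

async def add_self_to_waitlist(waitlist_id: str, repo):
    try:
        return await repo.join(waitlist_id)
    except ValueError:
        raise HTTPException(status_code=404, detail="Waitlist not found")
    except PermissionError:  # stand-in for the closed/unavailable case
        raise HTTPException(status_code=400, detail="Waitlist is not open")
    except Exception:
        raise HTTPException(status_code=500, detail="Unexpected error")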
Nicholas Tindle
ed634db8f7 fix(backend): validate waitlist status enum at API boundary
Changed WaitlistUpdateRequest.status from str to the actual enum type.
Pydantic now validates the status value, returning 422 for invalid
values instead of a misleading 404 "Waitlist not found" error.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:26:26 -06:00
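Sketch of the change, with assumed enum members (only DONE and CANCELED are confirmed elsewhere in this log):

from enum import Enum
from pydantic import BaseModel

class WaitlistStatus(str, Enum):
    WAITLIST = "WAITLIST"
    DONE = "DONE"
    CANCELED = "CANCELED"

class WaitlistUpdateRequest(BaseModel):
    status: WaitlistStatus  # was `str`

WaitlistUpdateRequest(status="DONE")     # validates
# WaitlistUpdateRequest(status="BOGUS")  # ValidationError -> HTTP 422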
Nicholas Tindle
398197f3ea fix(frontend): add title attribute to YouTube iframe for accessibility
Screen readers need a title attribute on iframes to describe their
content. Added "YouTube video player" title to the embedded video.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:23:52 -06:00
Nicholas Tindle
b7df4cfdbf fix(backend): align migration FK with schema (SET NULL not CASCADE)
The migration had ON DELETE CASCADE for WaitlistEntry.storeListingId,
but the Prisma schema specifies onDelete: SetNull. This mismatch would
cause waitlist entries and all signup data to be deleted when a store
listing is removed, instead of just unlinking them.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:18:03 -06:00
Nicholas Tindle
5d8dd46759 fix(backend): align waitlist admin functions with established patterns
- delete_waitlist_admin: add find_unique check before update, raise
  ValueError if not found, add except ValueError: raise
- link_waitlist_to_listing_admin: add find_unique check for waitlist
  before update, remove dead code
- delete_waitlist route: add except ValueError: → 404, remove dead
  code bool check pattern

All waitlist admin functions now follow the consistent pattern:
1. find_unique to check existence
2. raise ValueError if not found
3. except ValueError: raise to bubble up
4. except Exception: raise DatabaseError

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 14:54:53 -06:00
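The four-step pattern as a sketch, using a generic async repository argument in place of the real Prisma client:

class DatabaseError(Exception):
    pass

async def delete_waitlist_admin(repo, waitlist_id: str) -> None:
    try:
        if await repo.find_unique(waitlist_id) is None:  # 1. check existence
            raise ValueError("Waitlist not found")       # 2. signal not-found
        await repo.delete(waitlist_id)
    except ValueError:
        raise                                            # 3. bubble up -> 404
    except Exception as exc:
        raise DatabaseError("delete_waitlist failed") from exc  # 4. wrap rest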
Nicholas Tindle
f9518b6f8b fix(frontend): use generated query key for waitlist cache invalidation
The hardcoded query key string didn't match the actual generated key,
causing cache invalidation to fail after joining a waitlist. Now uses
the generated getGetV2GetWaitlistIdsTheCurrentUserHasJoinedQueryKey()
function for correct cache invalidation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 14:44:35 -06:00
Nicholas Tindle
205b220e90 fix(backend): filter out DONE/CANCELED waitlists before sending notifications
The notify_waitlist_users_on_launch function was not filtering by
waitlist status, which could cause duplicate notifications when an
agent is re-approved. Now excludes DONE and CANCELED waitlists,
consistent with get_waitlist() and add_user_to_waitlist().

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 14:41:37 -06:00
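The filter, sketched with plain dicts (status names taken from this log):

INACTIVE_STATUSES = {"DONE", "CANCELED"}

def waitlists_to_notify(waitlists: list[dict]) -> list[dict]:
    # Excluding finished waitlists keeps a re-approved agent from
    # triggering duplicate launch notifications.
    return [w for w in waitlists if w["status"] not in INACTIVE_STATUSES]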
Nicholas Tindle
29a232fcb4 fix(frontend): add URL validation and sandbox to video player
- Add getYouTubeVideoId() to extract video IDs from YouTube URLs
- Add isValidVideoUrl() to validate video URLs before rendering
- Create VideoPlayer component that:
  - Embeds YouTube videos via iframe with safe embed URL
  - Adds sandbox attribute to restrict iframe capabilities
  - Adds proper allow attributes for media playback
  - Falls back to native video element for valid non-YouTube URLs
  - Shows error state for invalid URLs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 14:29:10 -06:00
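The actual helpers are TypeScript in the frontend; here is a Python rendering of the extraction logic, assuming only the two common URL shapes need support:

from urllib.parse import parse_qs, urlparse

def get_youtube_video_id(url: str) -> str | None:
    # Handles youtube.com/watch?v=ID and youtu.be/ID.
    parsed = urlparse(url)
    host = (parsed.hostname or "").removeprefix("www.")
    if host == "youtu.be":
        return parsed.path.lstrip("/") or None
    if host in ("youtube.com", "m.youtube.com"):
        return parse_qs(parsed.query).get("v", [None])[0]
    return None

print(get_youtube_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))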
Nicholas Tindle
a53f261812 feat(frontend): add TODO warning for email-only waitlist notifications
Adds a warning banner on the admin waitlist page indicating that
notifications for email-only signups (non-logged-in users) have not
been implemented yet.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 14:05:21 -06:00
Nicholas Tindle
00a20f77be feat(backend): add waitlist_launch email notification template
The WAITLIST_LAUNCH notification type was referencing a template that
didn't exist, causing FileNotFoundError when trying to notify users
that an agent they waitlisted has launched.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:38:04 -06:00
Nicholas Tindle
4d49536a40 Discard changes to autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts 2026-01-12 15:28:37 -07:00
Nicholas Tindle
6028a2528c refactor(frontend): consolidate waitlist modals and align with Figma design
- Merge JoinWaitlistModal into WaitlistDetailModal for unified experience
- Add MediaCarousel component supporting videos and images with play overlay
- Update WaitlistCard styling to match Figma (rounded-large, line-clamp-5, zinc-800 button)
- Update success state with party emoji and Close button per Figma design
- Add sticky footer for buttons during modal scroll
- Support email input for non-logged-in users

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:27:09 -06:00
Nicholas Tindle
b31cd05675 fix(backend): correct typo in unaffiliatedEmailUsers field name
- Rename unafilliatedEmailUsers -> unaffiliatedEmailUsers in schema.prisma
- Update migration SQL to use correct column name
- Update all references in db.py and model.py

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 15:33:38 -06:00
Nicholas Tindle
128366772f refactor(backend): remove apscheduler tables from prisma schema
- Remove apscheduler_jobs and apscheduler_jobs_batched_notifications models
- Delete migration 20260107000001_add_apscheduler_tables
- Remove index rename statements from waitlist migration

APScheduler tables are managed at runtime by APScheduler itself and
should not be part of the Prisma schema.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 15:29:17 -06:00
Nicholas Tindle
764cdf17fe refactor(frontend): migrate waitlist admin components to generated API hooks
- Convert WaitlistTable to use generated React Query hooks directly
- Convert CreateWaitlistButton to use generated hooks
- Update WaitlistDetailModal to use generated types and design system Dialog
- Remove deprecated waitlist types from types.ts
- Remove deprecated waitlist methods from BackendAPI client
- Delete actions.ts server actions (no longer needed)
- Replace lucide-react icons with Phosphor icons

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 15:26:34 -06:00
Nicholas Tindle
1dd83b4cf8 fix(frontend): add text color to status badge fallback in WaitlistTable
Ensures unknown status values have readable text contrast by adding
text-gray-700 to the fallback className.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 15:09:44 -06:00
Nicholas Tindle
24a34f7ce5 Merge branch 'dev' into ntindle/waitlist 2026-01-12 14:08:48 -07:00
Nicholas Tindle
20fe2c3877 fix(backend): remove PII-exposing fields from public waitlist model
Remove `owner` (User type) and `storeListing` (StoreListingWithVersions)
fields from StoreWaitlistEntry. These fields were never populated but
exposed PII types (email, stripe_customer_id, etc.) in the OpenAPI schema.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 14:52:51 -06:00
Nicholas Tindle
738c7e2bef fix(platform): address remaining PR review feedback for waitlist
Backend fixes:
- Fix optional field clearing by using model_fields_set
- Re-fetch waitlist data after join operation
- Only mark waitlist as DONE if all notifications succeed
- Fix race condition in email removal with transaction
- Rename waitlist_id to waitlistId for naming consistency

Frontend fixes:
- Migrate useWaitlistSection to generated API hooks
- Migrate JoinWaitlistModal to design system + generated hooks
- Migrate WaitlistSignupsDialog to design system + generated hooks
- Replace lucide-react icons with Phosphor in WaitlistTable
- Add proper error state in WaitlistSignupsDialog
- Update waitlistId naming across components

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 14:43:10 -06:00
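The model_fields_set fix deserves a sketch: in Pydantic v2 it is how a request model distinguishes "field omitted" from "field explicitly set to null" (illustrative model, not the real one):

from pydantic import BaseModel

class WaitlistUpdate(BaseModel):
    name: str | None = None
    description: str | None = None

def apply_update(current: dict, req: WaitlistUpdate) -> dict:
    # model_fields_set contains only fields the client sent, so an explicit
    # null clears a field while an omitted field is left untouched.
    for field in req.model_fields_set:
        current[field] = getattr(req, field)
    return current

print(apply_update({"name": "a", "description": "b"},
                   WaitlistUpdate(description=None)))
# {'name': 'a', 'description': None}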
Nicholas Tindle
9edfe0fb97 refactor(frontend): migrate EditWaitlistDialog to design system and generated API
- Replace legacy Dialog components with molecules/Dialog
- Replace legacy Input/Label/Textarea with atoms/Input
- Replace legacy Select with atoms/Select
- Replace @/lib/autogpt-server-api/types with @/app/api/__generated__/models
- Replace updateWaitlist action with usePutV2UpdateWaitlist hook
- Remove dependency on BackendAPI in favor of generated React Query hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:49:35 -07:00
Nicholas Tindle
4aabe71001 fix(platform): address PR review feedback for waitlist feature
Backend fixes:
- Fix creator_username null check in store URL construction
- Add embed=True to link_waitlist_to_listing endpoint body param
- Fix race condition in email list with transaction wrapper
- Replace str(e) with generic error messages in admin ValueError handlers
- Add validation requiring user_id or email in waitlist join
- Configure WAITLIST_LAUNCH in notification system (data type, queue, template, subject)
- Change StoreListing cascade delete to SetNull to preserve waitlist data

Frontend fixes:
- Escape internal quotes in CSV export for proper RFC 4180 compliance
- Remove incorrect 'use server' directive from page.tsx
- Replace lucide-react Check icon with Phosphor Icons

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:40:35 -07:00
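The CSV fix follows RFC 4180's quoting rule; a Python rendering of the same logic (the repository's version is TypeScript):

def csv_field(value: str) -> str:
    # Quote fields containing commas, quotes, or line breaks, and escape
    # embedded quotes by doubling them.
    if any(ch in value for ch in ',"\n\r'):
        return '"' + value.replace('"', '""') + '"'
    return value

print(csv_field('say "hi", ok'))  # -> "say ""hi"", ok"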
Nicholas Tindle
b3999669f2 refactor(platform): simplify waitlist code and remove type duplication
- Backend: Extract _waitlist_to_store_entry helper to reduce duplication
- Backend: Use dict comprehension in update_waitlist_admin for cleaner code
- Frontend: Import types directly from shared types file instead of re-exporting
- Frontend: Remove redundant isMember check in WaitlistCard handleJoinClick

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:25:27 -07:00
Swifty
8c45a5ee98 Merge branch 'dev' into ntindle/waitlist 2026-01-08 12:38:46 +01:00
Nicholas Tindle
4b654c7e9f fix(frontend): Fix lint and type errors in waitlist admin components
- Remove unused WaitlistSignup import
- Change button size from "sm" to "small"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 22:48:53 -07:00
Nicholas Tindle
8d82e3b633 fix(backend): Use Prisma connect pattern for waitlist-listing relation
Use StoreListing relation with connect pattern instead of directly
setting storeListingId, which doesn't work with Prisma's typed update.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 22:01:18 -07:00
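A sketch of the payload shape the commit describes, with an invented listing id; rather than setting the scalar foreign key directly, the update goes through the relation:

# rejected by Prisma's typed update: data={"storeListingId": listing_id}
update_data = {"StoreListing": {"connect": {"id": "listing-123"}}}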
Nicholas Tindle
d4ecdb64ed feat(platform): Show "On the waitlist" status for joined users
- Add GET /api/store/waitlist/my-memberships endpoint to fetch user's joined waitlists
- Add get_user_waitlist_memberships() db function
- Update useWaitlistSection hook to fetch memberships when logged in
- Update WaitlistCard to show green "On the waitlist" button for members
- Update WaitlistDetailModal to show member status
- Add onSuccess callback to JoinWaitlistModal for optimistic UI updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 21:15:03 -07:00
Nicholas Tindle
a73fb8f114 feat(platform): Add waitlist feature with admin management and user notifications
Backend:
- Add waitlist admin API routes for CRUD operations
- Add admin functions for waitlist management (create, update, delete, list)
- Add WaitlistLaunchData notification type for user notifications
- Integrate waitlist notifications into store submission approval flow
- Auto-notify waitlist users when linked agent is approved

Frontend:
- Add admin waitlist management page with table, create/edit dialogs
- Add WaitlistSection component to marketplace homepage
- Add WaitlistCard, WaitlistDetailModal, JoinWaitlistModal components
- Add API client methods and types for waitlist operations

Database:
- Add WAITLIST_LAUNCH notification type enum
- Add baseline migration for APScheduler tables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 20:38:15 -07:00
Nicholas Tindle
2c60aa64ef wip: adding waitlist 2026-01-06 22:13:35 -07:00
258 changed files with 15206 additions and 13735 deletions

View File

@@ -22,7 +22,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           ref: ${{ github.event.workflow_run.head_branch }}
           fetch-depth: 0
@@ -42,7 +42,7 @@ jobs:
       - name: Get CI failure details
         id: failure_details
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             const run = await github.rest.actions.getWorkflowRun({

View File

@@ -30,7 +30,7 @@ jobs:
       actions: read # Required for CI access
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 1
@@ -41,7 +41,7 @@ jobs:
           python-version: "3.11" # Use standard version matching CI
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -78,7 +78,7 @@ jobs:
       # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22"
@@ -91,7 +91,7 @@ jobs:
           echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
       - name: Cache frontend dependencies
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -124,7 +124,7 @@ jobs:
       # Phase 1: Cache and load Docker images for faster setup
       - name: Set up Docker image cache
         id: docker-cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/docker-cache
           # Use a versioned key for cache invalidation when image list changes

View File

@@ -40,7 +40,7 @@ jobs:
       actions: read # Required for CI access
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 1
@@ -57,7 +57,7 @@ jobs:
           python-version: "3.11" # Use standard version matching CI
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -94,7 +94,7 @@ jobs:
       # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22"
@@ -107,7 +107,7 @@ jobs:
           echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
       - name: Cache frontend dependencies
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -140,7 +140,7 @@ jobs:
       # Phase 1: Cache and load Docker images for faster setup
       - name: Set up Docker image cache
         id: docker-cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/docker-cache
           # Use a versioned key for cache invalidation when image list changes

View File

@@ -58,7 +58,7 @@ jobs:
       # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL

View File

@@ -27,7 +27,7 @@ jobs:
       # If you do not check out your code, Copilot will do this for you.
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
           submodules: true
@@ -39,7 +39,7 @@ jobs:
           python-version: "3.11" # Use standard version matching CI
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -76,7 +76,7 @@ jobs:
       # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22"
@@ -89,7 +89,7 @@ jobs:
           echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
       - name: Cache frontend dependencies
-        uses: actions/cache@v5
+        uses: actions/cache@v4
        with:
           path: ~/.pnpm-store
           key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -132,7 +132,7 @@ jobs:
       # Phase 1: Cache and load Docker images for faster setup
       - name: Set up Docker image cache
         id: docker-cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/docker-cache
           # Use a versioned key for cache invalidation when image list changes

View File

@@ -23,7 +23,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 1
@@ -33,7 +33,7 @@ jobs:
           python-version: "3.11"
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -23,7 +23,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
@@ -33,7 +33,7 @@ jobs:
           python-version: "3.11"
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -28,7 +28,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 1
@@ -38,7 +38,7 @@ jobs:
           python-version: "3.11"
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -25,7 +25,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           ref: ${{ github.event.inputs.git_ref || github.ref_name }}
@@ -52,7 +52,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Trigger deploy workflow
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.DEPLOY_TOKEN }}
           repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -17,7 +17,7 @@ jobs:
     steps:
      - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           ref: ${{ github.ref_name || 'master' }}
@@ -45,7 +45,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Trigger deploy workflow
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.DEPLOY_TOKEN }}
           repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -68,7 +68,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
           submodules: true
@@ -88,7 +88,7 @@ jobs:
         run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
       - name: Set up Python dependency cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -17,7 +17,7 @@
       - name: Check comment permissions and deployment status
         id: check_status
         if: github.event_name == 'issue_comment' && github.event.issue.pull_request
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             const commentBody = context.payload.comment.body.trim();
@@ -55,7 +55,7 @@
       - name: Post permission denied comment
         if: steps.check_status.outputs.permission_denied == 'true'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             await github.rest.issues.createComment({
@@ -68,7 +68,7 @@
       - name: Get PR details for deployment
         id: pr_details
         if: steps.check_status.outputs.should_deploy == 'true' || steps.check_status.outputs.should_undeploy == 'true'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             const pr = await github.rest.pulls.get({
@@ -82,7 +82,7 @@
       - name: Dispatch Deploy Event
         if: steps.check_status.outputs.should_deploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.DISPATCH_TOKEN }}
           repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -98,7 +98,7 @@
       - name: Post deploy success comment
         if: steps.check_status.outputs.should_deploy == 'true'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             await github.rest.issues.createComment({
@@ -110,7 +110,7 @@
       - name: Dispatch Undeploy Event (from comment)
         if: steps.check_status.outputs.should_undeploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.DISPATCH_TOKEN }}
           repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -126,7 +126,7 @@
       - name: Post undeploy success comment
         if: steps.check_status.outputs.should_undeploy == 'true'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             await github.rest.issues.createComment({
@@ -139,7 +139,7 @@
       - name: Check deployment status on PR close
         id: check_pr_close
         if: github.event_name == 'pull_request' && github.event.action == 'closed'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             const comments = await github.rest.issues.listComments({
@@ -168,7 +168,7 @@
           github.event_name == 'pull_request' &&
           github.event.action == 'closed' &&
           steps.check_pr_close.outputs.should_undeploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.DISPATCH_TOKEN }}
           repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -187,7 +187,7 @@
           github.event_name == 'pull_request' &&
           github.event.action == 'closed' &&
           steps.check_pr_close.outputs.should_undeploy == 'true'
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           script: |
             await github.rest.issues.createComment({

View File

@@ -31,7 +31,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
       - name: Check for component changes
         uses: dorny/paths-filter@v3
@@ -42,7 +42,7 @@ jobs:
             - 'autogpt_platform/frontend/src/components/**'
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -54,7 +54,7 @@ jobs:
         run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
       - name: Cache dependencies
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ steps.cache-key.outputs.key }}
@@ -71,10 +71,10 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -82,7 +82,7 @@ jobs:
         run: corepack enable
       - name: Restore dependencies cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ needs.setup.outputs.cache-key }}
@@ -107,12 +107,12 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -120,7 +120,7 @@ jobs:
         run: corepack enable
       - name: Restore dependencies cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ needs.setup.outputs.cache-key }}
@@ -148,12 +148,12 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           submodules: recursive
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -176,7 +176,7 @@ jobs:
         uses: docker/setup-buildx-action@v3
       - name: Cache Docker layers
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: /tmp/.buildx-cache
           key: ${{ runner.os }}-buildx-frontend-test-${{ hashFiles('autogpt_platform/docker-compose.yml', 'autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/pyproject.toml', 'autogpt_platform/backend/poetry.lock') }}
@@ -231,7 +231,7 @@ jobs:
           fi
       - name: Restore dependencies cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ needs.setup.outputs.cache-key }}
@@ -277,12 +277,12 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           submodules: recursive
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -290,7 +290,7 @@ jobs:
         run: corepack enable
       - name: Restore dependencies cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ needs.setup.outputs.cache-key }}

View File

@@ -29,10 +29,10 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -44,7 +44,7 @@ jobs:
         run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
       - name: Cache dependencies
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ steps.cache-key.outputs.key }}
@@ -56,19 +56,19 @@ jobs:
         run: pnpm install --frozen-lockfile
   types:
-    runs-on: big-boi
+    runs-on: ubuntu-latest
     needs: setup
     strategy:
       fail-fast: false
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
         with:
           submodules: recursive
       - name: Set up Node.js
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@v4
         with:
           node-version: "22.18.0"
@@ -85,10 +85,10 @@ jobs:
       - name: Run docker compose
         run: |
-          docker compose -f ../docker-compose.yml --profile local up -d deps_backend
+          docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
       - name: Restore dependencies cache
-        uses: actions/cache@v5
+        uses: actions/cache@v4
         with:
           path: ~/.pnpm-store
           key: ${{ needs.setup.outputs.cache-key }}

View File

@@ -11,7 +11,7 @@ jobs:
     steps:
       # - name: Wait some time for all actions to start
      #   run: sleep 30
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
       #   with:
       #     fetch-depth: 0
       - name: Set up Python

View File

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
 
 [[package]]
 name = "annotated-doc"
@@ -67,7 +67,7 @@ description = "Backport of asyncio.Runner, a context manager that controls event
 optional = false
 python-versions = "<3.11,>=3.8"
 groups = ["dev"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
 files = [
     {file = "backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5"},
     {file = "backports_asyncio_runner-1.2.0.tar.gz", hash = "sha256:a5aa7b2b7d8f8bfcaa2b57313f70792df84e32a2a746f585213373f900b42162"},
@@ -326,118 +326,100 @@
 [[package]]
 name = "coverage"
-version = "7.13.4"
+version = "7.10.5"
 description = "Code coverage measurement for Python"
 optional = false
-python-versions = ">=3.10"
+python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-    {file = "coverage-7.13.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0fc31c787a84f8cd6027eba44010517020e0d18487064cd3d8968941856d1415"},
+    {file = "coverage-7.10.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c6a5c3414bfc7451b879141ce772c546985163cf553f08e0f135f0699a911801"},
-    {file = "coverage-7.13.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a32ebc02a1805adf637fc8dec324b5cdacd2e493515424f70ee33799573d661b"},
+    {file = "coverage-7.10.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bc8e4d99ce82f1710cc3c125adc30fd1487d3cf6c2cd4994d78d68a47b16989a"},
-    {file = "coverage-7.13.4-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:e24f9156097ff9dc286f2f913df3a7f63c0e333dcafa3c196f2c18b4175ca09a"},
+    {file = "coverage-7.10.5-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:02252dc1216e512a9311f596b3169fad54abcb13827a8d76d5630c798a50a754"},
-    {file = "coverage-7.13.4-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8041b6c5bfdc03257666e9881d33b1abc88daccaf73f7b6340fb7946655cd10f"},
+    {file = "coverage-7.10.5-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:73269df37883e02d460bee0cc16be90509faea1e3bd105d77360b512d5bb9c33"},
-    {file = "coverage-7.13.4-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2a09cfa6a5862bc2fc6ca7c3def5b2926194a56b8ab78ffcf617d28911123012"},
+    {file = "coverage-7.10.5-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f8a81b0614642f91c9effd53eec284f965577591f51f547a1cbeb32035b4c2f"},
-    {file = "coverage-7.13.4-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:296f8b0af861d3970c2a4d8c91d48eb4dd4771bcef9baedec6a9b515d7de3def"},
+    {file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6a29f8e0adb7f8c2b95fa2d4566a1d6e6722e0a637634c6563cb1ab844427dd9"},
-    {file = "coverage-7.13.4-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e101609bcbbfb04605ea1027b10dc3735c094d12d40826a60f897b98b1c30256"},
+    {file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:fcf6ab569436b4a647d4e91accba12509ad9f2554bc93d3aee23cc596e7f99c3"},
-    {file = "coverage-7.13.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:aa3feb8db2e87ff5e6d00d7e1480ae241876286691265657b500886c98f38bda"},
+    {file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:90dc3d6fb222b194a5de60af8d190bedeeddcbc7add317e4a3cd333ee6b7c879"},
-    {file = "coverage-7.13.4-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:4fc7fa81bbaf5a02801b65346c8b3e657f1d93763e58c0abdf7c992addd81a92"},
+    {file = "coverage-7.10.5-cp310-cp310-win32.whl", hash = "sha256:414a568cd545f9dc75f0686a0049393de8098414b58ea071e03395505b73d7a8"},
-    {file = "coverage-7.13.4-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:33901f604424145c6e9c2398684b92e176c0b12df77d52db81c20abd48c3794c"},
+    {file = "coverage-7.10.5-cp310-cp310-win_amd64.whl", hash = "sha256:e551f9d03347196271935fd3c0c165f0e8c049220280c1120de0084d65e9c7ff"},
-    {file = "coverage-7.13.4-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:bb28c0f2cf2782508a40cec377935829d5fcc3ad9a3681375af4e84eb34b6b58"},
+    {file = "coverage-7.10.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c177e6ffe2ebc7c410785307758ee21258aa8e8092b44d09a2da767834f075f2"},
-    {file = "coverage-7.13.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:9d107aff57a83222ddbd8d9ee705ede2af2cc926608b57abed8ef96b50b7e8f9"},
+    {file = "coverage-7.10.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:14d6071c51ad0f703d6440827eaa46386169b5fdced42631d5a5ac419616046f"},
-    {file = "coverage-7.13.4-cp310-cp310-win32.whl", hash = "sha256:a6f94a7d00eb18f1b6d403c91a88fd58cfc92d4b16080dfdb774afc8294469bf"},
+    {file = "coverage-7.10.5-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:61f78c7c3bc272a410c5ae3fde7792b4ffb4acc03d35a7df73ca8978826bb7ab"},
-    {file = "coverage-7.13.4-cp310-cp310-win_amd64.whl", hash = "sha256:2cb0f1e000ebc419632bbe04366a8990b6e32c4e0b51543a6484ffe15eaeda95"},
+    {file = "coverage-7.10.5-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f39071caa126f69d63f99b324fb08c7b1da2ec28cbb1fe7b5b1799926492f65c"},
-    {file = "coverage-7.13.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d490ba50c3f35dd7c17953c68f3270e7ccd1c6642e2d2afe2d8e720b98f5a053"},
+    {file = "coverage-7.10.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343a023193f04d46edc46b2616cdbee68c94dd10208ecd3adc56fcc54ef2baa1"},
-    {file = "coverage-7.13.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:19bc3c88078789f8ef36acb014d7241961dbf883fd2533d18cb1e7a5b4e28b11"},
+    {file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:585ffe93ae5894d1ebdee69fc0b0d4b7c75d8007983692fb300ac98eed146f78"},
-    {file = "coverage-7.13.4-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3998e5a32e62fdf410c0dbd3115df86297995d6e3429af80b8798aad894ca7aa"},
+    {file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:b0ef4e66f006ed181df29b59921bd8fc7ed7cd6a9289295cd8b2824b49b570df"},
-    {file = "coverage-7.13.4-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8e264226ec98e01a8e1054314af91ee6cde0eacac4f465cc93b03dbe0bce2fd7"},
+    {file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eb7b0bbf7cc1d0453b843eca7b5fa017874735bef9bfdfa4121373d2cc885ed6"},
-    {file = "coverage-7.13.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3aa4e7b9e416774b21797365b358a6e827ffadaaca81b69ee02946852449f00"},
+    {file = "coverage-7.10.5-cp311-cp311-win32.whl", hash = "sha256:1d043a8a06987cc0c98516e57c4d3fc2c1591364831e9deb59c9e1b4937e8caf"},
-    {file = "coverage-7.13.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:71ca20079dd8f27fcf808817e281e90220475cd75115162218d0e27549f95fef"},
+    {file = "coverage-7.10.5-cp311-cp311-win_amd64.whl", hash = "sha256:fefafcca09c3ac56372ef64a40f5fe17c5592fab906e0fdffd09543f3012ba50"},
-    {file = "coverage-7.13.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e2f25215f1a359ab17320b47bcdaca3e6e6356652e8256f2441e4ef972052903"},
+    {file = "coverage-7.10.5-cp311-cp311-win_arm64.whl", hash = "sha256:7e78b767da8b5fc5b2faa69bb001edafcd6f3995b42a331c53ef9572c55ceb82"},
-    {file = "coverage-7.13.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d65b2d373032411e86960604dc4edac91fdfb5dca539461cf2cbe78327d1e64f"},
+    {file = "coverage-7.10.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c2d05c7e73c60a4cecc7d9b60dbfd603b4ebc0adafaef371445b47d0f805c8a9"},
-    {file = "coverage-7.13.4-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94eb63f9b363180aff17de3e7c8760c3ba94664ea2695c52f10111244d16a299"},
+    {file = "coverage-7.10.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:32ddaa3b2c509778ed5373b177eb2bf5662405493baeff52278a0b4f9415188b"},
-    {file = "coverage-7.13.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e856bf6616714c3a9fbc270ab54103f4e685ba236fa98c054e8f87f266c93505"},
+    {file = "coverage-7.10.5-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dd382410039fe062097aa0292ab6335a3f1e7af7bba2ef8d27dcda484918f20c"},
-    {file = "coverage-7.13.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:65dfcbe305c3dfe658492df2d85259e0d79ead4177f9ae724b6fb245198f55d6"},
+    {file = "coverage-7.10.5-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7fa22800f3908df31cea6fb230f20ac49e343515d968cc3a42b30d5c3ebf9b5a"},
-    {file = "coverage-7.13.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b507778ae8a4c915436ed5c2e05b4a6cecfa70f734e19c22a005152a11c7b6a9"},
+    {file = "coverage-7.10.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f366a57ac81f5e12797136552f5b7502fa053c861a009b91b80ed51f2ce651c6"},
-    {file = "coverage-7.13.4-cp311-cp311-win32.whl", hash = "sha256:784fc3cf8be001197b652d51d3fd259b1e2262888693a4636e18879f613a62a9"},
+    {file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5f1dc8f1980a272ad4a6c84cba7981792344dad33bf5869361576b7aef42733a"},
-    {file = "coverage-7.13.4-cp311-cp311-win_amd64.whl", hash = "sha256:2421d591f8ca05b308cf0092807308b2facbefe54af7c02ac22548b88b95c98f"},
+    {file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:2285c04ee8676f7938b02b4936d9b9b672064daab3187c20f73a55f3d70e6b4a"},
-    {file = "coverage-7.13.4-cp311-cp311-win_arm64.whl", hash = "sha256:79e73a76b854d9c6088fe5d8b2ebe745f8681c55f7397c3c0a016192d681045f"},
+    {file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c2492e4dd9daab63f5f56286f8a04c51323d237631eb98505d87e4c4ff19ec34"},
-    {file = "coverage-7.13.4-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:02231499b08dabbe2b96612993e5fc34217cdae907a51b906ac7fca8027a4459"},
+    {file = "coverage-7.10.5-cp312-cp312-win32.whl", hash = "sha256:38a9109c4ee8135d5df5505384fc2f20287a47ccbe0b3f04c53c9a1989c2bbaf"},
-    {file = "coverage-7.13.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40aa8808140e55dc022b15d8aa7f651b6b3d68b365ea0398f1441e0b04d859c3"},
+    {file = "coverage-7.10.5-cp312-cp312-win_amd64.whl", hash = "sha256:6b87f1ad60b30bc3c43c66afa7db6b22a3109902e28c5094957626a0143a001f"},
-    {file = "coverage-7.13.4-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5b856a8ccf749480024ff3bd7310adaef57bf31fd17e1bfc404b7940b6986634"},
+    {file = "coverage-7.10.5-cp312-cp312-win_arm64.whl", hash = "sha256:672a6c1da5aea6c629819a0e1461e89d244f78d7b60c424ecf4f1f2556c041d8"},
-    {file = "coverage-7.13.4-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c048ea43875fbf8b45d476ad79f179809c590ec7b79e2035c662e7afa3192e3"},
+    {file = "coverage-7.10.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ef3b83594d933020f54cf65ea1f4405d1f4e41a009c46df629dd964fcb6e907c"},
-    {file = "coverage-7.13.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b7b38448866e83176e28086674fe7368ab8590e4610fb662b44e345b86d63ffa"},
+    {file = "coverage-7.10.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2b96bfdf7c0ea9faebce088a3ecb2382819da4fbc05c7b80040dbc428df6af44"},
-    {file = "coverage-7.13.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:de6defc1c9badbf8b9e67ae90fd00519186d6ab64e5cc5f3d21359c2a9b2c1d3"},
+    {file = "coverage-7.10.5-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:63df1fdaffa42d914d5c4d293e838937638bf75c794cf20bee12978fc8c4e3bc"},
-    {file = "coverage-7.13.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7eda778067ad7ffccd23ecffce537dface96212576a07924cbf0d8799d2ded5a"},
+    {file = "coverage-7.10.5-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8002dc6a049aac0e81ecec97abfb08c01ef0c1fbf962d0c98da3950ace89b869"},
-    {file = "coverage-7.13.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e87f6c587c3f34356c3759f0420693e35e7eb0e2e41e4c011cb6ec6ecbbf1db7"},
+    {file = "coverage-7.10.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:63d4bb2966d6f5f705a6b0c6784c8969c468dbc4bcf9d9ded8bff1c7e092451f"},
-    {file = "coverage-7.13.4-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:8248977c2e33aecb2ced42fef99f2d319e9904a36e55a8a68b69207fb7e43edc"},
+    {file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1f672efc0731a6846b157389b6e6d5d5e9e59d1d1a23a5c66a99fd58339914d5"},
-    {file = "coverage-7.13.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:25381386e80ae727608e662474db537d4df1ecd42379b5ba33c84633a2b36d47"},
+    {file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3f39cef43d08049e8afc1fde4a5da8510fc6be843f8dea350ee46e2a26b2f54c"},
-    {file = "coverage-7.13.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:ee756f00726693e5ba94d6df2bdfd64d4852d23b09bb0bc700e3b30e6f333985"},
+    {file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:2968647e3ed5a6c019a419264386b013979ff1fb67dd11f5c9886c43d6a31fc2"},
-    {file = "coverage-7.13.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fdfc1e28e7c7cdce44985b3043bc13bbd9c747520f94a4d7164af8260b3d91f0"},
+    {file = "coverage-7.10.5-cp313-cp313-win32.whl", hash = "sha256:0d511dda38595b2b6934c2b730a1fd57a3635c6aa2a04cb74714cdfdd53846f4"},
-    {file = "coverage-7.13.4-cp312-cp312-win32.whl", hash = "sha256:01d4cbc3c283a17fc1e42d614a119f7f438eabb593391283adca8dc86eff1246"},
+    {file = "coverage-7.10.5-cp313-cp313-win_amd64.whl", hash = "sha256:9a86281794a393513cf117177fd39c796b3f8e3759bb2764259a2abba5cce54b"},
-    {file = "coverage-7.13.4-cp312-cp312-win_amd64.whl", hash = "sha256:9401ebc7ef522f01d01d45532c68c5ac40fb27113019b6b7d8b208f6e9baa126"},
+    {file = "coverage-7.10.5-cp313-cp313-win_arm64.whl", hash = "sha256:cebd8e906eb98bb09c10d1feed16096700b1198d482267f8bf0474e63a7b8d84"},
-    {file = "coverage-7.13.4-cp312-cp312-win_arm64.whl", hash = "sha256:b1ec7b6b6e93255f952e27ab58fbc68dcc468844b16ecbee881aeb29b6ab4d8d"},
+    {file = "coverage-7.10.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0520dff502da5e09d0d20781df74d8189ab334a1e40d5bafe2efaa4158e2d9e7"},
-    {file = "coverage-7.13.4-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:b66a2da594b6068b48b2692f043f35d4d3693fb639d5ea8b39533c2ad9ac3ab9"},
+    {file = "coverage-7.10.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d9cd64aca68f503ed3f1f18c7c9174cbb797baba02ca8ab5112f9d1c0328cd4b"},
-    {file = "coverage-7.13.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3599eb3992d814d23b35c536c28df1a882caa950f8f507cef23d1cbf334995ac"},
+    {file = "coverage-7.10.5-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0913dd1613a33b13c4f84aa6e3f4198c1a21ee28ccb4f674985c1f22109f0aae"},
-    {file = "coverage-7.13.4-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:93550784d9281e374fb5a12bf1324cc8a963fd63b2d2f223503ef0fd4aa339ea"},
+    {file = "coverage-7.10.5-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:1b7181c0feeb06ed8a02da02792f42f829a7b29990fef52eff257fef0885d760"},
-    {file = "coverage-7.13.4-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b720ce6a88a2755f7c697c23268ddc47a571b88052e6b155224347389fdf6a3b"},
+    {file = "coverage-7.10.5-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36d42b7396b605f774d4372dd9c49bed71cbabce4ae1ccd074d155709dd8f235"},
-    {file = "coverage-7.13.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7b322db1284a2ed3aa28ffd8ebe3db91c929b7a333c0820abec3d838ef5b3525"},
+    {file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b4fdc777e05c4940b297bf47bf7eedd56a39a61dc23ba798e4b830d585486ca5"},
-    {file = "coverage-7.13.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f4594c67d8a7c89cf922d9df0438c7c7bb022ad506eddb0fdb2863359ff78242"},
+    {file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:42144e8e346de44a6f1dbd0a56575dd8ab8dfa7e9007da02ea5b1c30ab33a7db"},
-    {file = "coverage-7.13.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:53d133df809c743eb8bce33b24bcababb371f4441340578cd406e084d94a6148"},
+    {file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:66c644cbd7aed8fe266d5917e2c9f65458a51cfe5eeff9c05f15b335f697066e"},
-    {file = "coverage-7.13.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:76451d1978b95ba6507a039090ba076105c87cc76fc3efd5d35d72093964d49a"},
+    {file = "coverage-7.10.5-cp313-cp313t-win32.whl", hash = "sha256:2d1b73023854068c44b0c554578a4e1ef1b050ed07cf8b431549e624a29a66ee"},
-    {file = "coverage-7.13.4-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:7f57b33491e281e962021de110b451ab8a24182589be17e12a22c79047935e23"},
+    {file = "coverage-7.10.5-cp313-cp313t-win_amd64.whl", hash = "sha256:54a1532c8a642d8cc0bd5a9a51f5a9dcc440294fd06e9dda55e743c5ec1a8f14"},
-    {file = "coverage-7.13.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:1731dc33dc276dafc410a885cbf5992f1ff171393e48a21453b78727d090de80"},
+    {file = "coverage-7.10.5-cp313-cp313t-win_arm64.whl", hash = "sha256:74d5b63fe3f5f5d372253a4ef92492c11a4305f3550631beaa432fc9df16fcff"},
-    {file = "coverage-7.13.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:bd60d4fe2f6fa7dff9223ca1bbc9f05d2b6697bc5961072e5d3b952d46e1b1ea"},
+    {file = "coverage-7.10.5-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:68c5e0bc5f44f68053369fa0d94459c84548a77660a5f2561c5e5f1e3bed7031"},
-    {file = "coverage-7.13.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9181a3ccead280b828fae232df12b16652702b49d41e99d657f46cc7b1f6ec7a"},
+    {file = "coverage-7.10.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cf33134ffae93865e32e1e37df043bef15a5e857d8caebc0099d225c579b0fa3"},
-    {file = "coverage-7.13.4-cp313-cp313-win32.whl", hash = "sha256:f53d492307962561ac7de4cd1de3e363589b000ab69617c6156a16ba7237998d"},
+    {file = "coverage-7.10.5-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ad8fa9d5193bafcf668231294241302b5e683a0518bf1e33a9a0dfb142ec3031"},
-    {file = "coverage-7.13.4-cp313-cp313-win_amd64.whl", hash = "sha256:e6f70dec1cc557e52df5306d051ef56003f74d56e9c4dd7ddb07e07ef32a84dd"},
+    {file = "coverage-7.10.5-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:146fa1531973d38ab4b689bc764592fe6c2f913e7e80a39e7eeafd11f0ef6db2"},
-    {file = "coverage-7.13.4-cp313-cp313-win_arm64.whl", hash = "sha256:fb07dc5da7e849e2ad31a5d74e9bece81f30ecf5a42909d0a695f8bd1874d6af"},
+    {file = "coverage-7.10.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6013a37b8a4854c478d3219ee8bc2392dea51602dd0803a12d6f6182a0061762"},
-    {file = "coverage-7.13.4-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:40d74da8e6c4b9ac18b15331c4b5ebc35a17069410cad462ad4f40dcd2d50c0d"},
+    {file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:eb90fe20db9c3d930fa2ad7a308207ab5b86bf6a76f54ab6a40be4012d88fcae"},
-    {file = "coverage-7.13.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4223b4230a376138939a9173f1bdd6521994f2aff8047fae100d6d94d50c5a12"},
+    {file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:384b34482272e960c438703cafe63316dfbea124ac62006a455c8410bf2a2262"},
-    {file = "coverage-7.13.4-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1d4be36a5114c499f9f1f9195e95ebf979460dbe2d88e6816ea202010ba1c34b"},
+    {file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:467dc74bd0a1a7de2bedf8deaf6811f43602cb532bd34d81ffd6038d6d8abe99"},
-    {file = "coverage-7.13.4-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:200dea7d1e8095cc6e98cdabe3fd1d21ab17d3cee6dab00cadbb2fe35d9c15b9"},
+    {file = "coverage-7.10.5-cp314-cp314-win32.whl", hash = "sha256:556d23d4e6393ca898b2e63a5bca91e9ac2d5fb13299ec286cd69a09a7187fde"},
{file = "coverage-7.13.4-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b8eb931ee8e6d8243e253e5ed7336deea6904369d2fd8ae6e43f68abbf167092"}, {file = "coverage-7.10.5-cp314-cp314-win_amd64.whl", hash = "sha256:f4446a9547681533c8fa3e3c6cf62121eeee616e6a92bd9201c6edd91beffe13"},
{file = "coverage-7.13.4-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:75eab1ebe4f2f64d9509b984f9314d4aa788540368218b858dad56dc8f3e5eb9"}, {file = "coverage-7.10.5-cp314-cp314-win_arm64.whl", hash = "sha256:5e78bd9cf65da4c303bf663de0d73bf69f81e878bf72a94e9af67137c69b9fe9"},
{file = "coverage-7.13.4-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:c35eb28c1d085eb7d8c9b3296567a1bebe03ce72962e932431b9a61f28facf26"}, {file = "coverage-7.10.5-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5661bf987d91ec756a47c7e5df4fbcb949f39e32f9334ccd3f43233bbb65e508"},
{file = "coverage-7.13.4-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:eb88b316ec33760714a4720feb2816a3a59180fd58c1985012054fa7aebee4c2"}, {file = "coverage-7.10.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a46473129244db42a720439a26984f8c6f834762fc4573616c1f37f13994b357"},
{file = "coverage-7.13.4-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:7d41eead3cc673cbd38a4417deb7fd0b4ca26954ff7dc6078e33f6ff97bed940"}, {file = "coverage-7.10.5-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1f64b8d3415d60f24b058b58d859e9512624bdfa57a2d1f8aff93c1ec45c429b"},
{file = "coverage-7.13.4-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:fb26a934946a6afe0e326aebe0730cdff393a8bc0bbb65a2f41e30feddca399c"}, {file = "coverage-7.10.5-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:44d43de99a9d90b20e0163f9770542357f58860a26e24dc1d924643bd6aa7cb4"},
{file = "coverage-7.13.4-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:dae88bc0fc77edaa65c14be099bd57ee140cf507e6bfdeea7938457ab387efb0"}, {file = "coverage-7.10.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a931a87e5ddb6b6404e65443b742cb1c14959622777f2a4efd81fba84f5d91ba"},
{file = "coverage-7.13.4-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:845f352911777a8e722bfce168958214951e07e47e5d5d9744109fa5fe77f79b"}, {file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f9559b906a100029274448f4c8b8b0a127daa4dade5661dfd821b8c188058842"},
{file = "coverage-7.13.4-cp313-cp313t-win32.whl", hash = "sha256:2fa8d5f8de70688a28240de9e139fa16b153cc3cbb01c5f16d88d6505ebdadf9"}, {file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b08801e25e3b4526ef9ced1aa29344131a8f5213c60c03c18fe4c6170ffa2874"},
{file = "coverage-7.13.4-cp313-cp313t-win_amd64.whl", hash = "sha256:9351229c8c8407645840edcc277f4a2d44814d1bc34a2128c11c2a031d45a5dd"}, {file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ed9749bb8eda35f8b636fb7632f1c62f735a236a5d4edadd8bbcc5ea0542e732"},
{file = "coverage-7.13.4-cp313-cp313t-win_arm64.whl", hash = "sha256:30b8d0512f2dc8c8747557e8fb459d6176a2c9e5731e2b74d311c03b78451997"}, {file = "coverage-7.10.5-cp314-cp314t-win32.whl", hash = "sha256:609b60d123fc2cc63ccee6d17e4676699075db72d14ac3c107cc4976d516f2df"},
{file = "coverage-7.13.4-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:300deaee342f90696ed186e3a00c71b5b3d27bffe9e827677954f4ee56969601"}, {file = "coverage-7.10.5-cp314-cp314t-win_amd64.whl", hash = "sha256:0666cf3d2c1626b5a3463fd5b05f5e21f99e6aec40a3192eee4d07a15970b07f"},
{file = "coverage-7.13.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:29e3220258d682b6226a9b0925bc563ed9a1ebcff3cad30f043eceea7eaf2689"}, {file = "coverage-7.10.5-cp314-cp314t-win_arm64.whl", hash = "sha256:bc85eb2d35e760120540afddd3044a5bf69118a91a296a8b3940dfc4fdcfe1e2"},
{file = "coverage-7.13.4-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:391ee8f19bef69210978363ca930f7328081c6a0152f1166c91f0b5fdd2a773c"}, {file = "coverage-7.10.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:62835c1b00c4a4ace24c1a88561a5a59b612fbb83a525d1c70ff5720c97c0610"},
{file = "coverage-7.13.4-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0dd7ab8278f0d58a0128ba2fca25824321f05d059c1441800e934ff2efa52129"}, {file = "coverage-7.10.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5255b3bbcc1d32a4069d6403820ac8e6dbcc1d68cb28a60a1ebf17e47028e898"},
{file = "coverage-7.13.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:78cdf0d578b15148b009ccf18c686aa4f719d887e76e6b40c38ffb61d264a552"}, {file = "coverage-7.10.5-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3876385722e335d6e991c430302c24251ef9c2a9701b2b390f5473199b1b8ebf"},
{file = "coverage-7.13.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:48685fee12c2eb3b27c62f2658e7ea21e9c3239cba5a8a242801a0a3f6a8c62a"}, {file = "coverage-7.10.5-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8048ce4b149c93447a55d279078c8ae98b08a6951a3c4d2d7e87f4efc7bfe100"},
{file = "coverage-7.13.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:4e83efc079eb39480e6346a15a1bcb3e9b04759c5202d157e1dd4303cd619356"}, {file = "coverage-7.10.5-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4028e7558e268dd8bcf4d9484aad393cafa654c24b4885f6f9474bf53183a82a"},
{file = "coverage-7.13.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ecae9737b72408d6a950f7e525f30aca12d4bd8dd95e37342e5beb3a2a8c4f71"}, {file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:03f47dc870eec0367fcdd603ca6a01517d2504e83dc18dbfafae37faec66129a"},
{file = "coverage-7.13.4-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:ae4578f8528569d3cf303fef2ea569c7f4c4059a38c8667ccef15c6e1f118aa5"}, {file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:2d488d7d42b6ded7ea0704884f89dcabd2619505457de8fc9a6011c62106f6e5"},
{file = "coverage-7.13.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:6fdef321fdfbb30a197efa02d48fcd9981f0d8ad2ae8903ac318adc653f5df98"}, {file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b3dcf2ead47fa8be14224ee817dfc1df98043af568fe120a22f81c0eb3c34ad2"},
{file = "coverage-7.13.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b0f6ccf3dbe577170bebfce1318707d0e8c3650003cb4b3a9dd744575daa8b5"}, {file = "coverage-7.10.5-cp39-cp39-win32.whl", hash = "sha256:02650a11324b80057b8c9c29487020073d5e98a498f1857f37e3f9b6ea1b2426"},
{file = "coverage-7.13.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:75fcd519f2a5765db3f0e391eb3b7d150cce1a771bf4c9f861aeab86c767a3c0"}, {file = "coverage-7.10.5-cp39-cp39-win_amd64.whl", hash = "sha256:b45264dd450a10f9e03237b41a9a24e85cbb1e278e5a32adb1a303f58f0017f3"},
{file = "coverage-7.13.4-cp314-cp314-win32.whl", hash = "sha256:8e798c266c378da2bd819b0677df41ab46d78065fb2a399558f3f6cae78b2fbb"}, {file = "coverage-7.10.5-py3-none-any.whl", hash = "sha256:0be24d35e4db1d23d0db5c0f6a74a962e2ec83c426b5cac09f4234aadef38e4a"},
{file = "coverage-7.13.4-cp314-cp314-win_amd64.whl", hash = "sha256:245e37f664d89861cf2329c9afa2c1fe9e6d4e1a09d872c947e70718aeeac505"}, {file = "coverage-7.10.5.tar.gz", hash = "sha256:f2e57716a78bc3ae80b2207be0709a3b2b63b9f2dcf9740ee6ac03588a2015b6"},
{file = "coverage-7.13.4-cp314-cp314-win_arm64.whl", hash = "sha256:ad27098a189e5838900ce4c2a99f2fe42a0bf0c2093c17c69b45a71579e8d4a2"},
{file = "coverage-7.13.4-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:85480adfb35ffc32d40918aad81b89c69c9cc5661a9b8a81476d3e645321a056"},
{file = "coverage-7.13.4-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:79be69cf7f3bf9b0deeeb062eab7ac7f36cd4cc4c4dd694bd28921ba4d8596cc"},
{file = "coverage-7.13.4-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:caa421e2684e382c5d8973ac55e4f36bed6821a9bad5c953494de960c74595c9"},
{file = "coverage-7.13.4-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:14375934243ee05f56c45393fe2ce81fe5cc503c07cee2bdf1725fb8bef3ffaf"},
{file = "coverage-7.13.4-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:25a41c3104d08edb094d9db0d905ca54d0cd41c928bb6be3c4c799a54753af55"},
{file = "coverage-7.13.4-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6f01afcff62bf9a08fb32b2c1d6e924236c0383c02c790732b6537269e466a72"},
{file = "coverage-7.13.4-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:eb9078108fbf0bcdde37c3f4779303673c2fa1fe8f7956e68d447d0dd426d38a"},
{file = "coverage-7.13.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:0e086334e8537ddd17e5f16a344777c1ab8194986ec533711cbe6c41cde841b6"},
{file = "coverage-7.13.4-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:725d985c5ab621268b2edb8e50dfe57633dc69bda071abc470fed55a14935fd3"},
{file = "coverage-7.13.4-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:3c06f0f1337c667b971ca2f975523347e63ec5e500b9aa5882d91931cd3ef750"},
{file = "coverage-7.13.4-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:590c0ed4bf8e85f745e6b805b2e1c457b2e33d5255dd9729743165253bc9ad39"},
{file = "coverage-7.13.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:eb30bf180de3f632cd043322dad5751390e5385108b2807368997d1a92a509d0"},
{file = "coverage-7.13.4-cp314-cp314t-win32.whl", hash = "sha256:c4240e7eded42d131a2d2c4dec70374b781b043ddc79a9de4d55ca71f8e98aea"},
{file = "coverage-7.13.4-cp314-cp314t-win_amd64.whl", hash = "sha256:4c7d3cc01e7350f2f0f6f7036caaf5673fb56b6998889ccfe9e1c1fe75a9c932"},
{file = "coverage-7.13.4-cp314-cp314t-win_arm64.whl", hash = "sha256:23e3f687cf945070d1c90f85db66d11e3025665d8dafa831301a0e0038f3db9b"},
{file = "coverage-7.13.4-py3-none-any.whl", hash = "sha256:1af1641e57cf7ba1bd67d677c9abdbcd6cc2ab7da3bca7fa1e2b7e50e65f2ad0"},
{file = "coverage-7.13.4.tar.gz", hash = "sha256:e5c8f6ed1e61a8b2dcdf31eb0b9bbf0130750ca79c1c49eb898e2ad86f5ccc91"},
] ]
[package.dependencies] [package.dependencies]
@@ -541,7 +523,7 @@ description = "Backport of PEP 654 (exception groups)"
 optional = false
 python-versions = ">=3.7"
 groups = ["main", "dev"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
 files = [
     {file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"},
     {file = "exceptiongroup-1.3.0.tar.gz", hash = "sha256:b241f5885f560bc56a59ee63ca4c6a8bfa46ae4ad651af316d4e81817bb9fd88"},
@@ -2020,14 +2002,14 @@ files = [
 [[package]]
 name = "pyright"
-version = "1.1.408"
+version = "1.1.404"
 description = "Command line wrapper for pyright"
 optional = false
 python-versions = ">=3.7"
 groups = ["dev"]
 files = [
-    {file = "pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1"},
-    {file = "pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684"},
+    {file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
+    {file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
 ]
 
 [package.dependencies]
@@ -2159,20 +2141,19 @@ dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests
 [[package]]
 name = "pytest-asyncio"
-version = "1.3.0"
+version = "1.1.0"
 description = "Pytest support for asyncio"
 optional = false
-python-versions = ">=3.10"
+python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-    {file = "pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5"},
-    {file = "pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5"},
+    {file = "pytest_asyncio-1.1.0-py3-none-any.whl", hash = "sha256:5fe2d69607b0bd75c656d1211f969cadba035030156745ee09e7d71740e58ecf"},
+    {file = "pytest_asyncio-1.1.0.tar.gz", hash = "sha256:796aa822981e01b68c12e4827b8697108f7205020f24b5793b3c41555dab68ea"},
 ]
 
 [package.dependencies]
 backports-asyncio-runner = {version = ">=1.1,<2", markers = "python_version < \"3.11\""}
-pytest = ">=8.2,<10"
-typing-extensions = {version = ">=4.12", markers = "python_version < \"3.13\""}
+pytest = ">=8.2,<9"
 
 [package.extras]
 docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"]
@@ -2180,34 +2161,34 @@ testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
 [[package]]
 name = "pytest-cov"
-version = "7.0.0"
+version = "6.2.1"
 description = "Pytest plugin for measuring coverage."
 optional = false
 python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-    {file = "pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861"},
-    {file = "pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1"},
+    {file = "pytest_cov-6.2.1-py3-none-any.whl", hash = "sha256:f5bc4c23f42f1cdd23c70b1dab1bbaef4fc505ba950d53e0081d0730dd7e86d5"},
+    {file = "pytest_cov-6.2.1.tar.gz", hash = "sha256:25cc6cc0a5358204b8108ecedc51a9b57b34cc6b8c967cc2c01a4e00d8a67da2"},
 ]
 
 [package.dependencies]
-coverage = {version = ">=7.10.6", extras = ["toml"]}
+coverage = {version = ">=7.5", extras = ["toml"]}
 pluggy = ">=1.2"
-pytest = ">=7"
+pytest = ">=6.2.5"
 
 [package.extras]
-testing = ["process-tests", "pytest-xdist", "virtualenv"]
+testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
 
 [[package]]
 name = "pytest-mock"
-version = "3.15.1"
+version = "3.14.1"
 description = "Thin-wrapper around the mock package for easier use with pytest"
 optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
 groups = ["dev"]
 files = [
-    {file = "pytest_mock-3.15.1-py3-none-any.whl", hash = "sha256:0a25e2eb88fe5168d535041d09a4529a188176ae608a6d249ee65abc0949630d"},
-    {file = "pytest_mock-3.15.1.tar.gz", hash = "sha256:1849a238f6f396da19762269de72cb1814ab44416fa73a8686deac10b0d87a0f"},
+    {file = "pytest_mock-3.14.1-py3-none-any.whl", hash = "sha256:178aefcd11307d874b4cd3100344e7e2d888d9791a6a1d9bfe90fbc1b74fd1d0"},
+    {file = "pytest_mock-3.14.1.tar.gz", hash = "sha256:159e9edac4c451ce77a5cdb9fc5d1100708d2dd4ba3c3df572f14097351af80e"},
 ]
 
 [package.dependencies]
@@ -2341,30 +2322,31 @@ pyasn1 = ">=0.1.3"
 [[package]]
 name = "ruff"
-version = "0.15.0"
+version = "0.12.11"
 description = "An extremely fast Python linter and code formatter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 groups = ["dev"]
 files = [
-    {file = "ruff-0.15.0-py3-none-linux_armv6l.whl", hash = "sha256:aac4ebaa612a82b23d45964586f24ae9bc23ca101919f5590bdb368d74ad5455"},
-    {file = "ruff-0.15.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:dcd4be7cc75cfbbca24a98d04d0b9b36a270d0833241f776b788d59f4142b14d"},
-    {file = "ruff-0.15.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d747e3319b2bce179c7c1eaad3d884dc0a199b5f4d5187620530adf9105268ce"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:650bd9c56ae03102c51a5e4b554d74d825ff3abe4db22b90fd32d816c2e90621"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a6664b7eac559e3048223a2da77769c2f92b43a6dfd4720cef42654299a599c9"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f811f97b0f092b35320d1556f3353bf238763420ade5d9e62ebd2b73f2ff179"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:761ec0a66680fab6454236635a39abaf14198818c8cdf691e036f4bc0f406b2d"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:940f11c2604d317e797b289f4f9f3fa5555ffe4fb574b55ed006c3d9b6f0eb78"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bcbca3d40558789126da91d7ef9a7c87772ee107033db7191edefa34e2c7f1b4"},
-    {file = "ruff-0.15.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:9a121a96db1d75fa3eb39c4539e607f628920dd72ff1f7c5ee4f1b768ac62d6e"},
-    {file = "ruff-0.15.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5298d518e493061f2eabd4abd067c7e4fb89e2f63291c94332e35631c07c3662"},
-    {file = "ruff-0.15.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afb6e603d6375ff0d6b0cee563fa21ab570fd15e65c852cb24922cef25050cf1"},
-    {file = "ruff-0.15.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:77e515f6b15f828b94dc17d2b4ace334c9ddb7d9468c54b2f9ed2b9c1593ef16"},
-    {file = "ruff-0.15.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:6f6e80850a01eb13b3e42ee0ebdf6e4497151b48c35051aab51c101266d187a3"},
-    {file = "ruff-0.15.0-py3-none-win32.whl", hash = "sha256:238a717ef803e501b6d51e0bdd0d2c6e8513fe9eec14002445134d3907cd46c3"},
-    {file = "ruff-0.15.0-py3-none-win_amd64.whl", hash = "sha256:dd5e4d3301dc01de614da3cdffc33d4b1b96fb89e45721f1598e5532ccf78b18"},
-    {file = "ruff-0.15.0-py3-none-win_arm64.whl", hash = "sha256:c480d632cc0ca3f0727acac8b7d053542d9e114a462a145d0b00e7cd658c515a"},
-    {file = "ruff-0.15.0.tar.gz", hash = "sha256:6bdea47cdbea30d40f8f8d7d69c0854ba7c15420ec75a26f463290949d7f7e9a"},
+    {file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
+    {file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
+    {file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
+    {file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
+    {file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
+    {file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
+    {file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
+    {file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
+    {file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
+    {file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
+    {file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
+    {file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
 ]
 
 [[package]]
@@ -2563,7 +2545,7 @@ description = "A lil' TOML parser"
 optional = false
 python-versions = ">=3.8"
 groups = ["dev"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
 files = [
     {file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
     {file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
@@ -2911,4 +2893,4 @@ type = ["pytest-mypy"]
 [metadata]
 lock-version = "2.1"
 python-versions = ">=3.10,<4.0"
-content-hash = "40eae94995dc0a388fa832ed4af9b6137f28d5b5ced3aaea70d5f91d4d9a179d"
+content-hash = "cc80d3a129b84435a0f40132d073caa37858ca2427ed372fecfd810a61712d0c"

View File

@@ -22,12 +22,12 @@ supabase = "^2.27.2"
uvicorn = "^0.40.0" uvicorn = "^0.40.0"
[tool.poetry.group.dev.dependencies] [tool.poetry.group.dev.dependencies]
pyright = "^1.1.408" pyright = "^1.1.404"
pytest = "^8.4.1" pytest = "^8.4.1"
pytest-asyncio = "^1.3.0" pytest-asyncio = "^1.1.0"
pytest-mock = "^3.15.1" pytest-mock = "^3.14.1"
pytest-cov = "^7.0.0" pytest-cov = "^6.2.1"
ruff = "^0.15.0" ruff = "^0.12.11"
[build-system] [build-system]
requires = ["poetry-core"] requires = ["poetry-core"]

View File

@@ -0,0 +1,251 @@
import logging

import autogpt_libs.auth
import fastapi
import fastapi.responses

import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model

logger = logging.getLogger(__name__)

router = fastapi.APIRouter(
    prefix="/admin/waitlist",
    tags=["store", "admin", "waitlist"],
    dependencies=[fastapi.Security(autogpt_libs.auth.requires_admin_user)],
)


@router.post(
    "",
    summary="Create Waitlist",
    response_model=store_model.WaitlistAdminResponse,
)
async def create_waitlist(
    request: store_model.WaitlistCreateRequest,
    user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
    """
    Create a new waitlist (admin only).

    Args:
        request: Waitlist creation details
        user_id: Authenticated admin user creating the waitlist

    Returns:
        WaitlistAdminResponse with the created waitlist details
    """
    try:
        waitlist = await store_db.create_waitlist_admin(
            admin_user_id=user_id,
            data=request,
        )
        return waitlist
    except Exception as e:
        logger.exception("Error creating waitlist: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while creating the waitlist"},
        )


@router.get(
    "",
    summary="List All Waitlists",
    response_model=store_model.WaitlistAdminListResponse,
)
async def list_waitlists():
    """
    Get all waitlists with admin details (admin only).

    Returns:
        WaitlistAdminListResponse with all waitlists
    """
    try:
        return await store_db.get_waitlists_admin()
    except Exception as e:
        logger.exception("Error listing waitlists: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while fetching waitlists"},
        )


@router.get(
    "/{waitlist_id}",
    summary="Get Waitlist Details",
    response_model=store_model.WaitlistAdminResponse,
)
async def get_waitlist(
    waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist"),
):
    """
    Get a single waitlist with admin details (admin only).

    Args:
        waitlist_id: ID of the waitlist to retrieve

    Returns:
        WaitlistAdminResponse with waitlist details
    """
    try:
        return await store_db.get_waitlist_admin(waitlist_id)
    except ValueError:
        logger.warning("Waitlist not found: %s", waitlist_id)
        return fastapi.responses.JSONResponse(
            status_code=404,
            content={"detail": "Waitlist not found"},
        )
    except Exception as e:
        logger.exception("Error fetching waitlist: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while fetching the waitlist"},
        )


@router.put(
    "/{waitlist_id}",
    summary="Update Waitlist",
    response_model=store_model.WaitlistAdminResponse,
)
async def update_waitlist(
    request: store_model.WaitlistUpdateRequest,
    waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist"),
):
    """
    Update a waitlist (admin only).

    Args:
        waitlist_id: ID of the waitlist to update
        request: Fields to update

    Returns:
        WaitlistAdminResponse with updated waitlist details
    """
    try:
        return await store_db.update_waitlist_admin(waitlist_id, request)
    except ValueError:
        logger.warning("Waitlist not found for update: %s", waitlist_id)
        return fastapi.responses.JSONResponse(
            status_code=404,
            content={"detail": "Waitlist not found"},
        )
    except Exception as e:
        logger.exception("Error updating waitlist: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while updating the waitlist"},
        )


@router.delete(
    "/{waitlist_id}",
    summary="Delete Waitlist",
)
async def delete_waitlist(
    waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist"),
):
    """
    Soft delete a waitlist (admin only).

    Args:
        waitlist_id: ID of the waitlist to delete

    Returns:
        Success message
    """
    try:
        await store_db.delete_waitlist_admin(waitlist_id)
        return {"message": "Waitlist deleted successfully"}
    except ValueError:
        logger.warning("Waitlist not found for deletion: %s", waitlist_id)
        return fastapi.responses.JSONResponse(
            status_code=404,
            content={"detail": "Waitlist not found"},
        )
    except Exception as e:
        logger.exception("Error deleting waitlist: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while deleting the waitlist"},
        )


@router.get(
    "/{waitlist_id}/signups",
    summary="Get Waitlist Signups",
    response_model=store_model.WaitlistSignupListResponse,
)
async def get_waitlist_signups(
    waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist"),
):
    """
    Get all signups for a waitlist (admin only).

    Args:
        waitlist_id: ID of the waitlist

    Returns:
        WaitlistSignupListResponse with all signups
    """
    try:
        return await store_db.get_waitlist_signups_admin(waitlist_id)
    except ValueError:
        logger.warning("Waitlist not found for signups: %s", waitlist_id)
        return fastapi.responses.JSONResponse(
            status_code=404,
            content={"detail": "Waitlist not found"},
        )
    except Exception as e:
        logger.exception("Error fetching waitlist signups: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while fetching waitlist signups"},
        )


@router.post(
    "/{waitlist_id}/link",
    summary="Link Waitlist to Store Listing",
    response_model=store_model.WaitlistAdminResponse,
)
async def link_waitlist_to_listing(
    waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist"),
    store_listing_id: str = fastapi.Body(
        ..., embed=True, description="The ID of the store listing"
    ),
):
    """
    Link a waitlist to a store listing (admin only).

    When the linked store listing is approved/published, waitlist users
    will be automatically notified.

    Args:
        waitlist_id: ID of the waitlist
        store_listing_id: ID of the store listing to link

    Returns:
        WaitlistAdminResponse with updated waitlist details
    """
    try:
        return await store_db.link_waitlist_to_listing_admin(
            waitlist_id, store_listing_id
        )
    except ValueError:
        logger.warning(
            "Link failed - waitlist or listing not found: %s, %s",
            waitlist_id,
            store_listing_id,
        )
        return fastapi.responses.JSONResponse(
            status_code=404,
            content={"detail": "Waitlist or store listing not found"},
        )
    except Exception as e:
        logger.exception("Error linking waitlist to listing: %s", e)
        return fastapi.responses.JSONResponse(
            status_code=500,
            content={"detail": "An error occurred while linking the waitlist"},
        )
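
Reviewer note: for poking at these new admin routes locally, a minimal smoke script along the following lines should work. Everything outside the route paths and response bodies shown above - the base URL, the token, the create payload, and the "id" field name - is an assumption for illustration, not something this diff defines.

import httpx

# Hypothetical smoke test for the admin waitlist API (paths match the router
# above; base URL, token, payload fields, and the "id" key are assumptions).
BASE = "http://localhost:8006/admin/waitlist"
HEADERS = {"Authorization": "Bearer <ADMIN_TOKEN>"}  # must map to an admin user

with httpx.Client(headers=HEADERS) as client:
    created = client.post(BASE, json={"name": "My Agent Waitlist"}).json()
    waitlist_id = created["id"]

    # Fetch it back and list signups (empty for a fresh waitlist)
    assert client.get(f"{BASE}/{waitlist_id}").status_code == 200
    print(client.get(f"{BASE}/{waitlist_id}/signups").json())

    # Body(..., embed=True) in the handler means the listing id is wrapped
    # in a one-key JSON object rather than sent as a bare string
    resp = client.post(
        f"{BASE}/{waitlist_id}/link",
        json={"store_listing_id": "some-listing-id"},
    )
    print(resp.status_code, resp.json())  # 404 JSON body if either id is unknown

    # Soft delete returns the success message defined above
    assert client.delete(f"{BASE}/{waitlist_id}").json() == {
        "message": "Waitlist deleted successfully"
    }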

View File

@@ -93,12 +93,6 @@ class ChatConfig(BaseSettings):
description="Name of the prompt in Langfuse to fetch", description="Name of the prompt in Langfuse to fetch",
) )
# Extended thinking configuration for Claude models
thinking_enabled: bool = Field(
default=True,
description="Enable adaptive thinking for Claude models via OpenRouter",
)
@field_validator("api_key", mode="before") @field_validator("api_key", mode="before")
@classmethod @classmethod
def get_api_key(cls, v): def get_api_key(cls, v):
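
An aside on the removed flag: ChatConfig is a pydantic-settings BaseSettings class, so a field like this is normally an environment-driven toggle. A minimal sketch of that pattern under default pydantic-settings behavior (the class name and the absence of an env prefix are illustrative assumptions, not taken from the repository):

from pydantic import Field
from pydantic_settings import BaseSettings

class ExampleConfig(BaseSettings):
    # Same shape as the removed field
    thinking_enabled: bool = Field(
        default=True,
        description="Enable adaptive thinking for Claude models via OpenRouter",
    )

# With default settings, THINKING_ENABLED=false in the environment would
# override the default; deleting the field, as this diff does, removes the knob.
print(ExampleConfig().thinking_enabled)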

View File

@@ -45,7 +45,10 @@ async def create_chat_session(
         successfulAgentRuns=SafeJson({}),
         successfulAgentSchedules=SafeJson({}),
     )
-    return await PrismaChatSession.prisma().create(data=data)
+    return await PrismaChatSession.prisma().create(
+        data=data,
+        include={"Messages": True},
+    )
 
 
 async def update_chat_session(
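
A note on the hunk above: with Prisma Client Python, relation fields are only populated when the query explicitly asks for them, so adding include={"Messages": True} is what makes the returned session carry its (initially empty) message list instead of an unloaded relation. A minimal sketch of the difference - model and field names follow the hunk, the rest is illustrative:

# Without include, the relation is never loaded and reads as None
session = await PrismaChatSession.prisma().create(data=data)
print(session.Messages)  # None

# With include, the relation is fetched - an empty list for a new session
session = await PrismaChatSession.prisma().create(
    data=data,
    include={"Messages": True},
)
print(session.Messages)  # []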

View File

@@ -2,7 +2,7 @@ import asyncio
 import logging
 import uuid
 from datetime import UTC, datetime
-from typing import Any, cast
+from typing import Any
 from weakref import WeakValueDictionary
 
 from openai.types.chat import (
@@ -104,26 +104,6 @@ class ChatSession(BaseModel):
     successful_agent_runs: dict[str, int] = {}
     successful_agent_schedules: dict[str, int] = {}
 
-    def add_tool_call_to_current_turn(self, tool_call: dict) -> None:
-        """Attach a tool_call to the current turn's assistant message.
-
-        Searches backwards for the most recent assistant message (stopping at
-        any user message boundary). If found, appends the tool_call to it.
-        Otherwise creates a new assistant message with the tool_call.
-        """
-        for msg in reversed(self.messages):
-            if msg.role == "user":
-                break
-            if msg.role == "assistant":
-                if not msg.tool_calls:
-                    msg.tool_calls = []
-                msg.tool_calls.append(tool_call)
-                return
-        self.messages.append(
-            ChatMessage(role="assistant", content="", tool_calls=[tool_call])
-        )
-
     @staticmethod
     def new(user_id: str) -> "ChatSession":
         return ChatSession(
@@ -192,47 +172,6 @@ class ChatSession(BaseModel):
             successful_agent_schedules=successful_agent_schedules,
         )
 
-    @staticmethod
-    def _merge_consecutive_assistant_messages(
-        messages: list[ChatCompletionMessageParam],
-    ) -> list[ChatCompletionMessageParam]:
-        """Merge consecutive assistant messages into single messages.
-
-        Long-running tool flows can create split assistant messages: one with
-        text content and another with tool_calls. Anthropic's API requires
-        tool_result blocks to reference a tool_use in the immediately preceding
-        assistant message, so these splits cause 400 errors via OpenRouter.
-        """
-        if len(messages) < 2:
-            return messages
-
-        result: list[ChatCompletionMessageParam] = [messages[0]]
-        for msg in messages[1:]:
-            prev = result[-1]
-            if prev.get("role") != "assistant" or msg.get("role") != "assistant":
-                result.append(msg)
-                continue
-
-            prev = cast(ChatCompletionAssistantMessageParam, prev)
-            curr = cast(ChatCompletionAssistantMessageParam, msg)
-
-            curr_content = curr.get("content") or ""
-            if curr_content:
-                prev_content = prev.get("content") or ""
-                prev["content"] = (
-                    f"{prev_content}\n{curr_content}" if prev_content else curr_content
-                )
-
-            curr_tool_calls = curr.get("tool_calls")
-            if curr_tool_calls:
-                prev_tool_calls = prev.get("tool_calls")
-                prev["tool_calls"] = (
-                    list(prev_tool_calls) + list(curr_tool_calls)
-                    if prev_tool_calls
-                    else list(curr_tool_calls)
-                )
-
-        return result
-
     def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
         messages = []
         for message in self.messages:
@@ -319,7 +258,7 @@ class ChatSession(BaseModel):
                     name=message.name or "",
                 )
             )
-        return self._merge_consecutive_assistant_messages(messages)
+        return messages
 
 
 async def _get_session_from_cache(session_id: str) -> ChatSession | None:

View File

@@ -1,16 +1,4 @@
-from typing import cast
-
 import pytest
-from openai.types.chat import (
-    ChatCompletionAssistantMessageParam,
-    ChatCompletionMessageParam,
-    ChatCompletionToolMessageParam,
-    ChatCompletionUserMessageParam,
-)
-from openai.types.chat.chat_completion_message_tool_call_param import (
-    ChatCompletionMessageToolCallParam,
-    Function,
-)
 
 from .model import (
     ChatMessage,
@@ -129,205 +117,3 @@ async def test_chatsession_db_storage(setup_test_user, test_user_id):
             loaded.tool_calls is not None
         ), f"Tool calls missing for {orig.role} message"
         assert len(orig.tool_calls) == len(loaded.tool_calls)
-
-
-# --------------------------------------------------------------------------- #
-#  _merge_consecutive_assistant_messages                                       #
-# --------------------------------------------------------------------------- #
-
-_tc = ChatCompletionMessageToolCallParam(
-    id="tc1", type="function", function=Function(name="do_stuff", arguments="{}")
-)
-_tc2 = ChatCompletionMessageToolCallParam(
-    id="tc2", type="function", function=Function(name="other", arguments="{}")
-)
-
-
-def test_merge_noop_when_no_consecutive_assistants():
-    """Messages without consecutive assistants are returned unchanged."""
-    msgs = [
-        ChatCompletionUserMessageParam(role="user", content="hi"),
-        ChatCompletionAssistantMessageParam(role="assistant", content="hello"),
-        ChatCompletionUserMessageParam(role="user", content="bye"),
-    ]
-    merged = ChatSession._merge_consecutive_assistant_messages(msgs)
-    assert len(merged) == 3
-    assert [m["role"] for m in merged] == ["user", "assistant", "user"]
-
-
-def test_merge_splits_text_and_tool_calls():
-    """The exact bug scenario: text-only assistant followed by tool_calls-only assistant."""
-    msgs = [
-        ChatCompletionUserMessageParam(role="user", content="build agent"),
-        ChatCompletionAssistantMessageParam(
-            role="assistant", content="Let me build that"
-        ),
-        ChatCompletionAssistantMessageParam(
-            role="assistant", content="", tool_calls=[_tc]
-        ),
-        ChatCompletionToolMessageParam(role="tool", content="ok", tool_call_id="tc1"),
-    ]
-    merged = ChatSession._merge_consecutive_assistant_messages(msgs)
-    assert len(merged) == 3
-    assert merged[0]["role"] == "user"
-    assert merged[2]["role"] == "tool"
-    a = cast(ChatCompletionAssistantMessageParam, merged[1])
-    assert a["role"] == "assistant"
-    assert a.get("content") == "Let me build that"
-    assert a.get("tool_calls") == [_tc]
-
-
-def test_merge_combines_tool_calls_from_both():
-    """Both consecutive assistants have tool_calls — they get merged."""
-    msgs: list[ChatCompletionAssistantMessageParam] = [
-        ChatCompletionAssistantMessageParam(
-            role="assistant", content="text", tool_calls=[_tc]
-        ),
-        ChatCompletionAssistantMessageParam(
-            role="assistant", content="", tool_calls=[_tc2]
-        ),
-    ]
-    merged = ChatSession._merge_consecutive_assistant_messages(msgs)  # type: ignore[arg-type]
-    assert len(merged) == 1
-    a = cast(ChatCompletionAssistantMessageParam, merged[0])
-    assert a.get("tool_calls") == [_tc, _tc2]
-    assert a.get("content") == "text"
-
-
-def test_merge_three_consecutive_assistants():
-    """Three consecutive assistants collapse into one."""
-    msgs: list[ChatCompletionAssistantMessageParam] = [
-        ChatCompletionAssistantMessageParam(role="assistant", content="a"),
-        ChatCompletionAssistantMessageParam(role="assistant", content="b"),
-        ChatCompletionAssistantMessageParam(
-            role="assistant", content="", tool_calls=[_tc]
-        ),
-    ]
-    merged = ChatSession._merge_consecutive_assistant_messages(msgs)  # type: ignore[arg-type]
-    assert len(merged) == 1
-    a = cast(ChatCompletionAssistantMessageParam, merged[0])
-    assert a.get("content") == "a\nb"
-    assert a.get("tool_calls") == [_tc]
-
-
-def test_merge_empty_and_single_message():
-    """Edge cases: empty list and single message."""
-    assert ChatSession._merge_consecutive_assistant_messages([]) == []
-    single: list[ChatCompletionMessageParam] = [
-        ChatCompletionUserMessageParam(role="user", content="hi")
-    ]
-    assert ChatSession._merge_consecutive_assistant_messages(single) == single
-
-
-# --------------------------------------------------------------------------- #
-#  add_tool_call_to_current_turn                                               #
-# --------------------------------------------------------------------------- #
-
-_raw_tc = {
-    "id": "tc1",
-    "type": "function",
-    "function": {"name": "f", "arguments": "{}"},
-}
-_raw_tc2 = {
-    "id": "tc2",
-    "type": "function",
-    "function": {"name": "g", "arguments": "{}"},
-}
-
-
-def test_add_tool_call_appends_to_existing_assistant():
-    """When the last assistant is from the current turn, tool_call is added to it."""
-    session = ChatSession.new(user_id="u")
-    session.messages = [
-        ChatMessage(role="user", content="hi"),
-        ChatMessage(role="assistant", content="working on it"),
-    ]
-    session.add_tool_call_to_current_turn(_raw_tc)
-    assert len(session.messages) == 2  # no new message created
-    assert session.messages[1].tool_calls == [_raw_tc]
-
-
-def test_add_tool_call_creates_assistant_when_none_exists():
-    """When there's no current-turn assistant, a new one is created."""
-    session = ChatSession.new(user_id="u")
-    session.messages = [
-        ChatMessage(role="user", content="hi"),
-    ]
-    session.add_tool_call_to_current_turn(_raw_tc)
-    assert len(session.messages) == 2
-    assert session.messages[1].role == "assistant"
-    assert session.messages[1].tool_calls == [_raw_tc]
-
-
-def test_add_tool_call_does_not_cross_user_boundary():
-    """A user message acts as a boundary — previous assistant is not modified."""
-    session = ChatSession.new(user_id="u")
-    session.messages = [
-        ChatMessage(role="assistant", content="old turn"),
-        ChatMessage(role="user", content="new message"),
-    ]
-    session.add_tool_call_to_current_turn(_raw_tc)
-    assert len(session.messages) == 3  # new assistant was created
-    assert session.messages[0].tool_calls is None  # old assistant untouched
-    assert session.messages[2].role == "assistant"
-    assert session.messages[2].tool_calls == [_raw_tc]
-
-
-def test_add_tool_call_multiple_times():
-    """Multiple long-running tool calls accumulate on the same assistant."""
-    session = ChatSession.new(user_id="u")
-    session.messages = [
-        ChatMessage(role="user", content="hi"),
-        ChatMessage(role="assistant", content="doing stuff"),
-    ]
-    session.add_tool_call_to_current_turn(_raw_tc)
-    # Simulate a pending tool result in between (like _yield_tool_call does)
-    session.messages.append(
-        ChatMessage(role="tool", content="pending", tool_call_id="tc1")
-    )
-    session.add_tool_call_to_current_turn(_raw_tc2)
-    assert len(session.messages) == 3  # user, assistant, tool — no extra assistant
-    assert session.messages[1].tool_calls == [_raw_tc, _raw_tc2]
-
-
-def test_to_openai_messages_merges_split_assistants():
-    """End-to-end: session with split assistants produces valid OpenAI messages."""
-    session = ChatSession.new(user_id="u")
-    session.messages = [
-        ChatMessage(role="user", content="build agent"),
-        ChatMessage(role="assistant", content="Let me build that"),
-        ChatMessage(
-            role="assistant",
-            content="",
-            tool_calls=[
-                {
-                    "id": "tc1",
-                    "type": "function",
-                    "function": {"name": "create_agent", "arguments": "{}"},
-                }
-            ],
-        ),
-        ChatMessage(role="tool", content="done", tool_call_id="tc1"),
-        ChatMessage(role="assistant", content="Saved!"),
-        ChatMessage(role="user", content="show me an example run"),
-    ]
-    openai_msgs = session.to_openai_messages()
-    # The two consecutive assistants at index 1,2 should be merged
-    roles = [m["role"] for m in openai_msgs]
-    assert roles == ["user", "assistant", "tool", "assistant", "user"]
-    # The merged assistant should have both content and tool_calls
-    merged = cast(ChatCompletionAssistantMessageParam, openai_msgs[1])
-    assert merged.get("content") == "Let me build that"
-    tc_list = merged.get("tool_calls")
-    assert tc_list is not None and len(list(tc_list)) == 1
-    assert list(tc_list)[0]["id"] == "tc1"

View File

@@ -10,8 +10,6 @@ from typing import Any
 from pydantic import BaseModel, Field
 
-from backend.util.json import dumps as json_dumps
-
 
 class ResponseType(str, Enum):
     """Types of streaming responses following AI SDK protocol."""
@@ -20,10 +18,6 @@ class ResponseType(str, Enum):
START = "start" START = "start"
FINISH = "finish" FINISH = "finish"
# Step lifecycle (one LLM API call within a message)
START_STEP = "start-step"
FINISH_STEP = "finish-step"
# Text streaming # Text streaming
TEXT_START = "text-start" TEXT_START = "text-start"
TEXT_DELTA = "text-delta" TEXT_DELTA = "text-delta"
@@ -63,16 +57,6 @@ class StreamStart(StreamBaseResponse):
description="Task ID for SSE reconnection. Clients can reconnect using GET /tasks/{taskId}/stream", description="Task ID for SSE reconnection. Clients can reconnect using GET /tasks/{taskId}/stream",
) )
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-protocol fields like taskId."""
import json
data: dict[str, Any] = {
"type": self.type.value,
"messageId": self.messageId,
}
return f"data: {json.dumps(data)}\n\n"
class StreamFinish(StreamBaseResponse): class StreamFinish(StreamBaseResponse):
"""End of message/stream.""" """End of message/stream."""
@@ -80,26 +64,6 @@ class StreamFinish(StreamBaseResponse):
     type: ResponseType = ResponseType.FINISH
 
-
-class StreamStartStep(StreamBaseResponse):
-    """Start of a step (one LLM API call within a message).
-
-    The AI SDK uses this to add a step-start boundary to message.parts,
-    enabling visual separation between multiple LLM calls in a single message.
-    """
-
-    type: ResponseType = ResponseType.START_STEP
-
-
-class StreamFinishStep(StreamBaseResponse):
-    """End of a step (one LLM API call within a message).
-
-    The AI SDK uses this to reset activeTextParts and activeReasoningParts,
-    so the next LLM call in a tool-call continuation starts with clean state.
-    """
-
-    type: ResponseType = ResponseType.FINISH_STEP
-
 
 # ========== Text Streaming ==========
@@ -153,7 +117,7 @@ class StreamToolOutputAvailable(StreamBaseResponse):
     type: ResponseType = ResponseType.TOOL_OUTPUT_AVAILABLE
     toolCallId: str = Field(..., description="Tool call ID this responds to")
     output: str | dict[str, Any] = Field(..., description="Tool execution output")
-    # Keep these for internal backend use
+    # Additional fields for internal use (not part of AI SDK spec but useful)
     toolName: str | None = Field(
         default=None, description="Name of the tool that was executed"
     )
@@ -161,17 +125,6 @@ class StreamToolOutputAvailable(StreamBaseResponse):
         default=True, description="Whether the tool execution succeeded"
     )
 
-    def to_sse(self) -> str:
-        """Convert to SSE format, excluding non-spec fields."""
-        import json
-
-        data = {
-            "type": self.type.value,
-            "toolCallId": self.toolCallId,
-            "output": self.output,
-        }
-        return f"data: {json.dumps(data)}\n\n"
-
 
 # ========== Other ==========
@@ -195,18 +148,6 @@ class StreamError(StreamBaseResponse):
         default=None, description="Additional error details"
     )
 
-    def to_sse(self) -> str:
-        """Convert to SSE format, only emitting fields required by AI SDK protocol.
-
-        The AI SDK uses z.strictObject({type, errorText}) which rejects
-        any extra fields like `code` or `details`.
-        """
-        data = {
-            "type": self.type.value,
-            "errorText": self.errorText,
-        }
-        return f"data: {json_dumps(data)}\n\n"
-
 
 class StreamHeartbeat(StreamBaseResponse):
     """Heartbeat to keep SSE connection alive during long-running operations.

View File

@@ -6,7 +6,7 @@ from collections.abc import AsyncGenerator
 from typing import Annotated
 
 from autogpt_libs import auth
-from fastapi import APIRouter, Depends, Header, HTTPException, Query, Response, Security
+from fastapi import APIRouter, Depends, Header, HTTPException, Query, Security
 from fastapi.responses import StreamingResponse
 from pydantic import BaseModel
@@ -17,29 +17,7 @@ from . import stream_registry
 from .completion_handler import process_operation_failure, process_operation_success
 from .config import ChatConfig
 from .model import ChatSession, create_chat_session, get_chat_session, get_user_sessions
-from .response_model import StreamFinish, StreamHeartbeat
-from .tools.models import (
-    AgentDetailsResponse,
-    AgentOutputResponse,
-    AgentPreviewResponse,
-    AgentSavedResponse,
-    AgentsFoundResponse,
-    BlockListResponse,
-    BlockOutputResponse,
-    ClarificationNeededResponse,
-    DocPageResponse,
-    DocSearchResultsResponse,
-    ErrorResponse,
-    ExecutionStartedResponse,
-    InputValidationErrorResponse,
-    NeedLoginResponse,
-    NoResultsResponse,
-    OperationInProgressResponse,
-    OperationPendingResponse,
-    OperationStartedResponse,
-    SetupRequirementsResponse,
-    UnderstandingUpdatedResponse,
-)
+from .response_model import StreamFinish, StreamHeartbeat, StreamStart
 
 config = ChatConfig()
@@ -288,36 +266,12 @@ async def stream_chat_post(
""" """
import asyncio import asyncio
import time
stream_start_time = time.perf_counter()
log_meta = {"component": "ChatStream", "session_id": session_id}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] stream_chat_post STARTED, session={session_id}, "
f"user={user_id}, message_len={len(request.message)}",
extra={"json_fields": log_meta},
)
session = await _validate_and_get_session(session_id, user_id) session = await _validate_and_get_session(session_id, user_id)
logger.info(
f"[TIMING] session validated in {(time.perf_counter() - stream_start_time)*1000:.1f}ms",
extra={
"json_fields": {
**log_meta,
"duration_ms": (time.perf_counter() - stream_start_time) * 1000,
}
},
)
# Create a task in the stream registry for reconnection support # Create a task in the stream registry for reconnection support
task_id = str(uuid_module.uuid4()) task_id = str(uuid_module.uuid4())
operation_id = str(uuid_module.uuid4()) operation_id = str(uuid_module.uuid4())
log_meta["task_id"] = task_id
task_create_start = time.perf_counter()
await stream_registry.create_task( await stream_registry.create_task(
task_id=task_id, task_id=task_id,
session_id=session_id, session_id=session_id,
@@ -326,28 +280,14 @@ async def stream_chat_post(
tool_name="chat", tool_name="chat",
operation_id=operation_id, operation_id=operation_id,
) )
logger.info(
f"[TIMING] create_task completed in {(time.perf_counter() - task_create_start)*1000:.1f}ms",
extra={
"json_fields": {
**log_meta,
"duration_ms": (time.perf_counter() - task_create_start) * 1000,
}
},
)
# Background task that runs the AI generation independently of SSE connection # Background task that runs the AI generation independently of SSE connection
async def run_ai_generation(): async def run_ai_generation():
import time as time_module
gen_start_time = time_module.perf_counter()
logger.info(
f"[TIMING] run_ai_generation STARTED, task={task_id}, session={session_id}, user={user_id}",
extra={"json_fields": log_meta},
)
first_chunk_time, ttfc = None, None
chunk_count = 0
try: try:
# Emit a start event with task_id for reconnection
start_chunk = StreamStart(messageId=task_id, taskId=task_id)
await stream_registry.publish_chunk(task_id, start_chunk)
async for chunk in chat_service.stream_chat_completion( async for chunk in chat_service.stream_chat_completion(
session_id, session_id,
request.message, request.message,
@@ -355,79 +295,25 @@ async def stream_chat_post(
                 user_id=user_id,
                 session=session,  # Pass pre-fetched session to avoid double-fetch
                 context=request.context,
-                _task_id=task_id,  # Pass task_id so service emits start with taskId for reconnection
             ):
-                chunk_count += 1
-                if first_chunk_time is None:
-                    first_chunk_time = time_module.perf_counter()
-                    ttfc = first_chunk_time - gen_start_time
-                    logger.info(
-                        f"[TIMING] FIRST AI CHUNK at {ttfc:.2f}s, type={type(chunk).__name__}",
-                        extra={
-                            "json_fields": {
-                                **log_meta,
-                                "chunk_type": type(chunk).__name__,
-                                "time_to_first_chunk_ms": ttfc * 1000,
-                            }
-                        },
-                    )
                 # Write to Redis (subscribers will receive via XREAD)
                 await stream_registry.publish_chunk(task_id, chunk)
 
-            gen_end_time = time_module.perf_counter()
-            total_time = (gen_end_time - gen_start_time) * 1000
-            logger.info(
-                f"[TIMING] run_ai_generation FINISHED in {total_time/1000:.1f}s; "
-                f"task={task_id}, session={session_id}, "
-                f"ttfc={ttfc or -1:.2f}s, n_chunks={chunk_count}",
-                extra={
-                    "json_fields": {
-                        **log_meta,
-                        "total_time_ms": total_time,
-                        "time_to_first_chunk_ms": (
-                            ttfc * 1000 if ttfc is not None else None
-                        ),
-                        "n_chunks": chunk_count,
-                    }
-                },
-            )
+            # Mark task as completed
             await stream_registry.mark_task_completed(task_id, "completed")
         except Exception as e:
-            elapsed = time_module.perf_counter() - gen_start_time
             logger.error(
-                f"[TIMING] run_ai_generation ERROR after {elapsed:.2f}s: {e}",
-                extra={
-                    "json_fields": {
-                        **log_meta,
-                        "elapsed_ms": elapsed * 1000,
-                        "error": str(e),
-                    }
-                },
+                f"Error in background AI generation for session {session_id}: {e}"
             )
             await stream_registry.mark_task_completed(task_id, "failed")
 
     # Start the AI generation in a background task
     bg_task = asyncio.create_task(run_ai_generation())
     await stream_registry.set_task_asyncio_task(task_id, bg_task)
-    setup_time = (time.perf_counter() - stream_start_time) * 1000
-    logger.info(
-        f"[TIMING] Background task started, setup={setup_time:.1f}ms",
-        extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
-    )
 
     # SSE endpoint that subscribes to the task's stream
     async def event_generator() -> AsyncGenerator[str, None]:
-        import time as time_module
-
-        event_gen_start = time_module.perf_counter()
-        logger.info(
-            f"[TIMING] event_generator STARTED, task={task_id}, session={session_id}, "
-            f"user={user_id}",
-            extra={"json_fields": log_meta},
-        )
         subscriber_queue = None
-        first_chunk_yielded = False
-        chunks_yielded = 0
         try:
             # Subscribe to the task stream (this replays existing messages + live updates)
             subscriber_queue = await stream_registry.subscribe_to_task(
@@ -442,70 +328,22 @@ async def stream_chat_post(
                 return
 
             # Read from the subscriber queue and yield to SSE
-            logger.info(
-                "[TIMING] Starting to read from subscriber_queue",
-                extra={"json_fields": log_meta},
-            )
             while True:
                 try:
                     chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
-                    chunks_yielded += 1
-                    if not first_chunk_yielded:
-                        first_chunk_yielded = True
-                        elapsed = time_module.perf_counter() - event_gen_start
-                        logger.info(
-                            f"[TIMING] FIRST CHUNK from queue at {elapsed:.2f}s, "
-                            f"type={type(chunk).__name__}",
-                            extra={
-                                "json_fields": {
-                                    **log_meta,
-                                    "chunk_type": type(chunk).__name__,
-                                    "elapsed_ms": elapsed * 1000,
-                                }
-                            },
-                        )
                     yield chunk.to_sse()
 
                     # Check for finish signal
                     if isinstance(chunk, StreamFinish):
-                        total_time = time_module.perf_counter() - event_gen_start
-                        logger.info(
-                            f"[TIMING] StreamFinish received in {total_time:.2f}s; "
-                            f"n_chunks={chunks_yielded}",
-                            extra={
-                                "json_fields": {
-                                    **log_meta,
-                                    "chunks_yielded": chunks_yielded,
-                                    "total_time_ms": total_time * 1000,
-                                }
-                            },
-                        )
                         break
                 except asyncio.TimeoutError:
-                    # Send heartbeat to keep connection alive
                     yield StreamHeartbeat().to_sse()
         except GeneratorExit:
-            logger.info(
-                f"[TIMING] GeneratorExit (client disconnected), chunks={chunks_yielded}",
-                extra={
-                    "json_fields": {
-                        **log_meta,
-                        "chunks_yielded": chunks_yielded,
-                        "reason": "client_disconnect",
-                    }
-                },
-            )
             pass  # Client disconnected - background task continues
         except Exception as e:
-            elapsed = (time_module.perf_counter() - event_gen_start) * 1000
-            logger.error(
-                f"[TIMING] event_generator ERROR after {elapsed:.1f}ms: {e}",
-                extra={
-                    "json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}
-                },
-            )
+            logger.error(f"Error in SSE stream for task {task_id}: {e}")
         finally:
             # Unsubscribe when client disconnects or stream ends to prevent resource leak
             if subscriber_queue is not None:
@@ -519,18 +357,6 @@ async def stream_chat_post(
exc_info=True, exc_info=True,
) )
# AI SDK protocol termination - always yield even if unsubscribe fails # AI SDK protocol termination - always yield even if unsubscribe fails
total_time = time_module.perf_counter() - event_gen_start
logger.info(
f"[TIMING] event_generator FINISHED in {total_time:.2f}s; "
f"task={task_id}, session={session_id}, n_chunks={chunks_yielded}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time * 1000,
"chunks_yielded": chunks_yielded,
}
},
)
yield "data: [DONE]\n\n" yield "data: [DONE]\n\n"
return StreamingResponse( return StreamingResponse(
@@ -548,53 +374,44 @@ async def stream_chat_post(
@router.get( @router.get(
"/sessions/{session_id}/stream", "/sessions/{session_id}/stream",
) )
async def resume_session_stream( async def stream_chat_get(
session_id: str, session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id), user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
): ):
""" """
Resume an active stream for a session. Stream chat responses for a session (GET - legacy endpoint).
Called by the AI SDK's ``useChat(resume: true)`` on page load. Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
Checks for an active (in-progress) task on the session and either replays - Text fragments as they are generated
the full SSE stream or returns 204 No Content if nothing is running. - Tool call UI elements (if invoked)
- Tool execution results
Args: Args:
session_id: The chat session identifier. session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
user_id: Optional authenticated user ID. user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns: Returns:
StreamingResponse (SSE) when an active stream exists, StreamingResponse: SSE-formatted response chunks.
or 204 No Content when there is nothing to resume.
""" """
import asyncio session = await _validate_and_get_session(session_id, user_id)
active_task, _last_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
if not active_task:
return Response(status_code=204)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=active_task.task_id,
user_id=user_id,
last_message_id="0-0", # Full replay so useChat rebuilds the message
)
if subscriber_queue is None:
return Response(status_code=204)
async def event_generator() -> AsyncGenerator[str, None]: async def event_generator() -> AsyncGenerator[str, None]:
chunk_count = 0 chunk_count = 0
first_chunk_type: str | None = None first_chunk_type: str | None = None
try: async for chunk in chat_service.stream_chat_completion(
while True: session_id,
try: message,
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0) is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
if chunk_count < 3: if chunk_count < 3:
logger.info( logger.info(
"Resume stream chunk", "Chat stream chunk",
extra={ extra={
"session_id": session_id, "session_id": session_id,
"chunk_type": str(chunk.type), "chunk_type": str(chunk.type),
@@ -604,33 +421,15 @@ async def resume_session_stream(
first_chunk_type = str(chunk.type) first_chunk_type = str(chunk.type)
chunk_count += 1 chunk_count += 1
yield chunk.to_sse() yield chunk.to_sse()
if isinstance(chunk, StreamFinish):
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
except GeneratorExit:
pass
except Exception as e:
logger.error(f"Error in resume stream for session {session_id}: {e}")
finally:
try:
await stream_registry.unsubscribe_from_task(
active_task.task_id, subscriber_queue
)
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {active_task.task_id}: {unsub_err}",
exc_info=True,
)
logger.info( logger.info(
"Resume stream completed", "Chat stream completed",
extra={ extra={
"session_id": session_id, "session_id": session_id,
"n_chunks": chunk_count, "chunk_count": chunk_count,
"first_chunk_type": first_chunk_type, "first_chunk_type": first_chunk_type,
}, },
) )
# AI SDK protocol termination
yield "data: [DONE]\n\n" yield "data: [DONE]\n\n"
return StreamingResponse( return StreamingResponse(
@@ -639,8 +438,8 @@ async def resume_session_stream(
headers={ headers={
"Cache-Control": "no-cache", "Cache-Control": "no-cache",
"Connection": "keep-alive", "Connection": "keep-alive",
"X-Accel-Buffering": "no", "X-Accel-Buffering": "no", # Disable nginx buffering
"x-vercel-ai-ui-message-stream": "v1", "x-vercel-ai-ui-message-stream": "v1", # AI SDK protocol header
}, },
) )
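For orientation, the following is a minimal client-side sketch of consuming this SSE endpoint. It is an illustration only, not part of the diff: the base URL and the absence of auth headers are assumptions, and httpx is used purely as an example HTTP client.

import asyncio
import httpx

async def consume_chat_stream(base_url: str, session_id: str, message: str) -> None:
    # Hypothetical consumer for the GET stream endpoint above.
    url = f"{base_url}/sessions/{session_id}/stream"
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("GET", url, params={"message": message}) as resp:
            async for line in resp.aiter_lines():
                if not line.startswith("data: "):
                    continue  # skip blank separators between SSE frames
                payload = line[len("data: "):]
                if payload == "[DONE]":
                    break  # AI SDK protocol terminator emitted by the server
                print(payload)  # one JSON-encoded Stream* chunk (heartbeats included)

# asyncio.run(consume_chat_stream("http://localhost:8000", "session-123", "hello"))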
@@ -952,42 +751,3 @@ async def health_check() -> dict:
"service": "chat", "service": "chat",
"version": "0.1.0", "version": "0.1.0",
} }
# ========== Schema Export (for OpenAPI / Orval codegen) ==========
ToolResponseUnion = (
AgentsFoundResponse
| NoResultsResponse
| AgentDetailsResponse
| SetupRequirementsResponse
| ExecutionStartedResponse
| NeedLoginResponse
| ErrorResponse
| InputValidationErrorResponse
| AgentOutputResponse
| UnderstandingUpdatedResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| BlockListResponse
| BlockOutputResponse
| DocSearchResultsResponse
| DocPageResponse
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
)
@router.get(
"/schema/tool-responses",
response_model=ToolResponseUnion,
include_in_schema=True,
summary="[Dummy] Tool response type export for codegen",
description="This endpoint is not meant to be called. It exists solely to "
"expose tool response models in the OpenAPI schema for frontend codegen.",
)
async def _tool_response_schema() -> ToolResponseUnion: # type: ignore[return]
"""Never called at runtime. Exists only so Orval generates TS types."""
raise HTTPException(status_code=501, detail="Schema-only endpoint")
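The pattern above is worth spelling out: a never-called endpoint whose response_model is a union of every tool response forces all member models into openapi.json, so the frontend codegen (Orval) emits matching TypeScript types. A self-contained sketch of the same trick, with made-up model names:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class FooResponse(BaseModel):  # illustrative stand-in, not a real model here
    foo: str

class BarResponse(BaseModel):  # illustrative stand-in, not a real model here
    bar: int

@app.get("/schema/example", response_model=FooResponse | BarResponse)
async def _schema_only() -> FooResponse | BarResponse:  # type: ignore[return]
    """Never called at runtime; exists only to expose the models in the schema."""
    raise HTTPException(status_code=501, detail="Schema-only endpoint")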

View File

@@ -52,10 +52,8 @@ from .response_model import (
StreamBaseResponse, StreamBaseResponse,
StreamError, StreamError,
StreamFinish, StreamFinish,
StreamFinishStep,
StreamHeartbeat, StreamHeartbeat,
StreamStart, StreamStart,
StreamStartStep,
StreamTextDelta, StreamTextDelta,
StreamTextEnd, StreamTextEnd,
StreamTextStart, StreamTextStart,
@@ -353,10 +351,6 @@ async def stream_chat_completion(
retry_count: int = 0, retry_count: int = 0,
session: ChatSession | None = None, session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str} context: dict[str, str] | None = None, # {url: str, content: str}
_continuation_message_id: (
str | None
) = None, # Internal: reuse message ID for tool call continuations
_task_id: str | None = None, # Internal: task ID for SSE reconnection support
) -> AsyncGenerator[StreamBaseResponse, None]: ) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling. """Main entry point for streaming chat completions with database handling.
@@ -377,45 +371,21 @@ async def stream_chat_completion(
ValueError: If max_context_messages is exceeded ValueError: If max_context_messages is exceeded
""" """
completion_start = time.monotonic()
# Build log metadata for structured logging
log_meta = {"component": "ChatService", "session_id": session_id}
if user_id:
log_meta["user_id"] = user_id
logger.info( logger.info(
f"[TIMING] stream_chat_completion STARTED, session={session_id}, user={user_id}, " f"Streaming chat completion for session {session_id}, message {message}, user id {user_id}; is_user_message={is_user_message}"
f"message_len={len(message) if message else 0}, is_user={is_user_message}",
extra={
"json_fields": {
**log_meta,
"message_len": len(message) if message else 0,
"is_user_message": is_user_message,
}
},
) )
# Only fetch from Redis if session not provided (initial call) # Only fetch from Redis if session not provided (initial call)
if session is None: if session is None:
fetch_start = time.monotonic()
session = await get_chat_session(session_id, user_id) session = await get_chat_session(session_id, user_id)
fetch_time = (time.monotonic() - fetch_start) * 1000
logger.info( logger.info(
f"[TIMING] get_chat_session took {fetch_time:.1f}ms, " f"Fetched session from Redis: {session.session_id if session else 'None'}, "
f"n_messages={len(session.messages) if session else 0}", f"message_count={len(session.messages) if session else 0}"
extra={
"json_fields": {
**log_meta,
"duration_ms": fetch_time,
"n_messages": len(session.messages) if session else 0,
}
},
) )
else: else:
logger.info( logger.info(
f"[TIMING] Using provided session, messages={len(session.messages)}", f"Using provided session object: {session.session_id}, "
extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}}, f"message_count={len(session.messages)}"
) )
if not session: if not session:
@@ -436,25 +406,17 @@ async def stream_chat_completion(
# Track user message in PostHog # Track user message in PostHog
if is_user_message: if is_user_message:
posthog_start = time.monotonic()
track_user_message( track_user_message(
user_id=user_id, user_id=user_id,
session_id=session_id, session_id=session_id,
message_length=len(message), message_length=len(message),
) )
posthog_time = (time.monotonic() - posthog_start) * 1000
logger.info(
f"[TIMING] track_user_message took {posthog_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": posthog_time}},
)
upsert_start = time.monotonic()
session = await upsert_chat_session(session)
upsert_time = (time.monotonic() - upsert_start) * 1000
logger.info( logger.info(
f"[TIMING] upsert_chat_session took {upsert_time:.1f}ms", f"Upserting session: {session.session_id} with user id {session.user_id}, "
extra={"json_fields": {**log_meta, "duration_ms": upsert_time}}, f"message_count={len(session.messages)}"
) )
session = await upsert_chat_session(session)
assert session, "Session not found" assert session, "Session not found"
# Generate title for new sessions on first user message (non-blocking) # Generate title for new sessions on first user message (non-blocking)
@@ -492,13 +454,7 @@ async def stream_chat_completion(
asyncio.create_task(_update_title()) asyncio.create_task(_update_title())
# Build system prompt with business understanding # Build system prompt with business understanding
prompt_start = time.monotonic()
system_prompt, understanding = await _build_system_prompt(user_id) system_prompt, understanding = await _build_system_prompt(user_id)
prompt_time = (time.monotonic() - prompt_start) * 1000
logger.info(
f"[TIMING] _build_system_prompt took {prompt_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": prompt_time}},
)
# Initialize variables for streaming # Initialize variables for streaming
assistant_response = ChatMessage( assistant_response = ChatMessage(
@@ -523,27 +479,13 @@ async def stream_chat_completion(
# Generate unique IDs for AI SDK protocol # Generate unique IDs for AI SDK protocol
import uuid as uuid_module import uuid as uuid_module
is_continuation = _continuation_message_id is not None message_id = str(uuid_module.uuid4())
message_id = _continuation_message_id or str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4()) text_block_id = str(uuid_module.uuid4())
# Only yield message start for the initial call, not for continuations. # Yield message start
setup_time = (time.monotonic() - completion_start) * 1000 yield StreamStart(messageId=message_id)
logger.info(
f"[TIMING] Setup complete, yielding StreamStart at {setup_time:.1f}ms",
extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
)
if not is_continuation:
yield StreamStart(messageId=message_id, taskId=_task_id)
# Emit start-step before each LLM call (AI SDK uses this to add step boundaries)
yield StreamStartStep()
try: try:
logger.info(
"[TIMING] Calling _stream_chat_chunks",
extra={"json_fields": log_meta},
)
async for chunk in _stream_chat_chunks( async for chunk in _stream_chat_chunks(
session=session, session=session,
tools=tools, tools=tools,
@@ -643,10 +585,6 @@ async def stream_chat_completion(
) )
yield chunk yield chunk
elif isinstance(chunk, StreamFinish): elif isinstance(chunk, StreamFinish):
if has_done_tool_call:
# Tool calls happened — close the step but don't send message-level finish.
# The continuation will open a new step, and finish will come at the end.
yield StreamFinishStep()
if not has_done_tool_call: if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it # Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended: if has_received_text and not text_streaming_ended:
@@ -678,8 +616,6 @@ async def stream_chat_completion(
has_saved_assistant_message = True has_saved_assistant_message = True
has_yielded_end = True has_yielded_end = True
# Emit finish-step before finish (resets AI SDK text/reasoning state)
yield StreamFinishStep()
yield chunk yield chunk
elif isinstance(chunk, StreamError): elif isinstance(chunk, StreamError):
has_yielded_error = True has_yielded_error = True
@@ -729,10 +665,6 @@ async def stream_chat_completion(
logger.info( logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}" f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
) )
# Close the current step before retrying so the recursive call's
# StreamStartStep doesn't produce unbalanced step events.
if not has_yielded_end:
yield StreamFinishStep()
should_retry = True should_retry = True
else: else:
# Non-retryable error or max retries exceeded # Non-retryable error or max retries exceeded
@@ -768,7 +700,6 @@ async def stream_chat_completion(
error_response = StreamError(errorText=error_message) error_response = StreamError(errorText=error_message)
yield error_response yield error_response
if not has_yielded_end: if not has_yielded_end:
yield StreamFinishStep()
yield StreamFinish() yield StreamFinish()
return return
@@ -783,8 +714,6 @@ async def stream_chat_completion(
retry_count=retry_count + 1, retry_count=retry_count + 1,
session=session, session=session,
context=context, context=context,
_continuation_message_id=message_id, # Reuse message ID since start was already sent
_task_id=_task_id,
): ):
yield chunk yield chunk
return # Exit after retry to avoid double-saving in finally block return # Exit after retry to avoid double-saving in finally block
@@ -800,13 +729,9 @@ async def stream_chat_completion(
# Build the messages list in the correct order # Build the messages list in the correct order
messages_to_save: list[ChatMessage] = [] messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any. # Add assistant message with tool_calls if any
# Use extend (not assign) to preserve tool_calls already added by
# _yield_tool_call for long-running tools.
if accumulated_tool_calls: if accumulated_tool_calls:
if not assistant_response.tool_calls: assistant_response.tool_calls = accumulated_tool_calls
assistant_response.tool_calls = []
assistant_response.tool_calls.extend(accumulated_tool_calls)
logger.info( logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message" f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
) )
@@ -858,8 +783,6 @@ async def stream_chat_completion(
session=session, # Pass session object to avoid Redis refetch session=session, # Pass session object to avoid Redis refetch
context=context, context=context,
tool_call_response=str(tool_response_messages), tool_call_response=str(tool_response_messages),
_continuation_message_id=message_id, # Reuse message ID to avoid duplicates
_task_id=_task_id,
): ):
yield chunk yield chunk
@@ -970,21 +893,9 @@ async def _stream_chat_chunks(
SSE formatted JSON response objects SSE formatted JSON response objects
""" """
import time as time_module
stream_chunks_start = time_module.perf_counter()
model = config.model model = config.model
# Build log metadata for structured logging logger.info("Starting pure chat stream")
log_meta = {"component": "ChatService", "session_id": session.session_id}
if session.user_id:
log_meta["user_id"] = session.user_id
logger.info(
f"[TIMING] _stream_chat_chunks STARTED, session={session.session_id}, "
f"user={session.user_id}, n_messages={len(session.messages)}",
extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}},
)
messages = session.to_openai_messages() messages = session.to_openai_messages()
if system_prompt: if system_prompt:
@@ -995,18 +906,12 @@ async def _stream_chat_chunks(
messages = [system_message] + messages messages = [system_message] + messages
# Apply context window management # Apply context window management
context_start = time_module.perf_counter()
context_result = await _manage_context_window( context_result = await _manage_context_window(
messages=messages, messages=messages,
model=model, model=model,
api_key=config.api_key, api_key=config.api_key,
base_url=config.base_url, base_url=config.base_url,
) )
context_time = (time_module.perf_counter() - context_start) * 1000
logger.info(
f"[TIMING] _manage_context_window took {context_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": context_time}},
)
if context_result.error: if context_result.error:
if "System prompt dropped" in context_result.error: if "System prompt dropped" in context_result.error:
@@ -1041,19 +946,9 @@ async def _stream_chat_chunks(
while retry_count <= MAX_RETRIES: while retry_count <= MAX_RETRIES:
try: try:
elapsed = (time_module.perf_counter() - stream_chunks_start) * 1000
retry_info = (
f" (retry {retry_count}/{MAX_RETRIES})" if retry_count > 0 else ""
)
logger.info( logger.info(
f"[TIMING] Creating OpenAI stream at {elapsed:.1f}ms{retry_info}", f"Creating OpenAI chat completion stream..."
extra={ f"{f' (retry {retry_count}/{MAX_RETRIES})' if retry_count > 0 else ''}"
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"retry_count": retry_count,
}
},
) )
# Build extra_body for OpenRouter tracing and PostHog analytics # Build extra_body for OpenRouter tracing and PostHog analytics
@@ -1070,11 +965,6 @@ async def _stream_chat_chunks(
:128 :128
] # OpenRouter limit ] # OpenRouter limit
# Enable adaptive thinking for Anthropic models via OpenRouter
if config.thinking_enabled and "anthropic" in model.lower():
extra_body["reasoning"] = {"enabled": True}
api_call_start = time_module.perf_counter()
stream = await client.chat.completions.create( stream = await client.chat.completions.create(
model=model, model=model,
messages=cast(list[ChatCompletionMessageParam], messages), messages=cast(list[ChatCompletionMessageParam], messages),
@@ -1084,11 +974,6 @@ async def _stream_chat_chunks(
stream_options=ChatCompletionStreamOptionsParam(include_usage=True), stream_options=ChatCompletionStreamOptionsParam(include_usage=True),
extra_body=extra_body, extra_body=extra_body,
) )
api_init_time = (time_module.perf_counter() - api_call_start) * 1000
logger.info(
f"[TIMING] OpenAI stream object returned in {api_init_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": api_init_time}},
)
# Variables to accumulate tool calls # Variables to accumulate tool calls
tool_calls: list[dict[str, Any]] = [] tool_calls: list[dict[str, Any]] = []
@@ -1099,13 +984,10 @@ async def _stream_chat_chunks(
# Track if we've started the text block # Track if we've started the text block
text_started = False text_started = False
first_content_chunk = True
chunk_count = 0
# Process the stream # Process the stream
chunk: ChatCompletionChunk chunk: ChatCompletionChunk
async for chunk in stream: async for chunk in stream:
chunk_count += 1
if chunk.usage: if chunk.usage:
yield StreamUsage( yield StreamUsage(
promptTokens=chunk.usage.prompt_tokens, promptTokens=chunk.usage.prompt_tokens,
@@ -1128,23 +1010,6 @@ async def _stream_chat_chunks(
if not text_started and text_block_id: if not text_started and text_block_id:
yield StreamTextStart(id=text_block_id) yield StreamTextStart(id=text_block_id)
text_started = True text_started = True
# Log timing for first content chunk
if first_content_chunk:
first_content_chunk = False
ttfc = (
time_module.perf_counter() - api_call_start
) * 1000
logger.info(
f"[TIMING] FIRST CONTENT CHUNK at {ttfc:.1f}ms "
f"(since API call), n_chunks={chunk_count}",
extra={
"json_fields": {
**log_meta,
"time_to_first_chunk_ms": ttfc,
"n_chunks": chunk_count,
}
},
)
# Stream the text delta # Stream the text delta
text_response = StreamTextDelta( text_response = StreamTextDelta(
id=text_block_id or "", id=text_block_id or "",
@@ -1201,21 +1066,7 @@ async def _stream_chat_chunks(
toolName=tool_calls[idx]["function"]["name"], toolName=tool_calls[idx]["function"]["name"],
) )
emitted_start_for_idx.add(idx) emitted_start_for_idx.add(idx)
stream_duration = time_module.perf_counter() - api_call_start logger.info(f"Stream complete. Finish reason: {finish_reason}")
logger.info(
f"[TIMING] OpenAI stream COMPLETE, finish_reason={finish_reason}, "
f"duration={stream_duration:.2f}s, "
f"n_chunks={chunk_count}, n_tool_calls={len(tool_calls)}",
extra={
"json_fields": {
**log_meta,
"stream_duration_ms": stream_duration * 1000,
"finish_reason": finish_reason,
"n_chunks": chunk_count,
"n_tool_calls": len(tool_calls),
}
},
)
# Yield all accumulated tool calls after the stream is complete # Yield all accumulated tool calls after the stream is complete
# This ensures all tool call arguments have been fully received # This ensures all tool call arguments have been fully received
@@ -1235,12 +1086,6 @@ async def _stream_chat_chunks(
# Re-raise to trigger retry logic in the parent function # Re-raise to trigger retry logic in the parent function
raise raise
total_time = (time_module.perf_counter() - stream_chunks_start) * 1000
logger.info(
f"[TIMING] _stream_chat_chunks COMPLETED in {total_time/1000:.1f}s; "
f"session={session.session_id}, user={session.user_id}",
extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
)
yield StreamFinish() yield StreamFinish()
return return
except Exception as e: except Exception as e:
@@ -1408,9 +1253,13 @@ async def _yield_tool_call(
operation_id=operation_id, operation_id=operation_id,
) )
# Attach the tool_call to the current turn's assistant message # Save assistant message with tool_call FIRST (required by LLM)
# (or create one if this is a tool-only response with no text). assistant_message = ChatMessage(
session.add_tool_call_to_current_turn(tool_calls[yield_idx]) role="assistant",
content="",
tool_calls=[tool_calls[yield_idx]],
)
session.messages.append(assistant_message)
# Then save pending tool result # Then save pending tool result
pending_message = ChatMessage( pending_message = ChatMessage(
@@ -1716,7 +1565,6 @@ async def _execute_long_running_tool_with_streaming(
task_id, task_id,
StreamError(errorText=str(e)), StreamError(errorText=str(e)),
) )
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish()) await stream_registry.publish_chunk(task_id, StreamFinish())
await _update_pending_operation( await _update_pending_operation(
@@ -1833,10 +1681,6 @@ async def _generate_llm_continuation(
if session_id: if session_id:
extra_body["session_id"] = session_id[:128] extra_body["session_id"] = session_id[:128]
# Enable adaptive thinking for Anthropic models via OpenRouter
if config.thinking_enabled and "anthropic" in config.model.lower():
extra_body["reasoning"] = {"enabled": True}
retry_count = 0 retry_count = 0
last_error: Exception | None = None last_error: Exception | None = None
response = None response = None
@@ -1967,10 +1811,6 @@ async def _generate_llm_continuation_with_streaming(
if session_id: if session_id:
extra_body["session_id"] = session_id[:128] extra_body["session_id"] = session_id[:128]
# Enable adaptive thinking for Anthropic models via OpenRouter
if config.thinking_enabled and "anthropic" in config.model.lower():
extra_body["reasoning"] = {"enabled": True}
# Make streaming LLM call (no tools - just text response) # Make streaming LLM call (no tools - just text response)
from typing import cast from typing import cast
@@ -1982,7 +1822,6 @@ async def _generate_llm_continuation_with_streaming(
# Publish start event # Publish start event
await stream_registry.publish_chunk(task_id, StreamStart(messageId=message_id)) await stream_registry.publish_chunk(task_id, StreamStart(messageId=message_id))
await stream_registry.publish_chunk(task_id, StreamStartStep())
await stream_registry.publish_chunk(task_id, StreamTextStart(id=text_block_id)) await stream_registry.publish_chunk(task_id, StreamTextStart(id=text_block_id))
# Stream the response # Stream the response
@@ -2006,7 +1845,6 @@ async def _generate_llm_continuation_with_streaming(
# Publish end events # Publish end events
await stream_registry.publish_chunk(task_id, StreamTextEnd(id=text_block_id)) await stream_registry.publish_chunk(task_id, StreamTextEnd(id=text_block_id))
await stream_registry.publish_chunk(task_id, StreamFinishStep())
if assistant_content: if assistant_content:
# Reload session from DB to avoid race condition with user messages # Reload session from DB to avoid race condition with user messages
@@ -2048,5 +1886,4 @@ async def _generate_llm_continuation_with_streaming(
task_id, task_id,
StreamError(errorText=f"Failed to generate response: {e}"), StreamError(errorText=f"Failed to generate response: {e}"),
) )
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish()) await stream_registry.publish_chunk(task_id, StreamFinish())
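Taken together, one streamed assistant turn on the wire looks roughly like the sequence below. The JSON field names are inferred from the Stream* model names and the AI SDK UI message stream protocol header, so treat this as a shape sketch rather than the authoritative format; heartbeat frames may be interleaved whenever no chunk arrives within the 30-second window.

# Assumed wire shape of a text-only turn (field names inferred, not verified):
example_turn = [
    'data: {"type": "start", "messageId": "<uuid>"}',
    'data: {"type": "text-start", "id": "<block-uuid>"}',
    'data: {"type": "text-delta", "id": "<block-uuid>", "delta": "Hel"}',
    'data: {"type": "text-delta", "id": "<block-uuid>", "delta": "lo"}',
    'data: {"type": "text-end", "id": "<block-uuid>"}',
    'data: {"type": "finish"}',
    "data: [DONE]",
]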

View File

@@ -104,24 +104,6 @@ async def create_task(
Returns: Returns:
The created ActiveTask instance (metadata only) The created ActiveTask instance (metadata only)
""" """
import time
start_time = time.perf_counter()
# Build log metadata for structured logging
log_meta = {
"component": "StreamRegistry",
"task_id": task_id,
"session_id": session_id,
}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] create_task STARTED, task={task_id}, session={session_id}, user={user_id}",
extra={"json_fields": log_meta},
)
task = ActiveTask( task = ActiveTask(
task_id=task_id, task_id=task_id,
session_id=session_id, session_id=session_id,
@@ -132,18 +114,10 @@ async def create_task(
) )
# Store metadata in Redis # Store metadata in Redis
redis_start = time.perf_counter()
redis = await get_redis_async() redis = await get_redis_async()
redis_time = (time.perf_counter() - redis_start) * 1000
logger.info(
f"[TIMING] get_redis_async took {redis_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": redis_time}},
)
meta_key = _get_task_meta_key(task_id) meta_key = _get_task_meta_key(task_id)
op_key = _get_operation_mapping_key(operation_id) op_key = _get_operation_mapping_key(operation_id)
hset_start = time.perf_counter()
await redis.hset( # type: ignore[misc] await redis.hset( # type: ignore[misc]
meta_key, meta_key,
mapping={ mapping={
@@ -157,22 +131,12 @@ async def create_task(
"created_at": task.created_at.isoformat(), "created_at": task.created_at.isoformat(),
}, },
) )
hset_time = (time.perf_counter() - hset_start) * 1000
logger.info(
f"[TIMING] redis.hset took {hset_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": hset_time}},
)
await redis.expire(meta_key, config.stream_ttl) await redis.expire(meta_key, config.stream_ttl)
# Create operation_id -> task_id mapping for webhook lookups # Create operation_id -> task_id mapping for webhook lookups
await redis.set(op_key, task_id, ex=config.stream_ttl) await redis.set(op_key, task_id, ex=config.stream_ttl)
total_time = (time.perf_counter() - start_time) * 1000 logger.debug(f"Created task {task_id} for session {session_id}")
logger.info(
f"[TIMING] create_task COMPLETED in {total_time:.1f}ms; task={task_id}, session={session_id}",
extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
)
return task return task
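For readers tracing the storage layer: create_task leaves behind two Redis entries. The key patterns below are assumptions (the _get_task_meta_key / _get_operation_mapping_key helpers are not shown in this excerpt); the value shapes come from the hset mapping above.

# Hypothetical key layout (patterns assumed, shapes taken from create_task):
# task:meta:{task_id}     -> hash {task_id, session_id, user_id, status, created_at, ...}
# task:operation:{op_id}  -> string task_id (lets webhook handlers find the task)
# Both keys expire after config.stream_ttl seconds.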
@@ -192,60 +156,26 @@ async def publish_chunk(
Returns: Returns:
The Redis Stream message ID The Redis Stream message ID
""" """
import time
start_time = time.perf_counter()
chunk_type = type(chunk).__name__
chunk_json = chunk.model_dump_json() chunk_json = chunk.model_dump_json()
message_id = "0-0" message_id = "0-0"
# Build log metadata
log_meta = {
"component": "StreamRegistry",
"task_id": task_id,
"chunk_type": chunk_type,
}
try: try:
redis = await get_redis_async() redis = await get_redis_async()
stream_key = _get_task_stream_key(task_id) stream_key = _get_task_stream_key(task_id)
# Write to Redis Stream for persistence and real-time delivery # Write to Redis Stream for persistence and real-time delivery
xadd_start = time.perf_counter()
raw_id = await redis.xadd( raw_id = await redis.xadd(
stream_key, stream_key,
{"data": chunk_json}, {"data": chunk_json},
maxlen=config.stream_max_length, maxlen=config.stream_max_length,
) )
xadd_time = (time.perf_counter() - xadd_start) * 1000
message_id = raw_id if isinstance(raw_id, str) else raw_id.decode() message_id = raw_id if isinstance(raw_id, str) else raw_id.decode()
# Set TTL on stream to match task metadata TTL # Set TTL on stream to match task metadata TTL
await redis.expire(stream_key, config.stream_ttl) await redis.expire(stream_key, config.stream_ttl)
total_time = (time.perf_counter() - start_time) * 1000
# Only log timing for significant chunks or slow operations
if (
chunk_type
in ("StreamStart", "StreamFinish", "StreamTextStart", "StreamTextEnd")
or total_time > 50
):
logger.info(
f"[TIMING] publish_chunk {chunk_type} in {total_time:.1f}ms (xadd={xadd_time:.1f}ms)",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"xadd_time_ms": xadd_time,
"message_id": message_id,
}
},
)
except Exception as e: except Exception as e:
elapsed = (time.perf_counter() - start_time) * 1000
logger.error( logger.error(
f"[TIMING] Failed to publish chunk {chunk_type} after {elapsed:.1f}ms: {e}", f"Failed to publish chunk for task {task_id}: {e}",
extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
exc_info=True, exc_info=True,
) )
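In isolation, the write path is a length-capped XADD plus a TTL refresh. A stripped-down sketch, assuming the redis-py asyncio client and an illustrative cap:

import redis.asyncio as redis

async def publish(r: redis.Redis, stream_key: str, chunk_json: str, ttl: int) -> str:
    # Append the serialized chunk to a capped Redis Stream (oldest entries trimmed).
    raw_id = await r.xadd(stream_key, {"data": chunk_json}, maxlen=1000)
    # Keep the stream's lifetime aligned with the task metadata TTL.
    await r.expire(stream_key, ttl)
    return raw_id if isinstance(raw_id, str) else raw_id.decode()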
@@ -270,61 +200,24 @@ async def subscribe_to_task(
An asyncio Queue that will receive stream chunks, or None if task not found An asyncio Queue that will receive stream chunks, or None if task not found
or user doesn't have access or user doesn't have access
""" """
import time
start_time = time.perf_counter()
# Build log metadata
log_meta = {"component": "StreamRegistry", "task_id": task_id}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] subscribe_to_task STARTED, task={task_id}, user={user_id}, last_msg={last_message_id}",
extra={"json_fields": {**log_meta, "last_message_id": last_message_id}},
)
redis_start = time.perf_counter()
redis = await get_redis_async() redis = await get_redis_async()
meta_key = _get_task_meta_key(task_id) meta_key = _get_task_meta_key(task_id)
meta: dict[Any, Any] = await redis.hgetall(meta_key) # type: ignore[misc] meta: dict[Any, Any] = await redis.hgetall(meta_key) # type: ignore[misc]
hgetall_time = (time.perf_counter() - redis_start) * 1000
logger.info(
f"[TIMING] Redis hgetall took {hgetall_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": hgetall_time}},
)
if not meta: if not meta:
elapsed = (time.perf_counter() - start_time) * 1000 logger.debug(f"Task {task_id} not found in Redis")
logger.info(
f"[TIMING] Task not found in Redis after {elapsed:.1f}ms",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"reason": "task_not_found",
}
},
)
return None return None
# Note: Redis client uses decode_responses=True, so keys are strings # Note: Redis client uses decode_responses=True, so keys are strings
task_status = meta.get("status", "") task_status = meta.get("status", "")
task_user_id = meta.get("user_id", "") or None task_user_id = meta.get("user_id", "") or None
log_meta["session_id"] = meta.get("session_id", "")
# Validate ownership - if task has an owner, requester must match # Validate ownership - if task has an owner, requester must match
if task_user_id: if task_user_id:
if user_id != task_user_id: if user_id != task_user_id:
logger.warning( logger.warning(
f"[TIMING] Access denied: user {user_id} tried to access task owned by {task_user_id}", f"User {user_id} denied access to task {task_id} "
extra={ f"owned by {task_user_id}"
"json_fields": {
**log_meta,
"task_owner": task_user_id,
"reason": "access_denied",
}
},
) )
return None return None
@@ -332,19 +225,7 @@ async def subscribe_to_task(
stream_key = _get_task_stream_key(task_id) stream_key = _get_task_stream_key(task_id)
# Step 1: Replay messages from Redis Stream # Step 1: Replay messages from Redis Stream
xread_start = time.perf_counter()
messages = await redis.xread({stream_key: last_message_id}, block=0, count=1000) messages = await redis.xread({stream_key: last_message_id}, block=0, count=1000)
xread_time = (time.perf_counter() - xread_start) * 1000
logger.info(
f"[TIMING] Redis xread (replay) took {xread_time:.1f}ms, status={task_status}",
extra={
"json_fields": {
**log_meta,
"duration_ms": xread_time,
"task_status": task_status,
}
},
)
replayed_count = 0 replayed_count = 0
replay_last_id = last_message_id replay_last_id = last_message_id
@@ -363,48 +244,19 @@ async def subscribe_to_task(
except Exception as e: except Exception as e:
logger.warning(f"Failed to replay message: {e}") logger.warning(f"Failed to replay message: {e}")
logger.info( logger.debug(f"Task {task_id}: replayed {replayed_count} messages")
f"[TIMING] Replayed {replayed_count} messages, last_id={replay_last_id}",
extra={
"json_fields": {
**log_meta,
"n_messages_replayed": replayed_count,
"replay_last_id": replay_last_id,
}
},
)
# Step 2: If task is still running, start stream listener for live updates # Step 2: If task is still running, start stream listener for live updates
if task_status == "running": if task_status == "running":
logger.info(
"[TIMING] Task still running, starting _stream_listener",
extra={"json_fields": {**log_meta, "task_status": task_status}},
)
listener_task = asyncio.create_task( listener_task = asyncio.create_task(
_stream_listener(task_id, subscriber_queue, replay_last_id, log_meta) _stream_listener(task_id, subscriber_queue, replay_last_id)
) )
# Track listener task for cleanup on unsubscribe # Track listener task for cleanup on unsubscribe
_listener_tasks[id(subscriber_queue)] = (task_id, listener_task) _listener_tasks[id(subscriber_queue)] = (task_id, listener_task)
else: else:
# Task is completed/failed - add finish marker # Task is completed/failed - add finish marker
logger.info(
f"[TIMING] Task already {task_status}, adding StreamFinish",
extra={"json_fields": {**log_meta, "task_status": task_status}},
)
await subscriber_queue.put(StreamFinish()) await subscriber_queue.put(StreamFinish())
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] subscribe_to_task COMPLETED in {total_time:.1f}ms; task={task_id}, "
f"n_messages_replayed={replayed_count}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"n_messages_replayed": replayed_count,
}
},
)
return subscriber_queue return subscriber_queue
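The subscribe path is a replay-then-tail over a Redis Stream: one XREAD from the caller's last ID replays history into the queue, then (for running tasks) a blocking XREAD loop tails live entries. A condensed sketch under the same assumptions (redis-py asyncio client with decode_responses=True):

import redis.asyncio as redis

async def replay_then_tail(r: redis.Redis, stream_key: str, last_id: str = "0-0") -> None:
    # Step 1: replay everything already in the stream.
    for _key, entries in await r.xread({stream_key: last_id}, count=1000):
        for entry_id, fields in entries:
            print("replayed:", fields["data"])
            last_id = entry_id
    # Step 2: tail live entries, blocking up to 30s per call like the listener above.
    while True:
        messages = await r.xread({stream_key: last_id}, block=30000, count=100)
        if not messages:
            continue  # timeout: real code re-checks whether the task is still running
        for _key, entries in messages:
            for entry_id, fields in entries:
                print("live:", fields["data"])
                last_id = entry_id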
@@ -412,7 +264,6 @@ async def _stream_listener(
task_id: str, task_id: str,
subscriber_queue: asyncio.Queue[StreamBaseResponse], subscriber_queue: asyncio.Queue[StreamBaseResponse],
last_replayed_id: str, last_replayed_id: str,
log_meta: dict | None = None,
) -> None: ) -> None:
"""Listen to Redis Stream for new messages using blocking XREAD. """Listen to Redis Stream for new messages using blocking XREAD.
@@ -423,27 +274,10 @@ async def _stream_listener(
task_id: Task ID to listen for task_id: Task ID to listen for
subscriber_queue: Queue to deliver messages to subscriber_queue: Queue to deliver messages to
last_replayed_id: Last message ID from replay (continue from here) last_replayed_id: Last message ID from replay (continue from here)
log_meta: Structured logging metadata
""" """
import time
start_time = time.perf_counter()
# Use provided log_meta or build minimal one
if log_meta is None:
log_meta = {"component": "StreamRegistry", "task_id": task_id}
logger.info(
f"[TIMING] _stream_listener STARTED, task={task_id}, last_id={last_replayed_id}",
extra={"json_fields": {**log_meta, "last_replayed_id": last_replayed_id}},
)
queue_id = id(subscriber_queue) queue_id = id(subscriber_queue)
# Track the last successfully delivered message ID for recovery hints # Track the last successfully delivered message ID for recovery hints
last_delivered_id = last_replayed_id last_delivered_id = last_replayed_id
messages_delivered = 0
first_message_time = None
xread_count = 0
try: try:
redis = await get_redis_async() redis = await get_redis_async()
@@ -453,39 +287,9 @@ async def _stream_listener(
while True: while True:
# Block for up to 30 seconds waiting for new messages # Block for up to 30 seconds waiting for new messages
# This allows periodic checking if task is still running # This allows periodic checking if task is still running
xread_start = time.perf_counter()
xread_count += 1
messages = await redis.xread( messages = await redis.xread(
{stream_key: current_id}, block=30000, count=100 {stream_key: current_id}, block=30000, count=100
) )
xread_time = (time.perf_counter() - xread_start) * 1000
if messages:
msg_count = sum(len(msgs) for _, msgs in messages)
logger.info(
f"[TIMING] xread #{xread_count} returned {msg_count} messages in {xread_time:.1f}ms",
extra={
"json_fields": {
**log_meta,
"xread_count": xread_count,
"n_messages": msg_count,
"duration_ms": xread_time,
}
},
)
elif xread_time > 1000:
# Only log timeouts (30s blocking)
logger.info(
f"[TIMING] xread #{xread_count} timeout after {xread_time:.1f}ms",
extra={
"json_fields": {
**log_meta,
"xread_count": xread_count,
"duration_ms": xread_time,
"reason": "timeout",
}
},
)
if not messages: if not messages:
# Timeout - check if task is still running # Timeout - check if task is still running
@@ -522,30 +326,10 @@ async def _stream_listener(
) )
# Update last delivered ID on successful delivery # Update last delivered ID on successful delivery
last_delivered_id = current_id last_delivered_id = current_id
messages_delivered += 1
if first_message_time is None:
first_message_time = time.perf_counter()
elapsed = (first_message_time - start_time) * 1000
logger.info(
f"[TIMING] FIRST live message at {elapsed:.1f}ms, type={type(chunk).__name__}",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"chunk_type": type(chunk).__name__,
}
},
)
except asyncio.TimeoutError: except asyncio.TimeoutError:
logger.warning( logger.warning(
f"[TIMING] Subscriber queue full, delivery timed out after {QUEUE_PUT_TIMEOUT}s", f"Subscriber queue full for task {task_id}, "
extra={ f"message delivery timed out after {QUEUE_PUT_TIMEOUT}s"
"json_fields": {
**log_meta,
"timeout_s": QUEUE_PUT_TIMEOUT,
"reason": "queue_full",
}
},
) )
# Send overflow error with recovery info # Send overflow error with recovery info
try: try:
@@ -567,44 +351,15 @@ async def _stream_listener(
# Stop listening on finish # Stop listening on finish
if isinstance(chunk, StreamFinish): if isinstance(chunk, StreamFinish):
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] StreamFinish received in {total_time/1000:.1f}s; delivered={messages_delivered}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"messages_delivered": messages_delivered,
}
},
)
return return
except Exception as e: except Exception as e:
logger.warning( logger.warning(f"Error processing stream message: {e}")
f"Error processing stream message: {e}",
extra={"json_fields": {**log_meta, "error": str(e)}},
)
except asyncio.CancelledError: except asyncio.CancelledError:
elapsed = (time.perf_counter() - start_time) * 1000 logger.debug(f"Stream listener cancelled for task {task_id}")
logger.info(
f"[TIMING] _stream_listener CANCELLED after {elapsed:.1f}ms, delivered={messages_delivered}",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"messages_delivered": messages_delivered,
"reason": "cancelled",
}
},
)
raise # Re-raise to propagate cancellation raise # Re-raise to propagate cancellation
except Exception as e: except Exception as e:
elapsed = (time.perf_counter() - start_time) * 1000 logger.error(f"Stream listener error for task {task_id}: {e}")
logger.error(
f"[TIMING] _stream_listener ERROR after {elapsed:.1f}ms: {e}",
extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
)
# On error, send finish to unblock subscriber # On error, send finish to unblock subscriber
try: try:
await asyncio.wait_for( await asyncio.wait_for(
@@ -613,24 +368,10 @@ async def _stream_listener(
) )
except (asyncio.TimeoutError, asyncio.QueueFull): except (asyncio.TimeoutError, asyncio.QueueFull):
logger.warning( logger.warning(
"Could not deliver finish event after error", f"Could not deliver finish event for task {task_id} after error"
extra={"json_fields": log_meta},
) )
finally: finally:
# Clean up listener task mapping on exit # Clean up listener task mapping on exit
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] _stream_listener FINISHED in {total_time/1000:.1f}s; task={task_id}, "
f"delivered={messages_delivered}, xread_count={xread_count}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"messages_delivered": messages_delivered,
"xread_count": xread_count,
}
},
)
_listener_tasks.pop(queue_id, None) _listener_tasks.pop(queue_id, None)
@@ -857,10 +598,8 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
ResponseType, ResponseType,
StreamError, StreamError,
StreamFinish, StreamFinish,
StreamFinishStep,
StreamHeartbeat, StreamHeartbeat,
StreamStart, StreamStart,
StreamStartStep,
StreamTextDelta, StreamTextDelta,
StreamTextEnd, StreamTextEnd,
StreamTextStart, StreamTextStart,
@@ -874,8 +613,6 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
type_to_class: dict[str, type[StreamBaseResponse]] = { type_to_class: dict[str, type[StreamBaseResponse]] = {
ResponseType.START.value: StreamStart, ResponseType.START.value: StreamStart,
ResponseType.FINISH.value: StreamFinish, ResponseType.FINISH.value: StreamFinish,
ResponseType.START_STEP.value: StreamStartStep,
ResponseType.FINISH_STEP.value: StreamFinishStep,
ResponseType.TEXT_START.value: StreamTextStart, ResponseType.TEXT_START.value: StreamTextStart,
ResponseType.TEXT_DELTA.value: StreamTextDelta, ResponseType.TEXT_DELTA.value: StreamTextDelta,
ResponseType.TEXT_END.value: StreamTextEnd, ResponseType.TEXT_END.value: StreamTextEnd,

View File

@@ -13,32 +13,10 @@ from backend.api.features.chat.tools.models import (
NoResultsResponse, NoResultsResponse,
) )
from backend.api.features.store.hybrid_search import unified_hybrid_search from backend.api.features.store.hybrid_search import unified_hybrid_search
from backend.data.block import BlockType, get_block from backend.data.block import get_block
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
_TARGET_RESULTS = 10
# Over-fetch to compensate for post-hoc filtering of graph-only blocks.
# 40 is ~2x the number currently removed by filtering; the query-speed difference between page sizes 10 and 40 is minimal.
_OVERFETCH_PAGE_SIZE = 40
# Block types that only work within graphs and cannot run standalone in CoPilot.
COPILOT_EXCLUDED_BLOCK_TYPES = {
BlockType.INPUT, # Graph interface definition - data enters via chat, not graph inputs
BlockType.OUTPUT, # Graph interface definition - data exits via chat, not graph outputs
BlockType.WEBHOOK, # Wait for external events - would hang forever in CoPilot
BlockType.WEBHOOK_MANUAL, # Same as WEBHOOK
BlockType.NOTE, # Visual annotation only - no runtime behavior
BlockType.HUMAN_IN_THE_LOOP, # Pauses for human approval - CoPilot IS human-in-the-loop
BlockType.AGENT, # AgentExecutorBlock requires execution_context - use run_agent tool
}
# Specific block IDs excluded from CoPilot (STANDARD type but still require graph context)
COPILOT_EXCLUDED_BLOCK_IDS = {
# SmartDecisionMakerBlock - dynamically discovers downstream blocks via graph topology
"3b191d9f-356f-482d-8238-ba04b6d18381",
}
class FindBlockTool(BaseTool): class FindBlockTool(BaseTool):
"""Tool for searching available blocks.""" """Tool for searching available blocks."""
@@ -110,7 +88,7 @@ class FindBlockTool(BaseTool):
query=query, query=query,
content_types=[ContentType.BLOCK], content_types=[ContentType.BLOCK],
page=1, page=1,
page_size=_OVERFETCH_PAGE_SIZE, page_size=10,
) )
if not results: if not results:
@@ -130,35 +108,18 @@ class FindBlockTool(BaseTool):
block = get_block(block_id) block = get_block(block_id)
# Skip disabled blocks # Skip disabled blocks
if not block or block.disabled: if block and not block.disabled:
continue
# Skip blocks excluded from CoPilot (graph-only blocks)
if (
block.block_type in COPILOT_EXCLUDED_BLOCK_TYPES
or block.id in COPILOT_EXCLUDED_BLOCK_IDS
):
continue
# Get input/output schemas # Get input/output schemas
input_schema = {} input_schema = {}
output_schema = {} output_schema = {}
try: try:
input_schema = block.input_schema.jsonschema() input_schema = block.input_schema.jsonschema()
except Exception as e: except Exception:
logger.debug( pass
"Failed to generate input schema for block %s: %s",
block_id,
e,
)
try: try:
output_schema = block.output_schema.jsonschema() output_schema = block.output_schema.jsonschema()
except Exception as e: except Exception:
logger.debug( pass
"Failed to generate output schema for block %s: %s",
block_id,
e,
)
# Get categories from block instance # Get categories from block instance
categories = [] categories = []
@@ -202,19 +163,6 @@ class FindBlockTool(BaseTool):
) )
) )
if len(blocks) >= _TARGET_RESULTS:
break
if blocks and len(blocks) < _TARGET_RESULTS:
logger.debug(
"find_block returned %d/%d results for query '%s' "
"(filtered %d excluded/disabled blocks)",
len(blocks),
_TARGET_RESULTS,
query,
len(results) - len(blocks),
)
if not blocks: if not blocks:
return NoResultsResponse( return NoResultsResponse(
message=f"No blocks found for '{query}'", message=f"No blocks found for '{query}'",
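One side of this diff implements an over-fetch, filter, early-stop shape: request more candidates than needed, drop disabled and graph-excluded blocks, and stop once the target count is reached. A generic sketch, with hypothetical stand-ins for unified_hybrid_search and get_block:

TARGET_RESULTS = 10
OVERFETCH_PAGE_SIZE = 40  # headroom for candidates the filter will drop

def pick_blocks(search, lookup, query: str, excluded_ids: set[str]) -> list:
    # `search` and `lookup` are illustrative stand-ins for the real calls.
    picked = []
    for hit in search(query, page_size=OVERFETCH_PAGE_SIZE):
        block = lookup(hit["content_id"])
        if not block or block.disabled or block.id in excluded_ids:
            continue
        picked.append(block)
        if len(picked) >= TARGET_RESULTS:
            break
    return picked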

View File

@@ -1,139 +0,0 @@
"""Tests for block filtering in FindBlockTool."""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from backend.api.features.chat.tools.find_block import (
COPILOT_EXCLUDED_BLOCK_IDS,
COPILOT_EXCLUDED_BLOCK_TYPES,
FindBlockTool,
)
from backend.api.features.chat.tools.models import BlockListResponse
from backend.data.block import BlockType
from ._test_data import make_session
_TEST_USER_ID = "test-user-find-block"
def make_mock_block(
block_id: str, name: str, block_type: BlockType, disabled: bool = False
):
"""Create a mock block for testing."""
mock = MagicMock()
mock.id = block_id
mock.name = name
mock.description = f"{name} description"
mock.block_type = block_type
mock.disabled = disabled
mock.input_schema = MagicMock()
mock.input_schema.jsonschema.return_value = {"properties": {}, "required": []}
mock.input_schema.get_credentials_fields.return_value = {}
mock.output_schema = MagicMock()
mock.output_schema.jsonschema.return_value = {}
mock.categories = []
return mock
class TestFindBlockFiltering:
"""Tests for block filtering in FindBlockTool."""
def test_excluded_block_types_contains_expected_types(self):
"""Verify COPILOT_EXCLUDED_BLOCK_TYPES contains all graph-only types."""
assert BlockType.INPUT in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.OUTPUT in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.WEBHOOK in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.WEBHOOK_MANUAL in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.NOTE in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.HUMAN_IN_THE_LOOP in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.AGENT in COPILOT_EXCLUDED_BLOCK_TYPES
def test_excluded_block_ids_contains_smart_decision_maker(self):
"""Verify SmartDecisionMakerBlock is in COPILOT_EXCLUDED_BLOCK_IDS."""
assert "3b191d9f-356f-482d-8238-ba04b6d18381" in COPILOT_EXCLUDED_BLOCK_IDS
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_type_filtered_from_results(self):
"""Verify blocks with excluded BlockTypes are filtered from search results."""
session = make_session(user_id=_TEST_USER_ID)
# Mock search returns an INPUT block (excluded) and a STANDARD block (included)
search_results = [
{"content_id": "input-block-id", "score": 0.9},
{"content_id": "standard-block-id", "score": 0.8},
]
input_block = make_mock_block("input-block-id", "Input Block", BlockType.INPUT)
standard_block = make_mock_block(
"standard-block-id", "HTTP Request", BlockType.STANDARD
)
def mock_get_block(block_id):
return {
"input-block-id": input_block,
"standard-block-id": standard_block,
}.get(block_id)
with patch(
"backend.api.features.chat.tools.find_block.unified_hybrid_search",
new_callable=AsyncMock,
return_value=(search_results, 2),
):
with patch(
"backend.api.features.chat.tools.find_block.get_block",
side_effect=mock_get_block,
):
tool = FindBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID, session=session, query="test"
)
# Should only return the standard block, not the INPUT block
assert isinstance(response, BlockListResponse)
assert len(response.blocks) == 1
assert response.blocks[0].id == "standard-block-id"
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_id_filtered_from_results(self):
"""Verify SmartDecisionMakerBlock is filtered from search results."""
session = make_session(user_id=_TEST_USER_ID)
smart_decision_id = "3b191d9f-356f-482d-8238-ba04b6d18381"
search_results = [
{"content_id": smart_decision_id, "score": 0.9},
{"content_id": "normal-block-id", "score": 0.8},
]
# SmartDecisionMakerBlock has STANDARD type but is excluded by ID
smart_block = make_mock_block(
smart_decision_id, "Smart Decision Maker", BlockType.STANDARD
)
normal_block = make_mock_block(
"normal-block-id", "Normal Block", BlockType.STANDARD
)
def mock_get_block(block_id):
return {
smart_decision_id: smart_block,
"normal-block-id": normal_block,
}.get(block_id)
with patch(
"backend.api.features.chat.tools.find_block.unified_hybrid_search",
new_callable=AsyncMock,
return_value=(search_results, 2),
):
with patch(
"backend.api.features.chat.tools.find_block.get_block",
side_effect=mock_get_block,
):
tool = FindBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID, session=session, query="decision"
)
# Should only return normal block, not SmartDecisionMakerBlock
assert isinstance(response, BlockListResponse)
assert len(response.blocks) == 1
assert response.blocks[0].id == "normal-block-id"

View File

@@ -1,29 +0,0 @@
"""Shared helpers for chat tools."""
from typing import Any
def get_inputs_from_schema(
input_schema: dict[str, Any],
exclude_fields: set[str] | None = None,
) -> list[dict[str, Any]]:
"""Extract input field info from JSON schema."""
if not isinstance(input_schema, dict):
return []
exclude = exclude_fields or set()
properties = input_schema.get("properties", {})
required = set(input_schema.get("required", []))
return [
{
"name": name,
"title": schema.get("title", name),
"type": schema.get("type", "string"),
"description": schema.get("description", ""),
"required": name in required,
"default": schema.get("default"),
}
for name, schema in properties.items()
if name not in exclude
]
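A small worked example of what this helper returns, using an illustrative schema:

# Illustrative input/output for get_inputs_from_schema:
schema = {
    "properties": {
        "topic": {"title": "Topic", "type": "string", "description": "What to research"},
        "depth": {"type": "integer", "default": 1},
    },
    "required": ["topic"],
}
inputs = get_inputs_from_schema(schema)
# inputs[0] == {"name": "topic", "title": "Topic", "type": "string",
#               "description": "What to research", "required": True, "default": None}
# inputs[1] == {"name": "depth", "title": "depth", "type": "integer",
#               "description": "", "required": False, "default": 1}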

View File

@@ -24,7 +24,6 @@ from backend.util.timezone_utils import (
) )
from .base import BaseTool from .base import BaseTool
from .helpers import get_inputs_from_schema
from .models import ( from .models import (
AgentDetails, AgentDetails,
AgentDetailsResponse, AgentDetailsResponse,
@@ -262,7 +261,7 @@ class RunAgentTool(BaseTool):
), ),
requirements={ requirements={
"credentials": requirements_creds_list, "credentials": requirements_creds_list,
"inputs": get_inputs_from_schema(graph.input_schema), "inputs": self._get_inputs_list(graph.input_schema),
"execution_modes": self._get_execution_modes(graph), "execution_modes": self._get_execution_modes(graph),
}, },
), ),
@@ -370,6 +369,22 @@ class RunAgentTool(BaseTool):
session_id=session_id, session_id=session_id,
) )
def _get_inputs_list(self, input_schema: dict[str, Any]) -> list[dict[str, Any]]:
"""Extract inputs list from schema."""
inputs_list = []
if isinstance(input_schema, dict) and "properties" in input_schema:
for field_name, field_schema in input_schema["properties"].items():
inputs_list.append(
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in input_schema.get("required", []),
}
)
return inputs_list
def _get_execution_modes(self, graph: GraphModel) -> list[str]: def _get_execution_modes(self, graph: GraphModel) -> list[str]:
"""Get available execution modes for the graph.""" """Get available execution modes for the graph."""
trigger_info = graph.trigger_setup_info trigger_info = graph.trigger_setup_info
@@ -383,7 +398,7 @@ class RunAgentTool(BaseTool):
suffix: str, suffix: str,
) -> str: ) -> str:
"""Build a message describing available inputs for an agent.""" """Build a message describing available inputs for an agent."""
inputs_list = get_inputs_from_schema(graph.input_schema) inputs_list = self._get_inputs_list(graph.input_schema)
required_names = [i["name"] for i in inputs_list if i["required"]] required_names = [i["name"] for i in inputs_list if i["required"]]
optional_names = [i["name"] for i in inputs_list if not i["required"]] optional_names = [i["name"] for i in inputs_list if not i["required"]]

View File

@@ -8,19 +8,14 @@ from typing import Any
from pydantic_core import PydanticUndefined

from backend.api.features.chat.model import ChatSession
-from backend.api.features.chat.tools.find_block import (
-    COPILOT_EXCLUDED_BLOCK_IDS,
-    COPILOT_EXCLUDED_BLOCK_TYPES,
-)
-from backend.data.block import AnyBlockSchema, get_block
+from backend.data.block import get_block
from backend.data.execution import ExecutionContext
-from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
+from backend.data.model import CredentialsMetaInput
from backend.data.workspace import get_or_create_workspace
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import BlockError

from .base import BaseTool
-from .helpers import get_inputs_from_schema
from .models import (
    BlockOutputResponse,
    ErrorResponse,
@@ -29,10 +24,7 @@ from .models import (
    ToolResponseBase,
    UserReadiness,
)
-from .utils import (
-    build_missing_credentials_from_field_info,
-    match_credentials_to_requirements,
-)
+from .utils import build_missing_credentials_from_field_info

logger = logging.getLogger(__name__)
@@ -81,6 +73,91 @@ class RunBlockTool(BaseTool):
    def requires_auth(self) -> bool:
        return True
async def _check_block_credentials(
self,
user_id: str,
block: Any,
input_data: dict[str, Any] | None = None,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Check if user has required credentials for a block.
Args:
user_id: User ID
block: Block to check credentials for
input_data: Input data for the block (used to determine provider via discriminator)
Returns:
tuple[matched_credentials, missing_credentials]
"""
matched_credentials: dict[str, CredentialsMetaInput] = {}
missing_credentials: list[CredentialsMetaInput] = []
input_data = input_data or {}
# Get credential field info from block's input schema
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return matched_credentials, missing_credentials
# Get user's available credentials
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
for field_name, field_info in credentials_fields_info.items():
effective_field_info = field_info
if field_info.discriminator and field_info.discriminator_mapping:
# Get discriminator from input, falling back to schema default
discriminator_value = input_data.get(field_info.discriminator)
if discriminator_value is None:
field = block.input_schema.model_fields.get(
field_info.discriminator
)
if field and field.default is not PydanticUndefined:
discriminator_value = field.default
if (
discriminator_value
and discriminator_value in field_info.discriminator_mapping
):
effective_field_info = field_info.discriminate(discriminator_value)
logger.debug(
f"Discriminated provider for {field_name}: "
f"{discriminator_value} -> {effective_field_info.provider}"
)
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in effective_field_info.provider
and cred.type in effective_field_info.supported_types
),
None,
)
if matching_cred:
matched_credentials[field_name] = CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
else:
# Create a placeholder for the missing credential
provider = next(iter(effective_field_info.provider), "unknown")
cred_type = next(iter(effective_field_info.supported_types), "api_key")
missing_credentials.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched_credentials, missing_credentials
async def _execute( async def _execute(
self, self,
user_id: str | None, user_id: str | None,
@@ -135,24 +212,11 @@ class RunBlockTool(BaseTool):
            session_id=session_id,
        )

-        # Check if block is excluded from CoPilot (graph-only blocks)
-        if (
-            block.block_type in COPILOT_EXCLUDED_BLOCK_TYPES
-            or block.id in COPILOT_EXCLUDED_BLOCK_IDS
-        ):
-            return ErrorResponse(
-                message=(
-                    f"Block '{block.name}' cannot be run directly in CoPilot. "
-                    "This block is designed for use within graphs only."
-                ),
-                session_id=session_id,
-            )
-
        logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}")

        creds_manager = IntegrationCredentialsManager()
-        matched_credentials, missing_credentials = (
-            await self._resolve_block_credentials(user_id, block, input_data)
-        )
+        matched_credentials, missing_credentials = await self._check_block_credentials(
+            user_id, block, input_data
+        )

        if missing_credentials:
@@ -281,75 +345,29 @@ class RunBlockTool(BaseTool):
            session_id=session_id,
        )

-    async def _resolve_block_credentials(
-        self,
-        user_id: str,
-        block: AnyBlockSchema,
-        input_data: dict[str, Any] | None = None,
-    ) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
-        """
-        Resolve credentials for a block by matching user's available credentials.
-
-        Args:
-            user_id: User ID
-            block: Block to resolve credentials for
-            input_data: Input data for the block (used to determine provider via discriminator)
-
-        Returns:
-            tuple of (matched_credentials, missing_credentials) - matched credentials
-            are used for block execution, missing ones indicate setup requirements.
-        """
-        input_data = input_data or {}
-        requirements = self._resolve_discriminated_credentials(block, input_data)
-        if not requirements:
-            return {}, []
-        return await match_credentials_to_requirements(user_id, requirements)
-
-    def _get_inputs_list(self, block: AnyBlockSchema) -> list[dict[str, Any]]:
+    def _get_inputs_list(self, block: Any) -> list[dict[str, Any]]:
        """Extract non-credential inputs from block schema."""
+        inputs_list = []
        schema = block.input_schema.jsonschema()
+        properties = schema.get("properties", {})
+        required_fields = set(schema.get("required", []))
+        # Get credential field names to exclude
        credentials_fields = set(block.input_schema.get_credentials_fields().keys())
-        return get_inputs_from_schema(schema, exclude_fields=credentials_fields)
-
-    def _resolve_discriminated_credentials(
-        self,
-        block: AnyBlockSchema,
-        input_data: dict[str, Any],
-    ) -> dict[str, CredentialsFieldInfo]:
-        """Resolve credential requirements, applying discriminator logic where needed."""
-        credentials_fields_info = block.input_schema.get_credentials_fields_info()
-        if not credentials_fields_info:
-            return {}
-
-        resolved: dict[str, CredentialsFieldInfo] = {}
-
-        for field_name, field_info in credentials_fields_info.items():
-            effective_field_info = field_info
-
-            if field_info.discriminator and field_info.discriminator_mapping:
-                discriminator_value = input_data.get(field_info.discriminator)
-                if discriminator_value is None:
-                    field = block.input_schema.model_fields.get(
-                        field_info.discriminator
-                    )
-                    if field and field.default is not PydanticUndefined:
-                        discriminator_value = field.default
-                if (
-                    discriminator_value
-                    and discriminator_value in field_info.discriminator_mapping
-                ):
-                    effective_field_info = field_info.discriminate(discriminator_value)
-                    # For host-scoped credentials, add the discriminator value
-                    # (e.g., URL) so _credential_is_for_host can match it
-                    effective_field_info.discriminator_values.add(discriminator_value)
-                    logger.debug(
-                        f"Discriminated provider for {field_name}: "
-                        f"{discriminator_value} -> {effective_field_info.provider}"
-                    )
-            resolved[field_name] = effective_field_info
-
-        return resolved
+        for field_name, field_schema in properties.items():
+            # Skip credential fields
+            if field_name in credentials_fields:
+                continue
+            inputs_list.append(
+                {
+                    "name": field_name,
+                    "title": field_schema.get("title", field_name),
+                    "type": field_schema.get("type", "string"),
+                    "description": field_schema.get("description", ""),
+                    "required": field_name in required_fields,
+                }
+            )
+        return inputs_list
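Note: the discriminator handling above is hard to follow in diff form. A self-contained sketch of the fallback order it implements (input value first, then the schema field's default), with illustrative names rather than the real CredentialsFieldInfo API:

    from typing import Any

    def resolve_provider(
        discriminator: str,
        mapping: dict[str, str],            # discriminator value -> provider
        input_data: dict[str, Any],
        schema_defaults: dict[str, Any],
    ) -> str | None:
        value = input_data.get(discriminator)
        if value is None:
            value = schema_defaults.get(discriminator)  # fall back to the schema default
        return mapping.get(value) if value else None

    # Provider comes from the default when the input omits the discriminator:
    assert resolve_provider("model", {"gpt-4": "openai"}, {}, {"model": "gpt-4"}) == "openai"
    # An unmapped value resolves to no provider:
    assert resolve_provider("model", {"gpt-4": "openai"}, {"model": "other"}, {}) is None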

View File

@@ -1,106 +0,0 @@
"""Tests for block execution guards in RunBlockTool."""
from unittest.mock import MagicMock, patch
import pytest
from backend.api.features.chat.tools.models import ErrorResponse
from backend.api.features.chat.tools.run_block import RunBlockTool
from backend.data.block import BlockType
from ._test_data import make_session
_TEST_USER_ID = "test-user-run-block"
def make_mock_block(
block_id: str, name: str, block_type: BlockType, disabled: bool = False
):
"""Create a mock block for testing."""
mock = MagicMock()
mock.id = block_id
mock.name = name
mock.block_type = block_type
mock.disabled = disabled
mock.input_schema = MagicMock()
mock.input_schema.jsonschema.return_value = {"properties": {}, "required": []}
mock.input_schema.get_credentials_fields_info.return_value = []
return mock
class TestRunBlockFiltering:
"""Tests for block execution guards in RunBlockTool."""
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_type_returns_error(self):
"""Attempting to execute a block with excluded BlockType returns error."""
session = make_session(user_id=_TEST_USER_ID)
input_block = make_mock_block("input-block-id", "Input Block", BlockType.INPUT)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=input_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id="input-block-id",
input_data={},
)
assert isinstance(response, ErrorResponse)
assert "cannot be run directly in CoPilot" in response.message
assert "designed for use within graphs only" in response.message
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_id_returns_error(self):
"""Attempting to execute SmartDecisionMakerBlock returns error."""
session = make_session(user_id=_TEST_USER_ID)
smart_decision_id = "3b191d9f-356f-482d-8238-ba04b6d18381"
smart_block = make_mock_block(
smart_decision_id, "Smart Decision Maker", BlockType.STANDARD
)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=smart_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id=smart_decision_id,
input_data={},
)
assert isinstance(response, ErrorResponse)
assert "cannot be run directly in CoPilot" in response.message
@pytest.mark.asyncio(loop_scope="session")
async def test_non_excluded_block_passes_guard(self):
"""Non-excluded blocks pass the filtering guard (may fail later for other reasons)."""
session = make_session(user_id=_TEST_USER_ID)
standard_block = make_mock_block(
"standard-id", "HTTP Request", BlockType.STANDARD
)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=standard_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id="standard-id",
input_data={},
)
# Should NOT be an ErrorResponse about CoPilot exclusion
# (may be other errors like missing credentials, but not the exclusion guard)
if isinstance(response, ErrorResponse):
assert "cannot be run directly in CoPilot" not in response.message

View File

@@ -8,7 +8,6 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data.graph import GraphModel
from backend.data.model import (
-    Credentials,
    CredentialsFieldInfo,
    CredentialsMetaInput,
    HostScopedCredentials,
@@ -224,99 +223,6 @@ async def get_or_create_library_agent(
    return library_agents[0]
async def match_credentials_to_requirements(
user_id: str,
requirements: dict[str, CredentialsFieldInfo],
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Match user's credentials against a dictionary of credential requirements.
This is the core matching logic shared by both graph and block credential matching.
"""
matched: dict[str, CredentialsMetaInput] = {}
missing: list[CredentialsMetaInput] = []
if not requirements:
return matched, missing
available_creds = await get_user_credentials(user_id)
for field_name, field_info in requirements.items():
matching_cred = find_matching_credential(available_creds, field_info)
if matching_cred:
try:
matched[field_name] = create_credential_meta_from_match(matching_cred)
except Exception as e:
logger.error(
f"Failed to create CredentialsMetaInput for field '{field_name}': "
f"provider={matching_cred.provider}, type={matching_cred.type}, "
f"credential_id={matching_cred.id}",
exc_info=True,
)
provider = next(iter(field_info.provider), "unknown")
cred_type = next(iter(field_info.supported_types), "api_key")
missing.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=f"{field_name} (validation failed: {e})",
)
)
else:
provider = next(iter(field_info.provider), "unknown")
cred_type = next(iter(field_info.supported_types), "api_key")
missing.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched, missing
async def get_user_credentials(user_id: str) -> list[Credentials]:
"""Get all available credentials for a user."""
creds_manager = IntegrationCredentialsManager()
return await creds_manager.store.get_all_creds(user_id)
def find_matching_credential(
available_creds: list[Credentials],
field_info: CredentialsFieldInfo,
) -> Credentials | None:
"""Find a credential that matches the required provider, type, scopes, and host."""
for cred in available_creds:
if cred.provider not in field_info.provider:
continue
if cred.type not in field_info.supported_types:
continue
if cred.type == "oauth2" and not _credential_has_required_scopes(
cred, field_info
):
continue
if cred.type == "host_scoped" and not _credential_is_for_host(cred, field_info):
continue
return cred
return None
def create_credential_meta_from_match(
matching_cred: Credentials,
) -> CredentialsMetaInput:
"""Create a CredentialsMetaInput from a matched credential."""
return CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
async def match_user_credentials_to_graph(
    user_id: str,
    graph: GraphModel,
@@ -425,6 +331,8 @@ def _credential_has_required_scopes(
    # If no scopes are required, any credential matches
    if not requirements.required_scopes:
        return True
+
+    # Check that credential scopes are a superset of required scopes
    return set(credential.scopes).issuperset(requirements.required_scopes)
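Note: the added comment documents a superset check. A tiny illustration of the rule, independent of the codebase's credential types:

    def has_required_scopes(granted: list[str], required: list[str]) -> bool:
        # No required scopes means any credential matches.
        if not required:
            return True
        return set(granted).issuperset(required)

    assert has_required_scopes(["repo", "read:user"], ["repo"])
    assert not has_required_scopes(["read:user"], ["repo", "read:user"])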

View File

@@ -22,6 +22,7 @@ from backend.data.notifications import (
    AgentApprovalData,
    AgentRejectionData,
    NotificationEventModel,
+    WaitlistLaunchData,
)
from backend.notifications.notifications import queue_notification_async
from backend.util.exceptions import DatabaseError
@@ -1730,6 +1731,29 @@ async def review_store_submission(
            # Don't fail the review process if email sending fails
            pass
# Notify waitlist users if this is an approval and has a linked waitlist
if is_approved and submission.StoreListing:
try:
frontend_base_url = (
settings.config.frontend_base_url
or settings.config.platform_base_url
)
store_agent = (
await prisma.models.StoreAgent.prisma().find_first_or_raise(
where={"storeListingVersionId": submission.id}
)
)
creator_username = store_agent.creator_username or "unknown"
store_url = f"{frontend_base_url}/marketplace/agent/{creator_username}/{store_agent.slug}"
await notify_waitlist_users_on_launch(
store_listing_id=submission.StoreListing.id,
agent_name=submission.name,
store_url=store_url,
)
except Exception as e:
logger.error(f"Failed to notify waitlist users on agent approval: {e}")
# Don't fail the approval process
    # Convert to Pydantic model for consistency
    return store_model.StoreSubmission(
        listing_id=(submission.StoreListing.id if submission.StoreListing else ""),
@@ -1977,3 +2001,557 @@ async def get_agent_as_admin(
    )
    return graph
def _waitlist_to_store_entry(
waitlist: prisma.models.WaitlistEntry,
) -> store_model.StoreWaitlistEntry:
"""Convert a WaitlistEntry to StoreWaitlistEntry for public display."""
return store_model.StoreWaitlistEntry(
waitlistId=waitlist.id,
slug=waitlist.slug,
name=waitlist.name,
subHeading=waitlist.subHeading,
videoUrl=waitlist.videoUrl,
agentOutputDemoUrl=waitlist.agentOutputDemoUrl,
imageUrls=waitlist.imageUrls or [],
description=waitlist.description,
categories=waitlist.categories or [],
)
async def get_waitlist() -> list[store_model.StoreWaitlistEntry]:
"""Get all active waitlists for public display."""
try:
waitlists = await prisma.models.WaitlistEntry.prisma().find_many(
where=prisma.types.WaitlistEntryWhereInput(isDeleted=False),
)
# Filter out closed/done waitlists and sort by votes (descending)
excluded_statuses = {
prisma.enums.WaitlistExternalStatus.CANCELED,
prisma.enums.WaitlistExternalStatus.DONE,
}
active_waitlists = [w for w in waitlists if w.status not in excluded_statuses]
sorted_list = sorted(active_waitlists, key=lambda x: x.votes, reverse=True)
return [_waitlist_to_store_entry(w) for w in sorted_list]
except Exception as e:
logger.error(f"Error fetching waitlists: {e}")
raise DatabaseError("Failed to fetch waitlists") from e
async def get_user_waitlist_memberships(user_id: str) -> list[str]:
"""Get all waitlist IDs that a user has joined."""
try:
user = await prisma.models.User.prisma().find_unique(
where={"id": user_id},
include={"joinedWaitlists": True},
)
if not user or not user.joinedWaitlists:
return []
return [w.id for w in user.joinedWaitlists]
except Exception as e:
logger.error(f"Error fetching user waitlist memberships: {e}")
raise DatabaseError("Failed to fetch waitlist memberships") from e
async def add_user_to_waitlist(
waitlist_id: str, user_id: str | None, email: str | None
) -> store_model.StoreWaitlistEntry:
"""
Add a user to a waitlist.
For logged-in users: connects via joinedUsers relation
For anonymous users: adds email to unaffiliatedEmailUsers array
"""
if not user_id and not email:
raise ValueError("Either user_id or email must be provided")
try:
# Find the waitlist
waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id},
include={"joinedUsers": True},
)
if not waitlist:
raise ValueError(f"Waitlist {waitlist_id} not found")
if waitlist.isDeleted:
raise ValueError(f"Waitlist {waitlist_id} is no longer available")
if waitlist.status in [
prisma.enums.WaitlistExternalStatus.CANCELED,
prisma.enums.WaitlistExternalStatus.DONE,
]:
raise ValueError(f"Waitlist {waitlist_id} is closed")
if user_id:
# Check if user already joined
joined_user_ids = [u.id for u in (waitlist.joinedUsers or [])]
if user_id in joined_user_ids:
# Already joined - return waitlist info
logger.debug(f"User {user_id} already joined waitlist {waitlist_id}")
else:
# Connect user to waitlist
await prisma.models.WaitlistEntry.prisma().update(
where={"id": waitlist_id},
data={"joinedUsers": {"connect": [{"id": user_id}]}},
)
logger.info(f"User {user_id} joined waitlist {waitlist_id}")
# If user was previously in email list, remove them
# Use transaction to prevent race conditions
if email:
async with transaction() as tx:
current_waitlist = await tx.waitlistentry.find_unique(
where={"id": waitlist_id}
)
if current_waitlist and email in (
current_waitlist.unaffiliatedEmailUsers or []
):
updated_emails: list[str] = [
e
for e in (current_waitlist.unaffiliatedEmailUsers or [])
if e != email
]
await tx.waitlistentry.update(
where={"id": waitlist_id},
data={"unaffiliatedEmailUsers": updated_emails},
)
elif email:
# Add email to unaffiliated list if not already present
# Use transaction to prevent race conditions with concurrent signups
async with transaction() as tx:
# Re-fetch within transaction to get latest state
current_waitlist = await tx.waitlistentry.find_unique(
where={"id": waitlist_id}
)
if current_waitlist:
current_emails: list[str] = list(
current_waitlist.unaffiliatedEmailUsers or []
)
if email not in current_emails:
current_emails.append(email)
await tx.waitlistentry.update(
where={"id": waitlist_id},
data={"unaffiliatedEmailUsers": current_emails},
)
# Mask email for logging to avoid PII exposure
parts = email.split("@") if "@" in email else []
local = parts[0] if len(parts) > 0 else ""
domain = parts[1] if len(parts) > 1 else "unknown"
masked = (local[0] if local else "?") + "***@" + domain
logger.info(f"Email {masked} added to waitlist {waitlist_id}")
else:
logger.debug(f"Email already exists on waitlist {waitlist_id}")
# Re-fetch to return updated data
updated_waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id}
)
return _waitlist_to_store_entry(updated_waitlist or waitlist)
except ValueError:
raise
except Exception as e:
logger.error(f"Error adding user to waitlist: {e}")
raise DatabaseError("Failed to add user to waitlist") from e
# ============== Admin Waitlist Functions ==============
def _waitlist_to_admin_response(
waitlist: prisma.models.WaitlistEntry,
) -> store_model.WaitlistAdminResponse:
"""Convert a WaitlistEntry to WaitlistAdminResponse."""
joined_count = len(waitlist.joinedUsers) if waitlist.joinedUsers else 0
email_count = (
len(waitlist.unaffiliatedEmailUsers) if waitlist.unaffiliatedEmailUsers else 0
)
return store_model.WaitlistAdminResponse(
id=waitlist.id,
createdAt=waitlist.createdAt.isoformat() if waitlist.createdAt else "",
updatedAt=waitlist.updatedAt.isoformat() if waitlist.updatedAt else "",
slug=waitlist.slug,
name=waitlist.name,
subHeading=waitlist.subHeading,
description=waitlist.description,
categories=waitlist.categories or [],
imageUrls=waitlist.imageUrls or [],
videoUrl=waitlist.videoUrl,
agentOutputDemoUrl=waitlist.agentOutputDemoUrl,
status=waitlist.status or prisma.enums.WaitlistExternalStatus.NOT_STARTED,
votes=waitlist.votes,
signupCount=joined_count + email_count,
storeListingId=waitlist.storeListingId,
owningUserId=waitlist.owningUserId,
)
async def create_waitlist_admin(
admin_user_id: str,
data: store_model.WaitlistCreateRequest,
) -> store_model.WaitlistAdminResponse:
"""Create a new waitlist (admin only)."""
logger.info(f"Admin {admin_user_id} creating waitlist: {data.name}")
try:
waitlist = await prisma.models.WaitlistEntry.prisma().create(
data=prisma.types.WaitlistEntryCreateInput(
name=data.name,
slug=data.slug,
subHeading=data.subHeading,
description=data.description,
categories=data.categories,
imageUrls=data.imageUrls,
videoUrl=data.videoUrl,
agentOutputDemoUrl=data.agentOutputDemoUrl,
owningUserId=admin_user_id,
status=prisma.enums.WaitlistExternalStatus.NOT_STARTED,
),
include={"joinedUsers": True},
)
return _waitlist_to_admin_response(waitlist)
except Exception as e:
logger.error(f"Error creating waitlist: {e}")
raise DatabaseError("Failed to create waitlist") from e
async def get_waitlists_admin() -> store_model.WaitlistAdminListResponse:
"""Get all waitlists with admin details."""
try:
waitlists = await prisma.models.WaitlistEntry.prisma().find_many(
where=prisma.types.WaitlistEntryWhereInput(isDeleted=False),
include={"joinedUsers": True},
order={"createdAt": "desc"},
)
return store_model.WaitlistAdminListResponse(
waitlists=[_waitlist_to_admin_response(w) for w in waitlists],
totalCount=len(waitlists),
)
except Exception as e:
logger.error(f"Error fetching waitlists for admin: {e}")
raise DatabaseError("Failed to fetch waitlists") from e
async def get_waitlist_admin(
waitlist_id: str,
) -> store_model.WaitlistAdminResponse:
"""Get a single waitlist with admin details."""
try:
waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id},
include={"joinedUsers": True},
)
if not waitlist:
raise ValueError(f"Waitlist {waitlist_id} not found")
if waitlist.isDeleted:
raise ValueError(f"Waitlist {waitlist_id} has been deleted")
return _waitlist_to_admin_response(waitlist)
except ValueError:
raise
except Exception as e:
logger.error(f"Error fetching waitlist {waitlist_id}: {e}")
raise DatabaseError("Failed to fetch waitlist") from e
async def update_waitlist_admin(
waitlist_id: str,
data: store_model.WaitlistUpdateRequest,
) -> store_model.WaitlistAdminResponse:
"""Update a waitlist (admin only)."""
logger.info(f"Updating waitlist {waitlist_id}")
try:
# Check if waitlist exists first
existing = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id}
)
if not existing:
raise ValueError(f"Waitlist {waitlist_id} not found")
if existing.isDeleted:
raise ValueError(f"Waitlist {waitlist_id} has been deleted")
# Build update data from explicitly provided fields
# Use model_fields_set to allow clearing fields by setting them to None
field_mappings = {
"name": data.name,
"slug": data.slug,
"subHeading": data.subHeading,
"description": data.description,
"categories": data.categories,
"imageUrls": data.imageUrls,
"videoUrl": data.videoUrl,
"agentOutputDemoUrl": data.agentOutputDemoUrl,
"storeListingId": data.storeListingId,
}
update_data: dict[str, Any] = {
k: v for k, v in field_mappings.items() if k in data.model_fields_set
}
# Add status if provided (already validated as enum by Pydantic)
if "status" in data.model_fields_set and data.status is not None:
update_data["status"] = data.status
if not update_data:
# No updates, just return current data
return await get_waitlist_admin(waitlist_id)
waitlist = await prisma.models.WaitlistEntry.prisma().update(
where={"id": waitlist_id},
data=prisma.types.WaitlistEntryUpdateInput(**update_data),
include={"joinedUsers": True},
)
# We already verified existence above, so this should never be None
assert waitlist is not None
return _waitlist_to_admin_response(waitlist)
except ValueError:
raise
except Exception as e:
logger.error(f"Error updating waitlist {waitlist_id}: {e}")
raise DatabaseError("Failed to update waitlist") from e
async def delete_waitlist_admin(waitlist_id: str) -> None:
"""Soft delete a waitlist (admin only)."""
logger.info(f"Soft deleting waitlist {waitlist_id}")
try:
# Check if waitlist exists first
waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id},
)
if not waitlist:
raise ValueError(f"Waitlist {waitlist_id} not found")
if waitlist.isDeleted:
raise ValueError(f"Waitlist {waitlist_id} has already been deleted")
await prisma.models.WaitlistEntry.prisma().update(
where={"id": waitlist_id},
data={"isDeleted": True},
)
except ValueError:
raise
except Exception as e:
logger.error(f"Error deleting waitlist {waitlist_id}: {e}")
raise DatabaseError("Failed to delete waitlist") from e
async def get_waitlist_signups_admin(
waitlist_id: str,
) -> store_model.WaitlistSignupListResponse:
"""Get all signups for a waitlist (admin only)."""
try:
waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id},
include={"joinedUsers": True},
)
if not waitlist:
raise ValueError(f"Waitlist {waitlist_id} not found")
signups: list[store_model.WaitlistSignup] = []
# Add user signups
for user in waitlist.joinedUsers or []:
signups.append(
store_model.WaitlistSignup(
type="user",
userId=user.id,
email=user.email,
username=user.name,
)
)
# Add email signups
for email in waitlist.unaffiliatedEmailUsers or []:
signups.append(
store_model.WaitlistSignup(
type="email",
email=email,
)
)
return store_model.WaitlistSignupListResponse(
waitlistId=waitlist_id,
signups=signups,
totalCount=len(signups),
)
except ValueError:
raise
except Exception as e:
logger.error(f"Error fetching signups for waitlist {waitlist_id}: {e}")
raise DatabaseError("Failed to fetch waitlist signups") from e
async def link_waitlist_to_listing_admin(
waitlist_id: str,
store_listing_id: str,
) -> store_model.WaitlistAdminResponse:
"""Link a waitlist to a store listing (admin only)."""
logger.info(f"Linking waitlist {waitlist_id} to listing {store_listing_id}")
try:
# Verify the waitlist exists
waitlist = await prisma.models.WaitlistEntry.prisma().find_unique(
where={"id": waitlist_id}
)
if not waitlist:
raise ValueError(f"Waitlist {waitlist_id} not found")
if waitlist.isDeleted:
raise ValueError(f"Waitlist {waitlist_id} has been deleted")
# Verify the store listing exists
listing = await prisma.models.StoreListing.prisma().find_unique(
where={"id": store_listing_id}
)
if not listing:
raise ValueError(f"Store listing {store_listing_id} not found")
updated_waitlist = await prisma.models.WaitlistEntry.prisma().update(
where={"id": waitlist_id},
data={"StoreListing": {"connect": {"id": store_listing_id}}},
include={"joinedUsers": True},
)
# We already verified existence above, so this should never be None
assert updated_waitlist is not None
return _waitlist_to_admin_response(updated_waitlist)
except ValueError:
raise
except Exception as e:
logger.error(f"Error linking waitlist to listing: {e}")
raise DatabaseError("Failed to link waitlist to listing") from e
async def notify_waitlist_users_on_launch(
store_listing_id: str,
agent_name: str,
store_url: str,
) -> int:
"""
Notify all users on waitlists linked to a store listing when the agent is launched.
Args:
store_listing_id: The ID of the store listing that was approved
agent_name: The name of the approved agent
store_url: The URL to the agent's store page
Returns:
The number of notifications sent
"""
logger.info(f"Notifying waitlist users for store listing {store_listing_id}")
try:
# Find all active waitlists linked to this store listing
# Exclude DONE and CANCELED to prevent duplicate notifications on re-approval
waitlists = await prisma.models.WaitlistEntry.prisma().find_many(
where={
"storeListingId": store_listing_id,
"isDeleted": False,
"status": {
"not_in": [
prisma.enums.WaitlistExternalStatus.DONE,
prisma.enums.WaitlistExternalStatus.CANCELED,
]
},
},
include={"joinedUsers": True},
)
if not waitlists:
logger.info(
f"No active waitlists found for store listing {store_listing_id}"
)
return 0
notification_count = 0
launched_at = datetime.now(tz=timezone.utc)
for waitlist in waitlists:
# Track notification results for this waitlist
users_to_notify = waitlist.joinedUsers or []
failed_user_ids: list[str] = []
# Notify registered users
for user in users_to_notify:
try:
notification_data = WaitlistLaunchData(
agent_name=agent_name,
waitlist_name=waitlist.name,
store_url=store_url,
launched_at=launched_at,
)
notification_event = NotificationEventModel[WaitlistLaunchData](
user_id=user.id,
type=prisma.enums.NotificationType.WAITLIST_LAUNCH,
data=notification_data,
)
await queue_notification_async(notification_event)
notification_count += 1
except Exception as e:
logger.error(
f"Failed to send waitlist launch notification to user {user.id}: {e}"
)
failed_user_ids.append(user.id)
# Note: For unaffiliated email users, you would need to send emails directly
# since they don't have user IDs for the notification system.
# This could be done via a separate email service.
# For now, we log these for potential manual follow-up or future implementation.
has_pending_email_users = bool(waitlist.unaffiliatedEmailUsers)
if has_pending_email_users:
logger.info(
f"Waitlist {waitlist.id} has {len(waitlist.unaffiliatedEmailUsers)} "
f"unaffiliated email users that need email notifications"
)
# Only mark waitlist as DONE if all registered user notifications succeeded
# AND there are no unaffiliated email users still waiting for notifications
if not failed_user_ids and not has_pending_email_users:
await prisma.models.WaitlistEntry.prisma().update(
where={"id": waitlist.id},
data={"status": prisma.enums.WaitlistExternalStatus.DONE},
)
logger.info(f"Updated waitlist {waitlist.id} status to DONE")
elif failed_user_ids:
logger.warning(
f"Waitlist {waitlist.id} not marked as DONE due to "
f"{len(failed_user_ids)} failed notifications"
)
elif has_pending_email_users:
logger.warning(
f"Waitlist {waitlist.id} not marked as DONE due to "
f"{len(waitlist.unaffiliatedEmailUsers)} pending email-only users"
)
logger.info(
f"Sent {notification_count} waitlist launch notifications for store listing {store_listing_id}"
)
return notification_count
except Exception as e:
logger.error(
f"Error notifying waitlist users for store listing {store_listing_id}: {e}"
)
# Don't raise - we don't want to fail the approval process
return 0
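Note: the email-masking rule used in add_user_to_waitlist above (first character of the local part plus the domain) also covers malformed addresses, per the fix commit. A standalone sketch of that exact logic:

    def mask_email(email: str) -> str:
        parts = email.split("@") if "@" in email else []
        local = parts[0] if len(parts) > 0 else ""
        domain = parts[1] if len(parts) > 1 else "unknown"
        return (local[0] if local else "?") + "***@" + domain

    assert mask_email("alice@example.com") == "a***@example.com"
    assert mask_email("@example.com") == "?***@example.com"  # empty local part, no IndexError
    assert mask_email("not-an-email") == "?***@unknown"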

View File

@@ -8,7 +8,6 @@ Includes BM25 reranking for improved lexical relevance.
import logging
import re
-import time
from dataclasses import dataclass
from typing import Any, Literal
@@ -363,11 +362,7 @@ async def unified_hybrid_search(
        LIMIT {limit_param} OFFSET {offset_param}
    """

-    try:
-        results = await query_raw_with_schema(sql_query, *params)
-    except Exception as e:
-        await _log_vector_error_diagnostics(e)
-        raise
+    results = await query_raw_with_schema(sql_query, *params)

    total = results[0]["total_count"] if results else 0

    # Apply BM25 reranking
@@ -691,11 +686,7 @@ async def hybrid_search(
        LIMIT {limit_param} OFFSET {offset_param}
    """

-    try:
-        results = await query_raw_with_schema(sql_query, *params)
-    except Exception as e:
-        await _log_vector_error_diagnostics(e)
-        raise
+    results = await query_raw_with_schema(sql_query, *params)

    total = results[0]["total_count"] if results else 0
@@ -727,87 +718,6 @@ async def hybrid_search_simple(
    return await hybrid_search(query=query, page=page, page_size=page_size)
# ============================================================================
# Diagnostics
# ============================================================================
# Rate limit: only log vector error diagnostics once per this interval
_VECTOR_DIAG_INTERVAL_SECONDS = 60
_last_vector_diag_time: float = 0
async def _log_vector_error_diagnostics(error: Exception) -> None:
"""Log diagnostic info when 'type vector does not exist' error occurs.
Note: Diagnostic queries use query_raw_with_schema which may run on a different
pooled connection than the one that failed. Session-level search_path can differ,
so these diagnostics show cluster-wide state, not necessarily the failed session.
Includes rate limiting to avoid log spam - only logs once per minute.
Caller should re-raise the error after calling this function.
"""
global _last_vector_diag_time
# Check if this is the vector type error
error_str = str(error).lower()
if not (
"type" in error_str and "vector" in error_str and "does not exist" in error_str
):
return
# Rate limit: only log once per interval
now = time.time()
if now - _last_vector_diag_time < _VECTOR_DIAG_INTERVAL_SECONDS:
return
_last_vector_diag_time = now
try:
diagnostics: dict[str, object] = {}
try:
search_path_result = await query_raw_with_schema("SHOW search_path")
diagnostics["search_path"] = search_path_result
except Exception as e:
diagnostics["search_path"] = f"Error: {e}"
try:
schema_result = await query_raw_with_schema("SELECT current_schema()")
diagnostics["current_schema"] = schema_result
except Exception as e:
diagnostics["current_schema"] = f"Error: {e}"
try:
user_result = await query_raw_with_schema(
"SELECT current_user, session_user, current_database()"
)
diagnostics["user_info"] = user_result
except Exception as e:
diagnostics["user_info"] = f"Error: {e}"
try:
# Check pgvector extension installation (cluster-wide, stable info)
ext_result = await query_raw_with_schema(
"SELECT extname, extversion, nspname as schema "
"FROM pg_extension e "
"JOIN pg_namespace n ON e.extnamespace = n.oid "
"WHERE extname = 'vector'"
)
diagnostics["pgvector_extension"] = ext_result
except Exception as e:
diagnostics["pgvector_extension"] = f"Error: {e}"
logger.error(
f"Vector type error diagnostics:\n"
f" Error: {error}\n"
f" search_path: {diagnostics.get('search_path')}\n"
f" current_schema: {diagnostics.get('current_schema')}\n"
f" user_info: {diagnostics.get('user_info')}\n"
f" pgvector_extension: {diagnostics.get('pgvector_extension')}"
)
except Exception as diag_error:
logger.error(f"Failed to collect vector error diagnostics: {diag_error}")
# Backward compatibility alias - HybridSearchWeights maps to StoreAgentSearchWeights
# for existing code that expects the popularity parameter
HybridSearchWeights = StoreAgentSearchWeights
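Note: the removed diagnostics helper rate-limited itself with a module-level timestamp so a burst of failures only logs once per window. The pattern in isolation, assuming the same 60-second interval:

    import time

    _DIAG_INTERVAL_SECONDS = 60
    _last_diag_time: float = 0

    def should_emit_diagnostics() -> bool:
        global _last_diag_time
        now = time.time()
        if now - _last_diag_time < _DIAG_INTERVAL_SECONDS:
            return False  # emitted recently; skip to avoid log spam
        _last_diag_time = now
        return True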

View File

@@ -224,6 +224,102 @@ class ReviewSubmissionRequest(pydantic.BaseModel):
    internal_comments: str | None = None  # Private admin notes
class StoreWaitlistEntry(pydantic.BaseModel):
"""Public waitlist entry - no PII fields exposed."""
waitlistId: str
slug: str
# Content fields
name: str
subHeading: str
videoUrl: str | None = None
agentOutputDemoUrl: str | None = None
imageUrls: list[str]
description: str
categories: list[str]
class StoreWaitlistsAllResponse(pydantic.BaseModel):
listings: list[StoreWaitlistEntry]
# Admin Waitlist Models
class WaitlistCreateRequest(pydantic.BaseModel):
"""Request model for creating a new waitlist."""
name: str
slug: str
subHeading: str
description: str
categories: list[str] = []
imageUrls: list[str] = []
videoUrl: str | None = None
agentOutputDemoUrl: str | None = None
class WaitlistUpdateRequest(pydantic.BaseModel):
"""Request model for updating a waitlist."""
name: str | None = None
slug: str | None = None
subHeading: str | None = None
description: str | None = None
categories: list[str] | None = None
imageUrls: list[str] | None = None
videoUrl: str | None = None
agentOutputDemoUrl: str | None = None
status: prisma.enums.WaitlistExternalStatus | None = None
storeListingId: str | None = None # Link to a store listing
class WaitlistAdminResponse(pydantic.BaseModel):
"""Admin response model with full waitlist details including internal data."""
id: str
createdAt: str
updatedAt: str
slug: str
name: str
subHeading: str
description: str
categories: list[str]
imageUrls: list[str]
videoUrl: str | None = None
agentOutputDemoUrl: str | None = None
status: prisma.enums.WaitlistExternalStatus
votes: int
signupCount: int # Total count of joinedUsers + unaffiliatedEmailUsers
storeListingId: str | None = None
owningUserId: str
class WaitlistSignup(pydantic.BaseModel):
"""Individual signup entry for a waitlist."""
type: str # "user" or "email"
userId: str | None = None
email: str | None = None
username: str | None = None # For user signups
class WaitlistSignupListResponse(pydantic.BaseModel):
"""Response model for listing waitlist signups."""
waitlistId: str
signups: list[WaitlistSignup]
totalCount: int
class WaitlistAdminListResponse(pydantic.BaseModel):
"""Response model for listing all waitlists (admin view)."""
waitlists: list[WaitlistAdminResponse]
totalCount: int
class UnifiedSearchResult(pydantic.BaseModel):
    """A single result from unified hybrid search across all content types."""

View File

@@ -8,6 +8,7 @@ import autogpt_libs.auth
import fastapi
import fastapi.responses
import prisma.enums
+from autogpt_libs.auth.dependencies import get_optional_user_id

import backend.data.graph
import backend.util.json
@@ -81,6 +82,74 @@ async def update_or_create_profile(
    return updated_profile
##############################################
############## Waitlist Endpoints ############
##############################################
@router.get(
"/waitlist",
summary="Get the agent waitlist",
tags=["store", "public"],
response_model=store_model.StoreWaitlistsAllResponse,
)
async def get_waitlist():
"""
Get all active waitlists for public display.
"""
waitlists = await store_db.get_waitlist()
return store_model.StoreWaitlistsAllResponse(listings=waitlists)
@router.get(
"/waitlist/my-memberships",
summary="Get waitlist IDs the current user has joined",
tags=["store", "private"],
)
async def get_my_waitlist_memberships(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
) -> list[str]:
"""Returns list of waitlist IDs the authenticated user has joined."""
return await store_db.get_user_waitlist_memberships(user_id)
@router.post(
path="/waitlist/{waitlist_id}/join",
summary="Add self to the agent waitlist",
tags=["store", "public"],
response_model=store_model.StoreWaitlistEntry,
)
async def add_self_to_waitlist(
user_id: str | None = fastapi.Security(get_optional_user_id),
waitlist_id: str = fastapi.Path(..., description="The ID of the waitlist to join"),
email: str | None = fastapi.Body(
default=None, embed=True, description="Email address for unauthenticated users"
),
):
"""
Add the current user to the agent waitlist.
"""
if not user_id and not email:
raise fastapi.HTTPException(
status_code=400,
detail="Either user authentication or email address is required",
)
try:
waitlist_entry = await store_db.add_user_to_waitlist(
waitlist_id=waitlist_id, user_id=user_id, email=email
)
return waitlist_entry
except ValueError as e:
error_msg = str(e)
if "not found" in error_msg:
raise fastapi.HTTPException(status_code=404, detail="Waitlist not found")
# Waitlist exists but is closed or unavailable
raise fastapi.HTTPException(status_code=400, detail=error_msg)
except Exception:
raise fastapi.HTTPException(
status_code=500, detail="An error occurred while joining the waitlist"
)
##############################################
############### Agent Endpoints ##############
##############################################
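Note: because the email parameter of the join endpoint is declared with Body(..., embed=True), an anonymous signup sends a JSON object rather than a bare string. A hedged example call; the base URL and waitlist ID are placeholders, not values from this changeset:

    import httpx

    resp = httpx.post(
        "https://<backend-host>/api/store/waitlist/<waitlist-id>/join",
        json={"email": "fan@example.com"},  # embed=True -> {"email": ...}
    )
    print(resp.status_code, resp.json())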

View File

@@ -19,6 +19,7 @@ from prisma.errors import PrismaError
import backend.api.features.admin.credit_admin_routes
import backend.api.features.admin.execution_analytics_routes
import backend.api.features.admin.store_admin_routes
+import backend.api.features.admin.waitlist_admin_routes
import backend.api.features.builder
import backend.api.features.builder.routes
import backend.api.features.chat.routes as chat_routes
@@ -306,6 +307,11 @@
    tags=["v2", "admin"],
    prefix="/api/store",
)
+app.include_router(
+    backend.api.features.admin.waitlist_admin_routes.router,
+    tags=["v2", "admin"],
+    prefix="/api/store",
+)
app.include_router(
    backend.api.features.admin.credit_admin_routes.router,
    tags=["v2", "admin"],

View File

@@ -21,71 +21,43 @@ logger = logging.getLogger(__name__)
class HumanInTheLoopBlock(Block):
    """
-    Pauses execution and waits for human approval or rejection of the data.
+    This block pauses execution and waits for human approval or modification of the data.

-    When executed, this block creates a pending review entry and sets the node execution
-    status to REVIEW. The execution remains paused until a human user either approves
-    or rejects the data.
+    When executed, it creates a pending review entry and sets the node execution status
+    to REVIEW. The execution will remain paused until a human user either:
+    - Approves the data (with or without modifications)
+    - Rejects the data

-    **How it works:**
-    - The input data is presented to a human reviewer
-    - The reviewer can approve or reject (and optionally modify the data if editable)
-    - On approval: the data flows out through the `approved_data` output pin
-    - On rejection: the data flows out through the `rejected_data` output pin
-
-    **Important:** The output pins yield the actual data itself, NOT status strings.
-    The approval/rejection decision determines WHICH output pin fires, not the value.
-    You do NOT need to compare the output to "APPROVED" or "REJECTED" - simply connect
-    downstream blocks to the appropriate output pin for each case.
-
-    **Example usage:**
-    - Connect `approved_data` → next step in your workflow (data was approved)
-    - Connect `rejected_data` → error handling or notification (data was rejected)
+    This is useful for workflows that require human validation or intervention before
+    proceeding to the next steps.
    """

    class Input(BlockSchemaInput):
-        data: Any = SchemaField(
-            description="The data to be reviewed by a human user. "
-            "This exact data will be passed through to either approved_data or "
-            "rejected_data output based on the reviewer's decision."
-        )
+        data: Any = SchemaField(description="The data to be reviewed by a human user")
        name: str = SchemaField(
-            description="A descriptive name for what this data represents. "
-            "This helps the reviewer understand what they are reviewing.",
+            description="A descriptive name for what this data represents",
        )
        editable: bool = SchemaField(
-            description="Whether the human reviewer can edit the data before "
-            "approving or rejecting it",
+            description="Whether the human reviewer can edit the data",
            default=True,
            advanced=True,
        )

    class Output(BlockSchemaOutput):
        approved_data: Any = SchemaField(
-            description="Outputs the input data when the reviewer APPROVES it. "
-            "The value is the actual data itself (not a status string like 'APPROVED'). "
-            "If the reviewer edited the data, this contains the modified version. "
-            "Connect downstream blocks here for the 'approved' workflow path."
+            description="The data when approved (may be modified by reviewer)"
        )
        rejected_data: Any = SchemaField(
-            description="Outputs the input data when the reviewer REJECTS it. "
-            "The value is the actual data itself (not a status string like 'REJECTED'). "
-            "If the reviewer edited the data, this contains the modified version. "
-            "Connect downstream blocks here for the 'rejected' workflow path."
+            description="The data when rejected (may be modified by reviewer)"
        )
        review_message: str = SchemaField(
-            description="Optional message provided by the reviewer explaining their "
-            "decision. Only outputs when the reviewer provides a message; "
-            "this pin does not fire if no message was given.",
-            default="",
+            description="Any message provided by the reviewer", default=""
        )

    def __init__(self):
        super().__init__(
            id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d",
-            description="Pause execution for human review. Data flows through "
-            "approved_data or rejected_data output based on the reviewer's decision. "
-            "Outputs contain the actual data, not status strings.",
+            description="Pause execution and wait for human approval or modification of data",
            categories={BlockCategory.BASIC},
            input_schema=HumanInTheLoopBlock.Input,
            output_schema=HumanInTheLoopBlock.Output,

View File

@@ -531,12 +531,12 @@ class LLMResponse(BaseModel):
def convert_openai_tool_fmt_to_anthropic(
    openai_tools: list[dict] | None = None,
-) -> Iterable[ToolParam] | anthropic.Omit:
+) -> Iterable[ToolParam] | anthropic.NotGiven:
    """
    Convert OpenAI tool format to Anthropic tool format.
    """
    if not openai_tools or len(openai_tools) == 0:
-        return anthropic.omit
+        return anthropic.NOT_GIVEN
    anthropic_tools = []
    for tool in openai_tools:
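Note: NOT_GIVEN is the Anthropic Python SDK's sentinel for "parameter omitted", distinct from None, so the converter's return value can be passed straight through as the tools argument. A sketch of the sentinel usage, assuming the SDK exports shown in the diff:

    import anthropic

    def tools_or_not_given(tools: list[dict] | None):
        if not tools:
            return anthropic.NOT_GIVEN  # omit the parameter entirely
        return tools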

View File

@@ -743,11 +743,6 @@ class GraphModel(Graph, GraphMeta):
                # For invalid blocks, we still raise immediately as this is a structural issue
                raise ValueError(f"Invalid block {node.block_id} for node #{node.id}")

-            if block.disabled:
-                raise ValueError(
-                    f"Block {node.block_id} is disabled and cannot be used in graphs"
-                )
-
            node_input_mask = (
                nodes_input_masks.get(node.id, {}) if nodes_input_masks else {}
            )

View File

@@ -211,6 +211,22 @@ class AgentRejectionData(BaseNotificationData):
        return value

+class WaitlistLaunchData(BaseNotificationData):
+    """Notification data for when an agent from a waitlist is launched."""
+
+    agent_name: str
+    waitlist_name: str
+    store_url: str
+    launched_at: datetime
+
+    @field_validator("launched_at")
+    @classmethod
+    def validate_timezone(cls, value: datetime):
+        if value.tzinfo is None:
+            raise ValueError("datetime must have timezone information")
+        return value
+
NotificationData = Annotated[
    Union[
        AgentRunData,
@@ -223,6 +239,7 @@ NotificationData = Annotated[
        DailySummaryData,
        RefundRequestData,
        BaseSummaryData,
+        WaitlistLaunchData,
    ],
    Field(discriminator="type"),
]
@@ -273,6 +290,7 @@ def get_notif_data_type(
        NotificationType.REFUND_PROCESSED: RefundRequestData,
        NotificationType.AGENT_APPROVED: AgentApprovalData,
        NotificationType.AGENT_REJECTED: AgentRejectionData,
+        NotificationType.WAITLIST_LAUNCH: WaitlistLaunchData,
    }[notification_type]
@@ -318,6 +336,7 @@ class NotificationTypeOverride:
            NotificationType.REFUND_PROCESSED: QueueType.ADMIN,
            NotificationType.AGENT_APPROVED: QueueType.IMMEDIATE,
            NotificationType.AGENT_REJECTED: QueueType.IMMEDIATE,
+            NotificationType.WAITLIST_LAUNCH: QueueType.IMMEDIATE,
        }
        return BATCHING_RULES.get(self.notification_type, QueueType.IMMEDIATE)
@@ -337,6 +356,7 @@ class NotificationTypeOverride:
            NotificationType.REFUND_PROCESSED: "refund_processed.html",
            NotificationType.AGENT_APPROVED: "agent_approved.html",
            NotificationType.AGENT_REJECTED: "agent_rejected.html",
+            NotificationType.WAITLIST_LAUNCH: "waitlist_launch.html",
        }[self.notification_type]

    @property
@@ -354,6 +374,7 @@ class NotificationTypeOverride:
            NotificationType.REFUND_PROCESSED: "Refund for ${{data.amount / 100}} to {{data.user_name}} has been processed",
            NotificationType.AGENT_APPROVED: "🎉 Your agent '{{data.agent_name}}' has been approved!",
            NotificationType.AGENT_REJECTED: "Your agent '{{data.agent_name}}' needs some updates",
+            NotificationType.WAITLIST_LAUNCH: "🚀 {{data.agent_name}} is now available!",
        }[self.notification_type]
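Note: WaitlistLaunchData enforces timezone-aware timestamps via its validator, matching the sender's launched_at = datetime.now(tz=timezone.utc). The contract in isolation:

    from datetime import datetime, timezone

    def validate_timezone(value: datetime) -> datetime:
        if value.tzinfo is None:
            raise ValueError("datetime must have timezone information")
        return value

    validate_timezone(datetime.now(tz=timezone.utc))  # aware: passes
    try:
        validate_timezone(datetime.now())  # naive: rejected
    except ValueError:
        pass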

View File

@@ -1,4 +1,3 @@
-import asyncio
import logging
from abc import ABC, abstractmethod
from enum import Enum
@@ -226,10 +225,6 @@ class SyncRabbitMQ(RabbitMQBase):
class AsyncRabbitMQ(RabbitMQBase):
    """Asynchronous RabbitMQ client"""

-    def __init__(self, config: RabbitMQConfig):
-        super().__init__(config)
-        self._reconnect_lock: asyncio.Lock | None = None
-
    @property
    def is_connected(self) -> bool:
        return bool(self._connection and not self._connection.is_closed)
@@ -240,17 +235,7 @@ class AsyncRabbitMQ(RabbitMQBase):
    @conn_retry("AsyncRabbitMQ", "Acquiring async connection")
    async def connect(self):
-        if self.is_connected and self._channel and not self._channel.is_closed:
-            return
-        if (
-            self.is_connected
-            and self._connection
-            and (self._channel is None or self._channel.is_closed)
-        ):
-            self._channel = await self._connection.channel()
-            await self._channel.set_qos(prefetch_count=1)
-            await self.declare_infrastructure()
-            return
+        if self.is_connected:
+            return
        self._connection = await aio_pika.connect_robust(
@@ -306,46 +291,24 @@ class AsyncRabbitMQ(RabbitMQBase):
                exchange, routing_key=queue.routing_key or queue.name
            )

-    @property
-    def _lock(self) -> asyncio.Lock:
-        if self._reconnect_lock is None:
-            self._reconnect_lock = asyncio.Lock()
-        return self._reconnect_lock
-
-    async def _ensure_channel(self) -> aio_pika.abc.AbstractChannel:
-        """Get a valid channel, reconnecting if the current one is stale.
-
-        Uses a lock to prevent concurrent reconnection attempts from racing.
-        """
-        if self.is_ready:
-            return self._channel  # type: ignore # is_ready guarantees non-None
-        async with self._lock:
-            # Double-check after acquiring lock
-            if self.is_ready:
-                return self._channel  # type: ignore
-            self._channel = None
-            await self.connect()
-            if self._channel is None:
-                raise RuntimeError("Channel should be established after connect")
-            return self._channel
-
-    async def _publish_once(
+    @func_retry
+    async def publish_message(
        self,
        routing_key: str,
        message: str,
        exchange: Optional[Exchange] = None,
        persistent: bool = True,
    ) -> None:
-        channel = await self._ensure_channel()
+        if not self.is_ready:
+            await self.connect()
+            if self._channel is None:
+                raise RuntimeError("Channel should be established after connect")
        if exchange:
-            exchange_obj = await channel.get_exchange(exchange.name)
+            exchange_obj = await self._channel.get_exchange(exchange.name)
        else:
-            exchange_obj = channel.default_exchange
+            exchange_obj = self._channel.default_exchange
        await exchange_obj.publish(
            aio_pika.Message(
@@ -359,23 +322,9 @@ class AsyncRabbitMQ(RabbitMQBase):
            routing_key=routing_key,
        )

-    @func_retry
-    async def publish_message(
-        self,
-        routing_key: str,
-        message: str,
-        exchange: Optional[Exchange] = None,
-        persistent: bool = True,
-    ) -> None:
-        try:
-            await self._publish_once(routing_key, message, exchange, persistent)
-        except aio_pika.exceptions.ChannelInvalidStateError:
-            logger.warning(
-                "RabbitMQ channel invalid, forcing reconnect and retrying publish"
-            )
-            async with self._lock:
-                self._channel = None
-            await self._publish_once(routing_key, message, exchange, persistent)
-
    async def get_channel(self) -> aio_pika.abc.AbstractChannel:
-        return await self._ensure_channel()
+        if not self.is_ready:
+            await self.connect()
+        if self._channel is None:
+            raise RuntimeError("Channel should be established after connect")
+        return self._channel
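Note: the removed _ensure_channel used double-checked locking so that concurrent publishers do not race to reconnect. The idiom in isolation, with illustrative names rather than the real client:

    import asyncio

    class Reconnector:
        def __init__(self) -> None:
            self._lock = asyncio.Lock()
            self._ready = False

        async def ensure_ready(self) -> None:
            if self._ready:  # fast path, no lock taken
                return
            async with self._lock:
                if self._ready:  # re-check after acquiring the lock
                    return
                await asyncio.sleep(0)  # stand-in for the real reconnect work
                self._ready = True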

View File

@@ -213,9 +213,6 @@ async def execute_node(
        block_name=node_block.name,
    )

-    if node_block.disabled:
-        raise ValueError(f"Block {node_block.id} is disabled and cannot be executed")
-
    # Sanity check: validate the execution input.
    input_data, error = validate_exec(node, data.inputs, resolve_input=False)
    if input_data is None:

View File

@@ -0,0 +1,59 @@
{# Waitlist Launch Notification Email Template #}
{#
Template variables:
data.agent_name: the name of the launched agent
data.waitlist_name: the name of the waitlist the user joined
data.store_url: URL to view the agent in the store
data.launched_at: when the agent was launched
Subject: {{ data.agent_name }} is now available!
#}
{% block content %}
<h1 style="color: #7c3aed; font-size: 32px; font-weight: 700; margin: 0 0 24px 0; text-align: center;">
The wait is over!
</h1>
<p style="color: #586069; font-size: 18px; text-align: center; margin: 0 0 24px 0;">
<strong>'{{ data.agent_name }}'</strong> is now live in the AutoGPT Store!
</p>
<div style="height: 32px; background: transparent;"></div>
<div style="background: #f3e8ff; border: 1px solid #d8b4fe; border-radius: 8px; padding: 20px; margin: 0;">
<h3 style="color: #6b21a8; font-size: 16px; font-weight: 600; margin: 0 0 12px 0;">
You're one of the first to know!
</h3>
<p style="color: #6b21a8; margin: 0; font-size: 16px; line-height: 1.5;">
You signed up for the <strong>{{ data.waitlist_name }}</strong> waitlist, and we're excited to let you know that this agent is now ready for you to use.
</p>
</div>
<div style="height: 32px; background: transparent;"></div>
<div style="text-align: center; margin: 24px 0;">
<a href="{{ data.store_url }}" style="display: inline-block; background: linear-gradient(135deg, #7c3aed 0%, #5b21b6 100%); color: white; text-decoration: none; padding: 14px 28px; border-radius: 6px; font-weight: 600; font-size: 16px;">
Get {{ data.agent_name }} Now
</a>
</div>
<div style="height: 32px; background: transparent;"></div>
<div style="background: #d1ecf1; border: 1px solid #bee5eb; border-radius: 8px; padding: 20px; margin: 0;">
<h3 style="color: #0c5460; font-size: 16px; font-weight: 600; margin: 0 0 12px 0;">
What can you do now?
</h3>
<ul style="color: #0c5460; margin: 0; padding-left: 18px; font-size: 16px; line-height: 1.6;">
<li>Visit the store to learn more about what this agent can do</li>
<li>Install and start using the agent right away</li>
<li>Share it with others who might find it useful</li>
</ul>
</div>
<div style="height: 32px; background: transparent;"></div>
<p style="color: #6a737d; font-size: 14px; text-align: center; margin: 24px 0;">
Thank you for helping us prioritize what to build! Your interest made this happen.
</p>
{% endblock %}
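The template above only declares a content block and documents its data variables in the header comment. A minimal sketch of rendering it with Jinja2 — the file name and the data values are illustrative, and the real pipeline presumably injects a base layout:

from jinja2 import Environment

env = Environment(autoescape=True)
with open("waitlist_launch.html.jinja2") as f:  # hypothetical file name
    template = env.from_string(f.read())

# With no {% extends %}, Jinja renders the content block inline.
html = template.render(
    data={
        "agent_name": "Example Agent",
        "waitlist_name": "Example Waitlist",
        "store_url": "https://example.com/store/example-agent",
        "launched_at": "2026-02-08",
    }
)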

View File

@@ -364,44 +364,6 @@ def _remove_orphan_tool_responses(
return result
-def validate_and_remove_orphan_tool_responses(
-messages: list[dict],
-log_warning: bool = True,
-) -> list[dict]:
-"""
-Validate tool_call/tool_response pairs and remove orphaned responses.
-Scans messages in order, tracking all tool_call IDs. Any tool response
-referencing an ID not seen in a preceding message is considered orphaned
-and removed. This prevents API errors like Anthropic's "unexpected tool_use_id".
-Args:
-messages: List of messages to validate (OpenAI or Anthropic format)
-log_warning: Whether to log a warning when orphans are found
-Returns:
-A new list with orphaned tool responses removed
-"""
-available_ids: set[str] = set()
-orphan_ids: set[str] = set()
-for msg in messages:
-available_ids |= _extract_tool_call_ids_from_message(msg)
-for resp_id in _extract_tool_response_ids_from_message(msg):
-if resp_id not in available_ids:
-orphan_ids.add(resp_id)
-if not orphan_ids:
-return messages
-if log_warning:
-logger.warning(
-f"Removing {len(orphan_ids)} orphan tool response(s): {orphan_ids}"
-)
-return _remove_orphan_tool_responses(messages, orphan_ids)
def _ensure_tool_pairs_intact(
recent_messages: list[dict],
all_messages: list[dict],
@@ -761,13 +723,6 @@ async def compress_context(
# Filter out any None values that may have been introduced
final_msgs: list[dict] = [m for m in msgs if m is not None]
-# ---- STEP 6: Final tool-pair validation ---------------------------------
-# After all compression steps, verify that every tool response has a
-# matching tool_call in a preceding assistant message. Remove orphans
-# to prevent API errors (e.g., Anthropic's "unexpected tool_use_id").
-final_msgs = validate_and_remove_orphan_tool_responses(final_msgs)
final_count = sum(_msg_tokens(m, enc) for m in final_msgs)
error = None
if final_count + reserve > target_tokens:
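The removed validation pass boils down to a single forward scan: collect tool_call IDs as they appear, and drop any tool response whose ID was never introduced by a preceding message. A self-contained sketch of the same idea, simplified to OpenAI-style messages (the real helpers also understand Anthropic's format):

def remove_orphan_tool_responses(messages: list[dict]) -> list[dict]:
    available_ids: set[str] = set()
    result: list[dict] = []
    for msg in messages:
        # Assistant messages introduce tool_call IDs.
        for call in msg.get("tool_calls") or []:
            available_ids.add(call["id"])
        # Tool responses referencing an unseen ID are orphans: drop them.
        if msg.get("role") == "tool" and msg.get("tool_call_id") not in available_ids:
            continue
        result.append(msg)
    return result

msgs = [
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "ok"},
    {"role": "tool", "tool_call_id": "call_9", "content": "stale"},  # orphan, removed
]
assert len(remove_orphan_tool_responses(msgs)) == 2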

View File

@@ -0,0 +1,53 @@
-- CreateEnum
CREATE TYPE "WaitlistExternalStatus" AS ENUM ('DONE', 'NOT_STARTED', 'CANCELED', 'WORK_IN_PROGRESS');
-- AlterEnum
ALTER TYPE "NotificationType" ADD VALUE 'WAITLIST_LAUNCH';
-- CreateTable
CREATE TABLE "WaitlistEntry" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
"storeListingId" TEXT,
"owningUserId" TEXT NOT NULL,
"slug" TEXT NOT NULL,
"search" tsvector DEFAULT ''::tsvector,
"name" TEXT NOT NULL,
"subHeading" TEXT NOT NULL,
"videoUrl" TEXT,
"agentOutputDemoUrl" TEXT,
"imageUrls" TEXT[],
"description" TEXT NOT NULL,
"categories" TEXT[],
"status" "WaitlistExternalStatus" NOT NULL DEFAULT 'NOT_STARTED',
"votes" INTEGER NOT NULL DEFAULT 0,
"unaffiliatedEmailUsers" TEXT[] DEFAULT ARRAY[]::TEXT[],
"isDeleted" BOOLEAN NOT NULL DEFAULT false,
CONSTRAINT "WaitlistEntry_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "_joinedWaitlists" (
"A" TEXT NOT NULL,
"B" TEXT NOT NULL
);
-- CreateIndex
CREATE UNIQUE INDEX "_joinedWaitlists_AB_unique" ON "_joinedWaitlists"("A", "B");
-- CreateIndex
CREATE INDEX "_joinedWaitlists_B_index" ON "_joinedWaitlists"("B");
-- AddForeignKey
ALTER TABLE "WaitlistEntry" ADD CONSTRAINT "WaitlistEntry_storeListingId_fkey" FOREIGN KEY ("storeListingId") REFERENCES "StoreListing"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "WaitlistEntry" ADD CONSTRAINT "WaitlistEntry_owningUserId_fkey" FOREIGN KEY ("owningUserId") REFERENCES "User"("id") ON DELETE RESTRICT ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "_joinedWaitlists" ADD CONSTRAINT "_joinedWaitlists_A_fkey" FOREIGN KEY ("A") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "_joinedWaitlists" ADD CONSTRAINT "_joinedWaitlists_B_fkey" FOREIGN KEY ("B") REFERENCES "WaitlistEntry"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -46,14 +46,14 @@ pycares = ">=4.9.0,<5"
[[package]]
name = "aiofiles"
-version = "25.1.0"
+version = "24.1.0"
description = "File support for asyncio."
optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
groups = ["main"]
files = [
-{file = "aiofiles-25.1.0-py3-none-any.whl", hash = "sha256:abe311e527c862958650f9438e859c1fa7568a141b22abcd015e120e86a85695"},
-{file = "aiofiles-25.1.0.tar.gz", hash = "sha256:a8d728f0a29de45dc521f18f07297428d56992a742f0cd2701ba86e44d23d5b2"},
+{file = "aiofiles-24.1.0-py3-none-any.whl", hash = "sha256:b4ec55f4195e3eb5d7abd1bf7e061763e864dd4954231fb8539a0ef8bb8260e5"},
+{file = "aiofiles-24.1.0.tar.gz", hash = "sha256:22a075c9e5a3810f0c2e48f3008c94d68c65d763b9b03857924c99e57355166c"},
]
[[package]]
@@ -269,20 +269,19 @@ files = [
[[package]]
name = "anthropic"
-version = "0.79.0"
+version = "0.59.0"
description = "The official Python library for the anthropic API"
optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
groups = ["main"]
files = [
-{file = "anthropic-0.79.0-py3-none-any.whl", hash = "sha256:04cbd473b6bbda4ca2e41dd670fe2f829a911530f01697d0a1e37321eb75f3cf"},
-{file = "anthropic-0.79.0.tar.gz", hash = "sha256:8707aafb3b1176ed6c13e2b1c9fb3efddce90d17aee5d8b83a86c70dcdcca871"},
+{file = "anthropic-0.59.0-py3-none-any.whl", hash = "sha256:cbc8b3dccef66ad6435c4fa1d317e5ebb092399a4b88b33a09dc4bf3944c3183"},
+{file = "anthropic-0.59.0.tar.gz", hash = "sha256:d710d1ef0547ebbb64b03f219e44ba078e83fc83752b96a9b22e9726b523fd8f"},
]
[package.dependencies]
anyio = ">=3.5.0,<5"
distro = ">=1.7.0,<2"
-docstring-parser = ">=0.15,<1"
httpx = ">=0.25.0,<1"
jiter = ">=0.4.0,<1"
pydantic = ">=1.9.0,<3"
@@ -290,7 +289,7 @@ sniffio = "*"
typing-extensions = ">=4.10,<5"
[package.extras]
-aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.9)"]
+aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.8)"]
bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"]
vertex = ["google-auth[requests] (>=2,<3)"]
@@ -1149,23 +1148,6 @@ idna = ["idna (>=3.10)"]
trio = ["trio (>=0.30)"]
wmi = ["wmi (>=1.5.1) ; platform_system == \"Windows\""]
-[[package]]
-name = "docstring-parser"
-version = "0.17.0"
-description = "Parse Python docstrings in reST, Google and Numpydoc format"
-optional = false
-python-versions = ">=3.8"
-groups = ["main"]
-files = [
-{file = "docstring_parser-0.17.0-py3-none-any.whl", hash = "sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708"},
-{file = "docstring_parser-0.17.0.tar.gz", hash = "sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912"},
-]
-[package.extras]
-dev = ["pre-commit (>=2.16.0) ; python_version >= \"3.9\"", "pydoctor (>=25.4.0)", "pytest"]
-docs = ["pydoctor (>=25.4.0)"]
-test = ["pytest"]
[[package]]
name = "dulwich"
version = "0.22.8"
@@ -1382,14 +1364,14 @@ tzdata = "*"
[[package]]
name = "fastapi"
-version = "0.128.6"
+version = "0.128.3"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "fastapi-0.128.6-py3-none-any.whl", hash = "sha256:bb1c1ef87d6086a7132d0ab60869d6f1ee67283b20fbf84ec0003bd335099509"},
-{file = "fastapi-0.128.6.tar.gz", hash = "sha256:0cb3946557e792d731b26a42b04912f16367e3c3135ea8290f620e234f2b604f"},
+{file = "fastapi-0.128.3-py3-none-any.whl", hash = "sha256:c8cdf7c2182c9a06bf9cfa3329819913c189dc86389b90d5709892053582db29"},
+{file = "fastapi-0.128.3.tar.gz", hash = "sha256:ed99383fd96063447597d5aa2a9ec3973be198e3b4fc10c55f15c62efdb21c60"},
]
[package.dependencies]
@@ -3078,14 +3060,14 @@ type = ["pygobject-stubs", "pytest-mypy (>=1.0.1)", "shtab", "types-pywin32"]
[[package]]
name = "langfuse"
-version = "3.14.1"
+version = "3.13.0"
description = "A client library for accessing langfuse"
optional = false
python-versions = "<4.0,>=3.10"
groups = ["main"]
files = [
-{file = "langfuse-3.14.1-py3-none-any.whl", hash = "sha256:17bed605dbfc9947cbd1738a715f6d27c1b80b6da9f2946586171958fa5820d0"},
-{file = "langfuse-3.14.1.tar.gz", hash = "sha256:404a6104cd29353d7829aa417ec46565b04917e5599afdda96c5b0865f4bc991"},
+{file = "langfuse-3.13.0-py3-none-any.whl", hash = "sha256:71912ddac1cc831a65df895eae538a556f564c094ae51473e747426e9ded1a9d"},
+{file = "langfuse-3.13.0.tar.gz", hash = "sha256:dacea8111ca4442e97dbfec4f8d676cf9709b35357a26e468f8887b95de0012f"},
]
[package.dependencies]
@@ -3963,14 +3945,14 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "ollama"
-version = "0.6.1"
+version = "0.5.4"
description = "The official Python client for Ollama."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
-{file = "ollama-0.6.1-py3-none-any.whl", hash = "sha256:fc4c984b345735c5486faeee67d8a265214a31cbb828167782dc642ce0a2bf8c"},
-{file = "ollama-0.6.1.tar.gz", hash = "sha256:478c67546836430034b415ed64fa890fd3d1ff91781a9d548b3325274e69d7c6"},
+{file = "ollama-0.5.4-py3-none-any.whl", hash = "sha256:6374c9bb4f2a371b3583c09786112ba85b006516745689c172a7e28af4d4d1a2"},
+{file = "ollama-0.5.4.tar.gz", hash = "sha256:75857505a5d42e5e58114a1b78cc8c24596d8866863359d8a2329946a9b6d6f3"},
]
[package.dependencies]
@@ -4640,20 +4622,20 @@ testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
-version = "0.41.0"
+version = "0.37.0"
description = "A task runner that works well with poetry and uv."
optional = false
-python-versions = ">=3.10"
+python-versions = ">=3.9"
groups = ["dev"]
files = [
-{file = "poethepoet-0.41.0-py3-none-any.whl", hash = "sha256:4bab9fd8271664c5d21407e8f12827daeb6aa484dc6cc7620f0c3b4e62b42ee4"},
-{file = "poethepoet-0.41.0.tar.gz", hash = "sha256:dcaad621dc061f6a90b17d091bebb9ca043d67bfe9bd6aa4185aea3ebf7ff3e6"},
+{file = "poethepoet-0.37.0-py3-none-any.whl", hash = "sha256:861790276315abcc8df1b4bd60e28c3d48a06db273edd3092f3c94e1a46e5e22"},
+{file = "poethepoet-0.37.0.tar.gz", hash = "sha256:73edf458707c674a079baa46802e21455bda3a7f82a408e58c31b9f4fe8e933d"},
]
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
-pyyaml = ">=6.0.3,<7.0"
+pyyaml = ">=6.0.2,<7.0"
-tomli = {version = ">=1.3.0", markers = "python_version < \"3.11\""}
+tomli = {version = ">=1.2.2", markers = "python_version < \"3.11\""}
[package.extras]
poetry-plugin = ["poetry (>=1.2.0,<3.0.0) ; python_version < \"4.0\""]
@@ -4728,14 +4710,14 @@ tests = ["coverage-conditional-plugin (>=0.9.0)", "portalocker[redis]", "pytest
[[package]]
name = "postgrest"
-version = "2.27.3"
+version = "2.27.2"
description = "PostgREST client for Python. This library provides an ORM interface to PostgREST."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "postgrest-2.27.3-py3-none-any.whl", hash = "sha256:ed79123af7127edd78d538bfe8351d277e45b1a36994a4dbf57ae27dde87a7b7"},
-{file = "postgrest-2.27.3.tar.gz", hash = "sha256:c2e2679addfc8eaab23197bad7ddaee6cbb4cbe8c483ebd2d2e5219543037cc3"},
+{file = "postgrest-2.27.2-py3-none-any.whl", hash = "sha256:1666fef3de05ca097a314433dd5ae2f2d71c613cb7b233d0f468c4ffe37277da"},
+{file = "postgrest-2.27.2.tar.gz", hash = "sha256:55407d530b5af3d64e883a71fec1f345d369958f723ce4a8ab0b7d169e313242"},
]
[package.dependencies]
@@ -4893,19 +4875,17 @@ tqdm = "*"
[[package]]
name = "prometheus-client"
-version = "0.24.1"
+version = "0.22.1"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "prometheus_client-0.24.1-py3-none-any.whl", hash = "sha256:150db128af71a5c2482b36e588fc8a6b95e498750da4b17065947c16070f4055"},
-{file = "prometheus_client-0.24.1.tar.gz", hash = "sha256:7e0ced7fbbd40f7b84962d5d2ab6f17ef88a72504dcf7c0b40737b43b2a461f9"},
+{file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"},
+{file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"},
]
[package.extras]
-aiohttp = ["aiohttp"]
-django = ["django"]
twisted = ["twisted"]
[[package]]
@@ -5919,18 +5899,18 @@ pytest = ">=3.0.0"
[[package]]
name = "pytest-watcher"
-version = "0.6.3"
+version = "0.4.3"
description = "Automatically rerun your tests on file modifications"
optional = false
-python-versions = ">=3.9"
+python-versions = "<4.0.0,>=3.7.0"
groups = ["dev"]
files = [
-{file = "pytest_watcher-0.6.3-py3-none-any.whl", hash = "sha256:83e7748c933087e8276edb6078663e6afa9926434b4fd8b85cf6b32b1d5bec89"},
-{file = "pytest_watcher-0.6.3.tar.gz", hash = "sha256:842dc904264df0ad2d5264153a66bb452fccfa46598cd6e0a5ef1d19afed9b13"},
+{file = "pytest_watcher-0.4.3-py3-none-any.whl", hash = "sha256:d59b1e1396f33a65ea4949b713d6884637755d641646960056a90b267c3460f9"},
+{file = "pytest_watcher-0.4.3.tar.gz", hash = "sha256:0cb0e4661648c8c0ff2b2d25efa5a8e421784b9e4c60fcecbf9b7c30b2d731b3"},
]
[package.dependencies]
-tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""}
+tomli = {version = ">=2.0.1,<3.0.0", markers = "python_version < \"3.11\""}
watchdog = ">=2.0.0"
[[package]]
@@ -5965,14 +5945,14 @@ cli = ["click (>=5.0)"]
[[package]]
name = "python-multipart"
-version = "0.0.22"
+version = "0.0.20"
description = "A streaming multipart parser for Python"
optional = false
-python-versions = ">=3.10"
+python-versions = ">=3.8"
groups = ["main"]
files = [
-{file = "python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155"},
-{file = "python_multipart-0.0.22.tar.gz", hash = "sha256:7340bef99a7e0032613f56dc36027b959fd3b30a787ed62d310e951f7c3a3a58"},
+{file = "python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104"},
+{file = "python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13"},
]
[[package]]
@@ -6260,14 +6240,14 @@ all = ["numpy"]
[[package]]
name = "realtime"
-version = "2.27.3"
+version = "2.27.2"
description = ""
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "realtime-2.27.3-py3-none-any.whl", hash = "sha256:f571115f86988e33c41c895cb3fba2eaa1b693aeaede3617288f44274ca90f43"},
-{file = "realtime-2.27.3.tar.gz", hash = "sha256:02b082243107656a5ef3fb63e8e2ab4c40bc199abb45adb8a42ed63f089a1041"},
+{file = "realtime-2.27.2-py3-none-any.whl", hash = "sha256:34a9cbb26a274e707e8fc9e3ee0a66de944beac0fe604dc336d1e985db2c830f"},
+{file = "realtime-2.27.2.tar.gz", hash = "sha256:b960a90294d2cea1b3f1275ecb89204304728e08fff1c393cc1b3150739556b3"},
]
[package.dependencies]
@@ -6672,30 +6652,31 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
-version = "0.15.0"
+version = "0.14.14"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
-{file = "ruff-0.15.0-py3-none-linux_armv6l.whl", hash = "sha256:aac4ebaa612a82b23d45964586f24ae9bc23ca101919f5590bdb368d74ad5455"},
-{file = "ruff-0.15.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:dcd4be7cc75cfbbca24a98d04d0b9b36a270d0833241f776b788d59f4142b14d"},
-{file = "ruff-0.15.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d747e3319b2bce179c7c1eaad3d884dc0a199b5f4d5187620530adf9105268ce"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:650bd9c56ae03102c51a5e4b554d74d825ff3abe4db22b90fd32d816c2e90621"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a6664b7eac559e3048223a2da77769c2f92b43a6dfd4720cef42654299a599c9"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f811f97b0f092b35320d1556f3353bf238763420ade5d9e62ebd2b73f2ff179"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:761ec0a66680fab6454236635a39abaf14198818c8cdf691e036f4bc0f406b2d"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:940f11c2604d317e797b289f4f9f3fa5555ffe4fb574b55ed006c3d9b6f0eb78"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bcbca3d40558789126da91d7ef9a7c87772ee107033db7191edefa34e2c7f1b4"},
-{file = "ruff-0.15.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:9a121a96db1d75fa3eb39c4539e607f628920dd72ff1f7c5ee4f1b768ac62d6e"},
-{file = "ruff-0.15.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5298d518e493061f2eabd4abd067c7e4fb89e2f63291c94332e35631c07c3662"},
-{file = "ruff-0.15.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afb6e603d6375ff0d6b0cee563fa21ab570fd15e65c852cb24922cef25050cf1"},
-{file = "ruff-0.15.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:77e515f6b15f828b94dc17d2b4ace334c9ddb7d9468c54b2f9ed2b9c1593ef16"},
-{file = "ruff-0.15.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:6f6e80850a01eb13b3e42ee0ebdf6e4497151b48c35051aab51c101266d187a3"},
-{file = "ruff-0.15.0-py3-none-win32.whl", hash = "sha256:238a717ef803e501b6d51e0bdd0d2c6e8513fe9eec14002445134d3907cd46c3"},
-{file = "ruff-0.15.0-py3-none-win_amd64.whl", hash = "sha256:dd5e4d3301dc01de614da3cdffc33d4b1b96fb89e45721f1598e5532ccf78b18"},
-{file = "ruff-0.15.0-py3-none-win_arm64.whl", hash = "sha256:c480d632cc0ca3f0727acac8b7d053542d9e114a462a145d0b00e7cd658c515a"},
-{file = "ruff-0.15.0.tar.gz", hash = "sha256:6bdea47cdbea30d40f8f8d7d69c0854ba7c15420ec75a26f463290949d7f7e9a"},
+{file = "ruff-0.14.14-py3-none-linux_armv6l.whl", hash = "sha256:7cfe36b56e8489dee8fbc777c61959f60ec0f1f11817e8f2415f429552846aed"},
+{file = "ruff-0.14.14-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:6006a0082336e7920b9573ef8a7f52eec837add1265cc74e04ea8a4368cd704c"},
+{file = "ruff-0.14.14-py3-none-macosx_11_0_arm64.whl", hash = "sha256:026c1d25996818f0bf498636686199d9bd0d9d6341c9c2c3b62e2a0198b758de"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f666445819d31210b71e0a6d1c01e24447a20b85458eea25a25fe8142210ae0e"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c0f18b922c6d2ff9a5e6c3ee16259adc513ca775bcf82c67ebab7cbd9da5bc8"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1629e67489c2dea43e8658c3dba659edbfd87361624b4040d1df04c9740ae906"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:27493a2131ea0f899057d49d303e4292b2cae2bb57253c1ed1f256fbcd1da480"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:01ff589aab3f5b539e35db38425da31a57521efd1e4ad1ae08fc34dbe30bd7df"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1cc12d74eef0f29f51775f5b755913eb523546b88e2d733e1d701fe65144e89b"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb8481604b7a9e75eff53772496201690ce2687067e038b3cc31aaf16aa0b974"},
+{file = "ruff-0.14.14-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:14649acb1cf7b5d2d283ebd2f58d56b75836ed8c6f329664fa91cdea19e76e66"},
+{file = "ruff-0.14.14-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e8058d2145566510790eab4e2fad186002e288dec5e0d343a92fe7b0bc1b3e13"},
+{file = "ruff-0.14.14-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e651e977a79e4c758eb807f0481d673a67ffe53cfa92209781dfa3a996cf8412"},
+{file = "ruff-0.14.14-py3-none-musllinux_1_2_i686.whl", hash = "sha256:cc8b22da8d9d6fdd844a68ae937e2a0adf9b16514e9a97cc60355e2d4b219fc3"},
+{file = "ruff-0.14.14-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:16bc890fb4cc9781bb05beb5ab4cd51be9e7cb376bf1dd3580512b24eb3fda2b"},
+{file = "ruff-0.14.14-py3-none-win32.whl", hash = "sha256:b530c191970b143375b6a68e6f743800b2b786bbcf03a7965b06c4bf04568167"},
+{file = "ruff-0.14.14-py3-none-win_amd64.whl", hash = "sha256:3dde1435e6b6fe5b66506c1dff67a421d0b7f6488d466f651c07f4cab3bf20fd"},
+{file = "ruff-0.14.14-py3-none-win_arm64.whl", hash = "sha256:56e6981a98b13a32236a72a8da421d7839221fa308b223b9283312312e5ac76c"},
+{file = "ruff-0.14.14.tar.gz", hash = "sha256:2d0f819c9a90205f3a867dbbd0be083bee9912e170fd7d9704cc8ae45824896b"},
]
[[package]]
@@ -7024,14 +7005,14 @@ full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart
[[package]]
name = "storage3"
-version = "2.27.3"
+version = "2.27.2"
description = "Supabase Storage client for Python."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "storage3-2.27.3-py3-none-any.whl", hash = "sha256:11a05b7da84bccabeeea12d940bca3760cf63fe6ca441868677335cfe4fdfbe0"},
-{file = "storage3-2.27.3.tar.gz", hash = "sha256:dc1a4a010cf36d5482c5cb6c1c28fc5f00e23284342b89e4ae43b5eae8501ddb"},
+{file = "storage3-2.27.2-py3-none-any.whl", hash = "sha256:e6f16e7a260729e7b1f46e9bf61746805a02e30f5e419ee1291007c432e3ec63"},
+{file = "storage3-2.27.2.tar.gz", hash = "sha256:cb4807b7f86b4bb1272ac6fdd2f3cfd8ba577297046fa5f88557425200275af5"},
]
[package.dependencies]
@@ -7091,35 +7072,35 @@ typing-extensions = {version = ">=4.5.0", markers = "python_version >= \"3.7\""}
[[package]]
name = "supabase"
-version = "2.27.3"
+version = "2.27.2"
description = "Supabase client for Python."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "supabase-2.27.3-py3-none-any.whl", hash = "sha256:082a74642fcf9954693f1ce8c251baf23e4bda26ffdbc8dcd4c99c82e60d69ff"},
-{file = "supabase-2.27.3.tar.gz", hash = "sha256:5e5a348232ac4315c1032ddd687278f0b982465471f0cbb52bca7e6a66495ff3"},
+{file = "supabase-2.27.2-py3-none-any.whl", hash = "sha256:d4dce00b3a418ee578017ec577c0e5be47a9a636355009c76f20ed2faa15bc54"},
+{file = "supabase-2.27.2.tar.gz", hash = "sha256:2aed40e4f3454438822442a1e94a47be6694c2c70392e7ae99b51a226d4293f7"},
]
[package.dependencies]
httpx = ">=0.26,<0.29"
-postgrest = "2.27.3"
+postgrest = "2.27.2"
-realtime = "2.27.3"
+realtime = "2.27.2"
-storage3 = "2.27.3"
+storage3 = "2.27.2"
-supabase-auth = "2.27.3"
+supabase-auth = "2.27.2"
-supabase-functions = "2.27.3"
+supabase-functions = "2.27.2"
yarl = ">=1.22.0"
[[package]]
name = "supabase-auth"
-version = "2.27.3"
+version = "2.27.2"
description = "Python Client Library for Supabase Auth"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "supabase_auth-2.27.3-py3-none-any.whl", hash = "sha256:82a4262eaad85383319d394dab0eea11fcf3ebd774062aef8ea3874ae2f02579"},
-{file = "supabase_auth-2.27.3.tar.gz", hash = "sha256:39894d4bc60b6f23b5cff4d0d7d4c1659e5d69563cadf014d4896f780ca8ca78"},
+{file = "supabase_auth-2.27.2-py3-none-any.whl", hash = "sha256:78ec25b11314d0a9527a7205f3b1c72560dccdc11b38392f80297ef98664ee91"},
+{file = "supabase_auth-2.27.2.tar.gz", hash = "sha256:0f5bcc79b3677cb42e9d321f3c559070cfa40d6a29a67672cc8382fb7dc2fe97"},
]
[package.dependencies]
@@ -7129,14 +7110,14 @@ pyjwt = {version = ">=2.10.1", extras = ["crypto"]}
[[package]]
name = "supabase-functions"
-version = "2.27.3"
+version = "2.27.2"
description = "Library for Supabase Functions"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "supabase_functions-2.27.3-py3-none-any.whl", hash = "sha256:9d14a931d49ede1c6cf5fbfceb11c44061535ba1c3f310f15384964d86a83d9e"},
-{file = "supabase_functions-2.27.3.tar.gz", hash = "sha256:e954f1646da8ca6e7e16accef58d0884a5f97b25956ee98e7d4927a210ed92f9"},
+{file = "supabase_functions-2.27.2-py3-none-any.whl", hash = "sha256:db480efc669d0bca07605b9b6f167312af43121adcc842a111f79bea416ef754"},
+{file = "supabase_functions-2.27.2.tar.gz", hash = "sha256:d0c8266207a94371cb3fd35ad3c7f025b78a97cf026861e04ccd35ac1775f80b"},
]
[package.dependencies]
@@ -7146,14 +7127,14 @@ yarl = ">=1.20.1"
[[package]]
name = "tenacity"
-version = "9.1.4"
+version = "9.1.3"
description = "Retry code until it succeeds"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
-{file = "tenacity-9.1.4-py3-none-any.whl", hash = "sha256:6095a360c919085f28c6527de529e76a06ad89b23659fa881ae0649b867a9d55"},
-{file = "tenacity-9.1.4.tar.gz", hash = "sha256:adb31d4c263f2bd041081ab33b498309a57c77f9acf2db65aadf0898179cf93a"},
+{file = "tenacity-9.1.3-py3-none-any.whl", hash = "sha256:51171cfc6b8a7826551e2f029426b10a6af189c5ac6986adcd7eb36d42f17954"},
+{file = "tenacity-9.1.3.tar.gz", hash = "sha256:a6724c947aa717087e2531f883bde5c9188f603f6669a9b8d54eb998e604c12a"},
]
[package.extras]
@@ -7162,69 +7143,43 @@ test = ["pytest", "tornado (>=4.5)", "typeguard"]
[[package]]
name = "tiktoken"
-version = "0.12.0"
+version = "0.9.0"
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-{file = "tiktoken-0.12.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3de02f5a491cfd179aec916eddb70331814bd6bf764075d39e21d5862e533970"},
-{file = "tiktoken-0.12.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b6cfb6d9b7b54d20af21a912bfe63a2727d9cfa8fbda642fd8322c70340aad16"},
-{file = "tiktoken-0.12.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:cde24cdb1b8a08368f709124f15b36ab5524aac5fa830cc3fdce9c03d4fb8030"},
-{file = "tiktoken-0.12.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6de0da39f605992649b9cfa6f84071e3f9ef2cec458d08c5feb1b6f0ff62e134"},
-{file = "tiktoken-0.12.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6faa0534e0eefbcafaccb75927a4a380463a2eaa7e26000f0173b920e98b720a"},
-{file = "tiktoken-0.12.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:82991e04fc860afb933efb63957affc7ad54f83e2216fe7d319007dab1ba5892"},
-{file = "tiktoken-0.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:6fb2995b487c2e31acf0a9e17647e3b242235a20832642bb7a9d1a181c0c1bb1"},
-{file = "tiktoken-0.12.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:6e227c7f96925003487c33b1b32265fad2fbcec2b7cf4817afb76d416f40f6bb"},
-{file = "tiktoken-0.12.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c06cf0fcc24c2cb2adb5e185c7082a82cba29c17575e828518c2f11a01f445aa"},
-{file = "tiktoken-0.12.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:f18f249b041851954217e9fd8e5c00b024ab2315ffda5ed77665a05fa91f42dc"},
-{file = "tiktoken-0.12.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:47a5bc270b8c3db00bb46ece01ef34ad050e364b51d406b6f9730b64ac28eded"},
-{file = "tiktoken-0.12.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:508fa71810c0efdcd1b898fda574889ee62852989f7c1667414736bcb2b9a4bd"},
-{file = "tiktoken-0.12.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a1af81a6c44f008cba48494089dd98cccb8b313f55e961a52f5b222d1e507967"},
-{file = "tiktoken-0.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:3e68e3e593637b53e56f7237be560f7a394451cb8c11079755e80ae64b9e6def"},
-{file = "tiktoken-0.12.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b97f74aca0d78a1ff21b8cd9e9925714c15a9236d6ceacf5c7327c117e6e21e8"},
-{file = "tiktoken-0.12.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2b90f5ad190a4bb7c3eb30c5fa32e1e182ca1ca79f05e49b448438c3e225a49b"},
-{file = "tiktoken-0.12.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:65b26c7a780e2139e73acc193e5c63ac754021f160df919add909c1492c0fb37"},
-{file = "tiktoken-0.12.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:edde1ec917dfd21c1f2f8046b86348b0f54a2c0547f68149d8600859598769ad"},
-{file = "tiktoken-0.12.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:35a2f8ddd3824608b3d650a000c1ef71f730d0c56486845705a8248da00f9fe5"},
-{file = "tiktoken-0.12.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:83d16643edb7fa2c99eff2ab7733508aae1eebb03d5dfc46f5565862810f24e3"},
-{file = "tiktoken-0.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:ffc5288f34a8bc02e1ea7047b8d041104791d2ddbf42d1e5fa07822cbffe16bd"},
-{file = "tiktoken-0.12.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:775c2c55de2310cc1bc9a3ad8826761cbdc87770e586fd7b6da7d4589e13dab3"},
-{file = "tiktoken-0.12.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a01b12f69052fbe4b080a2cfb867c4de12c704b56178edf1d1d7b273561db160"},
-{file = "tiktoken-0.12.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:01d99484dc93b129cd0964f9d34eee953f2737301f18b3c7257bf368d7615baa"},
-{file = "tiktoken-0.12.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:4a1a4fcd021f022bfc81904a911d3df0f6543b9e7627b51411da75ff2fe7a1be"},
-{file = "tiktoken-0.12.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:981a81e39812d57031efdc9ec59fa32b2a5a5524d20d4776574c4b4bd2e9014a"},
-{file = "tiktoken-0.12.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9baf52f84a3f42eef3ff4e754a0db79a13a27921b457ca9832cf944c6be4f8f3"},
-{file = "tiktoken-0.12.0-cp313-cp313-win_amd64.whl", hash = "sha256:b8a0cd0c789a61f31bf44851defbd609e8dd1e2c8589c614cc1060940ef1f697"},
-{file = "tiktoken-0.12.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:d5f89ea5680066b68bcb797ae85219c72916c922ef0fcdd3480c7d2315ffff16"},
-{file = "tiktoken-0.12.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:b4e7ed1c6a7a8a60a3230965bdedba8cc58f68926b835e519341413370e0399a"},
-{file = "tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:fc530a28591a2d74bce821d10b418b26a094bf33839e69042a6e86ddb7a7fb27"},
-{file = "tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:06a9f4f49884139013b138920a4c393aa6556b2f8f536345f11819389c703ebb"},
-{file = "tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:04f0e6a985d95913cabc96a741c5ffec525a2c72e9df086ff17ebe35985c800e"},
-{file = "tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:0ee8f9ae00c41770b5f9b0bb1235474768884ae157de3beb5439ca0fd70f3e25"},
-{file = "tiktoken-0.12.0-cp313-cp313t-win_amd64.whl", hash = "sha256:dc2dd125a62cb2b3d858484d6c614d136b5b848976794edfb63688d539b8b93f"},
-{file = "tiktoken-0.12.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a90388128df3b3abeb2bfd1895b0681412a8d7dc644142519e6f0a97c2111646"},
-{file = "tiktoken-0.12.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:da900aa0ad52247d8794e307d6446bd3cdea8e192769b56276695d34d2c9aa88"},
-{file = "tiktoken-0.12.0-cp314-cp314-manylinux_2_28_aarch64.whl", hash = "sha256:285ba9d73ea0d6171e7f9407039a290ca77efcdb026be7769dccc01d2c8d7fff"},
-{file = "tiktoken-0.12.0-cp314-cp314-manylinux_2_28_x86_64.whl", hash = "sha256:d186a5c60c6a0213f04a7a802264083dea1bbde92a2d4c7069e1a56630aef830"},
-{file = "tiktoken-0.12.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:604831189bd05480f2b885ecd2d1986dc7686f609de48208ebbbddeea071fc0b"},
-{file = "tiktoken-0.12.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:8f317e8530bb3a222547b85a58583238c8f74fd7a7408305f9f63246d1a0958b"},
-{file = "tiktoken-0.12.0-cp314-cp314-win_amd64.whl", hash = "sha256:399c3dd672a6406719d84442299a490420b458c44d3ae65516302a99675888f3"},
-{file = "tiktoken-0.12.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c2c714c72bc00a38ca969dae79e8266ddec999c7ceccd603cc4f0d04ccd76365"},
-{file = "tiktoken-0.12.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cbb9a3ba275165a2cb0f9a83f5d7025afe6b9d0ab01a22b50f0e74fee2ad253e"},
-{file = "tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:dfdfaa5ffff8993a3af94d1125870b1d27aed7cb97aa7eb8c1cefdbc87dbee63"},
-{file = "tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:584c3ad3d0c74f5269906eb8a659c8bfc6144a52895d9261cdaf90a0ae5f4de0"},
-{file = "tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:54c891b416a0e36b8e2045b12b33dd66fb34a4fe7965565f1b482da50da3e86a"},
-{file = "tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5edb8743b88d5be814b1a8a8854494719080c28faaa1ccbef02e87354fe71ef0"},
-{file = "tiktoken-0.12.0-cp314-cp314t-win_amd64.whl", hash = "sha256:f61c0aea5565ac82e2ec50a05e02a6c44734e91b51c10510b084ea1b8e633a71"},
-{file = "tiktoken-0.12.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:d51d75a5bffbf26f86554d28e78bfb921eae998edc2675650fd04c7e1f0cdc1e"},
-{file = "tiktoken-0.12.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:09eb4eae62ae7e4c62364d9ec3a57c62eea707ac9a2b2c5d6bd05de6724ea179"},
-{file = "tiktoken-0.12.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:df37684ace87d10895acb44b7f447d4700349b12197a526da0d4a4149fde074c"},
-{file = "tiktoken-0.12.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:4c9614597ac94bb294544345ad8cf30dac2129c05e2db8dc53e082f355857af7"},
-{file = "tiktoken-0.12.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:20cf97135c9a50de0b157879c3c4accbb29116bcf001283d26e073ff3b345946"},
-{file = "tiktoken-0.12.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:15d875454bbaa3728be39880ddd11a5a2a9e548c29418b41e8fd8a767172b5ec"},
-{file = "tiktoken-0.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:2cff3688ba3c639ebe816f8d58ffbbb0aa7433e23e08ab1cade5d175fc973fb3"},
-{file = "tiktoken-0.12.0.tar.gz", hash = "sha256:b18ba7ee2b093863978fcb14f74b3707cdc8d4d4d3836853ce7ec60772139931"},
+{file = "tiktoken-0.9.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:586c16358138b96ea804c034b8acf3f5d3f0258bd2bc3b0227af4af5d622e382"},
+{file = "tiktoken-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9c59ccc528c6c5dd51820b3474402f69d9a9e1d656226848ad68a8d5b2e5108"},
+{file = "tiktoken-0.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0968d5beeafbca2a72c595e8385a1a1f8af58feaebb02b227229b69ca5357fd"},
+{file = "tiktoken-0.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:92a5fb085a6a3b7350b8fc838baf493317ca0e17bd95e8642f95fc69ecfed1de"},
+{file = "tiktoken-0.9.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:15a2752dea63d93b0332fb0ddb05dd909371ededa145fe6a3242f46724fa7990"},
+{file = "tiktoken-0.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:26113fec3bd7a352e4b33dbaf1bd8948de2507e30bd95a44e2b1156647bc01b4"},
+{file = "tiktoken-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e"},
+{file = "tiktoken-0.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348"},
+{file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33"},
+{file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136"},
+{file = "tiktoken-0.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336"},
+{file = "tiktoken-0.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb"},
+{file = "tiktoken-0.9.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:e88f121c1c22b726649ce67c089b90ddda8b9662545a8aeb03cfef15967ddd03"},
+{file = "tiktoken-0.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a6600660f2f72369acb13a57fb3e212434ed38b045fd8cc6cdd74947b4b5d210"},
+{file = "tiktoken-0.9.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95e811743b5dfa74f4b227927ed86cbc57cad4df859cb3b643be797914e41794"},
+{file = "tiktoken-0.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:99376e1370d59bcf6935c933cb9ba64adc29033b7e73f5f7569f3aad86552b22"},
+{file = "tiktoken-0.9.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:badb947c32739fb6ddde173e14885fb3de4d32ab9d8c591cbd013c22b4c31dd2"},
+{file = "tiktoken-0.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:5a62d7a25225bafed786a524c1b9f0910a1128f4232615bf3f8257a73aaa3b16"},
+{file = "tiktoken-0.9.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2b0e8e05a26eda1249e824156d537015480af7ae222ccb798e5234ae0285dbdb"},
+{file = "tiktoken-0.9.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:27d457f096f87685195eea0165a1807fae87b97b2161fe8c9b1df5bd74ca6f63"},
+{file = "tiktoken-0.9.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cf8ded49cddf825390e36dd1ad35cd49589e8161fdcb52aa25f0583e90a3e01"},
+{file = "tiktoken-0.9.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc156cb314119a8bb9748257a2eaebd5cc0753b6cb491d26694ed42fc7cb3139"},
+{file = "tiktoken-0.9.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:cd69372e8c9dd761f0ab873112aba55a0e3e506332dd9f7522ca466e817b1b7a"},
+{file = "tiktoken-0.9.0-cp313-cp313-win_amd64.whl", hash = "sha256:5ea0edb6f83dc56d794723286215918c1cde03712cbbafa0348b33448faf5b95"},
+{file = "tiktoken-0.9.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c6386ca815e7d96ef5b4ac61e0048cd32ca5a92d5781255e13b31381d28667dc"},
+{file = "tiktoken-0.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:75f6d5db5bc2c6274b674ceab1615c1778e6416b14705827d19b40e6355f03e0"},
+{file = "tiktoken-0.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e15b16f61e6f4625a57a36496d28dd182a8a60ec20a534c5343ba3cafa156ac7"},
+{file = "tiktoken-0.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ebcec91babf21297022882344c3f7d9eed855931466c3311b1ad6b64befb3df"},
+{file = "tiktoken-0.9.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:e5fd49e7799579240f03913447c0cdfa1129625ebd5ac440787afc4345990427"},
+{file = "tiktoken-0.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:26242ca9dc8b58e875ff4ca078b9a94d2f0813e6a535dcd2205df5d49d927cc7"},
+{file = "tiktoken-0.9.0.tar.gz", hash = "sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d"},
]
[package.dependencies]
@@ -8440,4 +8395,4 @@ cffi = ["cffi (>=1.17,<2.0) ; platform_python_implementation != \"PyPy\" and pyt
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.14"
-content-hash = "c06e96ad49388ba7a46786e9ea55ea2c1a57408e15613237b4bee40a592a12af"
+content-hash = "1e226d8f7a342d17a85c036bfdfdf2ccc7d9e52c96644022fa69bf6044046528"

View File

@@ -12,7 +12,7 @@ python = ">=3.10,<3.14"
aio-pika = "^9.5.5"
aiohttp = "^3.10.0"
aiodns = "^3.5.0"
-anthropic = "^0.79.0"
+anthropic = "^0.59.0"
apscheduler = "^3.11.1"
autogpt-libs = { path = "../autogpt_libs", develop = true }
bleach = { extras = ["css"], version = "^6.2.0" }
@@ -21,7 +21,7 @@ cryptography = "^46.0"
discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.2"
elevenlabs = "^1.50.0"
-fastapi = "^0.128.6"
+fastapi = "^0.128.0"
feedparser = "^6.0.11"
flake8 = "^7.3.0"
google-api-python-client = "^2.177.0"
@@ -34,11 +34,11 @@ html2text = "^2024.2.26"
jinja2 = "^3.1.6"
jsonref = "^1.1.0"
jsonschema = "^4.25.0"
-langfuse = "^3.14.1"
+langfuse = "^3.11.0"
launchdarkly-server-sdk = "^9.14.1"
mem0ai = "^0.1.115"
moviepy = "^2.1.2"
-ollama = "^0.6.1"
+ollama = "^0.5.1"
openai = "^1.97.1"
orjson = "^3.10.0"
pika = "^1.3.2"
@@ -48,7 +48,7 @@ postmarker = "^1.0"
praw = "~7.8.1"
prisma = "^0.15.0"
rank-bm25 = "^0.2.2"
-prometheus-client = "^0.24.1"
+prometheus-client = "^0.22.1"
prometheus-fastapi-instrumentator = "^7.0.0"
psutil = "^7.0.0"
psycopg2-binary = "^2.9.10"
@@ -57,7 +57,7 @@ pydantic-settings = "^2.12.0"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
python-dotenv = "^1.1.1"
-python-multipart = "^0.0.22"
+python-multipart = "^0.0.20"
redis = "^6.2.0"
regex = "^2025.9.18"
replicate = "^1.0.6"
@@ -65,8 +65,8 @@ sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlal
sqlalchemy = "^2.0.40"
strenum = "^0.4.9"
stripe = "^11.5.0"
-supabase = "2.27.3"
+supabase = "2.27.2"
-tenacity = "^9.1.4"
+tenacity = "^9.1.2"
todoist-api-python = "^2.1.7"
tweepy = "^4.16.0"
uvicorn = { extras = ["standard"], version = "^0.40.0" }
@@ -76,8 +76,8 @@ yt-dlp = "2025.12.08"
zerobouncesdk = "^1.1.2"
# NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0"
-aiofiles = "^25.1.0"
+aiofiles = "^24.1.0"
-tiktoken = "^0.12.0"
+tiktoken = "^0.9.0"
aioclamd = "^1.0.0"
setuptools = "^80.9.0"
gcloud-aio-storage = "^9.5.0"
@@ -95,13 +95,13 @@ black = "^24.10.0"
faker = "^38.2.0"
httpx = "^0.28.1"
isort = "^5.13.2"
-poethepoet = "^0.41.0"
+poethepoet = "^0.37.0"
pre-commit = "^4.4.0"
pyright = "^1.1.407"
pytest-mock = "^3.15.1"
-pytest-watcher = "^0.6.3"
+pytest-watcher = "^0.4.2"
requests = "^2.32.5"
-ruff = "^0.15.0"
+ruff = "^0.14.5"
# NOTE: please insert new dependencies in their alphabetical location
[build-system]

View File

@@ -70,6 +70,10 @@ model User {
OAuthAuthorizationCodes OAuthAuthorizationCode[]
OAuthAccessTokens OAuthAccessToken[]
OAuthRefreshTokens OAuthRefreshToken[]
+// Waitlist relations
+waitlistEntries WaitlistEntry[]
+joinedWaitlists WaitlistEntry[] @relation("joinedWaitlists")
}
enum OnboardingStep {
@@ -344,6 +348,7 @@ enum NotificationType {
REFUND_PROCESSED
AGENT_APPROVED
AGENT_REJECTED
+WAITLIST_LAUNCH
}
model NotificationEvent {
@@ -949,6 +954,7 @@ model StoreListing {
// Relations
Versions StoreListingVersion[] @relation("ListingVersions")
+waitlistEntries WaitlistEntry[]
// Unique index on agentId to ensure only one listing per agent, regardless of number of versions the agent has.
@@unique([agentGraphId])
@@ -1080,6 +1086,47 @@ model StoreListingReview {
@@index([reviewByUserId])
}
+enum WaitlistExternalStatus {
+DONE
+NOT_STARTED
+CANCELED
+WORK_IN_PROGRESS
+}
+model WaitlistEntry {
+id String @id @default(uuid())
+createdAt DateTime @default(now())
+updatedAt DateTime @updatedAt
+storeListingId String?
+StoreListing StoreListing? @relation(fields: [storeListingId], references: [id], onDelete: SetNull)
+owningUserId String
+OwningUser User @relation(fields: [owningUserId], references: [id])
+slug String
+search Unsupported("tsvector")? @default(dbgenerated("''::tsvector"))
+// Content fields
+name String
+subHeading String
+videoUrl String?
+agentOutputDemoUrl String?
+imageUrls String[]
+description String
+categories String[]
+// Waitlist specific fields
+status WaitlistExternalStatus @default(NOT_STARTED)
+votes Int @default(0) // Hide from frontend api
+joinedUsers User[] @relation("joinedWaitlists")
+// NOTE: DO NOT DOUBLE SEND TO THESE USERS, IF THEY HAVE SIGNED UP SINCE THEY MAY HAVE ALREADY RECEIVED AN EMAIL
+// DOUBLE CHECK WHEN SENDING THAT THEY ARE NOT IN THE JOINED USERS LIST ALSO
+unaffiliatedEmailUsers String[] @default([])
+isDeleted Boolean @default(false)
+}
enum SubmissionStatus {
DRAFT // Being prepared, not yet submitted
PENDING // Submitted, awaiting review
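For reference, a hedged sketch of how the WaitlistEntry model above might be used from Prisma Client Python (the backend pins prisma ^0.15.0). The function, its arguments, and the vote/connect semantics are illustrative, not the project's actual API:

from prisma import Prisma


async def join_waitlist(db: Prisma, waitlist_id: str, user_id: str) -> None:
    entry = await db.waitlistentry.find_unique(where={"id": waitlist_id})
    if entry is None or entry.isDeleted:
        raise ValueError("Waitlist not found")
    # Prisma's update() is typed as returning an Optional result even when the
    # row was just verified to exist, so callers typically assert on it.
    updated = await db.waitlistentry.update(
        where={"id": waitlist_id},
        data={
            "votes": {"increment": 1},
            "joinedUsers": {"connect": [{"id": user_id}]},
        },
    )
    assert updated is not None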

View File

@@ -25,12 +25,8 @@ RUN if [ -f .env.production ]; then \
cp .env.default .env; \
fi
RUN pnpm run generate:api
-# Disable source-map generation in Docker builds to halve webpack memory usage.
-# Source maps are only useful when SENTRY_AUTH_TOKEN is set (Vercel deploys);
-# the Docker image never uploads them, so generating them just wastes RAM.
-ENV NEXT_PUBLIC_SOURCEMAPS="false"
# In CI, we want NEXT_PUBLIC_PW_TEST=true during build so Next.js inlines it
-RUN if [ "$NEXT_PUBLIC_PW_TEST" = "true" ]; then NEXT_PUBLIC_PW_TEST=true NODE_OPTIONS="--max-old-space-size=8192" pnpm build; else NODE_OPTIONS="--max-old-space-size=8192" pnpm build; fi
+RUN if [ "$NEXT_PUBLIC_PW_TEST" = "true" ]; then NEXT_PUBLIC_PW_TEST=true NODE_OPTIONS="--max-old-space-size=4096" pnpm build; else NODE_OPTIONS="--max-old-space-size=4096" pnpm build; fi
# Prod stage - based on NextJS reference Dockerfile https://github.com/vercel/next.js/blob/64271354533ed16da51be5dce85f0dbd15f17517/examples/with-docker/Dockerfile
FROM node:21-alpine AS prod

View File

@@ -1,12 +1,8 @@
import { withSentryConfig } from "@sentry/nextjs";
-// Allow Docker builds to skip source-map generation (halves memory usage).
-// Defaults to true so Vercel/local builds are unaffected.
-const enableSourceMaps = process.env.NEXT_PUBLIC_SOURCEMAPS !== "false";
/** @type {import('next').NextConfig} */
const nextConfig = {
-productionBrowserSourceMaps: enableSourceMaps,
+productionBrowserSourceMaps: true,
// Externalize OpenTelemetry packages to fix Turbopack HMR issues
serverExternalPackages: [
"@opentelemetry/instrumentation",
@@ -18,37 +14,9 @@ const nextConfig = {
serverActions: {
bodySizeLimit: "256mb",
},
-// Increase body size limit for API routes (file uploads) - 256MB to match backend limit
-proxyClientMaxBodySize: "256mb",
middlewareClientMaxBodySize: "256mb",
-// Limit parallel webpack workers to reduce peak memory during builds.
-cpus: 2,
-},
-// Work around cssnano "Invalid array length" bug in Next.js's bundled
-// cssnano-simple comment parser when processing very large CSS chunks.
-// CSS is still bundled correctly; gzip handles most of the size savings anyway.
-webpack: (config, { dev }) => {
-if (!dev) {
-// Next.js adds CssMinimizerPlugin internally (after user config), so we
-// can't filter it from config.plugins. Instead, intercept the webpack
-// compilation hooks and replace the buggy plugin's tap with a no-op.
-config.plugins.push({
-apply(compiler) {
-compiler.hooks.compilation.tap(
-"DisableCssMinimizer",
-(compilation) => {
-compilation.hooks.processAssets.intercept({
-register: (tap) => {
-if (tap.name === "CssMinimizerPlugin") {
-return { ...tap, fn: async () => {} };
-}
-return tap;
-},
-});
-},
-);
-},
-});
-}
-return config;
},
images: {
domains: [
@@ -86,16 +54,9 @@ const nextConfig = {
transpilePackages: ["geist"],
};
-// Only run the Sentry webpack plugin when we can actually upload source maps
-// (i.e. on Vercel with SENTRY_AUTH_TOKEN set). The Sentry *runtime* SDK
-// (imported in app code) still captures errors without the plugin.
-// Skipping the plugin saves ~1 GB of peak memory during `next build`.
-const skipSentryPlugin =
-process.env.NODE_ENV !== "production" ||
-!enableSourceMaps ||
-!process.env.SENTRY_AUTH_TOKEN;
-export default skipSentryPlugin
+const isDevelopmentBuild = process.env.NODE_ENV !== "production";
+export default isDevelopmentBuild
? nextConfig
: withSentryConfig(nextConfig, {
// For all available options, see:
@@ -135,7 +96,7 @@ export default skipSentryPlugin
// This helps Sentry with sourcemaps... https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/
sourcemaps: {
-disable: !enableSourceMaps,
+disable: false,
assets: [".next/**/*.js", ".next/**/*.js.map"],
ignore: ["**/node_modules/**"],
deleteSourcemapsAfterUpload: false, // Source is public anyway :)

View File

@@ -7,7 +7,7 @@
},
"scripts": {
"dev": "pnpm run generate:api:force && next dev --turbo",
-"build": "cross-env NODE_OPTIONS=--max-old-space-size=16384 next build",
+"build": "next build",
"start": "next start",
"start:standalone": "cd .next/standalone && node server.js",
"lint": "next lint && prettier --check .",
@@ -30,7 +30,6 @@
"defaults" "defaults"
], ],
"dependencies": { "dependencies": {
"@ai-sdk/react": "3.0.61",
"@faker-js/faker": "10.0.0", "@faker-js/faker": "10.0.0",
"@hookform/resolvers": "5.2.2", "@hookform/resolvers": "5.2.2",
"@next/third-parties": "15.4.6", "@next/third-parties": "15.4.6",
@@ -61,10 +60,6 @@
"@rjsf/utils": "6.1.2", "@rjsf/utils": "6.1.2",
"@rjsf/validator-ajv8": "6.1.2", "@rjsf/validator-ajv8": "6.1.2",
"@sentry/nextjs": "10.27.0", "@sentry/nextjs": "10.27.0",
"@streamdown/cjk": "1.0.1",
"@streamdown/code": "1.0.1",
"@streamdown/math": "1.0.1",
"@streamdown/mermaid": "1.0.1",
"@supabase/ssr": "0.7.0", "@supabase/ssr": "0.7.0",
"@supabase/supabase-js": "2.78.0", "@supabase/supabase-js": "2.78.0",
"@tanstack/react-query": "5.90.6", "@tanstack/react-query": "5.90.6",
@@ -73,7 +68,6 @@
"@vercel/analytics": "1.5.0", "@vercel/analytics": "1.5.0",
"@vercel/speed-insights": "1.2.0", "@vercel/speed-insights": "1.2.0",
"@xyflow/react": "12.9.2", "@xyflow/react": "12.9.2",
"ai": "6.0.59",
"boring-avatars": "1.11.2", "boring-avatars": "1.11.2",
"class-variance-authority": "0.7.1", "class-variance-authority": "0.7.1",
"clsx": "2.1.1", "clsx": "2.1.1",
@@ -93,6 +87,7 @@
"launchdarkly-react-client-sdk": "3.9.0", "launchdarkly-react-client-sdk": "3.9.0",
"lodash": "4.17.21", "lodash": "4.17.21",
"lucide-react": "0.552.0", "lucide-react": "0.552.0",
"moment": "2.30.1",
"next": "15.4.10", "next": "15.4.10",
"next-themes": "0.4.6", "next-themes": "0.4.6",
"nuqs": "2.7.2", "nuqs": "2.7.2",
@@ -107,7 +102,7 @@
"react-markdown": "9.0.3", "react-markdown": "9.0.3",
"react-modal": "3.16.3", "react-modal": "3.16.3",
"react-shepherd": "6.1.9", "react-shepherd": "6.1.9",
"react-window": "2.2.0", "react-window": "1.8.11",
"recharts": "3.3.0", "recharts": "3.3.0",
"rehype-autolink-headings": "7.1.0", "rehype-autolink-headings": "7.1.0",
"rehype-highlight": "7.0.2", "rehype-highlight": "7.0.2",
@@ -117,11 +112,9 @@
"remark-math": "6.0.0", "remark-math": "6.0.0",
"shepherd.js": "14.5.1", "shepherd.js": "14.5.1",
"sonner": "2.0.7", "sonner": "2.0.7",
"streamdown": "2.1.0",
"tailwind-merge": "2.6.0", "tailwind-merge": "2.6.0",
"tailwind-scrollbar": "3.1.0", "tailwind-scrollbar": "3.1.0",
"tailwindcss-animate": "1.0.7", "tailwindcss-animate": "1.0.7",
"use-stick-to-bottom": "1.1.2",
"uuid": "11.1.0", "uuid": "11.1.0",
"vaul": "1.1.2", "vaul": "1.1.2",
"zod": "3.25.76", "zod": "3.25.76",
@@ -147,7 +140,7 @@
"@types/react": "18.3.17", "@types/react": "18.3.17",
"@types/react-dom": "18.3.5", "@types/react-dom": "18.3.5",
"@types/react-modal": "3.16.3", "@types/react-modal": "3.16.3",
"@types/react-window": "2.0.0", "@types/react-window": "1.8.8",
"@vitejs/plugin-react": "5.1.2", "@vitejs/plugin-react": "5.1.2",
"axe-playwright": "2.2.2", "axe-playwright": "2.2.2",
"chromatic": "13.3.3", "chromatic": "13.3.3",
@@ -179,8 +172,7 @@
}, },
"pnpm": { "pnpm": {
"overrides": { "overrides": {
"@opentelemetry/instrumentation": "0.209.0", "@opentelemetry/instrumentation": "0.209.0"
"lodash-es": "4.17.23"
} }
}, },
"packageManager": "pnpm@10.20.0+sha512.cf9998222162dd85864d0a8102e7892e7ba4ceadebbf5a31f9c2fce48dfce317a9c53b9f6464d1ef9042cba2e02ae02a9f7c143a2b438cd93c91840f0192b9dd" "packageManager": "pnpm@10.20.0+sha512.cf9998222162dd85864d0a8102e7892e7ba4ceadebbf5a31f9c2fce48dfce317a9c53b9f6464d1ef9042cba2e02ae02a9f7c143a2b438cd93c91840f0192b9dd"

File diff suppressed because it is too large

View File

@@ -1,5 +1,5 @@
import { Sidebar } from "@/components/__legacy__/Sidebar";
-import { Users, DollarSign, UserSearch, FileText } from "lucide-react";
+import { Users, DollarSign, UserSearch, FileText, Clock } from "lucide-react";
import { IconSliders } from "@/components/__legacy__/ui/icons";
@@ -11,6 +11,11 @@ const sidebarLinkGroups = [
href: "/admin/marketplace",
icon: <Users className="h-6 w-6" />,
},
+{
+  text: "Waitlist Management",
+  href: "/admin/waitlist",
+  icon: <Clock className="h-6 w-6" />,
+},
{
text: "User Spending",
href: "/admin/spending",

View File

@@ -0,0 +1,217 @@
"use client";
import { useState } from "react";
import { useQueryClient } from "@tanstack/react-query";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import {
usePostV2CreateWaitlist,
getGetV2ListAllWaitlistsQueryKey,
} from "@/app/api/__generated__/endpoints/admin/admin";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { Plus } from "@phosphor-icons/react";
export function CreateWaitlistButton() {
const [open, setOpen] = useState(false);
const { toast } = useToast();
const queryClient = useQueryClient();
const createWaitlistMutation = usePostV2CreateWaitlist({
mutation: {
onSuccess: (response) => {
if (response.status === 200) {
toast({
title: "Success",
description: "Waitlist created successfully",
});
setOpen(false);
setFormData({
name: "",
slug: "",
subHeading: "",
description: "",
categories: "",
imageUrls: "",
videoUrl: "",
agentOutputDemoUrl: "",
});
queryClient.invalidateQueries({
queryKey: getGetV2ListAllWaitlistsQueryKey(),
});
} else {
toast({
variant: "destructive",
title: "Error",
description: "Failed to create waitlist",
});
}
},
onError: (error) => {
console.error("Error creating waitlist:", error);
toast({
variant: "destructive",
title: "Error",
description: "Failed to create waitlist",
});
},
},
});
const [formData, setFormData] = useState({
name: "",
slug: "",
subHeading: "",
description: "",
categories: "",
imageUrls: "",
videoUrl: "",
agentOutputDemoUrl: "",
});
function handleInputChange(id: string, value: string) {
setFormData((prev) => ({
...prev,
[id]: value,
}));
}
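// Build a URL-safe slug: lowercase, collapse runs of non-alphanumerics to "-", trim stray leading/trailing dashes.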
function generateSlug(name: string) {
return name
.toLowerCase()
.replace(/[^a-z0-9]+/g, "-")
.replace(/^-|-$/g, "");
}
function handleSubmit(e: React.FormEvent) {
e.preventDefault();
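// Comma-separated fields are split into trimmed arrays; empty optional URLs are sent as null, and a missing slug is derived from the name.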
createWaitlistMutation.mutate({
data: {
name: formData.name,
slug: formData.slug || generateSlug(formData.name),
subHeading: formData.subHeading,
description: formData.description,
categories: formData.categories
? formData.categories.split(",").map((c) => c.trim())
: [],
imageUrls: formData.imageUrls
? formData.imageUrls.split(",").map((u) => u.trim())
: [],
videoUrl: formData.videoUrl || null,
agentOutputDemoUrl: formData.agentOutputDemoUrl || null,
},
});
}
return (
<>
<Button onClick={() => setOpen(true)}>
<Plus size={16} className="mr-2" />
Create Waitlist
</Button>
<Dialog
title="Create New Waitlist"
controlled={{
isOpen: open,
set: async (isOpen) => setOpen(isOpen),
}}
onClose={() => setOpen(false)}
styling={{ maxWidth: "600px" }}
>
<Dialog.Content>
<p className="mb-4 text-sm text-zinc-500">
Create a new waitlist for an upcoming agent. Users can sign up to be
notified when it launches.
</p>
<form onSubmit={handleSubmit} className="flex flex-col gap-2">
<Input
id="name"
label="Name"
value={formData.name}
onChange={(e) => handleInputChange("name", e.target.value)}
placeholder="SEO Analysis Agent"
required
/>
<Input
id="slug"
label="Slug"
value={formData.slug}
onChange={(e) => handleInputChange("slug", e.target.value)}
placeholder="seo-analysis-agent (auto-generated if empty)"
/>
<Input
id="subHeading"
label="Subheading"
value={formData.subHeading}
onChange={(e) => handleInputChange("subHeading", e.target.value)}
placeholder="Analyze your website's SEO in minutes"
required
/>
<Input
id="description"
label="Description"
type="textarea"
value={formData.description}
onChange={(e) => handleInputChange("description", e.target.value)}
placeholder="Detailed description of what this agent does..."
rows={4}
required
/>
<Input
id="categories"
label="Categories (comma-separated)"
value={formData.categories}
onChange={(e) => handleInputChange("categories", e.target.value)}
placeholder="SEO, Marketing, Analysis"
/>
<Input
id="imageUrls"
label="Image URLs (comma-separated)"
value={formData.imageUrls}
onChange={(e) => handleInputChange("imageUrls", e.target.value)}
placeholder="https://example.com/image1.jpg, https://example.com/image2.jpg"
/>
<Input
id="videoUrl"
label="Video URL (optional)"
value={formData.videoUrl}
onChange={(e) => handleInputChange("videoUrl", e.target.value)}
placeholder="https://youtube.com/watch?v=..."
/>
<Input
id="agentOutputDemoUrl"
label="Output Demo URL (optional)"
value={formData.agentOutputDemoUrl}
onChange={(e) =>
handleInputChange("agentOutputDemoUrl", e.target.value)
}
placeholder="https://example.com/demo-output.mp4"
/>
<Dialog.Footer>
<Button
type="button"
variant="secondary"
onClick={() => setOpen(false)}
>
Cancel
</Button>
<Button type="submit" loading={createWaitlistMutation.isPending}>
Create Waitlist
</Button>
</Dialog.Footer>
</form>
</Dialog.Content>
</Dialog>
</>
);
}

View File

@@ -0,0 +1,221 @@
"use client";
import { useState } from "react";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Select } from "@/components/atoms/Select/Select";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { usePutV2UpdateWaitlist } from "@/app/api/__generated__/endpoints/admin/admin";
import type { WaitlistAdminResponse } from "@/app/api/__generated__/models/waitlistAdminResponse";
import type { WaitlistUpdateRequest } from "@/app/api/__generated__/models/waitlistUpdateRequest";
import { WaitlistExternalStatus } from "@/app/api/__generated__/models/waitlistExternalStatus";
type EditWaitlistDialogProps = {
waitlist: WaitlistAdminResponse;
onClose: () => void;
onSave: () => void;
};
const STATUS_OPTIONS = [
{ value: WaitlistExternalStatus.NOT_STARTED, label: "Not Started" },
{ value: WaitlistExternalStatus.WORK_IN_PROGRESS, label: "Work In Progress" },
{ value: WaitlistExternalStatus.DONE, label: "Done" },
{ value: WaitlistExternalStatus.CANCELED, label: "Canceled" },
];
export function EditWaitlistDialog({
waitlist,
onClose,
onSave,
}: EditWaitlistDialogProps) {
const { toast } = useToast();
const updateWaitlistMutation = usePutV2UpdateWaitlist();
const [formData, setFormData] = useState({
name: waitlist.name,
slug: waitlist.slug,
subHeading: waitlist.subHeading,
description: waitlist.description,
categories: waitlist.categories.join(", "),
imageUrls: waitlist.imageUrls.join(", "),
videoUrl: waitlist.videoUrl || "",
agentOutputDemoUrl: waitlist.agentOutputDemoUrl || "",
status: waitlist.status,
storeListingId: waitlist.storeListingId || "",
});
function handleInputChange(id: string, value: string) {
setFormData((prev) => ({
...prev,
[id]: value,
}));
}
function handleStatusChange(value: string) {
setFormData((prev) => ({
...prev,
status: value as WaitlistExternalStatus,
}));
}
async function handleSubmit(e: React.FormEvent) {
e.preventDefault();
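// Normalize comma-separated fields into trimmed arrays and map empty optional fields to null before sending the update.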
const updateData: WaitlistUpdateRequest = {
name: formData.name,
slug: formData.slug,
subHeading: formData.subHeading,
description: formData.description,
categories: formData.categories
? formData.categories.split(",").map((c) => c.trim())
: [],
imageUrls: formData.imageUrls
? formData.imageUrls.split(",").map((u) => u.trim())
: [],
videoUrl: formData.videoUrl || null,
agentOutputDemoUrl: formData.agentOutputDemoUrl || null,
status: formData.status,
storeListingId: formData.storeListingId || null,
};
updateWaitlistMutation.mutate(
{ waitlistId: waitlist.id, data: updateData },
{
onSuccess: (response) => {
if (response.status === 200) {
toast({
title: "Success",
description: "Waitlist updated successfully",
});
onSave();
} else {
toast({
variant: "destructive",
title: "Error",
description: "Failed to update waitlist",
});
}
},
onError: () => {
toast({
variant: "destructive",
title: "Error",
description: "Failed to update waitlist",
});
},
},
);
}
return (
<Dialog
title="Edit Waitlist"
controlled={{
isOpen: true,
set: async (open) => {
if (!open) onClose();
},
}}
onClose={onClose}
styling={{ maxWidth: "600px" }}
>
<Dialog.Content>
<p className="mb-4 text-sm text-zinc-500">
Update the waitlist details. Changes will be reflected immediately.
</p>
<form onSubmit={handleSubmit} className="flex flex-col gap-2">
<Input
id="name"
label="Name"
value={formData.name}
onChange={(e) => handleInputChange("name", e.target.value)}
required
/>
<Input
id="slug"
label="Slug"
value={formData.slug}
onChange={(e) => handleInputChange("slug", e.target.value)}
/>
<Input
id="subHeading"
label="Subheading"
value={formData.subHeading}
onChange={(e) => handleInputChange("subHeading", e.target.value)}
required
/>
<Input
id="description"
label="Description"
type="textarea"
value={formData.description}
onChange={(e) => handleInputChange("description", e.target.value)}
rows={4}
required
/>
<Select
id="status"
label="Status"
value={formData.status}
onValueChange={handleStatusChange}
options={STATUS_OPTIONS}
/>
<Input
id="categories"
label="Categories (comma-separated)"
value={formData.categories}
onChange={(e) => handleInputChange("categories", e.target.value)}
/>
<Input
id="imageUrls"
label="Image URLs (comma-separated)"
value={formData.imageUrls}
onChange={(e) => handleInputChange("imageUrls", e.target.value)}
/>
<Input
id="videoUrl"
label="Video URL"
value={formData.videoUrl}
onChange={(e) => handleInputChange("videoUrl", e.target.value)}
/>
<Input
id="agentOutputDemoUrl"
label="Output Demo URL"
value={formData.agentOutputDemoUrl}
onChange={(e) =>
handleInputChange("agentOutputDemoUrl", e.target.value)
}
/>
<Input
id="storeListingId"
label="Store Listing ID (for linking)"
value={formData.storeListingId}
onChange={(e) =>
handleInputChange("storeListingId", e.target.value)
}
placeholder="Leave empty if not linked"
/>
<Dialog.Footer>
<Button type="button" variant="secondary" onClick={onClose}>
Cancel
</Button>
<Button type="submit" loading={updateWaitlistMutation.isPending}>
Save Changes
</Button>
</Dialog.Footer>
</form>
</Dialog.Content>
</Dialog>
);
}

View File

@@ -0,0 +1,156 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { User, Envelope, DownloadSimple } from "@phosphor-icons/react";
import { useGetV2GetWaitlistSignups } from "@/app/api/__generated__/endpoints/admin/admin";
type WaitlistSignupsDialogProps = {
waitlistId: string;
onClose: () => void;
};
export function WaitlistSignupsDialog({
waitlistId,
onClose,
}: WaitlistSignupsDialogProps) {
const {
data: signupsResponse,
isLoading,
isError,
} = useGetV2GetWaitlistSignups(waitlistId);
const signups = signupsResponse?.status === 200 ? signupsResponse.data : null;
function exportToCSV() {
if (!signups) return;
const headers = ["Type", "Email", "User ID", "Username"];
const rows = signups.signups.map((signup) => [
signup.type,
signup.email || "",
signup.userId || "",
signup.username || "",
]);
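// Quote every cell and double any embedded quotes so commas and quotes survive CSV parsing.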
const escapeCell = (cell: string) => `"${cell.replace(/"/g, '""')}"`;
const csvContent = [
headers.join(","),
...rows.map((row) => row.map(escapeCell).join(",")),
].join("\n");
const blob = new Blob([csvContent], { type: "text/csv" });
const url = window.URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = `waitlist-${waitlistId}-signups.csv`;
a.click();
window.URL.revokeObjectURL(url);
}
function renderContent() {
if (isLoading) {
return <div className="py-10 text-center">Loading signups...</div>;
}
if (isError) {
return (
<div className="py-10 text-center text-red-500">
Failed to load signups. Please try again.
</div>
);
}
if (!signups || signups.signups.length === 0) {
return (
<div className="py-10 text-center text-gray-500">
No signups yet for this waitlist.
</div>
);
}
return (
<>
<div className="flex justify-end">
<Button variant="secondary" size="small" onClick={exportToCSV}>
<DownloadSimple className="mr-2 h-4 w-4" size={16} />
Export CSV
</Button>
</div>
<div className="max-h-[400px] overflow-y-auto rounded-md border">
<table className="w-full">
<thead className="bg-gray-50 dark:bg-gray-800">
<tr>
<th className="px-4 py-3 text-left text-sm font-medium">
Type
</th>
<th className="px-4 py-3 text-left text-sm font-medium">
Email / Username
</th>
<th className="px-4 py-3 text-left text-sm font-medium">
User ID
</th>
</tr>
</thead>
<tbody className="divide-y">
{signups.signups.map((signup, index) => (
<tr key={index}>
<td className="px-4 py-3">
{signup.type === "user" ? (
<span className="flex items-center gap-1 text-blue-600">
<User className="h-4 w-4" size={16} /> User
</span>
) : (
<span className="flex items-center gap-1 text-gray-600">
<Envelope className="h-4 w-4" size={16} /> Email
</span>
)}
</td>
<td className="px-4 py-3">
{signup.type === "user"
? signup.username || signup.email
: signup.email}
</td>
<td className="px-4 py-3 font-mono text-sm">
{signup.userId || "-"}
</td>
</tr>
))}
</tbody>
</table>
</div>
</>
);
}
return (
<Dialog
title="Waitlist Signups"
controlled={{
isOpen: true,
set: async (open) => {
if (!open) onClose();
},
}}
onClose={onClose}
styling={{ maxWidth: "700px" }}
>
<Dialog.Content>
<p className="mb-4 text-sm text-zinc-500">
{signups
? `${signups.totalCount} total signups`
: "Loading signups..."}
</p>
{renderContent()}
<Dialog.Footer>
<Button variant="secondary" onClick={onClose}>
Close
</Button>
</Dialog.Footer>
</Dialog.Content>
</Dialog>
);
}

View File

@@ -0,0 +1,214 @@
"use client";
import { useState } from "react";
import { useQueryClient } from "@tanstack/react-query";
import {
Table,
TableBody,
TableCell,
TableHead,
TableHeader,
TableRow,
} from "@/components/__legacy__/ui/table";
import { Button } from "@/components/atoms/Button/Button";
import {
useGetV2ListAllWaitlists,
useDeleteV2DeleteWaitlist,
getGetV2ListAllWaitlistsQueryKey,
} from "@/app/api/__generated__/endpoints/admin/admin";
import type { WaitlistAdminResponse } from "@/app/api/__generated__/models/waitlistAdminResponse";
import { EditWaitlistDialog } from "./EditWaitlistDialog";
import { WaitlistSignupsDialog } from "./WaitlistSignupsDialog";
import { Trash, PencilSimple, Users, Link } from "@phosphor-icons/react";
import { useToast } from "@/components/molecules/Toast/use-toast";
export function WaitlistTable() {
const [editingWaitlist, setEditingWaitlist] =
useState<WaitlistAdminResponse | null>(null);
const [viewingSignups, setViewingSignups] = useState<string | null>(null);
const { toast } = useToast();
const queryClient = useQueryClient();
const { data: response, isLoading, error } = useGetV2ListAllWaitlists();
const deleteWaitlistMutation = useDeleteV2DeleteWaitlist({
mutation: {
onSuccess: (response) => {
if (response.status === 200) {
toast({
title: "Success",
description: "Waitlist deleted successfully",
});
queryClient.invalidateQueries({
queryKey: getGetV2ListAllWaitlistsQueryKey(),
});
} else {
toast({
variant: "destructive",
title: "Error",
description: "Failed to delete waitlist",
});
}
},
onError: (error) => {
console.error("Error deleting waitlist:", error);
toast({
variant: "destructive",
title: "Error",
description: "Failed to delete waitlist",
});
},
},
});
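// Ask for confirmation before deleting; a successful mutation invalidates the list query above.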
function handleDelete(waitlistId: string) {
if (!confirm("Are you sure you want to delete this waitlist?")) return;
deleteWaitlistMutation.mutate({ waitlistId });
}
function handleWaitlistSaved() {
setEditingWaitlist(null);
queryClient.invalidateQueries({
queryKey: getGetV2ListAllWaitlistsQueryKey(),
});
}
function formatStatus(status: string) {
const statusColors: Record<string, string> = {
NOT_STARTED: "bg-gray-100 text-gray-800",
WORK_IN_PROGRESS: "bg-blue-100 text-blue-800",
DONE: "bg-green-100 text-green-800",
CANCELED: "bg-red-100 text-red-800",
};
return (
<span
className={`rounded-full px-2 py-1 text-xs font-medium ${statusColors[status] || "bg-gray-100 text-gray-700"}`}
>
{status.replace(/_/g, " ")}
</span>
);
}
function formatDate(dateStr: string) {
if (!dateStr) return "-";
return new Intl.DateTimeFormat("en-US", {
month: "short",
day: "numeric",
year: "numeric",
}).format(new Date(dateStr));
}
if (isLoading) {
return <div className="py-10 text-center">Loading waitlists...</div>;
}
if (error) {
return (
<div className="py-10 text-center text-red-500">
Error loading waitlists. Please try again.
</div>
);
}
const waitlists = response?.status === 200 ? response.data.waitlists : [];
if (waitlists.length === 0) {
return (
<div className="py-10 text-center text-gray-500">
No waitlists found. Create one to get started!
</div>
);
}
return (
<>
<div className="rounded-md border bg-white">
<Table>
<TableHeader className="bg-gray-50">
<TableRow>
<TableHead className="font-medium">Name</TableHead>
<TableHead className="font-medium">Status</TableHead>
<TableHead className="font-medium">Signups</TableHead>
<TableHead className="font-medium">Votes</TableHead>
<TableHead className="font-medium">Created</TableHead>
<TableHead className="font-medium">Linked Agent</TableHead>
<TableHead className="font-medium">Actions</TableHead>
</TableRow>
</TableHeader>
<TableBody>
{waitlists.map((waitlist) => (
<TableRow key={waitlist.id}>
<TableCell>
<div>
<div className="font-medium">{waitlist.name}</div>
<div className="text-sm text-gray-500">
{waitlist.subHeading}
</div>
</div>
</TableCell>
<TableCell>{formatStatus(waitlist.status)}</TableCell>
<TableCell>{waitlist.signupCount}</TableCell>
<TableCell>{waitlist.votes}</TableCell>
<TableCell>{formatDate(waitlist.createdAt)}</TableCell>
<TableCell>
{waitlist.storeListingId ? (
<span className="text-green-600">
<Link size={16} className="inline" /> Linked
</span>
) : (
<span className="text-gray-400">Not linked</span>
)}
</TableCell>
<TableCell>
<div className="flex gap-2">
<Button
variant="ghost"
size="small"
onClick={() => setViewingSignups(waitlist.id)}
title="View signups"
>
<Users size={16} />
</Button>
<Button
variant="ghost"
size="small"
onClick={() => setEditingWaitlist(waitlist)}
title="Edit"
>
<PencilSimple size={16} />
</Button>
<Button
variant="ghost"
size="small"
onClick={() => handleDelete(waitlist.id)}
title="Delete"
disabled={deleteWaitlistMutation.isPending}
>
<Trash size={16} className="text-red-500" />
</Button>
</div>
</TableCell>
</TableRow>
))}
</TableBody>
</Table>
</div>
{editingWaitlist && (
<EditWaitlistDialog
waitlist={editingWaitlist}
onClose={() => setEditingWaitlist(null)}
onSave={handleWaitlistSaved}
/>
)}
{viewingSignups && (
<WaitlistSignupsDialog
waitlistId={viewingSignups}
onClose={() => setViewingSignups(null)}
/>
)}
</>
);
}

View File

@@ -0,0 +1,52 @@
import { withRoleAccess } from "@/lib/withRoleAccess";
import { Suspense } from "react";
import { WaitlistTable } from "./components/WaitlistTable";
import { CreateWaitlistButton } from "./components/CreateWaitlistButton";
import { Warning } from "@phosphor-icons/react/dist/ssr";
function WaitlistDashboard() {
return (
<div className="mx-auto p-6">
<div className="flex flex-col gap-4">
<div className="flex items-center justify-between">
<div>
<h1 className="text-3xl font-bold">Waitlist Management</h1>
<p className="text-gray-500">
Manage upcoming agent waitlists and track signups
</p>
</div>
<CreateWaitlistButton />
</div>
<div className="flex items-start gap-3 rounded-lg border border-amber-300 bg-amber-50 p-4 dark:border-amber-700 dark:bg-amber-950">
<Warning
className="mt-0.5 h-5 w-5 flex-shrink-0 text-amber-600 dark:text-amber-400"
weight="fill"
/>
<div className="text-sm text-amber-800 dark:text-amber-200">
<p className="font-medium">TODO: Email-only signup notifications</p>
<p className="mt-1 text-amber-700 dark:text-amber-300">
Notifications for email-only signups (users who weren&apos;t
logged in) have not been implemented yet. Currently only
registered users will receive launch emails.
</p>
</div>
</div>
<Suspense
fallback={
<div className="py-10 text-center">Loading waitlists...</div>
}
>
<WaitlistTable />
</Suspense>
</div>
</div>
);
}
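// Gate the dashboard behind the admin role before rendering anything.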
export default async function WaitlistDashboardPage() {
const withAdminAccess = await withRoleAccess(["admin"]);
const ProtectedWaitlistDashboard = await withAdminAccess(WaitlistDashboard);
return <ProtectedWaitlistDashboard />;
}

View File

@@ -4,7 +4,7 @@ import {
} from "@/app/api/__generated__/endpoints/graphs/graphs";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
-import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
+import { GraphExecutionMeta } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/use-agent-runs";
import { useGraphStore } from "@/app/(platform)/build/stores/graphStore";
import { useShallow } from "zustand/react/shallow";
import { useEffect, useState } from "react";

View File

@@ -0,0 +1,31 @@
"use client";
import { Tabs, TabsList, TabsTrigger } from "@/components/__legacy__/ui/tabs";
export type BuilderView = "old" | "new";
export function BuilderViewTabs({
value,
onChange,
}: {
value: BuilderView;
onChange: (value: BuilderView) => void;
}) {
return (
<div className="pointer-events-auto fixed right-4 top-20 z-50">
<Tabs
value={value}
onValueChange={(v: string) => onChange(v as BuilderView)}
>
<TabsList className="w-fit bg-zinc-900">
<TabsTrigger value="old" className="text-gray-100">
Old
</TabsTrigger>
<TabsTrigger value="new" className="text-gray-100">
New
</TabsTrigger>
</TabsList>
</Tabs>
</div>
);
}

View File

@@ -23,9 +23,6 @@ import { useCopyPaste } from "./useCopyPaste";
import { useFlow } from "./useFlow";
import { useFlowRealtime } from "./useFlowRealtime";
-import "@xyflow/react/dist/style.css";
-import "./flow.css";
export const Flow = () => {
const [{ flowID, flowExecutionID }] = useQueryStates({
flowID: parseAsString,

View File

@@ -1,9 +0,0 @@
/* Reset default xyflow handle styles so custom Phosphor icon handles render correctly */
.react-flow__handle {
background: transparent;
width: auto;
height: auto;
border: 0;
position: relative;
transform: none;
}

View File

@@ -1,4 +1,4 @@
-import debounce from "lodash/debounce";
+import { debounce } from "lodash";
import { useCallback, useEffect, useRef, useState } from "react";
import { useBlockMenuStore } from "../../../../stores/blockMenuStore";
import { getQueryClient } from "@/lib/react-query/queryClient";

View File

@@ -70,10 +70,10 @@ export const HorizontalScroll: React.FC<HorizontalScrollAreaProps> = ({
{children}
</div>
{canScrollLeft && (
-<div className="pointer-events-none absolute inset-y-0 left-0 w-8 bg-gradient-to-r from-background via-background/80 to-background/0" />
+<div className="pointer-events-none absolute inset-y-0 left-0 w-8 bg-gradient-to-r from-white via-white/80 to-white/0" />
)}
{canScrollRight && (
-<div className="pointer-events-none absolute inset-y-0 right-0 w-8 bg-gradient-to-l from-background via-background/80 to-background/0" />
+<div className="pointer-events-none absolute inset-y-0 right-0 w-8 bg-gradient-to-l from-white via-white/80 to-white/0" />
)}
{canScrollLeft && (
<button

View File

@@ -0,0 +1,83 @@
import { useMemo } from "react";
import { Link } from "@/app/api/__generated__/models/link";
import { useEdgeStore } from "../stores/edgeStore";
import { useNodeStore } from "../stores/nodeStore";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { customEdgeToLink } from "./helper";
export const RightSidebar = () => {
const edges = useEdgeStore((s) => s.edges);
const nodes = useNodeStore((s) => s.nodes);
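// Convert ReactFlow edges to backend Link objects, recomputed only when the edge list changes.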
const backendLinks: Link[] = useMemo(
() => edges.map(customEdgeToLink),
[edges],
);
return (
<div
className={cn(
"flex h-full w-full flex-col border-l border-slate-200 bg-white p-4 dark:border-slate-700 dark:bg-slate-900",
scrollbarStyles,
)}
>
<div className="mb-4">
<h2 className="text-lg font-semibold text-slate-800 dark:text-slate-200">
Graph Debug Panel
</h2>
</div>
<div className="flex-1 overflow-y-auto">
<h3 className="mb-2 text-sm font-semibold text-slate-700 dark:text-slate-200">
Nodes ({nodes.length})
</h3>
<div className="mb-6 space-y-3">
{nodes.map((n) => (
<div
key={n.id}
className="rounded border p-2 text-xs dark:border-slate-700"
>
<div className="mb-1 font-medium">
#{n.id} {n.data?.title ? ` ${n.data.title}` : ""}
</div>
<div className="text-slate-500 dark:text-slate-400">
hardcodedValues
</div>
<pre className="mt-1 max-h-40 overflow-auto rounded bg-slate-50 p-2 dark:bg-slate-800">
{JSON.stringify(n.data?.hardcodedValues ?? {}, null, 2)}
</pre>
</div>
))}
</div>
<h3 className="mb-2 text-sm font-semibold text-slate-700 dark:text-slate-200">
Links ({backendLinks.length})
</h3>
<div className="mb-6 space-y-3">
{backendLinks.map((l) => (
<div
key={l.id}
className="rounded border p-2 text-xs dark:border-slate-700"
>
<div className="font-medium">
{l.source_id}[{l.source_name}] {l.sink_id}[{l.sink_name}]
</div>
<div className="mt-1 text-slate-500 dark:text-slate-400">
edge.id: {l.id}
</div>
</div>
))}
</div>
<h4 className="mb-2 text-xs font-semibold text-slate-600 dark:text-slate-300">
Backend Links JSON
</h4>
<pre className="max-h-64 overflow-auto rounded bg-slate-50 p-2 text-[11px] dark:bg-slate-800">
{JSON.stringify(backendLinks, null, 2)}
</pre>
</div>
</div>
);
};

View File

@@ -1,6 +1,6 @@
import { useCallback } from "react";
-import { AgentRunDraftView } from "@/app/(platform)/build/components/legacy-builder/agent-run-draft-view";
+import { AgentRunDraftView } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import type {
CredentialsMetaInput,

View File

@@ -18,7 +18,7 @@ import {
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useQueryClient } from "@tanstack/react-query";
import { getGetV2ListMySubmissionsQueryKey } from "@/app/api/__generated__/endpoints/store/store";
-import { CronExpressionDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
+import { CronExpressionDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { humanizeCronExpression } from "@/lib/cron-expression-utils";
import { CalendarClockIcon } from "lucide-react";

View File

@@ -1,13 +1,64 @@
"use client";
+import FlowEditor from "@/app/(platform)/build/components/legacy-builder/Flow/Flow";
+import { useOnboarding } from "@/providers/onboarding/onboarding-provider";
+// import LoadingBox from "@/components/__legacy__/ui/loading";
+import { GraphID } from "@/lib/autogpt-server-api/types";
import { ReactFlowProvider } from "@xyflow/react";
+import { useSearchParams } from "next/navigation";
+import { useEffect } from "react";
+import { BuilderViewTabs } from "./components/BuilderViewTabs/BuilderViewTabs";
import { Flow } from "./components/FlowEditor/Flow/Flow";
+import { useBuilderView } from "./useBuilderView";
+function BuilderContent() {
+  const query = useSearchParams();
+  const { completeStep } = useOnboarding();
+  useEffect(() => {
+    completeStep("BUILDER_OPEN");
+  }, [completeStep]);
+  const _graphVersion = query.get("flowVersion");
+  const graphVersion = _graphVersion ? parseInt(_graphVersion) : undefined;
+  return (
+    <FlowEditor
+      className="flex h-full w-full"
+      flowID={(query.get("flowID") as GraphID | null) ?? undefined}
+      flowVersion={graphVersion}
+    />
+  );
+}
export default function BuilderPage() {
+  const {
+    isSwitchEnabled,
+    selectedView,
+    setSelectedView,
+    isNewFlowEditorEnabled,
+  } = useBuilderView();
+  // Switch is temporary, we will remove it once our new flow editor is ready
+  if (isSwitchEnabled) {
    return (
      <div className="relative h-full w-full">
+        <BuilderViewTabs value={selectedView} onChange={setSelectedView} />
+        {selectedView === "new" ? (
          <ReactFlowProvider>
            <Flow />
          </ReactFlowProvider>
+        ) : (
+          <BuilderContent />
+        )}
      </div>
    );
+  }
+  return isNewFlowEditorEnabled ? (
+    <ReactFlowProvider>
+      <Flow />
+    </ReactFlowProvider>
+  ) : (
+    <BuilderContent />
+  );
}

View File

@@ -0,0 +1,44 @@
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { usePathname, useRouter, useSearchParams } from "next/navigation";
import { useEffect, useMemo } from "react";
import { BuilderView } from "./components/BuilderViewTabs/BuilderViewTabs";
export function useBuilderView() {
const isNewFlowEditorEnabled = useGetFlag(Flag.NEW_FLOW_EDITOR);
const isBuilderViewSwitchEnabled = useGetFlag(Flag.BUILDER_VIEW_SWITCH);
const router = useRouter();
const pathname = usePathname();
const searchParams = useSearchParams();
const currentView = searchParams.get("view");
const defaultView = "old";
const selectedView = useMemo<BuilderView>(() => {
if (currentView === "new" || currentView === "old") return currentView;
return defaultView;
}, [currentView, defaultView]);
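// When the view switch is enabled, ensure the URL always carries a valid ?view= param so the selection survives reloads and can be shared.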
useEffect(() => {
if (isBuilderViewSwitchEnabled === true) {
if (currentView !== "new" && currentView !== "old") {
const params = new URLSearchParams(searchParams);
params.set("view", defaultView);
router.replace(`${pathname}?${params.toString()}`);
}
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [isBuilderViewSwitchEnabled, defaultView, pathname, router, searchParams]);
const setSelectedView = (value: BuilderView) => {
const params = new URLSearchParams(searchParams);
params.set("view", value);
router.push(`${pathname}?${params.toString()}`);
};
return {
isSwitchEnabled: isBuilderViewSwitchEnabled === true,
selectedView,
setSelectedView,
isNewFlowEditorEnabled: Boolean(isNewFlowEditorEnabled),
} as const;
}

View File

@@ -1,80 +0,0 @@
"use client";
import { SidebarProvider } from "@/components/ui/sidebar";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { ScaleLoader } from "./components/ScaleLoader/ScaleLoader";
import { useCopilotPage } from "./useCopilotPage";
export function CopilotPage() {
const {
sessionId,
messages,
status,
error,
stop,
createSession,
onSend,
isLoadingSession,
isCreatingSession,
isUserLoading,
isLoggedIn,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
} = useCopilotPage();
if (isUserLoading || !isLoggedIn) {
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-[#f8f8f9]">
<ScaleLoader className="text-neutral-400" />
</div>
);
}
return (
<SidebarProvider
defaultOpen={true}
className="h-[calc(100vh-72px)] min-h-0"
>
{!isMobile && <ChatSidebar />}
<div className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex-1 overflow-hidden">
<ChatContainer
messages={messages}
status={status}
error={error}
sessionId={sessionId}
isLoadingSession={isLoadingSession}
isCreatingSession={isCreatingSession}
onCreateSession={createSession}
onSend={onSend}
onStop={stop}
/>
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={sessionId}
isLoading={isLoadingSessions}
onSelectSession={handleSelectSession}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
/>
)}
</SidebarProvider>
);
}

View File

@@ -1,74 +0,0 @@
"use client";
import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
import { UIDataTypes, UIMessage, UITools } from "ai";
import { LayoutGroup, motion } from "framer-motion";
import { ChatMessagesContainer } from "../ChatMessagesContainer/ChatMessagesContainer";
import { CopilotChatActionsProvider } from "../CopilotChatActionsProvider/CopilotChatActionsProvider";
import { EmptySession } from "../EmptySession/EmptySession";
export interface ChatContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
sessionId: string | null;
isLoadingSession: boolean;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
onStop: () => void;
}
export const ChatContainer = ({
messages,
status,
error,
sessionId,
isLoadingSession,
isCreatingSession,
onCreateSession,
onSend,
onStop,
}: ChatContainerProps) => {
const inputLayoutId = "copilot-2-chat-input";
return (
<CopilotChatActionsProvider onSend={onSend}>
<LayoutGroup id="copilot-2-chat-layout">
<div className="flex h-full min-h-0 w-full flex-col bg-[#f8f8f9] px-2 lg:px-0">
{sessionId ? (
<div className="mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col">
<ChatMessagesContainer
messages={messages}
status={status}
error={error}
isLoading={isLoadingSession}
/>
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.3 }}
className="relative px-3 pb-2 pt-2"
>
<div className="pointer-events-none absolute left-0 right-0 top-[-18px] z-10 h-6 bg-gradient-to-b from-transparent to-[#f8f8f9]" />
<ChatInput
inputId="chat-input-session"
onSend={onSend}
disabled={status === "streaming"}
isStreaming={status === "streaming"}
onStop={onStop}
placeholder="What else can I help with?"
/>
</motion.div>
</div>
) : (
<EmptySession
inputLayoutId={inputLayoutId}
isCreatingSession={isCreatingSession}
onCreateSession={onCreateSession}
onSend={onSend}
/>
)}
</div>
</LayoutGroup>
</CopilotChatActionsProvider>
);
};

View File

@@ -1,294 +0,0 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
Message,
MessageContent,
MessageResponse,
} from "@/components/ai-elements/message";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { toast } from "@/components/molecules/Toast/use-toast";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useRef, useState } from "react";
import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks";
import { RunAgentTool } from "../../tools/RunAgent/RunAgent";
import { RunBlockTool } from "../../tools/RunBlock/RunBlock";
import { SearchDocsTool } from "../../tools/SearchDocs/SearchDocs";
import { ViewAgentOutputTool } from "../../tools/ViewAgentOutput/ViewAgentOutput";
// ---------------------------------------------------------------------------
// Workspace media support
// ---------------------------------------------------------------------------
/**
* Resolve workspace:// URLs in markdown text to proxy download URLs.
* Detects MIME type from the hash fragment (e.g. workspace://id#video/mp4)
* and prefixes the alt text with "video:" so the custom img component can
* render a <video> element instead.
*/
function resolveWorkspaceUrls(text: string): string {
return text.replace(
/!\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#([^)\s]*))?\)/g,
(_match, alt: string, fileId: string, mimeHint?: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
if (mimeHint?.startsWith("video/")) {
return `![video:${alt || "Video"}](${url})`;
}
return `![${alt || "Image"}](${url})`;
},
);
}
/**
* Custom img component for Streamdown that renders <video> elements
* for workspace video files (detected via "video:" alt-text prefix).
* Falls back to <video> when an <img> fails to load for workspace files.
*/
function WorkspaceMediaImage(props: React.JSX.IntrinsicElements["img"]) {
const { src, alt, ...rest } = props;
const [imgFailed, setImgFailed] = useState(false);
const isWorkspace = src?.includes("/workspace/files/") ?? false;
if (!src) return null;
if (alt?.startsWith("video:") || (imgFailed && isWorkspace)) {
return (
<span className="my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
</span>
);
}
return (
// eslint-disable-next-line @next/next/no-img-element
<img
src={src}
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (isWorkspace) setImgFailed(true);
}}
{...rest}
/>
);
}
/** Stable components override for Streamdown (avoids re-creating on every render). */
const STREAMDOWN_COMPONENTS = { img: WorkspaceMediaImage };
const THINKING_PHRASES = [
"Thinking...",
"Considering this...",
"Working through this...",
"Analyzing your request...",
"Reasoning...",
"Looking into it...",
"Processing your request...",
"Mulling this over...",
"Piecing it together...",
"On it...",
];
function getRandomPhrase() {
return THINKING_PHRASES[Math.floor(Math.random() * THINKING_PHRASES.length)];
}
interface ChatMessagesContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
isLoading: boolean;
}
export const ChatMessagesContainer = ({
messages,
status,
error,
isLoading,
}: ChatMessagesContainerProps) => {
const [thinkingPhrase, setThinkingPhrase] = useState(getRandomPhrase);
const lastToastTimeRef = useRef(0);
useEffect(() => {
if (status === "submitted") {
setThinkingPhrase(getRandomPhrase());
}
}, [status]);
// Show a toast when a new error occurs, debounced to avoid spam
useEffect(() => {
if (!error) return;
const now = Date.now();
if (now - lastToastTimeRef.current < 3_000) return;
lastToastTimeRef.current = now;
toast({
variant: "destructive",
title: "Something went wrong",
description:
"The assistant encountered an error. Please try sending your message again.",
});
}, [error]);
const lastMessage = messages[messages.length - 1];
const lastAssistantHasVisibleContent =
lastMessage?.role === "assistant" &&
lastMessage.parts.some(
(p) =>
(p.type === "text" && p.text.trim().length > 0) ||
p.type.startsWith("tool-"),
);
const showThinking =
status === "submitted" ||
(status === "streaming" && !lastAssistantHasVisibleContent);
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="flex min-h-screen flex-1 flex-col gap-6 px-3 py-6">
{isLoading && messages.length === 0 && (
<div className="flex min-h-full flex-1 items-center justify-center">
<LoadingSpinner className="text-neutral-600" />
</div>
)}
{messages.map((message, messageIndex) => {
const isLastAssistant =
messageIndex === messages.length - 1 &&
message.role === "assistant";
const messageHasVisibleContent = message.parts.some(
(p) =>
(p.type === "text" && p.text.trim().length > 0) ||
p.type.startsWith("tool-"),
);
return (
<Message from={message.role} key={message.id}>
<MessageContent
className={
"text-[1rem] leading-relaxed " +
"group-[.is-user]:rounded-xl group-[.is-user]:bg-purple-100 group-[.is-user]:px-3 group-[.is-user]:py-2.5 group-[.is-user]:text-slate-900 group-[.is-user]:[border-bottom-right-radius:0] " +
"group-[.is-assistant]:bg-transparent group-[.is-assistant]:text-slate-900"
}
>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<MessageResponse
key={`${message.id}-${i}`}
components={STREAMDOWN_COMPONENTS}
>
{resolveWorkspaceUrls(part.text)}
</MessageResponse>
);
case "tool-find_block":
return (
<FindBlocksTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-find_agent":
case "tool-find_library_agent":
return (
<FindAgentsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_docs":
case "tool-get_doc_page":
return (
<SearchDocsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_block":
return (
<RunBlockTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_agent":
case "tool-schedule_agent":
return (
<RunAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_agent":
return (
<CreateAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-edit_agent":
return (
<EditAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-view_agent_output":
return (
<ViewAgentOutputTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
default:
return null;
}
})}
{isLastAssistant &&
!messageHasVisibleContent &&
showThinking && (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
)}
</MessageContent>
</Message>
);
})}
{showThinking && lastMessage?.role !== "assistant" && (
<Message from="assistant">
<MessageContent className="text-[1rem] leading-relaxed">
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
</MessageContent>
</Message>
)}
{error && (
<div className="rounded-lg bg-red-50 p-4 text-sm text-red-700">
<p className="font-medium">Something went wrong</p>
<p className="mt-1 text-red-600">
The assistant encountered an error. Please try sending your
message again.
</p>
</div>
)}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
);
};

View File

@@ -1,188 +0,0 @@
"use client";
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { Button } from "@/components/atoms/Button/Button";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import {
Sidebar,
SidebarContent,
SidebarFooter,
SidebarHeader,
SidebarTrigger,
useSidebar,
} from "@/components/ui/sidebar";
import { cn } from "@/lib/utils";
import { PlusCircleIcon, PlusIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { parseAsString, useQueryState } from "nuqs";
export function ChatSidebar() {
const { state } = useSidebar();
const isCollapsed = state === "collapsed";
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
function handleNewChat() {
setSessionId(null);
}
function handleSelectSession(id: string) {
setSessionId(id);
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
return (
<Sidebar
variant="inset"
collapsible="icon"
className="!top-[50px] !h-[calc(100vh-50px)] border-r border-zinc-100 px-0"
>
{isCollapsed && (
<SidebarHeader
className={cn(
"flex",
isCollapsed
? "flex-row items-center justify-between gap-y-4 md:flex-col md:items-start md:justify-start"
: "flex-row items-center justify-between",
)}
>
<motion.div
key={isCollapsed ? "header-collapsed" : "header-expanded"}
className="flex flex-col items-center gap-3 pt-4"
initial={{ opacity: 0, filter: "blur(3px)" }}
animate={{ opacity: 1, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2 }}
>
<div className="flex flex-col items-center gap-2">
<SidebarTrigger />
<Button
variant="ghost"
onClick={handleNewChat}
style={{ minWidth: "auto", width: "auto" }}
>
<PlusCircleIcon className="!size-5" />
<span className="sr-only">New Chat</span>
</Button>
</div>
</motion.div>
</SidebarHeader>
)}
<SidebarContent className="gap-4 overflow-y-auto px-4 py-4 [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.1 }}
className="flex items-center justify-between px-3"
>
<Text variant="h3" size="body-medium">
Your chats
</Text>
<div className="relative left-6">
<SidebarTrigger />
</div>
</motion.div>
)}
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.15 }}
className="mt-4 flex flex-col gap-1"
>
{isLoadingSessions ? (
<div className="flex min-h-[30rem] items-center justify-center py-4">
<LoadingSpinner size="small" className="text-neutral-600" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => handleSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === sessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === sessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || `Untitled chat`}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</motion.div>
)}
</SidebarContent>
{!isCollapsed && sessionId && (
<SidebarFooter className="shrink-0 bg-zinc-50 p-3 pb-1 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.2 }}
>
<Button
variant="primary"
size="small"
onClick={handleNewChat}
className="w-full"
leftIcon={<PlusIcon className="h-4 w-4" weight="bold" />}
>
New Chat
</Button>
</motion.div>
</SidebarFooter>
)}
</Sidebar>
);
}

View File

@@ -1,16 +0,0 @@
"use client";
import { CopilotChatActionsContext } from "./useCopilotChatActions";
interface Props {
onSend: (message: string) => void | Promise<void>;
children: React.ReactNode;
}
export function CopilotChatActionsProvider({ onSend, children }: Props) {
return (
<CopilotChatActionsContext.Provider value={{ onSend }}>
{children}
</CopilotChatActionsContext.Provider>
);
}

View File

@@ -1,23 +0,0 @@
"use client";
import { createContext, useContext } from "react";
interface CopilotChatActions {
onSend: (message: string) => void | Promise<void>;
}
const CopilotChatActionsContext = createContext<CopilotChatActions | null>(
null,
);
export function useCopilotChatActions(): CopilotChatActions {
const ctx = useContext(CopilotChatActionsContext);
if (!ctx) {
throw new Error(
"useCopilotChatActions must be used within CopilotChatActionsProvider",
);
}
return ctx;
}
export { CopilotChatActionsContext };

View File

@@ -0,0 +1,99 @@
"use client";
import { ChatLoader } from "@/components/contextual/Chat/components/ChatLoader/ChatLoader";
import { Text } from "@/components/atoms/Text/Text";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import type { ReactNode } from "react";
import { DesktopSidebar } from "./components/DesktopSidebar/DesktopSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotShell } from "./useCopilotShell";
interface Props {
children: ReactNode;
}
export function CopilotShell({ children }: Props) {
const {
isMobile,
isDrawerOpen,
isLoading,
isCreatingSession,
isLoggedIn,
hasActiveSession,
sessions,
currentSessionId,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChatClick,
handleSessionClick,
hasNextPage,
isFetchingNextPage,
fetchNextPage,
} = useCopilotShell();
if (!isLoggedIn) {
return (
<div className="flex h-full items-center justify-center">
<ChatLoader />
</div>
);
}
return (
<div
className="flex overflow-hidden bg-[#EFEFF0]"
style={{ height: `calc(100vh - ${NAVBAR_HEIGHT_PX}px)` }}
>
{!isMobile && (
<DesktopSidebar
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSessionClick}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChatClick}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
<div className="relative flex min-h-0 flex-1 flex-col">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex min-h-0 flex-1 flex-col">
{isCreatingSession ? (
<div className="flex h-full flex-1 flex-col items-center justify-center bg-[#f8f8f9]">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Creating your chat...
</Text>
</div>
</div>
) : (
children
)}
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSessionClick}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChatClick}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
</div>
);
}

View File

@@ -0,0 +1,70 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus } from "@phosphor-icons/react";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
hasActiveSession: boolean;
}
export function DesktopSidebar({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
hasActiveSession,
}: Props) {
return (
<aside className="flex h-full w-80 flex-col border-r border-zinc-100 bg-zinc-50">
<div className="shrink-0 px-6 py-4">
<Text variant="h3" size="body-medium">
Your chats
</Text>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-zinc-50 p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</aside>
);
}

View File

@@ -0,0 +1,91 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
hasActiveSession: boolean;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
onClose,
onOpenChange,
hasActiveSession,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 p-4">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1.25rem" height="1.25rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -0,0 +1,24 @@
import { useState } from "react";
export function useMobileDrawer() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const handleOpenDrawer = () => {
setIsDrawerOpen(true);
};
const handleCloseDrawer = () => {
setIsDrawerOpen(false);
};
const handleDrawerOpenChange = (open: boolean) => {
setIsDrawerOpen(open);
};
return {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
};
}
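For orientation, a hypothetical consumer of this hook; it is not part of the changeset, and the import paths plus the MobileDrawer props are assumed from the files in this diff:

import type { ComponentProps } from "react";
import { MobileDrawer } from "./MobileDrawer";
import { useMobileDrawer } from "./useMobileDrawer";

// Everything except the open/close wiring is forwarded from the parent.
type SessionProps = Omit<
  ComponentProps<typeof MobileDrawer>,
  "isOpen" | "onClose" | "onOpenChange"
>;

export function SidebarToggle(sessionProps: SessionProps) {
  const {
    isDrawerOpen,
    handleOpenDrawer,
    handleCloseDrawer,
    handleDrawerOpenChange,
  } = useMobileDrawer();
  return (
    <>
      <button onClick={handleOpenDrawer}>Your chats</button>
      <MobileDrawer
        isOpen={isDrawerOpen}
        onClose={handleCloseDrawer}
        onOpenChange={handleDrawerOpenChange}
        {...sessionProps}
      />
    </>
  );
}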

View File

@@ -0,0 +1,80 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteList } from "@/components/molecules/InfiniteList/InfiniteList";
import { cn } from "@/lib/utils";
import { getSessionTitle } from "../../helpers";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
}
export function SessionsList({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
}: Props) {
if (isLoading) {
return (
<div className="space-y-1">
{Array.from({ length: 5 }).map((_, i) => (
<div key={i} className="rounded-lg px-3 py-2.5">
<Skeleton className="h-5 w-full" />
</div>
))}
</div>
);
}
if (sessions.length === 0) {
return (
<div className="flex h-full items-center justify-center">
<Text variant="body" className="text-zinc-500">
You don&apos;t have any previous chats
</Text>
</div>
);
}
return (
<InfiniteList
items={sessions}
hasMore={hasNextPage}
isFetchingMore={isFetchingNextPage}
onEndReached={onFetchNextPage}
className="space-y-1"
renderItem={(session) => {
const isActive = session.id === currentSessionId;
return (
<button
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
isActive ? "bg-zinc-100" : "hover:bg-zinc-50",
)}
>
<Text
variant="body"
className={cn(
"font-normal",
isActive ? "text-zinc-600" : "text-zinc-800",
)}
>
{getSessionTitle(session)}
</Text>
</button>
);
}}
/>
);
}

View File

@@ -0,0 +1,91 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { okData } from "@/app/api/helpers";
import { useEffect, useState } from "react";
const PAGE_SIZE = 50;
export interface UseSessionsPaginationArgs {
enabled: boolean;
}
export function useSessionsPagination({ enabled }: UseSessionsPaginationArgs) {
const [offset, setOffset] = useState(0);
const [accumulatedSessions, setAccumulatedSessions] = useState<
SessionSummaryResponse[]
>([]);
const [totalCount, setTotalCount] = useState<number | null>(null);
const { data, isLoading, isFetching, isError } = useGetV2ListSessions(
{ limit: PAGE_SIZE, offset },
{
query: {
enabled: enabled && offset >= 0,
},
},
);
useEffect(() => {
const responseData = okData(data);
if (responseData) {
const newSessions = responseData.sessions;
const total = responseData.total;
setTotalCount(total);
if (offset === 0) {
setAccumulatedSessions(newSessions);
} else {
setAccumulatedSessions((prev) => [...prev, ...newSessions]);
}
} else if (!enabled) {
setAccumulatedSessions([]);
setTotalCount(null);
}
}, [data, offset, enabled]);
const hasNextPage =
totalCount !== null && accumulatedSessions.length < totalCount;
const areAllSessionsLoaded =
totalCount !== null &&
accumulatedSessions.length >= totalCount &&
!isFetching &&
!isLoading;
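// Eagerly auto-advance: once the current page has settled and more sessions
// remain, bump the offset so the full list loads without user interaction.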
useEffect(() => {
if (
hasNextPage &&
!isFetching &&
!isLoading &&
!isError &&
totalCount !== null
) {
setOffset((prev) => prev + PAGE_SIZE);
}
}, [hasNextPage, isFetching, isLoading, isError, totalCount]);
const fetchNextPage = () => {
if (hasNextPage && !isFetching) {
setOffset((prev) => prev + PAGE_SIZE);
}
};
const reset = () => {
// Only reset the offset - keep existing sessions visible during refetch
// The effect will replace sessions when new data arrives at offset 0
setOffset(0);
};
return {
sessions: accumulatedSessions,
isLoading,
isFetching,
hasNextPage,
areAllSessionsLoaded,
totalCount,
fetchNextPage,
reset,
};
}
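A minimal hypothetical consumer, not part of the changeset; it assumes only the hook's return shape shown above:

import { useSessionsPagination } from "./useSessionsPagination";

// Renders a running count plus a manual "load more" control; the hook's own
// auto-advance effect will usually page through the rest by itself.
export function SessionsCount() {
  const { sessions, isLoading, hasNextPage, fetchNextPage, totalCount } =
    useSessionsPagination({ enabled: true });

  if (isLoading) return <p>Loading sessions...</p>;
  return (
    <div>
      <p>
        {sessions.length} of {totalCount ?? "?"} sessions loaded
      </p>
      {hasNextPage && <button onClick={fetchNextPage}>Load more</button>}
    </div>
  );
}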

View File

@@ -0,0 +1,106 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { format, formatDistanceToNow, isToday } from "date-fns";
export function convertSessionDetailToSummary(session: SessionDetailResponse) {
return {
id: session.id,
created_at: session.created_at,
updated_at: session.updated_at,
title: undefined,
};
}
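// A session stays visible if it has ever been updated (i.e. it has activity)
// or was created within the last five minutes; stale empty sessions are hidden.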
export function filterVisibleSessions(sessions: SessionSummaryResponse[]) {
const fiveMinutesAgo = Date.now() - 5 * 60 * 1000;
return sessions.filter((session) => {
const hasBeenUpdated = session.updated_at !== session.created_at;
if (hasBeenUpdated) return true;
const isRecentlyCreated =
new Date(session.created_at).getTime() > fiveMinutesAgo;
return isRecentlyCreated;
});
}
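// Prefer the stored title; brand-new sessions fall back to a date label, and
// anything else falls back to "Untitled Chat".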
export function getSessionTitle(session: SessionSummaryResponse) {
if (session.title) return session.title;
const isNewSession = session.updated_at === session.created_at;
if (isNewSession) {
const createdDate = new Date(session.created_at);
if (isToday(createdDate)) {
return "Today";
}
return format(createdDate, "MMM d, yyyy");
}
return "Untitled Chat";
}
export function getSessionUpdatedLabel(session: SessionSummaryResponse) {
if (!session.updated_at) return "";
return formatDistanceToNow(new Date(session.updated_at), { addSuffix: true });
}
export function mergeCurrentSessionIntoList(
accumulatedSessions: SessionSummaryResponse[],
currentSessionId: string | null,
currentSessionData: SessionDetailResponse | null | undefined,
recentlyCreatedSessions?: Map<string, SessionSummaryResponse>,
) {
const filteredSessions: SessionSummaryResponse[] = [];
const addedIds = new Set<string>();
if (accumulatedSessions.length > 0) {
const visibleSessions = filterVisibleSessions(accumulatedSessions);
if (currentSessionId) {
const currentInAll = accumulatedSessions.find(
(s) => s.id === currentSessionId,
);
if (currentInAll) {
const isInVisible = visibleSessions.some(
(s) => s.id === currentSessionId,
);
if (!isInVisible) {
filteredSessions.push(currentInAll);
addedIds.add(currentInAll.id);
}
}
}
for (const session of visibleSessions) {
if (!addedIds.has(session.id)) {
filteredSessions.push(session);
addedIds.add(session.id);
}
}
}
if (currentSessionId && currentSessionData) {
if (!addedIds.has(currentSessionId)) {
const summarySession = convertSessionDetailToSummary(currentSessionData);
filteredSessions.unshift(summarySession);
addedIds.add(currentSessionId);
}
}
if (recentlyCreatedSessions) {
for (const [sessionId, sessionData] of recentlyCreatedSessions) {
if (!addedIds.has(sessionId)) {
filteredSessions.unshift(sessionData);
addedIds.add(sessionId);
}
}
}
return filteredSessions;
}
export function getCurrentSessionId(searchParams: URLSearchParams) {
return searchParams.get("sessionId");
}
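To make the five-minute visibility rule concrete, a hypothetical illustration (not part of the changeset; the stub assumes the SessionSummaryResponse fields used above):

import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { filterVisibleSessions } from "./helpers";

// Builds a session stub from millisecond timestamps; only the fields read by
// filterVisibleSessions are populated, hence the assertion.
function stub(id: string, createdMs: number, updatedMs: number) {
  return {
    id,
    created_at: new Date(createdMs).toISOString(),
    updated_at: new Date(updatedMs).toISOString(),
  } as SessionSummaryResponse;
}

const now = Date.now();
const minute = 60 * 1000;
const demo = [
  stub("a", now - minute, now - minute), // 1 min old, never updated: kept (recent)
  stub("b", now - 60 * minute, now - 60 * minute), // stale, never updated: hidden
  stub("c", now - 60 * minute, now - 30 * minute), // has an update: always kept
];

filterVisibleSessions(demo); // keeps "a" and "c"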

View File

@@ -0,0 +1,124 @@
"use client";
import {
getGetV2GetSessionQueryKey,
getGetV2ListSessionsQueryKey,
useGetV2GetSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { okData } from "@/app/api/helpers";
import { useChatStore } from "@/components/contextual/Chat/chat-store";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useQueryClient } from "@tanstack/react-query";
import { usePathname, useSearchParams } from "next/navigation";
import { useCopilotStore } from "../../copilot-page-store";
import { useCopilotSessionId } from "../../useCopilotSessionId";
import { useMobileDrawer } from "./components/MobileDrawer/useMobileDrawer";
import { getCurrentSessionId } from "./helpers";
import { useShellSessionList } from "./useShellSessionList";
export function useCopilotShell() {
const pathname = usePathname();
const searchParams = useSearchParams();
const queryClient = useQueryClient();
const breakpoint = useBreakpoint();
const { isLoggedIn } = useSupabase();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const { urlSessionId, setUrlSessionId } = useCopilotSessionId();
const isOnHomepage = pathname === "/copilot";
const paramSessionId = searchParams.get("sessionId");
const {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
} = useMobileDrawer();
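// Fetch the session list only when it can actually be seen: always on
// desktop, on mobile only while the drawer is open or a session is linked.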
const paginationEnabled = !isMobile || isDrawerOpen || !!paramSessionId;
const currentSessionId = getCurrentSessionId(searchParams);
const { data: currentSessionData } = useGetV2GetSession(
currentSessionId || "",
{
query: {
enabled: !!currentSessionId,
select: okData,
},
},
);
const {
sessions,
isLoading,
isSessionsFetching,
hasNextPage,
fetchNextPage,
resetPagination,
recentlyCreatedSessionsRef,
} = useShellSessionList({
paginationEnabled,
currentSessionId,
currentSessionData,
isOnHomepage,
paramSessionId,
});
const stopStream = useChatStore((s) => s.stopStream);
const isCreatingSession = useCopilotStore((s) => s.isCreatingSession);
function handleSessionClick(sessionId: string) {
if (sessionId === currentSessionId) return;
// Stop current stream - SSE reconnection allows resuming later
if (currentSessionId) {
stopStream(currentSessionId);
}
if (recentlyCreatedSessionsRef.current.has(sessionId)) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
}
setUrlSessionId(sessionId, { shallow: false });
if (isMobile) handleCloseDrawer();
}
function handleNewChatClick() {
// Stop current stream - SSE reconnection allows resuming later
if (currentSessionId) {
stopStream(currentSessionId);
}
resetPagination();
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
setUrlSessionId(null, { shallow: false });
if (isMobile) handleCloseDrawer();
}
return {
isMobile,
isDrawerOpen,
isLoggedIn,
hasActiveSession:
Boolean(currentSessionId) && (!isOnHomepage || Boolean(paramSessionId)),
isLoading: isLoading || isCreatingSession,
isCreatingSession,
sessions,
currentSessionId: urlSessionId,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChatClick,
handleSessionClick,
hasNextPage,
isFetchingNextPage: isSessionsFetching,
fetchNextPage,
};
}

View File

@@ -0,0 +1,113 @@
import { getGetV2ListSessionsQueryKey } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { useChatStore } from "@/components/contextual/Chat/chat-store";
import { useQueryClient } from "@tanstack/react-query";
import { useEffect, useMemo, useRef } from "react";
import { useSessionsPagination } from "./components/SessionsList/useSessionsPagination";
import {
convertSessionDetailToSummary,
filterVisibleSessions,
mergeCurrentSessionIntoList,
} from "./helpers";
interface UseShellSessionListArgs {
paginationEnabled: boolean;
currentSessionId: string | null;
currentSessionData: SessionDetailResponse | null | undefined;
isOnHomepage: boolean;
paramSessionId: string | null;
}
export function useShellSessionList({
paginationEnabled,
currentSessionId,
currentSessionData,
isOnHomepage,
paramSessionId,
}: UseShellSessionListArgs) {
const queryClient = useQueryClient();
const onStreamComplete = useChatStore((s) => s.onStreamComplete);
const {
sessions: accumulatedSessions,
isLoading: isSessionsLoading,
isFetching: isSessionsFetching,
hasNextPage,
fetchNextPage,
reset: resetPagination,
} = useSessionsPagination({
enabled: paginationEnabled,
});
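// Sessions created during this visit that the paginated list has not caught
// up with yet; they are pinned into the merged list and evicted once the
// server-side list includes them.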
const recentlyCreatedSessionsRef = useRef<
Map<string, SessionSummaryResponse>
>(new Map());
useEffect(() => {
if (isOnHomepage && !paramSessionId) {
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
}
}, [isOnHomepage, paramSessionId, queryClient]);
useEffect(() => {
if (currentSessionId && currentSessionData) {
const isNewSession =
currentSessionData.updated_at === currentSessionData.created_at;
const isNotInAccumulated = !accumulatedSessions.some(
(s) => s.id === currentSessionId,
);
if (isNewSession || isNotInAccumulated) {
const summary = convertSessionDetailToSummary(currentSessionData);
recentlyCreatedSessionsRef.current.set(currentSessionId, summary);
}
}
}, [currentSessionId, currentSessionData, accumulatedSessions]);
useEffect(() => {
for (const sessionId of recentlyCreatedSessionsRef.current.keys()) {
if (accumulatedSessions.some((s) => s.id === sessionId)) {
recentlyCreatedSessionsRef.current.delete(sessionId);
}
}
}, [accumulatedSessions]);
useEffect(() => {
const unsubscribe = onStreamComplete(() => {
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
});
return unsubscribe;
}, [onStreamComplete, queryClient]);
const sessions = useMemo(
() =>
mergeCurrentSessionIntoList(
accumulatedSessions,
currentSessionId,
currentSessionData,
recentlyCreatedSessionsRef.current,
),
[accumulatedSessions, currentSessionId, currentSessionData],
);
const visibleSessions = useMemo(
() => filterVisibleSessions(sessions),
[sessions],
);
const isLoading = isSessionsLoading && accumulatedSessions.length === 0;
return {
sessions: visibleSessions,
isLoading,
isSessionsFetching,
hasNextPage,
fetchNextPage,
resetPagination,
recentlyCreatedSessionsRef,
};
}

View File

@@ -1,111 +0,0 @@
"use client";
import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { SpinnerGapIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { useEffect, useState } from "react";
import {
getGreetingName,
getInputPlaceholder,
getQuickActions,
} from "./helpers";
interface Props {
inputLayoutId: string;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export function EmptySession({
inputLayoutId,
isCreatingSession,
onSend,
}: Props) {
const { user } = useSupabase();
const greetingName = getGreetingName(user);
const quickActions = getQuickActions();
const [loadingAction, setLoadingAction] = useState<string | null>(null);
const [inputPlaceholder, setInputPlaceholder] = useState(
getInputPlaceholder(),
);
useEffect(() => {
// Read window only after mount; it is unavailable during server rendering,
// and window.innerWidth is not reactive, so it must not be a dependency.
setInputPlaceholder(getInputPlaceholder(window.innerWidth));
}, []);
async function handleQuickActionClick(action: string) {
if (isCreatingSession || loadingAction) return;
setLoadingAction(action);
try {
await onSend(action);
} finally {
setLoadingAction(null);
}
}
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
<motion.div
className="w-full max-w-3xl text-center"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.3 }}
>
<div className="mx-auto max-w-3xl">
<Text variant="h3" className="mb-1 !text-[1.375rem] text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
Tell me about your work; I&apos;ll find what to automate.
</Text>
<div className="mb-6">
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="w-full px-2"
>
<ChatInput
inputId="chat-input-empty"
onSend={onSend}
disabled={isCreatingSession}
placeholder={inputPlaceholder}
className="w-full"
/>
</motion.div>
</div>
</div>
<div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => void handleQuickActionClick(action)}
disabled={isCreatingSession || loadingAction !== null}
aria-busy={loadingAction === action}
leftIcon={
loadingAction === action ? (
<SpinnerGapIcon
className="h-4 w-4 animate-spin"
weight="bold"
/>
) : null
}
className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</motion.div>
</div>
);
}

View File

@@ -1,140 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, SpinnerGapIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
onSelectSession: (sessionId: string) => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
onSelectSession,
onNewChat,
onClose,
onOpenChange,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 px-4 py-2">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1rem" height="1rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col gap-1 overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
{isLoading ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === currentSessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === currentSessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || "Untitled chat"}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</div>
{currentSessionId && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -1,54 +0,0 @@
import { cn } from "@/lib/utils";
import { AnimatePresence, motion } from "framer-motion";
interface Props {
text: string;
className?: string;
}
export function MorphingTextAnimation({ text, className }: Props) {
const letters = text.split("");
return (
<div className={cn(className)}>
<AnimatePresence mode="popLayout" initial={false}>
<motion.div key={text} className="whitespace-nowrap">
<motion.span className="inline-flex overflow-hidden">
{letters.map((char, index) => (
<motion.span
key={`${text}-${index}`}
initial={{
opacity: 0,
y: 8,
rotateX: "80deg",
filter: "blur(6px)",
}}
animate={{
opacity: 1,
y: 0,
rotateX: "0deg",
filter: "blur(0px)",
}}
exit={{
opacity: 0,
y: -8,
rotateX: "-80deg",
filter: "blur(6px)",
}}
style={{ willChange: "transform" }}
transition={{
delay: 0.015 * index,
type: "spring",
bounce: 0.5,
}}
className="inline-block"
>
{char === " " ? "\u00A0" : char}
</motion.span>
))}
</motion.span>
</motion.div>
</AnimatePresence>
</div>
);
}

View File

@@ -1,69 +0,0 @@
.loader {
position: relative;
animation: rotate 1s infinite;
}
.loader::before,
.loader::after {
border-radius: 50%;
content: "";
display: block;
/* 40% of container size */
height: 40%;
width: 40%;
}
.loader::before {
animation: ball1 1s infinite;
background-color: #a1a1aa; /* zinc-400 */
box-shadow: calc(var(--spacing)) 0 0 #18181b; /* zinc-900 */
margin-bottom: calc(var(--gap));
}
.loader::after {
animation: ball2 1s infinite;
background-color: #18181b; /* zinc-900 */
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa; /* zinc-400 */
}
@keyframes rotate {
0% {
transform: rotate(0deg) scale(0.8);
}
50% {
transform: rotate(360deg) scale(1.2);
}
100% {
transform: rotate(720deg) scale(0.8);
}
}
@keyframes ball1 {
0% {
box-shadow: calc(var(--spacing)) 0 0 #18181b;
}
50% {
box-shadow: 0 0 0 #18181b;
margin-bottom: 0;
transform: translate(calc(var(--spacing) / 2), calc(var(--spacing) / 2));
}
100% {
box-shadow: calc(var(--spacing)) 0 0 #18181b;
margin-bottom: calc(var(--gap));
}
}
@keyframes ball2 {
0% {
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa;
}
50% {
box-shadow: 0 0 0 #a1a1aa;
margin-top: calc(var(--ball-size) * -1);
transform: translate(calc(var(--spacing) / 2), calc(var(--spacing) / 2));
}
100% {
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa;
margin-top: 0;
}
}

View File

@@ -1,28 +0,0 @@
import { cn } from "@/lib/utils";
import styles from "./OrbitLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function OrbitLoader({ size = 24, className }: Props) {
const ballSize = Math.round(size * 0.4);
const spacing = Math.round(size * 0.6);
const gap = Math.round(size * 0.2);
return (
<div
className={cn(styles.loader, className)}
style={
{
width: size,
height: size,
"--ball-size": `${ballSize}px`,
"--spacing": `${spacing}px`,
"--gap": `${gap}px`,
} as React.CSSProperties
}
/>
);
}

View File

@@ -1,26 +0,0 @@
import { cn } from "@/lib/utils";
interface Props {
value: number;
label?: string;
className?: string;
}
export function ProgressBar({ value, label, className }: Props) {
const clamped = Math.min(100, Math.max(0, value));
return (
<div className={cn("flex flex-col gap-1.5", className)}>
<div className="flex items-center justify-between text-xs text-neutral-500">
<span>{label ?? "Working on it..."}</span>
<span>{Math.round(clamped)}%</span>
</div>
<div className="h-2 w-full overflow-hidden rounded-full bg-neutral-200">
<div
className="h-full rounded-full bg-neutral-900 transition-[width] duration-300 ease-out"
style={{ width: `${clamped}%` }}
/>
</div>
</div>
);
}

View File

@@ -1,34 +0,0 @@
.loader {
position: relative;
display: inline-block;
flex-shrink: 0;
}
.loader::before,
.loader::after {
content: "";
box-sizing: border-box;
width: 100%;
height: 100%;
border-radius: 50%;
background: currentColor;
position: absolute;
left: 0;
top: 0;
animation: ripple 2s linear infinite;
}
.loader::after {
animation-delay: 1s;
}
@keyframes ripple {
0% {
transform: scale(0);
opacity: 1;
}
100% {
transform: scale(1);
opacity: 0;
}
}

View File

@@ -1,16 +0,0 @@
import { cn } from "@/lib/utils";
import styles from "./PulseLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function PulseLoader({ size = 24, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -1,35 +0,0 @@
.loader {
width: 48px;
height: 48px;
display: inline-block;
position: relative;
}
.loader::after,
.loader::before {
content: "";
box-sizing: border-box;
width: 100%;
height: 100%;
border-radius: 50%;
background: currentColor;
position: absolute;
left: 0;
top: 0;
animation: animloader 2s linear infinite;
}
.loader::after {
animation-delay: 1s;
}
@keyframes animloader {
0% {
transform: scale(0);
opacity: 1;
}
100% {
transform: scale(1);
opacity: 0;
}
}

View File

@@ -1,16 +0,0 @@
import { cn } from "@/lib/utils";
import styles from "./ScaleLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function ScaleLoader({ size = 48, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -1,57 +0,0 @@
.loader {
position: relative;
display: inline-block;
flex-shrink: 0;
transform: rotateZ(45deg);
perspective: 1000px;
border-radius: 50%;
color: currentColor;
}
.loader::before,
.loader::after {
content: "";
display: block;
position: absolute;
top: 0;
left: 0;
width: inherit;
height: inherit;
border-radius: 50%;
transform: rotateX(70deg);
animation: spin 1s linear infinite;
}
.loader::after {
color: var(--spinner-accent, #a855f7);
transform: rotateY(70deg);
animation-delay: 0.4s;
}
@keyframes spin {
0%,
100% {
box-shadow: 0.2em 0 0 0 currentColor;
}
12% {
box-shadow: 0.2em 0.2em 0 0 currentColor;
}
25% {
box-shadow: 0 0.2em 0 0 currentColor;
}
37% {
box-shadow: -0.2em 0.2em 0 0 currentColor;
}
50% {
box-shadow: -0.2em 0 0 0 currentColor;
}
62% {
box-shadow: -0.2em -0.2em 0 0 currentColor;
}
75% {
box-shadow: 0 -0.2em 0 0 currentColor;
}
87% {
box-shadow: 0.2em -0.2em 0 0 currentColor;
}
}

View File

@@ -1,16 +0,0 @@
import { cn } from "@/lib/utils";
import styles from "./SpinnerLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function SpinnerLoader({ size = 24, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -1,235 +0,0 @@
import { Link } from "@/components/atoms/Link/Link";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
/* ------------------------------------------------------------------ */
/* Layout */
/* ------------------------------------------------------------------ */
export function ContentGrid({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return <div className={cn("grid gap-2", className)}>{children}</div>;
}
/* ------------------------------------------------------------------ */
/* Card */
/* ------------------------------------------------------------------ */
export function ContentCard({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<div
className={cn(
"min-w-0 rounded-lg bg-gradient-to-r from-purple-500/30 to-blue-500/30 p-[1px]",
className,
)}
>
<div className="rounded-lg bg-neutral-100 p-3">{children}</div>
</div>
);
}
/** Flex row with a left content area (`children`) and an optional right-side `action`. */
export function ContentCardHeader({
children,
action,
className,
}: {
children: React.ReactNode;
action?: React.ReactNode;
className?: string;
}) {
return (
<div className={cn("flex items-start justify-between gap-2", className)}>
<div className="min-w-0">{children}</div>
{action}
</div>
);
}
export function ContentCardTitle({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="body-medium"
className={cn("truncate text-zinc-800", className)}
>
{children}
</Text>
);
}
export function ContentCardSubtitle({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="small"
className={cn("mt-0.5 truncate font-mono text-zinc-800", className)}
>
{children}
</Text>
);
}
export function ContentCardDescription({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="body" className={cn("mt-2 text-zinc-800", className)}>
{children}
</Text>
);
}
/* ------------------------------------------------------------------ */
/* Text */
/* ------------------------------------------------------------------ */
export function ContentMessage({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="body" className={cn("text-zinc-800", className)}>
{children}
</Text>
);
}
export function ContentHint({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="small" className={cn("text-neutral-500", className)}>
{children}
</Text>
);
}
/* ------------------------------------------------------------------ */
/* Code / data */
/* ------------------------------------------------------------------ */
export function ContentCodeBlock({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<pre
className={cn(
"whitespace-pre-wrap rounded-lg border bg-black p-3 text-xs text-neutral-200",
className,
)}
>
{children}
</pre>
);
}
/* ------------------------------------------------------------------ */
/* Inline elements */
/* ------------------------------------------------------------------ */
export function ContentBadge({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="small"
as="span"
className={cn(
"shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-zinc-800",
className,
)}
>
{children}
</Text>
);
}
export function ContentLink({
href,
children,
className,
...rest
}: Omit<React.ComponentProps<typeof Link>, "className"> & {
className?: string;
}) {
return (
<Link
variant="primary"
isExternal
href={href}
className={cn("shrink-0 text-xs text-purple-500", className)}
{...rest}
>
{children}
</Link>
);
}
/* ------------------------------------------------------------------ */
/* Lists */
/* ------------------------------------------------------------------ */
export function ContentSuggestionsList({
items,
max = 5,
className,
}: {
items: string[];
max?: number;
className?: string;
}) {
if (items.length === 0) return null;
return (
<ul
className={cn(
"mt-2 list-disc space-y-1 pl-5 font-sans text-[0.75rem] leading-[1.125rem] text-zinc-800",
className,
)}
>
{items.slice(0, max).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
);
}

Some files were not shown because too many files have changed in this diff.