Compare commits

..

105 Commits

Author SHA1 Message Date
Bentlybro
9b71c03d96 Add 'Back' button to tutorial popover
Introduces a 'Back' button to the blocks control popover in the tutorial, allowing users to navigate to the previous step.
2025-08-20 11:57:39 +01:00
Bentlybro
b20b00a441 prettier 2025-08-20 11:13:08 +01:00
Bentlybro
84810ce0af Add delay and safeguard for tutorial popover initialization
Introduces a short delay before starting the tutorial in FlowEditor to ensure component state is initialized, especially after redirects. Also adds a safeguard in the tutorial to keep the blocks popover pinned during the relevant step.
2025-08-20 11:05:37 +01:00
Abhimanyu Yadav
2610c4579f feat(platform/dashboard): Enable editing for agent submissions (#10545)
- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10511

In this PR, I’ve added backend endpoints and a frontend UI for edit
functionality on the Agent Dashboard. Users can now update their store
submission if its status is `PENDING` or `APPROVED`, but not if it is
`REJECTED` or `DRAFT`. Changes to a pending submission are applied to the
same version, while changes to an approved submission create a new store
listing version.

The backend works something like this:

<img width="866" height="832" alt="Screenshot 2025-08-15 at 9 39 02 AM"
src="https://github.com/user-attachments/assets/209c60ac-8350-43c1-ba4c-7378d95ecba7"
/>
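
As a rough illustration of the versioning rule above (a simplified sketch with
placeholder models, not the actual backend code): edits to a `PENDING`
submission stay on the same version, while edits to an `APPROVED` submission
produce a new store listing version.

```python
# Simplified sketch of the versioning rule; StoreSubmission here is a
# placeholder model, not the real backend schema.
from dataclasses import dataclass, field, replace
from enum import Enum


class SubmissionStatus(str, Enum):
    PENDING = "PENDING"
    APPROVED = "APPROVED"
    REJECTED = "REJECTED"
    DRAFT = "DRAFT"


@dataclass(frozen=True)
class StoreSubmission:
    name: str
    version: int
    status: SubmissionStatus
    video_url: str = ""
    categories: list[str] = field(default_factory=list)


def apply_edit(submission: StoreSubmission, **changes) -> StoreSubmission:
    if submission.status == SubmissionStatus.PENDING:
        # Pending submissions are edited in place: same store listing version.
        return replace(submission, **changes)
    if submission.status == SubmissionStatus.APPROVED:
        # The approved version is left untouched; the edit lands in a new
        # store listing version.
        return replace(submission, **changes, version=submission.version + 1)
    # REJECTED and DRAFT submissions cannot be edited through this flow.
    raise ValueError(f"Cannot edit a {submission.status.value} submission")
```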

### Changes
- I’ve updated the `StoreSubmission` view to include `video_url` and
`categories`.
- I’ve added a new frontend UI for editing submissions.
- I’ve created an endpoint for editing submissions.
- I’ve added more end-to-end tests to ensure the edit submission
functionality works as expected.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have checked manually; everything is working as expected.
  - [x] All e2e tests are also passing.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: neo <neo.dowithless@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-08-20 02:49:29 +00:00
Abhimanyu Yadav
0c09b0c459 chore(api): remove launch darkly feature flags from api key endpoints (#10694)
Some API key endpoints have the LaunchDarkly feature flag enabled, while
others don’t. To ensure consistency and remove the API key flag from the
LaunchDarkly dashboard, I’m also removing it from the remaining endpoints.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything is working fine locally
2025-08-19 17:10:43 +00:00
Abhimanyu Yadav
1105e6c0d2 tests(frontend): e2e tests for api key page (#10683)
I’ve added three tests for the API keys page:

- The first test checks that the user is redirected to the login page when
they’re not authenticated.
- The second verifies that a new API key is created successfully.
- The third ensures that an existing API key can be revoked.

<img width="470" height="143" alt="Screenshot 2025-08-19 at 10 56 19 AM"
src="https://github.com/user-attachments/assets/d27bf736-61ec-435b-a6c4-820e4f3a5e2f"
/>

I’ve also removed the feature flag from the `delete_api_key` endpoint,
so we can use it on CI and in the local environment.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] tests are working perfectly locally.

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2025-08-19 16:04:15 +00:00
Ubbe
c6247f265e feat(frontend): new library agent view setup (#10652)
## Changes 🏗️

Setup for the new Agent Runs page:

<img width="900" height="521" alt="Screenshot 2025-08-15 at 14 36 34"
src="https://github.com/user-attachments/assets/460d6611-4b15-4878-92d3-b477dc4453a9"
/>

It is behind a LaunchDarkly feature flag, `new-agent-runs`, so we can
progressively enable it in staging and later in production.

### Other improvements

<img width="350" height="291" alt="Screenshot_2025-08-15_at_14 28 08"
src="https://github.com/user-attachments/assets/972d2a1a-a4cd-4e92-b6d7-2dcf7f57c2db"
/>

- Added a new `<ErrorCard />` component to gracefully display API errors
when fetching data
- Moved some sub-components of the old library page to a nested
`/components` folder 📁

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested with the feature flag ON and OFF

### For configuration changes:

None

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-19 12:11:39 +00:00
Abhimanyu Yadav
38610d1e7a feat(frontend): add reusable component for new block menu (#10687)
In this PR, I’ve added all the reusable, non-reactive components
that will be used in the new block menu. I’ve also included a new
library called `react-timeago`, which renders relative timestamps
(e.g. “3 minutes ago”).

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally
2025-08-19 12:01:08 +00:00
Ubbe
ebfbf31c73 ci(frontend): query generation on dev and ci check (#10417)
## Changes 🏗️

- Run the API query generation as part of the `dev` command
  - update the `README` to reflect this
- Add a CI job to generate queries and type-check, to make sure we are not
out of sync
  - the job runs on both Front-end and Back-end changes
- Generate the files via script to load the BE URL dynamically from the
env
- Remove generated files from Git
- Rename the `type-check` command to `types`

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI passes
  - [x] `README` updates make sense 

#### For configuration changes:

None

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-19 11:21:36 +00:00
Ubbe
4abe37396c fix(frontend): flaky e2e tests (#10689)
## Changes 🏗️

We had 2 flaky end-to-end tests:
- Build page → user can add two blocks and connect them
  - this was failing sometimes because the `Run` button on the builder does
not work reliably; sometimes you need to click it twice for it to work
- Agent dashboard → edit actions
  - some flaky tests asserted that agent submissions were not there; pulled
Abhi's fixes into this PR from
https://github.com/Significant-Gravitas/AutoGPT/pull/10545

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] E2E pass on the CI
  - [x] Changes make sense 

### For configuration changes:

None
2025-08-19 10:28:54 +00:00
Ubbe
fa14bf461b feat(frontend): add Line Tabs component (#10674)
## Changes 🏗️

<img width="800" height="644" alt="Screenshot 2025-08-18 at 23 11 46"
src="https://github.com/user-attachments/assets/8c9e1257-5b33-4e4d-937d-e8924b18d7dd"
/>


https://github.com/user-attachments/assets/4a83ed59-068e-46e0-8e76-4f34ed9dd976

- Needed for the new Agent Runs views (
[designs](https://www.figma.com/design/14jjs3hH3Hmkq4hGqxZWco/agent-runs-unification?node-id=187-8653&t=3BV5fF6NDXN7BlI8-1)
)
- Took **shadcn** tabs as a base and applied styles on top

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run storybook locally
  - [x] Play with the new tabs component 

### For configuration changes

None
2025-08-19 07:56:35 +00:00
Ubbe
e2c33e3d2a fix(frontend): agent activity cap (#10675)
## Changes 🏗️

Add the following caps to the **Agent Activity Dropdown**:
- display activity only from the last 72h
- display up to 1000 items
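
The real capping happens in the frontend dropdown; purely as an illustration
of the rule (72h window, at most 1000 items), a Python sketch:

```python
# Illustration only: keep activity from the last 72 hours, newest first,
# and return at most 1000 items.
from datetime import datetime, timedelta, timezone


def cap_activity(items, now=None, window=timedelta(hours=72), limit=1000):
    now = now or datetime.now(timezone.utc)
    recent = [item for item in items if now - item["created_at"] <= window]
    recent.sort(key=lambda item: item["created_at"], reverse=True)
    return recent[:limit]
```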

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Opened the agent activity dropdown locally with a large number of items
  - [x] It displays at most 1000 items, capped to the last 72h

### For configuration changes

None

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-19 07:56:26 +00:00
Swifty
650be0d1f7 fix(integration): FirecrawlExtractBlock returns 400 Invalid JSON schema when output_schema is passed as a string (#10669)
When the FirecrawlExtractBlock receives an `output_schema`, we currently
declare the field as a `str`.
Pydantic therefore serialises the JSON-looking value into a string, and
the Firecrawl API rejects the request with:

`400 Bad Request – Invalid JSON schema. path: ['schema']`

Direct curl requests work because the same structure is sent as a proper
JSON object.

### Changes 🏗️

- Changed `output_schema` to `dict` instead of `str`
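
A minimal before/after sketch (the model is simplified, not the actual
FirecrawlExtractBlock input schema): declaring the field as a `dict` keeps the
schema a JSON object when the request body is serialised, instead of a quoted
string.

```python
# Simplified model, for illustration only.
from pydantic import BaseModel


class ExtractInput(BaseModel):
    urls: list[str]
    # Before: `output_schema: str` -> the schema is sent as a quoted JSON
    # string, which Firecrawl rejects with "Invalid JSON schema".
    # After: a dict is serialised as a proper JSON object.
    output_schema: dict


payload = ExtractInput(
    urls=["https://example.com"],
    output_schema={"type": "object", "properties": {"title": {"type": "string"}}},
)
print(payload.model_dump_json())
# -> {"urls":["https://example.com"],"output_schema":{"type":"object", ...}}
```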

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested that firecrawl.extract(..., schema) works with a dict rather than a str

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-19 07:04:04 +00:00
Reinier van der Leer
35bd7f7f7a fix(frontend/builder): Prevent unnecessary saves before run (#10670)
- Resolves #10444

Sometimes, the order of nodes and/or links isn't consistent between
frontend and backend, which currently can result in unnecessary
re-saving of the graph when the user tries to run it.
Also, `sub_graphs` was not included in the frontend `Graph` type, which
can cause unchecked code issues when the object is propagated using
spread operators.
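
The actual `graphsEquivalent` helper is frontend TypeScript; the idea of the
fix, sketched here in Python with assumed field names, is to compare nodes and
links as identity-keyed collections rather than ordered lists:

```python
# Conceptual sketch only; node/link field names are assumptions.
def graphs_equivalent(a: dict, b: dict) -> bool:
    def nodes_by_id(graph: dict) -> dict:
        return {node["id"]: node for node in graph.get("nodes", [])}

    def link_keys(graph: dict) -> set:
        # A link's identity is its endpoints, not its position in the list.
        return {
            (link["source_id"], link["source_name"], link["sink_id"], link["sink_name"])
            for link in graph.get("links", [])
        }

    return nodes_by_id(a) == nodes_by_id(b) and link_keys(a) == link_keys(b)
```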

### Changes 🏗️

- fix(frontend/builder): Make `graphsEquivalent` insensitive to link and
node order
- dx(frontend): Fix typing of `Graph.sub_graphs` (and its variants)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Import an agent and open it in the builder
  - Run it without making any changes to the graph itself
    - [x] -> graph shouldn't re-save
2025-08-18 17:44:49 +00:00
Zamil Majdy
312cb0227f fix(backend/credit): prevent double-application of transactions due to race condition (#10672)
## 🚨 CRITICAL: Double Transaction Bug

**Critical Issue:** Top-up transactions were being applied TWICE to user
balances, causing severe accounting errors.

**Example:**
- User with $160 balance tops up $50
- Expected: $210 balance  
- Actual: $260 balance (extra $50 incorrectly credited)

This compromises the financial integrity of our credit system and
requires immediate fix.

### Changes 🏗️

1. **Added double-checked locking pattern in `_enable_transaction`** (`backend/data/credit.py`; sketched below)
   - Added transaction re-check INSIDE the locked transaction block (lines 294-298)
   - Prevents race condition when concurrent requests try to activate the same transaction
   - Ensures transaction can only be activated once, even with webhook retries

2. **Enhanced error messages in Stripe webhook handler** (`backend/server/routers/v1.py`)
   - Added detailed error messages for better debugging of webhook failures
   - Helps identify issues with payload validation or signature verification

### Root Cause Analysis 🔍

**TOCTOU (Time-of-Check to Time-of-Use) Race Condition:**

The original code checked `transaction.isActive` outside the database
lock. Between this check and acquiring the lock, another concurrent
request (webhook retry or duplicate) could enter, causing both to
proceed with activation.

**Sequence:**
1. Request A: Checks `isActive=False` 
2. Request B: Checks `isActive=False`  (webhook retry)  
3. Request A: Acquires lock, activates transaction, adds $50
4. Request B: Waits for lock, then ALSO adds $50 

**Contributing Factors:**
- Stripe webhook retry mechanism
- `@func_retry` decorator (up to 5 attempts)
- No database-level unique constraint on active transactions
- Missing atomicity between check and update
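
A minimal sketch of the double-checked locking idea, using a plain
`threading.Lock` as a stand-in for the locked database transaction inside the
real `_enable_transaction` (illustrative only, not the actual credit-system
code):

```python
import threading


class Transaction:
    def __init__(self, key: str, amount: int):
        self.key = key
        self.amount = amount
        self.is_active = False


class CreditStore:
    def __init__(self):
        self._lock = threading.Lock()
        self.balance = 0

    def enable_transaction(self, tx: Transaction) -> bool:
        if tx.is_active:          # first, cheap check outside the lock
            return False
        with self._lock:
            # Re-check INSIDE the lock: a concurrent webhook retry may have
            # activated the transaction between the first check and here.
            if tx.is_active:
                return False
            tx.is_active = True
            self.balance += tx.amount
            return True
```

With only the outer check, two concurrent calls (e.g. a Stripe webhook and its
retry) can both pass it and both credit the amount; the re-check under the
lock makes activation idempotent.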

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the double-check prevents duplicate transaction activation
  - [x] Tested concurrent webhook calls - only one succeeds in activating the transaction
  - [x] Confirmed balance is only incremented once per transaction
  - [x] Verified idempotency - multiple calls with the same transaction_key are safe
  - [x] All existing credit system tests pass
  - [x] Tested webhook error handling with invalid payloads/signatures

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes required - this is a code-only fix*
2025-08-18 17:16:08 +00:00
Abhimanyu Yadav
a8feb3c8d0 feat(platform/builder): implement launchdarkly feature flag for block menu redesign (#10667)
I’ve added a new LaunchDarkly flag to toggle between the new and old
block menu in the builder.

### Changes 🏗️
- A new flag named `NEW_BLOCK_MENU` has been added.
- A new block menu component has been created; it's a plain component,
currently just one line, and will be expanded with more components in the
future.
- A new control panel has been created, which improves state
localisation and has a new design according to the design files.

<img width="1512" height="981" alt="Screenshot 2025-08-18 at 2 49 54 PM"
src="https://github.com/user-attachments/assets/3deeefe3-9e42-4178-9cf9-77773ed7e172"
/>



### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-08-18 16:47:21 +00:00
Reinier van der Leer
5da5c2ecd6 Merge branch 'master' into dev 2025-08-18 16:42:59 +02:00
Reinier van der Leer
ba65fee862 hotfix(backend/executor): Fix propagation of passed-in credentials to sub-agents (#10668)
This should fix sub-agent execution issues with passed-in credentials after a crucial data path was removed in #10568.

Additionally, some of the changes ensure the `credentials_input_schema` gets refreshed correctly when saving a new version of a graph in the builder.

### Changes 🏗️

- Include `graph_credentials_inputs` in `nodes_input_masks` passed into sub-agent execution
- Fix credentials input schema in `update_graph` and `get_library_agent_by_graph_id` return
- Improve error message on sub-graph validation failure

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Import agent with sub-agent(s) with required credentials inputs & run it -> should work
2025-08-18 16:42:28 +02:00
neo
908dcd7b4b doc(readme): add links to translated README versions (#10659)
Added language selection links to the README for easier access to
translated versions: German, Spanish, French, Japanese, Korean,
Portuguese, Russian, and Chinese.

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-18 13:29:30 +00:00
Zamil Majdy
542f951dd8 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into dev 2025-08-18 07:08:39 +00:00
Zamil Majdy
72938590f2 hotfix: reduce scheduler max_workers to match database pool size (#10665)
## Summary
- Fixes scheduler pod crashes during peak scheduling periods (e.g.,
03:00:00)
- Reduces APScheduler ThreadPoolExecutor max_workers from 10 to 3
(matching scheduler_db_pool_size)
- Prevents event loop saturation that blocks health checks and causes
pod restarts

## Root Cause Analysis
During peak scheduling periods, multiple jobs execute simultaneously and
compete for the shared event loop through `run_async()`. This creates a
resource bottleneck where:

1. **ThreadPoolExecutor** runs up to 10 jobs concurrently
2. Each job calls `run_async()` which submits to the **same event loop**
that FastAPI health check needs
3. **Health check blocks** waiting for event loop availability 
4. **Liveness probe fails** after 5 consecutive timeouts (50s)
5. **Pod gets killed** with SIGKILL (exit code 137)
6. **Executions orphaned** - created in DB but never published to
RabbitMQ

## Solution
Match `max_workers` to `scheduler_db_pool_size` (3) to prevent more
concurrent jobs than the system can handle without blocking critical
health checks.
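
A minimal sketch of the cap, assuming APScheduler's thread-pool executor and
`scheduler_db_pool_size = 3` as described above (not the actual scheduler
service code):

```python
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.schedulers.background import BackgroundScheduler

scheduler_db_pool_size = 3  # value from the PR description

scheduler = BackgroundScheduler(
    executors={
        # Was max_workers=10: more concurrent jobs than DB connections, which
        # saturated the shared event loop and starved the health check.
        "default": ThreadPoolExecutor(max_workers=scheduler_db_pool_size),
    }
)
scheduler.start()
```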

## Evidence
- Pod restart at exactly 03:05:48 when executions
e47cd564-ed87-4a52-999b-40804c41537a and
eae69811-4c7c-4cd5-b084-41872293185b were created
- 7 scheduled jobs triggered simultaneously at 03:00:00
- Health check normally responds in 0.007s but times out during high
concurrency
- Exit code 137 indicates SIGKILL from liveness probe failure

## Test Plan
- [ ] Monitor scheduler pod stability during peak scheduling periods
- [ ] Verify no executions remain QUEUED without being published to
RabbitMQ
- [ ] Confirm health checks remain responsive under load
- [ ] Check that job execution still works correctly with reduced
concurrency

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-18 05:49:39 +00:00
Krzysztof Czerwinski
5d364e13f6 chore(frontend): Regenerate API client for orval v7.11.2 (#10663)
### Changes 🏗️

- Generate API client for orval v7.11.2
- Fix type error in `useAgentSelectStep.ts`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Platform works
  - [x] Updated codepath in `useAgentSelectStep.ts` works
2025-08-18 03:20:37 +00:00
Zamil Majdy
32513b26ab Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-08-17 08:18:15 +07:00
Zamil Majdy
bf92e7dbc8 hotfix(backend/executor): Fix RabbitMQ channel retry logic in executor (#10661)
## Summary
**HOTFIX for production** - Fixes the executor being stuck in an infinite
retry loop when RabbitMQ channels are closed
- Ensures proper reconnection by checking the channel state before
attempting to consume messages
- Prevents accumulation of thousands of retry attempts (we were seeing
7000+ retries)

## Changes
The executor was stuck repeatedly failing with "Channel is closed"
errors because the `continuous_retry` decorator was attempting to reuse
closed channels instead of creating new ones.

Added channel state checks (`is_ready`) before connecting in both:
- `_consume_execution_run()` 
- `_consume_execution_cancel()`

When a channel is not ready (closed), the code now:
1. Disconnects the client (safe operation, checks if already
disconnected)
2. Establishes a fresh connection with a new channel
3. Proceeds with message consumption
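
A minimal sketch of that guard, assuming a client wrapper exposing
`is_ready`, `disconnect()`, `connect()` and `consume()` as described above
(not the actual executor code):

```python
def consume_with_fresh_channel(client, queue_name, on_message):
    if not client.is_ready:
        # Channel (or connection) is closed: drop the stale state first.
        # disconnect() is safe to call even if already disconnected.
        client.disconnect()
        # Establish a fresh connection with a new channel.
        client.connect()
    # Only start consuming once the channel is known to be usable, so the
    # continuous-retry loop no longer spins on a closed channel.
    client.consume(queue_name, on_message)
```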

## Test plan
- [x] Verified the disconnect() method is safe to call on already
disconnected clients
- [x] Confirmed is_ready property checks both connection and channel
state
- [ ] Deploy to environment and verify executors reconnect properly
after channel failures
- [ ] Monitor logs to ensure no more "Channel is closed" retry loops

## Related Issues
Fixes critical production issue where:
- Executor pods show repeated "Channel is closed" errors
- 757 messages stuck in `graph_execution_queue`
- 102,286 messages in `failed_notifications` queue
- RabbitMQ logs show connections being closed due to missed heartbeats

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-16 17:06:06 -05:00
Nicholas Tindle
6fce3a09ea fix(platform): fix admin dashboard credit tool search, pagination, and modal feedback issues (#10644)
## Summary
Fixes three critical issues in the admin dashboard spending page
(SECRT-1438):
- Fixed user search not working (P1) - query parameters weren't being
passed to backend
- Fixed broken pagination (P1) - server-side GET requests missing query
parameters
- Added visual feedback for credit updates (P3) - toast notifications,
loading states, auto-dismiss modal

## Root Cause
Server-side API requests weren't appending query parameters for
GET/DELETE requests in the `makeAuthenticatedRequest` function in
`helpers.ts`.
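
The fix itself lives in the TypeScript helpers; conceptually, GET/DELETE
requests have no body, so the parameters have to be encoded into the URL. In
Python terms:

```python
from urllib.parse import urlencode


def build_url_with_query(base_url: str, params: dict | None) -> str:
    # For GET/DELETE there is no request body, so query params go in the URL.
    if not params:
        return base_url
    return f"{base_url}?{urlencode(params, doseq=True)}"


# Illustrative call (path and parameters are made up):
# build_url_with_query("/api/admin/spending", {"search": "a@b.com", "page": 2})
# -> "/api/admin/spending?search=a%40b.com&page=2"
```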

## Changes
- Added missing `transaction_filter` parameter to API client's
`getUsersHistory` method
- Fixed server-side GET request query parameter handling by updating
`makeAuthenticatedRequest` to use `buildUrlWithQuery`
- Added Suspense key to force re-render on URL parameter changes
- Added toast notifications for success/error states when adding credits
- Modal now closes automatically after successful submission
- Added loading state with disabled buttons during credit submission
- Page refreshes automatically to show updated balances
- Added debug logging to help diagnose parameter passing issues

## Test Plan
- [x] Search for users by email in admin spending dashboard
- [x] Navigate through pagination (Next/Previous buttons)
- [x] Filter by transaction type (Grant, Usage, etc.)
- [x] Add credits to a user account
- [x] Verify toast notification appears
- [x] Verify modal closes after successful submission
- [x] Verify balance updates without manual refresh

## Linear Issue
Closes [SECRT-1438](https://linear.app/autogpt/issue/SECRT-1438)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-15 18:06:34 +00:00
Ubbe
9158d4b6a2 docs(frontend): update README with Docker instructions (#10648)
## Changes 🏗️

Update the Front-end `README` to clarify how to run the Front-end and
Back-end separately or together via Docker.

You can [preview the README
here](8f607ca852/autogpt_platform/frontend/README.md).

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `README` makes sense and looks good formatting wise

### For configuration changes:

None
2025-08-15 14:35:52 +00:00
dependabot[bot]
2403931c2e chore(frontend/deps): bump the production-dependencies group across 1 directory with 25 updates (#10651)
Bumps the production-dependencies group with 25 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@hookform/resolvers](https://github.com/react-hook-form/resolvers) | `5.2.0` | `5.2.1` |
| [@next/third-parties](https://github.com/vercel/next.js/tree/HEAD/packages/third-parties) | `15.4.4` | `15.4.6` |
| [@radix-ui/react-alert-dialog](https://github.com/radix-ui/primitives) | `1.1.14` | `1.1.15` |
| [@radix-ui/react-checkbox](https://github.com/radix-ui/primitives) | `1.3.2` | `1.3.3` |
| [@radix-ui/react-collapsible](https://github.com/radix-ui/primitives) | `1.1.11` | `1.1.12` |
| [@radix-ui/react-context-menu](https://github.com/radix-ui/primitives) | `2.2.15` | `2.2.16` |
| [@radix-ui/react-dialog](https://github.com/radix-ui/primitives) | `1.1.14` | `1.1.15` |
| [@radix-ui/react-dropdown-menu](https://github.com/radix-ui/primitives) | `2.1.15` | `2.1.16` |
| [@radix-ui/react-popover](https://github.com/radix-ui/primitives) | `1.1.14` | `1.1.15` |
| [@radix-ui/react-radio-group](https://github.com/radix-ui/primitives) | `1.3.7` | `1.3.8` |
| [@radix-ui/react-scroll-area](https://github.com/radix-ui/primitives) | `1.2.9` | `1.2.10` |
| [@radix-ui/react-select](https://github.com/radix-ui/primitives) | `2.2.5` | `2.2.6` |
| [@radix-ui/react-switch](https://github.com/radix-ui/primitives) | `1.2.5` | `1.2.6` |
| [@radix-ui/react-tabs](https://github.com/radix-ui/primitives) | `1.1.12` | `1.1.13` |
| [@radix-ui/react-toast](https://github.com/radix-ui/primitives) | `1.2.14` | `1.2.15` |
| [@radix-ui/react-tooltip](https://github.com/radix-ui/primitives) | `1.2.7` | `1.2.8` |
| [@supabase/supabase-js](https://github.com/supabase/supabase-js) | `2.52.1` | `2.55.0` |
| [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) | `5.83.0` | `5.85.3` |
| [@xyflow/react](https://github.com/xyflow/xyflow/tree/HEAD/packages/react) | `12.8.2` | `12.8.3` |
| [framer-motion](https://github.com/motiondivision/motion) | `12.23.9` | `12.23.12` |
| [lucide-react](https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react) | `0.525.0` | `0.539.0` |
| [next](https://github.com/vercel/next.js) | `15.4.4` | `15.4.6` |
| [react-day-picker](https://github.com/gpbl/react-day-picker) | `9.8.0` | `9.8.1` |
| [react-hook-form](https://github.com/react-hook-form/react-hook-form) | `7.61.1` | `7.62.0` |
| [sonner](https://github.com/emilkowalski/sonner) | `2.0.6` | `2.0.7` |


</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e71198d9b3"><code>e71198d</code></a>
chore: icon alias improvements (<a
href="https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react/issues/2861">#2861</a>)</li>
<li>See full diff in <a
href="https://github.com/lucide-icons/lucide/commits/0.539.0/packages/lucide-react">compare
view</a></li>
</ul>
</details>
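
The releases above mainly add and revise icons (`brick-wall-shield`, `kayak`, `circle-star`, among others). As a rough sketch of how such icons are consumed, assuming the usual PascalCase export names that lucide-react derives from the icon slugs:

```tsx
// Rough usage sketch, not part of this PR: lucide-react exposes each icon as a
// PascalCase React component, so the icons added in these releases should be
// importable as below once the frontend is on 0.539.0. The export names
// (BrickWallShield, CircleStar, Kayak) are inferred from that naming convention.
import { BrickWallShield, CircleStar, Kayak } from "lucide-react";

export function NewIconsPreview() {
  return (
    <div>
      {/* size and strokeWidth are standard lucide-react props */}
      <BrickWallShield size={20} />
      <CircleStar size={20} />
      <Kayak size={20} strokeWidth={1.5} />
    </div>
  );
}
```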
<br />

Updates `next` from 15.4.4 to 15.4.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">next's
releases</a>.</em></p>
<blockquote>
<h2>v15.4.6</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: <code>_error</code> page's <code>req.url</code> can be
overwritten to dynamic param on minimal mode (<a
href="https://redirect.github.com/vercel/next.js/issues/82347">#82347</a>)</li>
<li>fix: add <code>?dpl</code> to fonts in
<code>/_next/static/media</code> (<a
href="https://redirect.github.com/vercel/next.js/issues/82384">#82384</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/devjiwonchoi"><code>@​devjiwonchoi</code></a>,
<a href="https://github.com/ijjk"><code>@​ijjk</code></a>, and <a
href="https://github.com/styfle"><code>@​styfle</code></a> for
helping!</p>
<h2>v15.4.5</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Fix API stripping JSON incorrectly (<a
href="https://redirect.github.com/vercel/next.js/issues/82062">#82062</a>)</li>
<li>Fix i18n fallback: false collision (<a
href="https://redirect.github.com/vercel/next.js/issues/82158">#82158</a>)</li>
<li>Revert &quot;Fix tracing of server actions imported by client
components (<a
href="https://redirect.github.com/vercel/next.js/issues/82167">#82167</a>)</li>
<li>Ensure setAssetPrefix updates config instance (<a
href="https://redirect.github.com/vercel/next.js/issues/82165">#82165</a>)</li>
<li>Turbopack: update mimalloc (<a
href="https://redirect.github.com/vercel/next.js/issues/82166">#82166</a>)</li>
<li>fix(next/image): fix image-optimizer.ts headers (<a
href="https://redirect.github.com/vercel/next.js/issues/82175">#82175</a>)</li>
<li>fix(next/image): improve and simplify detect-content-type (<a
href="https://redirect.github.com/vercel/next.js/issues/82174">#82174</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/ijjk"><code>@​ijjk</code></a>, <a
href="https://github.com/sokra"><code>@​sokra</code></a>, and <a
href="https://github.com/styfle"><code>@​styfle</code></a> for
helping!</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="be4aafd4b7"><code>be4aafd</code></a>
v15.4.6</li>
<li><a
href="91e5b6b84f"><code>91e5b6b</code></a>
Backport &quot;fix: add <code>?dpl</code> to fonts in
<code>/_next/static/media</code> (<a
href="https://redirect.github.com/vercel/next.js/issues/82384">#82384</a>)&quot;
(<a
href="https://redirect.github.com/vercel/next.js/issues/82421">#82421</a>)</li>
<li><a
href="f1629d9395"><code>f1629d9</code></a>
Backport &quot;[Pages] fix: <code>_error</code> page's
<code>req.url</code> can be overwritten t… (<a
href="https://redirect.github.com/vercel/next.js/issues/82377">#82377</a>)</li>
<li><a
href="b9aab5dbe9"><code>b9aab5d</code></a>
v15.4.5</li>
<li><a
href="a8c93c49dd"><code>a8c93c4</code></a>
Disable test new tests jobs</li>
<li><a
href="ed2a6c7548"><code>ed2a6c7</code></a>
[backport]: fix(next/image): improve and simplify detect-content-type
(<a
href="https://redirect.github.com/vercel/next.js/issues/82118">#82118</a>...</li>
<li><a
href="f00fcc9011"><code>f00fcc9</code></a>
[backport]: fix(next/image): fix image-optimizer.ts headers (<a
href="https://redirect.github.com/vercel/next.js/issues/82114">#82114</a>)
(<a
href="https://redirect.github.com/vercel/next.js/issues/82175">#82175</a>)</li>
<li><a
href="55a7568e9d"><code>55a7568</code></a>
Backport: Turbopack: update mimalloc (<a
href="https://redirect.github.com/vercel/next.js/issues/81993">#81993</a>)
(<a
href="https://redirect.github.com/vercel/next.js/issues/82166">#82166</a>)</li>
<li><a
href="5bc4b368e5"><code>5bc4b36</code></a>
[backport] Ensure setAssetPrefix updates config instance (<a
href="https://redirect.github.com/vercel/next.js/issues/82165">#82165</a>)</li>
<li><a
href="717dfb6ec9"><code>717dfb6</code></a>
[Backport] Revert &quot;Fix tracing of server actions imported by client
component...</li>
<li>Additional commits viewable in <a
href="https://github.com/vercel/next.js/compare/v15.4.4...v15.4.6">compare
view</a></li>
</ul>
</details>
<br />

Updates `react-day-picker` from 9.8.0 to 9.8.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/gpbl/react-day-picker/releases">react-day-picker's
releases</a>.</em></p>
<blockquote>
<h2>v9.8.1</h2>
<p>Improved <code>captionLayout</code> documentation and build
process.</p>
<h2>What's Changed</h2>
<ul>
<li>docs: Improve documentation for <code>captionLayout</code> prop by
<a href="https://github.com/rodgobbi"><code>@​rodgobbi</code></a> in <a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2788">gpbl/react-day-picker#2788</a>
and <a
href="https://github.com/haecheonlee"><code>@​haecheonlee</code></a> in
<a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2787">gpbl/react-day-picker#2787</a></li>
<li>build: avoid locking dependencies by <a
href="https://github.com/nihgwu"><code>@​nihgwu</code></a> in <a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2789">gpbl/react-day-picker#2789</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/haecheonlee"><code>@​haecheonlee</code></a>
made their first contribution in <a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2787">gpbl/react-day-picker#2787</a></li>
<li><a href="https://github.com/n-zngr"><code>@​n-zngr</code></a> made
their first contribution in <a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2790">gpbl/react-day-picker#2790</a></li>
<li><a href="https://github.com/nihgwu"><code>@​nihgwu</code></a> made
their first contribution in <a
href="https://redirect.github.com/gpbl/react-day-picker/pull/2789">gpbl/react-day-picker#2789</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/gpbl/react-day-picker/compare/v9.8.0...v9.8.1">https://github.com/gpbl/react-day-picker/compare/v9.8.0...v9.8.1</a></p>
</blockquote>
</details>
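
The `captionLayout` prop referenced in these notes switches the calendar caption between a plain label and month/year dropdowns. A minimal sketch of how it might be used; the `"dropdown"` value and the `startMonth`/`endMonth` range props are assumed from the v9 API and worth verifying against the improved docs:

```tsx
// Minimal sketch of the captionLayout prop discussed above; the "dropdown"
// value and the startMonth/endMonth range props are assumed from the
// react-day-picker v9 API and should be checked against the docs.
import { DayPicker } from "react-day-picker";
import "react-day-picker/style.css";

export function DropdownCaptionCalendar() {
  return (
    <DayPicker
      mode="single"
      captionLayout="dropdown"
      startMonth={new Date(2020, 0)}
      endMonth={new Date(2030, 11)}
    />
  );
}
```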
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd55df2e3a"><code>bd55df2</code></a>
Bump v9.8.1</li>
<li><a
href="3782986bd6"><code>3782986</code></a>
build: upgrade dev dependencies (<a
href="https://redirect.github.com/gpbl/react-day-picker/issues/2800">#2800</a>)</li>
<li><a
href="f74c61965a"><code>f74c619</code></a>
build: avoid locking dependencies (<a
href="https://redirect.github.com/gpbl/react-day-picker/issues/2789">#2789</a>)</li>
<li><a
href="3da2e918fb"><code>3da2e91</code></a>
refactor(website): correct minor spelling error (<a
href="https://redirect.github.com/gpbl/react-day-picker/issues/2790">#2790</a>)</li>
<li><a
href="7e70c4d46d"><code>7e70c4d</code></a>
docs: Improve documentation for <code>captionLayout</code> prop (<a
href="https://redirect.github.com/gpbl/react-day-picker/issues/2788">#2788</a>)</li>
<li><a
href="14940f1c77"><code>14940f1</code></a>
docs: fix captionLayout props doc (<a
href="https://redirect.github.com/gpbl/react-day-picker/issues/2787">#2787</a>)</li>
<li>See full diff in <a
href="https://github.com/gpbl/react-day-picker/compare/v9.8.0...v9.8.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `react-hook-form` from 7.61.1 to 7.62.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/react-hook-form/react-hook-form/releases">react-hook-form's
releases</a>.</em></p>
<blockquote>
<h2>Version 7.62.0</h2>
<p>👨‍🔧 prevent onBlur for readOnly fields (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12971">#12971</a>)
🐞 fix <a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12988">#12988</a>
sync two defaultValues after reset with new defaultValues (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12990">#12990</a>)
🐞 fix: do not override prototype of data in cloneObject (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12985">#12985</a>)
🐞 fix field name type conflict in nested FieldErrors (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12972">#12972</a>)</p>
<p>thanks to <a
href="https://github.com/candymask0712"><code>@​candymask0712</code></a>,
<a href="https://github.com/Adityapradh"><code>@​Adityapradh</code></a>,
<a href="https://github.com/Ty3uK"><code>@​Ty3uK</code></a> &amp; <a
href="https://github.com/kichikawa57"><code>@​kichikawa57</code></a></p>
</blockquote>
</details>
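
The 7.62.0 notes above mention syncing default values after `reset` is called with new values, and skipping `onBlur` handling for read-only fields. A minimal sketch of the `reset(newValues)` pattern that fix concerns; the field names and the `fetchProfile` loader are hypothetical:

```tsx
// Minimal sketch of the reset-with-new-defaults pattern the 7.62.0 fix
// (#12990) is about: passing values to reset() replaces both the field values
// and the default values, so formState.isDirty is computed against fresh data.
// ProfileForm fields and fetchProfile() are hypothetical.
import { useEffect } from "react";
import { useForm } from "react-hook-form";

type ProfileForm = { name: string; email: string };

export function ProfileEditor({
  fetchProfile,
}: {
  fetchProfile: () => Promise<ProfileForm>;
}) {
  const { register, reset, formState } = useForm<ProfileForm>({
    defaultValues: { name: "", email: "" },
  });

  useEffect(() => {
    // After loading, reset() makes the fetched data the new defaults.
    fetchProfile().then((profile) => reset(profile));
  }, [fetchProfile, reset]);

  return (
    <form>
      <input {...register("name")} />
      <input {...register("email")} readOnly />
      <span>{formState.isDirty ? "Unsaved changes" : "Up to date"}</span>
    </form>
  );
}
```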
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1b5a6748a8"><code>1b5a674</code></a>
7.62.0</li>
<li><a
href="6025100ea1"><code>6025100</code></a>
🐞 fix <a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12988">#12988</a>
sync two defaultValues after reset with new defaultValues (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12990">#12990</a>)</li>
<li><a
href="323cd41674"><code>323cd41</code></a>
🐞 fix field name type conflict in nested FieldErrors (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12972">#12972</a>)</li>
<li><a
href="dac28d60e1"><code>dac28d6</code></a>
👨‍🔧 fix: prevent onBlur for readOnly fields (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12971">#12971</a>)</li>
<li><a
href="642145a1ba"><code>642145a</code></a>
🧪 test: add unit tests for convertToArrayPayload utility (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12967">#12967</a>)</li>
<li><a
href="15c03a553f"><code>15c03a5</code></a>
🐞 fix: do not override prototype of <code>data</code> in
<code>cloneObject</code> (<a
href="https://redirect.github.com/react-hook-form/react-hook-form/issues/12985">#12985</a>)</li>
<li>See full diff in <a
href="https://github.com/react-hook-form/react-hook-form/compare/v7.61.1...v7.62.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `sonner` from 2.0.6 to 2.0.7
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/emilkowalski/sonner/releases">sonner's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.7</h2>
<p>Sonner now supports multiple <code>&lt;Toaster /&gt;</code>
components, see more <a
href="https://sonner.emilkowal.ski/toaster#multiple-toasters">here</a>.</p>
<h2>What's Changed</h2>
<ul>
<li>feat: add testId prop for individual toast components by <a
href="https://github.com/b-like-bahar"><code>@​b-like-bahar</code></a>
in <a
href="https://redirect.github.com/emilkowalski/sonner/pull/660">emilkowalski/sonner#660</a></li>
<li>feat(toaster): add support for multiple toasters with unique
identifiers by <a
href="https://github.com/taroj1205"><code>@​taroj1205</code></a> in <a
href="https://redirect.github.com/emilkowalski/sonner/pull/665">emilkowalski/sonner#665</a></li>
<li>fix: tests by <a
href="https://github.com/emilkowalski"><code>@​emilkowalski</code></a>
in <a
href="https://redirect.github.com/emilkowalski/sonner/pull/677">emilkowalski/sonner#677</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/b-like-bahar"><code>@​b-like-bahar</code></a>
made their first contribution in <a
href="https://redirect.github.com/emilkowalski/sonner/pull/660">emilkowalski/sonner#660</a></li>
<li><a href="https://github.com/taroj1205"><code>@​taroj1205</code></a>
made their first contribution in <a
href="https://redirect.github.com/emilkowalski/sonner/pull/665">emilkowalski/sonner#665</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/emilkowalski/sonner/compare/v2.0.6...v2.0.7">https://github.com/emilkowalski/sonner/compare/v2.0.6...v2.0.7</a></p>
</blockquote>
</details>
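
The note above adds support for rendering more than one `<Toaster />`. A minimal sketch of what routing a toast to a specific toaster could look like; the `id` prop and `toasterId` option are assumptions based on the feature description and the linked docs, not verified here:

```tsx
// Minimal sketch, assuming the multiple-toaster API in 2.0.7 exposes an `id`
// prop on <Toaster /> and a matching `toasterId` option on toast(); verify
// both names against the sonner docs linked above.
import type { ReactNode } from "react";
import { Toaster, toast } from "sonner";

export function Layout({ children }: { children: ReactNode }) {
  return (
    <>
      {children}
      {/* Two toasters with different placements */}
      <Toaster id="top" position="top-right" />
      <Toaster id="bottom" position="bottom-center" />
    </>
  );
}

// Elsewhere in the app: target a specific toaster by its id.
export function notifySaved() {
  toast.success("Saved", { toasterId: "top" });
}
```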
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3ba7aa17ab"><code>3ba7aa1</code></a>
v2.0.7</li>
<li><a
href="0604827063"><code>0604827</code></a>
fix: tests (<a
href="https://redirect.github.com/emilkowalski/sonner/issues/677">#677</a>)</li>
<li><a
href="c50fe92dfb"><code>c50fe92</code></a>
fix tests</li>
<li><a
href="0600a5cb40"><code>0600a5c</code></a>
feat(toaster): add support for multiple toasters with unique identifiers
(<a
href="https://redirect.github.com/emilkowalski/sonner/issues/665">#665</a>)</li>
<li><a
href="c14bf44a03"><code>c14bf44</code></a>
feat: add testId prop for individual toast components (<a
href="https://redirect.github.com/emilkowalski/sonner/issues/660">#660</a>)</li>
<li>See full diff in <a
href="https://github.com/emilkowalski/sonner/compare/v2.0.6...v2.0.7">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-15 12:21:35 +00:00
dependabot[bot]
af58b316a2 chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 16 updates (#10548)
Bumps the development-dependencies group with 16 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests) | `4.0.1` | `4.1.0` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.1` | `1.54.2` |
| [@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y) | `9.0.17` | `9.1.1` |
| [@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs) | `9.0.17` | `9.1.1` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `9.0.17` | `9.1.1` |
| [@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding) | `9.0.17` | `9.1.1` |
| [@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs) | `9.0.17` | `9.1.1` |
| [@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query) | `5.81.2` | `5.83.1` |
| [@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools) | `5.83.0` | `5.84.1` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `24.0.15` | `24.2.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.2` | `13.1.3` |
| [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) | `15.4.2` | `15.4.5` |
| [eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin) | `9.0.17` | `9.1.1` |
| [orval](https://github.com/orval-labs/orval) | `7.10.0` | `7.11.2` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core) | `9.0.17` | `9.1.1` |
| [typescript](https://github.com/microsoft/TypeScript) | `5.8.3` | `5.9.2` |


Updates `@chromatic-com/storybook` from 4.0.1 to 4.1.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/releases"><code>@​chromatic-com/storybook</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v4.1.0</h2>
<h4>🚀 Enhancement</h4>
<ul>
<li>Support disabling ChannelFetch using <code>--debug</code> flag <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/378">#378</a>
(<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Chore: Fix package.json <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/385">#385</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Add support for Storybook 9.2 <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/384">#384</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Update GraphQL schema and handle
<code>ComparisonResult.SKIPPED</code> value <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/379">#379</a>
(<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li>Gert Hengeveld (<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h2>v4.1.0-next.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Update GraphQL schema and handle
<code>ComparisonResult.SKIPPED</code> value <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/379">#379</a>
(<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Gert Hengeveld (<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/blob/v4.1.0/CHANGELOG.md"><code>@​chromatic-com/storybook</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v4.1.0 (Fri Aug 01 2025)</h1>
<h4>🚀 Enhancement</h4>
<ul>
<li>Support disabling ChannelFetch using <code>--debug</code> flag <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/378">#378</a>
(<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Chore: Fix package.json <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/385">#385</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Add support for Storybook 9.2 <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/384">#384</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Update GraphQL schema and handle
<code>ComparisonResult.SKIPPED</code> value <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/379">#379</a>
(<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li>Gert Hengeveld (<a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>)</li>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5dd92e687c"><code>5dd92e6</code></a>
Bump version to: 4.1.0 [skip ci]</li>
<li><a
href="bba5226968"><code>bba5226</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="c7167d581c"><code>c7167d5</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/386">#386</a>
from chromaui/next</li>
<li><a
href="8096173502"><code>8096173</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/385">#385</a>
from chromaui/yann/retry-release-4-1</li>
<li><a
href="19eb7933e2"><code>19eb793</code></a>
fix package.json</li>
<li><a
href="a14e50dc8d"><code>a14e50d</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/380">#380</a>
from chromaui/next</li>
<li><a
href="d9727c8178"><code>d9727c8</code></a>
[ci skip] cleanup</li>
<li><a
href="154e220df6"><code>154e220</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/384">#384</a>
from chromaui/yann/support-sb-9.2</li>
<li><a
href="00170dae29"><code>00170da</code></a>
Add support for Storybook 9.2</li>
<li><a
href="e8fa97557e"><code>e8fa975</code></a>
Merge branch 'main' into next</li>
<li>Additional commits viewable in <a
href="https://github.com/chromaui/addon-visual-tests/compare/v4.0.1...v4.1.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@playwright/test` from 1.54.1 to 1.54.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/microsoft/playwright/releases"><code>@​playwright/test</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v1.54.2</h2>
<h3>Highlights</h3>
<p><a
href="https://redirect.github.com/microsoft/playwright/issues/36714">microsoft/playwright#36714</a>
- [Regression]: Codegen is not able to launch in Administrator Terminal
on Windows (ProtocolError: Protocol error)
<a
href="https://redirect.github.com/microsoft/playwright/issues/36828">microsoft/playwright#36828</a>
- [Regression]: Playwright Codegen keeps spamming with selected option
<a
href="https://redirect.github.com/microsoft/playwright/issues/36810">microsoft/playwright#36810</a>
- [Regression]: Starting Codegen with target language doesn't work
anymore</p>
<h2>Browser Versions</h2>
<ul>
<li>Chromium 139.0.7258.5</li>
<li>Mozilla Firefox 140.0.2</li>
<li>WebKit 26.0</li>
</ul>
<p>This version was also tested against the following stable
channels:</p>
<ul>
<li>Google Chrome 140</li>
<li>Microsoft Edge 140</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="00ce6a8b72"><code>00ce6a8</code></a>
chore: mark v1.54.2 (<a
href="https://redirect.github.com/microsoft/playwright/issues/36884">#36884</a>)</li>
<li><a
href="e5b2fbdd73"><code>e5b2fbd</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36767">#36767</a>):
test: speculative fix for flaky role selectors test</li>
<li><a
href="63c168f8a5"><code>63c168f</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36881">#36881</a>):
chore: throw pretty error if launchApp is launched using...</li>
<li><a
href="ce9e3d03cc"><code>ce9e3d0</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36879">#36879</a>):
fix(chromium): launch UI Mode / Trace Viewer under Admin...</li>
<li><a
href="b91e3398c5"><code>b91e339</code></a>
fix-merge(<a
href="https://redirect.github.com/microsoft/playwright/issues/36863">#36863</a>):
adapt to the old source base</li>
<li><a
href="3f4df2c197"><code>3f4df2c</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36864">#36864</a>):
fix: initial target in codegen</li>
<li><a
href="b847f5efce"><code>b847f5e</code></a>
cherry-pick((<a
href="https://redirect.github.com/microsoft/playwright/issues/36863">#36863</a>):
chore: do not perform option selection while recording</li>
<li><a
href="97aab60570"><code>97aab60</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36734">#36734</a>):
test: fix client-certificate tests</li>
<li>See full diff in <a
href="https://github.com/microsoft/playwright/compare/v1.54.1...v1.54.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-a11y` from 9.0.17 to 9.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-a11y</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.1</h2>
<h2>9.1.1</h2>
<ul>
<li>CLI: Fix throwing in readonly environments - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31785">#31785</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
<li>Onboarding: Tweak referral wording in survey - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32185">#32185</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Send index stats on dev exit - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32168">#32168</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
<h2>v9.1.0</h2>
<h2>9.1.0</h2>
<p>Storybook 9.1 is packed with new features and improvements to enhance
accessibility, streamline testing, and make your development workflow
even smoother!</p>
<p>🚀 Improved upgrade command with monorepo support for seamless
upgrades
🅰 Angular fixes for Tailwind 4, cache busting, and zoneless
compatibility
🧪 <code>sb.mock</code> API and Automocking: one-line module mocking to
simplify your testing workflow
🧪 Favicon shows test run status for quick visual feedback
⚛️ Easier configuration for React Native projects
🔥 Auto-abort play functions on HMR to avoid unwanted side effects
🏗️ Improved CSF factories API for type safe story definitions
♿️ A11y improvements across Storybook’s UI — addon panel, toolbar,
sidebar, mobile &amp; more
💯 Dozens more fixes and improvements based on community feedback!</p>
<!-- raw HTML omitted -->
<ul>
<li>A11y: Improved toolbar a11y by fixing semantics - <a
href="https://redirect.github.com/storybookjs/storybook/pull/28672">#28672</a>,
thanks <a
href="https://github.com/mehm8128"><code>@​mehm8128</code></a>!</li>
<li>Addon Vitest: Remove Optimize deps candidates due to Vitest warnings
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31809">#31809</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Angular: Bundle using TSup - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31690">#31690</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Angular: Prevent directory import in Angular builders - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32012">#32012</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Automigration: Await updateMainConfig in removeEssentials - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32140">#32140</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Builder-Vite: Fix logic related to setting allowedHosts when IP
address used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31472">#31472</a>,
thanks <a
href="https://github.com/JSMike"><code>@​JSMike</code></a>!</li>
<li>Controls: Improve the accessibility of the object control - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31581">#31581</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Abort play function on HMR - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31542">#31542</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Avoid pausing animations in non-Vitest Playwright environments
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/32123">#32123</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Cleanup of type following up v9 and small verbatimModuleSyntax
type fix - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31823">#31823</a>,
thanks <a
href="https://github.com/alcpereira"><code>@​alcpereira</code></a>!</li>
<li>Core: Fix aria-controls attribute on sidebar nodes to include all
children - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31491">#31491</a>,
thanks <a
href="https://github.com/candrepa1"><code>@​candrepa1</code></a>!</li>
<li>Core: Fix horizontal scrollbar covering part of the toolbar - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31704">#31704</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Fix moving log file across drives and projectRoot detection on
Windows - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32020">#32020</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Prevent interactions panel from flickering and showing
incorrect state - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32150">#32150</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Serve dynamic favicon based on testing module status - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31763">#31763</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Support container queries in addon panels - <a
href="https://redirect.github.com/storybookjs/storybook/pull/23261">#23261</a>,
thanks <a
href="https://github.com/neil-morrison44"><code>@​neil-morrison44</code></a>!</li>
<li>CSF Factories: Add parameters/globals types, <code>extend</code>
API, portable stories - <a
href="https://redirect.github.com/storybookjs/storybook/pull/30601">#30601</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve controls parameters - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31745">#31745</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve docs parameter types - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31736">#31736</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Only add preview annotations to definePreview in csf-factories
automigration - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31727">#31727</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>Docs: Update <code>@​storybook/icons</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32144">#32144</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Docs: Update <code>react-element-to-jsx-string</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31170">#31170</a>,
thanks <a
href="https://github.com/7rulnik"><code>@​7rulnik</code></a>!</li>
<li>Init: Exclude mdx stories when docs feature isn't selected during
init - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32142">#32142</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Maintenance: Add flag to toggle default automigrations - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32113">#32113</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>React Native Web: Simplify config by using vite-plugin-rnw - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32051">#32051</a>,
thanks <a
href="https://github.com/dannyhw"><code>@​dannyhw</code></a>!</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-a11y</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.1</h2>
<ul>
<li>CLI: Fix throwing in readonly environments - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31785">#31785</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
<li>Onboarding: Tweak referral wording in survey - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32185">#32185</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Send index stats on dev exit - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32168">#32168</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
<h2>9.1.0</h2>
<p>Storybook 9.1 is packed with new features and improvements to enhance
accessibility, streamline testing, and make your development workflow
even smoother!</p>
<p>🚀 Improved upgrade command with monorepo support for seamless
upgrades
🅰 Angular fixes for Tailwind 4, cache busting, and zoneless
compatibility
🧪 <code>sb.mock</code> API and Automocking: one-line module mocking to
simplify your testing workflow
🧪 Favicon shows test run status for quick visual feedback
⚛️ Easier configuration for React Native projects
🔥 Auto-abort play functions on HMR to avoid unwanted side effects
🏗️ Improved CSF factories API for type safe story definitions
♿️ A11y improvements across Storybook’s UI — addon panel, toolbar,
sidebar, mobile &amp; more
💯 Dozens more fixes and improvements based on community feedback!</p>
<!-- raw HTML omitted -->
<ul>
<li>A11y: Improved toolbar a11y by fixing semantics - <a
href="https://redirect.github.com/storybookjs/storybook/pull/28672">#28672</a>,
thanks <a
href="https://github.com/mehm8128"><code>@​mehm8128</code></a>!</li>
<li>Addon Vitest: Remove Optimize deps candidates due to Vitest warnings
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31809">#31809</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Angular: Bundle using TSup - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31690">#31690</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Angular: Prevent directory import in Angular builders - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32012">#32012</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Automigration: Await updateMainConfig in removeEssentials - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32140">#32140</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Builder-Vite: Fix logic related to setting allowedHosts when IP
address used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31472">#31472</a>,
thanks <a
href="https://github.com/JSMike"><code>@​JSMike</code></a>!</li>
<li>Controls: Improve the accessibility of the object control - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31581">#31581</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Abort play function on HMR - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31542">#31542</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Avoid pausing animations in non-Vitest Playwright environments
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/32123">#32123</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Cleanup of type following up v9 and small verbatimModuleSyntax
type fix - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31823">#31823</a>,
thanks <a
href="https://github.com/alcpereira"><code>@​alcpereira</code></a>!</li>
<li>Core: Fix aria-controls attribute on sidebar nodes to include all
children - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31491">#31491</a>,
thanks <a
href="https://github.com/candrepa1"><code>@​candrepa1</code></a>!</li>
<li>Core: Fix horizontal scrollbar covering part of the toolbar - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31704">#31704</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Fix moving log file across drives and projectRoot detection on
Windows - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32020">#32020</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Prevent interactions panel from flickering and showing
incorrect state - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32150">#32150</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Serve dynamic favicon based on testing module status - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31763">#31763</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Support container queries in addon panels - <a
href="https://redirect.github.com/storybookjs/storybook/pull/23261">#23261</a>,
thanks <a
href="https://github.com/neil-morrison44"><code>@​neil-morrison44</code></a>!</li>
<li>CSF Factories: Add parameters/globals types, <code>extend</code>
API, portable stories - <a
href="https://redirect.github.com/storybookjs/storybook/pull/30601">#30601</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve controls parameters - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31745">#31745</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve docs parameter types - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31736">#31736</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Only add preview annotations to definePreview in csf-factories
automigration - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31727">#31727</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>Docs: Update <code>@​storybook/icons</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32144">#32144</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Docs: Update <code>react-element-to-jsx-string</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31170">#31170</a>,
thanks <a
href="https://github.com/7rulnik"><code>@​7rulnik</code></a>!</li>
<li>Init: Exclude mdx stories when docs feature isn't selected during
init - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32142">#32142</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Maintenance: Add flag to toggle default automigrations - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32113">#32113</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>React Native Web: Simplify config by using vite-plugin-rnw - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32051">#32051</a>,
thanks <a
href="https://github.com/dannyhw"><code>@​dannyhw</code></a>!</li>
<li>Telemetry: Add automigration errors - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32103">#32103</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>Telemetry: Fix <code>project.json</code> for getAbsolutePath - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31510">#31510</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a6bb54c38a"><code>a6bb54c</code></a>
Bump version from &quot;9.1.0&quot; to &quot;9.1.1&quot; [skip ci]</li>
<li><a
href="073a65a835"><code>073a65a</code></a>
Bump version from &quot;9.1.0-beta.3&quot; to &quot;9.1.0&quot; [skip
ci]</li>
<li><a
href="d3746ae3c6"><code>d3746ae</code></a>
Bump version from &quot;9.1.0-beta.2&quot; to &quot;9.1.0-beta.3&quot;
[skip ci]</li>
<li><a
href="5ba8775588"><code>5ba8775</code></a>
Bump version from &quot;9.1.0-beta.1&quot; to &quot;9.1.0-beta.2&quot;
[skip ci]</li>
<li><a
href="c146de5a78"><code>c146de5</code></a>
Bump version from &quot;9.1.0-beta.0&quot; to &quot;9.1.0-beta.1&quot;
[skip ci]</li>
<li><a
href="b874fb2553"><code>b874fb2</code></a>
Bump version from &quot;9.1.0-alpha.10&quot; to &quot;9.1.0-beta.0&quot;
[skip ci]</li>
<li><a
href="25d6ece29a"><code>25d6ece</code></a>
Bump version from &quot;9.1.0-alpha.9&quot; to
&quot;9.1.0-alpha.10&quot; [skip ci]</li>
<li><a
href="8d1e92231f"><code>8d1e922</code></a>
Bump version from &quot;9.1.0-alpha.8&quot; to &quot;9.1.0-alpha.9&quot;
[skip ci]</li>
<li><a
href="e8e467e98b"><code>e8e467e</code></a>
Bump version from &quot;9.1.0-alpha.7&quot; to &quot;9.1.0-alpha.8&quot;
[skip ci]</li>
<li><a
href="34ca7ee3dc"><code>34ca7ee</code></a>
Bump version from &quot;9.1.0-alpha.6&quot; to &quot;9.1.0-alpha.7&quot;
[skip ci]</li>
<li>Additional commits viewable in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.1/code/addons/a11y">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-docs` from 9.0.17 to 9.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-docs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.1</h2>
<h2>9.1.1</h2>
<ul>
<li>CLI: Fix throwing in readonly environments - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31785">#31785</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
<li>Onboarding: Tweak referral wording in survey - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32185">#32185</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Send index stats on dev exit - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32168">#32168</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
<h2>v9.1.0</h2>
<h2>9.1.0</h2>
<p>Storybook 9.1 is packed with new features and improvements to enhance
accessibility, streamline testing, and make your development workflow
even smoother!</p>
<p>🚀 Improved upgrade command with monorepo support for seamless
upgrades
🅰 Angular fixes for Tailwind 4, cache busting, and zoneless
compatibility
🧪 <code>sb.mock</code> API and Automocking: one-line module mocking to
simplify your testing workflow
🧪 Favicon shows test run status for quick visual feedback
⚛️ Easier configuration for React Native projects
🔥 Auto-abort play functions on HMR to avoid unwanted side effects
🏗️ Improved CSF factories API for type safe story definitions
♿️ A11y improvements across Storybook’s UI — addon panel, toolbar,
sidebar, mobile &amp; more
💯 Dozens more fixes and improvements based on community feedback!</p>
<!-- raw HTML omitted -->
<ul>
<li>A11y: Improved toolbar a11y by fixing semantics - <a
href="https://redirect.github.com/storybookjs/storybook/pull/28672">#28672</a>,
thanks <a
href="https://github.com/mehm8128"><code>@​mehm8128</code></a>!</li>
<li>Addon Vitest: Remove Optimize deps candidates due to Vitest warnings
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31809">#31809</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Angular: Bundle using TSup - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31690">#31690</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Angular: Prevent directory import in Angular builders - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32012">#32012</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Automigration: Await updateMainConfig in removeEssentials - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32140">#32140</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Builder-Vite: Fix logic related to setting allowedHosts when IP
address used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31472">#31472</a>,
thanks <a
href="https://github.com/JSMike"><code>@​JSMike</code></a>!</li>
<li>Controls: Improve the accessibility of the object control - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31581">#31581</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Abort play function on HMR - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31542">#31542</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Avoid pausing animations in non-Vitest Playwright environments
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/32123">#32123</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Cleanup of type following up v9 and small verbatimModuleSyntax
type fix - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31823">#31823</a>,
thanks <a
href="https://github.com/alcpereira"><code>@​alcpereira</code></a>!</li>
<li>Core: Fix aria-controls attribute on sidebar nodes to include all
children - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31491">#31491</a>,
thanks <a
href="https://github.com/candrepa1"><code>@​candrepa1</code></a>!</li>
<li>Core: Fix horizontal scrollbar covering part of the toolbar - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31704">#31704</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Fix moving log file across drives and projectRoot detection on
Windows - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32020">#32020</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Prevent interactions panel from flickering and showing
incorrect state - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32150">#32150</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Serve dynamic favicon based on testing module status - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31763">#31763</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Support container queries in addon panels - <a
href="https://redirect.github.com/storybookjs/storybook/pull/23261">#23261</a>,
thanks <a
href="https://github.com/neil-morrison44"><code>@​neil-morrison44</code></a>!</li>
<li>CSF Factories: Add parameters/globals types, <code>extend</code>
API, portable stories - <a
href="https://redirect.github.com/storybookjs/storybook/pull/30601">#30601</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve controls parameters - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31745">#31745</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve docs parameter types - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31736">#31736</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Only add preview annotations to definePreview in csf-factories
automigration - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31727">#31727</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>Docs: Update <code>@​storybook/icons</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32144">#32144</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Docs: Update <code>react-element-to-jsx-string</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31170">#31170</a>,
thanks <a
href="https://github.com/7rulnik"><code>@​7rulnik</code></a>!</li>
<li>Init: Exclude mdx stories when docs feature isn't selected during
init - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32142">#32142</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Maintenance: Add flag to toggle default automigrations - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32113">#32113</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>React Native Web: Simplify config by using vite-plugin-rnw - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32051">#32051</a>,
thanks <a
href="https://github.com/dannyhw"><code>@​dannyhw</code></a>!</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-docs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.1</h2>
<ul>
<li>CLI: Fix throwing in readonly environments - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31785">#31785</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
<li>Onboarding: Tweak referral wording in survey - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32185">#32185</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Send index stats on dev exit - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32168">#32168</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
<h2>9.1.0</h2>
<p>Storybook 9.1 is packed with new features and improvements to enhance
accessibility, streamline testing, and make your development workflow
even smoother!</p>
<p>🚀 Improved upgrade command with monorepo support for seamless
upgrades
🅰 Angular fixes for Tailwind 4, cache busting, and zoneless
compatibility
🧪 <code>sb.mock</code> API and Automocking: one-line module mocking to
simplify your testing workflow
🧪 Favicon shows test run status for quick visual feedback
⚛️ Easier configuration for React Native projects
🔥 Auto-abort play functions on HMR to avoid unwanted side effects
🏗️ Improved CSF factories API for type safe story definitions
♿️ A11y improvements across Storybook’s UI — addon panel, toolbar,
sidebar, mobile &amp; more
💯 Dozens more fixes and improvements based on community feedback!</p>
<!-- raw HTML omitted -->
<ul>
<li>A11y: Improved toolbar a11y by fixing semantics - <a
href="https://redirect.github.com/storybookjs/storybook/pull/28672">#28672</a>,
thanks <a
href="https://github.com/mehm8128"><code>@​mehm8128</code></a>!</li>
<li>Addon Vitest: Remove Optimize deps candidates due to Vitest warnings
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31809">#31809</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Angular: Bundle using TSup - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31690">#31690</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Angular: Prevent directory import in Angular builders - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32012">#32012</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Automigration: Await updateMainConfig in removeEssentials - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32140">#32140</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Builder-Vite: Fix logic related to setting allowedHosts when IP
address used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31472">#31472</a>,
thanks <a
href="https://github.com/JSMike"><code>@​JSMike</code></a>!</li>
<li>Controls: Improve the accessibility of the object control - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31581">#31581</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Abort play function on HMR - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31542">#31542</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Avoid pausing animations in non-Vitest Playwright environments
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/32123">#32123</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Cleanup of type following up v9 and small verbatimModuleSyntax
type fix - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31823">#31823</a>,
thanks <a
href="https://github.com/alcpereira"><code>@​alcpereira</code></a>!</li>
<li>Core: Fix aria-controls attribute on sidebar nodes to include all
children - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31491">#31491</a>,
thanks <a
href="https://github.com/candrepa1"><code>@​candrepa1</code></a>!</li>
<li>Core: Fix horizontal scrollbar covering part of the toolbar - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31704">#31704</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Core: Fix moving log file across drives and projectRoot detection on
Windows - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32020">#32020</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Prevent interactions panel from flickering and showing
incorrect state - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32150">#32150</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Serve dynamic favicon based on testing module status - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31763">#31763</a>,
thanks <a
href="https://github.com/ghengeveld"><code>@​ghengeveld</code></a>!</li>
<li>Core: Support container queries in addon panels - <a
href="https://redirect.github.com/storybookjs/storybook/pull/23261">#23261</a>,
thanks <a
href="https://github.com/neil-morrison44"><code>@​neil-morrison44</code></a>!</li>
<li>CSF Factories: Add parameters/globals types, <code>extend</code>
API, portable stories - <a
href="https://redirect.github.com/storybookjs/storybook/pull/30601">#30601</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve controls parameters - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31745">#31745</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Improve docs parameter types - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31736">#31736</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>CSF: Only add preview annotations to definePreview in csf-factories
automigration - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31727">#31727</a>,
thanks <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>!</li>
<li>Docs: Update <code>@​storybook/icons</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32144">#32144</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Docs: Update <code>react-element-to-jsx-string</code> - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31170">#31170</a>,
thanks <a
href="https://github.com/7rulnik"><code>@​7rulnik</code></a>!</li>
<li>Init: Exclude mdx stories when docs feature isn't selected during
init - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32142">#32142</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Maintenance: Add flag to toggle default automigrations - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32113">#32113</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>React Native Web: Simplify config by using vite-plugin-rnw - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32051">#32051</a>,
thanks <a
href="https://github.com/dannyhw"><code>@​dannyhw</code></a>!</li>
<li>Telemetry: Add automigration errors - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32103">#32103</a>,
thanks <a
href="https://github.com/yannbf"><code>@​yannbf</code></a>!</li>
<li>Telemetry: Fix <code>project.json</code> for getAbsolutePath - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31510">#31510</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a6bb54c38a"><code>a6bb54c</code></a>
Bump version from &quot;9.1.0&quot; to &quot;9.1.1&quot; [skip ci]</li>
<li><a
href="073a65a835"><code>073a65a</code></a>
Bump version from &quot;9.1.0-beta.3&quot; to &quot;9.1.0&quot; [skip
ci]</li>
<li><a
href="d3746ae3c6"><code>d3746ae</code></a>
Bump version from &quot;9.1.0-beta.2&quot; to &quot;9.1.0-beta.3&quot;
[skip ci]</li>
<li><a
href="5ba8775588"><code>5ba8775</code></a>
Bump version from &quot;9.1.0-beta.1&quot; to &quot;9.1.0-beta.2&quot;
[skip ci]</li>
<li><a
href="c146de5a78"><code>c146de5</code></a>
Bump version from &quot;9.1.0-beta.0&quot; to &quot;9.1.0-beta.1&quot;
[skip ci]</li>
<li><a
href="f346049891"><code>f346049</code></a>
Docs: Update <code>@​storybook/icons</code></li>
<li><a
href="b874fb2553"><code>b874fb2</code></a>
Bump version from &quot;9.1.0-alpha.10&quot; to &quot;9.1.0-beta.0&quot;
[skip ci]</li>
<li><a
href="25d6ece29a"><code>25d6ece</code></a>
Bump version from &quot;9.1.0-alpha.9&quot; to
&quot;9.1.0-alpha.10&quot; [skip ci]</li>
<li><a
href="8d1e92231f"><code>8d1e922</code></a>
Bump version from &quot;9.1.0-alpha.8&quot; to &quot;9.1.0-alpha.9&quot;
[skip ci]</li>
<li><a
href="e8e467e98b"><code>e8e467e</code></a>
Bump version from &quot;9.1.0-alpha.7&quot; to &quot;9.1.0-alpha.8&quot;
[skip ci]</li>
<li>Additional commits viewable in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.1/code/addons/docs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-links` from 9.0.17 to 9.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-links</code>'s
releases</a>.</em></p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-links</code>'s
changelog</a>.</em></p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>Additional commits viewable in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.1/code/addons/links">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-onboarding` from 9.0.17 to 9.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-onboarding</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.1</h2>
<h2>9.1.1</h2>
<ul>
<li>CLI: Fix throwing in readonly environments - <a
href="https://redirect.g...

_Description has been truncated_

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-08-15 11:48:34 +00:00
Ubbe
03e3e2ea9a fix(frontend): remove console.log (#10649)
## Changes 🏗️

Not a helpful console log to land in production... We should disallow console logs altogether in the frontend code, but that is a separate, bigger PR...

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Go to the signup page
  - [x] Play with the password inputs
  - [x] Password is not printed in the console  

#### For configuration changes:

None
2025-08-15 10:57:49 +00:00
Nicholas Tindle
6bb6a081a2 feat(backend): add support for v0 by Vercel models and credentials (#10641)
## Summary
This PR adds support for v0 by Vercel's Model API to the AutoGPT
platform, enabling users to leverage v0's framework-aware AI models
optimized for React and Next.js code generation.

v0 provides OpenAI-compatible endpoints with models specifically trained
for frontend development, making them ideal for generating UI components
and web applications.

### Changes 🏗️

#### Backend Changes
- **Added v0 Provider**: Added `V0 = "v0"` to `ProviderName` enum in
`/backend/backend/integrations/providers.py`
- **Added v0 Models**: Added three v0 models to `LlmModel` enum in
`/backend/backend/blocks/llm.py`:
- `V0_1_5_MD = "v0-1.5-md"` - Everyday tasks and UI generation (128K
context, 64K output)
- `V0_1_5_LG = "v0-1.5-lg"` - Advanced reasoning (512K context, 64K
output)
  - `V0_1_0_MD = "v0-1.0-md"` - Legacy model (128K context, 64K output)
- **Implemented v0 Provider**: Added v0 support in `llm_call()` function
using an OpenAI-compatible client with base URL `https://api.v0.dev/v1` (see the sketch after this list)
- **Added Credentials Support**: Created `v0_credentials` in
`/backend/backend/integrations/credentials_store.py` with UUID
`c4e6d1a0-3b5f-4789-a8e2-9b123456789f`
- **Cost Configuration**: Added model costs in
`/backend/backend/data/block_cost_config.py`:
  - v0-1.5-md: 1 credit
  - v0-1.5-lg: 2 credits
  - v0-1.0-md: 1 credit
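
As noted in the provider bullet above, v0 is reached through an OpenAI-compatible client pointed at `https://api.v0.dev/v1`. A minimal sketch of such a call, not the actual `llm_call()` implementation (the prompt and key loading are illustrative):

```python
# Sketch: calling a v0 model through the OpenAI-compatible endpoint.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.v0.dev/v1",
    api_key=os.environ["V0_API_KEY"],
)
response = client.chat.completions.create(
    model="v0-1.5-md",  # everyday tasks and UI generation
    messages=[{"role": "user", "content": "Generate a Next.js button component."}],
)
print(response.choices[0].message.content)
```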

#### Configuration Changes
- **Settings**: Added `v0_api_key` field to `Secrets` class in
`/backend/backend/util/settings.py`
- **Environment Variables**: Added `V0_API_KEY=` to
`/backend/.env.default`

### Features
-  Full OpenAI-compatible API support
-  Tool/function calling support
-  JSON response format support
-  Framework-aware completions optimized for React/Next.js
-  Large context windows (up to 512K tokens)
-  Integrated with platform credit system

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run existing block tests to ensure no regressions: `poetry run
pytest backend/blocks/test/test_block.py`
  - [x] Verify AITextGeneratorBlock works with v0 models
  - [x] Confirm all model metadata is correctly configured
  - [x] Validate cost configuration is properly set up
  - [x] Check that v0_credentials has a valid UUID4

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
  - Added `V0_API_KEY=` to `/backend/.env.default`
- [x] `docker-compose.yml` is updated or already compatible with my
changes
  - No changes needed - uses existing environment variable patterns
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Configuration Requirements
Users need to:
1. Obtain a v0 API key from [v0.app](https://v0.app) (requires Premium
or Team plan)
2. Add `V0_API_KEY=your-api-key` to their `.env` file

### API Documentation
- v0 API Docs: https://v0.app/docs/api
- Model API Docs: https://v0.app/docs/api/model

### Testing
All existing tests pass with the new v0 integration:
```bash
poetry run pytest backend/blocks/test/test_block.py::test_available_blocks -k "AITextGeneratorBlock" -xvs
# Result: PASSED
```
2025-08-15 05:59:43 +00:00
Nicholas Tindle
df20b70f44 feat(blocks): Enrichlayer integration (#9924)
<!-- Clearly explain the need for these changes: -->

We want to support ~~proxy curl~~ Enrichlayer as an integration, and this is a baseline way to get there.

### Changes 🏗️
- Adds some subset of proxycurl blocks based on the API docs:
~~https://nubela.co/proxycurl/docs#people-api-person-profile-endpoint~~
https://enrichlayer.com/docs/pc/#people-api
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] manually test the blocks with an API key
  - [x] make sure the automated tests pass

---------

Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: majdyz <zamil@agpt.co>
2025-08-15 05:57:09 +00:00
Nicholas Tindle
21faf1b677 fix(backend): update and fix weekly summary email (#10343)
<!-- Clearly explain the need for these changes: -->

Our weekly summary emails are currently broken, hard-coded, and so ugly.

### Changes 🏗️
- Update the email template to look better
- Update the way we queue messages to work after other changes have occurred

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test by sending a self email with the cron job set to every
minute, so you can see what it would look like

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-14 15:39:13 +00:00
Zamil Majdy
b53c373a59 feat(docker): streamline Supabase to minimal essential services (#10639)
## Summary
Streamline Supabase stack from 13 services to 3 core services for faster
startup and lower resource usage while maintaining full API
compatibility.

## Changes Made

### Core Services (Always Running)
- **Kong**: API gateway providing standard `/auth/v1/` endpoints and API
key validation
- **Auth**: GoTrue authentication service for user management
- **Database**: PostgreSQL with pgvector support for data persistence

### Removed Services (9 services eliminated)
- `rest` (PostgREST API) - not needed for auth-only usage
- `realtime` (real-time subscriptions) - not used by platform
- `storage` (file storage) - platform uses separate file handling  
- `imgproxy` (image processing) - not required for core functionality
- `meta` (database metadata) - not needed for runtime operations
- `functions` (edge functions) - not utilized
- `analytics` (Logflare) - monitoring overhead not needed locally
- `vector` (log collection) - not required for basic operation
- `supavisor` (connection pooler) - direct DB access sufficient for
local dev

### Studio (Development Only)  
- Moved to `local` profile: `docker compose --profile local up`
- Available for database management during development
- Excluded from normal startup for cleaner production-like environment

## Benefits
- **80% faster startup**: 3 services vs 13 services  
- **Lower resource usage**: Significant reduction in memory/CPU
consumption
- **Simpler debugging**: Fewer moving parts, cleaner logs, easier
troubleshooting
- **Maintained compatibility**: All auth functionality preserved through
Kong

## Backwards Compatibility
 **No breaking changes**
- All existing auth endpoints (`/auth/v1/*`) work unchanged
- API key authentication (`anon`/`service_role`) preserved  
- CORS and security policies maintained via Kong
- No application code changes required

## Testing
- [x] Docker compose starts successfully with minimal services
- [x] Auth endpoints accessible via Kong at `/auth/v1/`
- [x] Database connectivity maintained
- [x] Studio accessible with `--profile local` flag
- [x] All existing environment variables preserved

## File Changes
- `autogpt_platform/docker-compose.yml`: Removed unnecessary Supabase
services, moved studio to local profile
- `autogpt_platform/db/docker/docker-compose.yml`: Cleaned up service
dependencies on analytics/vector

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-08-14 04:55:45 +00:00
Zamil Majdy
4bfeddc03d feat(platform/docker): add frontend service to docker-compose with env config improvements (#10615)
## Summary
This PR adds the frontend service to the Docker Compose configuration,
enabling `docker compose up` to run the complete stack, including the
frontend. It also implements comprehensive environment variable
improvements, unified .env file support, and fixes Docker networking
issues.

## Key Changes

### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and
`docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of dev server
for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services
(`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating
environment values

### 📁 Unified .env File Support
- **Frontend .env loading**: Automatically loads `.env` file during
Docker build and runtime
- **Backend .env loading**: Optional `.env` file support with fallback
to sensible defaults in `settings.py`
- **Single source of truth**: All `NEXT_PUBLIC_*` and API keys can be
defined in respective `.env` files
- **Docker integration**: Updated `.dockerignore` to include `.env`
files in build context
- **Git tracking**: Frontend and backend `.env` files are now trackable
(removed from gitignore)

### 🔧 Environment Variable Architecture
- **Dual environment strategy**: 
- Server-side code uses Docker service names
(`http://rest_server:8006/api`)
  - Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment
variables
- **Network compatibility**: Fixes connection issues between frontend
and backend containers
- **Shared backend variables**: Common environment variables (service
hosts, auth settings) centralized using YAML anchors

### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`)
with server-side priority
- **Updated all frontend code** to use shared environment helpers
instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through
helper functions
- **Settings.py improvements**: Better defaults for CORS origins and
optional .env file loading

### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend
service and shared backend env vars
- `frontend/Dockerfile` - Simplified build process to use .env files
directly
- `backend/settings.py` - Optional .env loading and better defaults
- `frontend/src/lib/env-config.ts` - New centralized environment
configuration
- `.dockerignore` - Allow .env files in build context
- `.gitignore` - Updated to allow frontend/backend .env files
- Multiple frontend files - Updated to use env helpers
- Updates to both auto installer scripts to work with the latest setup!

## Benefits
-  **Single command deployment**: `docker compose up` now runs
everything
-  **Better reliability**: Production build reduces memory usage and
crashes
-  **Network compatibility**: Proper container-to-container
communication
-  **Maintainable config**: Centralized environment variable management
with .env files
-  **Development friendly**: Works in both Docker and local development
-  **API key management**: Easy configuration through .env files for
all services
-  **No more manual env vars**: Frontend and backend automatically load
their respective .env files

## Testing
-  Verified Docker service communication works correctly
-  Frontend responds and serves content properly  
-  Environment variables are correctly resolved in both server and
client contexts
-  No connection errors after implementing service dependencies
-  .env file loading works correctly in both build and runtime phases
-  Backend services work with and without .env files present

### Checklist 📋

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-08-14 03:28:18 +00:00
Zamil Majdy
af7d56612d fix(logging): remove uvicorn log config to prevent startup deadlock (#10638)
## Problem
After applying the CloudLoggingHandler fix to use
BackgroundThreadTransport (#10634), scheduler pods entered a new
deadlock during startup when uvicorn reconfigures logging.

## Root Cause
When uvicorn starts with a log_config parameter, it calls
`logging.config.dictConfig()` which:
1. Calls `_clearExistingHandlers()` 
2. Which calls `logging.shutdown()`
3. Which tries to `flush()` all handlers including CloudLoggingHandler
4. CloudLoggingHandler with BackgroundThreadTransport tries to flush its
queue
5. The background worker thread tries to acquire the logging module lock
to check log levels
6. **Deadlock**: shutdown holds lock waiting for flush to complete,
worker thread needs lock to continue

## Thread Dump Evidence
From py-spy analysis of the stuck pod:
- **Thread 21 (FastAPI)**: Stuck in `flush()` waiting for background
thread to drain queue
- **Thread 13 (google.cloud.logging.Worker)**: Waiting for logging lock
in `isEnabledFor()`
- **Thread 1 (MainThread)**: Waiting for logging lock in `getLogger()`
during SQLAlchemy import
- **Threads 30, 31 (Sentry)**: Also waiting for logging lock

## Solution
Set `log_config=None` for all uvicorn servers. This prevents uvicorn
from calling `dictConfig()` and avoids the deadlock entirely.

**Trade-off**: Uvicorn will use its default logging configuration which
may produce duplicate log entries (one from uvicorn, one from the app),
but the application will start successfully without deadlocks.

## Changes
- Set `log_config=None` in all uvicorn.Config() calls (sketched below)
- Remove unused `generate_uvicorn_config` imports
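
A minimal sketch of the change, assuming a typical programmatic uvicorn setup (the app, host, and port are placeholders):

```python
# Sketch: start uvicorn without a log_config so dictConfig() is never called.
import uvicorn
from fastapi import FastAPI

app = FastAPI()  # stand-in for the real service app

config = uvicorn.Config(app, host="0.0.0.0", port=8006, log_config=None)
uvicorn.Server(config).run()
```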

## Testing
- [x] Verified scheduler pods can start and become healthy
- [x] Health checks respond properly  
- [x] No deadlocks during startup
- [x] Application logs still appear (though may be duplicated)

## Related Issues
- Fixes the startup deadlock introduced after #10634
2025-08-14 05:31:47 +07:00
Dmitry
0dd30e275c docs(blocks): Add AI/ML API integration guide and update LLM headers (#10402)
### Summary
Added a new documentation page and images for integrating AI/ML API with
AutoGPT, including step-by-step instructions. Updated LLM block to send
additional headers for requests to aimlapi.com. Improved provider
listing in index.md and added the new guide to mkdocs navigation. Builds
on and extends the integration work from
https://github.com/Significant-Gravitas/AutoGPT/pull/9996


### Changes 🏗️

This PR introduces official support and documentation for using **AI/ML
API** with the **AutoGPT platform**:

* 📄 **Added a new documentation page** `platform/aimlapi.md` with a
detailed step-by-step integration guide.
* 🖼️ **Added 12+ reference images** to `docs/content/imgs/aimlapi/` for
clear visual walkthrough.
* 🧠 **Updated the LLM block** (`llm.py`) to send extra headers
(`X-Project`, `X-Title`, `Referer`) in requests to `aimlapi.com` for
analytics and source attribution (see the sketch after this list).
* 📚 **Improved provider listing** in `index.md` — added section about
AI/ML API models and benefits.
* 🧭 **Added the new guide to the mkdocs navigation** via `mkdocs.yml`.
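
A rough sketch of attaching the extra headers with an OpenAI-compatible client (the `/v1` path and header values are illustrative assumptions; the real logic lives in `llm.py`):

```python
# Sketch: attribution headers sent with every request to aimlapi.com.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",  # assumed endpoint path
    api_key="<AIML_API_KEY>",
    default_headers={
        "X-Project": "AutoGPT",        # illustrative values
        "X-Title": "AutoGPT Platform",
        "Referer": "https://agpt.co",
    },
)
```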

---

### Checklist 📋

#### For code changes:

* [x] I have clearly listed my changes in the PR description
* [x] I have made a test plan
* [x] I have tested my changes according to the test plan:

  * [x] Successfully authenticated against `api.aimlapi.com`
  * [x] Verified requests use correct headers
* [x] Confirmed `AI Text Generator` block returns completions for all
supported models
* [x] End-to-end tested: created, saved, and ran agent with AI/ML API
successfully
  * [x] Verified outputs render correctly in the Output panel


No breaking changes introduced. Let me know if you'd like this guide
cross-referenced from other onboarding pages. 

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-13 18:25:58 +00:00
Ubbe
a135f09336 feat(frontend): update settings form (#10628)
## Changes 🏗️

<img width="800" height="687" alt="Screenshot 2025-08-12 at 15 52 41"
src="https://github.com/user-attachments/assets/0d2d70b8-e727-428b-915e-d4c108ab7245"
/>

<img width="800" height="772" alt="Screenshot 2025-08-12 at 15 52 53"
src="https://github.com/user-attachments/assets/b9790616-3754-455e-b8f6-58cd7f6b5a18"
/>

Update the Account Settings (`profile/settings`) form so that:
- it uses the new Design System components
- it is split into 2 forms ( update email & notifications )
- the change password inputs have been removed; instead we link to the `/reset-password` page
- uses a normal API route and client query to update the email

This might also fix an error we are seeing when updating email preferences on dev. My guess is it was failing because it previously used a server action + Supabase and didn't have access to the auth cookies 🍪

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Navigate to `/profile/settings`
  - [x] Can update the email
  - [x] Can change notification preferences
  - [x] New E2E tests pass on the CI and make sense   

### For configuration changes:

None
2025-08-13 14:58:55 +00:00
Bently
2d436caa84 fix(backend/AM): Fix AutoMod api key issue (#10635)
### Changes 🏗️
Calls to the moderation API now strip whitespace from the API key before
including it in the 'X-API-Key' header, preventing authentication issues
due to accidental leading or trailing spaces.
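
A minimal illustration of the change (the function and endpoint are hypothetical; only the `.strip()` on the header value reflects the fix):

```python
import requests

def call_moderation_api(url: str, api_key: str, payload: dict) -> requests.Response:
    # Strip accidental leading/trailing whitespace before building the header.
    return requests.post(
        url,
        json=payload,
        headers={"X-API-Key": api_key.strip()},
        timeout=30,
    )
```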

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Setup and run the platform with moderation and test it works
2025-08-13 13:47:40 +00:00
Zamil Majdy
34dd218a91 fix(backend): resolve CloudLoggingHandler deadlock causing scheduler hangs (#10634)
## 🚨 Critical Deadlock Fix: Scheduler Pod Stuck for 3+ Hours

This PR resolves a critical production deadlock where scheduler pods
become completely unresponsive due to a CloudLoggingHandler locking
issue.

## 📋 Incident Summary

**Affected Pod**: `autogpt-scheduler-server-6d7b89c4f9-mqp59`
- **Duration**: Stuck for 3+ hours (still ongoing)
- **Symptoms**: Health checks failing, appears completely dead
- **Impact**: No new job executions, system appears down
- **Root Cause**: CloudLoggingHandler deadlock with gRPC timeout failure

## 🔍 Detailed Incident Analysis

### The Deadlock Chain
1. **Thread 58 (APScheduler Worker)**: 
   - Completed job successfully
   - Called `logger.info("Job executed successfully")`
   - CloudLoggingHandler acquired lock at `logging/__init__.py:976`
   - Made gRPC call to Google Cloud Logging
   - **Got stuck in TCP black hole for 3+ hours**

2. **Thread 26 (FastAPI Health Check)**:
   - Tried to log health check response
   - **Blocked at `logging/__init__.py:927` waiting for same lock**
   - Health check never completes → Kubernetes thinks pod is dead

3. **All Other Threads**: Similarly blocked on any logging attempt

### Why gRPC Timeout Failed
The gRPC call had a 60-second timeout but has been stuck for 10,775+
seconds because:
- **TCP Black Hole**: Network packets silently dropped (firewall/load
balancer timeout)
- **No Socket Timeout**: Python default is `None` (infinite wait)
- **TCP Keepalive Disabled**: Dead connections hang forever  
- **Kernel-Level Block**: gRPC timeout can't interrupt `socket.recv()`
syscall

### Evidence from Thread Dump
```python
Thread 58: "ThreadPoolExecutor-0_1" 
  _blocking (grpc/_channel.py:1162)
    timeout: 60                    # ← Should have timed out
    deadline: 1755061203          # ← Expired 3 hours ago!
  emit (logging_v2/handlers/handlers.py:225)  # ← HOLDING LOCK
  handle (logging/__init__.py:978)           # ← After acquire()

Thread 26: "Thread-4 (__start_fastapi)"
  acquire (logging/__init__.py:927)          # ← BLOCKED waiting for lock
    self: <CloudLoggingHandler at 0x7a657280d550>  # ← Same instance!
```

## 🔧 The Fix

### Primary Solution
Replace **blocking** `SyncTransport` with **non-blocking**
`BackgroundThreadTransport`:

```python
# BEFORE (Dangerous - blocks while holding lock)
transport=SyncTransport,

# AFTER (Safe - queues and returns immediately) 
transport=BackgroundThreadTransport,
```
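
A minimal sketch of wiring the handler with the non-blocking transport, assuming the standard google-cloud-logging setup (the 30-second socket timeout mirrors the hardening described below):

```python
# Sketch: non-blocking Cloud Logging handler plus a global socket timeout.
import logging
import socket

from google.cloud.logging import Client
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import BackgroundThreadTransport

socket.setdefaulttimeout(30)  # never wait forever on a dead connection

handler = CloudLoggingHandler(Client(), transport=BackgroundThreadTransport)
logging.getLogger().addHandler(handler)  # emit() now just queues and returns
```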

### Why BackgroundThreadTransport Solves It
1. **Non-blocking**: `emit()` returns immediately after queuing
2. **Lock Released**: No network I/O while holding the logging lock
3. **Isolated Failures**: Background thread hangs don't affect main app
4. **Better Performance**: Built-in batching and retry logic

### Additional Hardening
- **Socket Timeout**: 30-second global timeout prevents infinite hangs
- **gRPC Keepalive**: Detects and closes dead connections faster
- **Comprehensive Logging**: Comments explain the deadlock prevention

## 🧪 Technical Validation

### Before (SyncTransport)
```
log.info("message") 
  ↓
acquire_lock() 
  ↓  
gRPC_call()  HANGS FOR HOURS
  ↓
[DEADLOCK - lock never released]
```

### After (BackgroundThreadTransport)  
```
log.info("message")
  ↓
acquire_lock() 
  ↓
queue_message()  Instant
  ↓
release_lock()  Immediate
  ↓
[Background thread handles gRPC separately]
```

## 🚀 Impact & Benefits

**Immediate Impact**:
-  Prevents CloudLoggingHandler deadlocks
-  Health checks respond normally  
-  System remains observable during network issues
-  Scheduler can continue processing jobs

**Long-term Benefits**:
- 📈 Better logging performance (batching + async)
- 🛡️ Resilient to network partitions and timeouts
- 🔍 Maintained observability during failures  
-  No blocking I/O on critical application threads

## 📊 Files Changed
- `autogpt_libs/autogpt_libs/logging/config.py`: Transport change +
socket hardening

## 🧪 Test Plan
- [x] Validate BackgroundThreadTransport import works
- [x] Confirm socket timeout configuration applies
- [x] Verify gRPC keepalive environment variables set
- [ ] Deploy to staging and verify no deadlocks under load
- [ ] Monitor Cloud Logging delivery remains reliable

## 🔍 Monitoring After Deploy
- Watch for any logging delivery delays (expected: minimal)
- Confirm health checks respond consistently  
- Verify no more scheduler "hanging" incidents
- Monitor gRPC connection patterns in Cloud Logging metrics

## 🎯 Risk Assessment
- **Risk**: Very Low - BackgroundThreadTransport is the recommended
approach
- **Rollback**: Simple revert if any issues observed
- **Testing**: Extensively used in production Google Cloud services

---

**This fixes a critical production stability issue affecting scheduler
reliability and system observability.**

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-13 13:23:09 +00:00
Ubbe
41f500790f fix(marketplace): loading state (#10629)
## Changes 🏗️

Use a skeleton for the marketplace loading state, visually representing how the page will look. It looks a bit more stylish than the previous `Loading...` text.

### Before

<img width="800" height="774" alt="Screenshot 2025-08-12 at 16 01 22"
src="https://github.com/user-attachments/assets/29e44a1a-2089-468c-a253-3a6b763ada5a"
/>

### After

<img width="800" height="761" alt="Screenshot 2025-08-12 at 16 01 01"
src="https://github.com/user-attachments/assets/5ad362ae-df1d-4a1b-90ae-9349a81a4d75"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Marketplace loading state looks good across screen sizes


### For configuration changes:

None
2025-08-13 16:55:23 +04:00
Nicholas Tindle
793de77e76 ref(backend): update Gmail blocks to unify architecture and improve email handling (#10588)
## Summary
This PR refactors all Gmail blocks to share a common base class
(`GmailBase`) and adds several improvements to email handling, including
proper HTML content support, async API calls, and fixing the
78-character line wrapping issue for plain text emails.

## Changes

### Architecture Improvements
- **Unified base class**: Created `GmailBase` abstract class that
consolidates common functionality across all Gmail blocks
- **Async API calls**: Converted all Gmail API calls to use
`asyncio.to_thread` for better performance and non-blocking operations
- **Code deduplication**: Moved shared methods like `_build_service`,
`_get_email_body`, `_get_attachments`, and `_get_label_id` to the base
class

### Email Content Handling
- **Smart content type detection**: Added automatic detection of HTML vs
plain text content
- **Fix 78-char line wrapping**: Plain text emails now use a no-wrap
policy (`max_line_length=0`) to prevent Gmail's default 78-character
hard line wrapping
- **Content type parameter**: Added optional `content_type` field to
Send, Draft, Reply, and Forward blocks allowing manual override ("auto",
"plain", or "html")
- **Proper MIME handling**: Created `_make_mime_text` helper function to
properly configure MIME types and policies

### New Features
- **Gmail Forward Block**: Added new `GmailForwardBlock` for forwarding
emails with proper thread preservation
- **Reply improvements**: Reply block now properly reads the original
email content when replying

### Bug Fixes
- Fixed issue where reply block wasn't reading the email it was replying
to
- Fixed attachment handling in multipart messages
- Improved error handling for base64 decoding

## Technical Details

The refactoring introduces:
- `NO_WRAP_POLICY = SMTP.clone(max_line_length=0)` to prevent line
wrapping in plain text emails
- UTF-8 charset support for proper Unicode/emoji handling
- Consistent async patterns using `asyncio.to_thread` for all Gmail API
calls
- Proper HTML to text conversion using html2text library when available
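
A simplified sketch of what the `_make_mime_text` helper might look like (the HTML detection heuristic here is an illustrative assumption):

```python
from email.mime.text import MIMEText
from email.policy import SMTP

# Clone the SMTP policy with no line-length limit so Gmail doesn't hard-wrap
# plain text at 78 characters.
NO_WRAP_POLICY = SMTP.clone(max_line_length=0)

def _make_mime_text(body: str, content_type: str = "auto") -> MIMEText:
    if content_type == "auto":
        # Naive detection: treat anything that looks like markup as HTML.
        content_type = "html" if "<" in body and ">" in body else "plain"
    if content_type == "html":
        return MIMEText(body, "html", "utf-8")
    return MIMEText(body, "plain", "utf-8", policy=NO_WRAP_POLICY)
```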

## Testing
All existing tests pass. The changes maintain backward compatibility
while adding new optional parameters.

## Breaking Changes
None - all changes are backward compatible. The new `content_type`
parameter is optional and defaults to "auto" detection.

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-13 02:17:10 +00:00
Zamil Majdy
a2059c6023 refactor(backend): consolidate LaunchDarkly feature flag management (#10632)
This PR consolidates LaunchDarkly feature flag management by moving it
from autogpt_libs to backend and fixing several issues with boolean
handling and configuration management.

### Changes 🏗️

**Code Structure:**
- Move LaunchDarkly client from `autogpt_libs/feature_flag` to
`backend/util/feature_flag.py`
- Delete redundant `config.py` file and merge LaunchDarkly settings into
`backend/util/settings.py`
- Update all imports throughout the codebase to use
`backend.util.feature_flag`
- Move test file to `backend/util/feature_flag_test.py`

**Bug Fixes:**
- Fix `is_feature_enabled` function to properly return boolean values
instead of arbitrary objects that always evaluated to `True` (see the sketch after this list)
- Add proper async/await handling for all `is_feature_enabled` calls
- Add better error handling when LaunchDarkly client is not initialized
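
A rough sketch of the boolean handling described in the first bullet, assuming the LaunchDarkly server SDK (`ldclient`); the function shape is illustrative:

```python
import ldclient
from ldclient import Context

async def is_feature_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    client = ldclient.get()
    if not client.is_initialized():
        # Fall back to the caller-provided default when the client isn't ready.
        return default
    value = client.variation(flag_key, Context.builder(user_id).build(), default)
    # Coerce non-boolean flag values so callers always get a real bool.
    return value if isinstance(value, bool) else bool(value)
```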

**Performance & Architecture:**
- Load Settings at module level instead of creating new instances inside
functions
- Remove unnecessary `sdk_key` parameter from
`initialize_launchdarkly()` function
- Simplify initialization by using centralized settings management

**Configuration:**
- Add `launch_darkly_sdk_key` field to `Secrets` class in settings.py
with proper validation alias
- Remove environment variable fallback in favor of centralized settings

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All existing feature flag tests pass (6/6 tests passing)
  - [x] LaunchDarkly initialization works correctly with settings
  - [x] Boolean feature flags return correct values instead of objects
  - [x] Non-boolean flag values are properly handled with warnings
- [x] Async/await calls work correctly in AutoMod and activity status
generator
  - [x] Code formatting and imports are correct

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Configuration Changes:**
- LaunchDarkly SDK key is now managed through the centralized Settings
system instead of a separate config file
- Uses existing `LAUNCH_DARKLY_SDK_KEY` environment variable (no changes
needed to env files)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-13 01:15:10 +00:00
Nicholas Tindle
b9c3920227 fix(backend): Support dynamic values_#_* fields in CreateDictionaryBlock (#10587)
## Summary

Fixed Smart Decision Maker's function signature generation to properly
handle dynamic fields (e.g., `values_#_*`, `items_$_*`) when connecting
to any block as a tool.

### Context

When Smart Decision Maker calls other blocks as tools, it needs to
generate OpenAI-compatible function signatures. Previously, when
connected to blocks via dynamic fields (which get merged by the executor
at runtime), the signature generation would fail because blocks don't
inherently know about these dynamic field patterns.

### Changes 🏗️

- **Modified
`SmartDecisionMakerBlock._create_block_function_signature()`** to detect
and handle dynamic fields (see the sketch after this list):
- Detects fields containing `_#_` (dict merge), `_$_` (list merge), or
`_@_` (object merge)
- Provides generic string schema for dynamic fields (OpenAI API
compatible)
  - Falls back gracefully for unknown fields
- **Added comprehensive tests** for dynamic field handling with both
dictionary and list patterns
- **No changes needed to individual blocks** - this solution works
universally
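
A rough sketch of the detection described above (names and fallback behavior are illustrative, not the actual implementation):

```python
DYNAMIC_FIELD_MARKERS = ("_#_", "_$_", "_@_")  # dict, list, and object merge patterns

def tool_param_schema(block_properties: dict, field_name: str) -> dict:
    """Return an OpenAI-compatible JSON-schema fragment for one tool parameter."""
    if any(marker in field_name for marker in DYNAMIC_FIELD_MARKERS):
        # Dynamic fields are merged by the executor at runtime, so expose them
        # to the model as plain strings.
        return {"type": "string", "description": f"Value for dynamic field {field_name}"}
    # Fall back gracefully for unknown fields.
    return block_properties.get(field_name, {"type": "string"})
```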

### Why This Approach

Instead of modifying every block to handle dynamic fields (original PR
approach), we handle it centrally in Smart Decision Maker where the
function signatures are generated. This is cleaner and more
maintainable.

### Test Plan 📋

- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic dict fields (`_#_`)
- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic list fields (`_$_`)
- [x] Verified Smart Decision Maker can successfully call blocks like
CreateDictionaryBlock via dynamic connections
- [x] All existing Smart Decision Maker tests pass
- [x] Linting and formatting pass

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-12 22:59:56 +00:00
Zamil Majdy
abba10b649 feat(block): Remove paralel tool-call system prompting (#10627)
We're forcing this note onto the end of the SDM block's system prompt: "Only provide EXACTLY one function call; multiple tool calls are strictly prohibited." This is being interpreted by GPT-5 as "Only call one tool per task," which results in many agent runs that only use a tool once (i.e., useless, low-effort answers).

### Changes 🏗️

Remove parallel tool-call system prompting entirely.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] automated tests.
2025-08-12 12:46:52 +00:00
Zamil Majdy
6c34790b42 Revert "feat(platform): add py-spy profiling support"
This reverts commit c168277b1d.
2025-08-12 13:53:58 +07:00
Zamil Majdy
c168277b1d feat(platform): add py-spy profiling support
Add py-spy for production-safe Python profiling across all backend services:
- Add py-spy dependency to pyproject.toml
- Grant SYS_PTRACE capability to Docker services for profiling access
- Enable low-overhead performance monitoring in development and production

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-12 13:49:01 +07:00
Zamil Majdy
89eb5d1189 feat(feature-flag): add LaunchDarkly user context and metadata support (#10595)
## Summary

Enable LaunchDarkly feature flags to use rich user context and metadata
for advanced targeting, including user segments, account age, email
domains, and custom attributes. This unlocks LaunchDarkly's powerful
targeting capabilities beyond simple user ID checks.

## Problem

LaunchDarkly feature flags were only receiving basic user IDs,
preventing the use of:
- **Segment-based targeting** (e.g., "employees", "beta users", "new
accounts")
- **Contextual rules** (e.g., account age, email domain, custom
metadata)
- **Advanced LaunchDarkly features** like percentage rollouts by user
attributes

This limited feature flag flexibility and required manual user ID
management for targeting.

## Solution

### 🎯 **LaunchDarkly Context Enhancement**
- **Rich user context**: Send user metadata, segments, account age,
email domain to LaunchDarkly
- **Automatic segmentation**: Users automatically categorized as
"employee", "new_user", "established_user" etc.
- **Custom metadata support**: Any user metadata becomes available for
LaunchDarkly targeting
- **24-hour caching**: Efficient user context retrieval with TTL cache
to reduce database calls

### 📊 **User Context Data**
```python
# Before: Only user ID
context = Context.builder("user-123").build()

# After: Full context with targeting data
context = {
    "email": "user@agpt.co",
    "created_at": "2023-01-15T10:00:00Z",
    "segments": ["employee", "established_user"],
    "email_domain": "agpt.co", 
    "account_age_days": 365,
    "custom_role": "admin"
}
```

### 🏗️ **Required Infrastructure Changes**

To support proper LaunchDarkly serialization, we needed to implement
clean application models:

#### **Application-Layer User Model**
- Created snake_case User model (`created_at`, `email_verified`) for
proper JSON serialization
- LaunchDarkly expects consistent field naming - camelCase Prisma
objects caused validation errors
- Added `User.from_db()` converter to safely transform database objects

#### **HTTP Client Reliability**  
- Fixed HTTP 4xx retry issue that was causing unnecessary load
- Added layer validation to prevent database objects leaking to external
services

#### **Type Safety**
- Eliminated `Any` types and defensive coding patterns
- Proper typing enables better IDE support and catches errors early

## Technical Implementation

### **Core LaunchDarkly Enhancement**
```python
# autogpt_libs/feature_flag/client.py
@async_ttl_cache(maxsize=1000, ttl_seconds=86400)  # 24h cache
async def _fetch_user_context_data(user_id: str) -> dict[str, Any]:
    user = await get_user_by_id(user_id)
    return _build_launchdarkly_context(user)

def _build_launchdarkly_context(user: User) -> dict[str, Any]:
    return {
        "email": user.email,
        "created_at": user.created_at.isoformat(),  # snake_case for serialization
        "segments": determine_user_segments(user),
        "account_age_days": calculate_account_age(user),
        # ... more context data
    }
```

### **User Segmentation Logic**
- **Role-based**: `admin`, `user`, `system` segments
- **Domain-based**: `employee` for @agpt.co emails  
- **Account age**: `new_user` (<7 days), `recent_user` (7-30 days),
`established_user` (>30 days)
- **Custom metadata**: Any user metadata becomes available for targeting
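
A minimal sketch of segmentation logic along these lines (the `role`
attribute and a timezone-aware `created_at` are assumptions; the real
helper may differ):

```python
from datetime import datetime, timezone

def determine_user_segments(user) -> list[str]:
    """Bucket a user into LaunchDarkly targeting segments."""
    segments = [user.role]  # e.g. "admin", "user", "system" (assumed attribute)

    if user.email.endswith("@agpt.co"):
        segments.append("employee")

    # Assumes user.created_at is timezone-aware.
    age_days = (datetime.now(timezone.utc) - user.created_at).days
    if age_days < 7:
        segments.append("new_user")
    elif age_days <= 30:
        segments.append("recent_user")
    else:
        segments.append("established_user")

    return segments
```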

### **Infrastructure Updates**
- `backend/data/model.py`: Application User model with proper
serialization
- `backend/util/service.py`: HTTP client improvements and layer
validation
- Multiple files: Migration to use application models for consistency

## LaunchDarkly Usage Examples

With this enhancement, you can now create LaunchDarkly rules like:

```yaml
# Target employees only
- variation: true
  targets:
    - values: ["employee"]
      contextKind: "user"
      attribute: "segments"

# Target new users for gradual rollout  
- variation: true
  rollout:
    variations:
      - variation: true
        weight: 25000  # 25% of new users
    contextKind: "user" 
    bucketBy: "segments"
    filters:
      - attribute: "segments"
        op: "contains"
        values: ["new_user"]
```

## Performance & Caching

- **24-hour TTL cache**: Dramatically reduces database calls for user
context
- **Graceful fallbacks**: Simple user ID context if database unavailable
- **Efficient caching**: 1000 entry LRU cache with automatic TTL
expiration
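
For illustration, one way an `async_ttl_cache` decorator like the one
used above could be implemented (a sketch, not the repository's actual
helper; arguments must be hashable):

```python
import asyncio
import time
from collections import OrderedDict
from functools import wraps

def async_ttl_cache(maxsize: int = 1000, ttl_seconds: int = 86400):
    """LRU cache for async functions with per-entry TTL expiration."""
    def decorator(fn):
        cache: OrderedDict = OrderedDict()  # key -> (expires_at, value)
        lock = asyncio.Lock()

        @wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            async with lock:
                if key in cache:
                    expires_at, value = cache[key]
                    if now < expires_at:
                        cache.move_to_end(key)  # refresh LRU position
                        return value
                    del cache[key]  # expired entry
            value = await fn(*args, **kwargs)  # cache miss: compute outside the lock
            async with lock:
                cache[key] = (now + ttl_seconds, value)
                cache.move_to_end(key)
                while len(cache) > maxsize:
                    cache.popitem(last=False)  # evict least recently used
            return value

        return wrapper
    return decorator
```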

## Testing

- [x] LaunchDarkly context includes all expected user attributes
- [x] Segmentation logic correctly categorizes users
- [x] 24-hour cache reduces database load
- [x] Fallback to simple context works when database unavailable
- [x] All existing feature flag functionality preserved
- [x] HTTP retry improvements work correctly

## Breaking Changes

 **No external API changes** - all existing feature flag usage
continues to work

⚠️ **Internal changes only**:
- `get_user_by_id()` returns application User model instead of Prisma
model
- Test utilities need to import User from `backend.data.model`

## Impact

🎯 **Product Impact**:
- **Advanced targeting**: Product teams can now use sophisticated
LaunchDarkly rules
- **Better user experience**: Gradual rollouts, A/B testing, and
segment-based features
- **Operational efficiency**: Reduced need for manual user ID management

🚀 **Performance Impact**:
- **Reduced database load**: 24-hour caching minimizes repeated user
context queries
- **Improved reliability**: Fixed HTTP retry inefficiencies
- **Better monitoring**: Cleaner logs without 4xx retry noise

---

**Primary goal**: Enable rich LaunchDarkly targeting with user context
and segments
**Infrastructure changes**: Required for proper serialization and
reliability

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-12 05:25:56 +00:00
Abhimanyu Yadav
e13e0d4376 test(frontend): add e2e test for profile form page (#10596)
This PR has added end-to-end tests for the profile form page. These
tests include:

- Redirects to the login page when the user is not authenticated.
- Can save profile changes successfully.
- Can cancel profile changes (skipped because we need to fix the form
for this test).

### Changes 🏗️
- Added test-id's inside the ProfileInfoForm.
- Created a page object for the profile form page.
- Added a test for this page in `profile-form.spec.ts`.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests are working perfectly locally
2025-08-11 12:38:00 +00:00
Lluis Agusti
f4a732373b fix(frontend): remove state limits from agent activity dropdown 2025-08-11 12:20:21 +02:00
Bently
28d85ad61c feat(backend/AM): Integrate AutoMod content moderation (#10539)
Copy of [feat(backend/AM): Integrate AutoMod content moderation - By
Bentlybro - PR
#10490](https://github.com/Significant-Gravitas/AutoGPT/pull/10490)
because I messed up the original 🤦

Adds AutoMod input and output moderation to the execution flow.
Introduces a new AutoMod manager and models, updates settings for
moderation configuration, and modifies execution result handling to
support moderation-cleared data. Moderation failures now clear sensitive
data and mark executions as failed.

<img width="921" height="816" alt="image"
src="https://github.com/user-attachments/assets/65c0fee8-d652-42bc-9553-ff507bc067c5"
/>


### Changes 🏗️

I have made some small changes to
``autogpt_platform\backend\backend\executor\manager.py`` to send the
needed info to the AutoMod system, which collects the data, combines
it, and makes the API call to AM; based on AM's reply, the block is
either allowed to run or not.

I also had to make small changes to
``autogpt_platform\backend\backend\data\execution.py`` to add checks
that allow me to clear the blocks' content if it was flagged.

I am working on finalizing the AM repo, which will then be made public.

To note: we will want to put this behind LaunchDarkly first for testing
with the team before we roll it out any further.
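
For context, a rough sketch of how such a moderation call could gate
execution (the endpoint path, payload shape, and `ModerationResult`
model are assumptions, not the actual AutoMod API):

```python
import httpx
from pydantic import BaseModel

class ModerationResult(BaseModel):
    allowed: bool
    reason: str = ""

async def moderate_content(
    content: str,
    api_url: str,             # settings.automod_api_url
    api_key: str,             # settings.automod_api_key
    timeout: int = 30,        # settings.automod_timeout
    fail_open: bool = False,  # settings.automod_fail_open
) -> ModerationResult:
    """Ask the AutoMod service whether this block input/output may proceed."""
    try:
        async with httpx.AsyncClient(timeout=timeout) as client:
            resp = await client.post(
                f"{api_url}/moderate",  # hypothetical endpoint
                json={"content": content},
                headers={"Authorization": f"Bearer {api_key}"},
            )
            resp.raise_for_status()
            return ModerationResult(**resp.json())
    except httpx.HTTPError as exc:
        # fail_open decides whether an AutoMod outage blocks execution.
        return ModerationResult(allowed=fail_open, reason=f"AutoMod unavailable: {exc}")
```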

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Setup and run the platform with ``automod_enabled`` set to False
and it works normally
- [x] Setup and run the platform with ``automod_enabled`` set to True,
set the AM URL and API Key and test it runs safe blocks normally
- [x] Test AM with content that would trigger it to flag and watch it
stop and clear all the blocks outputs

Message @Bentlybro for the URL and an API key to AM for local testing!

## Changes made to Settings.py 

I have added a few new options to the settings.py for AutoMod Config!

```
    # AutoMod configuration
    automod_enabled: bool = Field(
        default=False,
        description="Whether AutoMod content moderation is enabled",
    )
    automod_api_url: str = Field(
        default="",
        description="AutoMod API base URL - Make sure it ends in /api",
    )
    automod_timeout: int = Field(
        default=30,
        description="Timeout in seconds for AutoMod API requests",
    )
    automod_retry_attempts: int = Field(
        default=3,
        description="Number of retry attempts for AutoMod API requests",
    )
    automod_retry_delay: float = Field(
        default=1.0,
        description="Delay between retries for AutoMod API requests in seconds",
    )
    automod_fail_open: bool = Field(
        default=False,
        description="If True, allow execution to continue if AutoMod fails",
    )
    automod_moderate_inputs: bool = Field(
        default=True,
        description="Whether to moderate block inputs",
    )
    automod_moderate_outputs: bool = Field(
        default=True,
        description="Whether to moderate block outputs",
    )
```
and
```
automod_api_key: str = Field(default="", description="AutoMod API key")
```

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-11 09:39:28 +00:00
Zamil Majdy
d4b5508ed1 fix(backend): resolve scheduler deadlock and improve health checks (#10589)
## Summary
Fix critical deadlock issue where scheduler pods would freeze completely
and become unresponsive to health checks, causing pod restarts and stuck
QUEUED executions.

## Root Cause Analysis
The scheduler was using `BlockingScheduler` which blocked the main
thread, and when concurrent jobs deadlocked in the async event loop, the
entire process would freeze - unable to respond to health checks or
process any requests.

From crash analysis:
- At 01:18:00, two jobs started executing concurrently
- At 01:18:01.482, last successful health check  
- Process completely froze - no more logs until pod was killed at
01:18:46
- Execution `8174c459-c975-4308-bc01-331ba67f26ab` was created in DB but
never published to RabbitMQ

## Changes Made

### Core Deadlock Fix
- **Switch from BlockingScheduler to BackgroundScheduler**: Prevents
main thread blocking, allows health checks to work even if scheduler
jobs deadlock
- **Make all health_check methods async**: Makes health checks
completely independent of thread pools and more resilient to blocking
operations
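
A minimal sketch of the pattern (the wiring here is illustrative, not
the actual `scheduler.py`):

```python
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def start_scheduler() -> None:
    # BackgroundScheduler runs jobs on its own thread pool, so the main
    # thread stays free to serve health checks even if a job deadlocks.
    scheduler.add_job(lambda: print("tick"), "interval", seconds=60)
    scheduler.start()

async def health_check() -> str:
    # Async and independent of the job threads.
    if not scheduler.running:
        raise RuntimeError("Scheduler is not running")
    return "OK"
```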

### Enhanced Monitoring & Debugging  
- **Add execution timing**: Track and log how long each graph execution
takes to create and publish
- **Warn on slow operations**: Alert when operations take >10 seconds,
indicating resource contention
- **Enhanced error logging**: Include elapsed time and exception types
in error messages
- **Better APScheduler event listeners**: Add listeners for missed jobs
and max instances with actionable messages

### Files Modified
- `backend/executor/scheduler.py` - Switch to BackgroundScheduler, async
health_check, timing monitoring
- `backend/util/service.py` - Base async health_check method
- `backend/executor/database.py` - Async health_check override  
- `backend/notifications/notifications.py` - Async health_check override

## Test Plan
- [x] All existing tests pass (914 passed, 1 failed unrelated connection
issue)
- [x] Scheduler starts correctly with BackgroundScheduler
- [x] Health checks respond properly under load
- [x] Enhanced logging provides visibility into execution timing

## Impact
- **Prevents pod freezes**: Scheduler remains responsive even when jobs
deadlock
- **Better observability**: Clear visibility into slow operations and
failures
- **No dropped executions**: Jobs won't get stuck in QUEUED state due to
process freezes
- **Faster incident response**: Health checks and logs provide
actionable debugging info

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-09 02:41:10 +00:00
Nicholas Tindle
0116866199 feat(backend): add more discord blocks support (#10586)
# Enhanced Discord Integration Blocks

Introduces new blocks for sending DMs, embeds, files, and replies in
Discord, as well as blocks for retrieving user and channel information.
Enhances existing message blocks with additional metadata fields and
server/channel identification. Improves test coverage and input/output
schemas for all Discord-related blocks.

Co-Authored-By: Claude <claude@users.noreply.github.com>

## Why These Changes Are Needed 🎯

The existing Discord integration was limited to basic message sending
and reading. Users needed more sophisticated Discord functionality to
build comprehensive automation workflows:

1. **Limited messaging options** - Could only send plain text to
channels, no DMs, embeds, or file attachments
2. **Poor graph connectivity** - Blocks didn't output IDs needed for
chaining operations (e.g., couldn't reply to a message after sending it)
3. **No user management** - Couldn't get user information or send direct
messages
4. **Type safety issues** - Discord.py's incomplete type hints caused
linting errors
5. **No channel resolution** - Had to manually find channel IDs instead
of using names

### Changes 🏗️

#### New Blocks Added
- **SendDiscordDMBlock** - Send direct messages to users via their
Discord ID
- **SendDiscordEmbedBlock** - Create rich embedded messages with images,
fields, and formatting
- **SendDiscordFileBlock** - Upload any file type (images, PDFs, videos,
etc.) using MediaFileType
- **ReplyToDiscordMessageBlock** - Reply to specific messages in threads
- **DiscordUserInfoBlock** - Retrieve user profile information
(username, avatar, creation date, etc.)
- **DiscordChannelInfoBlock** - Resolve channel names to IDs and get
channel metadata

#### Enhanced Existing Blocks
- **ReadDiscordMessagesBlock**:
- Now outputs: `message_id`, `channel_id`, `user_id` (previously missing
all IDs)
- Enables workflows like: read message → reply to it, or read message →
DM the author
  
- **SendDiscordMessageBlock**:
- Now outputs: `message_id`, `channel_id` (previously had no outputs
except status)
  - Enables tracking sent messages and replying to them later

#### Technical Improvements
- **MediaFileType Support**: SendDiscordFileBlock accepts data URIs,
URLs, or local paths
- **Defensive Programming**: Added runtime type checks for Discord.py's
incomplete typing
- **ID Passthrough**: DiscordUserInfoBlock passes through user_id for
chaining
- **Better Error Messages**: Clear feedback when operations fail (e.g.,
"Channel cannot receive messages")
- **Channel Flexibility**: Blocks accept both channel names and IDs
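
As an aside, the underlying discord.py calls for a DM flow look roughly
like this (a standalone sketch, not the repository's actual
SendDiscordDMBlock implementation):

```python
import discord

async def send_discord_dm(token: str, user_id: int, message: str) -> int:
    """Send a direct message and return the sent message ID for chaining."""
    client = discord.Client(intents=discord.Intents.default())
    sent_id = 0

    @client.event
    async def on_ready():
        nonlocal sent_id
        try:
            user = await client.fetch_user(user_id)  # resolve the recipient
            sent = await user.send(message)          # DM the user
            sent_id = sent.id                        # expose the ID as output
        finally:
            await client.close()

    await client.start(token)
    return sent_id
```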

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

#### Test Plan 🧪
- [x] **Import and initialization**: All 8 Discord blocks import and
initialize without errors
- [x] **Type checking**: `poetry run format` passes with no type errors
- [x] **Interface connectivity**: Verified blocks can chain together:
- [x] ReadDiscordMessages → ReplyToDiscordMessage (via message_id,
channel_id)
  - [x] ReadDiscordMessages → SendDiscordDM (via user_id)
- [x] SendDiscordMessage → ReplyToDiscordMessage (via message_id,
channel_id)
  - [x] DiscordUserInfo → SendDiscordDM (via user_id passthrough)
  - [x] DiscordChannelInfo → SendDiscordEmbed/File (via channel_id)
- [x] **MediaFileType handling**: SendDiscordFileBlock correctly
processes:
  - [x] Data URIs (base64 encoded files)
  - [x] URLs (downloads from web)
  - [x] Local paths (from other blocks)
- [x] **Defensive checks**: Verified error handling for:
  - [x] Non-text channels (forums, categories)
  - [x] Private/DM channels without guilds
  - [x] Missing attributes on channel objects
- [x] **Mock test data**: All blocks have appropriate test
inputs/outputs defined

## Example Workflows Now Possible 🚀

1. **Auto-reply to mentions**: Read messages → Check if bot mentioned →
Reply in thread
2. **File distribution**: Generate report → Send as PDF to Discord
channel
3. **User notifications**: Get user info → Check if online → Send DM
with alert
4. **Cross-platform sync**: Receive email attachment → Forward to
Discord channel
5. **Rich notifications**: Create embed with thumbnail → Add fields →
Send to announcement channel

## Breaking Changes ⚠️

None - all changes are backward compatible. Existing workflows using
SendDiscordMessageBlock and ReadDiscordMessagesBlock will continue to
work, they just now have additional outputs available.

## Dependencies 📦

No new dependencies added. Uses existing:
- `discord.py` (already in project)
- `aiohttp` (already in project)
- Backend utilities: `MediaFileType`, `store_media_file` (already in
project)

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-08 18:45:04 +00:00
Bently
b68e490868 fix(backend): correct LLM configurations (#10585)
## Summary
Corrects the context window for GPT5_CHAT, fixes provider for
CLAUDE_4_1_OPUS from 'openai' to 'anthropic', and adds a 600s timeout to
the Anthropic client call in llm_call.

## Changes 🏗️
- Changed GPT5_CHAT's context limit to a smaller value (16k)
- Changed CLAUDE_4_1_OPUS's provider from openai to anthropic
- Added a 600s timeout to the Anthropic client call

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] test all models and they work
2025-08-08 15:45:18 +00:00
Swifty
c1c5571fd5 feat(blocks): Add 5 additional GitHub Integration blocks (#10561)
### Summary
Implemented 5 additional GitHub blocks on top of the existing GitHub
Integration to enhance CI/CD workflows and code review automation
capabilities.

[New Github
Blocks_v41.json](https://github.com/user-attachments/files/21684665/New.Github.Blocks_v41.json)
<img width="902" height="1073" alt="Screenshot 2025-08-08 at 15 09 40"
src="https://github.com/user-attachments/assets/ebb6d33b-f3cd-4a56-acc6-56ace5a01274"
/>

### Changes 🏗️

- Added **GitHub CI Results Block** (`github/ci.py`): Fetch and analyze
CI/CD check runs, workflow statuses, and logs
- Added **GitHub Review Blocks** (`github/reviews.py`):
  - Create PR reviews with comments
  - Approve/request changes on PRs
  - Add review comments to specific lines
  - Fetch existing reviews and comments
  - Dismiss stale reviews

### Related Tickets
- SECRT-1423: GitHub CI Results Integration
- SECRT-1426: GitHub PR Review Creation
- SECRT-1425: GitHub Review Comments
- SECRT-1424: GitHub Review Approval/Changes
- SECRT-1427: GitHub Review Management

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Created and tested CI results block with various repositories
  - [x] Tested PR review creation with comments
  - [x] Verified review approval and change request functionality
  - [x] Tested adding line-specific review comments
  - [x] Confirmed fetching and dismissing reviews works correctly
2025-08-08 15:18:02 +00:00
Swifty
da16397882 feat(blocks): update exa websets implementation (#10521)
## Summary

This PR fixes and enhances the Exa Websets implementation to resolve
issues with the expand_items parameter and improve the overall block
functionality. The changes address UI limitations with nested response
objects while providing a more comprehensive and user-friendly interface
for creating and managing Exa websets.


[Websets_v14.json](https://github.com/user-attachments/files/21596313/Websets_v14.json)
<img width="1335" height="949" alt="Screenshot 2025-08-05 at 11 45 07"
src="https://github.com/user-attachments/assets/3a9b3da0-3950-4388-96b2-e5dfa9df9b67"
/>

**Why these changes are necessary:**

1. **UI Compatibility**: The current implementation returns deeply
nested objects that cause the UI to crash. This PR flattens the input
parameters and returns simplified response objects to work around these
UI limitations.

2. **Expand Items Issue**: The `expand_items` toggle in the GetWebset
block was causing failures. This parameter has been removed as it's not
essential for the basic functionality.

3. **Missing SDK Integration**: The previous implementation used raw
HTTP requests instead of the official Exa SDK, making it harder to
maintain and more prone to errors.

4. **Limited Functionality**: The original implementation lacked support
for many Exa API features like imports, enrichments, and scope
configuration.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

1. **Added Pydantic models** (`model.py`):
   - Created comprehensive type definitions for all Exa webset objects
   - Added proper enums for status values and types
   - Structured models to match the Exa API response format

2. **Refactored websets.py**:
   - Replaced raw HTTP requests with the official `exa-py` SDK
- Flattened nested input parameters to avoid UI issues with complex
objects
   - Enhanced `ExaCreateWebsetBlock` with support for:
- Search configuration with entity types, criteria, exclude/scope
sources
     - Import functionality from existing sources
     - Enrichment configuration with multiple formats
- Removed problematic `expand_items` parameter from `ExaGetWebsetBlock`
- Updated response objects to use simplified `Webset` model that returns
dicts for nested objects

3. **Updated webhook_blocks.py**:
- Disabled the webhook block temporarily (`disabled=True`) as it needs
further testing

4. **Added exa-py dependency**:
   - Added official Exa Python SDK to `pyproject.toml` and `poetry.lock`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created a new webset using the ExaCreateWebsetBlock with basic
search parameters
- [x] Verified the webset was created successfully in the Exa dashboard
- [x] Listed websets using ExaListWebsetsBlock and confirmed pagination
works
- [x] Retrieved individual webset details using ExaGetWebsetBlock
without expand_items
- [x] Tested advanced features including entity types, criteria, and
exclude sources
- [x] Confirmed the UI no longer crashes when displaying webset
responses
- [x] Verified the Docker environment builds successfully with the new
exa-py dependency

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
  - Added `exa-py` dependency to backend requirements

### Additional Notes

- The webhook functionality has been temporarily disabled pending
further testing and UI improvements
- The flattened parameter approach is a workaround for current UI
limitations with nested objects
- Future improvements could include re-enabling nested objects once the
UI supports them better
2025-08-08 15:14:52 +00:00
Swifty
098c12a961 feat(backend): Enable Ayrshare TikTok support (#10537)
## Summary
- Enabled the TikTok posting block that was previously disabled
- The block provides comprehensive TikTok-specific posting options

## Changes 🏗️
- Removed `disabled=True` from TikTok posting block to enable
functionality
- Added full TikTok API integration with all supported options:

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the TikTok block is now available in the block list

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-08 14:04:38 +00:00
Zamil Majdy
a28b2cf04f fix(backend/scheduler): Reconfigure scheduling setting & Add more logging on execution scheduling logic 2025-08-08 19:27:30 +07:00
Zamil Majdy
de7b6b503f fix(backend): Add timeout on stopping message consumer on manager 2025-08-08 18:04:10 +07:00
Zamil Majdy
5338ab5b80 feat(backend): standardize service health checks with UnhealthyServiceError (#10584) 2025-08-08 17:23:36 +07:00
Zamil Majdy
e8f897ead1 feat(backend): standardize service health checks with UnhealthyServiceError (#10584)
This PR standardizes health check error handling across all services by
introducing and using a consistent `UnhealthyServiceError` exception
type. This improves monitoring, debugging, and service reliability by
providing uniform error reporting when services are unhealthy.

### Changes 🏗️

- **Added `UnhealthyServiceError` class** in `backend/util/service.py`:
  - Custom exception for unhealthy service states
  - Includes service name in error message
  - Added to `EXCEPTION_MAPPING` for proper serialization
- **Updated health checks across services** to use
`UnhealthyServiceError`:
- **Database service** (`backend/executor/database.py`): Replace
`RuntimeError` with `UnhealthyServiceError` for database connection
failures
- **Scheduler service** (`backend/executor/scheduler.py`): Replace
`RuntimeError` with `UnhealthyServiceError` for scheduler initialization
and running state checks
- **Notification service** (`backend/notifications/notifications.py`):
- Replace `RuntimeError` with `UnhealthyServiceError` for RabbitMQ
configuration issues
    - Added new `health_check()` method to verify RabbitMQ readiness
- **REST API** (`backend/server/rest_api.py`): Replace `RuntimeError`
with `UnhealthyServiceError` for database health checks
- **Updated imports** across all affected files to include
`UnhealthyServiceError`
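
A minimal sketch of what such an exception class might look like (the
actual class in `backend/util/service.py` may differ):

```python
class UnhealthyServiceError(Exception):
    """Raised when a service's health check fails."""

    def __init__(self, service_name: str, detail: str = ""):
        message = f"Service '{service_name}' is unhealthy"
        if detail:
            message += f": {detail}"
        super().__init__(message)
        self.service_name = service_name
```

Callers would then raise `UnhealthyServiceError("scheduler", "not running")`
instead of a bare `RuntimeError`.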

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified health check endpoints return appropriate errors when
services are unhealthy
- [x] Confirmed services start up properly and health checks pass when
healthy
  - [x] Tested error serialization through API responses
  - [x] Verified no breaking changes to existing functionality

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes were made in this PR - only code changes to
improve error handling consistency.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-08 10:00:59 +00:00
Zamil Majdy
fbe432919d fix(backend/scheduler): Add more robust health check mechanism for scheduler service 2025-08-08 14:53:56 +07:00
Abhimanyu Yadav
4f208d262e test(frontend): add e2e tests for agent dashboard page (#10572)
I have added e2e tests for the agent dashboard page.

It includes tests such as:
- dashboard page loads successfully
- submit agent button works correctly
- agent table displays data correctly
- agent table actions work correctly

I’ve also updated the e2e test script to include some static agent
submissions, so I can test if it loads on the frontend.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests are working perfectly locally
  
  
<img width="469" height="177" alt="Screenshot 2025-08-08 at 12 13 42 PM"
src="https://github.com/user-attachments/assets/5e37afc3-c151-476a-84de-0a06f44a0722"
/>
2025-08-08 07:29:11 +00:00
Zamil Majdy
ac9265c40d Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-08-08 14:08:37 +07:00
Zamil Majdy
e60deba05f refactor(backend): separate notification service from scheduler (#10579)
## Summary
- Create dedicated notification service entry point
(backend.notification:main)
- Remove NotificationManager from scheduler service for better
separation of concerns
- Update docker-compose to run notification service on dedicated port
8007
- Configure all services to communicate with separate notification
service

This refactoring separates the notification service from the scheduler
service, allowing them to run as independent microservices instead of
two processes in the same pod.

## Changes Made
- **New notification service entry point**: Created
`backend/backend/notification.py` with dedicated main function
- **Updated pyproject.toml**: Added notification service entry point
registration
- **Modified scheduler service**: Removed NotificationManager from
`backend/backend/scheduler.py`
- **Docker Compose updates**: Added notification_server service on port
8007, updated NOTIFICATIONMANAGER_HOST references

## Test plan
- [x] Verify notification service starts correctly with new entry point
- [x] Confirm scheduler service runs without notification manager
- [x] Test docker-compose configuration with separate services
- [x] Validate service discovery between microservices
- [x] Run linting and type checking

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-08-08 14:07:41 +07:00
Zamil Majdy
3131e2e856 fix(backend): resolve unclosed HTTP client session errors (#10566)
## Summary

This PR resolves unclosed HTTP client session errors that were occurring
in the backend, particularly during file uploads and service-to-service
communication.

### Key Changes

- **Fixed GCS storage operations**: Convert
`gcloud.aio.storage.Storage()` to use async context managers in
`media.py` and `cloud_storage.py`
- **Enhanced service client cleanup**: Added proper cleanup methods to
`DynamicClient` class in `service.py` with `__del__` fallback and
context manager support
- **Application shutdown cleanup**: Added cloud storage handler cleanup
to FastAPI application lifespan
- **Updated test mocks**: Fixed test fixtures to properly mock async
context manager behavior

### Root Cause Analysis

The "Unclosed client session" and "Unclosed connector" errors were
caused by:

1. **GCS storage clients** not using context managers (agent image
uploads)
2. **Service HTTP clients** (`httpx.Client`/`AsyncClient`) not being
properly cleaned up in the `DynamicClient` class

### Technical Details

- All `gcloud.aio.storage.Storage()` instances now use `async with`
context managers
- `DynamicClient` class now has proper cleanup methods and context
manager support
- Application shutdown hook ensures cloud storage handlers are properly
closed
- Test fixtures updated to mock async context manager protocol
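
A small sketch of the context-manager pattern (illustrative; the real
upload code in `media.py` has more parameters):

```python
from gcloud.aio.storage import Storage

async def upload_agent_image(bucket: str, object_name: str, data: bytes) -> str:
    # Using the client as an async context manager closes the underlying
    # aiohttp session, which is what removes the "Unclosed client session"
    # warnings.
    async with Storage() as storage:
        await storage.upload(bucket, object_name, data)
    return f"gs://{bucket}/{object_name}"
```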

### Testing

-  All media upload tests pass
-  Service client tests pass
-  Linting and formatting pass

## Test plan

- [ ] Deploy to staging environment
- [ ] Monitor logs for "Unclosed client session" errors (should be
eliminated)
- [ ] Verify file upload functionality works correctly
- [ ] Check service-to-service communication operates normally

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-08 05:41:41 +00:00
Zamil Majdy
378d256b58 fix(backend): add graph validation before scheduling recurring jobs (#10568)
## Summary

This PR addresses the recurring job validation failures by adding graph
validation before scheduling jobs. Previously, validation errors only
occurred at runtime during job execution, making it difficult to
communicate errors to users for scheduled recurring jobs.

### Changes 🏗️

- **Extract validation logic**: Created
`validate_and_construct_node_execution_input` wrapper function that
centralizes graph fetching, credential mapping, and validation logic
- **Add pre-scheduling validation**: Modified
`add_graph_execution_schedule` to validate graphs before creating
scheduled jobs
- **Make construct function private**: Renamed
`construct_node_execution_input` to `_construct_node_execution_input` to
prevent direct usage and encourage use of the wrapper
- **Reduce code duplication**: Eliminated duplicate validation logic
between scheduler and execution paths
- **Improve scheduler lifecycle management**:
  - Enhanced cleanup process with proper event loop shutdown sequence
  - Added graceful event loop thread termination with timeout
  - Fixed thread lifecycle management to prevent resource leaks
- **Add helper utilities**: 
- Created `run_async` helper to reduce
`asyncio.run_coroutine_threadsafe` boilerplate
- Added `SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant for consistent
timeout handling across all scheduler operations

### Technical Details

**Validation Flow:**
The validation now happens in `add_graph_execution_schedule` before
calling `scheduler.add_job()`, ensuring that:
1. Graph exists and is accessible to the user
2. All credentials are valid and available
3. Graph structure and node configurations are valid
4. Starting nodes are present and properly configured

This uses the same validation logic as runtime execution, guaranteeing
consistency.

**Scheduler Lifecycle Improvements:**
- **Proper cleanup sequence**: Event loop is stopped before thread
termination
- **Thread management**: Added global tracking of event loop thread for
proper cleanup
- **Timeout consistency**: All scheduler operations now use the same
300-second timeout
- **Resource management**: Prevents potential memory leaks from unclosed
event loops

**Code Quality Improvements:**
- **DRY principle**: `run_async` helper eliminates repeated
`asyncio.run_coroutine_threadsafe` patterns
- **Single source of truth**: All timeout values use
`SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant
- **Cleaner abstractions**: Direct utility function calls instead of
unnecessary wrapper methods
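
A sketch of what the `run_async` helper could look like (assumed shape;
only the timeout constant is taken from the description above):

```python
import asyncio
from typing import Any, Coroutine

SCHEDULER_OPERATION_TIMEOUT_SECONDS = 300

def run_async(coro: Coroutine[Any, Any, Any], loop: asyncio.AbstractEventLoop) -> Any:
    """Submit a coroutine to the scheduler's event loop from a worker thread
    and block until it finishes, with a consistent timeout."""
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result(timeout=SCHEDULER_OPERATION_TIMEOUT_SECONDS)
```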

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified imports work correctly for both scheduler and utils
modules
  - [x] Confirmed code passes all linting and type checking
  - [x] Validated that existing functionality remains intact
  - [x] Tested that validation logic is properly extracted and reused
  - [x] Verified scheduler cleanup process works correctly
  - [x] Confirmed thread lifecycle management improvements

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes were required for this fix.*

## Impact

- **Prevents runtime failures**: Invalid graphs are caught before
scheduling instead of failing silently during execution
- **Better error communication**: Validation errors surface immediately
when scheduling
- **Improved resource management**: Proper event loop and thread cleanup
prevents memory leaks
- **Enhanced maintainability**: Single source of truth for validation
logic and consistent timeout handling
- **Reduced code duplication**: Eliminated ~30+ lines of duplicate code
across validation and async execution patterns
- **Better developer experience**: Cleaner code with helper functions
and consistent patterns

Resolves the TODO comment: "We need to communicate this error to the
user somehow" in scheduler.py:107

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-08 05:40:20 +00:00
Abhimanyu Yadav
3c52b75278 fix(frontend): marketplace top agents section (#10571)
Currently, we’re only seeing the top 20 agents, but we need to display
all of them until we see more call-to-action buttons.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests are working perfectly
  - [x] It's working manually as well
2025-08-08 04:52:51 +00:00
Zamil Majdy
40601f1616 fix(backend): Fix executor running RabbitMQ operations on closed/closing connection (#10578)
The RabbitMQ connection is unreliable (fixing it is a separate issue)
and sometimes gets restarted. The scope of this PR is to keep
operations from breaking when they run on a stale, broken connection.

### Changes 🏗️

Fix executor running RabbitMQ operations on closed/closing connection
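
A hedged sketch of the guard pattern (this assumes a pika
`BlockingConnection`; the project's actual queue client may differ):

```python
import pika

class ResilientPublisher:
    """Re-open the connection/channel if it has gone stale before publishing."""

    def __init__(self, params: pika.ConnectionParameters):
        self._params = params
        self._connection = None
        self._channel = None

    def _ensure_channel(self):
        if self._connection is None or self._connection.is_closed:
            self._connection = pika.BlockingConnection(self._params)
            self._channel = None
        if self._channel is None or self._channel.is_closed:
            self._channel = self._connection.channel()
        return self._channel

    def publish(self, exchange: str, routing_key: str, body: bytes) -> None:
        self._ensure_channel().basic_publish(
            exchange=exchange, routing_key=routing_key, body=body
        )
```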

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Manually kill rabbitmq and see how it goes while executing an
agent
2025-08-07 23:53:52 +00:00
Nicholas Tindle
178c91d6b9 ref(backend): time/date blocks to support ISO 8601 and custom formats (#10576)
Introduces discriminated unions for time, date, and date-time format
selection, supporting both strftime and ISO 8601 (with timezone and
microsecond options). Updates schemas, test cases, and block logic to
handle the new format types, improving flexibility and standards
compliance for time and date outputs.

<!-- Clearly explain the need for these changes: -->

### Why these changes are needed

Users need to output timestamps in ISO 8601/RFC 3339 format for API
integrations and standardized data exchange. The previous implementation
only supported strftime formatting, which made it difficult to generate
properly formatted timestamps with timezone information. This change
enables:

- **Standards compliance**: ISO 8601 and RFC 3339 compliant timestamps
- **Timezone support**: 38 timezone options covering all UTC offsets
globally
- **API compatibility**: Many APIs require RFC 3339 timestamps (e.g.,
"2011-06-03T10:00:00-07:00")
- **Backward compatibility**: Existing workflows continue to work with
default strftime format

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Added discriminated union format types** for all time/date blocks:
- `GetCurrentTimeBlock`: Now supports `TimeStrftimeFormat` and
`TimeISO8601Format`
- `GetCurrentDateBlock`: Now supports `DateStrftimeFormat` and
`DateISO8601Format`
- `GetCurrentDateAndTimeBlock`: Now supports `StrftimeFormat` and
`ISO8601Format`

- **Implemented shared timezone support**:
- Created `TimezoneLiteral` type with 38 timezone options (all UTC
offsets)
  - Supports fractional offsets (e.g., India UTC+05:30, Nepal UTC+05:45)
  - Deduplicated timezone lists across all format classes

- **Added ISO 8601 format features**:
  - Timezone-aware timestamps with proper offset formatting
  - Optional microseconds inclusion
  - RFC 3339 compliance (subset of ISO 8601 with mandatory timezone)

- **Updated test cases** for all three blocks to verify:
  - Default behavior unchanged (backward compatibility)
  - Custom strftime formats still work
  - ISO 8601 format produces correct output
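
For reference, producing an RFC 3339 / ISO 8601 timestamp with an
explicit offset only needs the standard library (an illustrative
snippet, not the block's actual code):

```python
from datetime import datetime, timedelta, timezone

def iso8601_now(utc_offset_hours: float = 0.0, include_microseconds: bool = False) -> str:
    """Current time as an RFC 3339 / ISO 8601 string with an explicit offset."""
    tz = timezone(timedelta(hours=utc_offset_hours))  # supports fractional offsets
    now = datetime.now(tz)
    if not include_microseconds:
        now = now.replace(microsecond=0)
    return now.isoformat()  # e.g. "2011-06-03T10:00:00-07:00"
```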

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified backward compatibility - default strftime format
unchanged
  - [x] Tested ISO 8601 format with UTC timezone
- [x] Tested ISO 8601 format with various timezones (India, New York,
etc.)
  - [x] Tested microseconds option for ISO formats
  - [x] Verified all existing tests pass for GetCurrentTimeBlock
  - [x] Verified all existing tests pass for GetCurrentDateBlock
  - [x] Verified all existing tests pass for GetCurrentDateAndTimeBlock
  - [x] Manually tested each block with different format configurations
- [x] Confirmed RFC 3339 compliance for timestamps with mandatory
timezone

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-07 22:34:31 +00:00
Nicholas Tindle
c972f34713 Revert "feat(docker): add frontend service to docker-compose with env config improvements" (#10577)
Reverts Significant-Gravitas/AutoGPT#10536 to bring platform back up due
to this error:
```
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client!  │
│   │
│ Check your Supabase project's API settings to find these values   │
│   │
│ https://supabase.com/dashboard/project/_/settings/api   │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api)   │
│ at bX (.next/server/chunks/3873.js:6:90688)   │
│ at <unknown> (.next/server/chunks/150.js:6:13460)   │
│ at n (.next/server/chunks/150.js:6:13419)   │
│ at o (.next/server/chunks/150.js:6:14187)   │
│ ⨯ Error: Your project's URL and Key are required to create a Supabase client!   │
│   │
│ Check your Supabase project's API settings to find these values   │
│   │
│ https://supabase.com/dashboard/project/_/settings/api   │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api)   │
│ at bY (.next/server/chunks/3006.js:10:486)   │
│ at g (.next/server/app/(platform)/auth/callback/route.js:1:5890)   │
│ at async e (.next/server/chunks/9836.js:1:101814)   │
│ at async k (.next/server/chunks/9836.js:1:15611)   │
│ at async l (.next/server/chunks/9836.js:1:15817) {   │
│ digest: '424987633'   │
│ }   │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client!  │
│   │
│ Check your Supabase project's API settings to find these values   │
│   │
│ https://supabase.com/dashboard/project/_/settings/api   │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api)   │
│ at bX (.next/server/chunks/3873.js:6:90688)   │
│ at <unknown> (.next/server/chunks/150.js:6:13460)   │
│ at n (.next/server/chunks/150.js:6:13419)   │
│ at j (.next/server/chunks/150.js:6:7482)   │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client!  │
│   │
│ Check your Supabase project's API settings to find these values   │
│   │
│ https://supabase.com/dashboard/project/_/settings/api   │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api)   │
│ at bX (.next/server/chunks/3873.js:6:90688)   │
│ at <unknown> (.next/server/chunks/150.js:6:13460)   │
│ at n (.next/server/chunks/150.js:6:13419)   │
│ at h (.next/server/chunks/150.js:6:10561)   │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client!  │
│   │
│ Check your Supabase project's API settings to find these values   │
│   │
│ https://supabase.com/dashboard/project/_/settings/api   │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api)   │
│ at bX (.next/server/chunks/3873.js:6:90688)   │
│ at <unknown> (.next/server/chunks/150.js:6:13460)   │
│ at n (.next/server/chunks/150.js:6:13419) 
```
2025-08-07 20:00:45 +00:00
Bently
7b3ee66247 feat(blocks): Add Anthropics new Claude Opus 4.1 model (#10575)
This adds the latest Claude Opus 4.1 model to the platform.

This adds the following model:
- claude-opus-4-1-20250805

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test Claude Opus 4.1 to make sure it works
2025-08-07 17:40:04 +00:00
Bently
2d10ac92b5 feat(blocks): Add GPT-5 models to the platform (#10574)
This adds the latest ChatGPT models, GPT-5, to the platform, ahead of
their release. The prices and context limits are still to be properly
set; for now I set them to be the same as GPT-4.1, with the price set
at 5 until we know more.

This adds the following models
- gpt-5
- gpt-5-mini
- gpt-5-nano
- gpt-5-chat

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test all of the models to make sure they work
2025-08-07 17:19:23 +00:00
Swifty
377b5ef01c fix id not preserved through airtable oauth refresh (#10573)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-08-07 16:44:36 +02:00
Zamil Majdy
7922e4add4 fix(backend): fix lack of event loop on notification manager 2025-08-07 16:15:32 +07:00
Zamil Majdy
f172b314a4 feat(docker): add frontend service to docker-compose with env config improvements (#10536)
## Summary
This PR adds the frontend service to the Docker Compose configuration,
enabling `docker compose up` to run the complete stack including the
frontend. It also implements comprehensive environment variable
improvements and fixes Docker networking issues.

## Key Changes

### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and
`docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of dev server
for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services
(`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating
environment values

### 🔧 Environment Variable Architecture
- **Dual environment strategy**: 
- Server-side code uses Docker service names
(`http://rest_server:8006/api`)
  - Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment
variables
- **Network compatibility**: Fixes connection issues between frontend
and backend containers

### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`)
with server-side priority
- **Updated all frontend code** to use shared environment helpers
instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through
helper functions

### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend
service
- `frontend/Dockerfile` - Added build args for environment variables
- `frontend/src/lib/env-config.ts` - New centralized environment
configuration
- Multiple frontend files - Updated to use env helpers

## Benefits
-  **Single command deployment**: `docker compose up` now runs
everything
-  **Better reliability**: Production build reduces memory usage and
crashes
-  **Network compatibility**: Proper container-to-container
communication
-  **Maintainable config**: Centralized environment variable management
-  **Development friendly**: Works in both Docker and local development

## Testing
-  Verified Docker service communication works correctly
-  Frontend responds and serves content properly  
-  Environment variables are correctly resolved in both server and
client contexts
-  No connection errors after implementing service dependencies

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-07 08:26:28 +00:00
Zamil Majdy
a21711a7ff feat(backend): migrate AgentExecutor from ProcessPoolExecutor to ThreadPoolExecutor (#10540)
## Summary
- Migrate execution manager from ProcessPoolExecutor to
ThreadPoolExecutor for improved performance and resource efficiency
- Rename `Executor` class to `ExecutionProcessor` for better clarity
- Convert classmethods to instance methods following proper OOP design
patterns
- Implement thread-local storage using `threading.local()` for
thread-safe execution

## Technical Changes
- **Executor Pattern**: Replace process-based execution with
thread-based execution using `ThreadPoolExecutor`
- **Thread-Local Storage**: Use `threading.local()` to bind
`ExecutionProcessor` instances to worker threads
- **Initialization**: Add `init_worker()` function called once per
thread via `initializer` parameter
- **Event Handling**: Replace `multiprocessing.Manager().Event()` with
`threading.Event()`
- **Tracking**: Update from PID to TID (`threading.get_ident()`) for
thread identification
- **Method Conversion**: Convert all classmethods to instance methods
(`cls` → `self`)
- **Signal Handling**: Remove signal handling code that doesn't work in
worker threads
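
A condensed sketch of the thread-local worker pattern described above
(class contents are placeholders):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

_local = threading.local()

class ExecutionProcessor:
    """Per-thread execution state (placeholder for the real class)."""
    def on_graph_execution(self, graph_exec_id: str) -> None:
        print(f"[tid={threading.get_ident()}] executing {graph_exec_id}")

def init_worker() -> None:
    # Passed as `initializer=`, so it runs exactly once per worker thread.
    _local.processor = ExecutionProcessor()

def run_graph(graph_exec_id: str) -> None:
    _local.processor.on_graph_execution(graph_exec_id)

pool = ThreadPoolExecutor(max_workers=4, initializer=init_worker)
futures = [pool.submit(run_graph, f"exec-{i}") for i in range(8)]
```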

## Benefits
- **Performance**: Reduced overhead compared to process
creation/destruction
- **Resource Efficiency**: Lower memory footprint and faster startup
- **Simplicity**: Cleaner implementation using thread-local storage
pattern
- **Thread Safety**: Maintained through isolated ExecutionProcessor
instances per thread

## Test Plan
- [x] Code passes all linting and formatting
- [x] All executor tests pass (23/23)
- [x] Graph execution test passes successfully
- [x] Thread-local storage implementation verified
- [x] Signal handling compatibility fixed for worker threads

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-07 08:25:22 +00:00
Zamil Majdy
e2af2f454d fix(backend): migrate notification service to fully async to resolve RabbitMQ connection issues (#10564)
## Summary
- **Remove background_executor from NotificationManager** to eliminate
event loop conflicts that were causing RabbitMQ "Connection reset by
peer" errors
- **Convert all notification processing to fully async** using async
database clients
- **Optimize Settings instantiation** to prevent file descriptor leaks
by moving to module level
- **Fix scheduler event loop management** to use single shared loop
instead of thread-cached approach

## Changes 🏗️

### 1. Remove ProcessPoolExecutor from NotificationManager
- Eliminated `background_executor` entirely from notification service
- Converted `queue_weekly_summary()` and `process_existing_batches()`
from sync to async
- Fixed the root cause: `asyncio.run()` was creating new event loops,
conflicting with existing RabbitMQ connections

### 2. Full Async Conversion
- Updated `_consume_queue` to only accept async functions:
`Callable[[str], Awaitable[bool]]`
- Replaced sync `DatabaseManagerClient` with
`DatabaseManagerAsyncClient` throughout notification service
- Added missing async methods to `DatabaseManagerAsyncClient`:
  - `get_active_user_ids_in_timerange`
  - `get_user_email_by_id` 
  - `get_user_email_verification`
  - `get_user_notification_preference`
  - `create_or_add_to_user_notification_batch`
  - `empty_user_notification_batch`
  - `get_all_batches_by_type`

### 3. Settings Optimization
- Moved `Settings()` instantiation to module level in:
  - `backend/util/metrics.py`
  - `backend/blocks/google_calendar.py`
  - `backend/blocks/gmail.py`
  - `backend/blocks/slant3d.py`
  - `backend/blocks/user.py`
- Prevents multiple file descriptor reads per process, reducing resource
usage

### 4. Scheduler Event Loop Fix
- **Simplified event loop initialization** in `Scheduler.run_service()`
to create single shared loop
- **Removed complex thread caching and locking** that could create
multiple connections
- **Fixed daemon thread lifecycle** by using non-daemon thread with
proper cleanup
- **Event loop runs in dedicated background thread** with graceful
shutdown handling

## Root Cause Analysis

The RabbitMQ "Connection reset by peer" errors were caused by:
1. **Event Loop Conflicts**: `asyncio.run()` in `queue_weekly_summary`
created new event loops, disrupting existing RabbitMQ heartbeat
connections
2. **Thread Resource Waste**: Thread-cached event loops in scheduler
created unnecessary connections
3. **File Descriptor Leaks**: Multiple Settings instantiations per
process increased resource pressure

## Why This Fixes the Issue

1. **Eliminates Event Loop Creation**: By using `asyncio.create_task()`
instead of `asyncio.run()`, we reuse the existing event loop
2. **Maintains Heartbeat Connections**: Async RabbitMQ connections
remain stable without event loop disruption
3. **Reduces Resource Pressure**: Settings optimization and simplified
scheduler reduce file descriptor usage
4. **Ensures Connection Stability**: Single shared event loop prevents
connection multiplexing issues
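
The core of the fix is the loop-reuse pattern (a simplified sketch, not
the actual notification service code):

```python
import asyncio

async def queue_weekly_summary() -> None:
    ...  # async notification work sharing the existing RabbitMQ connection

def schedule_weekly_summary() -> "asyncio.Task[None]":
    # Called from code already running inside the event loop: create a task
    # on the existing loop instead of asyncio.run(), which would spin up a
    # new loop and disrupt long-lived connections (RabbitMQ heartbeats).
    return asyncio.create_task(queue_weekly_summary())
```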

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified RabbitMQ connection stability by checking heartbeat logs
- [x] Confirmed async conversion maintains all notification
functionality
  - [x] Tested scheduler job execution with simplified event loop
  - [x] Validated Settings optimization reduces file descriptor usage
  - [x] Ensured notification processing works end-to-end

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-07 08:25:09 +00:00
Zamil Majdy
59cc3266e0 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-08-07 06:28:56 +07:00
Zamil Majdy
c9360555b2 fix(backend): Persist any non interruption error on node execution as output (#10562)
Some non-node-execution errors and system failures (like credentials
not found, or database failure) are not logged or exposed to the user.
This makes the node execution look like it failed without an error
message:

<img width="804" height="1141" alt="image"
src="https://github.com/user-attachments/assets/e81314a0-b9af-4a95-bba7-8df576911e96"
/>

### Changes 🏗️

Yield all non-interruption errors as node execution error output.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] CI
2025-08-07 06:28:24 +07:00
Bently
4a63fbc006 feat(blocks): Add OpenAI's new opensource models (#10559)
This adds the latest open-source models from OpenAI to the platform; we
are using OpenRouter to provide API access to them.

I added 
- openai/gpt-oss-20b
- openai/gpt-oss-120b

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test both of the latest models from OpenAI (openai/gpt-oss-20b and
openai/gpt-oss-120b); they should work.
2025-08-06 11:43:49 +00:00
Abhimanyu Yadav
9848266474 test(frontend): e2e tests for library page (#10355)
In this PR, I’ve added library page tests.

### Changes

I’ve added 9 tests: 8 for normal flows and 1 for checking edge cases.

Test names are something like:
- Library navigation is accessible from the navbar.
- The library page loads successfully.
- Agents are visible, and cards work correctly.
- Pagination works correctly.
- Sorting works correctly.
- Searching works correctly.
- Pagination while searching works correctly.
- Uploading an agent works correctly.
- Edge case: Search edge cases and error handling behave correctly.

Other than that, I’ve added a new utility that uses the build page to
help us create users at the start, which we could use to test the
library page.

- All tests are passing locally

<img width="514" height="465" alt="Screenshot 2025-07-12 at 11 13 41 AM"
src="https://github.com/user-attachments/assets/7a46c437-7db5-458b-b99a-4fa0d479866f"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All library tests are working locally and on CI perfectly.
2025-08-06 08:00:04 +00:00
Zamil Majdy
3fe88b6106 refactor(backend): Refactor log client and resource cleanup (#10558)
## Summary
- Created centralized service client helpers with thread caching in
`util/clients.py`
- Refactored service client management to eliminate health checks and
improve performance
- Enhanced logging in process cleanup to include error details
- Improved retry mechanisms and resource cleanup across the platform
- Updated multiple services to use new centralized client patterns

## Key Changes
### New Centralized Client Factory (`util/clients.py`)
- Added thread-cached factory functions for all major service clients:
  - Database managers (sync and async)
  - Scheduler client
  - Notification manager
  - Execution event bus (Redis-based)
  - RabbitMQ execution queue (sync and async)
  - Integration credentials store
- All clients use `@thread_cached` decorator for performance
optimization
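
A simplified stand-in for this pattern, not the actual `util/clients.py` code; the decorator below just memoizes the factory result once per thread, and the client class is a placeholder:

```python
import threading
from functools import wraps
from typing import Callable, TypeVar

T = TypeVar("T")

def thread_cached(func: Callable[[], T]) -> Callable[[], T]:
    """Simplified stand-in: cache the factory's result once per thread."""
    local = threading.local()

    @wraps(func)
    def wrapper() -> T:
        if not hasattr(local, "value"):
            local.value = func()
        return local.value

    return wrapper

class DatabaseManagerClient:
    """Placeholder for the real RPC client."""
    def __init__(self) -> None:
        print("building client (expensive: connection setup)")

@thread_cached
def get_database_manager_client() -> DatabaseManagerClient:
    # Each thread constructs the client once and reuses it afterwards,
    # avoiding repeated connection setup on every call.
    return DatabaseManagerClient()
```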

### Service Client Improvements
- **Removed health checks**: Eliminated unnecessary health check calls
from `get_service_client()` to reduce startup overhead
- **Enhanced retry support**: Database manager clients now use request
retry by default
- **Better error handling**: Improved error propagation and logging

### Enhanced Logging and Cleanup
- **Process termination logs**: Added error details to termination
messages in `util/process.py`
- **Retry mechanism updates**: Improved retry logic with better error
handling in `util/retry.py`
- **Resource cleanup**: Better resource management across executors and
monitoring services

### Updated Service Usage
- Refactored 21+ files to use new centralized client patterns
- Updated all executor, monitoring, and notification services
- Maintained backward compatibility while improving performance

## Files Changed
- **Created**: `backend/util/clients.py` - Centralized client factory
with thread caching
- **Modified**: 21 files across blocks, executor, monitoring, and
utility modules
- **Key areas**: Service client initialization, resource cleanup, retry
mechanisms

## Test Plan
- [x] Verify all existing tests pass
- [x] Validate service startup and client initialization  
- [x] Test resource cleanup on process termination
- [x] Confirm retry mechanisms work correctly
- [x] Validate thread caching performance improvements
- [x] Ensure no breaking changes to existing functionality

## Breaking Changes
None - all changes maintain backward compatibility.

## Additional Notes
This refactoring centralizes client management patterns that were
scattered across the codebase, making them more consistent and
performant through thread caching. The removal of health checks reduces
startup time while maintaining reliability through improved retry
mechanisms.

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-08-06 13:53:01 +07:00
Reinier van der Leer
fa2d968458 fix(builder): Defer graph validation to backend (#10556)
- Resolves #10553

### Changes 🏗️

- Remove frontend graph validation in `useAgentGraph:saveAndRun(..)`
  - Remove now unused `ajv` dependency
- Implement graph validation error propagation (backend->frontend)
  - Add `GraphValidationError` type in frontend and backend
  - Add `GraphModel.validate_graph_get_errors(..)` method
  - Fix error handling & propagation in frontend API request logic

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving & running a graph with missing required inputs gives a
node-specific error
- [x] Saving & running a graph with missing node credential inputs
succeeds with passed-in credentials
2025-08-05 23:43:34 +00:00
Zamil Majdy
b935638240 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-08-06 05:56:00 +07:00
Zamil Majdy
f9b255fb7a feat(backend/executor): Avoid executor premature termination on inflight agent execution (#10552)
There is no 100% accurate way to retry an agent that has been
terminated, so the safest way to avoid executing an agent incorrectly is
to minimize the chance of an agent execution being terminated in the
first place. The whole set of mechanisms that retry the agent on failure
is still in place and improved; it serves as our best-effort reliability
mechanism.

### Changes 🏗️

* Cap SIGINT & SIGTERM so they are raised at most once, letting the
executor handle shutdown gracefully (see the sketch after this list).
* SIGINT & SIGTERM will stop the execution request message consumption,
but not agent execution.
* Executor process will only stop if all the in-flight agent executions
are completed or terminated.
* Avoid retrying the agent stop command on AgentExecutorBlock on
timeout.
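
A rough sketch of the signal-capping behavior, under the assumption that shutdown state is tracked with a simple flag; the real executor's consumer and in-flight tracking are more involved:

```python
import signal

class ExecutorShutdown:
    """Sketch: act on SIGINT/SIGTERM only once; let in-flight executions finish."""

    def __init__(self) -> None:
        self.stop_requested = False
        signal.signal(signal.SIGINT, self._handle)
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame) -> None:
        if self.stop_requested:
            # Repeated signals are swallowed so a second SIGINT/SIGTERM can
            # never interrupt agent executions that are still running.
            return
        self.stop_requested = True
        print(f"signal {signum}: stop consuming execution requests, keep in-flight work")
```

The consumption loop would then check `stop_requested` before pulling new requests and only exit once its set of in-flight executions is empty.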

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run agent, send SIGTERM to the executor pod, execution should not
be interrupted.
- [x] Run agent, send SIGKILL to the executor pod, execution should be
transferred to another pod.
2025-08-06 05:55:30 +07:00
Nicholas Tindle
cc6697e46d fix(backend): clean up parsing a bit for gmail read (#10555)
<!-- Clearly explain the need for these changes: -->
Toran hit an error caused by a snippet being read incorrectly

### Changes 🏗️
Falls back to dictionary-based field lookups when building email objects
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] Deploy to dev and have Toran test against his inbox

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-05 18:30:34 +00:00
Swifty
5a5f7f0f9e fix(backend): Fix Airtable API boolean type casting for string values (#10551)
### Changes 🏗️

- Added `_convert_bools()` function to recursively convert string
boolean values ("true"/"false") to actual Python booleans
- Applied boolean conversion to all Airtable API endpoints that send
JSON data to ensure proper type casting
- Fixed parameters that were incorrectly converted to strings (e.g.,
`typecast`, `returnFieldsByFieldId`) to maintain their boolean types

This fix addresses an issue where the Airtable API was not properly
handling boolean values passed as strings, which could cause API calls
to fail or behave unexpectedly.
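
An illustrative reconstruction of the conversion described above (not necessarily the merged implementation): recursively map the strings "true"/"false" to Python booleans before sending JSON to the Airtable API.

```python
from typing import Any

def _convert_bools(value: Any) -> Any:
    """Recursively convert "true"/"false" strings to booleans in JSON-like data."""
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered == "true":
            return True
        if lowered == "false":
            return False
        return value
    if isinstance(value, dict):
        return {key: _convert_bools(val) for key, val in value.items()}
    if isinstance(value, list):
        return [_convert_bools(item) for item in value]
    return value

# Example: string booleans nested in a record payload become real booleans.
payload = {"typecast": "true", "records": [{"fields": {"Done": "false"}}]}
assert _convert_bools(payload) == {"typecast": True, "records": [{"fields": {"Done": False}}]}
```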

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested boolean field updates with string values "true" and "false"
- [x] Verified that boolean parameters like `typecast` and
`returnFieldsByFieldId` are properly handled
- [x] Confirmed that nested boolean values in records are correctly
converted
  - [x] Tested that non-boolean values remain unchanged

[Working Airtable
Example_v56.json](https://github.com/user-attachments/files/21594436/Working.Airtable.Example_v56.json)
2025-08-05 13:22:55 +00:00
Bently
05d4d21d98 feat(frontend): Show CAPTCHA only in cloud environments (#10543)
Updated login and signup pages to display the Turnstile CAPTCHA and
require verification only when running in a cloud environment. This
prevents unnecessary CAPTCHA prompts in local or non-cloud deployments.

### Changes 🏗️

Locally, when you try to log in with the wrong password and then correct
it and log in again, you get a warning about the CAPTCHA, which is wrong.
This fix makes the CAPTCHA check apply only when running in a cloud
environment.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Try to login with the wrong password, get "Invalid login
credentials" and try to login again, you should keep getting "Invalid
login credentials" and it should not mention captcha
2025-08-04 16:37:20 +00:00
Zamil Majdy
1e6bd8d2a6 fix(backend/executor): Avoid stopping agent node evaluation when stopping graph (#10542)
Graph evaluation should stop naturally once all the node executions are
stopped.

### Changes 🏗️

Avoid stopping agent node evaluation when stopping graph

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] CI
2025-08-04 23:10:42 +07:00
Zamil Majdy
6f8d0bfdf2 fix(backend/executor): Fix node execution status and output persistence ordering (#10541)
The node execution status update can complete before the output is
persisted, so an output may be written after the node execution is
already marked as completed.

### Changes 🏗️

* Re-order the node execution status & output persistence logic.
* Make agent.py avoid yielding the same node_exec_id twice (that can be
caused by the above issue).
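
Schematically, the re-ordering looks like the sketch below; the helper functions are stand-ins with the shapes described elsewhere in this changeset, and the status value is simplified:

```python
from typing import Any

async def upsert_execution_output(node_exec_id: str, name: str, data: Any) -> None:
    """Stand-in for the real output persistence call."""

async def update_node_execution_status(node_exec_id: str, status: str) -> None:
    """Stand-in for the real status update (which also publishes an event)."""

async def finish_node_execution(node_exec_id: str, outputs: dict[str, Any]) -> None:
    # Persist every output first...
    for name, data in outputs.items():
        await upsert_execution_output(node_exec_id, name, data)
    # ...and only then mark the node COMPLETED, so a completed status can
    # never be observed while outputs are still being written.
    await update_node_execution_status(node_exec_id, "COMPLETED")
```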

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI
2025-08-04 22:17:30 +07:00
Zamil Majdy
b85e8204df refactor(platform): remove unused functions and imports (#10535)
## Summary
- Removed unused metadata functions from user.py (get_user_metadata,
update_user_metadata)
- Removed unused execution and database functions from database.py and
related imports
- Added NodeExecutionStats validation in execution.py
- Updated CLAUDE.md with PR and commit conventions

## Changes Made
### `/backend/backend/data/user.py`
- Removed `get_user_metadata()` function (unused)
- Removed `update_user_metadata()` function (unused)
- Removed unused import `UserMetadataRaw`

### `/backend/backend/data/execution.py`
- Added `NodeExecutionStats` validation in `from_db()` method

### `/backend/backend/executor/database.py`
- Removed unused imports and function exposures
- Cleaned up DatabaseManagerClient to remove unused client methods

### `/CLAUDE.md`
- Added documentation for creating pull requests
- Added conventional commit types and scopes guide

## Testing
- Existing tests should pass as removed functions were not being used
- No new functionality added

## Checklist
- [x] Code follows the project's style guidelines
- [x] Self-review completed
- [x] Changes are backward compatible
- [x] No new warnings introduced

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-04 21:34:33 +07:00
Zamil Majdy
e5d3ebac08 feat(backend): Make Graph & Node Execution Stats Update Durable (#10529)
Graph and node execution can fail for many reasons, and this sometimes
messes up the stats tracking and gives inaccurate results. The scope of
this PR is to minimize such issues.

### Changes 🏗️

* Catch BaseException on time_measured decorator to catch
asyncio.CancelledError
* Make sure update node & graph stats are executed on cancellation &
exception.
* Protect graph execution stats update under the thread lock to avoid
race condition.
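
A minimal sketch of the decorator change, with a placeholder stats call; the real `time_measured` helper's signature and stats update may differ:

```python
import time
from functools import wraps

def record_stats(name: str, seconds: float, failed: bool) -> None:
    """Hypothetical stand-in for the real node/graph stats update."""
    print(f"{name}: {seconds:.3f}s (failed={failed})")

def time_measured(func):
    """Sketch: record how long a coroutine ran, even if it was cancelled."""
    @wraps(func)
    async def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = await func(*args, **kwargs)
        except BaseException:
            # A plain `except Exception` would miss asyncio.CancelledError and
            # SystemExit, so interrupted executions would never have their
            # stats recorded; BaseException covers those cases too.
            record_stats(func.__name__, time.monotonic() - start, failed=True)
            raise
        record_stats(func.__name__, time.monotonic() - start, failed=False)
        return result
    return wrapper
```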

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing automated tests.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-04 21:33:52 +07:00
Swifty
2182c7ba9e refactor(backend): remove Ayrshare from execution & credentials manager (#10538)
This PR refactors the Ayrshare integration to remove the centralized
`get_ayrshare_profile_key` function from the credentials store and
instead retrieve the profile key directly within each Ayrshare block.
This change improves code organization by keeping Ayrshare-specific
logic within the Ayrshare module.

### Changes 🏗️

- **Refactored Ayrshare profile key retrieval**: Moved profile key
fetching logic from the credentials store into the Ayrshare blocks
- **Added `get_profile_key` helper function** in
`autogpt_platform/backend/backend/blocks/ayrshare/_util.py` to fetch the
profile key from user integrations
- **Updated all 15 Ayrshare social media blocks** to use `user_id`
instead of `profile_key` parameter and fetch the profile key internally
- **Removed `get_ayrshare_profile_key` method** from
`autogpt_platform/backend/backend/integrations/credentials_store.py`
- **Removed Ayrshare-specific logic** from
`autogpt_platform/backend/backend/executor/manager.py` that was passing
profile keys to blocks
- **Updated router** in
`autogpt_platform/backend/backend/server/integrations/router.py` to
directly fetch user integrations instead of using the removed method
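
A hedged sketch of what such a helper could look like; the integrations accessor and attribute name below are assumptions rather than the actual `_util.py` implementation:

```python
from typing import Any, Optional

async def get_user_integrations(user_id: str) -> Any:
    """Stand-in for the existing user-integrations accessor."""
    ...

async def get_profile_key(user_id: str) -> Optional[str]:
    """Fetch the user's Ayrshare profile key from their stored integrations (sketch)."""
    integrations = await get_user_integrations(user_id)
    # Keeping this lookup inside the Ayrshare module means the credentials
    # store and executor no longer need any Ayrshare-specific logic.
    return getattr(integrations, "ayrshare_profile_key", None)
```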

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test posting to X/Twitter to check credentials flow
- [x] Verify profile key retrieval works correctly for authenticated
users
  - [x] Test Ayrshare SSO URL generation flow

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-04 13:36:54 +00:00
Abhimanyu Yadav
e043e4989b fix(frontend) : Update server-side mutator to bypass proxy (#10523)
This PR helps us bypass the proxy server in server-side requests,
allowing us to directly send requests to the backend and reduce latency.

### Changes 🏗️
- Introduced server-side detection to dynamically set the base URL for
API requests.
- Added error handling for server-side requests to log failures and
throw errors appropriately.
- Updated header management to include authentication tokens when
applicable.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All E2E tests are working.
- [x] I have manually checked the server-side and client-side
components, and both are working perfectly.
2025-08-04 11:36:25 +00:00
Abhimanyu Yadav
5dbc3a7d39 feat(frontend): Add marketplace agent page tests (#10434)
- Resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10433
- Depends on -
https://github.com/Significant-Gravitas/AutoGPT/pull/10427
- Review this PR once this issue is fixed -
https://github.com/Significant-Gravitas/AutoGPT/issues/10404

I’ve created additional tests for the agents marketplace page

Tests that I have added
- Add to library button works and agent appears in library.
- Download button functionality works.
- Agent page details are visible.
- User can access agent page when logged in.
- User can access agent page when logged out

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have done all the tests and they are working perfectly

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-08-04 05:53:53 +00:00
Abhimanyu Yadav
0978f406bc feat(frontend): Add reusable infinite scroll component for consistent pagination across frontend (#10530)
We currently use infinite scroll pagination in multiple places, but our
strategies vary across these locations. This repetitive code is not
ideal, and our current approaches are also complex. We're not utilising
React Query's useInfiniteQuery hooks effectively.

To address these issues, we’re introducing a new component called
`InfiniteScroll` that handles pagination independently.

### How to use it?

- Use React Query's `useInfiniteQuery`-based hook, which returns multiple
values. For pagination, we only need `fetchNextPage`, `hasNextPage`, and
`isFetchingNextPage`.

```ts
const {
  data: agents,
  fetchNextPage,
  hasNextPage,
  isFetchingNextPage,
  isLoading: agentLoading,
} = useGetV2ListLibraryAgentsInfinite(
  {
    page: 1,
    page_size: 8,
    search_term: searchTerm || undefined,
    sort_by: librarySort,
  },
);
```

- Simply pass these three data points and the current data length to the
`InfiniteScroll` component. That's it

```tsx
<InfiniteScroll
  dataLength={agents.length}
  isFetchingNextPage={isFetchingNextPage}
  fetchNextPage={fetchNextPage}
  hasNextPage={hasNextPage}
  loader={<LoadingSpinner />}
>
  ...
```
   
### Changes
- Add the `InfiniteScroll.tsx` component for consistency and simplicity
in pagination across the frontend.
- Update the current library page to use the `InfiniteScroll` component.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I’ve tested everything locally, and it’s working perfectly fine.
2025-08-04 05:48:54 +00:00
Zamil Majdy
1c3fa804d4 feat(backend): add timeout guard for locked_transaction used for credit transactions (#10528)
## Summary

This PR adds a timeout guard to the `locked_transaction` function used
for credit transactions to prevent indefinite blocking and improve
reliability.

## Changes

- Modified `locked_transaction` in `/backend/backend/data/db.py` to add
proper timeout handling
- Set `lock_timeout` and `statement_timeout` to prevent indefinite
blocking
- Updated function signature to use default timeout parameter
- Added comprehensive docstring explaining the locking mechanism
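
A rough sketch of the guard, assuming the existing Prisma-style `transaction` helper and raw SQL execution (both appear later in this changeset's DatabaseManager spec); the exact SQL, defaults, and names here are illustrative:

```python
import zlib
from contextlib import asynccontextmanager

@asynccontextmanager
async def locked_transaction(key: str, timeout: float = 60.0):
    """Advisory-locked transaction that cannot block indefinitely (sketch)."""
    lock_key = zlib.crc32(key.encode("utf-8"))
    timeout_ms = int(timeout * 1000)
    async with transaction(timeout=timeout) as tx:  # assumed existing helper
        # Scoped to this transaction: waiting on the advisory lock or running
        # a statement beyond the timeout now aborts instead of hanging forever.
        await tx.execute_raw(f"SET LOCAL lock_timeout = '{timeout_ms}ms'")
        await tx.execute_raw(f"SET LOCAL statement_timeout = '{timeout_ms}ms'")
        await tx.execute_raw("SELECT pg_advisory_xact_lock($1)", lock_key)
        yield tx
```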

## Motivation

The previous implementation could potentially block indefinitely if a
lock couldn't be acquired, which could cause issues in production
environments, especially for critical credit transactions.

## Testing

- Existing tests pass
- The timeout mechanism ensures transactions won't hang indefinitely
- Advisory locks are properly released on commit/rollback

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-02 15:32:20 +00:00
Zamil Majdy
69d873debc fix(backend): improve executor reliability and error handling (#10526)
This PR improves the reliability of the executor system by addressing
several race conditions and improving error handling throughout the
execution pipeline.

### Changes 🏗️

- **Consolidated exception handling**: Now using `BaseException` to
properly catch all types of interruptions including `CancelledError` and
`SystemExit`
- **Atomic stats updates**: Moved node execution stats updates to be
atomic with graph stats updates to prevent race conditions
- **Improved cleanup handling**: Added proper timeout handling (3600s)
for stuck executions during cleanup
- **Fixed concurrent update race conditions**: Node execution updates
are now properly synchronized with graph execution updates
- **Better error propagation**: Improved error type preservation and
status management throughout the execution chain
- **Graph resumption support**: Added proper handling for resuming
terminated and failed graph executions
- **Removed deprecated methods**: Removed `update_node_execution_stats`
in favor of atomic updates

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a graph with multiple nodes and verify stats are updated
correctly
  - [x] Cancel a running graph execution and verify proper cleanup
  - [x] Simulate node failures and verify error propagation
  - [x] Test graph resumption after termination/failure
  - [x] Verify no race conditions in concurrent node execution updates

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-02 17:41:59 +07:00
Zamil Majdy
4283798dc2 feat: Avoid Rest & DatabaseManager service serving traffic when the db is not yet connected (#10522)
Sometimes the service starts receiving traffic before it is connected to
the DB, and those requests fail.

### Changes 🏗️

Make the `/health_check` endpoint also check the database connection.
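
A simplified sketch of the idea with FastAPI; `db_is_connected` is a placeholder for however the service actually checks its database connection:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

async def db_is_connected() -> bool:
    """Hypothetical helper; in practice this would ping the database."""
    return True

@app.get("/health_check")
async def health_check():
    # Refuse to report healthy until the database connection is ready, so
    # the orchestrator does not route traffic to a half-started service.
    if not await db_is_connected():
        raise HTTPException(status_code=503, detail="Database not connected")
    return {"status": "ok"}
```

With this in place, anything polling `/health_check` holds back traffic until the database connection is established.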

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI, manual test
2025-08-01 23:17:43 +00:00
Abhimanyu Yadav
326c4a9e0c feat(frontend): Add marketplace creator page tests (#10429)
- Resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10428
- Depends on -
https://github.com/Significant-Gravitas/AutoGPT/pull/10427
- Review this PR once this issue is fixed -
https://github.com/Significant-Gravitas/AutoGPT/issues/10404

I’ve created additional tests for the creators marketplace page

Tests that I have added
- User can access creator's page when logged out.
- User can access creator's page when logged in.
- Creator page details are visible.
- Navigation of agents in the "Agents by" sections works.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have done all the tests and they are working perfectly
2025-08-01 15:28:02 +00:00
Abhimanyu Yadav
7705cf243c refactor(frontend): Update data fetching strategy in marketplace main page (#10520)
With this PR, we’re changing the data fetching strategy on the
marketplace page. We’re now using autogenerated React queries.

### Changes

- Splits separate render logic and hook logic.
- Update the data fetching strategy.
- Currently, agents appear in the featured section and creators in the
featured creators section even if they're not set to `isFeatured: true`.
I've fixed that as well.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All marketplace E2E tests are working.
- [x] I’ve tested all the links and checked if everything renders
perfectly on the marketplace page.
2025-08-01 15:27:48 +00:00
Zamil Majdy
8331dabf6a feat(backend): Make agent graph execution retriable and its failure visible (#10518)
Make agent graph execution durable by making it retriable. When it fails
to retry, we should make the error visible to the UI.

<img width="900" height="495" alt="image"
src="https://github.com/user-attachments/assets/70e3e117-31e7-4704-8bdf-1802c6afc70b"
/>
<img width="900" height="407" alt="image"
src="https://github.com/user-attachments/assets/78ca6c28-6cc2-4aff-bfa9-9f94b7f89f77"
/>


### Changes 🏗️

* Make _on_graph_execution retriable
* Increase retry count for failing db-manager RPC
* Add test coverage for RPC failure retry
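
A schematic sketch of a retry wrapper in this spirit; the retry/backoff policy, names, and the way errors are surfaced to the UI are assumptions, not the merged code:

```python
import asyncio
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")

async def run_with_retries(
    make_attempt: Callable[[], Awaitable[T]],
    max_attempts: int = 3,
    base_delay: float = 1.0,
) -> T:
    """Retry a graph execution a few times; surface the final error if all attempts fail."""
    last_error: Exception | None = None
    for attempt in range(1, max_attempts + 1):
        try:
            return await make_attempt()
        except Exception as exc:  # interruptions (e.g. CancelledError) are not retried
            last_error = exc
            await asyncio.sleep(base_delay * attempt)
    # When retries are exhausted, record the failure so the UI can show it.
    raise RuntimeError(f"Graph execution failed after {max_attempts} attempts") from last_error
```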

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Allow graph execution retry
2025-08-01 11:44:43 +00:00
Zamil Majdy
e632549175 feat(backend): Add AI-generated activity status for agent executions (#10487)
## Summary
- Adds AI-generated activity status summaries for agent execution
results
- Provides users with conversational, non-technical summaries of what
their agents accomplished
- Includes comprehensive execution data analysis with honest failure
reporting

## Changes Made
- **Backend**: Added `ActivityStatusGenerator` module with async LLM
integration
- **Database**: Extended `GraphExecutionStats` and `Stats` models with
`activity_status` field
- **Frontend**: Added "Smart Agent Execution Summary" display with
disclaimer tooltip
- **Settings**: Added `execution_enable_ai_activity_status` toggle
(disabled by default)
- **Testing**: Comprehensive test suite with 12 test cases covering all
scenarios

## Key Features
- Collects execution data including graph structure, node relations,
errors, and I/O samples
- Generates user-friendly summaries from first-person perspective
- Honest reporting of failures and invalid inputs (no sugar-coating)
- Payload optimization for LLM context limits
- Full async implementation with proper error handling

## Test Plan
- [x] All existing tests pass
- [x] New comprehensive test suite covers success/failure scenarios
- [x] Feature toggle testing (enabled/disabled states)
- [x] Frontend integration displays correctly
- [x] Error handling and edge cases covered

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-01 11:37:49 +00:00
Abhimanyu Yadav
878f61aaf4 fix(test): Enhance E2E test data script to include featured creators and agents (#10517)
This PR updates the existing E2E test data script to support the
creation of featured creators and featured agents. Previously, these
entities were not included, which limited our ability to fully test
certain flows during Playwright E2E testing.

### Changes
- Added logic to create featured creators
- Added logic to create featured agents

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests are passing locally after updating the data script.
2025-08-01 11:09:39 +00:00
Nicholas Tindle
9ce857e29f fix(backend): we're making loops (#10514) 2025-07-31 10:18:37 -05:00
678 changed files with 21346 additions and 36889 deletions

View File

@@ -15,6 +15,7 @@
!autogpt_platform/backend/pyproject.toml
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
# Platform - Market
!autogpt_platform/market/market/
@@ -27,6 +28,7 @@
# Platform - Frontend
!autogpt_platform/frontend/src/
!autogpt_platform/frontend/public/
!autogpt_platform/frontend/scripts/
!autogpt_platform/frontend/package.json
!autogpt_platform/frontend/pnpm-lock.yaml
!autogpt_platform/frontend/tsconfig.json
@@ -34,6 +36,7 @@
## config
!autogpt_platform/frontend/*.config.*
!autogpt_platform/frontend/.env.*
!autogpt_platform/frontend/.env
# Classic - AutoGPT
!classic/original_autogpt/autogpt/

View File

@@ -24,7 +24,8 @@
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

View File

@@ -82,37 +82,6 @@ jobs:
- name: Run lint
run: pnpm lint
type-check:
runs-on: ubuntu-latest
needs: setup
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run tsc check
run: pnpm type-check
chromatic:
runs-on: ubuntu-latest
needs: setup
@@ -176,11 +145,7 @@ jobs:
- name: Copy default supabase .env
run: |
cp ../.env.example ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.example ../backend/.env
cp ../.env.default ../.env
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -252,15 +217,6 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.example .env
- name: Build frontend
run: pnpm build --turbo
# uses Turbopack, much faster and safe enough for a test pipeline
env:
NEXT_PUBLIC_PW_TEST: true
- name: Install Browser 'chromium'
run: pnpm playwright install --with-deps chromium

View File

@@ -0,0 +1,132 @@
name: AutoGPT Platform - Frontend CI
on:
push:
branches: [master, dev]
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
pull_request:
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
merge_group:
defaults:
run:
shell: bash
working-directory: autogpt_platform/frontend
jobs:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
types:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.default ../backend/.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.default .env
- name: Wait for services to be ready
run: |
echo "Waiting for rest_server to be ready..."
timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..."
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..."
- name: Generate API queries
run: pnpm generate:api:force
- name: Check for API schema changes
run: |
if ! git diff --exit-code src/app/api/openapi.json; then
echo "❌ API schema changes detected in src/app/api/openapi.json"
echo ""
echo "The openapi.json file has been modified after running 'pnpm generate:api-all'."
echo "This usually means changes have been made in the BE endpoints without updating the Frontend."
echo "The API schema is now out of sync with the Front-end queries."
echo ""
echo "To fix this:"
echo "1. Pull the backend 'docker compose pull && docker compose up -d --build --force-recreate'"
echo "2. Run 'pnpm generate:api' locally"
echo "3. Run 'pnpm types' locally"
echo "4. Fix any TypeScript errors that may have been introduced"
echo "5. Commit and push your changes"
echo ""
exit 1
else
echo "✅ No API schema changes detected"
fi
- name: Run Typescript checks
run: pnpm types

.gitignore vendored
View File

@@ -5,6 +5,8 @@ classic/original_autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
# Root .env files
/.env
azure.yaml
.vscode
.idea/*
@@ -121,7 +123,6 @@ celerybeat.pid
# Environments
.direnv/
.env
.venv
env/
venv*/

View File

@@ -235,7 +235,7 @@ repos:
hooks:
- id: tsc
name: Typecheck - AutoGPT Platform - Frontend
entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
entry: bash -c 'cd autogpt_platform/frontend && pnpm types'
files: ^autogpt_platform/frontend/
types: [file]
language: system

View File

@@ -3,6 +3,16 @@
[![Discord Follow](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fautogpt%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&label=total%20members&logo=discord&logoColor=white&color=7289da)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
<!-- Keep these links. Translations will automatically update with the README. -->
[Deutsch](https://zdoc.app/de/Significant-Gravitas/AutoGPT) |
[Español](https://zdoc.app/es/Significant-Gravitas/AutoGPT) |
[français](https://zdoc.app/fr/Significant-Gravitas/AutoGPT) |
[日本語](https://zdoc.app/ja/Significant-Gravitas/AutoGPT) |
[한국어](https://zdoc.app/ko/Significant-Gravitas/AutoGPT) |
[Português](https://zdoc.app/pt/Significant-Gravitas/AutoGPT) |
[Русский](https://zdoc.app/ru/Significant-Gravitas/AutoGPT) |
[中文](https://zdoc.app/zh/Significant-Gravitas/AutoGPT)
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options

View File

@@ -1,9 +1,11 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
@@ -11,6 +13,7 @@ AutoGPT Platform is a monorepo containing:
## Essential Commands
### Backend Development
```bash
# Install dependencies
cd backend && poetry install
@@ -30,11 +33,18 @@ poetry run test
# Run specific test
poetry run pytest path/to/test_file.py::test_function_name
# Run block tests (tests that validate all blocks work correctly)
poetry run pytest backend/blocks/test/test_block.py -xvs
# Run tests for a specific block (e.g., GetCurrentTimeBlock)
poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetCurrentTimeBlock]' -xvs
# Lint and format
# prefer format if you want to just "fix" it and only get the errors that can't be autofixed
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in TESTING.md
#### Creating/Updating Snapshots
@@ -47,8 +57,8 @@ poetry run pytest path/to/test.py --snapshot-update
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Frontend Development
```bash
# Install dependencies
cd frontend && npm install
@@ -66,12 +76,13 @@ npm run storybook
npm run build
# Type checking
npm run type-check
npm run types
```
## Architecture Overview
### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
@@ -80,6 +91,7 @@ npm run type-check
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
@@ -87,6 +99,7 @@ npm run type-check
- **Feature Flags**: LaunchDarkly integration
### Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
@@ -94,13 +107,16 @@ npm run type-check
5. **Virus Scanning**: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema
Key models (defined in `/backend/schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
@@ -108,13 +124,31 @@ Key models (defined in `/backend/schema.prisma`):
- `StoreListing`: Marketplace listings for sharing agents
### Environment Configuration
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services
#### Configuration Files
- **Backend**: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- **Frontend**: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- **Platform**: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)
#### Docker Environment Loading Order
1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence
#### Key Points
- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
### Common Development Tasks
**Adding a new block:**
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
@@ -122,13 +156,18 @@ Key models (defined in `/backend/schema.prisma`):
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
Note: when making many new blocks, analyze the interfaces for each of these blocks and consider whether they would work well together in a graph-based editor or whether they would struggle to connect productively.
For example: do the inputs and outputs tie well together?
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
@@ -137,6 +176,7 @@ Key models (defined in `/backend/schema.prisma`):
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
@@ -144,3 +184,47 @@ Key models (defined in `/backend/schema.prisma`):
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
### Creating Pull Requests
- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`).
- Use conventional commit messages (see below).
- Fill out the .github/PULL_REQUEST_TEMPLATE.md template as the PR description.
- Run the GitHub pre-commit hooks to ensure code quality.
### Reviewing/Revising Pull Requests
- When the user runs /pr-comments or tries to fetch them, also run gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews to get the reviews
- Use gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews/[review_id]/comments to get the review contents
- Use gh api /repos/Significant-Gravitas/AutoGPT/issues/9924/comments to get the pr specific comments
### Conventional Commits
Use this format for commit messages and Pull Request titles:
**Conventional Commit Types:**
- `feat`: Introduces a new feature to the codebase
- `fix`: Patches a bug in the codebase
- `refactor`: Code change that neither fixes a bug nor adds a feature; also applies to removing features
- `ci`: Changes to CI configuration
- `docs`: Documentation-only changes
- `dx`: Improvements to the developer experience
**Recommended Base Scopes:**
- `platform`: Changes affecting both frontend and backend
- `frontend`
- `backend`
- `infra`
- `blocks`: Modifications/additions of individual blocks
**Subscope Examples:**
- `backend/executor`
- `backend/db`
- `frontend/builder` (includes changes to the block UI component)
- `infra/prod`
Use these scopes and subscopes for clarity and consistency in commit messages.

View File

@@ -8,7 +8,6 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
@@ -24,10 +23,10 @@ To run the AutoGPT Platform, follow these steps:
2. Run the following command:
```
cp .env.example .env
cp .env.default .env
```
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
This command will copy the `.env.default` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
@@ -37,44 +36,7 @@ To run the AutoGPT Platform, follow these steps:
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Run the following command:
Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Generate the API client (this step is required before running the frontend):
```
pnpm generate:api-client
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands
@@ -177,20 +139,21 @@ The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
- `pnpm generate:api`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api-all
pnpm generate:api
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.

View File

@@ -1,802 +0,0 @@
# DatabaseManager Technical Specification
## Executive Summary
This document provides a complete technical specification for implementing a drop-in replacement for the AutoGPT Platform's DatabaseManager service. The replacement must maintain 100% API compatibility while preserving all functional behaviors, security requirements, and performance characteristics.
## 1. System Overview
### 1.1 Purpose
The DatabaseManager is a centralized service that provides database access for the AutoGPT Platform's executor system. It encapsulates all database operations behind a service interface, enabling distributed execution while maintaining data consistency and security.
### 1.2 Architecture Pattern
- **Service Type**: HTTP-based microservice using FastAPI
- **Communication**: RPC-style over HTTP with JSON serialization
- **Base Class**: Inherits from `AppService` (backend.util.service)
- **Client Classes**: `DatabaseManagerClient` (sync) and `DatabaseManagerAsyncClient` (async)
- **Port**: Configurable via `config.database_api_port`
### 1.3 Critical Requirements
1. **API Compatibility**: All 40+ exposed methods must maintain exact signatures
2. **Type Safety**: Full type preservation across service boundaries
3. **User Isolation**: All operations must respect user_id boundaries
4. **Transaction Support**: Maintain ACID properties for critical operations
5. **Event Publishing**: Maintain Redis event bus integration for real-time updates
## 2. Service Implementation Requirements
### 2.1 Base Service Class
```python
from backend.util.service import AppService, expose
from backend.util.settings import Config
from backend.data import db
import logging
class DatabaseManager(AppService):
"""
REQUIRED: Inherit from AppService to get:
- Automatic endpoint generation via @expose decorator
- Built-in health checks at /health
- Request/response serialization
- Error handling and logging
"""
def run_service(self) -> None:
"""REQUIRED: Initialize database connection before starting service"""
logger.info(f"[{self.service_name}] ⏳ Connecting to Database...")
self.run_and_wait(db.connect()) # CRITICAL: Must connect to database
super().run_service() # Start HTTP server
def cleanup(self):
"""REQUIRED: Clean disconnect on shutdown"""
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Disconnecting Database...")
self.run_and_wait(db.disconnect()) # CRITICAL: Must disconnect cleanly
@classmethod
def get_port(cls) -> int:
"""REQUIRED: Return configured port"""
return config.database_api_port
```
### 2.2 Method Exposure Pattern
```python
@staticmethod
def _(f: Callable[P, R], name: str | None = None) -> Callable[Concatenate[object, P], R]:
"""
REQUIRED: Helper to expose methods with proper signatures
- Preserves function name for endpoint generation
- Maintains type information
- Adds 'self' parameter for instance binding
"""
if name is not None:
f.__name__ = name
return cast(Callable[Concatenate[object, P], R], expose(f))
```
### 2.3 Database Connection Management
**REQUIRED: Use Prisma ORM with these exact configurations:**
```python
from prisma import Prisma
prisma = Prisma(
auto_register=True,
http={"timeout": HTTP_TIMEOUT}, # Default: 120 seconds
datasource={"url": DATABASE_URL}
)
# Connection lifecycle
async def connect():
await prisma.connect()
async def disconnect():
await prisma.disconnect()
```
### 2.4 Transaction Support
**REQUIRED: Implement both regular and locked transactions:**
```python
async def transaction(timeout: float | None = None):
"""Regular database transaction"""
async with prisma.tx(timeout=timeout) as tx:
yield tx
async def locked_transaction(key: str, timeout: float | None = None):
"""Transaction with PostgreSQL advisory lock"""
lock_key = zlib.crc32(key.encode("utf-8"))
async with transaction(timeout=timeout) as tx:
await tx.execute_raw("SELECT pg_advisory_xact_lock($1)", lock_key)
yield tx
```
## 3. Complete API Specification
### 3.1 Execution Management APIs
#### get_graph_execution
```python
async def get_graph_execution(
user_id: str,
execution_id: str,
*,
include_node_executions: bool = False
) -> GraphExecution | GraphExecutionWithNodes | None
```
**Behavior**:
- Returns execution only if user_id matches
- Optionally includes all node executions
- Returns None if not found or unauthorized
#### get_graph_executions
```python
async def get_graph_executions(
user_id: str,
graph_id: str | None = None,
*,
limit: int = 50,
graph_version: int | None = None,
cursor: str | None = None,
preset_id: str | None = None
) -> tuple[list[GraphExecution], str | None]
```
**Behavior**:
- Paginated results with cursor
- Filter by graph_id, version, or preset_id
- Returns (executions, next_cursor)
#### create_graph_execution
```python
async def create_graph_execution(
graph_id: str,
graph_version: int,
starting_nodes_input: dict[str, dict[str, Any]],
user_id: str,
preset_id: str | None = None
) -> GraphExecutionWithNodes
```
**Behavior**:
- Creates execution with status "QUEUED"
- Initializes all nodes with "PENDING" status
- Publishes creation event to Redis
- Uses locked transaction on graph_id
#### update_graph_execution_start_time
```python
async def update_graph_execution_start_time(
graph_exec_id: str
) -> None
```
**Behavior**:
- Sets start_time to current timestamp
- Only updates if currently NULL
#### update_graph_execution_stats
```python
async def update_graph_execution_stats(
graph_exec_id: str,
status: AgentExecutionStatus | None = None,
stats: dict[str, Any] | None = None
) -> GraphExecution | None
```
**Behavior**:
- Updates status and/or stats atomically
- Sets end_time if status is terminal (COMPLETED/FAILED)
- Publishes update event to Redis
- Returns updated execution
#### get_node_execution
```python
async def get_node_execution(
node_exec_id: str
) -> NodeExecutionResult | None
```
**Behavior**:
- No user_id check (relies on graph execution security)
- Includes all input/output data
#### get_node_executions
```python
async def get_node_executions(
graph_exec_id: str
) -> list[NodeExecutionResult]
```
**Behavior**:
- Returns all node executions for graph
- Ordered by creation time
#### get_latest_node_execution
```python
async def get_latest_node_execution(
graph_exec_id: str,
node_id: str
) -> NodeExecutionResult | None
```
**Behavior**:
- Returns most recent execution of specific node
- Used for retry/rerun scenarios
#### update_node_execution_status
```python
async def update_node_execution_status(
node_exec_id: str,
status: AgentExecutionStatus,
execution_data: dict[str, Any] | None = None,
stats: dict[str, Any] | None = None
) -> NodeExecutionResult
```
**Behavior**:
- Updates status atomically
- Sets end_time for terminal states
- Optionally updates stats/data
- Publishes event to Redis
- Returns updated execution
#### update_node_execution_status_batch
```python
async def update_node_execution_status_batch(
execution_updates: list[NodeExecutionUpdate]
) -> list[NodeExecutionResult]
```
**Behavior**:
- Batch update multiple nodes in single transaction
- Each update can have different status/stats
- Publishes events for all updates
- Returns all updated executions
#### update_node_execution_stats
```python
async def update_node_execution_stats(
node_exec_id: str,
stats: dict[str, Any]
) -> NodeExecutionResult
```
**Behavior**:
- Updates only stats field
- Merges with existing stats
- Does not affect status
#### upsert_execution_input
```python
async def upsert_execution_input(
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: Any,
node_exec_id: str | None = None
) -> tuple[str, BlockInput]
```
**Behavior**:
- Creates or updates input data
- If node_exec_id not provided, creates node execution
- Serializes input_data to JSON
- Returns (node_exec_id, input_object)
#### upsert_execution_output
```python
async def upsert_execution_output(
node_exec_id: str,
output_name: str,
output_data: Any
) -> None
```
**Behavior**:
- Creates or updates output data
- Serializes output_data to JSON
- No return value
#### get_execution_kv_data
```python
async def get_execution_kv_data(
user_id: str,
key: str
) -> Any | None
```
**Behavior**:
- User-scoped key-value storage
- Returns deserialized JSON data
- Returns None if key not found
#### set_execution_kv_data
```python
async def set_execution_kv_data(
user_id: str,
node_exec_id: str,
key: str,
data: Any
) -> Any | None
```
**Behavior**:
- Sets user-scoped key-value data
- Associates with node execution
- Serializes data to JSON
- Returns previous value or None
#### get_block_error_stats
```python
async def get_block_error_stats() -> list[BlockErrorStats]
```
**Behavior**:
- Aggregates error counts by block_id
- Last 7 days of data
- Groups by error type
### 3.2 Graph Management APIs
#### get_node
```python
async def get_node(
node_id: str
) -> AgentNode | None
```
**Behavior**:
- Returns node with block data
- No user_id check (public blocks)
#### get_graph
```python
async def get_graph(
graph_id: str,
version: int | None = None,
user_id: str | None = None,
for_export: bool = False,
include_subgraphs: bool = False
) -> GraphModel | None
```
**Behavior**:
- Returns latest version if version=None
- Checks user_id for private graphs
- for_export=True excludes internal fields
- include_subgraphs=True loads nested graphs
#### get_connected_output_nodes
```python
async def get_connected_output_nodes(
node_id: str,
output_name: str
) -> list[tuple[AgentNode, AgentNodeLink]]
```
**Behavior**:
- Returns downstream nodes connected to output
- Includes link metadata
- Used for execution flow
#### get_graph_metadata
```python
async def get_graph_metadata(
graph_id: str,
user_id: str
) -> GraphMetadata | None
```
**Behavior**:
- Returns graph metadata without full definition
- User must own or have access to graph
### 3.3 Credit System APIs
#### get_credits
```python
async def get_credits(
user_id: str
) -> int
```
**Behavior**:
- Returns current credit balance
- Always non-negative
#### spend_credits
```python
async def spend_credits(
user_id: str,
cost: int,
metadata: UsageTransactionMetadata
) -> int
```
**Behavior**:
- Deducts credits atomically
- Creates transaction record
- Throws InsufficientCredits if balance too low
- Returns new balance
- metadata includes: block_id, node_exec_id, context
### 3.4 User Management APIs
#### get_user_metadata
```python
async def get_user_metadata(
user_id: str
) -> UserMetadata
```
**Behavior**:
- Returns user preferences and settings
- Creates default if not exists
#### update_user_metadata
```python
async def update_user_metadata(
user_id: str,
data: UserMetadataDTO
) -> UserMetadata
```
**Behavior**:
- Partial update of metadata
- Validates against schema
- Returns updated metadata
#### get_user_integrations
```python
async def get_user_integrations(
user_id: str
) -> UserIntegrations
```
**Behavior**:
- Returns OAuth credentials
- Decrypts sensitive data
- Creates empty if not exists
#### update_user_integrations
```python
async def update_user_integrations(
user_id: str,
data: UserIntegrations
) -> None
```
**Behavior**:
- Updates integration credentials
- Encrypts sensitive data
- No return value
### 3.5 User Communication APIs
#### get_active_user_ids_in_timerange
```python
async def get_active_user_ids_in_timerange(
start_time: datetime,
end_time: datetime
) -> list[str]
```
**Behavior**:
- Returns users with graph executions in range
- Used for analytics/notifications
#### get_user_email_by_id
```python
async def get_user_email_by_id(
user_id: str
) -> str | None
```
**Behavior**:
- Returns user's email address
- None if user not found
#### get_user_email_verification
```python
async def get_user_email_verification(
user_id: str
) -> UserEmailVerification
```
**Behavior**:
- Returns email and verification status
- Used for notification filtering
#### get_user_notification_preference
```python
async def get_user_notification_preference(
user_id: str
) -> NotificationPreference
```
**Behavior**:
- Returns notification settings
- Creates default if not exists
### 3.6 Notification APIs
#### create_or_add_to_user_notification_batch
```python
async def create_or_add_to_user_notification_batch(
user_id: str,
notification_type: NotificationType,
notification_data: NotificationEvent
) -> UserNotificationBatchDTO
```
**Behavior**:
- Adds to existing batch or creates new
- Batches by type for efficiency
- Returns updated batch
#### empty_user_notification_batch
```python
async def empty_user_notification_batch(
user_id: str,
notification_type: NotificationType
) -> None
```
**Behavior**:
- Clears all notifications of type
- Used after sending batch
#### get_all_batches_by_type
```python
async def get_all_batches_by_type(
notification_type: NotificationType
) -> list[UserNotificationBatchDTO]
```
**Behavior**:
- Returns all user batches of type
- Used by notification service
#### get_user_notification_batch
```python
async def get_user_notification_batch(
user_id: str,
notification_type: NotificationType
) -> UserNotificationBatchDTO | None
```
**Behavior**:
- Returns user's batch for type
- None if no batch exists
#### get_user_notification_oldest_message_in_batch
```python
async def get_user_notification_oldest_message_in_batch(
user_id: str,
notification_type: NotificationType
) -> NotificationEvent | None
```
**Behavior**:
- Returns oldest notification in batch
- Used for batch timing decisions
## 4. Client Implementation Requirements
### 4.1 Synchronous Client
```python
class DatabaseManagerClient(AppServiceClient):
"""
REQUIRED: Synchronous client that:
- Converts async methods to sync using endpoint_to_sync
- Maintains exact method signatures
- Handles connection pooling
- Implements retry logic
"""
@classmethod
def get_service_type(cls):
return DatabaseManager
# Example method mapping
get_graph_execution = endpoint_to_sync(DatabaseManager.get_graph_execution)
```
### 4.2 Asynchronous Client
```python
class DatabaseManagerAsyncClient(AppServiceClient):
"""
REQUIRED: Async client that:
- Directly references async methods
- No conversion needed
- Shares connection pool
"""
@classmethod
def get_service_type(cls):
return DatabaseManager
# Direct method reference
get_graph_execution = DatabaseManager.get_graph_execution
```
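For orientation, a brief hedged usage sketch of the two clients; `get_service_client` is the accessor shown in the NotificationManagerClient usage pattern elsewhere in these specs, and the `get_graph_execution` parameter names are illustrative, not confirmed signatures.
```python
# Hedged usage sketch; parameter names are illustrative only.
def load_execution_sync(user_id: str, graph_exec_id: str):
    db = get_service_client(DatabaseManagerClient)
    return db.get_graph_execution(user_id=user_id, execution_id=graph_exec_id)

async def load_execution_async(user_id: str, graph_exec_id: str):
    db = get_service_client(DatabaseManagerAsyncClient)
    return await db.get_graph_execution(user_id=user_id, execution_id=graph_exec_id)
```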
## 5. Data Models
### 5.1 Core Enums
```python
class AgentExecutionStatus(str, Enum):
PENDING = "PENDING"
QUEUED = "QUEUED"
RUNNING = "RUNNING"
COMPLETED = "COMPLETED"
FAILED = "FAILED"
CANCELED = "CANCELED"
class NotificationType(str, Enum):
SYSTEM = "SYSTEM"
REVIEW = "REVIEW"
EXECUTION = "EXECUTION"
MARKETING = "MARKETING"
```
### 5.2 Key Data Models
All models must exactly match the Prisma schema definitions. Key models include:
- `GraphExecution`: Execution metadata with stats
- `GraphExecutionWithNodes`: Includes all node executions
- `NodeExecutionResult`: Node execution with I/O data
- `GraphModel`: Complete graph definition
- `UserIntegrations`: OAuth credentials
- `UsageTransactionMetadata`: Credit usage context
- `NotificationEvent`: Individual notification data
## 6. Security Requirements
### 6.1 User Isolation
- **CRITICAL**: All user-scoped operations MUST filter by user_id
- Never expose data across user boundaries
- Use database-level row security where possible
### 6.2 Authentication
- Service assumes authentication handled by API gateway
- user_id parameter is trusted after authentication
- No additional auth checks within service
### 6.3 Data Protection
- Encrypt sensitive integration credentials
- Use HMAC for unsubscribe tokens
- Never log sensitive data
## 7. Performance Requirements
### 7.1 Connection Management
- Maintain persistent database connection
- Use connection pooling (default: 10 connections)
- Implement exponential backoff for retries
### 7.2 Query Optimization
- Use indexes for all WHERE clauses
- Batch operations where possible
- Limit default result sets (50 items)
### 7.3 Event Publishing
- Publish events asynchronously
- Don't block on event delivery
- Use fire-and-forget pattern
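A minimal sketch of the fire-and-forget pattern above, assuming an event bus object with an async `publish()` coroutine; delivery failures are logged rather than propagated so database writes never block on the event path.
```python
import asyncio
import logging

logger = logging.getLogger(__name__)

def publish_fire_and_forget(event_bus, event) -> None:
    """Schedule event delivery without awaiting it; failures are only logged."""
    task = asyncio.create_task(event_bus.publish(event))

    def _log_failure(finished: asyncio.Task) -> None:
        if not finished.cancelled() and finished.exception():
            logger.warning("Event publish failed: %s", finished.exception())

    task.add_done_callback(_log_failure)
```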
## 8. Error Handling
### 8.1 Standard Exceptions
```python
class InsufficientCredits(Exception):
"""Raised when user lacks credits"""
class NotFoundError(Exception):
"""Raised when entity not found"""
class AuthorizationError(Exception):
"""Raised when user lacks access"""
```
### 8.2 Error Response Format
```json
{
"error": "error_type",
"message": "Human readable message",
"details": {} // Optional additional context
}
```
## 9. Testing Requirements
### 9.1 Unit Tests
- Test each method in isolation
- Mock database calls
- Verify user_id filtering
### 9.2 Integration Tests
- Test with real database
- Verify transaction boundaries
- Test concurrent operations
### 9.3 Service Tests
- Test HTTP endpoint generation
- Verify serialization/deserialization
- Test error handling
## 10. Implementation Checklist
### Phase 1: Core Service Setup
- [ ] Create DatabaseManager class inheriting from AppService
- [ ] Implement run_service() with database connection
- [ ] Implement cleanup() with proper disconnect
- [ ] Configure port from settings
- [ ] Set up method exposure helper
### Phase 2: Execution APIs (17 methods)
- [ ] get_graph_execution
- [ ] get_graph_executions
- [ ] get_graph_execution_meta
- [ ] create_graph_execution
- [ ] update_graph_execution_start_time
- [ ] update_graph_execution_stats
- [ ] get_node_execution
- [ ] get_node_executions
- [ ] get_latest_node_execution
- [ ] update_node_execution_status
- [ ] update_node_execution_status_batch
- [ ] update_node_execution_stats
- [ ] upsert_execution_input
- [ ] upsert_execution_output
- [ ] get_execution_kv_data
- [ ] set_execution_kv_data
- [ ] get_block_error_stats
### Phase 3: Graph APIs (4 methods)
- [ ] get_node
- [ ] get_graph
- [ ] get_connected_output_nodes
- [ ] get_graph_metadata
### Phase 4: Credit APIs (2 methods)
- [ ] get_credits
- [ ] spend_credits
### Phase 5: User APIs (4 methods)
- [ ] get_user_metadata
- [ ] update_user_metadata
- [ ] get_user_integrations
- [ ] update_user_integrations
### Phase 6: Communication APIs (4 methods)
- [ ] get_active_user_ids_in_timerange
- [ ] get_user_email_by_id
- [ ] get_user_email_verification
- [ ] get_user_notification_preference
### Phase 7: Notification APIs (5 methods)
- [ ] create_or_add_to_user_notification_batch
- [ ] empty_user_notification_batch
- [ ] get_all_batches_by_type
- [ ] get_user_notification_batch
- [ ] get_user_notification_oldest_message_in_batch
### Phase 8: Client Implementation
- [ ] Create DatabaseManagerClient with sync methods
- [ ] Create DatabaseManagerAsyncClient with async methods
- [ ] Test client method generation
- [ ] Verify type preservation
### Phase 9: Integration Testing
- [ ] Test all methods with real database
- [ ] Verify user isolation
- [ ] Test error scenarios
- [ ] Performance testing
- [ ] Event publishing verification
### Phase 10: Deployment Validation
- [ ] Deploy to test environment
- [ ] Run integration test suite
- [ ] Verify backward compatibility
- [ ] Performance benchmarking
- [ ] Production deployment
## 11. Success Criteria
The implementation is successful when:
1. **All 40+ methods** produce identical outputs to the original
2. **Performance** is within 10% of original implementation
3. **All tests** pass without modification
4. **No breaking changes** to any client code
5. **Security boundaries** are maintained
6. **Event publishing** works identically
7. **Error handling** matches original behavior
## 12. Critical Implementation Notes
1. **DO NOT** modify any function signatures
2. **DO NOT** change any return types
3. **DO NOT** add new required parameters
4. **DO NOT** remove any functionality
5. **ALWAYS** maintain user_id isolation
6. **ALWAYS** publish events for state changes
7. **ALWAYS** use transactions for multi-step operations
8. **ALWAYS** handle errors exactly as original
This specification, when implemented correctly, will produce a drop-in replacement for the DatabaseManager that maintains 100% compatibility with the existing system.


@@ -1,765 +0,0 @@
# Notification Service Technical Specification
## Overview
The AutoGPT Platform Notification Service is a RabbitMQ-based asynchronous notification system that handles various types of user notifications including real-time alerts, batched notifications, and scheduled summaries. The service supports email delivery via Postmark and system alerts via Discord.
## Architecture Overview
### Core Components
1. **NotificationManager Service** (`notifications.py`)
- AppService implementation with RabbitMQ integration
- Processes notification queues asynchronously
- Manages batching strategies and delivery timing
- Handles email templating and sending
2. **RabbitMQ Message Broker**
- Multiple queues for different notification strategies
- Dead letter exchange for failed messages
- Topic-based routing for message distribution
3. **Email Sender** (`email.py`)
- Postmark integration for email delivery
- Jinja2 template rendering
- HTML email composition with unsubscribe headers
4. **Database Storage**
- Notification batching tables
- User preference storage
- Email verification tracking
## Service Exposure Mechanism
### AppService Framework
The NotificationManager extends `AppService` which automatically exposes methods decorated with `@expose` as HTTP endpoints:
```python
class NotificationManager(AppService):
@expose
def queue_weekly_summary(self):
# Implementation
@expose
def process_existing_batches(self, notification_types: list[NotificationType]):
# Implementation
@expose
async def discord_system_alert(self, content: str):
# Implementation
```
### Automatic HTTP Endpoint Creation
When the service starts, the AppService base class:
1. Scans for methods with `@expose` decorator
2. Creates FastAPI routes for each exposed method:
- Route path: `/{method_name}`
- HTTP method: POST
- Endpoint handler: Generated via `_create_fastapi_endpoint()`
### Service Client Access
#### NotificationManagerClient
```python
class NotificationManagerClient(AppServiceClient):
@classmethod
def get_service_type(cls):
return NotificationManager
# Direct method references (sync)
process_existing_batches = NotificationManager.process_existing_batches
queue_weekly_summary = NotificationManager.queue_weekly_summary
# Async-to-sync conversion
discord_system_alert = endpoint_to_sync(NotificationManager.discord_system_alert)
```
#### Client Usage Pattern
```python
# Get client instance
client = get_service_client(NotificationManagerClient)
# Call exposed methods via HTTP
client.process_existing_batches([NotificationType.AGENT_RUN])
client.queue_weekly_summary()
client.discord_system_alert("System alert message")
```
### HTTP Communication Details
1. **Service URL**: `http://{host}:{notification_service_port}`
- Default port: 8007
- Host: Configurable via settings
2. **Request Format**:
- Method: POST
- Path: `/{method_name}`
- Body: JSON with method parameters
3. **Client Implementation**:
- Uses `httpx` for HTTP requests
- Automatic retry on connection failures
- Configurable timeout (default from api_call_timeout)
### Direct Function Calls
The service also exposes two functions that can be called directly without going through the service client:
```python
# Sync version - used by ExecutionManager
def queue_notification(event: NotificationEventModel) -> NotificationResult
# Async version - used by credit system
async def queue_notification_async(event: NotificationEventModel) -> NotificationResult
```
These functions:
- Connect directly to RabbitMQ
- Publish messages to appropriate queues
- Return success/failure status
- Are NOT exposed via HTTP
## Message Queuing Architecture
### RabbitMQ Configuration
#### Exchanges
```python
NOTIFICATION_EXCHANGE = Exchange(name="notifications", type=ExchangeType.TOPIC)
DEAD_LETTER_EXCHANGE = Exchange(name="dead_letter", type=ExchangeType.TOPIC)
```
#### Queues
1. **immediate_notifications**
- Routing Key: `notification.immediate.#`
- Dead Letter: `failed.immediate`
- For: Critical alerts, errors
2. **admin_notifications**
- Routing Key: `notification.admin.#`
- Dead Letter: `failed.admin`
- For: Refund requests, system alerts
3. **summary_notifications**
- Routing Key: `notification.summary.#`
- Dead Letter: `failed.summary`
- For: Daily/weekly summaries
4. **batch_notifications**
- Routing Key: `notification.batch.#`
- Dead Letter: `failed.batch`
- For: Agent runs, batched events
5. **failed_notifications**
- Routing Key: `failed.#`
- For: All failed messages
### Queue Strategies (QueueType enum)
1. **IMMEDIATE**: Send right away (errors, critical notifications)
2. **BATCH**: Batch for configured delay (agent runs)
3. **SUMMARY**: Scheduled digest (daily/weekly summaries)
4. **BACKOFF**: Exponential backoff strategy (defined but not fully implemented)
5. **ADMIN**: Admin-only notifications
## Notification Types
### Enum Values (NotificationType)
```python
AGENT_RUN # Batch strategy, 1 day delay
ZERO_BALANCE # Backoff strategy, 60 min delay
LOW_BALANCE # Immediate strategy
BLOCK_EXECUTION_FAILED # Backoff strategy, 60 min delay
CONTINUOUS_AGENT_ERROR # Backoff strategy, 60 min delay
DAILY_SUMMARY # Summary strategy
WEEKLY_SUMMARY # Summary strategy
MONTHLY_SUMMARY # Summary strategy
REFUND_REQUEST # Admin strategy
REFUND_PROCESSED # Admin strategy
```
## Integration Points
### 1. Scheduler Integration
The scheduler service (`backend.executor.scheduler`) imports monitoring functions that call the NotificationManagerClient:
```python
from backend.monitoring import (
process_existing_batches,
process_weekly_summary,
)
# These are scheduled as cron jobs
```
### 2. Execution Manager Integration
The ExecutionManager directly calls `queue_notification()` for:
- Agent run completions
- Low balance alerts
```python
from backend.notifications.notifications import queue_notification
# Called after graph execution completes
queue_notification(NotificationEventModel(
user_id=graph_exec.user_id,
type=NotificationType.AGENT_RUN,
data=AgentRunData(...)
))
```
### 3. Credit System Integration
The credit system uses `queue_notification_async()` for:
- Refund requests
- Refund processed notifications
```python
from backend.notifications.notifications import queue_notification_async
await queue_notification_async(NotificationEventModel(
user_id=user_id,
type=NotificationType.REFUND_REQUEST,
data=RefundRequestData(...)
))
```
### 4. Monitoring Module Wrappers
The monitoring module provides wrapper functions that are used by the scheduler:
```python
# backend/monitoring/notification_monitor.py
def process_existing_batches(**kwargs):
args = NotificationJobArgs(**kwargs)
get_notification_manager_client().process_existing_batches(
args.notification_types
)
def process_weekly_summary(**kwargs):
get_notification_manager_client().queue_weekly_summary()
```
## Data Models
### Base Event Model
```typescript
interface BaseEventModel {
type: NotificationType;
user_id: string;
created_at: string; // ISO datetime with timezone
}
```
### Notification Event Model
```typescript
interface NotificationEventModel<T> extends BaseEventModel {
data: T;
}
```
### Notification Data Types
#### AgentRunData
```typescript
interface AgentRunData {
agent_name: string;
credits_used: number;
execution_time: number;
node_count: number;
graph_id: string;
outputs: Array<Record<string, any>>;
}
```
#### ZeroBalanceData
```typescript
interface ZeroBalanceData {
last_transaction: number;
last_transaction_time: string; // ISO datetime with timezone
top_up_link: string;
}
```
#### LowBalanceData
```typescript
interface LowBalanceData {
agent_name: string;
current_balance: number; // credits (100 = $1)
billing_page_link: string;
shortfall: number;
}
```
#### BlockExecutionFailedData
```typescript
interface BlockExecutionFailedData {
block_name: string;
block_id: string;
error_message: string;
graph_id: string;
node_id: string;
execution_id: string;
}
```
#### ContinuousAgentErrorData
```typescript
interface ContinuousAgentErrorData {
agent_name: string;
error_message: string;
graph_id: string;
execution_id: string;
start_time: string; // ISO datetime with timezone
error_time: string; // ISO datetime with timezone
attempts: number;
}
```
#### Summary Data Types
```typescript
interface BaseSummaryData {
total_credits_used: number;
total_executions: number;
most_used_agent: string;
total_execution_time: number;
successful_runs: number;
failed_runs: number;
average_execution_time: number;
cost_breakdown: Record<string, number>;
}
interface DailySummaryData extends BaseSummaryData {
date: string; // ISO datetime with timezone
}
interface WeeklySummaryData extends BaseSummaryData {
start_date: string; // ISO datetime with timezone
end_date: string; // ISO datetime with timezone
}
```
#### RefundRequestData
```typescript
interface RefundRequestData {
user_id: string;
user_name: string;
user_email: string;
transaction_id: string;
refund_request_id: string;
reason: string;
amount: number;
balance: number;
}
```
### Summary Parameters
```typescript
interface BaseSummaryParams {
start_date: string; // ISO datetime with timezone
end_date: string; // ISO datetime with timezone
}
interface DailySummaryParams extends BaseSummaryParams {
date: string; // ISO datetime with timezone
}
interface WeeklySummaryParams extends BaseSummaryParams {
start_date: string; // ISO datetime with timezone
end_date: string; // ISO datetime with timezone
}
```
## Database Schema
### NotificationEvent Table
```prisma
model NotificationEvent {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
UserNotificationBatch UserNotificationBatch? @relation
userNotificationBatchId String?
type NotificationType
data Json
@@index([userNotificationBatchId])
}
```
### UserNotificationBatch Table
```prisma
model UserNotificationBatch {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
userId String
User User @relation
type NotificationType
Notifications NotificationEvent[]
@@unique([userId, type])
}
```
## API Methods
### Exposed Service Methods (via HTTP)
#### queue_weekly_summary()
- **HTTP Endpoint**: `POST /queue_weekly_summary`
- **Purpose**: Triggers weekly summary generation for all active users
- **Process**:
1. Runs in background executor
2. Queries users active in last 7 days
3. Queues summary notification for each user
- **Used by**: Scheduler service (via cron)
#### process_existing_batches(notification_types: list[NotificationType])
- **HTTP Endpoint**: `POST /process_existing_batches`
- **Purpose**: Processes aged-out batches for specified notification types
- **Process**:
1. Runs in background executor
2. Retrieves all batches for given types
3. Checks if oldest message exceeds max delay
4. Sends batched email if aged out
5. Clears processed batches
- **Used by**: Scheduler service (via cron)
#### discord_system_alert(content: str)
- **HTTP Endpoint**: `POST /discord_system_alert`
- **Purpose**: Sends system alerts to Discord channel
- **Async**: Yes (converted to sync by client)
- **Used by**: Monitoring services
### Direct Queue Functions (not via HTTP)
#### queue_notification(event: NotificationEventModel) -> NotificationResult
- **Purpose**: Queue a notification (sync version)
- **Used by**: ExecutionManager (same process)
- **Direct RabbitMQ**: Yes
#### queue_notification_async(event: NotificationEventModel) -> NotificationResult
- **Purpose**: Queue a notification (async version)
- **Used by**: Credit system (async context)
- **Direct RabbitMQ**: Yes
## Message Processing Flow
### 1. Message Routing
```python
def get_routing_key(event_type: NotificationType) -> str:
strategy = NotificationTypeOverride(event_type).strategy
if strategy == QueueType.IMMEDIATE:
return f"notification.immediate.{event_type.value}"
elif strategy == QueueType.BATCH:
return f"notification.batch.{event_type.value}"
# ... etc
```
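Based on the queue routing keys defined earlier in this document, and the Implementation Status Note that BACKOFF messages are currently routed to the immediate queue, the elided branches would plausibly complete as sketched below; this is an inference from the spec, not confirmed code.
```python
def get_routing_key(event_type: NotificationType) -> str:
    # Full mapping inferred from the queue definitions above.
    strategy = NotificationTypeOverride(event_type).strategy
    if strategy == QueueType.BATCH:
        return f"notification.batch.{event_type.value}"
    elif strategy == QueueType.SUMMARY:
        return f"notification.summary.{event_type.value}"
    elif strategy == QueueType.ADMIN:
        return f"notification.admin.{event_type.value}"
    # IMMEDIATE, plus BACKOFF (defined but not fully implemented),
    # fall through to the immediate queue.
    return f"notification.immediate.{event_type.value}"
```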
### 2. Queue Processing Methods
#### _process_immediate(message: str) -> bool
1. Parse message to NotificationEventModel
2. Retrieve user email
3. Check user preferences and email verification
4. Send email immediately via EmailSender
5. Return True if successful
#### _process_batch(message: str) -> bool
1. Parse message to NotificationEventModel
2. Add to user's notification batch
3. Check if batch is old enough (based on delay)
4. If aged out:
- Retrieve all batch messages
- Send combined email
- Clear batch
5. Return True if processed or batched
#### _process_summary(message: str) -> bool
1. Parse message to SummaryParamsEventModel
2. Gather summary data (credits, executions, etc.)
- **Note**: Currently returns hardcoded placeholder data
3. Format and send summary email
4. Return True if successful
#### _process_admin_message(message: str) -> bool
1. Parse message
2. Send to configured admin email
3. No user preference checks
4. Return True if successful
## Email Delivery
### EmailSender Class
#### Template Loading
- Base template: `templates/base.html.jinja2`
- Notification templates: `templates/{notification_type}.html.jinja2`
- Subject templates from NotificationTypeOverride
- **Note**: Templates use `.html.jinja2` extension, not just `.html`
#### Email Composition
```python
def send_templated(
notification: NotificationType,
user_email: str,
data: NotificationEventModel | list[NotificationEventModel],
user_unsub_link: str | None = None
)
```
#### Postmark Integration
- API Token: `settings.secrets.postmark_server_api_token`
- Sender Email: `settings.config.postmark_sender_email`
- Headers:
- `List-Unsubscribe-Post: List-Unsubscribe=One-Click`
- `List-Unsubscribe: <{unsubscribe_link}>`
## User Preferences and Permissions
### Email Verification Check
```python
validated_email = get_db().get_user_email_verification(user_id)
```
### Notification Preferences
```python
preferences = get_db().get_user_notification_preference(user_id).preferences
# Returns dict[NotificationType, bool]
```
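Putting the two lookups above together, a hedged sketch of the gate applied before sending a user-facing notification; the field name on the verification result is an assumption for illustration.
```python
def should_send(user_id: str, notification_type: NotificationType) -> bool:
    """Skip users whose email is unverified or who opted out of this notification type."""
    verification = get_db().get_user_email_verification(user_id)
    if not verification.verified:  # field name assumed for illustration
        return False
    preferences = get_db().get_user_notification_preference(user_id).preferences
    return preferences.get(notification_type, True)
```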
### Preference Fields in User Model
- `notifyOnAgentRun`
- `notifyOnZeroBalance`
- `notifyOnLowBalance`
- `notifyOnBlockExecutionFailed`
- `notifyOnContinuousAgentError`
- `notifyOnDailySummary`
- `notifyOnWeeklySummary`
- `notifyOnMonthlySummary`
### Unsubscribe Link Generation
```python
def generate_unsubscribe_link(user_id: str) -> str:
# HMAC-SHA256 signed token
# Format: base64(user_id:signature_hex)
# URL: {platform_base_url}/api/email/unsubscribe?token={token}
```
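A minimal runnable sketch of the token scheme described in the comments above; the secret key and base URL are passed in explicitly here because the exact settings access path is not asserted by this spec.
```python
import base64
import hashlib
import hmac

def generate_unsubscribe_link(user_id: str, secret_key: str, platform_base_url: str) -> str:
    # HMAC-SHA256 over the user id; token = base64(user_id:signature_hex)
    signature = hmac.new(secret_key.encode(), user_id.encode(), hashlib.sha256).hexdigest()
    token = base64.b64encode(f"{user_id}:{signature}".encode()).decode()
    return f"{platform_base_url}/api/email/unsubscribe?token={token}"
```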
## Batching Logic
### Batch Delays (get_batch_delay)
**Note**: The delay configuration exists for multiple notification types, but only notifications with `QueueType.BATCH` strategy actually use batching. Others use different strategies:
- `AGENT_RUN`: 1 day (Strategy: BATCH - actually uses batching)
- `ZERO_BALANCE`: 60 minutes configured (Strategy: BACKOFF - not batched)
- `LOW_BALANCE`: 60 minutes configured (Strategy: IMMEDIATE - sent immediately)
- `BLOCK_EXECUTION_FAILED`: 60 minutes configured (Strategy: BACKOFF - not batched)
- `CONTINUOUS_AGENT_ERROR`: 60 minutes configured (Strategy: BACKOFF - not batched)
### Batch Processing
1. Messages added to UserNotificationBatch
2. Oldest message timestamp tracked
3. When `oldest_timestamp + delay < now()`:
- Batch is processed
- All messages sent in single email
- Batch cleared
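A hedged sketch of the age-out check described above, assuming an async database client exposing `get_user_notification_oldest_message_in_batch` (listed in the DatabaseManager spec in this same compare) and a timezone-aware `created_at` on the returned event.
```python
from datetime import datetime, timedelta, timezone

async def batch_is_aged_out(db, user_id: str, notification_type, delay: timedelta) -> bool:
    """True when the oldest batched message is older than the configured delay."""
    oldest = await db.get_user_notification_oldest_message_in_batch(user_id, notification_type)
    if oldest is None:
        return False
    return oldest.created_at + delay < datetime.now(timezone.utc)
```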
## Service Lifecycle
### Startup
1. Initialize FastAPI app with exposed endpoints
2. Start HTTP server on port 8007
3. Initialize RabbitMQ connection
4. Create/verify exchanges and queues
5. Set up queue consumers
6. Start processing loop
### Main Loop
```python
while self.running:
await self._run_queue(immediate_queue, self._process_immediate, ...)
await self._run_queue(admin_queue, self._process_admin_message, ...)
await self._run_queue(batch_queue, self._process_batch, ...)
await self._run_queue(summary_queue, self._process_summary, ...)
await asyncio.sleep(0.1)
```
### Shutdown
1. Set `running = False`
2. Disconnect RabbitMQ
3. Cleanup resources
## Configuration
### Environment Variables
```python
# Service Configuration
notification_service_port: int = 8007
# Email Configuration
postmark_sender_email: str = "invalid@invalid.com"
refund_notification_email: str = "refund@agpt.co"
# Security
unsubscribe_secret_key: str = ""
# Secrets
postmark_server_api_token: str = ""
postmark_webhook_token: str = ""
discord_bot_token: str = ""
# Platform URLs
platform_base_url: str
frontend_base_url: str
```
## Error Handling
### Message Processing Errors
- Failed messages sent to dead letter queue
- Validation errors logged but don't crash service
- Connection errors trigger retry with `@continuous_retry()`
### RabbitMQ ACK/NACK Protocol
- Success: `message.ack()`
- Failure: `message.reject(requeue=False)`
- Timeout/Queue empty: Continue loop
### HTTP Endpoint Errors
- Wrapped in RemoteCallError for client
- Automatic retry available via client configuration
- Connection failures tracked and logged
## System Integrations
### DatabaseManagerClient
- User email retrieval
- Email verification status
- Notification preferences
- Batch management
- Active user queries
### Discord Integration
- Uses SendDiscordMessageBlock
- Configured via discord_bot_token
- For system alerts only
## Implementation Checklist
1. **Core Service**
- [ ] AppService implementation with @expose decorators
- [ ] FastAPI endpoint generation
- [ ] RabbitMQ connection management
- [ ] Queue consumer setup
- [ ] Message routing logic
2. **Service Client**
- [ ] NotificationManagerClient implementation
- [ ] HTTP client configuration
- [ ] Method mapping to service endpoints
- [ ] Async-to-sync conversions
3. **Message Processing**
- [ ] Parse and validate all notification types
- [ ] Implement all queue strategies
- [ ] Batch management with delays
- [ ] Summary data gathering
4. **Email Delivery**
- [ ] Postmark integration
- [ ] Template loading and rendering
- [ ] Unsubscribe header support
- [ ] HTML email composition
5. **User Management**
- [ ] Preference checking
- [ ] Email verification
- [ ] Unsubscribe link generation
- [ ] Daily limit tracking
6. **Batching System**
- [ ] Database batch operations
- [ ] Age-out checking
- [ ] Batch clearing after send
- [ ] Oldest message tracking
7. **Error Handling**
- [ ] Dead letter queue routing
- [ ] Message rejection on failure
- [ ] Continuous retry wrapper
- [ ] Validation error logging
8. **Scheduled Operations**
- [ ] Weekly summary generation
- [ ] Batch processing triggers
- [ ] Background executor usage
## Security Considerations
1. **Service-to-Service Communication**:
- HTTP endpoints only accessible internally
- No authentication on service endpoints (internal network only)
- Service discovery via host/port configuration
2. **User Security**:
- Email verification required for all user notifications
- Unsubscribe tokens HMAC-signed
- User preferences enforced
3. **Admin Notifications**:
- Separate queue, no user preference checks
- Fixed admin email configuration
## Testing Considerations
1. **Unit Tests**
- Message parsing and validation
- Routing key generation
- Batch delay calculations
- Template rendering
2. **Integration Tests**
- HTTP endpoint accessibility
- Service client method calls
- RabbitMQ message flow
- Database batch operations
- Email sending (mock Postmark)
3. **Load Tests**
- High volume message processing
- Concurrent HTTP requests
- Batch accumulation limits
- Memory usage under load
## Implementation Status Notes
1. **Backoff Strategy**: While `QueueType.BACKOFF` is defined and used by several notification types (ZERO_BALANCE, BLOCK_EXECUTION_FAILED, CONTINUOUS_AGENT_ERROR), the actual exponential backoff processing logic is not implemented. These messages are currently routed to the immediate queue.
2. **Summary Data**: The `_gather_summary_data()` method currently returns hardcoded placeholder values rather than querying actual execution data from the database.
3. **Batch Processing**: Only `AGENT_RUN` notifications actually use batch processing. Other notification types with configured delays use different strategies (IMMEDIATE or BACKOFF).
## Future Enhancements
1. **Additional Channels**
- SMS notifications (not implemented)
- Webhook notifications (not implemented)
- In-app notifications
2. **Advanced Batching**
- Dynamic batch sizes
- Priority-based processing
- Custom delay configurations
3. **Analytics**
- Delivery tracking
- Open/click rates
- Notification effectiveness metrics
4. **Service Improvements**
- Authentication for HTTP endpoints
- Rate limiting per user
- Circuit breaker patterns
- Implement actual backoff processing for BACKOFF strategy
- Implement real summary data gathering


@@ -1,474 +0,0 @@
# AutoGPT Platform Scheduler Technical Specification
## Executive Summary
This document provides a comprehensive technical specification for the AutoGPT Platform Scheduler service. The scheduler is responsible for managing scheduled graph executions, system monitoring tasks, and periodic maintenance operations. This specification is designed to enable a complete reimplementation that maintains 100% compatibility with the existing system.
## Table of Contents
1. [System Architecture](#system-architecture)
2. [Service Implementation](#service-implementation)
3. [Data Models](#data-models)
4. [API Endpoints](#api-endpoints)
5. [Database Schema](#database-schema)
6. [External Dependencies](#external-dependencies)
7. [Authentication & Authorization](#authentication--authorization)
8. [Process Management](#process-management)
9. [Error Handling](#error-handling)
10. [Configuration](#configuration)
11. [Testing Strategy](#testing-strategy)
## System Architecture
### Overview
The scheduler operates as an independent microservice within the AutoGPT platform, implementing the `AppService` base class pattern. It runs on a dedicated port (default: 8003) and exposes HTTP/JSON-RPC endpoints for communication with other services.
### Core Components
1. **Scheduler Service** (`backend/executor/scheduler.py:156`)
- Extends `AppService` base class
- Manages APScheduler instance with multiple jobstores
- Handles lifecycle management and graceful shutdown
2. **Scheduler Client** (`backend/executor/scheduler.py:354`)
- Extends `AppServiceClient` base class
- Provides async/sync method wrappers for RPC calls
- Implements automatic retry and connection pooling
3. **Entry Points**
- Main executable: `backend/scheduler.py`
- Service launcher: `backend/app.py`
## Service Implementation
### Base Service Pattern
```python
class Scheduler(AppService):
scheduler: BlockingScheduler
def __init__(self, register_system_tasks: bool = True):
self.register_system_tasks = register_system_tasks
@classmethod
def get_port(cls) -> int:
return config.execution_scheduler_port # Default: 8003
@classmethod
def db_pool_size(cls) -> int:
return config.scheduler_db_pool_size # Default: 3
def run_service(self):
# Initialize scheduler with jobstores
# Register system tasks if enabled
# Start scheduler blocking loop
def cleanup(self):
# Graceful shutdown of scheduler
# Wait=False for immediate termination
```
### Jobstore Configuration
The scheduler uses three distinct jobstores:
1. **EXECUTION** (`Jobstores.EXECUTION.value`)
- Type: SQLAlchemyJobStore
- Table: `apscheduler_jobs`
- Purpose: Graph execution schedules
- Persistence: Required
2. **BATCHED_NOTIFICATIONS** (`Jobstores.BATCHED_NOTIFICATIONS.value`)
- Type: SQLAlchemyJobStore
- Table: `apscheduler_jobs_batched_notifications`
- Purpose: Batched notification processing
- Persistence: Required
3. **WEEKLY_NOTIFICATIONS** (`Jobstores.WEEKLY_NOTIFICATIONS.value`)
- Type: MemoryJobStore
- Purpose: Weekly summary notifications
- Persistence: Not required
### System Tasks
When `register_system_tasks=True`, the following monitoring tasks are registered:
1. **Weekly Summary Processing**
- Job ID: `process_weekly_summary`
- Schedule: `0 * * * *` (hourly)
- Function: `monitoring.process_weekly_summary`
- Jobstore: WEEKLY_NOTIFICATIONS
2. **Late Execution Monitoring**
- Job ID: `report_late_executions`
- Schedule: Interval (config.execution_late_notification_threshold_secs)
- Function: `monitoring.report_late_executions`
- Jobstore: EXECUTION
3. **Block Error Rate Monitoring**
- Job ID: `report_block_error_rates`
- Schedule: Interval (config.block_error_rate_check_interval_secs)
- Function: `monitoring.report_block_error_rates`
- Jobstore: EXECUTION
4. **Cloud Storage Cleanup**
- Job ID: `cleanup_expired_files`
- Schedule: Interval (config.cloud_storage_cleanup_interval_hours * 3600)
- Function: `cleanup_expired_files`
- Jobstore: EXECUTION
## Data Models
### GraphExecutionJobArgs
```python
class GraphExecutionJobArgs(BaseModel):
user_id: str
graph_id: str
graph_version: int
cron: str
input_data: BlockInput
input_credentials: dict[str, CredentialsMetaInput] = Field(default_factory=dict)
```
### GraphExecutionJobInfo
```python
class GraphExecutionJobInfo(GraphExecutionJobArgs):
id: str
name: str
next_run_time: str
@staticmethod
def from_db(job_args: GraphExecutionJobArgs, job_obj: JobObj) -> "GraphExecutionJobInfo":
return GraphExecutionJobInfo(
id=job_obj.id,
name=job_obj.name,
next_run_time=job_obj.next_run_time.isoformat(),
**job_args.model_dump(),
)
```
### NotificationJobArgs
```python
class NotificationJobArgs(BaseModel):
notification_types: list[NotificationType]
cron: str
```
### CredentialsMetaInput
```python
class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
id: str
title: Optional[str] = None
provider: CP
type: CT
```
## API Endpoints
All endpoints are exposed via the `@expose` decorator and follow HTTP POST JSON-RPC pattern.
### 1. Add Graph Execution Schedule
**Endpoint**: `/add_graph_execution_schedule`
**Request Body**:
```json
{
"user_id": "string",
"graph_id": "string",
"graph_version": "integer",
"cron": "string (crontab format)",
"input_data": {},
"input_credentials": {},
"name": "string (optional)"
}
```
**Response**: `GraphExecutionJobInfo`
**Behavior**:
- Creates APScheduler job with CronTrigger
- Uses job kwargs to store GraphExecutionJobArgs
- Sets `replace_existing=True` to allow updates
- Returns job info with generated ID and next run time
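A hedged sketch of the scheduling call implied by the behavior above, using the APScheduler pieces named in this spec (`CronTrigger.from_crontab`, the EXECUTION jobstore, `replace_existing=True`); the wrapper shape is illustrative, not the confirmed implementation.
```python
from apscheduler.triggers.cron import CronTrigger

def add_graph_execution_schedule(
    self, job_args: GraphExecutionJobArgs, name: str | None = None
) -> GraphExecutionJobInfo:
    # All job arguments are stored in kwargs (see "Key Design Decisions" below).
    job = self.scheduler.add_job(
        execute_graph,
        CronTrigger.from_crontab(job_args.cron),
        kwargs=job_args.model_dump(),
        name=name,
        jobstore=Jobstores.EXECUTION.value,
        replace_existing=True,
    )
    return GraphExecutionJobInfo.from_db(job_args, job)
```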
### 2. Delete Graph Execution Schedule
**Endpoint**: `/delete_graph_execution_schedule`
**Request Body**:
```json
{
"schedule_id": "string",
"user_id": "string"
}
```
**Response**: `GraphExecutionJobInfo`
**Behavior**:
- Validates schedule exists in EXECUTION jobstore
- Verifies user_id matches job's user_id
- Removes job from scheduler
- Returns deleted job info
**Errors**:
- `NotFoundError`: If job doesn't exist
- `NotAuthorizedError`: If user_id doesn't match
### 3. Get Graph Execution Schedules
**Endpoint**: `/get_graph_execution_schedules`
**Request Body**:
```json
{
"graph_id": "string (optional)",
"user_id": "string (optional)"
}
```
**Response**: `list[GraphExecutionJobInfo]`
**Behavior**:
- Retrieves all jobs from EXECUTION jobstore
- Filters by graph_id and/or user_id if provided
- Validates job kwargs as GraphExecutionJobArgs
- Skips invalid jobs (ValidationError)
- Only returns jobs with next_run_time set
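A hedged sketch of the filtering behavior above; `get_jobs(jobstore=...)` and `next_run_time` are standard APScheduler API, while the validation and filtering mirror the bullets.
```python
from pydantic import ValidationError

def get_graph_execution_schedules(
    self, graph_id: str | None = None, user_id: str | None = None
) -> list[GraphExecutionJobInfo]:
    schedules = []
    for job in self.scheduler.get_jobs(jobstore=Jobstores.EXECUTION.value):
        if job.next_run_time is None:
            continue  # only return jobs that are actually scheduled
        try:
            job_args = GraphExecutionJobArgs(**job.kwargs)
        except ValidationError:
            continue  # skip jobs whose kwargs are not graph execution args
        if graph_id and job_args.graph_id != graph_id:
            continue
        if user_id and job_args.user_id != user_id:
            continue
        schedules.append(GraphExecutionJobInfo.from_db(job_args, job))
    return schedules
```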
### 4. System Task Endpoints
- `/execute_process_existing_batches` - Trigger batch processing
- `/execute_process_weekly_summary` - Trigger weekly summary
- `/execute_report_late_executions` - Trigger late execution report
- `/execute_report_block_error_rates` - Trigger error rate report
- `/execute_cleanup_expired_files` - Trigger file cleanup
### 5. Health Check
**Endpoints**: `/health_check`, `/health_check_async`
**Methods**: POST, GET
**Response**: "OK"
## Database Schema
### APScheduler Tables
The scheduler relies on APScheduler's SQLAlchemy jobstore schema:
1. **apscheduler_jobs**
- id: VARCHAR (PRIMARY KEY)
- next_run_time: FLOAT
- job_state: BLOB/BYTEA (pickled job data)
2. **apscheduler_jobs_batched_notifications**
- Same schema as above
- Separate table for notification jobs
### Database Configuration
- URL extraction from `DIRECT_URL` environment variable
- Schema extraction from URL query parameter
- Connection pooling: `pool_size=db_pool_size()`, `max_overflow=0`
- Metadata schema binding for multi-schema support
## External Dependencies
### Required Services
1. **PostgreSQL Database**
- Connection via `DIRECT_URL` environment variable
- Schema support via URL parameter
- APScheduler job persistence
2. **ExecutionManager** (via execution_utils)
- Function: `add_graph_execution`
- Called by: `execute_graph` job function
- Purpose: Create graph execution entries
3. **NotificationManager** (via monitoring module)
- Functions: `process_existing_batches`, `queue_weekly_summary`
- Purpose: Notification processing
4. **Cloud Storage** (via util.cloud_storage)
- Function: `cleanup_expired_files_async`
- Purpose: File expiration management
### Python Dependencies
```
apscheduler>=3.10.0
sqlalchemy
pydantic>=2.0
httpx
uvicorn
fastapi
python-dotenv
tenacity
```
## Authentication & Authorization
### Service-Level Authentication
- No authentication required between internal services
- Services communicate via trusted internal network
- Host/port configuration via environment variables
### User-Level Authorization
- Authorization check in `delete_graph_execution_schedule`:
- Validates `user_id` matches job's `user_id`
- Raises `NotAuthorizedError` on mismatch
- No authorization for read operations (security consideration)
## Process Management
### Startup Sequence
1. Load environment variables via `dotenv.load_dotenv()`
2. Extract database URL and schema
3. Initialize BlockingScheduler with configured jobstores
4. Register system tasks (if enabled)
5. Add job execution listener
6. Start scheduler (blocking)
### Shutdown Sequence
1. Receive SIGTERM/SIGINT signal
2. Call `cleanup()` method
3. Shutdown scheduler with `wait=False`
4. Terminate process
### Multi-Process Architecture
- Runs as independent process via `AppProcess`
- Started by `run_processes()` in app.py
- Can run in foreground or background mode
- Automatic signal handling for graceful shutdown
## Error Handling
### Job Execution Errors
- Listener on `EVENT_JOB_ERROR` logs failures
- Errors in job functions are caught and logged
- Jobs continue to run on schedule despite failures
### RPC Communication Errors
- Automatic retry via `@conn_retry` decorator
- Configurable retry count and timeout
- Connection pooling with self-healing
### Database Connection Errors
- APScheduler handles reconnection automatically
- Pool exhaustion prevented by `max_overflow=0`
- Connection errors logged but don't crash service
## Configuration
### Environment Variables
- `DIRECT_URL`: PostgreSQL connection string (required)
- `{SERVICE_NAME}_HOST`: Override service host
- Standard logging configuration
### Config Settings (via Config class)
```python
execution_scheduler_port: int = 8003
scheduler_db_pool_size: int = 3
execution_late_notification_threshold_secs: int
block_error_rate_check_interval_secs: int
cloud_storage_cleanup_interval_hours: int
pyro_host: str = "localhost"
pyro_client_comm_timeout: float = 15
pyro_client_comm_retry: int = 3
rpc_client_call_timeout: int = 300
```
## Testing Strategy
### Unit Tests
1. Mock APScheduler for job management tests
2. Mock database connections
3. Test each RPC endpoint independently
4. Verify job serialization/deserialization
### Integration Tests
1. Test with real PostgreSQL instance
2. Verify job persistence across restarts
3. Test concurrent job execution
4. Validate cron expression parsing
### Critical Test Cases
1. **Job Persistence**: Jobs survive scheduler restart
2. **User Isolation**: Users can only delete their own jobs
3. **Concurrent Access**: Multiple clients can add/remove jobs
4. **Error Recovery**: Service recovers from database outages
5. **Resource Cleanup**: No memory/connection leaks
## Implementation Notes
### Key Design Decisions
1. **BlockingScheduler vs AsyncIOScheduler**: Uses BlockingScheduler for simplicity and compatibility with multiprocessing architecture
2. **Job Storage**: All job arguments stored in kwargs, not in job name/id
3. **Separate Jobstores**: Isolation between execution and notification jobs
4. **No Authentication**: Relies on network isolation for security
### Migration Considerations
1. APScheduler job format must be preserved exactly
2. Database schema cannot change without migration
3. RPC protocol must maintain compatibility
4. Environment variables must match existing deployment
### Performance Considerations
1. Database pool size limited to prevent exhaustion
2. No job result storage (fire-and-forget pattern)
3. Minimal logging in hot paths
4. Connection reuse via pooling
## Appendix: Critical Implementation Details
### Event Loop Management
```python
@thread_cached
def get_event_loop():
return asyncio.new_event_loop()
def execute_graph(**kwargs):
get_event_loop().run_until_complete(_execute_graph(**kwargs))
```
### Job Function Execution Context
- Jobs run in scheduler's process space
- Each job gets fresh event loop
- No shared state between job executions
- Exceptions logged but don't affect scheduler
### Cron Expression Format
- Uses standard crontab format via `CronTrigger.from_crontab()`
- Supports: minute hour day month day_of_week
- Special strings: @yearly, @monthly, @weekly, @daily, @hourly
This specification provides all necessary details to reimplement the scheduler service while maintaining 100% compatibility with the existing system. Any deviation from these specifications may result in system incompatibility.


@@ -1,85 +0,0 @@
name: CI
on:
push:
branches: [ main, master ]
pull_request:
branches: [ main, master ]
env:
CARGO_TERM_COLOR: always
RUSTFLAGS: "-D warnings"
jobs:
test:
name: Test
runs-on: ubuntu-latest
services:
redis:
image: redis:7
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 6379:6379
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Run tests
run: cargo test
env:
REDIS_URL: redis://localhost:6379
clippy:
name: Clippy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
with:
components: clippy
- uses: Swatinem/rust-cache@v2
- name: Run clippy
run: |
cargo clippy -- \
-D warnings \
-D clippy::unwrap_used \
-D clippy::panic \
-D clippy::unimplemented \
-D clippy::todo
fmt:
name: Format
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
with:
components: rustfmt
- name: Check formatting
run: cargo fmt -- --check
bench:
name: Benchmarks
runs-on: ubuntu-latest
services:
redis:
image: redis:7
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 6379:6379
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Build benchmarks
run: cargo bench --no-run
env:
REDIS_URL: redis://localhost:6379

File diff suppressed because it is too large


@@ -1,60 +0,0 @@
[package]
name = "websocket"
authors = ["AutoGPT Team"]
description = "WebSocket server for AutoGPT Platform"
version = "0.1.0"
edition = "2021"
[lib]
name = "websocket"
path = "src/lib.rs"
[[bin]]
name = "websocket"
path = "src/main.rs"
[dependencies]
axum = { version = "0.7.5", features = ["ws"] }
jsonwebtoken = "9.3.0"
redis = { version = "0.25.4", features = ["aio", "tokio-comp"] }
serde = { version = "1.0.204", features = ["derive"] }
serde_json = "1.0.120"
tokio = { version = "1.38.1", features = ["rt-multi-thread", "macros", "net", "sync", "time", "io-util"] }
tower-http = { version = "0.5.2", features = ["cors"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
futures = "0.3"
dotenvy = "0.15"
clap = { version = "4.5.4", features = ["derive"] }
toml = "0.8"
[dev-dependencies]
# Load testing and profiling
tokio-console = "0.1"
criterion = { version = "0.5", features = ["async_tokio"] }
pprof = { version = "0.13", features = ["flamegraph", "criterion"] }
# Dependencies for benchmarks
tokio-tungstenite = "0.24"
futures-util = "0.3"
chrono = "0.4"
[[bench]]
name = "websocket_bench"
harness = false
[[example]]
name = "ws_client_example"
required-features = []
[profile.release]
opt-level = 3 # Maximum optimization
lto = true # Enable link-time optimization
codegen-units = 1 # Reduce parallel code generation units to increase optimization
panic = "abort" # Remove panic unwinding to reduce binary size
strip = true # Strip symbols from binary
[profile.bench]
opt-level = 3 # Maximum optimization
lto = true # Enable link-time optimization
codegen-units = 1 # Reduce parallel code generation units to increase optimization
debug = true # Keep debug symbols for profiling


@@ -1,412 +0,0 @@
# WebSocket API Technical Specification
## Overview
This document provides a complete technical specification for the AutoGPT Platform WebSocket API (`ws_api.py`). The WebSocket API provides real-time updates for graph and node execution events, enabling clients to monitor workflow execution progress.
## Architecture Overview
### Core Components
1. **WebSocket Server** (`ws_api.py`)
- FastAPI application with WebSocket endpoint
- Handles client connections and message routing
- Authenticates clients via JWT tokens
- Manages subscriptions to execution events
2. **Connection Manager** (`conn_manager.py`)
- Maintains active WebSocket connections
- Manages channel subscriptions
- Routes execution events to subscribed clients
- Handles connection lifecycle
3. **Event Broadcasting System**
- Redis Pub/Sub based event bus
- Asynchronous event broadcaster
- Execution event propagation from backend services
## API Endpoint
### WebSocket Endpoint
- **URL**: `/ws`
- **Protocol**: WebSocket (ws:// or wss://)
- **Query Parameters**:
- `token` (required when auth enabled): JWT authentication token
## Authentication
### JWT Token Authentication
- **When Required**: When `settings.config.enable_auth` is `True`
- **Token Location**: Query parameter `?token=<JWT_TOKEN>`
- **Token Validation**:
```python
payload = parse_jwt_token(token)
user_id = payload.get("sub")
```
- **JWT Requirements**:
- Algorithm: Configured via `settings.JWT_ALGORITHM`
- Secret Key: Configured via `settings.JWT_SECRET_KEY`
- Audience: Must be "authenticated"
- Claims: Must contain `sub` (user ID)
### Authentication Failures
- **4001**: Missing authentication token
- **4002**: Invalid token (missing user ID)
- **4003**: Invalid token (parsing error or expired)
### No-Auth Mode
- When `settings.config.enable_auth` is `False`
- Uses `DEFAULT_USER_ID` from `backend.data.user`
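A hedged sketch of the authentication flow above, using the close codes listed in this section; `settings`, `parse_jwt_token`, and `DEFAULT_USER_ID` are the names used by this spec, and the surrounding wiring is illustrative.
```python
from fastapi import WebSocket

async def authenticate(websocket: WebSocket) -> str | None:
    """Return the authenticated user id, or close the socket with the spec'd code."""
    if not settings.config.enable_auth:
        return DEFAULT_USER_ID
    token = websocket.query_params.get("token")
    if not token:
        await websocket.close(code=4001)  # missing authentication token
        return None
    try:
        payload = parse_jwt_token(token)
    except Exception:
        await websocket.close(code=4003)  # invalid or expired token
        return None
    user_id = payload.get("sub")
    if not user_id:
        await websocket.close(code=4002)  # token missing user ID
        return None
    return user_id
```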
## Message Protocol
### Message Format
All messages use JSON format with the following structure:
```typescript
interface WSMessage {
method: WSMethod;
data?: Record<string, any> | any[] | string;
success?: boolean;
channel?: string;
error?: string;
}
```
### Message Methods (WSMethod enum)
1. **Client-to-Server Methods**:
- `SUBSCRIBE_GRAPH_EXEC`: Subscribe to specific graph execution
- `SUBSCRIBE_GRAPH_EXECS`: Subscribe to all executions of a graph
- `UNSUBSCRIBE`: Unsubscribe from a channel
- `HEARTBEAT`: Keep-alive ping
2. **Server-to-Client Methods**:
- `GRAPH_EXECUTION_EVENT`: Graph execution status update
- `NODE_EXECUTION_EVENT`: Node execution status update
- `ERROR`: Error message
- `HEARTBEAT`: Keep-alive pong
## Subscription Models
### Subscribe to Specific Graph Execution
```typescript
interface WSSubscribeGraphExecutionRequest {
graph_exec_id: string;
}
```
**Channel Key Format**: `{user_id}|graph_exec#{graph_exec_id}`
### Subscribe to All Graph Executions
```typescript
interface WSSubscribeGraphExecutionsRequest {
graph_id: string;
}
```
**Channel Key Format**: `{user_id}|graph#{graph_id}|executions`
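For clarity, the two channel-key formats above expressed as small Python helpers (mirroring the backend conventions used elsewhere in this compare):
```python
def graph_exec_channel(user_id: str, graph_exec_id: str) -> str:
    # Channel for one execution: {user_id}|graph_exec#{graph_exec_id}
    return f"{user_id}|graph_exec#{graph_exec_id}"

def graph_execs_channel(user_id: str, graph_id: str) -> str:
    # Channel for all executions of a graph: {user_id}|graph#{graph_id}|executions
    return f"{user_id}|graph#{graph_id}|executions"
```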
## Event Models
### Graph Execution Event
```typescript
interface GraphExecutionEvent {
event_type: "graph_execution_update";
id: string; // graph_exec_id
user_id: string;
graph_id: string;
graph_version: number;
preset_id?: string;
status: ExecutionStatus;
started_at: string; // ISO datetime
ended_at: string; // ISO datetime
inputs: Record<string, any>;
outputs: Record<string, any>;
stats?: {
cost: number; // cents
duration: number; // seconds
duration_cpu_only: number;
node_exec_time: number;
node_exec_time_cpu_only: number;
node_exec_count: number;
node_error_count: number;
error?: string;
};
}
```
### Node Execution Event
```typescript
interface NodeExecutionEvent {
event_type: "node_execution_update";
user_id: string;
graph_id: string;
graph_version: number;
graph_exec_id: string;
node_exec_id: string;
node_id: string;
block_id: string;
status: ExecutionStatus;
input_data: Record<string, any>;
output_data: Record<string, any>;
add_time: string; // ISO datetime
queue_time?: string; // ISO datetime
start_time?: string; // ISO datetime
end_time?: string; // ISO datetime
}
```
### Execution Status Enum
```typescript
enum ExecutionStatus {
INCOMPLETE = "INCOMPLETE",
QUEUED = "QUEUED",
RUNNING = "RUNNING",
COMPLETED = "COMPLETED",
FAILED = "FAILED"
}
```
## Message Flow Examples
### 1. Subscribe to Graph Execution
```json
// Client → Server
{
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": "exec-123"
}
}
// Server → Client (Success)
{
"method": "subscribe_graph_execution",
"success": true,
"channel": "user-456|graph_exec#exec-123"
}
```
### 2. Receive Execution Updates
```json
// Server → Client (Graph Update)
{
"method": "graph_execution_event",
"channel": "user-456|graph_exec#exec-123",
"data": {
"event_type": "graph_execution_update",
"id": "exec-123",
"user_id": "user-456",
"graph_id": "graph-789",
"status": "RUNNING",
// ... other fields
}
}
// Server → Client (Node Update)
{
"method": "node_execution_event",
"channel": "user-456|graph_exec#exec-123",
"data": {
"event_type": "node_execution_update",
"node_exec_id": "node-exec-111",
"status": "COMPLETED",
// ... other fields
}
}
```
### 3. Heartbeat
```json
// Client → Server
{
"method": "heartbeat",
"data": "ping"
}
// Server → Client
{
"method": "heartbeat",
"data": "pong",
"success": true
}
```
### 4. Error Handling
```json
// Server → Client (Invalid Message)
{
"method": "error",
"success": false,
"error": "Invalid message format. Review the schema and retry"
}
```
## Event Broadcasting Architecture
### Redis Pub/Sub Integration
1. **Event Bus Name**: Configured via `config.execution_event_bus_name`
2. **Channel Pattern**: `{event_bus_name}/{channel_key}`
3. **Event Flow**:
- Execution services publish events to Redis
- Event broadcaster listens to Redis pattern `*`
- Events are routed to WebSocket connections based on subscriptions
### Event Broadcaster
- Runs as continuous async task using `@continuous_retry()` decorator
- Listens to all execution events via `AsyncRedisExecutionEventBus`
- Calls `ConnectionManager.send_execution_update()` for each event
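A hedged sketch of the broadcaster loop described above; the `listen()` call shape on `AsyncRedisExecutionEventBus` is an assumption based on the description, and `@continuous_retry()` is the decorator named in this spec.
```python
@continuous_retry()
async def event_broadcaster(manager: ConnectionManager) -> None:
    # Relay every execution event published on the Redis event bus to subscribed sockets.
    event_bus = AsyncRedisExecutionEventBus()
    async for event in event_bus.listen("*"):  # pattern "*" per the Redis integration above
        await manager.send_execution_update(event)
```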
## Connection Lifecycle
### Connection Establishment
1. Client connects to `/ws` endpoint
2. Authentication performed (JWT validation)
3. WebSocket accepted via `manager.connect_socket()`
4. Connection added to active connections set
### Message Processing Loop
1. Receive text message from client
2. Parse and validate as `WSMessage`
3. Route to appropriate handler based on `method`
4. Send response or error back to client
### Connection Termination
1. `WebSocketDisconnect` exception caught
2. `manager.disconnect_socket()` called
3. Connection removed from active connections
4. All subscriptions for that connection removed
## Error Handling
### Validation Errors
- **Invalid Message Format**: Returns error with method "error"
- **Invalid Message Data**: Returns error with specific validation message
- **Unknown Message Type**: Returns error indicating unsupported method
### Connection Errors
- WebSocket disconnections handled gracefully
- Failed event parsing logged but doesn't crash connection
- Handler exceptions logged and connection continues
## Configuration
### Environment Variables
```python
# WebSocket Server Configuration
websocket_server_host: str = "0.0.0.0"
websocket_server_port: int = 8001
# Authentication
enable_auth: bool = True
# CORS
backend_cors_allow_origins: List[str] = []
# Redis Event Bus
execution_event_bus_name: str = "autogpt:execution_event_bus"
# Message Size Limits
max_message_size_limit: int = 512000 # 512KB
```
### Security Headers
- CORS middleware applied with configured origins
- Credentials allowed for authenticated requests
- All methods and headers allowed (configurable)
## Deployment Requirements
### Dependencies
1. **FastAPI**: Web framework with WebSocket support
2. **Redis**: For pub/sub event broadcasting
3. **JWT Libraries**: For token validation
4. **Prisma**: Database ORM (for future graph access validation)
### Process Management
- Implements `AppProcess` interface for service lifecycle
- Runs via `uvicorn` ASGI server
- Graceful shutdown handling in `cleanup()` method
### Concurrent Connections
- No hard limit on WebSocket connections
- Memory usage scales with active connections
- Each connection maintains subscription set
## Implementation Checklist
To implement a compatible WebSocket API:
1. **Authentication**
- [ ] JWT token validation from query parameters
- [ ] Support for no-auth mode with default user ID
- [ ] Proper error codes for auth failures
2. **Message Handling**
- [ ] Parse and validate WSMessage format
- [ ] Implement all client-to-server methods
- [ ] Support all server-to-client event types
- [ ] Proper error responses for invalid messages
3. **Subscription Management**
- [ ] Channel key generation matching exact format
- [ ] Support for both execution and graph-level subscriptions
- [ ] Unsubscribe functionality
- [ ] Clean up subscriptions on disconnect
4. **Event Broadcasting**
- [ ] Listen to Redis pub/sub for execution events
- [ ] Route events to correct subscribed connections
- [ ] Handle both graph and node execution events
- [ ] Maintain event order and completeness
5. **Connection Management**
- [ ] Track active WebSocket connections
- [ ] Handle graceful disconnections
- [ ] Implement heartbeat/keepalive
- [ ] Memory-efficient subscription storage
6. **Configuration**
- [ ] Support all environment variables
- [ ] CORS configuration for allowed origins
- [ ] Configurable host/port binding
- [ ] Redis connection configuration
7. **Error Handling**
- [ ] Graceful handling of malformed messages
- [ ] Logging of errors without dropping connections
- [ ] Specific error messages for debugging
- [ ] Recovery from Redis connection issues
## Testing Considerations
1. **Unit Tests**
- Message parsing and validation
- Channel key generation
- Subscription management logic
2. **Integration Tests**
- Full WebSocket connection flow
- Event broadcasting from Redis
- Multi-client subscription scenarios
- Authentication success/failure cases
3. **Load Tests**
- Many concurrent connections
- High-frequency event broadcasting
- Memory usage under load
- Connection/disconnection cycles
## Security Considerations
1. **Authentication**: JWT tokens transmitted via query parameters (consider upgrading to headers)
2. **Authorization**: Currently no graph-level access validation (commented out in code)
3. **Rate Limiting**: No rate limiting implemented
4. **Message Size**: Limited by `max_message_size_limit` configuration
5. **Input Validation**: All inputs validated via Pydantic models
## Future Enhancements (Currently Commented Out)
1. **Graph Access Validation**: Verify user has read access to subscribed graphs
2. **Message Compression**: For large execution payloads
3. **Batch Updates**: Aggregate multiple events in single message
4. **Selective Field Subscription**: Subscribe to specific fields only


@@ -1,93 +0,0 @@
# WebSocket Server Benchmarks
This directory contains performance benchmarks for the AutoGPT WebSocket server.
## Prerequisites
1. Redis must be running locally or set `REDIS_URL` environment variable:
```bash
docker run -d -p 6379:6379 redis:latest
```
2. Build the project in release mode:
```bash
cargo build --release
```
## Running Benchmarks
Run all benchmarks:
```bash
cargo bench
```
Run specific benchmark group:
```bash
cargo bench connection_establishment
cargo bench subscriptions
cargo bench message_throughput
cargo bench concurrent_connections
cargo bench message_parsing
cargo bench redis_event_processing
```
## Benchmark Categories
### Connection Establishment
Tests the performance of establishing WebSocket connections with different authentication scenarios:
- No authentication
- Valid JWT authentication
- Invalid JWT authentication (connection rejection)
### Subscriptions
Measures the performance of subscription operations:
- Subscribing to graph execution events
- Unsubscribing from channels
### Message Throughput
Tests how many messages the server can process per second with varying message counts (10, 100, 1000).
### Concurrent Connections
Benchmarks the server's ability to handle multiple simultaneous connections (100, 500, and 1000 clients).
### Message Parsing
Tests JSON parsing performance with different message sizes (100 B to 10 KB).
### Redis Event Processing
Benchmarks the parsing of execution events received from Redis.
## Profiling
To generate flamegraphs for CPU profiling:
1. Install flamegraph tools:
```bash
cargo install flamegraph
```
2. Run benchmarks with profiling:
```bash
cargo bench --bench websocket_bench -- --profile-time=10
```
## Interpreting Results
- **Throughput**: Higher is better (operations/second or elements/second)
- **Time**: Lower is better (nanoseconds per operation)
- **Error margins**: Look for stable results with low standard deviation
## Optimizing Performance
Based on benchmark results, consider:
1. **Connection pooling** for Redis connections
2. **Message batching** for high-throughput scenarios
3. **Async task tuning** for concurrent connection handling
4. **JSON parsing optimization** using simd-json or other fast parsers
5. **Memory allocation** optimization using arena allocators
## Notes
- Benchmarks create actual WebSocket servers on random ports
- Each benchmark iteration properly cleans up resources
- Results may vary based on system resources and Redis performance

View File

@@ -1,406 +0,0 @@
#![allow(clippy::unwrap_used)] // Benchmarks can panic on setup errors
use axum::{routing::get, Router};
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use futures_util::{SinkExt, StreamExt};
use serde_json::json;
use std::sync::Arc;
use std::time::Duration;
use tokio::net::TcpListener;
use tokio::runtime::Runtime;
use tokio_tungstenite::{connect_async, tungstenite::Message};
// Import the actual websocket server components
use websocket::{models, ws_handler, AppState, Config, ConnectionManager, Stats};
// Helper to create a test server
async fn create_test_server(enable_auth: bool) -> (String, tokio::task::JoinHandle<()>) {
// Set environment variables for test config
std::env::set_var("WEBSOCKET_SERVER_HOST", "127.0.0.1");
std::env::set_var("WEBSOCKET_SERVER_PORT", "0");
std::env::set_var("ENABLE_AUTH", enable_auth.to_string());
std::env::set_var("SUPABASE_JWT_SECRET", "test_secret");
std::env::set_var("DEFAULT_USER_ID", "test_user");
if std::env::var("REDIS_URL").is_err() {
std::env::set_var("REDIS_URL", "redis://localhost:6379");
}
let mut config = Config::load(None);
config.port = 0; // Force OS to assign port
let redis_client =
redis::Client::open(config.redis_url.clone()).expect("Failed to connect to Redis");
let stats = Arc::new(Stats::default());
let mgr = Arc::new(ConnectionManager::new(
redis_client,
config.execution_event_bus_name.clone(),
stats.clone(),
));
// Start broadcaster
let mgr_clone = mgr.clone();
tokio::spawn(async move {
mgr_clone.run_broadcaster().await;
});
let state = AppState {
mgr,
config: Arc::new(config),
stats,
};
let app = Router::new()
.route("/ws", get(ws_handler))
.layer(axum::Extension(state));
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let server_url = format!("ws://{addr}");
let server_handle = tokio::spawn(async move {
axum::serve(listener, app.into_make_service())
.await
.unwrap();
});
// Give server time to start
tokio::time::sleep(Duration::from_millis(100)).await;
(server_url, server_handle)
}
// Helper to create a valid JWT token
fn create_jwt_token(user_id: &str) -> String {
use jsonwebtoken::{encode, Algorithm, EncodingKey, Header};
use serde::Serialize;
#[derive(Serialize)]
struct Claims {
sub: String,
aud: Vec<String>,
exp: usize,
}
let claims = Claims {
sub: user_id.to_string(),
aud: vec!["authenticated".to_string()],
exp: (chrono::Utc::now() + chrono::Duration::hours(1)).timestamp() as usize,
};
encode(
&Header::new(Algorithm::HS256),
&claims,
&EncodingKey::from_secret(b"test_secret"),
)
.unwrap()
}
// Benchmark connection establishment
fn benchmark_connection_establishment(c: &mut Criterion) {
let rt = Runtime::new().unwrap();
let mut group = c.benchmark_group("connection_establishment");
group.measurement_time(Duration::from_secs(30));
// Test without auth
group.bench_function("no_auth", |b| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(false).await;
let url = format!("{server_url}/ws");
let (ws_stream, _) = connect_async(&url).await.unwrap();
drop(ws_stream);
server_handle.abort();
});
});
// Test with valid auth
group.bench_function("valid_auth", |b| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(true).await;
let token = create_jwt_token("test_user");
let url = format!("{server_url}/ws?token={token}");
let (ws_stream, _) = connect_async(&url).await.unwrap();
drop(ws_stream);
server_handle.abort();
});
});
// Test with invalid auth
group.bench_function("invalid_auth", |b| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(true).await;
let url = format!("{server_url}/ws?token=invalid");
let result = connect_async(&url).await;
assert!(
result.is_err() || {
if let Ok((mut ws_stream, _)) = result {
// Should receive close frame
matches!(ws_stream.next().await, Some(Ok(Message::Close(_))))
} else {
false
}
}
);
server_handle.abort();
});
});
group.finish();
}
// Benchmark subscription operations
fn benchmark_subscriptions(c: &mut Criterion) {
let rt = Runtime::new().unwrap();
let mut group = c.benchmark_group("subscriptions");
group.measurement_time(Duration::from_secs(20));
group.bench_function("subscribe_graph_execution", |b| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(false).await;
let url = format!("{server_url}/ws");
let (mut ws_stream, _) = connect_async(&url).await.unwrap();
let msg = json!({
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": "test_exec_123"
}
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
// Wait for response
if let Some(Ok(Message::Text(response))) = ws_stream.next().await {
let resp: serde_json::Value = serde_json::from_str(&response).unwrap();
assert_eq!(resp["success"], true);
}
server_handle.abort();
});
});
group.bench_function("unsubscribe", |b| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(false).await;
let url = format!("{server_url}/ws");
let (mut ws_stream, _) = connect_async(&url).await.unwrap();
// First subscribe
let msg = json!({
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": "test_exec_123"
}
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
ws_stream.next().await; // Consume response
let msg = json!({
"method": "unsubscribe",
"data": {
"channel": "test_user|graph_exec#test_exec_123"
}
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
// Wait for response
if let Some(Ok(Message::Text(response))) = ws_stream.next().await {
let resp: serde_json::Value = serde_json::from_str(&response).unwrap();
assert_eq!(resp["success"], true);
}
server_handle.abort();
});
});
group.finish();
}
// Benchmark message throughput
fn benchmark_message_throughput(c: &mut Criterion) {
let rt = Runtime::new().unwrap();
let mut group = c.benchmark_group("message_throughput");
group.measurement_time(Duration::from_secs(30));
for msg_count in [10, 100, 1000].iter() {
group.throughput(Throughput::Elements(*msg_count as u64));
group.bench_with_input(
BenchmarkId::from_parameter(msg_count),
msg_count,
|b, &msg_count| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(false).await;
let url = format!("{server_url}/ws");
let (mut ws_stream, _) = connect_async(&url).await.unwrap();
// Send multiple heartbeat messages
for _ in 0..msg_count {
let msg = json!({
"method": "heartbeat",
"data": "ping"
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
}
// Receive all responses
for _ in 0..msg_count {
ws_stream.next().await;
}
server_handle.abort();
});
},
);
}
group.finish();
}
// Benchmark concurrent connections
fn benchmark_concurrent_connections(c: &mut Criterion) {
let rt = Runtime::new().unwrap();
let mut group = c.benchmark_group("concurrent_connections");
group.measurement_time(Duration::from_secs(60));
group.sample_size(10);
for num_clients in [100, 500, 1000].iter() {
group.throughput(Throughput::Elements(*num_clients as u64));
group.bench_with_input(
BenchmarkId::from_parameter(num_clients),
num_clients,
|b, &num_clients| {
b.to_async(&rt).iter_with_large_drop(|| async {
let (server_url, server_handle) = create_test_server(false).await;
let url = format!("{server_url}/ws");
// Create multiple concurrent connections
let mut handles = vec![];
for i in 0..num_clients {
let url = url.clone();
let handle = tokio::spawn(async move {
let (mut ws_stream, _) = connect_async(&url).await.unwrap();
// Subscribe to a unique channel
let msg = json!({
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": format!("exec_{}", i)
}
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
ws_stream.next().await; // Wait for response
// Send a heartbeat
let msg = json!({
"method": "heartbeat",
"data": "ping"
});
ws_stream
.send(Message::Text(msg.to_string()))
.await
.unwrap();
ws_stream.next().await; // Wait for response
ws_stream
});
handles.push(handle);
}
// Wait for all connections to complete
for handle in handles {
let _ = handle.await;
}
server_handle.abort();
});
},
);
}
group.finish();
}
// Benchmark message parsing
fn benchmark_message_parsing(c: &mut Criterion) {
let mut group = c.benchmark_group("message_parsing");
// Test different message sizes
for msg_size in [100, 1000, 10000].iter() {
group.throughput(Throughput::Bytes(*msg_size as u64));
group.bench_with_input(
BenchmarkId::new("parse_json", msg_size),
msg_size,
|b, &msg_size| {
let data_str = "x".repeat(msg_size);
let json_msg = json!({
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": data_str
}
});
let json_str = json_msg.to_string();
b.iter(|| {
let _: models::WSMessage = serde_json::from_str(&json_str).unwrap();
});
},
);
}
group.finish();
}
// Benchmark Redis event processing
fn benchmark_redis_event_processing(c: &mut Criterion) {
let mut group = c.benchmark_group("redis_event_processing");
group.bench_function("parse_execution_event", |b| {
let event = json!({
"payload": {
"event_type": "graph_execution_update",
"id": "exec_123",
"graph_id": "graph_456",
"graph_version": 1,
"user_id": "user_789",
"status": "RUNNING",
"started_at": "2024-01-01T00:00:00Z",
"inputs": {"test": "data"},
"outputs": {}
}
});
let event_str = event.to_string();
b.iter(|| {
let _: models::RedisEventWrapper = serde_json::from_str(&event_str).unwrap();
});
});
group.finish();
}
criterion_group!(
benches,
benchmark_connection_establishment,
benchmark_subscriptions,
benchmark_message_throughput,
benchmark_concurrent_connections,
benchmark_message_parsing,
benchmark_redis_event_processing
);
criterion_main!(benches);

View File

@@ -1,10 +0,0 @@
# Clippy configuration for robust error handling
# Set the maximum cognitive complexity allowed
cognitive-complexity-threshold = 30
# Disallow dbg! macro calls in tests
allow-dbg-in-tests = false
# Enforce documentation
missing-docs-in-crate-items = true

View File

@@ -1,23 +0,0 @@
# WebSocket API Configuration
# Server settings
host = "0.0.0.0"
port = 8001
# Authentication
enable_auth = true
jwt_secret = "your-super-secret-jwt-token-with-at-least-32-characters-long"
jwt_algorithm = "HS256"
default_user_id = "default"
# Redis configuration
redis_url = "redis://:password@localhost:6379/"
# Event bus
execution_event_bus_name = "execution_event"
# Message size limit (in bytes)
max_message_size_limit = 512000
# CORS allowed origins
backend_cors_allow_origins = ["http://localhost:3000", "https://559f69c159ef.ngrok.app"]

View File

@@ -1,75 +0,0 @@
use futures_util::{SinkExt, StreamExt};
use serde_json::json;
use tokio_tungstenite::{connect_async, tungstenite::Message};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let url = "ws://localhost:8001/ws";
println!("Connecting to {url}");
let (mut ws_stream, _) = connect_async(url).await?;
println!("Connected!");
// Subscribe to a graph execution
let subscribe_msg = json!({
"method": "subscribe_graph_execution",
"data": {
"graph_exec_id": "test_exec_123"
}
});
println!("Sending subscription request...");
ws_stream
.send(Message::Text(subscribe_msg.to_string()))
.await?;
// Wait for response
if let Some(msg) = ws_stream.next().await {
if let Message::Text(text) = msg? {
println!("Received: {text}");
}
}
// Send heartbeat
let heartbeat_msg = json!({
"method": "heartbeat",
"data": "ping"
});
println!("Sending heartbeat...");
ws_stream
.send(Message::Text(heartbeat_msg.to_string()))
.await?;
// Wait for pong
if let Some(msg) = ws_stream.next().await {
if let Message::Text(text) = msg? {
println!("Received: {text}");
}
}
// Unsubscribe
let unsubscribe_msg = json!({
"method": "unsubscribe",
"data": {
"channel": "default|graph_exec#test_exec_123"
}
});
println!("Sending unsubscribe request...");
ws_stream
.send(Message::Text(unsubscribe_msg.to_string()))
.await?;
// Wait for response
if let Some(msg) = ws_stream.next().await {
if let Message::Text(text) = msg? {
println!("Received: {text}");
}
}
println!("Closing connection...");
ws_stream.close(None).await?;
Ok(())
}

View File

@@ -1,99 +0,0 @@
use jsonwebtoken::Algorithm;
use serde::Deserialize;
use std::env;
use std::fs;
use std::path::Path;
use std::str::FromStr;
#[derive(Clone, Debug, Deserialize)]
pub struct Config {
pub host: String,
pub port: u16,
pub enable_auth: bool,
pub jwt_secret: String,
pub jwt_algorithm: Algorithm,
pub execution_event_bus_name: String,
pub redis_url: String,
pub default_user_id: String,
pub max_message_size_limit: usize,
pub backend_cors_allow_origins: Vec<String>,
}
impl Config {
pub fn load(config_path: Option<&Path>) -> Self {
let path = config_path.unwrap_or(Path::new("config.toml"));
let toml_result = fs::read_to_string(path)
.ok()
.and_then(|s| toml::from_str::<Config>(&s).ok());
let mut config = match toml_result {
Some(config) => config,
None => Config {
host: env::var("WEBSOCKET_SERVER_HOST").unwrap_or_else(|_| "0.0.0.0".to_string()),
port: env::var("WEBSOCKET_SERVER_PORT")
.ok()
.and_then(|s| s.parse().ok())
.unwrap_or(8001),
enable_auth: env::var("ENABLE_AUTH")
.ok()
.and_then(|s| s.parse().ok())
.unwrap_or(true),
jwt_secret: env::var("SUPABASE_JWT_SECRET")
.unwrap_or_else(|_| "dummy_secret_for_no_auth".to_string()),
jwt_algorithm: Algorithm::HS256,
execution_event_bus_name: env::var("EXECUTION_EVENT_BUS_NAME")
.unwrap_or_else(|_| "execution_event".to_string()),
redis_url: env::var("REDIS_URL")
.unwrap_or_else(|_| "redis://localhost/".to_string()),
default_user_id: "default".to_string(),
max_message_size_limit: env::var("MAX_MESSAGE_SIZE_LIMIT")
.ok()
.and_then(|s| s.parse().ok())
.unwrap_or(512000),
backend_cors_allow_origins: env::var("BACKEND_CORS_ALLOW_ORIGINS")
.unwrap_or_default()
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect(),
},
};
if let Ok(v) = env::var("WEBSOCKET_SERVER_HOST") {
config.host = v;
}
if let Ok(v) = env::var("WEBSOCKET_SERVER_PORT") {
config.port = v.parse().unwrap_or(8001);
}
if let Ok(v) = env::var("ENABLE_AUTH") {
config.enable_auth = v.parse().unwrap_or(true);
}
if let Ok(v) = env::var("SUPABASE_JWT_SECRET") {
config.jwt_secret = v;
}
if let Ok(v) = env::var("JWT_ALGORITHM") {
config.jwt_algorithm = Algorithm::from_str(&v).unwrap_or(Algorithm::HS256);
}
if let Ok(v) = env::var("EXECUTION_EVENT_BUS_NAME") {
config.execution_event_bus_name = v;
}
if let Ok(v) = env::var("REDIS_URL") {
config.redis_url = v;
}
if let Ok(v) = env::var("DEFAULT_USER_ID") {
config.default_user_id = v;
}
if let Ok(v) = env::var("MAX_MESSAGE_SIZE_LIMIT") {
config.max_message_size_limit = v.parse().unwrap_or(512000);
}
if let Ok(v) = env::var("BACKEND_CORS_ALLOW_ORIGINS") {
config.backend_cors_allow_origins = v
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect();
}
config
}
}

View File

@@ -1,277 +0,0 @@
use futures::StreamExt;
use redis::Client as RedisClient;
use std::collections::{HashMap, HashSet};
use std::sync::atomic::AtomicU64;
use std::sync::Arc;
use tokio::sync::{mpsc, RwLock};
use tracing::{debug, error, info, warn};
use crate::models::{ExecutionEvent, RedisEventWrapper, WSMessage};
use crate::stats::Stats;
pub struct ConnectionManager {
pub subscribers: RwLock<HashMap<String, HashSet<u64>>>,
pub clients: RwLock<HashMap<u64, (String, mpsc::Sender<String>)>>,
pub client_channels: RwLock<HashMap<u64, HashSet<String>>>,
pub next_id: AtomicU64,
pub redis_client: RedisClient,
pub bus_name: String,
pub stats: Arc<Stats>,
}
impl ConnectionManager {
pub fn new(redis_client: RedisClient, bus_name: String, stats: Arc<Stats>) -> Self {
Self {
subscribers: RwLock::new(HashMap::new()),
clients: RwLock::new(HashMap::new()),
client_channels: RwLock::new(HashMap::new()),
next_id: AtomicU64::new(0),
redis_client,
bus_name,
stats,
}
}
pub async fn run_broadcaster(self: Arc<Self>) {
info!("🚀 Starting Redis event broadcaster");
loop {
match self.run_broadcaster_inner().await {
Ok(_) => {
warn!("⚠️ Event broadcaster stopped unexpectedly, restarting in 5 seconds");
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
}
Err(e) => {
error!("❌ Event broadcaster error: {}, restarting in 5 seconds", e);
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
}
}
}
}
async fn run_broadcaster_inner(
self: &Arc<Self>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let mut pubsub = self.redis_client.get_async_pubsub().await?;
pubsub.psubscribe("*").await?;
info!(
"📡 Listening to all Redis events, filtering for bus: {}",
self.bus_name
);
let mut pubsub_stream = pubsub.on_message();
loop {
let msg = pubsub_stream.next().await;
match msg {
Some(msg) => {
let channel: String = msg.get_channel_name().to_string();
debug!("📨 Received message on Redis channel: {}", channel);
self.stats
.redis_messages_received
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let payload: String = match msg.get_payload() {
Ok(p) => p,
Err(e) => {
warn!("⚠️ Failed to get payload from Redis message: {}", e);
self.stats
.errors_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
continue;
}
};
// Parse the channel format: execution_event/{user_id}/{graph_id}/{graph_exec_id}
let parts: Vec<&str> = channel.split('/').collect();
// Check if this is an execution event channel
if parts.len() != 4 || parts[0] != self.bus_name {
debug!(
"🚫 Ignoring non-execution event channel: {} (parts: {:?}, bus_name: {})",
channel, parts, self.bus_name
);
self.stats
.redis_messages_ignored
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
continue;
}
let user_id = parts[1];
let graph_id = parts[2];
let graph_exec_id = parts[3];
debug!(
"📥 Received event - user: {}, graph: {}, exec: {}",
user_id, graph_id, graph_exec_id
);
// Parse the wrapped event
let wrapped_event = match RedisEventWrapper::parse(&payload) {
Ok(e) => e,
Err(e) => {
warn!("⚠️ Failed to parse event JSON: {}, payload: {}", e, payload);
self.stats
.errors_json_parse
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
self.stats
.errors_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
continue;
}
};
let event = wrapped_event.payload;
debug!("📦 Event received: {:?}", event);
let (method, event_json) = match &event {
ExecutionEvent::GraphExecutionUpdate(graph_event) => {
self.stats
.graph_execution_events
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
self.stats
.events_received_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
(
"graph_execution_event",
match serde_json::to_value(graph_event) {
Ok(v) => v,
Err(e) => {
error!("❌ Failed to serialize graph event: {}", e);
continue;
}
},
)
}
ExecutionEvent::NodeExecutionUpdate(node_event) => {
self.stats
.node_execution_events
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
self.stats
.events_received_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
(
"node_execution_event",
match serde_json::to_value(node_event) {
Ok(v) => v,
Err(e) => {
error!("❌ Failed to serialize node event: {}", e);
continue;
}
},
)
}
};
// Create the channel keys in the format expected by WebSocket clients
let mut channels_to_notify = Vec::new();
// For both event types, notify the specific execution channel
let exec_channel = format!("{user_id}|graph_exec#{graph_exec_id}");
channels_to_notify.push(exec_channel.clone());
// For graph execution events, also notify the graph executions channel
if matches!(&event, ExecutionEvent::GraphExecutionUpdate(_)) {
let graph_channel = format!("{user_id}|graph#{graph_id}|executions");
channels_to_notify.push(graph_channel);
}
debug!(
"📢 Broadcasting {} event to channels: {:?}",
method, channels_to_notify
);
let subs = self.subscribers.read().await;
// Log current subscriber state
debug!("📊 Current subscribers count: {}", subs.len());
for channel_key in channels_to_notify {
let ws_msg = WSMessage {
method: method.to_string(),
channel: Some(channel_key.clone()),
data: Some(event_json.clone()),
..Default::default()
};
let json_msg = match serde_json::to_string(&ws_msg) {
Ok(j) => {
debug!("📤 Sending WebSocket message: {}", j);
j
}
Err(e) => {
error!("❌ Failed to serialize WebSocket message: {}", e);
continue;
}
};
if let Some(client_ids) = subs.get(&channel_key) {
let clients = self.clients.read().await;
let client_count = client_ids.len();
debug!(
"📣 Broadcasting to {} clients on channel: {}",
client_count, channel_key
);
for &cid in client_ids {
if let Some((user_id, tx)) = clients.get(&cid) {
match tx.try_send(json_msg.clone()) {
Ok(_) => {
debug!(
"✅ Message sent immediately to client {} (user: {})",
cid, user_id
);
self.stats
.messages_sent_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
Err(mpsc::error::TrySendError::Full(_)) => {
// Channel is full, try with a small timeout
let tx_clone = tx.clone();
let msg_clone = json_msg.clone();
let stats_clone = self.stats.clone();
tokio::spawn(async move {
match tokio::time::timeout(
std::time::Duration::from_millis(100),
tx_clone.send(msg_clone),
)
.await {
Ok(Ok(_)) => {
stats_clone
.messages_sent_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
_ => {
stats_clone
.messages_failed_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
});
warn!("⚠️ Channel full for client {} (user: {}), sending async", cid, user_id);
}
Err(mpsc::error::TrySendError::Closed(_)) => {
warn!(
"⚠️ Channel closed for client {} (user: {})",
cid, user_id
);
self.stats
.messages_failed_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
} else {
warn!("⚠️ Client {} not found in clients map", cid);
}
}
} else {
info!("📭 No subscribers for channel: {}", channel_key);
}
}
}
None => {
return Err("❌ Redis pubsub stream ended".into());
}
}
}
}
}

View File

@@ -1,442 +0,0 @@
use axum::extract::ws::{CloseFrame, Message, WebSocket};
use axum::{
extract::{Query, WebSocketUpgrade},
http::HeaderMap,
response::IntoResponse,
Extension,
};
use jsonwebtoken::{decode, DecodingKey, Validation};
use serde_json::{json, Value};
use std::collections::HashMap;
use tokio::sync::mpsc;
use tracing::{debug, error, info, warn};
use crate::connection_manager::ConnectionManager;
use crate::models::{Claims, WSMessage};
use crate::AppState;
// Helper function to safely serialize messages
fn serialize_message(msg: &WSMessage) -> String {
serde_json::to_string(msg).unwrap_or_else(|e| {
error!("❌ Failed to serialize WebSocket message: {}", e);
json!({"method": "error", "success": false, "error": "Internal serialization error"})
.to_string()
})
}
pub async fn ws_handler(
ws: WebSocketUpgrade,
query: Query<HashMap<String, String>>,
_headers: HeaderMap,
Extension(state): Extension<AppState>,
) -> impl IntoResponse {
let token = query.0.get("token").cloned();
let mut user_id = state.config.default_user_id.clone();
let mut auth_error_code: Option<u16> = None;
if state.config.enable_auth {
match token {
Some(token_str) => {
debug!("🔐 Authenticating WebSocket connection");
let mut validation = Validation::new(state.config.jwt_algorithm);
validation.set_audience(&["authenticated"]);
let key = DecodingKey::from_secret(state.config.jwt_secret.as_bytes());
match decode::<Claims>(&token_str, &key, &validation) {
Ok(token_data) => {
user_id = token_data.claims.sub.clone();
debug!("✅ WebSocket authenticated for user: {}", user_id);
}
Err(e) => {
warn!("⚠️ JWT validation failed: {}", e);
auth_error_code = Some(4003);
}
}
}
None => {
warn!("⚠️ Missing authentication token in WebSocket connection");
auth_error_code = Some(4001);
}
}
} else {
debug!("🔓 WebSocket connection without auth (auth disabled)");
}
if let Some(code) = auth_error_code {
error!("❌ WebSocket authentication failed with code: {}", code);
state
.mgr
.stats
.connections_failed_auth
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
state
.mgr
.stats
.connections_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
return ws
.on_upgrade(move |mut socket: WebSocket| async move {
let close_frame = Some(CloseFrame {
code,
reason: "Authentication failed".into(),
});
let _ = socket.send(Message::Close(close_frame)).await;
let _ = socket.close().await;
})
.into_response();
}
debug!("✅ WebSocket connection established for user: {}", user_id);
ws.on_upgrade(move |socket| {
handle_socket(
socket,
user_id,
state.mgr.clone(),
state.config.max_message_size_limit,
)
})
}
async fn update_subscription_stats(mgr: &ConnectionManager, channel: &str, add: bool) {
if add {
mgr.stats
.subscriptions_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
mgr.stats
.subscriptions_active
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let mut channel_stats = mgr.stats.channels_active.write().await;
let count = channel_stats.entry(channel.to_string()).or_insert(0);
*count += 1;
} else {
mgr.stats
.unsubscriptions_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
mgr.stats
.subscriptions_active
.fetch_sub(1, std::sync::atomic::Ordering::Relaxed);
let mut channel_stats = mgr.stats.channels_active.write().await;
if let Some(count) = channel_stats.get_mut(channel) {
*count = count.saturating_sub(1);
if *count == 0 {
channel_stats.remove(channel);
}
}
}
}
pub async fn handle_socket(
mut socket: WebSocket,
user_id: String,
mgr: std::sync::Arc<ConnectionManager>,
max_size: usize,
) {
let client_id = mgr
.next_id
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let (tx, mut rx) = mpsc::channel::<String>(10);
info!("👋 New WebSocket client {} for user: {}", client_id, user_id);
// Update connection stats
mgr.stats
.connections_total
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
mgr.stats
.connections_active
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
// Update active users
{
let mut active_users = mgr.stats.active_users.write().await;
let count = active_users.entry(user_id.clone()).or_insert(0);
*count += 1;
}
{
let mut clients = mgr.clients.write().await;
clients.insert(client_id, (user_id.clone(), tx));
}
{
let mut client_channels = mgr.client_channels.write().await;
client_channels.insert(client_id, std::collections::HashSet::new());
}
loop {
tokio::select! {
msg = rx.recv() => {
if let Some(msg) = msg {
if socket.send(Message::Text(msg)).await.is_err() {
break;
}
} else {
break;
}
}
incoming = socket.recv() => {
let msg = match incoming {
Some(Ok(msg)) => msg,
_ => break,
};
match msg {
Message::Text(text) => {
if text.len() > max_size {
warn!("⚠️ Message from client {} exceeds size limit: {} > {}", client_id, text.len(), max_size);
let err_resp = serialize_message(&WSMessage {
method: "error".to_string(),
success: Some(false),
error: Some("Message exceeds size limit".to_string()),
..Default::default()
});
if socket.send(Message::Text(err_resp)).await.is_err() {
break;
}
continue;
}
mgr.stats.messages_received_total.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let ws_msg: WSMessage = match serde_json::from_str(&text) {
Ok(m) => m,
Err(e) => {
warn!("⚠️ Invalid message format from client {}: {}", client_id, e);
mgr.stats.errors_json_parse.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
mgr.stats.errors_total.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let err_resp = serialize_message(&WSMessage {
method: "error".to_string(),
success: Some(false),
error: Some("Invalid message format. Review the schema and retry".to_string()),
..Default::default()
});
if socket.send(Message::Text(err_resp)).await.is_err() {
break;
}
continue;
}
};
debug!("📥 Received {} message from client {}", ws_msg.method, client_id);
match ws_msg.method.as_str() {
"subscribe_graph_execution" => {
let graph_exec_id = match &ws_msg.data {
Some(Value::Object(map)) => map.get("graph_exec_id").and_then(|v| v.as_str()),
_ => None,
};
let Some(graph_exec_id) = graph_exec_id else {
warn!("⚠️ Missing graph_exec_id in subscribe_graph_execution from client {}", client_id);
let err_resp = json!({"method": "error", "success": false, "error": "Missing graph_exec_id"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
continue;
};
let channel = format!("{user_id}|graph_exec#{graph_exec_id}");
debug!("📌 Client {} subscribing to channel: {}", client_id, channel);
{
let mut subs = mgr.subscribers.write().await;
subs.entry(channel.clone()).or_insert(std::collections::HashSet::new()).insert(client_id);
}
{
let mut chs = mgr.client_channels.write().await;
if let Some(set) = chs.get_mut(&client_id) {
set.insert(channel.clone());
}
}
// Update subscription stats
update_subscription_stats(&mgr, &channel, true).await;
let resp = WSMessage {
method: "subscribe_graph_execution".to_string(),
success: Some(true),
channel: Some(channel),
..Default::default()
};
if socket.send(Message::Text(serialize_message(&resp))).await.is_err() {
break;
}
}
"subscribe_graph_executions" => {
let graph_id = match &ws_msg.data {
Some(Value::Object(map)) => map.get("graph_id").and_then(|v| v.as_str()),
_ => None,
};
let Some(graph_id) = graph_id else {
let err_resp = json!({"method": "error", "success": false, "error": "Missing graph_id"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
continue;
};
let channel = format!("{user_id}|graph#{graph_id}|executions");
{
let mut subs = mgr.subscribers.write().await;
subs.entry(channel.clone()).or_insert(std::collections::HashSet::new()).insert(client_id);
}
{
let mut chs = mgr.client_channels.write().await;
if let Some(set) = chs.get_mut(&client_id) {
set.insert(channel.clone());
}
}
debug!("📌 Client {} subscribing to channel: {}", client_id, channel);
// Update subscription stats
update_subscription_stats(&mgr, &channel, true).await;
let resp = WSMessage {
method: "subscribe_graph_executions".to_string(),
success: Some(true),
channel: Some(channel),
..Default::default()
};
if socket.send(Message::Text(serialize_message(&resp))).await.is_err() {
break;
}
}
"unsubscribe" => {
let channel = match &ws_msg.data {
Some(Value::String(s)) => Some(s.as_str()),
Some(Value::Object(map)) => map.get("channel").and_then(|v| v.as_str()),
_ => None,
};
let Some(channel) = channel else {
let err_resp = json!({"method": "error", "success": false, "error": "Missing channel"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
continue;
};
let channel = channel.to_string();
if !channel.starts_with(&format!("{user_id}|")) {
let err_resp = json!({"method": "error", "success": false, "error": "Unauthorized channel"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
continue;
}
{
let mut subs = mgr.subscribers.write().await;
if let Some(set) = subs.get_mut(&channel) {
set.remove(&client_id);
if set.is_empty() {
subs.remove(&channel);
}
}
}
{
let mut chs = mgr.client_channels.write().await;
if let Some(set) = chs.get_mut(&client_id) {
set.remove(&channel);
}
}
// Update subscription stats
update_subscription_stats(&mgr, &channel, false).await;
let resp = WSMessage {
method: "unsubscribe".to_string(),
success: Some(true),
channel: Some(channel),
..Default::default()
};
if socket.send(Message::Text(serialize_message(&resp))).await.is_err() {
break;
}
}
"heartbeat" => {
if ws_msg.data == Some(Value::String("ping".to_string())) {
let resp = WSMessage {
method: "heartbeat".to_string(),
data: Some(Value::String("pong".to_string())),
success: Some(true),
..Default::default()
};
if socket.send(Message::Text(serialize_message(&resp))).await.is_err() {
break;
}
} else {
let err_resp = json!({"method": "error", "success": false, "error": "Invalid heartbeat"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
}
}
_ => {
warn!("❓ Unknown method '{}' from client {}", ws_msg.method, client_id);
let err_resp = json!({"method": "error", "success": false, "error": "Unknown method"});
if socket.send(Message::Text(err_resp.to_string())).await.is_err() {
break;
}
}
}
}
Message::Close(_) => break,
Message::Ping(_) => {
if socket.send(Message::Pong(vec![])).await.is_err() {
break;
}
}
Message::Pong(_) => {}
_ => {}
}
}
else => break,
}
}
// Cleanup
debug!("👋 WebSocket client {} disconnected, cleaning up", client_id);
// Update connection stats
mgr.stats
.connections_active
.fetch_sub(1, std::sync::atomic::Ordering::Relaxed);
// Update active users
{
let mut active_users = mgr.stats.active_users.write().await;
if let Some(count) = active_users.get_mut(&user_id) {
*count = count.saturating_sub(1);
if *count == 0 {
active_users.remove(&user_id);
}
}
}
let channels = {
let mut client_channels = mgr.client_channels.write().await;
client_channels.remove(&client_id).unwrap_or_default()
};
{
let mut subs = mgr.subscribers.write().await;
for channel in &channels {
if let Some(set) = subs.get_mut(channel) {
set.remove(&client_id);
if set.is_empty() {
subs.remove(channel);
}
}
}
}
// Update subscription stats for all channels the client was subscribed to
for channel in &channels {
update_subscription_stats(&mgr, channel, false).await;
}
{
let mut clients = mgr.clients.write().await;
clients.remove(&client_id);
}
debug!("✨ Cleanup completed for client {}", client_id);
}

View File

@@ -1,26 +0,0 @@
#![deny(warnings)]
#![deny(clippy::unwrap_used)]
#![deny(clippy::panic)]
#![deny(clippy::unimplemented)]
#![deny(clippy::todo)]
pub mod config;
pub mod connection_manager;
pub mod handlers;
pub mod models;
pub mod stats;
pub use config::Config;
pub use connection_manager::ConnectionManager;
pub use handlers::ws_handler;
pub use stats::Stats;
use std::sync::Arc;
#[derive(Clone)]
pub struct AppState {
pub mgr: Arc<ConnectionManager>,
pub config: Arc<Config>,
pub stats: Arc<Stats>,
}

View File

@@ -1,172 +0,0 @@
use axum::{
body::Body,
http::{header, StatusCode},
response::Response,
routing::get,
Router,
};
use clap::Parser;
use std::sync::Arc;
use tokio::net::TcpListener;
use tower_http::cors::{Any, CorsLayer};
use tracing::{debug, error, info};
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
use crate::config::Config;
use crate::connection_manager::ConnectionManager;
use crate::handlers::ws_handler;
async fn stats_handler(
axum::Extension(state): axum::Extension<AppState>,
) -> Result<axum::response::Json<stats::StatsSnapshot>, StatusCode> {
let snapshot = state.stats.snapshot().await;
Ok(axum::response::Json(snapshot))
}
async fn prometheus_handler(
axum::Extension(state): axum::Extension<AppState>,
) -> Result<Response, StatusCode> {
let snapshot = state.stats.snapshot().await;
let prometheus_text = state.stats.to_prometheus_format(&snapshot);
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "text/plain; version=0.0.4")
.body(Body::from(prometheus_text))
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}
mod config;
mod connection_manager;
mod handlers;
mod models;
mod stats;
#[derive(Parser, Debug)]
#[command(author, version, about)]
struct Cli {
/// Path to a TOML configuration file
#[arg(short = 'c', long = "config", value_name = "FILE")]
config: Option<std::path::PathBuf>,
}
#[derive(Clone)]
pub struct AppState {
mgr: Arc<ConnectionManager>,
config: Arc<Config>,
stats: Arc<stats::Stats>,
}
#[tokio::main]
async fn main() {
// Initialize tracing
tracing_subscriber::registry()
.with(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| "websocket=info,tower_http=debug".into()),
)
.with(tracing_subscriber::fmt::layer())
.init();
info!("🚀 Starting WebSocket API server");
let cli = Cli::parse();
let config = Arc::new(Config::load(cli.config.as_deref()));
info!(
"⚙️ Configuration loaded - host: {}, port: {}, auth: {}",
config.host, config.port, config.enable_auth
);
let redis_client = match redis::Client::open(config.redis_url.clone()) {
Ok(client) => {
debug!("✅ Redis client created successfully");
client
}
Err(e) => {
error!(
"❌ Failed to create Redis client: {}. Please check REDIS_URL environment variable",
e
);
std::process::exit(1);
}
};
let stats = Arc::new(stats::Stats::default());
let mgr = Arc::new(ConnectionManager::new(
redis_client,
config.execution_event_bus_name.clone(),
stats.clone(),
));
let mgr_clone = mgr.clone();
tokio::spawn(async move {
debug!("📡 Starting event broadcaster task");
mgr_clone.run_broadcaster().await;
});
let state = AppState {
mgr,
config: config.clone(),
stats,
};
let app = Router::new()
.route("/ws", get(ws_handler))
.route("/stats", get(stats_handler))
.route("/metrics", get(prometheus_handler))
.layer(axum::Extension(state));
let cors = if config.backend_cors_allow_origins.is_empty() {
// If no specific origins configured, allow any origin but without credentials
CorsLayer::new()
.allow_methods(Any)
.allow_headers(Any)
.allow_origin(Any)
} else {
// If specific origins configured, allow credentials
CorsLayer::new()
.allow_methods([
axum::http::Method::GET,
axum::http::Method::POST,
axum::http::Method::PUT,
axum::http::Method::DELETE,
axum::http::Method::OPTIONS,
])
.allow_headers(vec![
axum::http::header::CONTENT_TYPE,
axum::http::header::AUTHORIZATION,
])
.allow_credentials(true)
.allow_origin(
config
.backend_cors_allow_origins
.iter()
.filter_map(|o| o.parse::<axum::http::HeaderValue>().ok())
.collect::<Vec<_>>(),
)
};
let app = app.layer(cors);
let addr = format!("{}:{}", config.host, config.port);
let listener = match TcpListener::bind(&addr).await {
Ok(listener) => {
info!("🎧 WebSocket server listening on: {}", addr);
listener
}
Err(e) => {
error!(
"❌ Failed to bind to {}: {}. Please check if the port is already in use",
addr, e
);
std::process::exit(1);
}
};
info!("✨ WebSocket API server ready to accept connections");
if let Err(e) = axum::serve(listener, app.into_make_service()).await {
error!("💥 Server error: {}", e);
std::process::exit(1);
}
}

View File

@@ -1,103 +0,0 @@
use serde::{Deserialize, Serialize};
use serde_json::Value;
#[derive(Default, Clone, Debug, Serialize, Deserialize)]
pub struct WSMessage {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub data: Option<Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub success: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub channel: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Deserialize)]
pub struct Claims {
pub sub: String,
}
// Event models moved from events.rs
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "event_type")]
pub enum ExecutionEvent {
#[serde(rename = "graph_execution_update")]
GraphExecutionUpdate(GraphExecutionEvent),
#[serde(rename = "node_execution_update")]
NodeExecutionUpdate(NodeExecutionEvent),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GraphExecutionEvent {
pub id: String,
pub graph_id: String,
pub graph_version: u32,
pub user_id: String,
pub status: ExecutionStatus,
pub started_at: Option<String>,
pub ended_at: Option<String>,
pub preset_id: Option<String>,
pub stats: Option<ExecutionStats>,
// Keep these as JSON since they vary by graph
pub inputs: Value,
pub outputs: Value,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodeExecutionEvent {
pub node_exec_id: String,
pub node_id: String,
pub graph_exec_id: String,
pub graph_id: String,
pub graph_version: u32,
pub user_id: String,
pub block_id: String,
pub status: ExecutionStatus,
pub add_time: String,
pub queue_time: Option<String>,
pub start_time: Option<String>,
pub end_time: Option<String>,
// Keep these as JSON since they vary by node type
pub input_data: Value,
pub output_data: Value,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExecutionStats {
pub cost: f64,
pub duration: f64,
pub duration_cpu_only: f64,
pub error: Option<String>,
pub node_error_count: u32,
pub node_exec_count: u32,
pub node_exec_time: f64,
pub node_exec_time_cpu_only: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
pub enum ExecutionStatus {
Queued,
Running,
Completed,
Failed,
Incomplete,
Terminated,
}
// Wrapper for the Redis event that includes the payload
#[derive(Debug, Deserialize)]
pub struct RedisEventWrapper {
pub payload: ExecutionEvent,
}
impl RedisEventWrapper {
pub fn parse(json_str: &str) -> Result<Self, serde_json::Error> {
serde_json::from_str(json_str)
}
}

View File

@@ -1,238 +0,0 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::RwLock;
#[derive(Default)]
pub struct Stats {
// Connection metrics
pub connections_total: AtomicU64,
pub connections_active: AtomicU64,
pub connections_failed_auth: AtomicU64,
// Message metrics
pub messages_received_total: AtomicU64,
pub messages_sent_total: AtomicU64,
pub messages_failed_total: AtomicU64,
// Subscription metrics
pub subscriptions_total: AtomicU64,
pub subscriptions_active: AtomicU64,
pub unsubscriptions_total: AtomicU64,
// Event metrics by type
pub events_received_total: AtomicU64,
pub graph_execution_events: AtomicU64,
pub node_execution_events: AtomicU64,
// Redis metrics
pub redis_messages_received: AtomicU64,
pub redis_messages_ignored: AtomicU64,
// Channel metrics
pub channels_active: RwLock<HashMap<String, usize>>, // channel -> subscriber count
// User metrics
pub active_users: RwLock<HashMap<String, usize>>, // user_id -> connection count
// Error metrics
pub errors_total: AtomicU64,
pub errors_json_parse: AtomicU64,
pub errors_message_size: AtomicU64,
}
#[derive(Serialize, Deserialize)]
pub struct StatsSnapshot {
// Connection metrics
pub connections_total: u64,
pub connections_active: u64,
pub connections_failed_auth: u64,
// Message metrics
pub messages_received_total: u64,
pub messages_sent_total: u64,
pub messages_failed_total: u64,
// Subscription metrics
pub subscriptions_total: u64,
pub subscriptions_active: u64,
pub unsubscriptions_total: u64,
// Event metrics
pub events_received_total: u64,
pub graph_execution_events: u64,
pub node_execution_events: u64,
// Redis metrics
pub redis_messages_received: u64,
pub redis_messages_ignored: u64,
// Channel metrics
pub channels_active_count: usize,
pub total_subscribers: usize,
// User metrics
pub active_users_count: usize,
// Error metrics
pub errors_total: u64,
pub errors_json_parse: u64,
pub errors_message_size: u64,
}
impl Stats {
pub async fn snapshot(&self) -> StatsSnapshot {
// Take read locks for HashMap data - it's ok if this is slightly stale
let channels = self.channels_active.read().await;
let total_subscribers: usize = channels.values().sum();
let channels_active_count = channels.len();
drop(channels); // Release lock early
let users = self.active_users.read().await;
let active_users_count = users.len();
drop(users); // Release lock early
StatsSnapshot {
connections_total: self.connections_total.load(Ordering::Relaxed),
connections_active: self.connections_active.load(Ordering::Relaxed),
connections_failed_auth: self.connections_failed_auth.load(Ordering::Relaxed),
messages_received_total: self.messages_received_total.load(Ordering::Relaxed),
messages_sent_total: self.messages_sent_total.load(Ordering::Relaxed),
messages_failed_total: self.messages_failed_total.load(Ordering::Relaxed),
subscriptions_total: self.subscriptions_total.load(Ordering::Relaxed),
subscriptions_active: self.subscriptions_active.load(Ordering::Relaxed),
unsubscriptions_total: self.unsubscriptions_total.load(Ordering::Relaxed),
events_received_total: self.events_received_total.load(Ordering::Relaxed),
graph_execution_events: self.graph_execution_events.load(Ordering::Relaxed),
node_execution_events: self.node_execution_events.load(Ordering::Relaxed),
redis_messages_received: self.redis_messages_received.load(Ordering::Relaxed),
redis_messages_ignored: self.redis_messages_ignored.load(Ordering::Relaxed),
channels_active_count,
total_subscribers,
active_users_count,
errors_total: self.errors_total.load(Ordering::Relaxed),
errors_json_parse: self.errors_json_parse.load(Ordering::Relaxed),
errors_message_size: self.errors_message_size.load(Ordering::Relaxed),
}
}
pub fn to_prometheus_format(&self, snapshot: &StatsSnapshot) -> String {
let mut output = String::new();
// Connection metrics
output.push_str("# HELP ws_connections_total Total number of WebSocket connections\n");
output.push_str("# TYPE ws_connections_total counter\n");
output.push_str(&format!(
"ws_connections_total {}\n\n",
snapshot.connections_total
));
output.push_str(
"# HELP ws_connections_active Current number of active WebSocket connections\n",
);
output.push_str("# TYPE ws_connections_active gauge\n");
output.push_str(&format!(
"ws_connections_active {}\n\n",
snapshot.connections_active
));
output
.push_str("# HELP ws_connections_failed_auth Total number of failed authentications\n");
output.push_str("# TYPE ws_connections_failed_auth counter\n");
output.push_str(&format!(
"ws_connections_failed_auth {}\n\n",
snapshot.connections_failed_auth
));
// Message metrics
output.push_str(
"# HELP ws_messages_received_total Total number of messages received from clients\n",
);
output.push_str("# TYPE ws_messages_received_total counter\n");
output.push_str(&format!(
"ws_messages_received_total {}\n\n",
snapshot.messages_received_total
));
output.push_str("# HELP ws_messages_sent_total Total number of messages sent to clients\n");
output.push_str("# TYPE ws_messages_sent_total counter\n");
output.push_str(&format!(
"ws_messages_sent_total {}\n\n",
snapshot.messages_sent_total
));
// Subscription metrics
output.push_str("# HELP ws_subscriptions_active Current number of active subscriptions\n");
output.push_str("# TYPE ws_subscriptions_active gauge\n");
output.push_str(&format!(
"ws_subscriptions_active {}\n\n",
snapshot.subscriptions_active
));
// Event metrics
output.push_str(
"# HELP ws_events_received_total Total number of events received from Redis\n",
);
output.push_str("# TYPE ws_events_received_total counter\n");
output.push_str(&format!(
"ws_events_received_total {}\n\n",
snapshot.events_received_total
));
output.push_str(
"# HELP ws_graph_execution_events_total Total number of graph execution events\n",
);
output.push_str("# TYPE ws_graph_execution_events_total counter\n");
output.push_str(&format!(
"ws_graph_execution_events_total {}\n\n",
snapshot.graph_execution_events
));
output.push_str(
"# HELP ws_node_execution_events_total Total number of node execution events\n",
);
output.push_str("# TYPE ws_node_execution_events_total counter\n");
output.push_str(&format!(
"ws_node_execution_events_total {}\n\n",
snapshot.node_execution_events
));
// Channel metrics
output.push_str("# HELP ws_channels_active Number of active channels\n");
output.push_str("# TYPE ws_channels_active gauge\n");
output.push_str(&format!(
"ws_channels_active {}\n\n",
snapshot.channels_active_count
));
output.push_str(
"# HELP ws_total_subscribers Total number of subscribers across all channels\n",
);
output.push_str("# TYPE ws_total_subscribers gauge\n");
output.push_str(&format!(
"ws_total_subscribers {}\n\n",
snapshot.total_subscribers
));
// User metrics
output.push_str("# HELP ws_active_users Number of unique users with active connections\n");
output.push_str("# TYPE ws_active_users gauge\n");
output.push_str(&format!(
"ws_active_users {}\n\n",
snapshot.active_users_count
));
// Error metrics
output.push_str("# HELP ws_errors_total Total number of errors\n");
output.push_str("# TYPE ws_errors_total counter\n");
output.push_str(&format!("ws_errors_total {}\n", snapshot.errors_total));
output
}
}

View File

@@ -7,9 +7,5 @@ class Settings:
self.ENABLE_AUTH: bool = os.getenv("ENABLE_AUTH", "false").lower() == "true"
self.JWT_ALGORITHM: str = "HS256"
@property
def is_configured(self) -> bool:
return bool(self.JWT_SECRET_KEY)
settings = Settings()

View File

@@ -1,166 +0,0 @@
import asyncio
import contextlib
import logging
from functools import wraps
from typing import Any, Awaitable, Callable, Dict, Optional, TypeVar, Union, cast
import ldclient
from fastapi import HTTPException
from ldclient import Context, LDClient
from ldclient.config import Config
from typing_extensions import ParamSpec
from .config import SETTINGS
logger = logging.getLogger(__name__)
P = ParamSpec("P")
T = TypeVar("T")
def get_client() -> LDClient:
"""Get the LaunchDarkly client singleton."""
return ldclient.get()
def initialize_launchdarkly() -> None:
sdk_key = SETTINGS.launch_darkly_sdk_key
logger.debug(
f"Initializing LaunchDarkly with SDK key: {'present' if sdk_key else 'missing'}"
)
if not sdk_key:
logger.warning("LaunchDarkly SDK key not configured")
return
config = Config(sdk_key)
ldclient.set_config(config)
if ldclient.get().is_initialized():
logger.info("LaunchDarkly client initialized successfully")
else:
logger.error("LaunchDarkly client failed to initialize")
def shutdown_launchdarkly() -> None:
"""Shutdown the LaunchDarkly client."""
if ldclient.get().is_initialized():
ldclient.get().close()
logger.info("LaunchDarkly client closed successfully")
def create_context(
user_id: str, additional_attributes: Optional[Dict[str, Any]] = None
) -> Context:
"""Create LaunchDarkly context with optional additional attributes."""
builder = Context.builder(str(user_id)).kind("user")
if additional_attributes:
for key, value in additional_attributes.items():
builder.set(key, value)
return builder.build()
def feature_flag(
flag_key: str,
default: bool = False,
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""
Decorator for feature flag protected endpoints.
"""
def decorator(
func: Callable[P, Union[T, Awaitable[T]]],
) -> Callable[P, Union[T, Awaitable[T]]]:
@wraps(func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
context = create_context(str(user_id))
is_enabled = get_client().variation(flag_key, context, default)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
result = func(*args, **kwargs)
if asyncio.iscoroutine(result):
return await result
return cast(T, result)
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
@wraps(func)
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
context = create_context(str(user_id))
is_enabled = get_client().variation(flag_key, context, default)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
return cast(T, func(*args, **kwargs))
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
return cast(
Callable[P, Union[T, Awaitable[T]]],
async_wrapper if asyncio.iscoroutinefunction(func) else sync_wrapper,
)
return decorator
def percentage_rollout(
flag_key: str,
default: bool = False,
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""Decorator for percentage-based rollouts."""
return feature_flag(flag_key, default)
def beta_feature(
flag_key: Optional[str] = None,
unauthorized_response: Any = {"message": "Not available in beta"},
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""Decorator for beta features."""
actual_key = f"beta-{flag_key}" if flag_key else "beta"
return feature_flag(actual_key, False)
@contextlib.contextmanager
def mock_flag_variation(flag_key: str, return_value: Any):
"""Context manager for testing feature flags."""
original_variation = get_client().variation
get_client().variation = lambda key, context, default: (
return_value if key == flag_key else original_variation(key, context, default)
)
try:
yield
finally:
get_client().variation = original_variation

View File

@@ -1,45 +0,0 @@
import pytest
from fastapi import HTTPException
from ldclient import LDClient
from autogpt_libs.feature_flag.client import feature_flag, mock_flag_variation
@pytest.fixture
def ld_client(mocker):
client = mocker.Mock(spec=LDClient)
mocker.patch("ldclient.get", return_value=client)
client.is_initialized.return_value = True
return client
@pytest.mark.asyncio
async def test_feature_flag_enabled(ld_client):
ld_client.variation.return_value = True
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
    result = await test_function(user_id="test-user")
assert result == "success"
ld_client.variation.assert_called_once()
@pytest.mark.asyncio
async def test_feature_flag_unauthorized_response(ld_client):
ld_client.variation.return_value = False
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
    with pytest.raises(HTTPException) as exc_info:
        await test_function(user_id="test-user")
    assert exc_info.value.status_code == 404
def test_mock_flag_variation(ld_client):
with mock_flag_variation("test-flag", True):
assert ld_client.variation("test-flag", None, False)
with mock_flag_variation("test-flag", False):
assert ld_client.variation("test-flag", None, False)

View File

@@ -1,15 +0,0 @@
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
launch_darkly_sdk_key: str = Field(
default="",
description="The Launch Darkly SDK key",
validation_alias="LAUNCH_DARKLY_SDK_KEY",
)
model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")
SETTINGS = Settings()

View File

@@ -1,6 +1,8 @@
"""Logging module for Auto-GPT."""
import logging
import os
import socket
import sys
from pathlib import Path
@@ -10,6 +12,15 @@ from pydantic_settings import BaseSettings, SettingsConfigDict
from .filters import BelowLevelFilter
from .formatters import AGPTFormatter
# Configure global socket timeout and gRPC keepalive to prevent deadlocks
# This must be done at import time before any gRPC connections are established
socket.setdefaulttimeout(30) # 30-second socket timeout
# Enable gRPC keepalive to detect dead connections faster
os.environ.setdefault("GRPC_KEEPALIVE_TIME_MS", "30000") # 30 seconds
os.environ.setdefault("GRPC_KEEPALIVE_TIMEOUT_MS", "5000") # 5 seconds
os.environ.setdefault("GRPC_KEEPALIVE_PERMIT_WITHOUT_CALLS", "true")
LOG_DIR = Path(__file__).parent.parent.parent.parent / "logs"
LOG_FILE = "activity.log"
DEBUG_LOG_FILE = "debug.log"
@@ -79,7 +90,6 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
Note: This function is typically called at the start of the application
to set up the logging infrastructure.
"""
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
@@ -105,13 +115,17 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports.sync import SyncTransport
from google.cloud.logging_v2.handlers.transports import (
BackgroundThreadTransport,
)
client = google.cloud.logging.Client()
# Use BackgroundThreadTransport to prevent blocking the main thread
# and deadlocks when gRPC calls to Google Cloud Logging hang
cloud_handler = CloudLoggingHandler(
client,
name="autogpt_logs",
transport=SyncTransport,
transport=BackgroundThreadTransport,
)
cloud_handler.setLevel(config.level)
log_handlers.append(cloud_handler)

View File

@@ -1,39 +1,5 @@
import logging
import re
from typing import Any
import uvicorn.config
from colorama import Fore
def remove_color_codes(s: str) -> str:
return re.sub(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])", "", s)
def fmt_kwargs(kwargs: dict) -> str:
return ", ".join(f"{n}={repr(v)}" for n, v in kwargs.items())
def print_attribute(
title: str, value: Any, title_color: str = Fore.GREEN, value_color: str = ""
) -> None:
logger = logging.getLogger()
logger.info(
str(value),
extra={
"title": f"{title.rstrip(':')}:",
"title_color": title_color,
"color": value_color,
},
)
def generate_uvicorn_config():
"""
Generates a uvicorn logging config that silences uvicorn's default logging and tells it to use the native logging module.
"""
log_config = dict(uvicorn.config.LOGGING_CONFIG)
log_config["loggers"]["uvicorn"] = {"handlers": []}
log_config["loggers"]["uvicorn.error"] = {"handlers": []}
log_config["loggers"]["uvicorn.access"] = {"handlers": []}
return log_config
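A brief usage sketch for the helper above (the ASGI app import path is hypothetical; the point is only that the generated dict is passed as uvicorn's log_config so uvicorn's records flow through the root logger configured by this logging module):

import uvicorn

from backend.server.app import app  # hypothetical import path for the ASGI app

# Silence uvicorn's own handlers and hand its records to the native logging setup.
uvicorn.run(app, host="0.0.0.0", port=8000, log_config=generate_uvicorn_config())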

View File

@@ -1,17 +1,34 @@
import inspect
import logging
import threading
from typing import Awaitable, Callable, ParamSpec, TypeVar, cast, overload
import time
from functools import wraps
from typing import (
Awaitable,
Callable,
ParamSpec,
Protocol,
Tuple,
TypeVar,
cast,
overload,
runtime_checkable,
)
P = ParamSpec("P")
R = TypeVar("R")
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]: ...
logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]: ...
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def thread_cached(
@@ -57,3 +74,193 @@ def thread_cached(
def clear_thread_cache(func: Callable) -> None:
if clear := getattr(func, "clear_cache", None):
clear()
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
"""Protocol for async functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
return None
def cache_info(self) -> dict[str, int | None]:
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
"""Call the cached function."""
return None # type: ignore
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
"""
TTL (Time To Live) cache decorator for async functions.
Similar to functools.lru_cache but works with async functions and includes optional TTL.
Args:
maxsize: Maximum number of cached entries
ttl_seconds: Time to live in seconds. If None, entries never expire (like lru_cache)
Returns:
Decorator function
Example:
# With TTL
@async_ttl_cache(maxsize=1000, ttl_seconds=300)
async def api_call(param: str) -> dict:
return {"result": param}
# Without TTL (permanent cache like lru_cache)
@async_ttl_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
return {"result": param}
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
# Check if we have a valid cached entry
if key in cache_storage:
if ttl_seconds is None:
# No TTL - return cached result directly
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
else:
# With TTL - check expiration
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
else:
# Expired entry
del cache_storage[key]
logger.debug(
f"Cache entry expired for {async_func.__name__}"
)
# Cache miss or expired - fetch fresh data
logger.debug(
f"Cache miss for {async_func.__name__} with key: {str(key)[:50]}"
)
result = await async_func(*args, **kwargs)
# Store in cache
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Simple cleanup when cache gets too large
if len(cache_storage) > maxsize:
# Remove oldest entries (simple FIFO cleanup)
cutoff = maxsize // 2
oldest_keys = list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
logger.debug(
f"Cache cleanup: removed {len(oldest_keys)} entries for {async_func.__name__}"
)
return result
# Add cache management methods (similar to functools.lru_cache)
def cache_clear() -> None:
cache_storage.clear()
def cache_info() -> dict[str, int | None]:
return {
"size": len(cache_storage),
"maxsize": maxsize,
"ttl_seconds": ttl_seconds,
}
# Attach methods to wrapper
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
return cast(AsyncCachedFunction[P, R], wrapper)
return decorator
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
"""
Process-level cache decorator for async functions (no TTL).
Similar to functools.lru_cache but works with async functions.
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
Returns:
Decorated function or decorator
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)
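One behaviour of the implementations above worth keeping in mind: the cache key is built from (args, sorted kwargs), so the same logical call spelled positionally versus by keyword occupies two cache entries, and every argument must be hashable. A small illustrative sketch (the function is made up for the example):

@async_cache(maxsize=8)
async def lookup(user_id: str, *, region: str = "us") -> str:
    return f"{user_id}:{region}"

# await lookup("a", region="eu") twice -> one cache entry, one underlying call.
# await lookup(user_id="a", region="eu") -> a different key (empty args tuple),
# so it is cached separately even though it is logically the same call.
# Passing an unhashable argument such as a list raises TypeError when the key
# is used for the dict lookup.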

View File

@@ -16,7 +16,12 @@ from unittest.mock import Mock
import pytest
from autogpt_libs.utils.cache import clear_thread_cache, thread_cached
from autogpt_libs.utils.cache import (
async_cache,
async_ttl_cache,
clear_thread_cache,
thread_cached,
)
class TestThreadCached:
@@ -323,3 +328,378 @@ class TestThreadCached:
assert function_using_mock(2) == 42
assert mock.call_count == 2
class TestAsyncTTLCache:
"""Tests for the @async_ttl_cache decorator."""
@pytest.mark.asyncio
async def test_basic_caching(self):
"""Test basic caching functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def cached_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
# Different args - should call function again
result3 = await cached_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_ttl_expiration(self):
"""Test that cache entries expire after TTL."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def short_lived_cache(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 2
# First call
result1 = await short_lived_cache(5)
assert result1 == 10
assert call_count == 1
# Second call immediately - should use cache
result2 = await short_lived_cache(5)
assert result2 == 10
assert call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Third call after expiration - should call function again
result3 = await short_lived_cache(5)
assert result3 == 10
assert call_count == 2
@pytest.mark.asyncio
async def test_cache_info(self):
"""Test cache info functionality."""
@async_ttl_cache(maxsize=5, ttl_seconds=300)
async def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] == 300
# Add an entry
await info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
@pytest.mark.asyncio
async def test_cache_clear(self):
"""Test cache clearing functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 4
# First call
result1 = await clearable_function(2)
assert result1 == 8
assert call_count == 1
# Second call - should use cache
result2 = await clearable_function(2)
assert result2 == 8
assert call_count == 1
# Clear cache
clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await clearable_function(2)
assert result3 == 8
assert call_count == 2
@pytest.mark.asyncio
async def test_maxsize_cleanup(self):
"""Test that cache cleans up when maxsize is exceeded."""
call_count = 0
@async_ttl_cache(maxsize=3, ttl_seconds=60)
async def size_limited_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
# Fill cache to maxsize
await size_limited_function(1) # call_count: 1
await size_limited_function(2) # call_count: 2
await size_limited_function(3) # call_count: 3
info = size_limited_function.cache_info()
assert info["size"] == 3
# Add one more entry - should trigger cleanup
await size_limited_function(4) # call_count: 4
# Cache size should be reduced (cleanup removes oldest entries)
info = size_limited_function.cache_info()
assert info["size"] is not None and info["size"] <= 3 # Should be cleaned up
@pytest.mark.asyncio
async def test_argument_variations(self):
"""Test caching with different argument patterns."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def arg_test_function(a: int, b: str = "default", *, c: int = 100) -> str:
nonlocal call_count
call_count += 1
return f"{a}-{b}-{c}"
# Different ways to call with same logical arguments
result1 = await arg_test_function(1, "test", c=200)
assert call_count == 1
# Same arguments, same order - should use cache
result2 = await arg_test_function(1, "test", c=200)
assert call_count == 1
assert result1 == result2
# Different arguments - should call function
result3 = await arg_test_function(2, "test", c=200)
assert call_count == 2
assert result1 != result3
@pytest.mark.asyncio
async def test_exception_handling(self):
"""Test that exceptions are not cached."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def exception_function(x: int) -> int:
nonlocal call_count
call_count += 1
if x < 0:
raise ValueError("Negative value not allowed")
return x * 2
# Successful call - should be cached
result1 = await exception_function(5)
assert result1 == 10
assert call_count == 1
# Same successful call - should use cache
result2 = await exception_function(5)
assert result2 == 10
assert call_count == 1
# Exception call - should not be cached
with pytest.raises(ValueError):
await exception_function(-1)
assert call_count == 2
# Same exception call - should call again (not cached)
with pytest.raises(ValueError):
await exception_function(-1)
assert call_count == 3
@pytest.mark.asyncio
async def test_concurrent_calls(self):
"""Test caching behavior with concurrent calls."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def concurrent_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.05) # Simulate work
return x * x
# Launch concurrent calls with same arguments
tasks = [concurrent_function(3) for _ in range(5)]
results = await asyncio.gather(*tasks)
# All results should be the same
assert all(result == 9 for result in results)
# Note: Due to race conditions, call_count might be up to 5 for concurrent calls
# This tests that the cache doesn't break under concurrent access
assert 1 <= call_count <= 5
class TestAsyncCache:
"""Tests for the @async_cache decorator (no TTL)."""
@pytest.mark.asyncio
async def test_basic_caching_no_ttl(self):
"""Test basic caching functionality without TTL."""
call_count = 0
@async_cache(maxsize=10)
async def cached_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
# Third call after some time - should still use cache (no TTL)
await asyncio.sleep(0.05)
result3 = await cached_function(1, 2)
assert result3 == 3
assert call_count == 1 # Still no additional call
# Different args - should call function again
result4 = await cached_function(2, 3)
assert result4 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_no_ttl_vs_ttl_behavior(self):
"""Test the difference between TTL and no-TTL caching."""
ttl_call_count = 0
no_ttl_call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_call_count
ttl_call_count += 1
return x * 2
@async_cache(maxsize=10) # No TTL
async def no_ttl_function(x: int) -> int:
nonlocal no_ttl_call_count
no_ttl_call_count += 1
return x * 2
# First calls
await ttl_function(5)
await no_ttl_function(5)
assert ttl_call_count == 1
assert no_ttl_call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Second calls after TTL expiry
await ttl_function(5) # Should call function again (TTL expired)
await no_ttl_function(5) # Should use cache (no TTL)
assert ttl_call_count == 2 # TTL function called again
assert no_ttl_call_count == 1 # No-TTL function still cached
@pytest.mark.asyncio
async def test_async_cache_info(self):
"""Test cache info for no-TTL cache."""
@async_cache(maxsize=5)
async def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] is None # No TTL
# Add an entry
await info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
class TestTTLOptional:
"""Tests for optional TTL functionality."""
@pytest.mark.asyncio
async def test_ttl_none_behavior(self):
"""Test that ttl_seconds=None works like no TTL."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=None)
async def no_ttl_via_none(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
# First call
result1 = await no_ttl_via_none(3)
assert result1 == 9
assert call_count == 1
# Wait (would expire if there was TTL)
await asyncio.sleep(0.1)
# Second call - should still use cache
result2 = await no_ttl_via_none(3)
assert result2 == 9
assert call_count == 1 # No additional call
# Check cache info
info = no_ttl_via_none.cache_info()
assert info["ttl_seconds"] is None
@pytest.mark.asyncio
async def test_cache_options_comparison(self):
"""Test different cache options work as expected."""
ttl_calls = 0
no_ttl_calls = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # With TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_calls
ttl_calls += 1
return x * 10
@async_cache(maxsize=10) # Process-level cache (no TTL)
async def process_function(x: int) -> int:
nonlocal no_ttl_calls
no_ttl_calls += 1
return x * 10
# Both should cache initially
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Immediate second calls - both should use cache
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# After TTL expiry
await ttl_function(3) # Should call function again
await process_function(3) # Should still use cache
assert ttl_calls == 2 # TTL cache expired, called again
assert no_ttl_calls == 1 # Process cache never expires

View File

@@ -0,0 +1,52 @@
# Development and testing files
**/__pycache__
**/*.pyc
**/*.pyo
**/*.pyd
**/.Python
**/env/
**/venv/
**/.venv/
**/pip-log.txt
**/.pytest_cache/
**/test-results/
**/snapshots/
**/test/
# IDE and editor files
**/.vscode/
**/.idea/
**/*.swp
**/*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Logs
**/*.log
**/logs/
# Git
.git/
.gitignore
# Documentation
**/*.md
!README.md
# Local development files
.env
.env.local
**/.env.test
# Build artifacts
**/dist/
**/build/
**/target/
# Docker files (avoid recursion)
Dockerfile*
docker-compose*
.dockerignore

View File

@@ -1,3 +1,9 @@
# Backend Configuration
# This file contains environment variables that MUST be set for the AutoGPT platform
# Variables with working defaults in settings.py are not included here
## ===== REQUIRED DATABASE CONFIGURATION ===== ##
# PostgreSQL Database Connection
DB_USER=postgres
DB_PASS=your-super-secret-and-long-postgres-password
DB_NAME=postgres
@@ -10,72 +16,50 @@ DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
ENABLE_AUTH=true
# EXECUTOR
NUM_GRAPH_WORKERS=10
BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
# generate using `from cryptography.fernet import Fernet;Fernet.generate_key().decode()`
ENCRYPTION_KEY='dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw='
UNSUBSCRIBE_SECRET_KEY = 'HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio='
## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
ENABLE_CREDIT=false
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
# What environment things should be logged under: local dev or prod
APP_ENV=local
# What environment to behave as: "local" or "cloud"
BEHAVE_AS=local
PYRO_HOST=localhost
SENTRY_DSN=
# Email For Postmark so we can send emails
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
## User auth with Supabase is required for any of the 3rd party integrations with auth to work.
ENABLE_AUTH=true
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
# RabbitMQ credentials -- Used for communication between services
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=
UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio=
## GCS bucket is required for marketplace and library functionality
## ===== IMPORTANT OPTIONAL CONFIGURATION ===== ##
# Platform URLs (set these for webhooks and OAuth to work)
PLATFORM_BASE_URL=http://localhost:8000
FRONTEND_BASE_URL=http://localhost:3000
# Media Storage (required for marketplace and library functionality)
MEDIA_GCS_BUCKET_NAME=
## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# FRONTEND_BASE_URL=http://localhost:3000
## ===== API KEYS AND OAUTH CREDENTIALS ===== ##
# All API keys below are optional - only add what you need
## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
## If you are developing locally, you can use something like ngrok to get a public URL
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=http://localhost:3000
## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the backend secret key
TURNSTILE_SECRET_KEY=
## This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
# AI/LLM Services
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
LLAMA_API_KEY=
AIML_API_KEY=
V0_API_KEY=
OPEN_ROUTER_API_KEY=
NVIDIA_API_KEY=
# OAuth Credentials
# For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
# e.g. http://localhost:3000/auth/integrations/oauth_callback
@@ -85,7 +69,6 @@ GITHUB_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):
# https://console.developers.google.com/apis/api/gmail.googleapis.com/overview ?project=<your_project_id>
# https://console.cloud.google.com/apis/library/sheets.googleapis.com/ ?project=<your_project_id>
@@ -121,104 +104,66 @@ LINEAR_CLIENT_SECRET=
TODOIST_CLIENT_ID=
TODOIST_CLIENT_SECRET=
## ===== OPTIONAL API KEYS ===== ##
# LLM
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AIML_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
LLAMA_API_KEY=
# Reddit
# Go to https://www.reddit.com/prefs/apps and create a new app
# Choose "script" for the type
# Fill in the redirect uri as <your_frontend_url>/auth/integrations/oauth_callback, e.g. http://localhost:3000/auth/integrations/oauth_callback
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USER_AGENT="AutoGPT:1.0 (by /u/autogpt)"
# Discord
DISCORD_BOT_TOKEN=
# Payment Processing
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# SMTP/Email
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Email Service (for sending notifications and confirmations)
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
# D-ID
# Error Tracking
SENTRY_DSN=
# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
# Feature Flags
LAUNCH_DARKLY_SDK_KEY=
# Content Generation & Media
DID_API_KEY=
FAL_API_KEY=
IDEOGRAM_API_KEY=
REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
# Open Weather Map
# Data & Search Services
E2B_API_KEY=
EXA_API_KEY=
JINA_API_KEY=
MEM0_API_KEY=
OPENWEATHERMAP_API_KEY=
# SMTP
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Medium
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
# Google Maps
GOOGLE_MAPS_API_KEY=
# Replicate
REPLICATE_API_KEY=
# Communication Services
DISCORD_BOT_TOKEN=
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Ideogram
IDEOGRAM_API_KEY=
# Fal
FAL_API_KEY=
# Exa
EXA_API_KEY=
# E2B
E2B_API_KEY=
# Mem0
MEM0_API_KEY=
# Nvidia
NVIDIA_API_KEY=
# Apollo
# Business & Marketing Tools
APOLLO_API_KEY=
# SmartLead
SMARTLEAD_API_KEY=
# ZeroBounce
ZEROBOUNCE_API_KEY=
# Ayrshare
ENRICHLAYER_API_KEY=
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
SMARTLEAD_API_KEY=
ZEROBOUNCE_API_KEY=
## ===== OPTIONAL API KEYS END ===== ##
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400
# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs
# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false
# Cloud Storage Configuration
# Cleanup interval for expired files (hours between cleanup runs, 1-24 hours)
CLOUD_STORAGE_CLEANUP_INTERVAL_HOURS=6
# Other Services
AUTOMOD_API_KEY=
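For the two Fernet-format secrets above (ENCRYPTION_KEY and UNSUBSCRIBE_SECRET_KEY), the inline generation comment can be run as a small one-off script; this is only a convenience sketch, not platform code:

from cryptography.fernet import Fernet

# Prints fresh values to paste into the .env file; run once per secret you need.
print("ENCRYPTION_KEY=" + Fernet.generate_key().decode())
print("UNSUBSCRIBE_SECRET_KEY=" + Fernet.generate_key().decode())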

View File

@@ -1,3 +1,4 @@
.env
database.db
database.db-journal
dev.db

View File

@@ -8,14 +8,14 @@ WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
RUN apt-get update --allow-releaseinfo-change --fix-missing
# Install build dependencies
RUN apt-get install -y build-essential
RUN apt-get install -y libpq5
RUN apt-get install -y libz-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y postgresql-client
# Update package list and install build dependencies in a single layer
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y \
build-essential \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
@@ -68,6 +68,12 @@ COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.tom
WORKDIR /app/autogpt_platform/backend
FROM server_dependencies AS migrate
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
FROM server_dependencies AS server
COPY autogpt_platform/backend /app/autogpt_platform/backend

View File

@@ -43,11 +43,11 @@ def main(**kwargs):
run_processes(
DatabaseManager().set_log_level("warning"),
ExecutionManager(),
Scheduler(),
NotificationManager(),
WebsocketServer(),
AgentServer(),
ExecutionManager(),
**kwargs,
)

View File

@@ -1,4 +1,3 @@
import asyncio
import logging
from typing import Any, Optional
@@ -15,7 +14,8 @@ from backend.data.block import (
)
from backend.data.execution import ExecutionStatus
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util import json, retry
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
_logger = logging.getLogger(__name__)
@@ -49,7 +49,7 @@ class AgentExecutorBlock(Block):
@classmethod
def get_mismatch_error(cls, data: BlockInput) -> str | None:
return json.validate_with_jsonschema(cls.get_input_schema(data), data)
return validate_with_jsonschema(cls.get_input_schema(data), data)
class Output(BlockSchema):
pass
@@ -95,23 +95,14 @@ class AgentExecutorBlock(Block):
logger=logger,
):
yield name, data
except asyncio.CancelledError:
except BaseException as e:
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
logger.warning(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} was cancelled."
)
except Exception as e:
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
logger.error(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} failed: {e}, execution is stopped."
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} failed: {e.__class__.__name__} {str(e)}; execution is stopped."
)
raise
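The exception-handling change above collapses the separate CancelledError and Exception branches into a single `except BaseException`, so the stop/cleanup path also runs when the surrounding task is cancelled, and the original outcome is re-raised afterwards. The general shape of that pattern, reduced to a standalone sketch (names are illustrative, not from the codebase):

import asyncio

async def run_with_cleanup(work, cleanup):
    """Await `work`, guaranteeing `cleanup` also runs on cancellation or error."""
    try:
        return await work()
    except BaseException:
        # asyncio.CancelledError inherits from BaseException rather than
        # Exception, so a bare `except Exception` would skip cleanup when the
        # task is cancelled; re-raise to preserve the original outcome.
        await cleanup()
        raise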
@@ -131,6 +122,7 @@ class AgentExecutorBlock(Block):
log_id = f"Graph #{graph_id}-V{graph_version}, exec-id: {graph_exec_id}"
logger.info(f"Starting execution of {log_id}")
yielded_node_exec_ids = set()
async for event in event_bus.listen(
user_id=user_id,
@@ -162,6 +154,14 @@ class AgentExecutorBlock(Block):
f"Execution {log_id} produced input {event.input_data} output {event.output_data}"
)
if event.node_exec_id in yielded_node_exec_ids:
logger.warning(
f"{log_id} received duplicate event for node execution {event.node_exec_id}"
)
continue
else:
yielded_node_exec_ids.add(event.node_exec_id)
if not event.block_id:
logger.warning(f"{log_id} received event without block_id {event}")
continue
@@ -181,7 +181,7 @@ class AgentExecutorBlock(Block):
)
yield output_name, output_data
@retry.func_retry
@func_retry
async def _stop(
self,
graph_exec_id: str,
@@ -197,7 +197,8 @@ class AgentExecutorBlock(Block):
await execution_utils.stop_graph_execution(
graph_exec_id=graph_exec_id,
user_id=user_id,
wait_timeout=3600,
)
logger.info(f"Execution {log_id} stopped successfully.")
except Exception as e:
logger.error(f"Failed to stop execution {log_id}: {e}")
except TimeoutError as e:
logger.error(f"Execution {log_id} stop timed out: {e}")

View File

@@ -9,6 +9,24 @@ from backend.sdk import BaseModel, Credentials, Requests
logger = getLogger(__name__)
def _convert_bools(
obj: Any,
) -> Any: # noqa: ANN401 allow Any for deep conversion utility
"""Recursively walk *obj* and coerce string booleans to real booleans."""
if isinstance(obj, str):
lowered = obj.lower()
if lowered == "true":
return True
if lowered == "false":
return False
return obj
if isinstance(obj, list):
return [_convert_bools(item) for item in obj]
if isinstance(obj, dict):
return {k: _convert_bools(v) for k, v in obj.items()}
return obj
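A quick illustration of what the helper above does to a nested payload (the payload itself is made up for the example):

assert _convert_bools(
    {"typecast": "true", "records": [{"fields": {"Archived": "False"}}], "name": "Tasks"}
) == {"typecast": True, "records": [{"fields": {"Archived": False}}], "name": "Tasks"}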
class WebhookFilters(BaseModel):
dataTypes: list[str]
changeTypes: list[str] | None = None
@@ -579,7 +597,7 @@ async def update_table(
response = await Requests().patch(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables/{table_id}",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -609,7 +627,7 @@ async def create_field(
response = await Requests().post(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables/{table_id}/fields",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -633,7 +651,7 @@ async def update_field(
response = await Requests().patch(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables/{table_id}/fields/{field_id}",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -691,7 +709,7 @@ async def list_records(
response = await Requests().get(
f"https://api.airtable.com/v0/{base_id}/{table_id_or_name}",
headers={"Authorization": credentials.auth_header()},
params=params,
json=_convert_bools(params),
)
return response.json()
@@ -720,20 +738,22 @@ async def update_multiple_records(
typecast: bool | None = None,
) -> dict[str, dict[str, dict[str, str]]]:
params: dict[str, str | dict[str, list[str]] | list[dict[str, dict[str, str]]]] = {}
params: dict[
str, str | bool | dict[str, list[str]] | list[dict[str, dict[str, str]]]
] = {}
if perform_upsert:
params["performUpsert"] = perform_upsert
if return_fields_by_field_id:
params["returnFieldsByFieldId"] = str(return_fields_by_field_id)
if typecast:
params["typecast"] = str(typecast)
params["typecast"] = typecast
params["records"] = records
params["records"] = [_convert_bools(record) for record in records]
response = await Requests().patch(
f"https://api.airtable.com/v0/{base_id}/{table_id_or_name}",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -747,18 +767,20 @@ async def update_record(
typecast: bool | None = None,
fields: dict[str, Any] | None = None,
) -> dict[str, dict[str, dict[str, str]]]:
params: dict[str, str | dict[str, Any] | list[dict[str, dict[str, str]]]] = {}
params: dict[str, str | bool | dict[str, Any] | list[dict[str, dict[str, str]]]] = (
{}
)
if return_fields_by_field_id:
params["returnFieldsByFieldId"] = str(return_fields_by_field_id)
params["returnFieldsByFieldId"] = return_fields_by_field_id
if typecast:
params["typecast"] = str(typecast)
params["typecast"] = typecast
if fields:
params["fields"] = fields
response = await Requests().patch(
f"https://api.airtable.com/v0/{base_id}/{table_id_or_name}/{record_id}",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -779,21 +801,22 @@ async def create_record(
len(records) <= 10
), "Only up to 10 records can be provided when using records"
params: dict[str, str | dict[str, Any] | list[dict[str, Any]]] = {}
params: dict[str, str | bool | dict[str, Any] | list[dict[str, Any]]] = {}
if fields:
params["fields"] = fields
if records:
params["records"] = records
if return_fields_by_field_id:
params["returnFieldsByFieldId"] = str(return_fields_by_field_id)
params["returnFieldsByFieldId"] = return_fields_by_field_id
if typecast:
params["typecast"] = str(typecast)
params["typecast"] = typecast
response = await Requests().post(
f"https://api.airtable.com/v0/{base_id}/{table_id_or_name}",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -850,7 +873,7 @@ async def create_webhook(
response = await Requests().post(
f"https://api.airtable.com/v0/bases/{base_id}/webhooks",
headers={"Authorization": credentials.auth_header()},
json=params,
json=_convert_bools(params),
)
return response.json()
@@ -1195,7 +1218,7 @@ async def create_base(
"Authorization": credentials.auth_header(),
"Content-Type": "application/json",
},
json=params,
json=_convert_bools(params),
)
return response.json()

View File

@@ -159,6 +159,7 @@ class AirtableOAuthHandler(BaseOAuthHandler):
logger.info("Successfully refreshed tokens")
new_credentials = OAuth2Credentials(
id=credentials.id,
access_token=SecretStr(response.access_token),
refresh_token=SecretStr(response.refresh_token),
access_token_expires_at=int(time.time()) + response.expires_in,

View File

@@ -4,11 +4,19 @@ from typing import Optional
from pydantic import BaseModel, Field
from backend.data.block import BlockSchema
from backend.data.model import SchemaField
from backend.data.model import SchemaField, UserIntegrations
from backend.integrations.ayrshare import AyrshareClient
from backend.util.clients import get_database_manager_async_client
from backend.util.exceptions import MissingConfigError
async def get_profile_key(user_id: str):
user_integrations: UserIntegrations = (
await get_database_manager_async_client().get_user_integrations(user_id)
)
return user_integrations.managed_credentials.ayrshare_profile_key
class BaseAyrshareInput(BlockSchema):
"""Base input model for Ayrshare social media posts with common fields."""

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToBlueskyBlock(Block):
@@ -58,10 +57,12 @@ class PostToBlueskyBlock(Block):
self,
input_data: "PostToBlueskyBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Bluesky with Bluesky-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,14 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, CarouselItem, create_ayrshare_client
from ._util import (
BaseAyrshareInput,
CarouselItem,
create_ayrshare_client,
get_profile_key,
)
class PostToFacebookBlock(Block):
@@ -116,10 +120,11 @@ class PostToFacebookBlock(Block):
self,
input_data: "PostToFacebookBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Facebook with Facebook-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToGMBBlock(Block):
@@ -111,9 +110,10 @@ class PostToGMBBlock(Block):
)
async def run(
self, input_data: "PostToGMBBlock.Input", *, profile_key: SecretStr, **kwargs
self, input_data: "PostToGMBBlock.Input", *, user_id: str, **kwargs
) -> BlockOutput:
"""Post to Google My Business with GMB-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -8,10 +8,14 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, InstagramUserTag, create_ayrshare_client
from ._util import (
BaseAyrshareInput,
InstagramUserTag,
create_ayrshare_client,
get_profile_key,
)
class PostToInstagramBlock(Block):
@@ -108,10 +112,11 @@ class PostToInstagramBlock(Block):
self,
input_data: "PostToInstagramBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Instagram with Instagram-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToLinkedInBlock(Block):
@@ -113,10 +112,11 @@ class PostToLinkedInBlock(Block):
self,
input_data: "PostToLinkedInBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to LinkedIn with LinkedIn-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,14 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, PinterestCarouselOption, create_ayrshare_client
from ._util import (
BaseAyrshareInput,
PinterestCarouselOption,
create_ayrshare_client,
get_profile_key,
)
class PostToPinterestBlock(Block):
@@ -88,10 +92,11 @@ class PostToPinterestBlock(Block):
self,
input_data: "PostToPinterestBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Pinterest with Pinterest-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToRedditBlock(Block):
@@ -36,8 +35,9 @@ class PostToRedditBlock(Block):
)
async def run(
self, input_data: "PostToRedditBlock.Input", *, profile_key: SecretStr, **kwargs
self, input_data: "PostToRedditBlock.Input", *, user_id: str, **kwargs
) -> BlockOutput:
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToSnapchatBlock(Block):
@@ -63,10 +62,11 @@ class PostToSnapchatBlock(Block):
self,
input_data: "PostToSnapchatBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Snapchat with Snapchat-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToTelegramBlock(Block):
@@ -58,10 +57,11 @@ class PostToTelegramBlock(Block):
self,
input_data: "PostToTelegramBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Telegram with Telegram-specific validation."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToThreadsBlock(Block):
@@ -51,10 +50,11 @@ class PostToThreadsBlock(Block):
self,
input_data: "PostToThreadsBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Threads with Threads-specific validation."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -1,3 +1,5 @@
from enum import Enum
from backend.integrations.ayrshare import PostIds, PostResponse, SocialPlatform
from backend.sdk import (
Block,
@@ -6,10 +8,15 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class TikTokVisibility(str, Enum):
PUBLIC = "public"
PRIVATE = "private"
FOLLOWERS = "followers"
class PostToTikTokBlock(Block):
@@ -21,7 +28,6 @@ class PostToTikTokBlock(Block):
# Override post field to include TikTok-specific information
post: str = SchemaField(
description="The post text (max 2,200 chars, empty string allowed). Use @handle to mention users. Line breaks will be ignored.",
default="",
advanced=False,
)
@@ -34,7 +40,7 @@ class PostToTikTokBlock(Block):
# TikTok-specific options
auto_add_music: bool = SchemaField(
description="Automatically add recommended music to image posts",
description="Whether to automatically add recommended music to the post. If you set this field to true, you can change the music later in the TikTok app.",
default=False,
advanced=True,
)
@@ -54,17 +60,17 @@ class PostToTikTokBlock(Block):
advanced=True,
)
is_ai_generated: bool = SchemaField(
description="Label content as AI-generated (video only)",
description="If you enable the toggle, your video will be labeled as “Creator labeled as AI-generated” once posted and cant be changed. The “Creator labeled as AI-generated” label indicates that the content was completely AI-generated or significantly edited with AI.",
default=False,
advanced=True,
)
is_branded_content: bool = SchemaField(
description="Label as branded content (paid partnership)",
description="Whether to enable the Branded Content toggle. If this field is set to true, the video will be labeled as Branded Content, indicating you are in a paid partnership with a brand. A “Paid partnership” label will be attached to the video.",
default=False,
advanced=True,
)
is_brand_organic: bool = SchemaField(
description="Label as brand organic content (promotional)",
description="Whether to enable the Brand Organic Content toggle. If this field is set to true, the video will be labeled as Brand Organic Content, indicating you are promoting yourself or your own business. A “Promotional content” label will be attached to the video.",
default=False,
advanced=True,
)
@@ -81,9 +87,9 @@ class PostToTikTokBlock(Block):
default=0,
advanced=True,
)
visibility: str = SchemaField(
visibility: TikTokVisibility = SchemaField(
description="Post visibility: 'public', 'private', 'followers', or 'friends'",
default="public",
default=TikTokVisibility.PUBLIC,
advanced=True,
)
draft: bool = SchemaField(
@@ -98,7 +104,6 @@ class PostToTikTokBlock(Block):
def __init__(self):
super().__init__(
disabled=True,
id="7faf4b27-96b0-4f05-bf64-e0de54ae74e1",
description="Post to TikTok using Ayrshare",
categories={BlockCategory.SOCIAL},
@@ -108,9 +113,10 @@ class PostToTikTokBlock(Block):
)
async def run(
self, input_data: "PostToTikTokBlock.Input", *, profile_key: SecretStr, **kwargs
self, input_data: "PostToTikTokBlock.Input", *, user_id: str, **kwargs
) -> BlockOutput:
"""Post to TikTok with TikTok-specific validation and options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
@@ -160,12 +166,6 @@ class PostToTikTokBlock(Block):
yield "error", f"Image cover index {input_data.image_cover_index} is out of range (max: {len(input_data.media_urls) - 1})"
return
# Validate visibility option
valid_visibility = ["public", "private", "followers", "friends"]
if input_data.visibility not in valid_visibility:
yield "error", f"TikTok visibility must be one of: {', '.join(valid_visibility)}"
return
# Check for PNG files (not supported)
has_png = any(url.lower().endswith(".png") for url in input_data.media_urls)
if has_png:
@@ -218,8 +218,8 @@ class PostToTikTokBlock(Block):
if input_data.title:
tiktok_options["title"] = input_data.title
if input_data.visibility != "public":
tiktok_options["visibility"] = input_data.visibility
if input_data.visibility != TikTokVisibility.PUBLIC:
tiktok_options["visibility"] = input_data.visibility.value
response = await client.create_post(
post=input_data.post,

View File

@@ -6,10 +6,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class PostToXBlock(Block):
@@ -116,10 +115,11 @@ class PostToXBlock(Block):
self,
input_data: "PostToXBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to X / Twitter with enhanced X-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

View File

@@ -9,10 +9,9 @@ from backend.sdk import (
BlockSchema,
BlockType,
SchemaField,
SecretStr,
)
from ._util import BaseAyrshareInput, create_ayrshare_client
from ._util import BaseAyrshareInput, create_ayrshare_client, get_profile_key
class YouTubeVisibility(str, Enum):
@@ -138,10 +137,12 @@ class PostToYouTubeBlock(Block):
self,
input_data: "PostToYouTubeBlock.Input",
*,
profile_key: SecretStr,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to YouTube with YouTube-specific validation and options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return

File diff suppressed because it is too large

View File

@@ -0,0 +1,408 @@
"""
API module for Enrichlayer integration.
This module provides a client for interacting with the Enrichlayer API,
which allows fetching LinkedIn profile data and related information.
"""
import datetime
import enum
import logging
from json import JSONDecodeError
from typing import Any, Optional, TypeVar
from pydantic import BaseModel, Field
from backend.data.model import APIKeyCredentials
from backend.util.request import Requests
logger = logging.getLogger(__name__)
T = TypeVar("T")
class EnrichlayerAPIException(Exception):
"""Exception raised for Enrichlayer API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class FallbackToCache(enum.Enum):
ON_ERROR = "on-error"
NEVER = "never"
class UseCache(enum.Enum):
IF_PRESENT = "if-present"
NEVER = "never"
class SocialMediaProfiles(BaseModel):
"""Social media profiles model."""
twitter: Optional[str] = None
facebook: Optional[str] = None
github: Optional[str] = None
class Experience(BaseModel):
"""Experience model for LinkedIn profiles."""
company: Optional[str] = None
title: Optional[str] = None
description: Optional[str] = None
location: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
company_linkedin_profile_url: Optional[str] = None
class Education(BaseModel):
"""Education model for LinkedIn profiles."""
school: Optional[str] = None
degree_name: Optional[str] = None
field_of_study: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
school_linkedin_profile_url: Optional[str] = None
class PersonProfileResponse(BaseModel):
"""Response model for LinkedIn person profile.
This model represents the response from Enrichlayer's LinkedIn profile API.
The API returns comprehensive profile data including work experience,
education, skills, and contact information (when available).
Example API Response:
{
"public_identifier": "johnsmith",
"full_name": "John Smith",
"occupation": "Software Engineer at Tech Corp",
"experiences": [
{
"company": "Tech Corp",
"title": "Software Engineer",
"starts_at": {"year": 2020, "month": 1}
}
],
"education": [...],
"skills": ["Python", "JavaScript", ...]
}
"""
public_identifier: Optional[str] = None
profile_pic_url: Optional[str] = None
full_name: Optional[str] = None
first_name: Optional[str] = None
last_name: Optional[str] = None
occupation: Optional[str] = None
headline: Optional[str] = None
summary: Optional[str] = None
country: Optional[str] = None
country_full_name: Optional[str] = None
city: Optional[str] = None
state: Optional[str] = None
experiences: Optional[list[Experience]] = None
education: Optional[list[Education]] = None
languages: Optional[list[str]] = None
skills: Optional[list[str]] = None
inferred_salary: Optional[dict[str, Any]] = None
personal_email: Optional[str] = None
personal_contact_number: Optional[str] = None
social_media_profiles: Optional[SocialMediaProfiles] = None
extra: Optional[dict[str, Any]] = None
class SimilarProfile(BaseModel):
"""Similar profile model for LinkedIn person lookup."""
similarity: float
linkedin_profile_url: str
class PersonLookupResponse(BaseModel):
"""Response model for LinkedIn person lookup.
This model represents the response from Enrichlayer's person lookup API.
The API returns a LinkedIn profile URL and similarity scores when
searching for a person by name and company.
Example API Response:
{
"url": "https://www.linkedin.com/in/johnsmith/",
"name_similarity_score": 0.95,
"company_similarity_score": 0.88,
"title_similarity_score": 0.75,
"location_similarity_score": 0.60
}
"""
url: str | None = None
name_similarity_score: float | None
company_similarity_score: float | None
title_similarity_score: float | None
location_similarity_score: float | None
last_updated: datetime.datetime | None = None
profile: PersonProfileResponse | None = None
class RoleLookupResponse(BaseModel):
"""Response model for LinkedIn role lookup.
This model represents the response from Enrichlayer's role lookup API.
The API returns LinkedIn profile data for a specific role at a company.
Example API Response:
{
"linkedin_profile_url": "https://www.linkedin.com/in/johnsmith/",
"profile_data": {...} // Full PersonProfileResponse data when enrich_profile=True
}
"""
linkedin_profile_url: Optional[str] = None
profile_data: Optional[PersonProfileResponse] = None
class ProfilePictureResponse(BaseModel):
"""Response model for LinkedIn profile picture.
This model represents the response from Enrichlayer's profile picture API.
The API returns a URL to the person's LinkedIn profile picture.
Example API Response:
{
"tmp_profile_pic_url": "https://media.licdn.com/dms/image/..."
}
"""
tmp_profile_pic_url: str = Field(
..., description="URL of the profile picture", alias="tmp_profile_pic_url"
)
@property
def profile_picture_url(self) -> str:
"""Backward compatibility property for profile_picture_url."""
return self.tmp_profile_pic_url
class EnrichlayerClient:
"""Client for interacting with the Enrichlayer API."""
API_BASE_URL = "https://enrichlayer.com/api/v2"
def __init__(
self,
credentials: Optional[APIKeyCredentials] = None,
custom_requests: Optional[Requests] = None,
):
"""
Initialize the Enrichlayer client.
Args:
credentials: The credentials to use for authentication.
custom_requests: Custom Requests instance for testing.
"""
if custom_requests:
self._requests = custom_requests
else:
headers: dict[str, str] = {
"Content-Type": "application/json",
}
if credentials:
headers["Authorization"] = (
f"Bearer {credentials.api_key.get_secret_value()}"
)
self._requests = Requests(
extra_headers=headers,
raise_for_status=False,
)
async def _handle_response(self, response) -> Any:
"""
Handle API response and check for errors.
Args:
response: The response object from the request.
Returns:
The response data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
if not response.ok:
try:
error_data = response.json()
error_message = error_data.get("message", "")
except JSONDecodeError:
error_message = response.text
raise EnrichlayerAPIException(
f"Enrichlayer API request failed ({response.status_code}): {error_message}",
response.status_code,
)
return response.json()
async def fetch_profile(
self,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
) -> PersonProfileResponse:
"""
Fetch a LinkedIn profile with optional parameters.
Args:
linkedin_url: The LinkedIn profile URL to fetch.
fallback_to_cache: Cache usage if live fetch fails ('on-error' or 'never').
use_cache: Cache utilization ('if-present' or 'never').
include_skills: Whether to include skills data.
include_inferred_salary: Whether to include inferred salary data.
include_personal_email: Whether to include personal email.
include_personal_contact_number: Whether to include personal contact number.
include_social_media: Whether to include social media profiles.
include_extra: Whether to include additional data.
Returns:
The LinkedIn profile data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"url": linkedin_url,
"fallback_to_cache": fallback_to_cache.value.lower(),
"use_cache": use_cache.value.lower(),
}
if include_skills:
params["skills"] = "include"
if include_inferred_salary:
params["inferred_salary"] = "include"
if include_personal_email:
params["personal_email"] = "include"
if include_personal_contact_number:
params["personal_contact_number"] = "include"
if include_social_media:
params["twitter_profile_id"] = "include"
params["facebook_profile_id"] = "include"
params["github_profile_id"] = "include"
if include_extra:
params["extra"] = "include"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile", params=params
)
return PersonProfileResponse(**await self._handle_response(response))
async def lookup_person(
self,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
) -> PersonLookupResponse:
"""
Look up a LinkedIn profile by person's information.
Args:
first_name: The person's first name.
last_name: The person's last name.
company_domain: The domain of the company they work for.
location: The person's location.
title: The person's job title.
include_similarity_checks: Whether to include similarity checks.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {"first_name": first_name, "company_domain": company_domain}
if last_name:
params["last_name"] = last_name
if location:
params["location"] = location
if title:
params["title"] = title
if include_similarity_checks:
params["similarity_checks"] = "include"
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile/resolve", params=params
)
return PersonLookupResponse(**await self._handle_response(response))
async def lookup_role(
self, role: str, company_name: str, enrich_profile: bool = False
) -> RoleLookupResponse:
"""
Look up a LinkedIn profile by role in a company.
Args:
role: The role title (e.g., CEO, CTO).
company_name: The name of the company.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"role": role,
"company_name": company_name,
}
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/find/company/role", params=params
)
return RoleLookupResponse(**await self._handle_response(response))
async def get_profile_picture(
self, linkedin_profile_url: str
) -> ProfilePictureResponse:
"""
Get a LinkedIn profile picture URL.
Args:
linkedin_profile_url: The LinkedIn profile URL.
Returns:
The profile picture URL.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"linkedin_person_profile_url": linkedin_profile_url,
}
response = await self._requests.get(
f"{self.API_BASE_URL}/person/profile-picture", params=params
)
return ProfilePictureResponse(**await self._handle_response(response))
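For orientation, a minimal usage sketch of the client above; it assumes a real APIKeyCredentials instance (called credentials here) and an async context, and exercises only a couple of the optional flags:

import asyncio

async def demo(credentials):
    # Hypothetical standalone call; in the platform the blocks further down drive this client.
    client = EnrichlayerClient(credentials)
    profile = await client.fetch_profile(
        linkedin_url="https://www.linkedin.com/in/williamhgates/",
        include_skills=True,
        include_social_media=True,
    )
    # Fields such as full_name and occupation appear on PersonProfileResponse in the test data below.
    print(profile.full_name, profile.occupation)

# asyncio.run(demo(credentials))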

View File

@@ -0,0 +1,34 @@
"""
Authentication module for Enrichlayer API integration.
This module provides credential types and test credentials for the Enrichlayer API.
"""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
# Define the type of credentials input expected for Enrichlayer API
EnrichlayerCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ENRICHLAYER], Literal["api_key"]
]
# Mock credentials for testing Enrichlayer API integration
TEST_CREDENTIALS = APIKeyCredentials(
id="1234a567-89bc-4def-ab12-3456cdef7890",
provider="enrichlayer",
api_key=SecretStr("mock-enrichlayer-api-key"),
title="Mock Enrichlayer API key",
expires_at=None,
)
# Dictionary representation of test credentials for input fields
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
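For comparison, a sketch of how a non-test credential of the same shape might be constructed; the id and api_key values below are placeholders, not real identifiers:

live_credentials = APIKeyCredentials(
    id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
    provider="enrichlayer",
    api_key=SecretStr("your-enrichlayer-api-key"),  # placeholder secret
    title="Enrichlayer API key",
    expires_at=None,
)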

View File

@@ -0,0 +1,527 @@
"""
Block definitions for Enrichlayer API integration.
This module implements blocks for interacting with the Enrichlayer API,
which provides access to LinkedIn profile data and related information.
"""
import logging
from typing import Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.type import MediaFileType
from ._api import (
EnrichlayerClient,
Experience,
FallbackToCache,
PersonLookupResponse,
PersonProfileResponse,
RoleLookupResponse,
UseCache,
)
from ._auth import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, EnrichlayerCredentialsInput
logger = logging.getLogger(__name__)
class GetLinkedinProfileBlock(Block):
"""Block to fetch LinkedIn profile data using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfileBlock."""
linkedin_url: str = SchemaField(
description="LinkedIn profile URL to fetch data from",
placeholder="https://www.linkedin.com/in/username/",
)
fallback_to_cache: FallbackToCache = SchemaField(
description="Cache usage if live fetch fails",
default=FallbackToCache.ON_ERROR,
advanced=True,
)
use_cache: UseCache = SchemaField(
description="Cache utilization strategy",
default=UseCache.IF_PRESENT,
advanced=True,
)
include_skills: bool = SchemaField(
description="Include skills data",
default=False,
advanced=True,
)
include_inferred_salary: bool = SchemaField(
description="Include inferred salary data",
default=False,
advanced=True,
)
include_personal_email: bool = SchemaField(
description="Include personal email",
default=False,
advanced=True,
)
include_personal_contact_number: bool = SchemaField(
description="Include personal contact number",
default=False,
advanced=True,
)
include_social_media: bool = SchemaField(
description="Include social media profiles",
default=False,
advanced=True,
)
include_extra: bool = SchemaField(
description="Include additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfileBlock."""
profile: PersonProfileResponse = SchemaField(
description="LinkedIn profile data"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfileBlock."""
super().__init__(
id="f6e0ac73-4f1d-4acb-b4b7-b67066c5984e",
description="Fetch LinkedIn profile data using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfileBlock.Input,
output_schema=GetLinkedinProfileBlock.Output,
test_input={
"linkedin_url": "https://www.linkedin.com/in/williamhgates/",
"include_skills": True,
"include_social_media": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile",
PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_fetch_profile": lambda *args, **kwargs: PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
},
)
@staticmethod
async def _fetch_profile(
credentials: APIKeyCredentials,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
):
client = EnrichlayerClient(credentials)
profile = await client.fetch_profile(
linkedin_url=linkedin_url,
fallback_to_cache=fallback_to_cache,
use_cache=use_cache,
include_skills=include_skills,
include_inferred_salary=include_inferred_salary,
include_personal_email=include_personal_email,
include_personal_contact_number=include_personal_contact_number,
include_social_media=include_social_media,
include_extra=include_extra,
)
return profile
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to fetch LinkedIn profile data.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile = await self._fetch_profile(
credentials=credentials,
linkedin_url=input_data.linkedin_url,
fallback_to_cache=input_data.fallback_to_cache,
use_cache=input_data.use_cache,
include_skills=input_data.include_skills,
include_inferred_salary=input_data.include_inferred_salary,
include_personal_email=input_data.include_personal_email,
include_personal_contact_number=input_data.include_personal_contact_number,
include_social_media=input_data.include_social_media,
include_extra=input_data.include_extra,
)
yield "profile", profile
except Exception as e:
logger.error(f"Error fetching LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinPersonLookupBlock(Block):
"""Block to look up LinkedIn profiles by person's information using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinPersonLookupBlock."""
first_name: str = SchemaField(
description="Person's first name",
placeholder="John",
advanced=False,
)
last_name: str | None = SchemaField(
description="Person's last name",
placeholder="Doe",
default=None,
advanced=False,
)
company_domain: str = SchemaField(
description="Domain of the company they work for (optional)",
placeholder="example.com",
advanced=False,
)
location: Optional[str] = SchemaField(
description="Person's location (optional)",
placeholder="San Francisco",
default=None,
)
title: Optional[str] = SchemaField(
description="Person's job title (optional)",
placeholder="CEO",
default=None,
)
include_similarity_checks: bool = SchemaField(
description="Include similarity checks",
default=False,
advanced=True,
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinPersonLookupBlock."""
lookup_result: PersonLookupResponse = SchemaField(
description="LinkedIn profile lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinPersonLookupBlock."""
super().__init__(
id="d237a98a-5c4b-4a1c-b9e3-e6f9a6c81df7",
description="Look up LinkedIn profiles by person information using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinPersonLookupBlock.Input,
output_schema=LinkedinPersonLookupBlock.Output,
test_input={
"first_name": "Bill",
"last_name": "Gates",
"company_domain": "gatesfoundation.org",
"include_similarity_checks": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"lookup_result",
PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_person": lambda *args, **kwargs: PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
)
},
)
@staticmethod
async def _lookup_person(
credentials: APIKeyCredentials,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
lookup_result = await client.lookup_person(
first_name=first_name,
last_name=last_name,
company_domain=company_domain,
location=location,
title=title,
include_similarity_checks=include_similarity_checks,
enrich_profile=enrich_profile,
)
return lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
lookup_result = await self._lookup_person(
credentials=credentials,
first_name=input_data.first_name,
last_name=input_data.last_name,
company_domain=input_data.company_domain,
location=input_data.location,
title=input_data.title,
include_similarity_checks=input_data.include_similarity_checks,
enrich_profile=input_data.enrich_profile,
)
yield "lookup_result", lookup_result
except Exception as e:
logger.error(f"Error looking up LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinRoleLookupBlock(Block):
"""Block to look up LinkedIn profiles by role in a company using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinRoleLookupBlock."""
role: str = SchemaField(
description="Role title (e.g., CEO, CTO)",
placeholder="CEO",
)
company_name: str = SchemaField(
description="Name of the company",
placeholder="Microsoft",
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinRoleLookupBlock."""
role_lookup_result: RoleLookupResponse = SchemaField(
description="LinkedIn role lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinRoleLookupBlock."""
super().__init__(
id="3b9fc742-06d4-49c7-b5ce-7e302dd7c8a7",
description="Look up LinkedIn profiles by role in a company using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinRoleLookupBlock.Input,
output_schema=LinkedinRoleLookupBlock.Output,
test_input={
"role": "Co-chair",
"company_name": "Gates Foundation",
"enrich_profile": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"role_lookup_result",
RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_role": lambda *args, **kwargs: RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
},
)
@staticmethod
async def _lookup_role(
credentials: APIKeyCredentials,
role: str,
company_name: str,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
role_lookup_result = await client.lookup_role(
role=role,
company_name=company_name,
enrich_profile=enrich_profile,
)
return role_lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles by role.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
role_lookup_result = await self._lookup_role(
credentials=credentials,
role=input_data.role,
company_name=input_data.company_name,
enrich_profile=input_data.enrich_profile,
)
yield "role_lookup_result", role_lookup_result
except Exception as e:
logger.error(f"Error looking up role in company: {str(e)}")
yield "error", str(e)
class GetLinkedinProfilePictureBlock(Block):
"""Block to get LinkedIn profile pictures using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfilePictureBlock."""
linkedin_profile_url: str = SchemaField(
description="LinkedIn profile URL",
placeholder="https://www.linkedin.com/in/username/",
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfilePictureBlock."""
profile_picture_url: MediaFileType = SchemaField(
description="LinkedIn profile picture URL"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfilePictureBlock."""
super().__init__(
id="68d5a942-9b3f-4e9a-b7c1-d96ea4321f0d",
description="Get LinkedIn profile pictures using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfilePictureBlock.Input,
output_schema=GetLinkedinProfilePictureBlock.Output,
test_input={
"linkedin_profile_url": "https://www.linkedin.com/in/williamhgates/",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile_picture_url",
"https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_get_profile_picture": lambda *args, **kwargs: "https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
},
)
@staticmethod
async def _get_profile_picture(
credentials: APIKeyCredentials, linkedin_profile_url: str
):
client = EnrichlayerClient(credentials=credentials)
profile_picture_response = await client.get_profile_picture(
linkedin_profile_url=linkedin_profile_url,
)
return profile_picture_response.profile_picture_url
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to get LinkedIn profile pictures.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile_picture = await self._get_profile_picture(
credentials=credentials,
linkedin_profile_url=input_data.linkedin_profile_url,
)
yield "profile_picture_url", profile_picture
except Exception as e:
logger.error(f"Error getting profile picture: {str(e)}")
yield "error", str(e)

View File

@@ -0,0 +1,247 @@
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
# Enum definitions based on available options
class WebsetStatus(str, Enum):
IDLE = "idle"
PENDING = "pending"
RUNNING = "running"
PAUSED = "paused"
class WebsetSearchStatus(str, Enum):
CREATED = "created"
    # Add more if known; based on the example it's "created"
class ImportStatus(str, Enum):
PENDING = "pending"
# Add more if known
class ImportFormat(str, Enum):
CSV = "csv"
# Add more if known
class EnrichmentStatus(str, Enum):
PENDING = "pending"
# Add more if known
class EnrichmentFormat(str, Enum):
TEXT = "text"
# Add more if known
class MonitorStatus(str, Enum):
ENABLED = "enabled"
# Add more if known
class MonitorBehaviorType(str, Enum):
SEARCH = "search"
# Add more if known
class MonitorRunStatus(str, Enum):
CREATED = "created"
# Add more if known
class CanceledReason(str, Enum):
WEBSET_DELETED = "webset_deleted"
# Add more if known
class FailedReason(str, Enum):
INVALID_FORMAT = "invalid_format"
# Add more if known
class Confidence(str, Enum):
HIGH = "high"
# Add more if known
# Nested models
class Entity(BaseModel):
type: str
class Criterion(BaseModel):
description: str
successRate: Optional[int] = None
class ExcludeItem(BaseModel):
source: str = Field(default="import")
id: str
class Relationship(BaseModel):
definition: str
limit: Optional[float] = None
class ScopeItem(BaseModel):
source: str = Field(default="import")
id: str
relationship: Optional[Relationship] = None
class Progress(BaseModel):
found: int
analyzed: int
completion: int
timeLeft: int
class Bounds(BaseModel):
min: int
max: int
class Expected(BaseModel):
total: int
confidence: str = Field(default="high") # Use str or Confidence enum
bounds: Bounds
class Recall(BaseModel):
expected: Expected
reasoning: str
class WebsetSearch(BaseModel):
id: str
object: str = Field(default="webset_search")
status: str = Field(default="created") # Or use WebsetSearchStatus
websetId: str
query: str
entity: Entity
criteria: List[Criterion]
count: int
behavior: str = Field(default="override")
exclude: List[ExcludeItem]
scope: List[ScopeItem]
progress: Progress
recall: Recall
metadata: Dict[str, Any] = Field(default_factory=dict)
canceledAt: Optional[datetime] = None
canceledReason: Optional[str] = Field(default=None) # Or use CanceledReason
createdAt: datetime
updatedAt: datetime
class ImportEntity(BaseModel):
type: str
class Import(BaseModel):
id: str
object: str = Field(default="import")
status: str = Field(default="pending") # Or use ImportStatus
format: str = Field(default="csv") # Or use ImportFormat
entity: ImportEntity
title: str
count: int
metadata: Dict[str, Any] = Field(default_factory=dict)
failedReason: Optional[str] = Field(default=None) # Or use FailedReason
failedAt: Optional[datetime] = None
failedMessage: Optional[str] = None
createdAt: datetime
updatedAt: datetime
class Option(BaseModel):
label: str
class WebsetEnrichment(BaseModel):
id: str
object: str = Field(default="webset_enrichment")
status: str = Field(default="pending") # Or use EnrichmentStatus
websetId: str
title: str
description: str
format: str = Field(default="text") # Or use EnrichmentFormat
options: List[Option]
instructions: str
metadata: Dict[str, Any] = Field(default_factory=dict)
createdAt: datetime
updatedAt: datetime
class Cadence(BaseModel):
cron: str
timezone: str = Field(default="Etc/UTC")
class BehaviorConfig(BaseModel):
query: Optional[str] = None
criteria: Optional[List[Criterion]] = None
entity: Optional[Entity] = None
count: Optional[int] = None
behavior: Optional[str] = Field(default=None)
class Behavior(BaseModel):
type: str = Field(default="search") # Or use MonitorBehaviorType
config: BehaviorConfig
class MonitorRun(BaseModel):
id: str
object: str = Field(default="monitor_run")
status: str = Field(default="created") # Or use MonitorRunStatus
monitorId: str
type: str = Field(default="search")
completedAt: Optional[datetime] = None
failedAt: Optional[datetime] = None
failedReason: Optional[str] = None
canceledAt: Optional[datetime] = None
createdAt: datetime
updatedAt: datetime
class Monitor(BaseModel):
id: str
object: str = Field(default="monitor")
status: str = Field(default="enabled") # Or use MonitorStatus
websetId: str
cadence: Cadence
behavior: Behavior
lastRun: Optional[MonitorRun] = None
nextRunAt: Optional[datetime] = None
metadata: Dict[str, Any] = Field(default_factory=dict)
createdAt: datetime
updatedAt: datetime
class Webset(BaseModel):
id: str
object: str = Field(default="webset")
status: WebsetStatus
externalId: Optional[str] = None
title: Optional[str] = None
searches: List[WebsetSearch]
imports: List[Import]
enrichments: List[WebsetEnrichment]
monitors: List[Monitor]
streams: List[Any]
createdAt: datetime
updatedAt: datetime
metadata: Dict[str, Any] = Field(default_factory=dict)
class ListWebsets(BaseModel):
data: List[Webset]
hasMore: bool
nextCursor: Optional[str] = None
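Since these are plain pydantic models, a raw API payload validates straight into them. A minimal sketch with a hypothetical, heavily trimmed response:

payload = {
    "data": [
        {
            "id": "ws_123",
            "status": "idle",
            "searches": [],
            "imports": [],
            "enrichments": [],
            "monitors": [],
            "streams": [],
            "createdAt": "2024-01-01T00:00:00Z",
            "updatedAt": "2024-01-01T00:00:00Z",
        }
    ],
    "hasMore": False,
    "nextCursor": None,
}
listing = ListWebsets.model_validate(payload)
print(listing.data[0].status)  # WebsetStatus.IDLE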

View File

@@ -114,6 +114,7 @@ class ExaWebsetWebhookBlock(Block):
def __init__(self):
super().__init__(
disabled=True,
id="d0204ed8-8b81-408d-8b8d-ed087a546228",
description="Receive webhook notifications for Exa webset events",
categories={BlockCategory.INPUT},

View File

@@ -1,7 +1,33 @@
from typing import Any, Optional
from datetime import datetime
from enum import Enum
from typing import Annotated, Any, Dict, List, Optional
from exa_py import Exa
from exa_py.websets.types import (
CreateCriterionParameters,
CreateEnrichmentParameters,
CreateWebsetParameters,
CreateWebsetParametersSearch,
ExcludeItem,
Format,
ImportItem,
ImportSource,
Option,
ScopeItem,
ScopeRelationship,
ScopeSourceType,
WebsetArticleEntity,
WebsetCompanyEntity,
WebsetCustomEntity,
WebsetPersonEntity,
WebsetResearchPaperEntity,
WebsetStatus,
)
from pydantic import Field
from backend.sdk import (
APIKeyCredentials,
BaseModel,
Block,
BlockCategory,
BlockOutput,
@@ -12,7 +38,69 @@ from backend.sdk import (
)
from ._config import exa
from .helpers import WebsetEnrichmentConfig, WebsetSearchConfig
class SearchEntityType(str, Enum):
COMPANY = "company"
PERSON = "person"
ARTICLE = "article"
RESEARCH_PAPER = "research_paper"
CUSTOM = "custom"
AUTO = "auto"
class SearchType(str, Enum):
IMPORT = "import"
WEBSET = "webset"
class EnrichmentFormat(str, Enum):
TEXT = "text"
DATE = "date"
NUMBER = "number"
OPTIONS = "options"
EMAIL = "email"
PHONE = "phone"
class Webset(BaseModel):
id: str
status: WebsetStatus | None = Field(..., title="WebsetStatus")
"""
The status of the webset
"""
external_id: Annotated[Optional[str], Field(alias="externalId")] = None
"""
The external identifier for the webset
"""
searches: List[dict[str, Any]] | None = None
"""
The searches that have been performed on the webset.
    NOTE: returned as dicts to avoid the UI crashing on nested objects
"""
enrichments: List[dict[str, Any]] | None = None
"""
The Enrichments to apply to the Webset Items.
    NOTE: returned as dicts to avoid the UI crashing on nested objects
"""
monitors: List[dict[str, Any]] | None = None
"""
The Monitors for the Webset.
    NOTE: returned as dicts to avoid the UI crashing on nested objects
"""
metadata: Optional[Dict[str, Any]] = {}
"""
Set of key-value pairs you want to associate with this object.
"""
created_at: Annotated[datetime, Field(alias="createdAt")] | None = None
"""
The date and time the webset was created
"""
updated_at: Annotated[datetime, Field(alias="updatedAt")] | None = None
"""
The date and time the webset was last updated
"""
class ExaCreateWebsetBlock(Block):
@@ -20,40 +108,121 @@ class ExaCreateWebsetBlock(Block):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
search: WebsetSearchConfig = SchemaField(
description="Initial search configuration for the Webset"
# Search parameters (flattened)
search_query: str = SchemaField(
description="Your search query. Use this to describe what you are looking for. Any URL provided will be crawled and used as context for the search.",
placeholder="Marketing agencies based in the US, that focus on consumer products",
)
enrichments: Optional[list[WebsetEnrichmentConfig]] = SchemaField(
default=None,
description="Enrichments to apply to Webset items",
search_count: Optional[int] = SchemaField(
default=10,
description="Number of items the search will attempt to find. The actual number of items found may be less than this number depending on the search complexity.",
ge=1,
le=1000,
)
search_entity_type: SearchEntityType = SchemaField(
default=SearchEntityType.AUTO,
description="Entity type: 'company', 'person', 'article', 'research_paper', or 'custom'. If not provided, we automatically detect the entity from the query.",
advanced=True,
)
search_entity_description: Optional[str] = SchemaField(
default=None,
description="Description for custom entity type (required when search_entity_type is 'custom')",
advanced=True,
)
# Search criteria (flattened)
search_criteria: list[str] = SchemaField(
default_factory=list,
description="List of criteria descriptions that every item will be evaluated against. If not provided, we automatically detect the criteria from the query.",
advanced=True,
)
# Search exclude sources (flattened)
search_exclude_sources: list[str] = SchemaField(
default_factory=list,
description="List of source IDs (imports or websets) to exclude from search results",
advanced=True,
)
search_exclude_types: list[SearchType] = SchemaField(
default_factory=list,
description="List of source types corresponding to exclude sources ('import' or 'webset')",
advanced=True,
)
# Search scope sources (flattened)
search_scope_sources: list[str] = SchemaField(
default_factory=list,
description="List of source IDs (imports or websets) to limit search scope to",
advanced=True,
)
search_scope_types: list[SearchType] = SchemaField(
default_factory=list,
description="List of source types corresponding to scope sources ('import' or 'webset')",
advanced=True,
)
search_scope_relationships: list[str] = SchemaField(
default_factory=list,
description="List of relationship definitions for hop searches (optional, one per scope source)",
advanced=True,
)
search_scope_relationship_limits: list[int] = SchemaField(
default_factory=list,
description="List of limits on the number of related entities to find (optional, one per scope relationship)",
advanced=True,
)
# Import parameters (flattened)
import_sources: list[str] = SchemaField(
default_factory=list,
description="List of source IDs to import from",
advanced=True,
)
import_types: list[SearchType] = SchemaField(
default_factory=list,
description="List of source types corresponding to import sources ('import' or 'webset')",
advanced=True,
)
# Enrichment parameters (flattened)
enrichment_descriptions: list[str] = SchemaField(
default_factory=list,
description="List of enrichment task descriptions to perform on each webset item",
advanced=True,
)
enrichment_formats: list[EnrichmentFormat] = SchemaField(
default_factory=list,
description="List of formats for enrichment responses ('text', 'date', 'number', 'options', 'email', 'phone'). If not specified, we automatically select the best format.",
advanced=True,
)
enrichment_options: list[list[str]] = SchemaField(
default_factory=list,
description="List of option lists for enrichments with 'options' format. Each inner list contains the option labels.",
advanced=True,
)
enrichment_metadata: list[dict] = SchemaField(
default_factory=list,
description="List of metadata dictionaries for enrichments",
advanced=True,
)
# Webset metadata
external_id: Optional[str] = SchemaField(
default=None,
description="External identifier for the webset",
description="External identifier for the webset. You can use this to reference the webset by your own internal identifiers.",
placeholder="my-webset-123",
advanced=True,
)
metadata: Optional[dict] = SchemaField(
default=None,
default_factory=dict,
description="Key-value pairs to associate with this webset",
advanced=True,
)
class Output(BlockSchema):
webset_id: str = SchemaField(
webset: Webset = SchemaField(
description="The unique identifier for the created webset"
)
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
description="The external identifier for the webset", default=None
)
created_at: str = SchemaField(
description="The date and time the webset was created"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
@@ -67,44 +236,171 @@ class ExaCreateWebsetBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/websets/v0/websets"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
# Build the payload
payload: dict[str, Any] = {
"search": input_data.search.model_dump(exclude_none=True),
}
exa = Exa(credentials.api_key.get_secret_value())
# Convert enrichments to API format
if input_data.enrichments:
enrichments_data = []
for enrichment in input_data.enrichments:
enrichments_data.append(enrichment.model_dump(exclude_none=True))
payload["enrichments"] = enrichments_data
# ------------------------------------------------------------
# Build entity (if explicitly provided)
# ------------------------------------------------------------
entity = None
if input_data.search_entity_type == SearchEntityType.COMPANY:
entity = WebsetCompanyEntity(type="company")
elif input_data.search_entity_type == SearchEntityType.PERSON:
entity = WebsetPersonEntity(type="person")
elif input_data.search_entity_type == SearchEntityType.ARTICLE:
entity = WebsetArticleEntity(type="article")
elif input_data.search_entity_type == SearchEntityType.RESEARCH_PAPER:
entity = WebsetResearchPaperEntity(type="research_paper")
elif (
input_data.search_entity_type == SearchEntityType.CUSTOM
and input_data.search_entity_description
):
entity = WebsetCustomEntity(
type="custom", description=input_data.search_entity_description
)
if input_data.external_id:
payload["externalId"] = input_data.external_id
# ------------------------------------------------------------
# Build criteria list
# ------------------------------------------------------------
criteria = None
if input_data.search_criteria:
criteria = [
CreateCriterionParameters(description=item)
for item in input_data.search_criteria
]
if input_data.metadata:
payload["metadata"] = input_data.metadata
# ------------------------------------------------------------
# Build exclude sources list
# ------------------------------------------------------------
exclude_items = None
if input_data.search_exclude_sources:
exclude_items = []
for idx, src_id in enumerate(input_data.search_exclude_sources):
src_type = None
if input_data.search_exclude_types and idx < len(
input_data.search_exclude_types
):
src_type = input_data.search_exclude_types[idx]
# Default to IMPORT if type missing
if src_type == SearchType.WEBSET:
source_enum = ImportSource.webset
else:
source_enum = ImportSource.import_
exclude_items.append(ExcludeItem(source=source_enum, id=src_id))
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
# ------------------------------------------------------------
# Build scope list
# ------------------------------------------------------------
scope_items = None
if input_data.search_scope_sources:
scope_items = []
for idx, src_id in enumerate(input_data.search_scope_sources):
src_type = None
if input_data.search_scope_types and idx < len(
input_data.search_scope_types
):
src_type = input_data.search_scope_types[idx]
relationship = None
if input_data.search_scope_relationships and idx < len(
input_data.search_scope_relationships
):
rel_def = input_data.search_scope_relationships[idx]
lim = None
if input_data.search_scope_relationship_limits and idx < len(
input_data.search_scope_relationship_limits
):
lim = input_data.search_scope_relationship_limits[idx]
relationship = ScopeRelationship(definition=rel_def, limit=lim)
if src_type == SearchType.WEBSET:
src_enum = ScopeSourceType.webset
else:
src_enum = ScopeSourceType.import_
scope_items.append(
ScopeItem(source=src_enum, id=src_id, relationship=relationship)
)
yield "webset_id", data.get("id", "")
yield "status", data.get("status", "")
yield "external_id", data.get("externalId")
yield "created_at", data.get("createdAt", "")
# ------------------------------------------------------------
# Assemble search parameters (only if a query is provided)
# ------------------------------------------------------------
search_params = None
if input_data.search_query:
search_params = CreateWebsetParametersSearch(
query=input_data.search_query,
count=input_data.search_count,
entity=entity,
criteria=criteria,
exclude=exclude_items,
scope=scope_items,
)
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "created_at", ""
# ------------------------------------------------------------
# Build imports list
# ------------------------------------------------------------
imports_params = None
if input_data.import_sources:
imports_params = []
for idx, src_id in enumerate(input_data.import_sources):
src_type = None
if input_data.import_types and idx < len(input_data.import_types):
src_type = input_data.import_types[idx]
if src_type == SearchType.WEBSET:
source_enum = ImportSource.webset
else:
source_enum = ImportSource.import_
imports_params.append(ImportItem(source=source_enum, id=src_id))
# ------------------------------------------------------------
# Build enrichment list
# ------------------------------------------------------------
enrichments_params = None
if input_data.enrichment_descriptions:
enrichments_params = []
for idx, desc in enumerate(input_data.enrichment_descriptions):
fmt = None
if input_data.enrichment_formats and idx < len(
input_data.enrichment_formats
):
fmt_enum = input_data.enrichment_formats[idx]
if fmt_enum is not None:
fmt = Format(
fmt_enum.value if isinstance(fmt_enum, Enum) else fmt_enum
)
options_list = None
if input_data.enrichment_options and idx < len(
input_data.enrichment_options
):
raw_opts = input_data.enrichment_options[idx]
if raw_opts:
options_list = [Option(label=o) for o in raw_opts]
metadata_obj = None
if input_data.enrichment_metadata and idx < len(
input_data.enrichment_metadata
):
metadata_obj = input_data.enrichment_metadata[idx]
enrichments_params.append(
CreateEnrichmentParameters(
description=desc,
format=fmt,
options=options_list,
metadata=metadata_obj,
)
)
# ------------------------------------------------------------
# Create the webset
# ------------------------------------------------------------
webset = exa.websets.create(
params=CreateWebsetParameters(
search=search_params,
imports=imports_params,
enrichments=enrichments_params,
external_id=input_data.external_id,
metadata=input_data.metadata,
)
)
# Use alias field names returned from Exa SDK so that nested models validate correctly
yield "webset", Webset.model_validate(webset.model_dump(by_alias=True))
class ExaUpdateWebsetBlock(Block):
@@ -183,6 +479,11 @@ class ExaListWebsetsBlock(Block):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
trigger: Any | None = SchemaField(
default=None,
description="Trigger for the webset, value is ignored!",
advanced=False,
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination through results",
@@ -197,7 +498,9 @@ class ExaListWebsetsBlock(Block):
)
class Output(BlockSchema):
websets: list = SchemaField(description="List of websets", default_factory=list)
websets: list[Webset] = SchemaField(
description="List of websets", default_factory=list
)
has_more: bool = SchemaField(
description="Whether there are more results to paginate through",
default=False,
@@ -255,9 +558,6 @@ class ExaGetWebsetBlock(Block):
description="The ID or external ID of the Webset to retrieve",
placeholder="webset-id-or-external-id",
)
expand_items: bool = SchemaField(
default=False, description="Include items in the response", advanced=True
)
class Output(BlockSchema):
webset_id: str = SchemaField(description="The unique identifier for the webset")
@@ -309,12 +609,8 @@ class ExaGetWebsetBlock(Block):
"x-api-key": credentials.api_key.get_secret_value(),
}
params = {}
if input_data.expand_items:
params["expand[]"] = "items"
try:
response = await Requests().get(url, headers=headers, params=params)
response = await Requests().get(url, headers=headers)
data = response.json()
yield "webset_id", data.get("id", "")

View File

@@ -29,8 +29,8 @@ class FirecrawlExtractBlock(Block):
prompt: str | None = SchemaField(
description="The prompt to use for the crawl", default=None, advanced=False
)
output_schema: str | None = SchemaField(
description="A more rigid structure if you already know the JSON layout.",
output_schema: dict | None = SchemaField(
description="A Json Schema describing the output structure if more rigid structure is desired.",
default=None,
)
enable_web_search: bool = SchemaField(
@@ -56,7 +56,6 @@ class FirecrawlExtractBlock(Block):
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
extract_result = app.extract(
urls=input_data.urls,
prompt=input_data.prompt,
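For illustration, a hypothetical JSON Schema dict of the kind the output_schema field above now accepts; the property names below are made up, not taken from Firecrawl's documentation:

example_output_schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "founded_year": {"type": "integer"},
    },
    "required": ["company_name"],
}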

View File

@@ -0,0 +1,388 @@
import logging
import re
from enum import Enum
from typing import Optional
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._api import get_api
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentials,
GithubCredentialsField,
GithubCredentialsInput,
)
logger = logging.getLogger(__name__)
class CheckRunStatus(Enum):
QUEUED = "queued"
IN_PROGRESS = "in_progress"
COMPLETED = "completed"
class CheckRunConclusion(Enum):
SUCCESS = "success"
FAILURE = "failure"
NEUTRAL = "neutral"
CANCELLED = "cancelled"
SKIPPED = "skipped"
TIMED_OUT = "timed_out"
ACTION_REQUIRED = "action_required"
class GithubGetCIResultsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
target: str | int = SchemaField(
description="Commit SHA or PR number to get CI results for",
placeholder="abc123def or 123",
)
search_pattern: Optional[str] = SchemaField(
description="Optional regex pattern to search for in CI logs (e.g., error messages, file names)",
placeholder=".*error.*|.*warning.*",
default=None,
advanced=True,
)
check_name_filter: Optional[str] = SchemaField(
description="Optional filter for specific check names (supports wildcards)",
placeholder="*lint* or build-*",
default=None,
advanced=True,
)
class Output(BlockSchema):
class CheckRunItem(TypedDict, total=False):
id: int
name: str
status: str
conclusion: Optional[str]
started_at: Optional[str]
completed_at: Optional[str]
html_url: str
details_url: Optional[str]
output_title: Optional[str]
output_summary: Optional[str]
output_text: Optional[str]
annotations: list[dict]
class MatchedLine(TypedDict):
check_name: str
line_number: int
line: str
context: list[str]
check_run: CheckRunItem = SchemaField(
title="Check Run",
description="Individual CI check run with details",
)
check_runs: list[CheckRunItem] = SchemaField(
description="List of all CI check runs"
)
matched_line: MatchedLine = SchemaField(
title="Matched Line",
description="Line matching the search pattern with context",
)
matched_lines: list[MatchedLine] = SchemaField(
description="All lines matching the search pattern across all checks"
)
overall_status: str = SchemaField(
description="Overall CI status (pending, success, failure)"
)
overall_conclusion: str = SchemaField(
description="Overall CI conclusion if completed"
)
total_checks: int = SchemaField(description="Total number of CI checks")
passed_checks: int = SchemaField(description="Number of passed checks")
failed_checks: int = SchemaField(description="Number of failed checks")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="8ad9e103-78f2-4fdb-ba12-3571f2c95e98",
description="This block gets CI results for a commit or PR, with optional search for specific errors/warnings in logs.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubGetCIResultsBlock.Input,
output_schema=GithubGetCIResultsBlock.Output,
test_input={
"repo": "owner/repo",
"target": "abc123def456",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("overall_status", "completed"),
("overall_conclusion", "success"),
("total_checks", 1),
("passed_checks", 1),
("failed_checks", 0),
(
"check_runs",
[
{
"id": 123456,
"name": "build",
"status": "completed",
"conclusion": "success",
"started_at": "2024-01-01T00:00:00Z",
"completed_at": "2024-01-01T00:05:00Z",
"html_url": "https://github.com/owner/repo/runs/123456",
"details_url": None,
"output_title": "Build passed",
"output_summary": "All tests passed",
"output_text": "Build log output...",
"annotations": [],
}
],
),
],
test_mock={
"get_ci_results": lambda *args, **kwargs: {
"check_runs": [
{
"id": 123456,
"name": "build",
"status": "completed",
"conclusion": "success",
"started_at": "2024-01-01T00:00:00Z",
"completed_at": "2024-01-01T00:05:00Z",
"html_url": "https://github.com/owner/repo/runs/123456",
"details_url": None,
"output_title": "Build passed",
"output_summary": "All tests passed",
"output_text": "Build log output...",
"annotations": [],
}
],
"total_count": 1,
}
},
)
@staticmethod
async def get_commit_sha(api, repo: str, target: str | int) -> str:
"""Get commit SHA from either a commit SHA or PR URL."""
# If it's already a SHA, return it
if isinstance(target, str):
if re.match(r"^[0-9a-f]{6,40}$", target, re.IGNORECASE):
return target
        # If it's a PR number, get the head SHA of the PR
if isinstance(target, int):
pr_url = f"https://api.github.com/repos/{repo}/pulls/{target}"
response = await api.get(pr_url)
pr_data = response.json()
return pr_data["head"]["sha"]
raise ValueError("Target must be a commit SHA or PR URL")
@staticmethod
async def search_in_logs(
check_runs: list,
pattern: str,
) -> list[Output.MatchedLine]:
"""Search for pattern in check run logs."""
if not pattern:
return []
matched_lines = []
regex = re.compile(pattern, re.IGNORECASE | re.MULTILINE)
for check in check_runs:
output_text = check.get("output_text", "") or ""
if not output_text:
continue
lines = output_text.split("\n")
for i, line in enumerate(lines):
if regex.search(line):
# Get context (2 lines before and after)
start = max(0, i - 2)
end = min(len(lines), i + 3)
context = lines[start:end]
matched_lines.append(
{
"check_name": check["name"],
"line_number": i + 1,
"line": line,
"context": context,
}
)
return matched_lines
@staticmethod
async def get_ci_results(
credentials: GithubCredentials,
repo: str,
target: str | int,
search_pattern: Optional[str] = None,
check_name_filter: Optional[str] = None,
) -> dict:
api = get_api(credentials, convert_urls=False)
# Get the commit SHA
commit_sha = await GithubGetCIResultsBlock.get_commit_sha(api, repo, target)
# Get check runs for the commit
check_runs_url = (
f"https://api.github.com/repos/{repo}/commits/{commit_sha}/check-runs"
)
# Get all pages of check runs
all_check_runs = []
page = 1
per_page = 100
while True:
response = await api.get(
check_runs_url, params={"per_page": per_page, "page": page}
)
data = response.json()
check_runs = data.get("check_runs", [])
all_check_runs.extend(check_runs)
if len(check_runs) < per_page:
break
page += 1
# Filter by check name if specified
if check_name_filter:
import fnmatch
filtered_runs = []
for run in all_check_runs:
if fnmatch.fnmatch(run["name"].lower(), check_name_filter.lower()):
filtered_runs.append(run)
all_check_runs = filtered_runs
# Get check run details with logs
detailed_runs = []
for run in all_check_runs:
# Get detailed output including logs
if run.get("output", {}).get("text"):
# Already has output
detailed_run = {
"id": run["id"],
"name": run["name"],
"status": run["status"],
"conclusion": run.get("conclusion"),
"started_at": run.get("started_at"),
"completed_at": run.get("completed_at"),
"html_url": run["html_url"],
"details_url": run.get("details_url"),
"output_title": run.get("output", {}).get("title"),
"output_summary": run.get("output", {}).get("summary"),
"output_text": run.get("output", {}).get("text"),
"annotations": [],
}
else:
# Try to get logs from the check run
detailed_run = {
"id": run["id"],
"name": run["name"],
"status": run["status"],
"conclusion": run.get("conclusion"),
"started_at": run.get("started_at"),
"completed_at": run.get("completed_at"),
"html_url": run["html_url"],
"details_url": run.get("details_url"),
"output_title": run.get("output", {}).get("title"),
"output_summary": run.get("output", {}).get("summary"),
"output_text": None,
"annotations": [],
}
# Get annotations if available
if run.get("output", {}).get("annotations_count", 0) > 0:
annotations_url = f"https://api.github.com/repos/{repo}/check-runs/{run['id']}/annotations"
try:
ann_response = await api.get(annotations_url)
detailed_run["annotations"] = ann_response.json()
except Exception:
pass
detailed_runs.append(detailed_run)
return {
"check_runs": detailed_runs,
"total_count": len(detailed_runs),
}
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
target = int(input_data.target)
except ValueError:
target = input_data.target
result = await self.get_ci_results(
credentials,
input_data.repo,
target,
input_data.search_pattern,
input_data.check_name_filter,
)
check_runs = result["check_runs"]
# Calculate overall status
if not check_runs:
yield "overall_status", "no_checks"
yield "overall_conclusion", "no_checks"
else:
all_completed = all(run["status"] == "completed" for run in check_runs)
if all_completed:
yield "overall_status", "completed"
# Determine overall conclusion
has_failure = any(
run["conclusion"] in ["failure", "timed_out", "action_required"]
for run in check_runs
)
if has_failure:
yield "overall_conclusion", "failure"
else:
yield "overall_conclusion", "success"
else:
yield "overall_status", "pending"
yield "overall_conclusion", "pending"
# Count checks
total = len(check_runs)
passed = sum(1 for run in check_runs if run.get("conclusion") == "success")
failed = sum(
1 for run in check_runs if run.get("conclusion") in ["failure", "timed_out"]
)
yield "total_checks", total
yield "passed_checks", passed
yield "failed_checks", failed
# Output check runs
yield "check_runs", check_runs
# Search for patterns if specified
if input_data.search_pattern:
matched_lines = await self.search_in_logs(
check_runs, input_data.search_pattern
)
if matched_lines:
yield "matched_lines", matched_lines

View File

@@ -0,0 +1,840 @@
import logging
from enum import Enum
from typing import Any, List, Optional
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._api import get_api
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentials,
GithubCredentialsField,
GithubCredentialsInput,
)
logger = logging.getLogger(__name__)
class ReviewEvent(Enum):
COMMENT = "COMMENT"
APPROVE = "APPROVE"
REQUEST_CHANGES = "REQUEST_CHANGES"
class GithubCreatePRReviewBlock(Block):
class Input(BlockSchema):
class ReviewComment(TypedDict, total=False):
path: str
position: Optional[int]
body: str
line: Optional[int] # Will be used as position if position not provided
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
pr_number: int = SchemaField(
description="Pull request number",
placeholder="123",
)
body: str = SchemaField(
description="Body of the review comment",
placeholder="Enter your review comment",
)
event: ReviewEvent = SchemaField(
description="The review action to perform",
default=ReviewEvent.COMMENT,
)
create_as_draft: bool = SchemaField(
description="Create the review as a draft (pending) or post it immediately",
default=False,
advanced=False,
)
comments: Optional[List[ReviewComment]] = SchemaField(
description="Optional inline comments to add to specific files/lines. Note: Only path, body, and position are supported. Position is line number in diff from first @@ hunk.",
default=None,
advanced=True,
)
class Output(BlockSchema):
review_id: int = SchemaField(description="ID of the created review")
state: str = SchemaField(
description="State of the review (e.g., PENDING, COMMENTED, APPROVED, CHANGES_REQUESTED)"
)
html_url: str = SchemaField(description="URL of the created review")
error: str = SchemaField(
description="Error message if the review creation failed"
)
def __init__(self):
super().__init__(
id="84754b30-97d2-4c37-a3b8-eb39f268275b",
description="This block creates a review on a GitHub pull request with optional inline comments. You can create it as a draft or post immediately. Note: For inline comments, 'position' should be the line number in the diff (starting from the first @@ hunk header).",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCreatePRReviewBlock.Input,
output_schema=GithubCreatePRReviewBlock.Output,
test_input={
"repo": "owner/repo",
"pr_number": 1,
"body": "This looks good to me!",
"event": "APPROVE",
"create_as_draft": False,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("review_id", 123456),
("state", "APPROVED"),
(
"html_url",
"https://github.com/owner/repo/pull/1#pullrequestreview-123456",
),
],
test_mock={
"create_review": lambda *args, **kwargs: (
123456,
"APPROVED",
"https://github.com/owner/repo/pull/1#pullrequestreview-123456",
)
},
)
@staticmethod
async def create_review(
credentials: GithubCredentials,
repo: str,
pr_number: int,
body: str,
event: ReviewEvent,
create_as_draft: bool,
comments: Optional[List[Input.ReviewComment]] = None,
) -> tuple[int, str, str]:
api = get_api(credentials, convert_urls=False)
# GitHub API endpoint for creating reviews
reviews_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews"
# Get commit_id if we have comments
commit_id = None
if comments:
# Get PR details to get the head commit for inline comments
pr_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}"
pr_response = await api.get(pr_url)
pr_data = pr_response.json()
commit_id = pr_data["head"]["sha"]
# Prepare the request data
# If create_as_draft is True, omit the event field (creates a PENDING review)
# Otherwise, use the actual event value which will auto-submit the review
data: dict[str, Any] = {"body": body}
# Add commit_id if we have it
if commit_id:
data["commit_id"] = commit_id
# Add comments if provided
if comments:
# Process comments to ensure they have the required fields
processed_comments = []
for comment in comments:
comment_data: dict = {
"path": comment.get("path", ""),
"body": comment.get("body", ""),
}
# Add position or line
# Note: For review comments, only position is supported (not line/side)
if "position" in comment and comment.get("position") is not None:
comment_data["position"] = comment.get("position")
elif "line" in comment and comment.get("line") is not None:
# Note: Using line as position - may not work correctly
# Position should be calculated from the diff
comment_data["position"] = comment.get("line")
# Note: side, start_line, and start_side are NOT supported for review comments
# They are only for standalone PR comments
processed_comments.append(comment_data)
data["comments"] = processed_comments
if not create_as_draft:
# Only add event field if not creating a draft
data["event"] = event.value
# Create the review
response = await api.post(reviews_url, json=data)
review_data = response.json()
return review_data["id"], review_data["state"], review_data["html_url"]
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
review_id, state, html_url = await self.create_review(
credentials,
input_data.repo,
input_data.pr_number,
input_data.body,
input_data.event,
input_data.create_as_draft,
input_data.comments,
)
yield "review_id", review_id
yield "state", state
yield "html_url", html_url
except Exception as e:
yield "error", str(e)
class GithubListPRReviewsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
pr_number: int = SchemaField(
description="Pull request number",
placeholder="123",
)
class Output(BlockSchema):
class ReviewItem(TypedDict):
id: int
user: str
state: str
body: str
html_url: str
review: ReviewItem = SchemaField(
title="Review",
description="Individual review with details",
)
reviews: list[ReviewItem] = SchemaField(
description="List of all reviews on the pull request"
)
error: str = SchemaField(description="Error message if listing reviews failed")
def __init__(self):
super().__init__(
id="f79bc6eb-33c0-4099-9c0f-d664ae1ba4d0",
description="This block lists all reviews for a specified GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListPRReviewsBlock.Input,
output_schema=GithubListPRReviewsBlock.Output,
test_input={
"repo": "owner/repo",
"pr_number": 1,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"reviews",
[
{
"id": 123456,
"user": "reviewer1",
"state": "APPROVED",
"body": "Looks good!",
"html_url": "https://github.com/owner/repo/pull/1#pullrequestreview-123456",
}
],
),
(
"review",
{
"id": 123456,
"user": "reviewer1",
"state": "APPROVED",
"body": "Looks good!",
"html_url": "https://github.com/owner/repo/pull/1#pullrequestreview-123456",
},
),
],
test_mock={
"list_reviews": lambda *args, **kwargs: [
{
"id": 123456,
"user": "reviewer1",
"state": "APPROVED",
"body": "Looks good!",
"html_url": "https://github.com/owner/repo/pull/1#pullrequestreview-123456",
}
]
},
)
@staticmethod
async def list_reviews(
credentials: GithubCredentials, repo: str, pr_number: int
) -> list[Output.ReviewItem]:
api = get_api(credentials, convert_urls=False)
# GitHub API endpoint for listing reviews
reviews_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews"
response = await api.get(reviews_url)
data = response.json()
reviews: list[GithubListPRReviewsBlock.Output.ReviewItem] = [
{
"id": review["id"],
"user": review["user"]["login"],
"state": review["state"],
"body": review.get("body", ""),
"html_url": review["html_url"],
}
for review in data
]
return reviews
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
reviews = await self.list_reviews(
credentials,
input_data.repo,
input_data.pr_number,
)
yield "reviews", reviews
for review in reviews:
yield "review", review
class GithubSubmitPendingReviewBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
pr_number: int = SchemaField(
description="Pull request number",
placeholder="123",
)
review_id: int = SchemaField(
description="ID of the pending review to submit",
placeholder="123456",
)
event: ReviewEvent = SchemaField(
description="The review action to perform when submitting",
default=ReviewEvent.COMMENT,
)
class Output(BlockSchema):
state: str = SchemaField(description="State of the submitted review")
html_url: str = SchemaField(description="URL of the submitted review")
error: str = SchemaField(
description="Error message if the review submission failed"
)
def __init__(self):
super().__init__(
id="2e468217-7ca0-4201-9553-36e93eb9357a",
description="This block submits a pending (draft) review on a GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubSubmitPendingReviewBlock.Input,
output_schema=GithubSubmitPendingReviewBlock.Output,
test_input={
"repo": "owner/repo",
"pr_number": 1,
"review_id": 123456,
"event": "APPROVE",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("state", "APPROVED"),
(
"html_url",
"https://github.com/owner/repo/pull/1#pullrequestreview-123456",
),
],
test_mock={
"submit_review": lambda *args, **kwargs: (
"APPROVED",
"https://github.com/owner/repo/pull/1#pullrequestreview-123456",
)
},
)
@staticmethod
async def submit_review(
credentials: GithubCredentials,
repo: str,
pr_number: int,
review_id: int,
event: ReviewEvent,
) -> tuple[str, str]:
api = get_api(credentials, convert_urls=False)
# GitHub API endpoint for submitting a review
submit_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews/{review_id}/events"
data = {"event": event.value}
response = await api.post(submit_url, json=data)
review_data = response.json()
return review_data["state"], review_data["html_url"]
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
state, html_url = await self.submit_review(
credentials,
input_data.repo,
input_data.pr_number,
input_data.review_id,
input_data.event,
)
yield "state", state
yield "html_url", html_url
except Exception as e:
yield "error", str(e)
class GithubResolveReviewDiscussionBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
pr_number: int = SchemaField(
description="Pull request number",
placeholder="123",
)
comment_id: int = SchemaField(
description="ID of the review comment to resolve/unresolve",
placeholder="123456",
)
resolve: bool = SchemaField(
description="Whether to resolve (true) or unresolve (false) the discussion",
default=True,
)
class Output(BlockSchema):
success: bool = SchemaField(description="Whether the operation was successful")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="b4b8a38c-95ae-4c91-9ef8-c2cffaf2b5d1",
description="This block resolves or unresolves a review discussion thread on a GitHub pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubResolveReviewDiscussionBlock.Input,
output_schema=GithubResolveReviewDiscussionBlock.Output,
test_input={
"repo": "owner/repo",
"pr_number": 1,
"comment_id": 123456,
"resolve": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("success", True),
],
test_mock={"resolve_discussion": lambda *args, **kwargs: True},
)
@staticmethod
async def resolve_discussion(
credentials: GithubCredentials,
repo: str,
pr_number: int,
comment_id: int,
resolve: bool,
) -> bool:
api = get_api(credentials, convert_urls=False)
# Extract owner and repo name
parts = repo.split("/")
owner = parts[0]
repo_name = parts[1]
# GitHub GraphQL API is needed for resolving/unresolving discussions
# First, we need to get the node ID of the comment
graphql_url = "https://api.github.com/graphql"
# Query to get the review comment node ID
query = """
query($owner: String!, $repo: String!, $number: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $number) {
reviewThreads(first: 100) {
nodes {
comments(first: 100) {
nodes {
databaseId
id
}
}
id
isResolved
}
}
}
}
}
"""
variables = {"owner": owner, "repo": repo_name, "number": pr_number}
response = await api.post(
graphql_url, json={"query": query, "variables": variables}
)
data = response.json()
# Find the thread containing our comment
thread_id = None
for thread in data["data"]["repository"]["pullRequest"]["reviewThreads"][
"nodes"
]:
for comment in thread["comments"]["nodes"]:
if comment["databaseId"] == comment_id:
thread_id = thread["id"]
break
if thread_id:
break
if not thread_id:
raise ValueError(f"Comment {comment_id} not found in pull request")
# Now resolve or unresolve the thread
# GitHub's GraphQL API has separate mutations for resolve and unresolve
if resolve:
mutation = """
mutation($threadId: ID!) {
resolveReviewThread(input: {threadId: $threadId}) {
thread {
isResolved
}
}
}
"""
else:
mutation = """
mutation($threadId: ID!) {
unresolveReviewThread(input: {threadId: $threadId}) {
thread {
isResolved
}
}
}
"""
mutation_variables = {"threadId": thread_id}
response = await api.post(
graphql_url, json={"query": mutation, "variables": mutation_variables}
)
result = response.json()
if "errors" in result:
raise Exception(f"GraphQL error: {result['errors']}")
return True
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
success = await self.resolve_discussion(
credentials,
input_data.repo,
input_data.pr_number,
input_data.comment_id,
input_data.resolve,
)
yield "success", success
except Exception as e:
yield "success", False
yield "error", str(e)
class GithubGetPRReviewCommentsBlock(Block):
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
placeholder="owner/repo",
)
pr_number: int = SchemaField(
description="Pull request number",
placeholder="123",
)
review_id: Optional[int] = SchemaField(
description="ID of a specific review to get comments from (optional)",
placeholder="123456",
default=None,
advanced=True,
)
class Output(BlockSchema):
class CommentItem(TypedDict):
id: int
user: str
body: str
path: str
line: int
side: str
created_at: str
updated_at: str
in_reply_to_id: Optional[int]
html_url: str
comment: CommentItem = SchemaField(
title="Comment",
description="Individual review comment with details",
)
comments: list[CommentItem] = SchemaField(
description="List of all review comments on the pull request"
)
error: str = SchemaField(description="Error message if getting comments failed")
def __init__(self):
super().__init__(
id="1d34db7f-10c1-45c1-9d43-749f743c8bd4",
description="This block gets all review comments from a GitHub pull request or from a specific review.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubGetPRReviewCommentsBlock.Input,
output_schema=GithubGetPRReviewCommentsBlock.Output,
test_input={
"repo": "owner/repo",
"pr_number": 1,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"comments",
[
{
"id": 123456,
"user": "reviewer1",
"body": "This needs improvement",
"path": "src/main.py",
"line": 42,
"side": "RIGHT",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"in_reply_to_id": None,
"html_url": "https://github.com/owner/repo/pull/1#discussion_r123456",
}
],
),
(
"comment",
{
"id": 123456,
"user": "reviewer1",
"body": "This needs improvement",
"path": "src/main.py",
"line": 42,
"side": "RIGHT",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"in_reply_to_id": None,
"html_url": "https://github.com/owner/repo/pull/1#discussion_r123456",
},
),
],
test_mock={
"get_comments": lambda *args, **kwargs: [
{
"id": 123456,
"user": "reviewer1",
"body": "This needs improvement",
"path": "src/main.py",
"line": 42,
"side": "RIGHT",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"in_reply_to_id": None,
"html_url": "https://github.com/owner/repo/pull/1#discussion_r123456",
}
]
},
)
@staticmethod
async def get_comments(
credentials: GithubCredentials,
repo: str,
pr_number: int,
review_id: Optional[int] = None,
) -> list[Output.CommentItem]:
api = get_api(credentials, convert_urls=False)
# Determine the endpoint based on whether we want comments from a specific review
if review_id:
# Get comments from a specific review
comments_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews/{review_id}/comments"
else:
# Get all review comments on the PR
comments_url = (
f"https://api.github.com/repos/{repo}/pulls/{pr_number}/comments"
)
response = await api.get(comments_url)
data = response.json()
comments: list[GithubGetPRReviewCommentsBlock.Output.CommentItem] = [
{
"id": comment["id"],
"user": comment["user"]["login"],
"body": comment["body"],
"path": comment.get("path", ""),
"line": comment.get("line", 0),
"side": comment.get("side", ""),
"created_at": comment["created_at"],
"updated_at": comment["updated_at"],
"in_reply_to_id": comment.get("in_reply_to_id"),
"html_url": comment["html_url"],
}
for comment in data
]
return comments
async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
try:
comments = await self.get_comments(
credentials,
input_data.repo,
input_data.pr_number,
input_data.review_id,
)
yield "comments", comments
for comment in comments:
yield "comment", comment
except Exception as e:
yield "error", str(e)
class GithubCreateCommentObjectBlock(Block):
class Input(BlockSchema):
path: str = SchemaField(
description="The file path to comment on",
placeholder="src/main.py",
)
body: str = SchemaField(
description="The comment text",
placeholder="Please fix this issue",
)
position: Optional[int] = SchemaField(
description="Position in the diff (line number from first @@ hunk). Use this OR line.",
placeholder="6",
default=None,
advanced=True,
)
line: Optional[int] = SchemaField(
description="Line number in the file (will be used as position if position not provided)",
placeholder="42",
default=None,
advanced=True,
)
side: Optional[str] = SchemaField(
description="Side of the diff to comment on (NOTE: Only for standalone comments, not review comments)",
default="RIGHT",
advanced=True,
)
start_line: Optional[int] = SchemaField(
description="Start line for multi-line comments (NOTE: Only for standalone comments, not review comments)",
default=None,
advanced=True,
)
start_side: Optional[str] = SchemaField(
description="Side for the start of multi-line comments (NOTE: Only for standalone comments, not review comments)",
default=None,
advanced=True,
)
class Output(BlockSchema):
comment_object: dict = SchemaField(
description="The comment object formatted for GitHub API"
)
def __init__(self):
super().__init__(
id="b7d5e4f2-8c3a-4e6b-9f1d-7a8b9c5e4d3f",
description="Creates a comment object for use with GitHub blocks. Note: For review comments, only path, body, and position are used. Side fields are only for standalone PR comments.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCreateCommentObjectBlock.Input,
output_schema=GithubCreateCommentObjectBlock.Output,
test_input={
"path": "src/main.py",
"body": "Please fix this issue",
"position": 6,
},
test_output=[
(
"comment_object",
{
"path": "src/main.py",
"body": "Please fix this issue",
"position": 6,
},
),
],
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
# Build the comment object
comment_obj: dict = {
"path": input_data.path,
"body": input_data.body,
}
# Add position or line
if input_data.position is not None:
comment_obj["position"] = input_data.position
elif input_data.line is not None:
# Note: line will be used as position, which may not be accurate
# Position should be calculated from the diff
comment_obj["position"] = input_data.line
# Add optional fields only if they differ from defaults or are explicitly provided
if input_data.side and input_data.side != "RIGHT":
comment_obj["side"] = input_data.side
if input_data.start_line is not None:
comment_obj["start_line"] = input_data.start_line
if input_data.start_side:
comment_obj["start_side"] = input_data.start_side
yield "comment_object", comment_obj
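As an aside on the GithubResolveReviewDiscussionBlock above: as its comments note, resolving a review thread needs the GraphQL API and is a two-step exchange — first look up the review thread whose comments contain the target databaseId, then call the resolveReviewThread (or unresolveReviewThread) mutation with that thread's node ID. A minimal standalone sketch of the same flow, assuming httpx and a raw personal access token instead of the project's get_api() wrapper:

# Hypothetical standalone sketch of the resolve flow above; uses httpx and a raw
# personal access token instead of the project's get_api() helper.
import httpx

GITHUB_GRAPHQL_URL = "https://api.github.com/graphql"


async def resolve_comment_thread(
    token: str, owner: str, repo: str, pr_number: int, comment_id: int
) -> bool:
    headers = {"Authorization": f"Bearer {token}"}
    find_thread_query = """
    query($owner: String!, $repo: String!, $number: Int!) {
      repository(owner: $owner, name: $repo) {
        pullRequest(number: $number) {
          reviewThreads(first: 100) {
            nodes { id comments(first: 100) { nodes { databaseId } } }
          }
        }
      }
    }
    """
    resolve_mutation = """
    mutation($threadId: ID!) {
      resolveReviewThread(input: {threadId: $threadId}) { thread { isResolved } }
    }
    """
    async with httpx.AsyncClient(headers=headers) as client:
        resp = await client.post(
            GITHUB_GRAPHQL_URL,
            json={
                "query": find_thread_query,
                "variables": {"owner": owner, "repo": repo, "number": pr_number},
            },
        )
        threads = resp.json()["data"]["repository"]["pullRequest"]["reviewThreads"]["nodes"]
        # Find the thread that contains the REST comment id (databaseId).
        thread_id = next(
            (
                t["id"]
                for t in threads
                if any(c["databaseId"] == comment_id for c in t["comments"]["nodes"])
            ),
            None,
        )
        if thread_id is None:
            raise ValueError(f"Comment {comment_id} not found in pull request")
        result = await client.post(
            GITHUB_GRAPHQL_URL,
            json={"query": resolve_mutation, "variables": {"threadId": thread_id}},
        )
        return "errors" not in result.json()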


@@ -21,6 +21,8 @@ from ._auth import (
GoogleCredentialsInput,
)
settings = Settings()
class CalendarEvent(BaseModel):
"""Structured representation of a Google Calendar event."""
@@ -221,8 +223,8 @@ class GoogleCalendarReadEventsBlock(Block):
else None
),
token_uri="https://oauth2.googleapis.com/token",
client_id=Settings().secrets.google_client_id,
client_secret=Settings().secrets.google_client_secret,
client_id=settings.secrets.google_client_id,
client_secret=settings.secrets.google_client_secret,
scopes=credentials.scopes,
)
return build("calendar", "v3", credentials=creds)
@@ -569,8 +571,8 @@ class GoogleCalendarCreateEventBlock(Block):
else None
),
token_uri="https://oauth2.googleapis.com/token",
client_id=Settings().secrets.google_client_id,
client_secret=Settings().secrets.google_client_secret,
client_id=settings.secrets.google_client_id,
client_secret=settings.secrets.google_client_secret,
scopes=credentials.scopes,
)
return build("calendar", "v3", credentials=creds)

File diff suppressed because it is too large.


@@ -37,6 +37,7 @@ LLMProviderName = Literal[
ProviderName.OPENAI,
ProviderName.OPEN_ROUTER,
ProviderName.LLAMA_API,
ProviderName.V0,
]
AICredentials = CredentialsMetaInput[LLMProviderName, Literal["api_key"]]
@@ -81,6 +82,11 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
O3 = "o3-2025-04-16"
O1 = "o1"
O1_MINI = "o1-mini"
# GPT-5 models
GPT5 = "gpt-5-2025-08-07"
GPT5_MINI = "gpt-5-mini-2025-08-07"
GPT5_NANO = "gpt-5-nano-2025-08-07"
GPT5_CHAT = "gpt-5-chat-latest"
GPT41 = "gpt-4.1-2025-04-14"
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
GPT4O_MINI = "gpt-4o-mini"
@@ -88,6 +94,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_4_1_OPUS = "claude-opus-4-1-20250805"
CLAUDE_4_OPUS = "claude-opus-4-20250514"
CLAUDE_4_SONNET = "claude-sonnet-4-20250514"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
@@ -115,6 +122,8 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
OLLAMA_LLAMA3_405B = "llama3.1:405b"
OLLAMA_DOLPHIN = "dolphin-mistral:latest"
# OpenRouter models
OPENAI_GPT_OSS_120B = "openai/gpt-oss-120b"
OPENAI_GPT_OSS_20B = "openai/gpt-oss-20b"
GEMINI_FLASH_1_5 = "google/gemini-flash-1.5"
GEMINI_2_5_PRO = "google/gemini-2.5-pro-preview-03-25"
GEMINI_2_5_FLASH = "google/gemini-2.5-flash"
@@ -147,6 +156,10 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
LLAMA_API_LLAMA4_MAVERICK = "Llama-4-Maverick-17B-128E-Instruct-FP8"
LLAMA_API_LLAMA3_3_8B = "Llama-3.3-8B-Instruct"
LLAMA_API_LLAMA3_3_70B = "Llama-3.3-70B-Instruct"
# v0 by Vercel models
V0_1_5_MD = "v0-1.5-md"
V0_1_5_LG = "v0-1.5-lg"
V0_1_0_MD = "v0-1.0-md"
@property
def metadata(self) -> ModelMetadata:
@@ -171,6 +184,11 @@ MODEL_METADATA = {
LlmModel.O3_MINI: ModelMetadata("openai", 200000, 100000), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12
# GPT-5 models
LlmModel.GPT5: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_NANO: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_CHAT: ModelMetadata("openai", 400000, 16384),
LlmModel.GPT41: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT41_MINI: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT4O_MINI: ModelMetadata(
@@ -182,6 +200,9 @@ MODEL_METADATA = {
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, 4096), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_1_OPUS: ModelMetadata(
"anthropic", 200000, 32000
), # claude-opus-4-1-20250805
LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 8192
), # claude-4-opus-20250514
@@ -246,6 +267,8 @@ MODEL_METADATA = {
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: ModelMetadata(
"open_router", 12288, 12288
),
LlmModel.OPENAI_GPT_OSS_120B: ModelMetadata("open_router", 131072, 131072),
LlmModel.OPENAI_GPT_OSS_20B: ModelMetadata("open_router", 131072, 32768),
LlmModel.AMAZON_NOVA_LITE_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.AMAZON_NOVA_MICRO_V1: ModelMetadata("open_router", 128000, 5120),
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata("open_router", 300000, 5120),
@@ -262,6 +285,10 @@ MODEL_METADATA = {
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028),
# v0 by Vercel models
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000),
}
for model in LlmModel:
@@ -475,6 +502,7 @@ async def llm_call(
messages=messages,
max_tokens=max_tokens,
tools=an_tools,
timeout=600,
)
if not resp.content:
@@ -657,7 +685,11 @@ async def llm_call(
client = openai.OpenAI(
base_url="https://api.aimlapi.com/v2",
api_key=credentials.api_key.get_secret_value(),
default_headers={"X-Project": "AutoGPT"},
default_headers={
"X-Project": "AutoGPT",
"X-Title": "AutoGPT",
"HTTP-Referer": "https://github.com/Significant-Gravitas/AutoGPT",
},
)
completion = client.chat.completions.create(
@@ -677,6 +709,42 @@ async def llm_call(
),
reasoning=None,
)
elif provider == "v0":
tools_param = tools if tools else openai.NOT_GIVEN
client = openai.AsyncOpenAI(
base_url="https://api.v0.dev/v1",
api_key=credentials.api_key.get_secret_value(),
)
response_format = None
if json_format:
response_format = {"type": "json_object"}
parallel_tool_calls_param = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
response_format=response_format, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=parallel_tool_calls_param,
)
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
prompt=prompt,
response=response.choices[0].message.content or "",
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
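The v0 branch added above rides on an OpenAI-compatible chat completions endpoint, so the integration is essentially a base_url swap on the standard client. A minimal sketch of that call shape, assuming a v0 API key in a V0_API_KEY environment variable (the variable name is illustrative, not a project setting):

# Illustrative sketch of the OpenAI-compatible call shape used for the v0 provider.
# V0_API_KEY is an assumption for this example only; the model string comes from LlmModel.V0_1_5_MD.
import asyncio
import os

import openai


async def main() -> None:
    client = openai.AsyncOpenAI(
        base_url="https://api.v0.dev/v1",
        api_key=os.environ["V0_API_KEY"],
    )
    response = await client.chat.completions.create(
        model="v0-1.5-md",
        messages=[{"role": "user", "content": "Say hello"}],
        max_tokens=64,
    )
    print(response.choices[0].message.content)


asyncio.run(main())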


@@ -1,22 +1,13 @@
import logging
from typing import Any, Literal
from autogpt_libs.utils.cache import thread_cached
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
StorageScope = Literal["within_agent", "across_agents"]
@@ -88,7 +79,7 @@ class PersistInformationBlock(Block):
async def _store_data(
self, user_id: str, node_exec_id: str, key: str, data: Any
) -> Any | None:
return await get_database_manager_client().set_execution_kv_data(
return await get_database_manager_async_client().set_execution_kv_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=key,
@@ -149,7 +140,7 @@ class RetrieveInformationBlock(Block):
yield "value", input_data.default_value
async def _retrieve_data(self, user_id: str, key: str) -> Any | None:
return await get_database_manager_client().get_execution_kv_data(
return await get_database_manager_async_client().get_execution_kv_data(
user_id=user_id,
key=key,
)


@@ -3,8 +3,7 @@ from typing import List
from backend.data.block import BlockOutput, BlockSchema
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util import settings
from backend.util.settings import BehaveAs
from backend.util.settings import BehaveAs, Settings
from ._api import (
TEST_CREDENTIALS,
@@ -16,6 +15,8 @@ from ._api import (
)
from .base import Slant3DBlockBase
settings = Settings()
class Slant3DCreateOrderBlock(Slant3DBlockBase):
"""Block for creating new orders"""
@@ -280,7 +281,7 @@ class Slant3DGetOrdersBlock(Slant3DBlockBase):
input_schema=self.Input,
output_schema=self.Output,
# This block is disabled for cloud hosted because it allows access to all orders for the account
disabled=settings.Settings().config.behave_as == BehaveAs.CLOUD,
disabled=settings.config.behave_as == BehaveAs.CLOUD,
test_input={"credentials": TEST_CREDENTIALS_INPUT},
test_credentials=TEST_CREDENTIALS,
test_output=[


@@ -9,8 +9,7 @@ from backend.data.block import (
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
from backend.util import settings
from backend.util.settings import AppEnvironment, BehaveAs
from backend.util.settings import AppEnvironment, BehaveAs, Settings
from ._api import (
TEST_CREDENTIALS,
@@ -19,6 +18,8 @@ from ._api import (
Slant3DCredentialsInput,
)
settings = Settings()
class Slant3DTriggerBase:
"""Base class for Slant3D webhook triggers"""
@@ -76,8 +77,8 @@ class Slant3DOrderWebhookBlock(Slant3DTriggerBase, Block):
),
# All webhooks are currently subscribed to for all orders. This works for self hosted, but not for cloud hosted prod
disabled=(
settings.Settings().config.behave_as == BehaveAs.CLOUD
and settings.Settings().config.app_env != AppEnvironment.LOCAL
settings.config.behave_as == BehaveAs.CLOUD
and settings.config.app_env != AppEnvironment.LOCAL
),
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=self.Input,


@@ -3,8 +3,6 @@ import re
from collections import Counter
from typing import TYPE_CHECKING, Any
from autogpt_libs.utils.cache import thread_cached
import backend.blocks.llm as llm
from backend.blocks.agent import AgentExecutorBlock
from backend.data.block import (
@@ -17,6 +15,7 @@ from backend.data.block import (
)
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util import json
from backend.util.clients import get_database_manager_async_client
if TYPE_CHECKING:
from backend.data.graph import Link, Node
@@ -24,14 +23,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
def _get_tool_requests(entry: dict[str, Any]) -> list[str]:
"""
Return a list of tool_call_ids if the entry is a tool request.
@@ -300,9 +291,32 @@ class SmartDecisionMakerBlock(Block):
for link in links:
sink_name = SmartDecisionMakerBlock.cleanup(link.sink_name)
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
# Handle dynamic fields (e.g., values_#_*, items_$_*, etc.)
# These are fields that get merged by the executor into their base field
if (
"_#_" in link.sink_name
or "_$_" in link.sink_name
or "_@_" in link.sink_name
):
# For dynamic fields, provide a generic string schema
# The executor will handle merging these into the appropriate structure
properties[sink_name] = {
"type": "string",
"description": f"Dynamic value for {link.sink_name}",
}
else:
# For regular fields, use the block's schema
try:
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
except (KeyError, AttributeError):
# If the field doesn't exist in the schema, provide a generic schema
properties[sink_name] = {
"type": "string",
"description": f"Value for {link.sink_name}",
}
tool_function["parameters"] = {
**block.input_schema.jsonschema(),
@@ -333,7 +347,7 @@ class SmartDecisionMakerBlock(Block):
if not graph_id or not graph_version:
raise ValueError("Graph ID or Graph Version not found in sink node.")
db_client = get_database_manager_client()
db_client = get_database_manager_async_client()
sink_graph_meta = await db_client.get_graph_metadata(graph_id, graph_version)
if not sink_graph_meta:
raise ValueError(
@@ -393,7 +407,7 @@ class SmartDecisionMakerBlock(Block):
ValueError: If no tool links are found for the specified node_id, or if a sink node
or its metadata cannot be found.
"""
db_client = get_database_manager_client()
db_client = get_database_manager_async_client()
tools = [
(link, node)
for link, node in await db_client.get_connected_output_nodes(node_id)
@@ -487,10 +501,6 @@ class SmartDecisionMakerBlock(Block):
}
)
prompt.extend(tool_output)
if input_data.multiple_tool_calls:
input_data.sys_prompt += "\nYou can call a tool (different tools) multiple times in a single response."
else:
input_data.sys_prompt += "\nOnly provide EXACTLY one function call, multiple tool calls is strictly prohibited."
values = input_data.prompt_values
if values:
@@ -529,15 +539,6 @@ class SmartDecisionMakerBlock(Block):
)
)
# Add reasoning to conversation history if available
if response.reasoning:
prompt.append(
{"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"}
)
prompt.append(response.raw_response)
yield "conversations", prompt
if not response.tool_calls:
yield "finished", response.response
return
@@ -571,3 +572,12 @@ class SmartDecisionMakerBlock(Block):
yield f"tools_^_{tool_name}_~_{arg_name}", tool_args[arg_name]
else:
yield f"tools_^_{tool_name}_~_{arg_name}", None
# Add reasoning to conversation history if available
if response.reasoning:
prompt.append(
{"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"}
)
prompt.append(response.raw_response)
yield "conversations", prompt


@@ -1,9 +1,8 @@
import logging
import pytest
from prisma.models import User
from backend.data.model import ProviderName
from backend.data.model import ProviderName, User
from backend.server.model import CreateGraph
from backend.server.rest_api import AgentServer
from backend.usecases.sample import create_test_graph, create_test_user


@@ -0,0 +1,130 @@
from unittest.mock import Mock
import pytest
from backend.blocks.data_manipulation import AddToListBlock, CreateDictionaryBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_dict_fields():
"""Test Smart Decision Maker can handle dynamic dictionary fields (_#_) for any block"""
# Create a mock node for CreateDictionaryBlock
mock_node = Mock()
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
# Create mock links with dynamic dictionary fields
mock_links = [
Mock(
source_name="tools_^_create_dict_~_name",
sink_name="values_#_name", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_age",
sink_name="values_#_age", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_city",
sink_name="values_#_city", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify the signature was created successfully
assert signature["type"] == "function"
assert "parameters" in signature["function"]
assert "properties" in signature["function"]["parameters"]
# Check that dynamic fields are handled
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 3 # Should have all three fields
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert "type" in prop_value
assert prop_value["type"] == "string" # Dynamic fields get string type
assert "description" in prop_value
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_list_fields():
"""Test Smart Decision Maker can handle dynamic list fields (_$_) for any block"""
# Create a mock node for AddToListBlock
mock_node = Mock()
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
# Create mock links with dynamic list fields
mock_links = [
Mock(
source_name="tools_^_add_to_list_~_0",
sink_name="entries_$_0", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_add_to_list_~_1",
sink_name="entries_$_1", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify dynamic list fields are handled properly
assert signature["type"] == "function"
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 2 # Should have both list items
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert prop_value["type"] == "string"
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_create_dict_block_with_dynamic_values():
"""Test CreateDictionaryBlock processes dynamic values correctly"""
block = CreateDictionaryBlock()
# Simulate what happens when executor merges dynamic fields
# The executor merges values_#_* fields into the values dict
input_data = block.input_schema(
values={
"existing": "value",
"name": "Alice", # This would come from values_#_name
"age": 25, # This would come from values_#_age
}
)
# Run the block
result = {}
async for output_name, output_value in block.run(input_data):
result[output_name] = output_value
# Check the result
assert "dictionary" in result
assert result["dictionary"]["existing"] == "value"
assert result["dictionary"]["name"] == "Alice"
assert result["dictionary"]["age"] == 25


@@ -1,19 +1,78 @@
import asyncio
import time
from datetime import datetime, timedelta
from typing import Any, Union
from typing import Any, Literal, Union
from zoneinfo import ZoneInfo
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
# Shared timezone literal type for all time/date blocks
TimezoneLiteral = Literal[
"UTC", # UTC±00:00
"Pacific/Honolulu", # UTC-10:00
"America/Anchorage", # UTC-09:00 (Alaska)
"America/Los_Angeles", # UTC-08:00 (Pacific)
"America/Denver", # UTC-07:00 (Mountain)
"America/Chicago", # UTC-06:00 (Central)
"America/New_York", # UTC-05:00 (Eastern)
"America/Caracas", # UTC-04:00
"America/Sao_Paulo", # UTC-03:00
"America/St_Johns", # UTC-02:30 (Newfoundland)
"Atlantic/South_Georgia", # UTC-02:00
"Atlantic/Azores", # UTC-01:00
"Europe/London", # UTC+00:00 (GMT/BST)
"Europe/Paris", # UTC+01:00 (CET)
"Europe/Athens", # UTC+02:00 (EET)
"Europe/Moscow", # UTC+03:00
"Asia/Tehran", # UTC+03:30 (Iran)
"Asia/Dubai", # UTC+04:00
"Asia/Kabul", # UTC+04:30 (Afghanistan)
"Asia/Karachi", # UTC+05:00 (Pakistan)
"Asia/Kolkata", # UTC+05:30 (India)
"Asia/Kathmandu", # UTC+05:45 (Nepal)
"Asia/Dhaka", # UTC+06:00 (Bangladesh)
"Asia/Yangon", # UTC+06:30 (Myanmar)
"Asia/Bangkok", # UTC+07:00
"Asia/Shanghai", # UTC+08:00 (China)
"Australia/Eucla", # UTC+08:45
"Asia/Tokyo", # UTC+09:00 (Japan)
"Australia/Adelaide", # UTC+09:30
"Australia/Sydney", # UTC+10:00
"Australia/Lord_Howe", # UTC+10:30
"Pacific/Noumea", # UTC+11:00
"Pacific/Auckland", # UTC+12:00 (New Zealand)
"Pacific/Chatham", # UTC+12:45
"Pacific/Tongatapu", # UTC+13:00
"Pacific/Kiritimati", # UTC+14:00
"Etc/GMT-12", # UTC+12:00
"Etc/GMT+12", # UTC-12:00
]
class TimeStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%H:%M:%S"
timezone: TimezoneLiteral = "UTC"
class TimeISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
include_microseconds: bool = False
class GetCurrentTimeBlock(Block):
class Input(BlockSchema):
trigger: str = SchemaField(
description="Trigger any data to output the current time"
)
format: str = SchemaField(
description="Format of the time to output", default="%H:%M:%S"
format_type: Union[TimeStrftimeFormat, TimeISO8601Format] = SchemaField(
discriminator="discriminator",
description="Format type for time output (strftime with custom format or ISO 8601)",
default=TimeStrftimeFormat(discriminator="strftime"),
)
class Output(BlockSchema):
@@ -30,19 +89,65 @@ class GetCurrentTimeBlock(Block):
output_schema=GetCurrentTimeBlock.Output,
test_input=[
{"trigger": "Hello"},
{"trigger": "Hello", "format": "%H:%M"},
{
"trigger": "Hello",
"format_type": {
"discriminator": "strftime",
"format": "%H:%M",
},
},
{
"trigger": "Hello",
"format_type": {
"discriminator": "iso8601",
"timezone": "UTC",
"include_microseconds": False,
},
},
],
test_output=[
("time", lambda _: time.strftime("%H:%M:%S")),
("time", lambda _: time.strftime("%H:%M")),
(
"time",
lambda t: "T" in t and ("+" in t or "Z" in t),
), # Check for ISO format with timezone
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
current_time = time.strftime(input_data.format)
if isinstance(input_data.format_type, TimeISO8601Format):
# ISO 8601 format for time only (extract time portion from full ISO datetime)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Get the full ISO format and extract just the time portion with timezone
if input_data.format_type.include_microseconds:
full_iso = dt.isoformat()
else:
full_iso = dt.isoformat(timespec="seconds")
# Extract time portion (everything after 'T')
current_time = full_iso.split("T")[1] if "T" in full_iso else full_iso
current_time = f"T{current_time}" # Add T prefix for ISO 8601 time format
else: # TimeStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_time = dt.strftime(input_data.format_type.format)
yield "time", current_time
class DateStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d"
timezone: TimezoneLiteral = "UTC"
class DateISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
class GetCurrentDateBlock(Block):
class Input(BlockSchema):
trigger: str = SchemaField(
@@ -53,8 +158,10 @@ class GetCurrentDateBlock(Block):
description="Offset in days from the current date",
default=0,
)
format: str = SchemaField(
description="Format of the date to output", default="%Y-%m-%d"
format_type: Union[DateStrftimeFormat, DateISO8601Format] = SchemaField(
discriminator="discriminator",
description="Format type for date output (strftime with custom format or ISO 8601)",
default=DateStrftimeFormat(discriminator="strftime"),
)
class Output(BlockSchema):
@@ -71,7 +178,22 @@ class GetCurrentDateBlock(Block):
output_schema=GetCurrentDateBlock.Output,
test_input=[
{"trigger": "Hello", "offset": "7"},
{"trigger": "Hello", "offset": "7", "format": "%m/%d/%Y"},
{
"trigger": "Hello",
"offset": "7",
"format_type": {
"discriminator": "strftime",
"format": "%m/%d/%Y",
},
},
{
"trigger": "Hello",
"offset": "0",
"format_type": {
"discriminator": "iso8601",
"timezone": "UTC",
},
},
],
test_output=[
(
@@ -85,6 +207,12 @@ class GetCurrentDateBlock(Block):
< timedelta(days=8),
# 7 days difference + 1 day error margin.
),
(
"date",
lambda t: len(t) == 10
and t[4] == "-"
and t[7] == "-", # ISO date format YYYY-MM-DD
),
],
)
@@ -93,8 +221,31 @@ class GetCurrentDateBlock(Block):
offset = int(input_data.offset)
except ValueError:
offset = 0
current_date = datetime.now() - timedelta(days=offset)
yield "date", current_date.strftime(input_data.format)
if isinstance(input_data.format_type, DateISO8601Format):
# ISO 8601 format for date only (YYYY-MM-DD)
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
# ISO 8601 date format is YYYY-MM-DD
date_str = current_date.date().isoformat()
else: # DateStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
date_str = current_date.strftime(input_data.format_type.format)
yield "date", date_str
class StrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d %H:%M:%S"
timezone: TimezoneLiteral = "UTC"
class ISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
include_microseconds: bool = False
class GetCurrentDateAndTimeBlock(Block):
@@ -102,9 +253,10 @@ class GetCurrentDateAndTimeBlock(Block):
trigger: str = SchemaField(
description="Trigger any data to output the current date and time"
)
format: str = SchemaField(
description="Format of the date and time to output",
default="%Y-%m-%d %H:%M:%S",
format_type: Union[StrftimeFormat, ISO8601Format] = SchemaField(
discriminator="discriminator",
description="Format type for date and time output (strftime with custom format or ISO 8601/RFC 3339)",
default=StrftimeFormat(discriminator="strftime"),
)
class Output(BlockSchema):
@@ -121,20 +273,63 @@ class GetCurrentDateAndTimeBlock(Block):
output_schema=GetCurrentDateAndTimeBlock.Output,
test_input=[
{"trigger": "Hello"},
{
"trigger": "Hello",
"format_type": {
"discriminator": "strftime",
"format": "%Y/%m/%d",
},
},
{
"trigger": "Hello",
"format_type": {
"discriminator": "iso8601",
"timezone": "UTC",
"include_microseconds": False,
},
},
],
test_output=[
(
"date_time",
lambda t: abs(
datetime.now() - datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
datetime.now(tz=ZoneInfo("UTC"))
- datetime.strptime(t + "+00:00", "%Y-%m-%d %H:%M:%S%z")
)
< timedelta(seconds=10), # 10 seconds error margin.
),
(
"date_time",
lambda t: abs(
datetime.now().date() - datetime.strptime(t, "%Y/%m/%d").date()
)
< timedelta(days=1), # Date format only, no time component
),
(
"date_time",
lambda t: abs(
datetime.now(tz=ZoneInfo("UTC")) - datetime.fromisoformat(t)
)
< timedelta(seconds=10), # 10 seconds error margin for ISO format.
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
current_date_time = time.strftime(input_data.format)
if isinstance(input_data.format_type, ISO8601Format):
# ISO 8601 format with specified timezone (also RFC3339-compliant)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Format with or without microseconds
if input_data.format_type.include_microseconds:
current_date_time = dt.isoformat()
else:
current_date_time = dt.isoformat(timespec="seconds")
else: # StrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_date_time = dt.strftime(input_data.format_type.format)
yield "date_time", current_date_time
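The discriminated union above lets one input field select either a custom strftime string or ISO 8601/RFC 3339 output, both rendered in the chosen timezone. The underlying datetime calls are just these; a stdlib-only sketch (timezone and format chosen for illustration):

# Stdlib-only sketch of the two output styles behind the format_type union above.
from datetime import datetime
from zoneinfo import ZoneInfo

now = datetime.now(tz=ZoneInfo("America/New_York"))

print(now.strftime("%Y-%m-%d %H:%M:%S"))  # strftime path, e.g. "2024-01-01 09:30:00"
print(now.isoformat(timespec="seconds"))  # ISO 8601 path, e.g. "2024-01-01T09:30:00-05:00"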


@@ -5,6 +5,12 @@ from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock
from backend.blocks.apollo.organization import SearchOrganizationsBlock
from backend.blocks.apollo.people import SearchPeopleBlock
from backend.blocks.apollo.person import GetPersonDetailBlock
from backend.blocks.enrichlayer.linkedin import (
GetLinkedinProfileBlock,
GetLinkedinProfilePictureBlock,
LinkedinPersonLookupBlock,
LinkedinRoleLookupBlock,
)
from backend.blocks.flux_kontext import AIImageEditorBlock, FluxKontextModelName
from backend.blocks.ideogram import IdeogramModelBlock
from backend.blocks.jina.embeddings import JinaEmbeddingBlock
@@ -30,6 +36,7 @@ from backend.integrations.credentials_store import (
anthropic_credentials,
apollo_credentials,
did_credentials,
enrichlayer_credentials,
groq_credentials,
ideogram_credentials,
jina_credentials,
@@ -39,6 +46,7 @@ from backend.integrations.credentials_store import (
replicate_credentials,
revid_credentials,
unreal_credentials,
v0_credentials,
)
# =============== Configure the cost for each LLM Model call =============== #
@@ -48,12 +56,18 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.O3_MINI: 2, # $1.10 / $4.40
LlmModel.O1: 16, # $15 / $60
LlmModel.O1_MINI: 4,
# GPT-5 models
LlmModel.GPT5: 2,
LlmModel.GPT5_MINI: 1,
LlmModel.GPT5_NANO: 1,
LlmModel.GPT5_CHAT: 2,
LlmModel.GPT41: 2,
LlmModel.GPT41_MINI: 1,
LlmModel.GPT4O_MINI: 1,
LlmModel.GPT4O: 3,
LlmModel.GPT4_TURBO: 10,
LlmModel.GPT3_5_TURBO: 1,
LlmModel.CLAUDE_4_1_OPUS: 21,
LlmModel.CLAUDE_4_OPUS: 21,
LlmModel.CLAUDE_4_SONNET: 5,
LlmModel.CLAUDE_3_7_SONNET: 5,
@@ -76,6 +90,8 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.OLLAMA_LLAMA3_405B: 1,
LlmModel.DEEPSEEK_LLAMA_70B: 1, # ? / ?
LlmModel.OLLAMA_DOLPHIN: 1,
LlmModel.OPENAI_GPT_OSS_120B: 1,
LlmModel.OPENAI_GPT_OSS_20B: 1,
LlmModel.GEMINI_FLASH_1_5: 1,
LlmModel.GEMINI_2_5_PRO: 4,
LlmModel.MISTRAL_NEMO: 1,
@@ -107,6 +123,10 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
LlmModel.GEMINI_2_0_FLASH_LITE: 1,
LlmModel.DEEPSEEK_R1_0528: 1,
# v0 by Vercel models
LlmModel.V0_1_5_MD: 1,
LlmModel.V0_1_5_LG: 2,
LlmModel.V0_1_0_MD: 1,
}
for model in LlmModel:
@@ -196,6 +216,23 @@ LLM_COST = (
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "llama_api"
]
# v0 by Vercel Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": v0_credentials.id,
"provider": v0_credentials.provider,
"type": v0_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "v0"
]
# AI/ML Api Models
+ [
BlockCost(
@@ -368,6 +405,54 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
},
)
],
GetLinkedinProfileBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinPersonLookupBlock: [
BlockCost(
cost_amount=2,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinRoleLookupBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
GetLinkedinProfilePictureBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
SmartDecisionMakerBlock: LLM_COST,
SearchOrganizationsBlock: [
BlockCost(


@@ -34,10 +34,10 @@ from backend.data.model import (
from backend.data.notifications import NotificationEventModel, RefundRequestData
from backend.data.user import get_user_by_id, get_user_email_by_id
from backend.notifications.notifications import queue_notification_async
from backend.server.model import Pagination
from backend.server.v2.admin.model import UserHistoryResponse
from backend.util.exceptions import InsufficientBalanceError
from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Settings
@@ -286,11 +286,17 @@ class UserCreditBase(ABC):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
async with db.locked_transaction(f"usr_trx_{user_id}"):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
user_balance, _ = await self._get_credits(user_id)
await CreditTransaction.prisma().update(
where={
@@ -998,8 +1004,8 @@ def get_block_costs() -> dict[str, list[BlockCost]]:
async def get_stripe_customer_id(user_id: str) -> str:
user = await get_user_by_id(user_id)
if user.stripeCustomerId:
return user.stripeCustomerId
if user.stripe_customer_id:
return user.stripe_customer_id
customer = stripe.Customer.create(
name=user.name or "",
@@ -1022,10 +1028,10 @@ async def set_auto_top_up(user_id: str, config: AutoTopUpConfig):
async def get_auto_top_up(user_id: str) -> AutoTopUpConfig:
user = await get_user_by_id(user_id)
if not user.topUpConfig:
if not user.top_up_config:
return AutoTopUpConfig(threshold=0, amount=0)
return AutoTopUpConfig.model_validate(user.topUpConfig)
return AutoTopUpConfig.model_validate(user.top_up_config)
async def admin_get_user_history(


@@ -1,6 +1,5 @@
import logging
import os
import zlib
from contextlib import asynccontextmanager
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse
from uuid import uuid4
@@ -50,6 +49,10 @@ prisma = Prisma(
logger = logging.getLogger(__name__)
def is_connected():
return prisma.is_connected()
@conn_retry("Prisma", "Acquiring connection")
async def connect():
if prisma.is_connected():
@@ -84,35 +87,50 @@ TRANSACTION_TIMEOUT = 15000 # 15 seconds - Increased from 5s to prevent timeout
@asynccontextmanager
async def transaction(timeout: int | None = None):
async def transaction(timeout: int = TRANSACTION_TIMEOUT):
"""
Create a database transaction with optional timeout.
Args:
timeout: Transaction timeout in milliseconds. If None, uses TRANSACTION_TIMEOUT (15s).
"""
if timeout is None:
timeout = TRANSACTION_TIMEOUT
async with prisma.tx(timeout=timeout) as tx:
yield tx
@asynccontextmanager
async def locked_transaction(key: str, timeout: int | None = None):
async def locked_transaction(key: str, timeout: int = TRANSACTION_TIMEOUT):
"""
Create a database transaction with advisory lock.
Create a transaction and take a per-key advisory *transaction* lock.
- Uses a 64-bit lock id via hashtextextended(key, 0) to avoid 32-bit collisions.
- Bound by lock_timeout and statement_timeout so it won't block indefinitely.
- Lock is held for the duration of the transaction and auto-released on commit/rollback.
Args:
key: Lock key for advisory lock
timeout: Transaction timeout in milliseconds. If None, uses TRANSACTION_TIMEOUT (15s).
key: String lock key (e.g., "usr_trx_<uuid>").
timeout: Transaction/lock/statement timeout in milliseconds.
"""
if timeout is None:
timeout = TRANSACTION_TIMEOUT
lock_key = zlib.crc32(key.encode("utf-8"))
async with transaction(timeout=timeout) as tx:
await tx.execute_raw("SELECT pg_advisory_xact_lock($1)", lock_key)
# Ensure we don't wait longer than desired
# Note: SET LOCAL doesn't support parameterized queries, must use string interpolation
await tx.execute_raw(f"SET LOCAL statement_timeout = '{int(timeout)}ms'") # type: ignore[arg-type]
await tx.execute_raw(f"SET LOCAL lock_timeout = '{int(timeout)}ms'") # type: ignore[arg-type]
# Block until acquired or lock_timeout hits
try:
await tx.execute_raw(
"SELECT pg_advisory_xact_lock(hashtextextended($1, 0))",
key,
)
except Exception as e:
# Normalize PG's lock timeout error to TimeoutError for callers
if "lock timeout" in str(e).lower():
raise TimeoutError(
f"Could not acquire lock for key={key!r} within {timeout}ms"
) from e
raise
yield tx
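Callers elsewhere in this diff (the credit transaction path, for instance) use the lock as an async context manager keyed per user. A hedged usage sketch, assuming the module is importable as backend.data.db; the timeout value is illustrative:

# Illustrative usage of locked_transaction; the import path and timeout are assumptions.
from backend.data import db


async def settle_user_balance(user_id: str) -> None:
    try:
        async with db.locked_transaction(f"usr_trx_{user_id}", timeout=5000) as tx:
            # Statements issued via `tx` run in one transaction while the per-key
            # advisory lock is held; the lock is released on commit or rollback.
            ...
    except TimeoutError:
        # Raised when the advisory lock cannot be acquired within the timeout.
        ...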


@@ -33,12 +33,13 @@ from prisma.types import (
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
)
from pydantic import BaseModel, ConfigDict, JsonValue
from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError
from pydantic.fields import Field
from backend.server.v2.store.exceptions import DatabaseError
from backend.util import type as type_utils
from backend.util.json import SafeJson
from backend.util.retry import func_retry
from backend.util.settings import Config
from backend.util.truncate import truncate
@@ -134,6 +135,10 @@ class GraphExecutionMeta(BaseDbModel):
default=None,
description="Error message if any",
)
activity_status: str | None = Field(
default=None,
description="AI-generated summary of what the agent did",
)
def to_db(self) -> GraphExecutionStats:
return GraphExecutionStats(
@@ -145,6 +150,7 @@ class GraphExecutionMeta(BaseDbModel):
node_count=self.node_exec_count,
node_error_count=self.node_error_count,
error=self.error,
activity_status=self.activity_status,
)
stats: Stats | None
@@ -189,6 +195,7 @@ class GraphExecutionMeta(BaseDbModel):
if isinstance(stats.error, Exception)
else stats.error
),
activity_status=stats.activity_status,
)
if stats
else None
@@ -311,18 +318,30 @@ class NodeExecutionResult(BaseModel):
@staticmethod
def from_db(_node_exec: AgentNodeExecution, user_id: Optional[str] = None):
if _node_exec.executionData:
# Execution that has been queued for execution will persist its data.
try:
stats = NodeExecutionStats.model_validate(_node_exec.stats or {})
except (ValueError, ValidationError):
stats = NodeExecutionStats()
if stats.cleared_inputs:
input_data: BlockInput = defaultdict()
for name, messages in stats.cleared_inputs.items():
input_data[name] = messages[-1] if messages else ""
elif _node_exec.executionData:
input_data = type_utils.convert(_node_exec.executionData, dict[str, Any])
else:
# For incomplete execution, executionData will not yet be available.
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, type[Any])
output_data: CompletedBlockOutput = defaultdict(list)
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
if stats.cleared_outputs:
for name, messages in stats.cleared_outputs.items():
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
@@ -630,6 +649,8 @@ async def update_graph_execution_stats(
"OR": [
{"executionStatus": ExecutionStatus.RUNNING},
{"executionStatus": ExecutionStatus.QUEUED},
# Terminated graph can be resumed.
{"executionStatus": ExecutionStatus.TERMINATED},
],
},
data=update_data,
@@ -646,27 +667,6 @@ async def update_graph_execution_stats(
return GraphExecution.from_db(graph_exec)
async def update_node_execution_stats(
node_exec_id: str, stats: NodeExecutionStats
) -> NodeExecutionResult:
data = stats.model_dump()
if isinstance(data["error"], Exception):
data["error"] = str(data["error"])
res = await AgentNodeExecution.prisma().update(
where={"id": node_exec_id},
data={
"stats": SafeJson(data),
"endedTime": datetime.now(tz=timezone.utc),
},
include=EXECUTION_RESULT_INCLUDE,
)
if not res:
raise ValueError(f"Node execution {node_exec_id} not found.")
return NodeExecutionResult.from_db(res)
async def update_node_execution_status_batch(
node_exec_ids: list[str],
status: ExecutionStatus,
@@ -896,15 +896,15 @@ class RedisExecutionEventBus(RedisEventBus[ExecutionEvent]):
def publish(self, res: GraphExecution | NodeExecutionResult):
if isinstance(res, GraphExecution):
self.publish_graph_exec_update(res)
self._publish_graph_exec_update(res)
else:
self.publish_node_exec_update(res)
self._publish_node_exec_update(res)
def publish_node_exec_update(self, res: NodeExecutionResult):
def _publish_node_exec_update(self, res: NodeExecutionResult):
event = NodeExecutionEvent.model_validate(res.model_dump())
self._publish(event, f"{res.user_id}/{res.graph_id}/{res.graph_exec_id}")
def publish_graph_exec_update(self, res: GraphExecution):
def _publish_graph_exec_update(self, res: GraphExecution):
event = GraphExecutionEvent.model_validate(res.model_dump())
self._publish(event, f"{res.user_id}/{res.graph_id}/{res.id}")
@@ -936,17 +936,18 @@ class AsyncRedisExecutionEventBus(AsyncRedisEventBus[ExecutionEvent]):
def event_bus_name(self) -> str:
return config.execution_event_bus_name
@func_retry
async def publish(self, res: GraphExecutionMeta | NodeExecutionResult):
if isinstance(res, GraphExecutionMeta):
await self.publish_graph_exec_update(res)
await self._publish_graph_exec_update(res)
else:
await self.publish_node_exec_update(res)
await self._publish_node_exec_update(res)
async def publish_node_exec_update(self, res: NodeExecutionResult):
async def _publish_node_exec_update(self, res: NodeExecutionResult):
event = NodeExecutionEvent.model_validate(res.model_dump())
await self._publish(event, f"{res.user_id}/{res.graph_id}/{res.graph_exec_id}")
async def publish_graph_exec_update(self, res: GraphExecutionMeta):
async def _publish_graph_exec_update(self, res: GraphExecutionMeta):
# GraphExecutionEvent requires inputs and outputs fields that GraphExecutionMeta doesn't have
# Add default empty values for compatibility
event_data = res.model_dump()


@@ -0,0 +1,109 @@
import logging
from collections import defaultdict
from datetime import datetime
from prisma.enums import AgentExecutionStatus
from backend.data.execution import get_graph_executions
from backend.data.graph import get_graph_metadata
from backend.data.model import UserExecutionSummaryStats
from backend.server.v2.store.exceptions import DatabaseError
from backend.util.logging import TruncatedLogger
logger = TruncatedLogger(logging.getLogger(__name__), prefix="[SummaryData]")
async def get_user_execution_summary_data(
user_id: str, start_time: datetime, end_time: datetime
) -> UserExecutionSummaryStats:
"""Gather all summary data for a user in a time range.
This function fetches graph executions once and aggregates all required
statistics in a single pass for efficiency.
"""
try:
# Fetch graph executions once
executions = await get_graph_executions(
user_id=user_id,
created_time_gte=start_time,
created_time_lte=end_time,
)
# Initialize aggregation variables
total_credits_used = 0.0
total_executions = len(executions)
successful_runs = 0
failed_runs = 0
terminated_runs = 0
execution_times = []
agent_usage = defaultdict(int)
cost_by_graph_id = defaultdict(float)
# Single pass through executions to aggregate all stats
for execution in executions:
# Count execution statuses (including TERMINATED as failed)
if execution.status == AgentExecutionStatus.COMPLETED:
successful_runs += 1
elif execution.status == AgentExecutionStatus.FAILED:
failed_runs += 1
elif execution.status == AgentExecutionStatus.TERMINATED:
terminated_runs += 1
# Aggregate costs from stats
if execution.stats and hasattr(execution.stats, "cost"):
cost_in_dollars = execution.stats.cost / 100
total_credits_used += cost_in_dollars
cost_by_graph_id[execution.graph_id] += cost_in_dollars
# Collect execution times
if execution.stats and hasattr(execution.stats, "duration"):
execution_times.append(execution.stats.duration)
# Count agent usage
agent_usage[execution.graph_id] += 1
# Calculate derived stats
total_execution_time = sum(execution_times)
average_execution_time = (
total_execution_time / len(execution_times) if execution_times else 0
)
# Find most used agent
most_used_agent = "No agents used"
if agent_usage:
most_used_agent_id = max(agent_usage, key=lambda k: agent_usage[k])
try:
graph_meta = await get_graph_metadata(graph_id=most_used_agent_id)
most_used_agent = (
graph_meta.name if graph_meta else f"Agent {most_used_agent_id[:8]}"
)
except Exception:
logger.warning(f"Could not get metadata for graph {most_used_agent_id}")
most_used_agent = f"Agent {most_used_agent_id[:8]}"
# Convert graph_ids to agent names for cost breakdown
cost_breakdown = {}
for graph_id, cost in cost_by_graph_id.items():
try:
graph_meta = await get_graph_metadata(graph_id=graph_id)
agent_name = graph_meta.name if graph_meta else f"Agent {graph_id[:8]}"
except Exception:
logger.warning(f"Could not get metadata for graph {graph_id}")
agent_name = f"Agent {graph_id[:8]}"
cost_breakdown[agent_name] = cost
# Build the summary stats object (include terminated runs as failed)
return UserExecutionSummaryStats(
total_credits_used=total_credits_used,
total_executions=total_executions,
successful_runs=successful_runs,
failed_runs=failed_runs + terminated_runs,
most_used_agent=most_used_agent,
total_execution_time=total_execution_time,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
)
except Exception as e:
logger.error(f"Failed to get user summary data: {e}")
raise DatabaseError(f"Failed to get user summary data: {e}") from e
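A hedged usage sketch of the new helper, aggregating the last 30 days for a user; the import path is an assumption for illustration, while the field names come from the UserExecutionSummaryStats returned above:

# Illustrative call of get_user_execution_summary_data; the import path is assumed.
from datetime import datetime, timedelta, timezone

from backend.data.summary import get_user_execution_summary_data


async def print_monthly_summary(user_id: str) -> None:
    end = datetime.now(tz=timezone.utc)
    stats = await get_user_execution_summary_data(
        user_id=user_id,
        start_time=end - timedelta(days=30),
        end_time=end,
    )
    print(f"{stats.total_executions} runs: {stats.successful_runs} ok, {stats.failed_runs} failed")
    print(f"most used agent: {stats.most_used_agent}")
    print(f"credits used: {stats.total_credits_used:.2f} across {len(stats.cost_breakdown)} agents")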


@@ -416,6 +416,10 @@ class GraphModel(Graph):
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
):
"""
Validate graph structure and raise `ValueError` on issues.
For structured error reporting, use `validate_graph_get_errors`.
"""
self._validate_graph(self, for_run, nodes_input_masks)
for sub_graph in self.sub_graphs:
self._validate_graph(sub_graph, for_run, nodes_input_masks)
@@ -425,15 +429,58 @@ class GraphModel(Graph):
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
):
def is_tool_pin(name: str) -> bool:
return name.startswith("tools_^_")
) -> None:
errors = GraphModel._validate_graph_get_errors(
graph, for_run, nodes_input_masks
)
if errors:
# Just raise the first error for backward compatibility
first_error = next(iter(errors.values()))
first_field_error = next(iter(first_error.values()))
raise ValueError(first_field_error)
def sanitize(name):
sanitized_name = name.split("_#_")[0].split("_@_")[0].split("_$_")[0]
if is_tool_pin(sanitized_name):
return "tools"
return sanitized_name
def validate_graph_get_errors(
self,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
Returns: dict[node_id, dict[field_name, error_message]]
"""
return {
**self._validate_graph_get_errors(self, for_run, nodes_input_masks),
**{
node_id: error
for sub_graph in self.sub_graphs
for node_id, error in self._validate_graph_get_errors(
sub_graph, for_run, nodes_input_masks
).items()
},
}
@staticmethod
def _validate_graph_get_errors(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
Returns: dict[node_id, dict[field_name, error_message]]
"""
# First, check for structural issues with the graph
try:
GraphModel._validate_graph_structure(graph)
except ValueError:
# If structural validation fails, we can't provide per-node errors
# so we re-raise as is
raise
# Collect errors per node
node_errors: dict[str, dict[str, str]] = defaultdict(dict)
# Validate smart decision maker nodes
nodes_block = {
@@ -442,7 +489,7 @@ class GraphModel(Graph):
if (block := get_block(node.block_id)) is not None
}
input_links = defaultdict(list)
input_links: dict[str, list[Link]] = defaultdict(list)
for link in graph.links:
input_links[link.sink_id].append(link)
@@ -450,17 +497,22 @@ class GraphModel(Graph):
# Nodes: required fields are filled or connected and dependencies are satisfied
for node in graph.nodes:
if (block := nodes_block.get(node.id)) is None:
# For invalid blocks, we still raise immediately as this is a structural issue
raise ValueError(f"Invalid block {node.block_id} for node #{node.id}")
node_input_mask = (
nodes_input_masks.get(node.id, {}) if nodes_input_masks else {}
)
provided_inputs = set(
[sanitize(name) for name in node.input_default]
+ [sanitize(link.sink_name) for link in input_links.get(node.id, [])]
[_sanitize_pin_name(name) for name in node.input_default]
+ [
_sanitize_pin_name(link.sink_name)
for link in input_links.get(node.id, [])
]
+ ([name for name in node_input_mask] if node_input_mask else [])
)
InputSchema = block.input_schema
for name in (required_fields := InputSchema.get_required_fields()):
if (
name not in provided_inputs
@@ -477,18 +529,16 @@ class GraphModel(Graph):
]
)
):
raise ValueError(
f"Node {block.name} #{node.id} required input missing: `{name}`"
)
node_errors[node.id][name] = "This field is required"
if (
block.block_type == BlockType.INPUT
and (input_key := node.input_default.get("name"))
and is_credentials_field_name(input_key)
):
raise ValueError(
f"Agent input node uses reserved name '{input_key}'; "
"'credentials' and `*_credentials` are reserved input names"
node_errors[node.id]["name"] = (
f"'{input_key}' is a reserved input name: "
"'credentials' and `*_credentials` are reserved"
)
# Get input schema properties and check dependencies
@@ -538,10 +588,15 @@ class GraphModel(Graph):
# Check for missing dependencies when dependent field is present
missing_deps = [dep for dep in dependencies if not has_value(node, dep)]
if missing_deps and (field_has_value or field_is_required):
raise ValueError(
f"Node {block.name} #{node.id}: Field `{field_name}` requires [{', '.join(missing_deps)}] to be set"
)
node_errors[node.id][
field_name
] = f"Requires {', '.join(missing_deps)} to be set"
return node_errors
@staticmethod
def _validate_graph_structure(graph: BaseGraph):
"""Validate graph structure (links, connections, etc.)"""
node_map = {v.id: v for v in graph.nodes}
def is_static_output_block(nid: str) -> bool:
@@ -567,7 +622,7 @@ class GraphModel(Graph):
f"{prefix}, {node.block_id} is invalid block id, available blocks: {blocks}"
)
sanitized_name = sanitize(name)
sanitized_name = _sanitize_pin_name(name)
vals = node.input_default
if i == 0:
fields = (
@@ -581,7 +636,7 @@ class GraphModel(Graph):
if block.block_type not in [BlockType.AGENT]
else vals.get("input_schema", {}).get("properties", {}).keys()
)
if sanitized_name not in fields and not is_tool_pin(name):
if sanitized_name not in fields and not _is_tool_pin(name):
fields_msg = f"Allowed fields: {fields}"
raise ValueError(f"{prefix}, `{name}` invalid, {fields_msg}")
@@ -618,6 +673,17 @@ class GraphModel(Graph):
)
def _is_tool_pin(name: str) -> bool:
return name.startswith("tools_^_")
def _sanitize_pin_name(name: str) -> str:
sanitized_name = name.split("_#_")[0].split("_@_")[0].split("_$_")[0]
if _is_tool_pin(sanitized_name):
return "tools"
return sanitized_name
class GraphMeta(Graph):
user_id: str
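As a rough usage sketch (illustrative, not part of the diff): the new API returns errors keyed by node id and field, so a caller can surface them per field instead of catching a single ValueError. `graph_model` below stands in for a loaded GraphModel instance.

errors = graph_model.validate_graph_get_errors(for_run=True)
if errors:
    # e.g. {"<node-id>": {"prompt": "This field is required"}}
    for node_id, field_errors in errors.items():
        for field_name, message in field_errors.items():
            print(f"Node {node_id}: `{field_name}`: {message}")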

View File

@@ -5,6 +5,7 @@ import enum
import logging
from collections import defaultdict
from datetime import datetime, timezone
from json import JSONDecodeError
from typing import (
TYPE_CHECKING,
Annotated,
@@ -40,12 +41,120 @@ from pydantic_core import (
from typing_extensions import TypedDict
from backend.integrations.providers import ProviderName
from backend.util.json import loads as json_loads
from backend.util.settings import Secrets
# Type alias for any provider name (including custom ones)
AnyProviderName = str # Will be validated as ProviderName at runtime
class User(BaseModel):
"""Application-layer User model with snake_case convention."""
model_config = ConfigDict(
extra="forbid",
str_strip_whitespace=True,
)
id: str = Field(..., description="User ID")
email: str = Field(..., description="User email address")
email_verified: bool = Field(default=True, description="Whether email is verified")
name: Optional[str] = Field(None, description="User display name")
created_at: datetime = Field(..., description="When user was created")
updated_at: datetime = Field(..., description="When user was last updated")
metadata: dict[str, Any] = Field(
default_factory=dict, description="User metadata as dict"
)
integrations: str = Field(default="", description="Encrypted integrations data")
stripe_customer_id: Optional[str] = Field(None, description="Stripe customer ID")
top_up_config: Optional["AutoTopUpConfig"] = Field(
None, description="Top up configuration"
)
# Notification preferences
max_emails_per_day: int = Field(default=3, description="Maximum emails per day")
notify_on_agent_run: bool = Field(default=True, description="Notify on agent run")
notify_on_zero_balance: bool = Field(
default=True, description="Notify on zero balance"
)
notify_on_low_balance: bool = Field(
default=True, description="Notify on low balance"
)
notify_on_block_execution_failed: bool = Field(
default=True, description="Notify on block execution failure"
)
notify_on_continuous_agent_error: bool = Field(
default=True, description="Notify on continuous agent error"
)
notify_on_daily_summary: bool = Field(
default=True, description="Notify on daily summary"
)
notify_on_weekly_summary: bool = Field(
default=True, description="Notify on weekly summary"
)
notify_on_monthly_summary: bool = Field(
default=True, description="Notify on monthly summary"
)
@classmethod
def from_db(cls, prisma_user: "PrismaUser") -> "User":
"""Convert a database User object to application User model."""
# Handle metadata field - convert from JSON string or dict to dict
metadata = {}
if prisma_user.metadata:
if isinstance(prisma_user.metadata, str):
try:
metadata = json_loads(prisma_user.metadata)
except (JSONDecodeError, TypeError):
metadata = {}
elif isinstance(prisma_user.metadata, dict):
metadata = prisma_user.metadata
# Handle topUpConfig field
top_up_config = None
if prisma_user.topUpConfig:
if isinstance(prisma_user.topUpConfig, str):
try:
config_dict = json_loads(prisma_user.topUpConfig)
top_up_config = AutoTopUpConfig.model_validate(config_dict)
except (JSONDecodeError, TypeError, ValueError):
top_up_config = None
elif isinstance(prisma_user.topUpConfig, dict):
try:
top_up_config = AutoTopUpConfig.model_validate(
prisma_user.topUpConfig
)
except ValueError:
top_up_config = None
return cls(
id=prisma_user.id,
email=prisma_user.email,
email_verified=prisma_user.emailVerified is not False,  # keep an explicit False
name=prisma_user.name,
created_at=prisma_user.createdAt,
updated_at=prisma_user.updatedAt,
metadata=metadata,
integrations=prisma_user.integrations or "",
stripe_customer_id=prisma_user.stripeCustomerId,
top_up_config=top_up_config,
max_emails_per_day=prisma_user.maxEmailsPerDay or 3,
# Default to True only when the stored value is unset; keep an explicit False
notify_on_agent_run=prisma_user.notifyOnAgentRun is not False,
notify_on_zero_balance=prisma_user.notifyOnZeroBalance is not False,
notify_on_low_balance=prisma_user.notifyOnLowBalance is not False,
notify_on_block_execution_failed=(
prisma_user.notifyOnBlockExecutionFailed is not False
),
notify_on_continuous_agent_error=(
prisma_user.notifyOnContinuousAgentError is not False
),
notify_on_daily_summary=prisma_user.notifyOnDailySummary is not False,
notify_on_weekly_summary=prisma_user.notifyOnWeeklySummary is not False,
notify_on_monthly_summary=prisma_user.notifyOnMonthlySummary is not False,
)
if TYPE_CHECKING:
from prisma.models import User as PrismaUser
from backend.data.block import BlockSchema
T = TypeVar("T")
@@ -644,7 +753,7 @@ class NodeExecutionStats(BaseModel):
arbitrary_types_allowed=True,
)
error: Optional[Exception | str] = None
error: Optional[BaseException | str] = None
walltime: float = 0
cputime: float = 0
input_size: int = 0
@@ -655,6 +764,9 @@ class NodeExecutionStats(BaseModel):
output_token_count: int = 0
extra_cost: int = 0
extra_steps: int = 0
# Moderation fields
cleared_inputs: Optional[dict[str, list[str]]] = None
cleared_outputs: Optional[dict[str, list[str]]] = None
def __iadd__(self, other: "NodeExecutionStats") -> "NodeExecutionStats":
"""Mutate this instance by adding another NodeExecutionStats."""
@@ -706,3 +818,24 @@ class GraphExecutionStats(BaseModel):
default=0, description="Total number of errors generated"
)
cost: int = Field(default=0, description="Total execution cost (cents)")
activity_status: Optional[str] = Field(
default=None, description="AI-generated summary of what the agent did"
)
class UserExecutionSummaryStats(BaseModel):
"""Summary of user statistics for a specific user."""
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
)
total_credits_used: float = Field(default=0)
total_executions: int = Field(default=0)
successful_runs: int = Field(default=0)
failed_runs: int = Field(default=0)
most_used_agent: str = Field(default="")
total_execution_time: float = Field(default=0)
average_execution_time: float = Field(default=0)
cost_breakdown: dict[str, float] = Field(default_factory=dict)
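A hedged sketch of the new conversion path. The record below is a SimpleNamespace stand-in carrying only the camelCase attributes that from_db reads; real records come from Prisma queries, so the values and the type-ignore are illustrative.

from datetime import datetime, timezone
from types import SimpleNamespace

from backend.data.model import User

now = datetime.now(timezone.utc)
record = SimpleNamespace(
    id="user-123",
    email="user@example.com",
    emailVerified=False,
    name="Example User",
    createdAt=now,
    updatedAt=now,
    metadata='{"onboarded": true}',  # JSON string is parsed by from_db
    integrations="",
    stripeCustomerId=None,
    topUpConfig=None,
    maxEmailsPerDay=3,
    notifyOnAgentRun=True,
    notifyOnZeroBalance=True,
    notifyOnLowBalance=True,
    notifyOnBlockExecutionFailed=True,
    notifyOnContinuousAgentError=True,
    notifyOnDailySummary=True,
    notifyOnWeeklySummary=True,
    notifyOnMonthlySummary=True,
)
user = User.from_db(record)  # type: ignore[arg-type]
print(user.metadata)  # {'onboarded': True}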

View File

@@ -4,20 +4,12 @@ from enum import Enum
from typing import Awaitable, Optional
import aio_pika
import aio_pika.exceptions as aio_ex
import pika
import pika.adapters.blocking_connection
from pika.exceptions import AMQPError
from pika.spec import BasicProperties
from pydantic import BaseModel
from tenacity import (
retry,
retry_if_exception_type,
stop_after_attempt,
wait_random_exponential,
)
from backend.util.retry import conn_retry
from backend.util.retry import conn_retry, func_retry
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
@@ -148,6 +140,7 @@ class SyncRabbitMQ(RabbitMQBase):
socket_timeout=SOCKET_TIMEOUT,
connection_attempts=CONNECTION_ATTEMPTS,
retry_delay=RETRY_DELAY,
heartbeat=300, # 5 minute timeout (heartbeats sent every 2.5 min)
)
self._connection = pika.BlockingConnection(parameters)
@@ -198,12 +191,7 @@ class SyncRabbitMQ(RabbitMQBase):
routing_key=queue.routing_key or queue.name,
)
@retry(
retry=retry_if_exception_type((AMQPError, ConnectionError)),
wait=wait_random_exponential(multiplier=1, max=5),
stop=stop_after_attempt(5),
reraise=True,
)
@func_retry
def publish_message(
self,
routing_key: str,
@@ -257,6 +245,7 @@ class AsyncRabbitMQ(RabbitMQBase):
password=self.password,
virtualhost=self.config.vhost.lstrip("/"),
blocked_connection_timeout=BLOCKED_CONNECTION_TIMEOUT,
heartbeat=300, # 5 minute timeout (heartbeats sent every 2.5 min)
)
self._channel = await self._connection.channel()
await self._channel.set_qos(prefetch_count=1)
@@ -302,12 +291,7 @@ class AsyncRabbitMQ(RabbitMQBase):
exchange, routing_key=queue.routing_key or queue.name
)
@retry(
retry=retry_if_exception_type((aio_ex.AMQPError, ConnectionError)),
wait=wait_random_exponential(multiplier=1, max=5),
stop=stop_after_attempt(5),
reraise=True,
)
@func_retry
async def publish_message(
self,
routing_key: str,
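The inline tenacity policies removed above are replaced by the shared func_retry decorator from backend.util.retry, whose definition is not part of this diff. A minimal sketch of a decorator with an equivalent policy might look like this (the retried exception types are an assumption):

from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)

# Hypothetical stand-in; the real func_retry may differ in policy and scope.
func_retry = retry(
    retry=retry_if_exception_type(Exception),
    wait=wait_random_exponential(multiplier=1, max=5),
    stop=stop_after_attempt(5),
    reraise=True,
)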

View File

@@ -9,11 +9,11 @@ from urllib.parse import quote_plus
from autogpt_libs.auth.models import DEFAULT_USER_ID
from fastapi import HTTPException
from prisma.enums import NotificationType
from prisma.models import User
from prisma.models import User as PrismaUser
from prisma.types import JsonFilter, UserCreateInput, UserUpdateInput
from backend.data.db import prisma
from backend.data.model import UserIntegrations, UserMetadata, UserMetadataRaw
from backend.data.model import User, UserIntegrations, UserMetadata
from backend.data.notifications import NotificationPreference, NotificationPreferenceDTO
from backend.server.v2.store.exceptions import DatabaseError
from backend.util.encryption import JSONCryptor
@@ -21,6 +21,7 @@ from backend.util.json import SafeJson
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
async def get_or_create_user(user_data: dict) -> User:
@@ -43,7 +44,7 @@ async def get_or_create_user(user_data: dict) -> User:
)
)
return User.model_validate(user)
return User.from_db(user)
except Exception as e:
raise DatabaseError(f"Failed to get or create user {user_data}: {e}") from e
@@ -52,7 +53,7 @@ async def get_user_by_id(user_id: str) -> User:
user = await prisma.user.find_unique(where={"id": user_id})
if not user:
raise ValueError(f"User not found with ID: {user_id}")
return User.model_validate(user)
return User.from_db(user)
async def get_user_email_by_id(user_id: str) -> Optional[str]:
@@ -66,7 +67,7 @@ async def get_user_email_by_id(user_id: str) -> Optional[str]:
async def get_user_by_email(email: str) -> Optional[User]:
try:
user = await prisma.user.find_unique(where={"email": email})
return User.model_validate(user) if user else None
return User.from_db(user) if user else None
except Exception as e:
raise DatabaseError(f"Failed to get user by email {email}: {e}") from e
@@ -90,27 +91,11 @@ async def create_default_user() -> Optional[User]:
name="Default User",
)
)
return User.model_validate(user)
async def get_user_metadata(user_id: str) -> UserMetadata:
user = await User.prisma().find_unique_or_raise(
where={"id": user_id},
)
metadata = cast(UserMetadataRaw, user.metadata)
return UserMetadata.model_validate(metadata)
async def update_user_metadata(user_id: str, metadata: UserMetadata):
await User.prisma().update(
where={"id": user_id},
data={"metadata": SafeJson(metadata.model_dump())},
)
return User.from_db(user)
async def get_user_integrations(user_id: str) -> UserIntegrations:
user = await User.prisma().find_unique_or_raise(
user = await PrismaUser.prisma().find_unique_or_raise(
where={"id": user_id},
)
@@ -125,7 +110,7 @@ async def get_user_integrations(user_id: str) -> UserIntegrations:
async def update_user_integrations(user_id: str, data: UserIntegrations):
encrypted_data = JSONCryptor().encrypt(data.model_dump(exclude_none=True))
await User.prisma().update(
await PrismaUser.prisma().update(
where={"id": user_id},
data={"integrations": encrypted_data},
)
@@ -133,7 +118,7 @@ async def update_user_integrations(user_id: str, data: UserIntegrations):
async def migrate_and_encrypt_user_integrations():
"""Migrate integration credentials and OAuth states from metadata to integrations column."""
users = await User.prisma().find_many(
users = await PrismaUser.prisma().find_many(
where={
"metadata": cast(
JsonFilter,
@@ -169,7 +154,7 @@ async def migrate_and_encrypt_user_integrations():
raw_metadata.pop("integration_oauth_states", None)
# Update metadata without integration data
await User.prisma().update(
await PrismaUser.prisma().update(
where={"id": user.id},
data={"metadata": SafeJson(raw_metadata)},
)
@@ -177,7 +162,7 @@ async def migrate_and_encrypt_user_integrations():
async def get_active_user_ids_in_timerange(start_time: str, end_time: str) -> list[str]:
try:
users = await User.prisma().find_many(
users = await PrismaUser.prisma().find_many(
where={
"AgentGraphExecutions": {
"some": {
@@ -207,7 +192,7 @@ async def get_active_users_ids() -> list[str]:
async def get_user_notification_preference(user_id: str) -> NotificationPreference:
try:
user = await User.prisma().find_unique_or_raise(
user = await PrismaUser.prisma().find_unique_or_raise(
where={"id": user_id},
)
@@ -284,7 +269,7 @@ async def update_user_notification_preference(
if data.daily_limit:
update_data["maxEmailsPerDay"] = data.daily_limit
user = await User.prisma().update(
user = await PrismaUser.prisma().update(
where={"id": user_id},
data=update_data,
)
@@ -322,7 +307,7 @@ async def update_user_notification_preference(
async def set_user_email_verification(user_id: str, verified: bool) -> None:
"""Set the email verification status for a user."""
try:
await User.prisma().update(
await PrismaUser.prisma().update(
where={"id": user_id},
data={"emailVerified": verified},
)
@@ -335,7 +320,7 @@ async def set_user_email_verification(user_id: str, verified: bool) -> None:
async def get_user_email_verification(user_id: str) -> bool:
"""Get the email verification status for a user."""
try:
user = await User.prisma().find_unique_or_raise(
user = await PrismaUser.prisma().find_unique_or_raise(
where={"id": user_id},
)
return user.emailVerified
@@ -348,7 +333,7 @@ async def get_user_email_verification(user_id: str) -> bool:
def generate_unsubscribe_link(user_id: str) -> str:
"""Generate a link to unsubscribe from all notifications"""
# Create an HMAC using a secret key
secret_key = Settings().secrets.unsubscribe_secret_key
secret_key = settings.secrets.unsubscribe_secret_key
signature = hmac.new(
secret_key.encode("utf-8"), user_id.encode("utf-8"), hashlib.sha256
).digest()
@@ -359,7 +344,7 @@ def generate_unsubscribe_link(user_id: str) -> str:
).decode("utf-8")
logger.info(f"Generating unsubscribe link for user {user_id}")
base_url = Settings().config.platform_base_url
base_url = settings.config.platform_base_url
return f"{base_url}/api/email/unsubscribe?token={quote_plus(token)}"
@@ -371,7 +356,7 @@ async def unsubscribe_user_by_token(token: str) -> None:
user_id, received_signature_hex = decoded.split(":", 1)
# Verify the signature
secret_key = Settings().secrets.unsubscribe_secret_key
secret_key = settings.secrets.unsubscribe_secret_key
expected_signature = hmac.new(
secret_key.encode("utf-8"), user_id.encode("utf-8"), hashlib.sha256
).digest()
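A hedged sketch of the verification side of this token scheme. The exact token encoding is not fully shown in the diff; the sketch assumes a base64-encoded "user_id:signature_hex" payload, which matches the split(":", 1) above.

import base64
import hashlib
import hmac

def verify_unsubscribe_token(token: str, secret_key: str) -> str:
    # Assumed encoding: urlsafe base64 of "user_id:signature_hex"
    decoded = base64.urlsafe_b64decode(token.encode("utf-8")).decode("utf-8")
    user_id, received_signature_hex = decoded.split(":", 1)
    expected = hmac.new(
        secret_key.encode("utf-8"), user_id.encode("utf-8"), hashlib.sha256
    ).digest()
    # compare_digest avoids timing side channels
    if not hmac.compare_digest(expected.hex(), received_signature_hex):
        raise ValueError("Invalid unsubscribe token")
    return user_id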

View File

@@ -0,0 +1,434 @@
"""
Module for generating AI-based activity status for graph executions.
"""
import json
import logging
from typing import TYPE_CHECKING, Any, NotRequired, TypedDict
from pydantic import SecretStr
from backend.blocks.llm import LlmModel, llm_call
from backend.data.block import get_block
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.data.model import APIKeyCredentials, GraphExecutionStats
from backend.util.feature_flag import Flag, is_feature_enabled
from backend.util.retry import func_retry
from backend.util.settings import Settings
from backend.util.truncate import truncate
if TYPE_CHECKING:
from backend.executor import DatabaseManagerAsyncClient
logger = logging.getLogger(__name__)
class ErrorInfo(TypedDict):
"""Type definition for error information."""
error: str
execution_id: str
timestamp: str
class InputOutputInfo(TypedDict):
"""Type definition for input/output information."""
execution_id: str
output_data: dict[str, Any] # Used for both input and output data
timestamp: str
class NodeInfo(TypedDict):
"""Type definition for node information."""
node_id: str
block_id: str
block_name: str
block_description: str
execution_count: int
error_count: int
recent_errors: list[ErrorInfo]
recent_outputs: list[InputOutputInfo]
recent_inputs: list[InputOutputInfo]
class NodeRelation(TypedDict):
"""Type definition for node relation information."""
source_node_id: str
sink_node_id: str
source_name: str
sink_name: str
is_static: bool
source_block_name: NotRequired[str] # Optional, only set if block exists
sink_block_name: NotRequired[str] # Optional, only set if block exists
def _truncate_uuid(uuid_str: str) -> str:
"""Truncate UUID to first segment to reduce payload size."""
if not uuid_str:
return uuid_str
return uuid_str.split("-")[0] if "-" in uuid_str else uuid_str[:8]
async def generate_activity_status_for_execution(
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_stats: GraphExecutionStats,
db_client: "DatabaseManagerAsyncClient",
user_id: str,
execution_status: ExecutionStatus | None = None,
) -> str | None:
"""
Generate an AI-based activity status summary for a graph execution.
This function handles all the data collection and AI generation logic,
keeping the manager integration simple.
Args:
graph_exec_id: The graph execution ID
graph_id: The graph ID
graph_version: The graph version
execution_stats: Execution statistics
db_client: Database client for fetching data
user_id: User ID for LaunchDarkly feature flag evaluation
execution_status: The overall execution status (COMPLETED, FAILED, TERMINATED)
Returns:
AI-generated activity status string, or None if feature is disabled
"""
# Check LaunchDarkly feature flag for AI activity status generation with full context support
if not await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
logger.debug("AI activity status generation is disabled via LaunchDarkly")
return None
# Check if we have OpenAI API key
try:
settings = Settings()
if not settings.secrets.openai_api_key:
logger.debug(
"OpenAI API key not configured, skipping activity status generation"
)
return None
# Get all node executions for this graph execution
node_executions = await db_client.get_node_executions(
graph_exec_id, include_exec_data=True
)
# Get graph metadata and full graph structure for name, description, and links
graph_metadata = await db_client.get_graph_metadata(graph_id, graph_version)
graph = await db_client.get_graph(graph_id, graph_version)
graph_name = graph_metadata.name if graph_metadata else f"Graph {graph_id}"
graph_description = graph_metadata.description if graph_metadata else ""
graph_links = graph.links if graph else []
# Build execution data summary
execution_data = _build_execution_summary(
node_executions,
execution_stats,
graph_name,
graph_description,
graph_links,
execution_status,
)
# Prepare prompt for AI
prompt = [
{
"role": "system",
"content": (
"You are an AI assistant summarizing what you just did for a user in simple, friendly language. "
"Write from the user's perspective about what they accomplished, NOT about technical execution details. "
"Focus on the ACTUAL TASK the user wanted done, not the internal workflow steps. "
"Avoid technical terms like 'workflow', 'execution', 'components', 'nodes', 'processing', etc. "
"Keep it to 3 sentences maximum. Be conversational and human-friendly.\n\n"
"IMPORTANT: Be HONEST about what actually happened:\n"
"- If the input was invalid/nonsensical, say so directly\n"
"- If the task failed, explain what went wrong in simple terms\n"
"- If errors occurred, focus on what the user needs to know\n"
"- Only claim success if the task was genuinely completed\n"
"- Don't sugar-coat failures or present them as helpful feedback\n\n"
"Understanding Errors:\n"
"- Node errors: Individual steps may fail but the overall task might still complete (e.g., one data source fails but others work)\n"
"- Graph error (in overall_status.graph_error): This means the entire execution failed and nothing was accomplished\n"
"- Even if execution shows 'completed', check if critical nodes failed that would prevent the desired outcome\n"
"- Focus on the end result the user wanted, not whether technical steps completed"
),
},
{
"role": "user",
"content": (
f"A user ran '{graph_name}' to accomplish something. Based on this execution data, "
f"write what they achieved in simple, user-friendly terms:\n\n"
f"{json.dumps(execution_data, indent=2)}\n\n"
"CRITICAL: Check overall_status.graph_error FIRST - if present, the entire execution failed.\n"
"Then check individual node errors to understand partial failures.\n\n"
"Write 1-3 sentences about what the user accomplished, such as:\n"
"- 'I analyzed your resume and provided detailed feedback for the IT industry.'\n"
"- 'I couldn't analyze your resume because the input was just nonsensical text.'\n"
"- 'I failed to complete the task due to missing API access.'\n"
"- 'I extracted key information from your documents and organized it into a summary.'\n"
"- 'The task failed to run due to system configuration issues.'\n\n"
"Focus on what ACTUALLY happened, not what was attempted."
),
},
]
# Log the prompt for debugging purposes
logger.debug(
f"Sending prompt to LLM for graph execution {graph_exec_id}: {json.dumps(prompt, indent=2)}"
)
# Create credentials for LLM call
credentials = APIKeyCredentials(
id="openai",
provider="openai",
api_key=SecretStr(settings.secrets.openai_api_key),
title="System OpenAI",
)
# Make LLM call using current event loop
activity_status = await _call_llm_direct(credentials, prompt)
logger.debug(
f"Generated activity status for {graph_exec_id}: {activity_status}"
)
return activity_status
except Exception as e:
logger.error(
f"Failed to generate activity status for execution {graph_exec_id}: {str(e)}"
)
return None
def _build_execution_summary(
node_executions: list[NodeExecutionResult],
execution_stats: GraphExecutionStats,
graph_name: str,
graph_description: str,
graph_links: list[Any],
execution_status: ExecutionStatus | None = None,
) -> dict[str, Any]:
"""Build a structured summary of execution data for AI analysis."""
nodes: list[NodeInfo] = []
node_execution_counts: dict[str, int] = {}
node_error_counts: dict[str, int] = {}
node_errors: dict[str, list[ErrorInfo]] = {}
node_outputs: dict[str, list[InputOutputInfo]] = {}
node_inputs: dict[str, list[InputOutputInfo]] = {}
input_output_data: dict[str, Any] = {}
node_map: dict[str, NodeInfo] = {}
# Process node executions
for node_exec in node_executions:
block = get_block(node_exec.block_id)
if not block:
logger.warning(
f"Block {node_exec.block_id} not found for node {node_exec.node_id}"
)
continue
# Track execution counts per node
if node_exec.node_id not in node_execution_counts:
node_execution_counts[node_exec.node_id] = 0
node_execution_counts[node_exec.node_id] += 1
# Track errors per node and group them
if node_exec.status == ExecutionStatus.FAILED:
if node_exec.node_id not in node_error_counts:
node_error_counts[node_exec.node_id] = 0
node_error_counts[node_exec.node_id] += 1
# Extract actual error message from output_data
error_message = "Unknown error"
if node_exec.output_data and isinstance(node_exec.output_data, dict):
# Check if error is in output_data
if "error" in node_exec.output_data:
error_output = node_exec.output_data["error"]
if isinstance(error_output, list) and error_output:
error_message = str(error_output[0])
else:
error_message = str(error_output)
# Group errors by node_id
if node_exec.node_id not in node_errors:
node_errors[node_exec.node_id] = []
node_errors[node_exec.node_id].append(
{
"error": error_message,
"execution_id": _truncate_uuid(node_exec.node_exec_id),
"timestamp": node_exec.add_time.isoformat(),
}
)
# Collect output samples for each node (latest executions)
if node_exec.output_data:
if node_exec.node_id not in node_outputs:
node_outputs[node_exec.node_id] = []
# Truncate output data to 100 chars to save space
truncated_output = truncate(node_exec.output_data, 100)
node_outputs[node_exec.node_id].append(
{
"execution_id": _truncate_uuid(node_exec.node_exec_id),
"output_data": truncated_output,
"timestamp": node_exec.add_time.isoformat(),
}
)
# Collect input samples for each node (latest executions)
if node_exec.input_data:
if node_exec.node_id not in node_inputs:
node_inputs[node_exec.node_id] = []
# Truncate input data to 100 chars to save space
truncated_input = truncate(node_exec.input_data, 100)
node_inputs[node_exec.node_id].append(
{
"execution_id": _truncate_uuid(node_exec.node_exec_id),
"output_data": truncated_input, # Reuse field name for consistency
"timestamp": node_exec.add_time.isoformat(),
}
)
# Build node data (only add unique nodes)
if node_exec.node_id not in node_map:
node_data: NodeInfo = {
"node_id": _truncate_uuid(node_exec.node_id),
"block_id": _truncate_uuid(node_exec.block_id),
"block_name": block.name,
"block_description": block.description or "",
"execution_count": 0, # Will be set later
"error_count": 0, # Will be set later
"recent_errors": [], # Will be set later
"recent_outputs": [], # Will be set later
"recent_inputs": [], # Will be set later
}
nodes.append(node_data)
node_map[node_exec.node_id] = node_data
# Store input/output data for special blocks (input/output blocks)
if block.name in ["AgentInputBlock", "AgentOutputBlock", "UserInputBlock"]:
if node_exec.input_data:
input_output_data[f"{node_exec.node_id}_inputs"] = dict(
node_exec.input_data
)
if node_exec.output_data:
input_output_data[f"{node_exec.node_id}_outputs"] = dict(
node_exec.output_data
)
# Add execution and error counts to node data, plus limited errors and output samples
for node in nodes:
# Use original node_id for lookups (before truncation)
original_node_id = None
for orig_id, node_data in node_map.items():
if node_data == node:
original_node_id = orig_id
break
if original_node_id:
node["execution_count"] = node_execution_counts.get(original_node_id, 0)
node["error_count"] = node_error_counts.get(original_node_id, 0)
# Add limited errors for this node (all if 10 or fewer, else first 5 + last 5)
if original_node_id in node_errors:
node_error_list = node_errors[original_node_id]
if len(node_error_list) <= 10:
node["recent_errors"] = node_error_list
else:
# First 5 + last 5 if more than 10 errors
node["recent_errors"] = node_error_list[:5] + node_error_list[-5:]
# Add latest output samples (latest 3)
if original_node_id in node_outputs:
node_output_list = node_outputs[original_node_id]
# Sort newest-first by timestamp when available, then keep the top 3
if node_output_list and node_output_list[0].get("timestamp"):
node_output_list.sort(
key=lambda x: x.get("timestamp", ""), reverse=True
)
node["recent_outputs"] = node_output_list[:3]
# Add latest input samples (latest 3)
if original_node_id in node_inputs:
node_input_list = node_inputs[original_node_id]
# Sort newest-first by timestamp when available, then keep the top 3
if node_input_list and node_input_list[0].get("timestamp"):
node_input_list.sort(
key=lambda x: x.get("timestamp", ""), reverse=True
)
node["recent_inputs"] = node_input_list[:3]
# Build node relations from graph links
node_relations: list[NodeRelation] = []
for link in graph_links:
# Include link details with source and sink information (truncated UUIDs)
relation: NodeRelation = {
"source_node_id": _truncate_uuid(link.source_id),
"sink_node_id": _truncate_uuid(link.sink_id),
"source_name": link.source_name,
"sink_name": link.sink_name,
"is_static": link.is_static if hasattr(link, "is_static") else False,
}
# Add block names if nodes exist in our map
if link.source_id in node_map:
relation["source_block_name"] = node_map[link.source_id]["block_name"]
if link.sink_id in node_map:
relation["sink_block_name"] = node_map[link.sink_id]["block_name"]
node_relations.append(relation)
# Build overall summary
return {
"graph_info": {"name": graph_name, "description": graph_description},
"nodes": nodes,
"node_relations": node_relations,
"input_output_data": input_output_data,
"overall_status": {
"total_nodes_in_graph": len(nodes),
"total_executions": execution_stats.node_count,
"total_errors": execution_stats.node_error_count,
"execution_time_seconds": execution_stats.walltime,
"has_errors": bool(
execution_stats.error or execution_stats.node_error_count > 0
),
"graph_error": (
str(execution_stats.error) if execution_stats.error else None
),
"graph_execution_status": (
execution_status.value if execution_status else None
),
},
}
@func_retry
async def _call_llm_direct(
credentials: APIKeyCredentials, prompt: list[dict[str, str]]
) -> str:
"""Make direct LLM call."""
response = await llm_call(
credentials=credentials,
llm_model=LlmModel.GPT4O_MINI,
prompt=prompt,
json_format=False,
max_tokens=150,
compress_prompt_to_fit=True,
)
if response and response.response:
return response.response.strip()
else:
return "Unable to generate activity summary"

View File

@@ -0,0 +1,702 @@
"""
Tests for activity status generator functionality.
"""
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from backend.blocks.llm import LLMResponse
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.data.model import GraphExecutionStats
from backend.executor.activity_status_generator import (
_build_execution_summary,
_call_llm_direct,
generate_activity_status_for_execution,
)
@pytest.fixture
def mock_node_executions():
"""Create mock node executions for testing."""
return [
NodeExecutionResult(
user_id="test_user",
graph_id="test_graph",
graph_version=1,
graph_exec_id="test_exec",
node_exec_id="123e4567-e89b-12d3-a456-426614174001",
node_id="456e7890-e89b-12d3-a456-426614174002",
block_id="789e1234-e89b-12d3-a456-426614174003",
status=ExecutionStatus.COMPLETED,
input_data={"user_input": "Hello, world!"},
output_data={"processed_input": ["Hello, world!"]},
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
),
NodeExecutionResult(
user_id="test_user",
graph_id="test_graph",
graph_version=1,
graph_exec_id="test_exec",
node_exec_id="234e5678-e89b-12d3-a456-426614174004",
node_id="567e8901-e89b-12d3-a456-426614174005",
block_id="890e2345-e89b-12d3-a456-426614174006",
status=ExecutionStatus.COMPLETED,
input_data={"data": "Hello, world!"},
output_data={"result": ["Processed data"]},
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
),
NodeExecutionResult(
user_id="test_user",
graph_id="test_graph",
graph_version=1,
graph_exec_id="test_exec",
node_exec_id="345e6789-e89b-12d3-a456-426614174007",
node_id="678e9012-e89b-12d3-a456-426614174008",
block_id="901e3456-e89b-12d3-a456-426614174009",
status=ExecutionStatus.FAILED,
input_data={"final_data": "Processed data"},
output_data={
"error": ["Connection timeout: Unable to reach external service"]
},
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
),
]
@pytest.fixture
def mock_execution_stats():
"""Create mock execution stats for testing."""
return GraphExecutionStats(
walltime=2.5,
cputime=1.8,
nodes_walltime=2.0,
nodes_cputime=1.5,
node_count=3,
node_error_count=1,
cost=10,
error=None,
)
@pytest.fixture
def mock_execution_stats_with_graph_error():
"""Create mock execution stats with graph-level error."""
return GraphExecutionStats(
walltime=2.5,
cputime=1.8,
nodes_walltime=2.0,
nodes_cputime=1.5,
node_count=3,
node_error_count=1,
cost=10,
error="Graph execution failed: Invalid API credentials",
)
@pytest.fixture
def mock_blocks():
"""Create mock blocks for testing."""
input_block = MagicMock()
input_block.name = "AgentInputBlock"
input_block.description = "Handles user input"
process_block = MagicMock()
process_block.name = "ProcessingBlock"
process_block.description = "Processes data"
output_block = MagicMock()
output_block.name = "AgentOutputBlock"
output_block.description = "Provides output to user"
return {
"789e1234-e89b-12d3-a456-426614174003": input_block,
"890e2345-e89b-12d3-a456-426614174006": process_block,
"901e3456-e89b-12d3-a456-426614174009": output_block,
"process_block_id": process_block, # Keep old key for different error format test
}
class TestBuildExecutionSummary:
"""Tests for _build_execution_summary function."""
def test_build_summary_with_successful_execution(
self, mock_node_executions, mock_execution_stats, mock_blocks
):
"""Test building summary for successful execution."""
# Create mock links with realistic UUIDs
mock_links = [
MagicMock(
source_id="456e7890-e89b-12d3-a456-426614174002",
sink_id="567e8901-e89b-12d3-a456-426614174005",
source_name="output",
sink_name="input",
is_static=False,
)
]
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block:
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
summary = _build_execution_summary(
mock_node_executions[:2],
mock_execution_stats,
"Test Graph",
"A test graph for processing",
mock_links,
ExecutionStatus.COMPLETED,
)
# Check graph info
assert summary["graph_info"]["name"] == "Test Graph"
assert summary["graph_info"]["description"] == "A test graph for processing"
# Check nodes with per-node counts
assert len(summary["nodes"]) == 2
assert summary["nodes"][0]["block_name"] == "AgentInputBlock"
assert summary["nodes"][0]["execution_count"] == 1
assert summary["nodes"][0]["error_count"] == 0
assert summary["nodes"][1]["block_name"] == "ProcessingBlock"
assert summary["nodes"][1]["execution_count"] == 1
assert summary["nodes"][1]["error_count"] == 0
# Check node relations (UUIDs are truncated to first segment)
assert len(summary["node_relations"]) == 1
assert (
summary["node_relations"][0]["source_node_id"] == "456e7890"
) # Truncated
assert (
summary["node_relations"][0]["sink_node_id"] == "567e8901"
) # Truncated
assert (
summary["node_relations"][0]["source_block_name"] == "AgentInputBlock"
)
assert summary["node_relations"][0]["sink_block_name"] == "ProcessingBlock"
# Check overall status
assert summary["overall_status"]["total_nodes_in_graph"] == 2
assert summary["overall_status"]["total_executions"] == 3
assert summary["overall_status"]["total_errors"] == 1
assert summary["overall_status"]["execution_time_seconds"] == 2.5
assert summary["overall_status"]["graph_execution_status"] == "COMPLETED"
# Check input/output data (using actual node UUIDs)
assert (
"456e7890-e89b-12d3-a456-426614174002_inputs"
in summary["input_output_data"]
)
assert (
"456e7890-e89b-12d3-a456-426614174002_outputs"
in summary["input_output_data"]
)
def test_build_summary_with_failed_execution(
self, mock_node_executions, mock_execution_stats, mock_blocks
):
"""Test building summary for execution with failures."""
mock_links = [] # No links for this test
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block:
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
summary = _build_execution_summary(
mock_node_executions,
mock_execution_stats,
"Failed Graph",
"Test with failures",
mock_links,
ExecutionStatus.FAILED,
)
# Check that errors are now in node's recent_errors field
# Find the output node (with truncated UUID)
output_node = next(
n for n in summary["nodes"] if n["node_id"] == "678e9012" # Truncated
)
assert output_node["error_count"] == 1
assert output_node["execution_count"] == 1
# Check recent_errors field
assert "recent_errors" in output_node
assert len(output_node["recent_errors"]) == 1
assert (
output_node["recent_errors"][0]["error"]
== "Connection timeout: Unable to reach external service"
)
assert (
"execution_id" in output_node["recent_errors"][0]
) # Should include execution ID
def test_build_summary_with_missing_blocks(
self, mock_node_executions, mock_execution_stats
):
"""Test building summary when blocks are missing."""
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block:
mock_get_block.return_value = None
summary = _build_execution_summary(
mock_node_executions,
mock_execution_stats,
"Missing Blocks Graph",
"Test with missing blocks",
[],
ExecutionStatus.COMPLETED,
)
# Should handle missing blocks gracefully
assert len(summary["nodes"]) == 0
# No top-level errors field anymore, errors are in nodes' recent_errors
assert summary["graph_info"]["name"] == "Missing Blocks Graph"
def test_build_summary_with_graph_error(
self, mock_node_executions, mock_execution_stats_with_graph_error, mock_blocks
):
"""Test building summary with graph-level error."""
mock_links = []
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block:
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
summary = _build_execution_summary(
mock_node_executions,
mock_execution_stats_with_graph_error,
"Graph with Error",
"Test with graph error",
mock_links,
ExecutionStatus.FAILED,
)
# Check that graph error is included in overall status
assert summary["overall_status"]["has_errors"] is True
assert (
summary["overall_status"]["graph_error"]
== "Graph execution failed: Invalid API credentials"
)
assert summary["overall_status"]["total_errors"] == 1
assert summary["overall_status"]["graph_execution_status"] == "FAILED"
def test_build_summary_with_different_error_formats(
self, mock_execution_stats, mock_blocks
):
"""Test building summary with different error formats."""
# Create node executions with different error formats and realistic UUIDs
mock_executions = [
NodeExecutionResult(
user_id="test_user",
graph_id="test_graph",
graph_version=1,
graph_exec_id="test_exec",
node_exec_id="111e2222-e89b-12d3-a456-426614174010",
node_id="333e4444-e89b-12d3-a456-426614174011",
block_id="process_block_id",
status=ExecutionStatus.FAILED,
input_data={},
output_data={"error": ["Simple string error message"]},
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
),
NodeExecutionResult(
user_id="test_user",
graph_id="test_graph",
graph_version=1,
graph_exec_id="test_exec",
node_exec_id="555e6666-e89b-12d3-a456-426614174012",
node_id="777e8888-e89b-12d3-a456-426614174013",
block_id="process_block_id",
status=ExecutionStatus.FAILED,
input_data={},
output_data={}, # No error in output
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
),
]
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block:
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
summary = _build_execution_summary(
mock_executions,
mock_execution_stats,
"Error Test Graph",
"Testing error formats",
[],
ExecutionStatus.FAILED,
)
# Check different error formats - errors are now in nodes' recent_errors
error_nodes = [n for n in summary["nodes"] if n.get("recent_errors")]
assert len(error_nodes) == 2
# String error format - find node with truncated ID
string_error_node = next(
n for n in summary["nodes"] if n["node_id"] == "333e4444" # Truncated
)
assert len(string_error_node["recent_errors"]) == 1
assert (
string_error_node["recent_errors"][0]["error"]
== "Simple string error message"
)
# No error output format - find node with truncated ID
no_error_node = next(
n for n in summary["nodes"] if n["node_id"] == "777e8888" # Truncated
)
assert len(no_error_node["recent_errors"]) == 1
assert no_error_node["recent_errors"][0]["error"] == "Unknown error"
class TestLLMCall:
"""Tests for LLM calling functionality."""
@pytest.mark.asyncio
async def test_call_llm_direct_success(self):
"""Test successful LLM call."""
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials
mock_response = LLMResponse(
raw_response={},
prompt=[],
response="Agent successfully processed user input and generated response.",
tool_calls=None,
prompt_tokens=50,
completion_tokens=20,
)
with patch(
"backend.executor.activity_status_generator.llm_call"
) as mock_llm_call:
mock_llm_call.return_value = mock_response
credentials = APIKeyCredentials(
id="test",
provider="openai",
api_key=SecretStr("test_key"),
title="Test",
)
prompt = [{"role": "user", "content": "Test prompt"}]
result = await _call_llm_direct(credentials, prompt)
assert (
result
== "Agent successfully processed user input and generated response."
)
mock_llm_call.assert_called_once()
@pytest.mark.asyncio
async def test_call_llm_direct_no_response(self):
"""Test LLM call with no response."""
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials
with patch(
"backend.executor.activity_status_generator.llm_call"
) as mock_llm_call:
mock_llm_call.return_value = None
credentials = APIKeyCredentials(
id="test",
provider="openai",
api_key=SecretStr("test_key"),
title="Test",
)
prompt = [{"role": "user", "content": "Test prompt"}]
result = await _call_llm_direct(credentials, prompt)
assert result == "Unable to generate activity summary"
class TestGenerateActivityStatusForExecution:
"""Tests for the main generate_activity_status_for_execution function."""
@pytest.mark.asyncio
async def test_generate_status_success(
self, mock_node_executions, mock_execution_stats, mock_blocks
):
"""Test successful activity status generation."""
mock_db_client = AsyncMock()
mock_db_client.get_node_executions.return_value = mock_node_executions
mock_graph_metadata = MagicMock()
mock_graph_metadata.name = "Test Agent"
mock_graph_metadata.description = "A test agent"
mock_db_client.get_graph_metadata.return_value = mock_graph_metadata
mock_graph = MagicMock()
mock_graph.links = []
mock_db_client.get_graph.return_value = mock_graph
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block, patch(
"backend.executor.activity_status_generator.Settings"
) as mock_settings, patch(
"backend.executor.activity_status_generator._call_llm_direct"
) as mock_llm, patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_llm.return_value = (
"I analyzed your data and provided the requested insights."
)
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result == "I analyzed your data and provided the requested insights."
mock_db_client.get_node_executions.assert_called_once()
mock_db_client.get_graph_metadata.assert_called_once()
mock_db_client.get_graph.assert_called_once()
mock_llm.assert_called_once()
@pytest.mark.asyncio
async def test_generate_status_feature_disabled(self, mock_execution_stats):
"""Test activity status generation when feature is disabled."""
mock_db_client = AsyncMock()
with patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=False,
):
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result is None
mock_db_client.get_node_executions.assert_not_called()
@pytest.mark.asyncio
async def test_generate_status_no_api_key(self, mock_execution_stats):
"""Test activity status generation with no API key."""
mock_db_client = AsyncMock()
with patch(
"backend.executor.activity_status_generator.Settings"
) as mock_settings, patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_settings.return_value.secrets.openai_api_key = ""
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result is None
mock_db_client.get_node_executions.assert_not_called()
@pytest.mark.asyncio
async def test_generate_status_exception_handling(self, mock_execution_stats):
"""Test activity status generation with exception."""
mock_db_client = AsyncMock()
mock_db_client.get_node_executions.side_effect = Exception("Database error")
with patch(
"backend.executor.activity_status_generator.Settings"
) as mock_settings, patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_settings.return_value.secrets.openai_api_key = "test_key"
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result is None
@pytest.mark.asyncio
async def test_generate_status_with_graph_name_fallback(
self, mock_node_executions, mock_execution_stats, mock_blocks
):
"""Test activity status generation with graph name fallback."""
mock_db_client = AsyncMock()
mock_db_client.get_node_executions.return_value = mock_node_executions
mock_db_client.get_graph_metadata.return_value = None # No metadata
mock_db_client.get_graph.return_value = None # No graph
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block, patch(
"backend.executor.activity_status_generator.Settings"
) as mock_settings, patch(
"backend.executor.activity_status_generator._call_llm_direct"
) as mock_llm, patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_llm.return_value = "Agent completed execution."
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result == "Agent completed execution."
# Should use fallback graph name in prompt
call_args = mock_llm.call_args[0][1] # prompt argument
assert "Graph test_graph" in call_args[1]["content"]
class TestIntegration:
"""Integration tests to verify the complete flow."""
@pytest.mark.asyncio
async def test_full_integration_flow(
self, mock_node_executions, mock_execution_stats, mock_blocks
):
"""Test the complete integration flow."""
mock_db_client = AsyncMock()
mock_db_client.get_node_executions.return_value = mock_node_executions
mock_graph_metadata = MagicMock()
mock_graph_metadata.name = "Test Integration Agent"
mock_graph_metadata.description = "Integration test agent"
mock_db_client.get_graph_metadata.return_value = mock_graph_metadata
mock_graph = MagicMock()
mock_graph.links = []
mock_db_client.get_graph.return_value = mock_graph
expected_activity = "I processed user input but failed during final output generation due to system error."
with patch(
"backend.executor.activity_status_generator.get_block"
) as mock_get_block, patch(
"backend.executor.activity_status_generator.Settings"
) as mock_settings, patch(
"backend.executor.activity_status_generator.llm_call"
) as mock_llm_call, patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_response = LLMResponse(
raw_response={},
prompt=[],
response=expected_activity,
tool_calls=None,
prompt_tokens=100,
completion_tokens=30,
)
mock_llm_call.return_value = mock_response
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
assert result == expected_activity
# Verify the correct data was passed to LLM
llm_call_args = mock_llm_call.call_args
prompt = llm_call_args[1]["prompt"]
# Check system prompt
assert prompt[0]["role"] == "system"
assert "user's perspective" in prompt[0]["content"]
# Check user prompt contains expected data
user_content = prompt[1]["content"]
assert "Test Integration Agent" in user_content
assert "user-friendly terms" in user_content.lower()
# Verify that execution data is present in the prompt
assert "{" in user_content # Should contain JSON data
assert "overall_status" in user_content
@pytest.mark.asyncio
async def test_manager_integration_with_disabled_feature(
self, mock_execution_stats
):
"""Test that when feature returns None, manager doesn't set activity_status."""
mock_db_client = AsyncMock()
with patch(
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=False,
):
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
graph_id="test_graph",
graph_version=1,
execution_stats=mock_execution_stats,
db_client=mock_db_client,
user_id="test_user",
)
# Should return None when disabled
assert result is None
# Verify no database calls were made
mock_db_client.get_node_executions.assert_not_called()
mock_db_client.get_graph_metadata.assert_not_called()
mock_db_client.get_graph.assert_not_called()

View File

@@ -7,7 +7,6 @@ from backend.data.execution import (
create_graph_execution,
get_block_error_stats,
get_execution_kv_data,
get_graph_execution,
get_graph_execution_meta,
get_graph_executions,
get_latest_node_execution,
@@ -16,12 +15,12 @@ from backend.data.execution import (
set_execution_kv_data,
update_graph_execution_start_time,
update_graph_execution_stats,
update_node_execution_stats,
update_node_execution_status,
update_node_execution_status_batch,
upsert_execution_input,
upsert_execution_output,
)
from backend.data.generate_data import get_user_execution_summary_data
from backend.data.graph import (
get_connected_output_nodes,
get_graph,
@@ -40,12 +39,16 @@ from backend.data.user import (
get_user_email_by_id,
get_user_email_verification,
get_user_integrations,
get_user_metadata,
get_user_notification_preference,
update_user_integrations,
update_user_metadata,
)
from backend.util.service import AppService, AppServiceClient, endpoint_to_sync, expose
from backend.util.service import (
AppService,
AppServiceClient,
UnhealthyServiceError,
endpoint_to_sync,
expose,
)
from backend.util.settings import Config
config = Config()
@@ -77,6 +80,11 @@ class DatabaseManager(AppService):
logger.info(f"[{self.service_name}] ⏳ Disconnecting Database...")
self.run_and_wait(db.disconnect())
async def health_check(self) -> str:
if not db.is_connected():
raise UnhealthyServiceError("Database is not connected")
return await super().health_check()
@classmethod
def get_port(cls) -> int:
return config.database_api_port
@@ -90,7 +98,6 @@ class DatabaseManager(AppService):
return cast(Callable[Concatenate[object, P], R], expose(f))
# Executions
get_graph_execution = _(get_graph_execution)
get_graph_executions = _(get_graph_executions)
get_graph_execution_meta = _(get_graph_execution_meta)
create_graph_execution = _(create_graph_execution)
@@ -101,7 +108,6 @@ class DatabaseManager(AppService):
update_node_execution_status_batch = _(update_node_execution_status_batch)
update_graph_execution_start_time = _(update_graph_execution_start_time)
update_graph_execution_stats = _(update_graph_execution_stats)
update_node_execution_stats = _(update_node_execution_stats)
upsert_execution_input = _(upsert_execution_input)
upsert_execution_output = _(upsert_execution_output)
get_execution_kv_data = _(get_execution_kv_data)
@@ -119,8 +125,6 @@ class DatabaseManager(AppService):
get_credits = _(_get_credits, name="get_credits")
# User + User Metadata + User Integrations
get_user_metadata = _(get_user_metadata)
update_user_metadata = _(update_user_metadata)
get_user_integrations = _(get_user_integrations)
update_user_integrations = _(update_user_integrations)
@@ -141,6 +145,9 @@ class DatabaseManager(AppService):
get_user_notification_oldest_message_in_batch
)
# Summary data - async
get_user_execution_summary_data = _(get_user_execution_summary_data)
class DatabaseManagerClient(AppServiceClient):
d = DatabaseManager
@@ -151,55 +158,23 @@ class DatabaseManagerClient(AppServiceClient):
return DatabaseManager
# Executions
get_graph_execution = _(d.get_graph_execution)
get_graph_executions = _(d.get_graph_executions)
get_graph_execution_meta = _(d.get_graph_execution_meta)
create_graph_execution = _(d.create_graph_execution)
get_node_execution = _(d.get_node_execution)
get_node_executions = _(d.get_node_executions)
get_latest_node_execution = _(d.get_latest_node_execution)
update_node_execution_status = _(d.update_node_execution_status)
update_node_execution_status_batch = _(d.update_node_execution_status_batch)
update_graph_execution_start_time = _(d.update_graph_execution_start_time)
update_graph_execution_stats = _(d.update_graph_execution_stats)
update_node_execution_stats = _(d.update_node_execution_stats)
upsert_execution_input = _(d.upsert_execution_input)
upsert_execution_output = _(d.upsert_execution_output)
get_execution_kv_data = _(d.get_execution_kv_data)
set_execution_kv_data = _(d.set_execution_kv_data)
# Graphs
get_node = _(d.get_node)
get_graph = _(d.get_graph)
get_connected_output_nodes = _(d.get_connected_output_nodes)
get_graph_metadata = _(d.get_graph_metadata)
# Credits
spend_credits = _(d.spend_credits)
get_credits = _(d.get_credits)
# User + User Metadata + User Integrations
get_user_metadata = _(d.get_user_metadata)
update_user_metadata = _(d.update_user_metadata)
get_user_integrations = _(d.get_user_integrations)
update_user_integrations = _(d.update_user_integrations)
# User Comms - async
get_active_user_ids_in_timerange = _(d.get_active_user_ids_in_timerange)
get_user_email_by_id = _(d.get_user_email_by_id)
get_user_email_verification = _(d.get_user_email_verification)
get_user_notification_preference = _(d.get_user_notification_preference)
# Notifications - async
create_or_add_to_user_notification_batch = _(
d.create_or_add_to_user_notification_batch
)
empty_user_notification_batch = _(d.empty_user_notification_batch)
get_all_batches_by_type = _(d.get_all_batches_by_type)
get_user_notification_batch = _(d.get_user_notification_batch)
get_user_notification_oldest_message_in_batch = _(
d.get_user_notification_oldest_message_in_batch
)
# Summary data - async
get_user_execution_summary_data = _(d.get_user_execution_summary_data)
# Block error monitoring
get_block_error_stats = _(d.get_block_error_stats)
@@ -225,10 +200,28 @@ class DatabaseManagerAsyncClient(AppServiceClient):
upsert_execution_input = d.upsert_execution_input
upsert_execution_output = d.upsert_execution_output
update_graph_execution_stats = d.update_graph_execution_stats
update_node_execution_stats = d.update_node_execution_stats
update_node_execution_status = d.update_node_execution_status
update_node_execution_status_batch = d.update_node_execution_status_batch
update_user_integrations = d.update_user_integrations
get_execution_kv_data = d.get_execution_kv_data
set_execution_kv_data = d.set_execution_kv_data
get_block_error_stats = d.get_block_error_stats
# User Comms
get_active_user_ids_in_timerange = d.get_active_user_ids_in_timerange
get_user_email_by_id = d.get_user_email_by_id
get_user_email_verification = d.get_user_email_verification
get_user_notification_preference = d.get_user_notification_preference
# Notifications
create_or_add_to_user_notification_batch = (
d.create_or_add_to_user_notification_batch
)
empty_user_notification_batch = d.empty_user_notification_batch
get_all_batches_by_type = d.get_all_batches_by_type
get_user_notification_batch = d.get_user_notification_batch
get_user_notification_oldest_message_in_batch = (
d.get_user_notification_oldest_message_in_batch
)
# Summary data
get_user_execution_summary_data = d.get_user_execution_summary_data

File diff suppressed because it is too large.


@@ -3,7 +3,6 @@ import logging
import autogpt_libs.auth.models
import fastapi.responses
import pytest
from prisma.models import User
import backend.server.v2.library.model
import backend.server.v2.store.model
@@ -12,6 +11,7 @@ from backend.blocks.data_manipulation import FindInDictionaryBlock
from backend.blocks.io import AgentInputBlock
from backend.blocks.maths import CalculatorBlock, Operation
from backend.data import execution, graph
from backend.data.model import User
from backend.server.model import CreateGraph
from backend.server.rest_api import AgentServer
from backend.usecases.sample import create_test_graph, create_test_user


@@ -1,17 +1,22 @@
import asyncio
import logging
import os
import threading
from enum import Enum
from typing import Optional
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED
from apscheduler.events import (
EVENT_JOB_ERROR,
EVENT_JOB_EXECUTED,
EVENT_JOB_MAX_INSTANCES,
EVENT_JOB_MISSED,
)
from apscheduler.job import Job as JobObj
from apscheduler.jobstores.memory import MemoryJobStore
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from autogpt_libs.utils.cache import thread_cached
from dotenv import load_dotenv
from pydantic import BaseModel, Field, ValidationError
from sqlalchemy import MetaData, create_engine
@@ -30,7 +35,14 @@ from backend.monitoring import (
from backend.util.cloud_storage import cleanup_expired_files_async
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.util.logging import PrefixFilter
from backend.util.service import AppService, AppServiceClient, endpoint_to_async, expose
from backend.util.retry import func_retry
from backend.util.service import (
AppService,
AppServiceClient,
UnhealthyServiceError,
endpoint_to_async,
expose,
)
from backend.util.settings import Config
@@ -60,26 +72,69 @@ apscheduler_logger.addFilter(PrefixFilter("[Scheduler] [APScheduler]"))
config = Config()
# Timeout constants
SCHEDULER_OPERATION_TIMEOUT_SECONDS = 300 # 5 minutes for scheduler operations
def job_listener(event):
"""Logs job execution outcomes for better monitoring."""
if event.exception:
logger.error(f"Job {event.job_id} failed.")
logger.error(
f"Job {event.job_id} failed: {type(event.exception).__name__}: {event.exception}"
)
else:
logger.info(f"Job {event.job_id} completed successfully.")
@thread_cached
def job_missed_listener(event):
"""Logs when jobs are missed due to scheduling issues."""
logger.warning(
f"Job {event.job_id} was missed at scheduled time {event.scheduled_run_time}. "
f"This can happen if the scheduler is overloaded or if previous executions are still running."
)
def job_max_instances_listener(event):
"""Logs when jobs hit max instances limit."""
logger.warning(
f"Job {event.job_id} execution was SKIPPED - max instances limit reached. "
f"Previous execution(s) are still running. "
f"Consider increasing max_instances or check why previous executions are taking too long."
)
_event_loop: asyncio.AbstractEventLoop | None = None
_event_loop_thread: threading.Thread | None = None
@func_retry
def get_event_loop():
return asyncio.new_event_loop()
"""Get the shared event loop."""
if _event_loop is None:
raise RuntimeError("Event loop not initialized. Scheduler not started.")
return _event_loop
def run_async(coro, timeout: float = SCHEDULER_OPERATION_TIMEOUT_SECONDS):
"""Run a coroutine in the shared event loop and wait for completion."""
loop = get_event_loop()
future = asyncio.run_coroutine_threadsafe(coro, loop)
try:
return future.result(timeout=timeout)
except Exception as e:
logger.error(f"Async operation failed: {type(e).__name__}: {e}")
raise
def execute_graph(**kwargs):
get_event_loop().run_until_complete(_execute_graph(**kwargs))
"""Execute graph in the shared event loop and wait for completion."""
# Wait for completion to ensure job doesn't exit prematurely
run_async(_execute_graph(**kwargs))
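This shared-event-loop pattern (one long-lived loop in a daemon thread, with synchronous scheduler jobs submitting coroutines via asyncio.run_coroutine_threadsafe and blocking on the result) reduces to a small self-contained sketch; the names below are illustrative, not the module's actual helpers:

import asyncio
import threading

# One long-lived loop in a daemon thread (mirrors what run_service() sets up below).
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True, name="DemoEventLoop").start()

async def ping(msg: str) -> str:
    await asyncio.sleep(0.1)
    return f"pong: {msg}"

# Submit a coroutine from synchronous code (e.g. an APScheduler job) and block
# until it finishes; the timeout keeps a stuck job from hanging the worker forever.
future = asyncio.run_coroutine_threadsafe(ping("hello"), loop)
print(future.result(timeout=5))

# Orderly shutdown (mirrors cleanup()): stop the loop from its own thread.
loop.call_soon_threadsafe(loop.stop)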
async def _execute_graph(**kwargs):
args = GraphExecutionJobArgs(**kwargs)
start_time = asyncio.get_event_loop().time()
try:
logger.info(f"Executing recurring job for graph #{args.graph_id}")
graph_exec: GraphExecutionWithNodes = await execution_utils.add_graph_execution(
@@ -89,16 +144,28 @@ async def _execute_graph(**kwargs):
inputs=args.input_data,
graph_credentials_inputs=args.input_credentials,
)
elapsed = asyncio.get_event_loop().time() - start_time
logger.info(
f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id}"
f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id} "
f"(took {elapsed:.2f}s to create and publish)"
)
if elapsed > 10:
logger.warning(
f"Graph execution {graph_exec.id} took {elapsed:.2f}s to create/publish - "
f"this is unusually slow and may indicate resource contention"
)
except Exception as e:
logger.error(f"Error executing graph {args.graph_id}: {e}")
elapsed = asyncio.get_event_loop().time() - start_time
logger.error(
f"Error executing graph {args.graph_id} after {elapsed:.2f}s: "
f"{type(e).__name__}: {e}"
)
def cleanup_expired_files():
"""Clean up expired files from cloud storage."""
get_event_loop().run_until_complete(cleanup_expired_files_async())
# Wait for completion
run_async(cleanup_expired_files_async())
# Monitoring functions are now imported from the monitoring module
@@ -154,7 +221,7 @@ class NotificationJobInfo(NotificationJobArgs):
class Scheduler(AppService):
scheduler: BlockingScheduler
scheduler: BackgroundScheduler
def __init__(self, register_system_tasks: bool = True):
self.register_system_tasks = register_system_tasks
@@ -167,10 +234,50 @@ class Scheduler(AppService):
def db_pool_size(cls) -> int:
return config.scheduler_db_pool_size
async def health_check(self) -> str:
# Thread-safe health check with proper initialization handling
if not hasattr(self, "scheduler"):
raise UnhealthyServiceError("Scheduler is still initializing")
# Check if we're in the middle of cleanup
if self.cleaned_up:
return await super().health_check()
# Normal operation - check if scheduler is running
if not self.scheduler.running:
raise UnhealthyServiceError("Scheduler is not running")
return await super().health_check()
def run_service(self):
load_dotenv()
# Initialize the event loop for async jobs
global _event_loop
_event_loop = asyncio.new_event_loop()
# Use daemon thread since it should die with the main service
global _event_loop_thread
_event_loop_thread = threading.Thread(
target=_event_loop.run_forever, daemon=True, name="SchedulerEventLoop"
)
_event_loop_thread.start()
db_schema, db_url = _extract_schema_from_url(os.getenv("DIRECT_URL"))
self.scheduler = BlockingScheduler(
# Configure executors to limit concurrency without skipping jobs
from apscheduler.executors.pool import ThreadPoolExecutor
self.scheduler = BackgroundScheduler(
executors={
"default": ThreadPoolExecutor(
max_workers=self.db_pool_size()
), # Match DB pool size to prevent resource contention
},
job_defaults={
"coalesce": True, # Skip redundant missed jobs - just run the latest
"max_instances": 1000, # Effectively unlimited - never drop executions
"misfire_grace_time": None, # No time limit for missed jobs
},
jobstores={
Jobstores.EXECUTION.value: SQLAlchemyJobStore(
engine=create_engine(
@@ -200,9 +307,10 @@ class Scheduler(AppService):
if self.register_system_tasks:
# Notification PROCESS WEEKLY SUMMARY
# Runs every Monday at 9 AM UTC
self.scheduler.add_job(
process_weekly_summary,
CronTrigger.from_crontab("0 * * * *"),
CronTrigger.from_crontab("0 9 * * 1"),
id="process_weekly_summary",
kwargs={},
replace_existing=True,
@@ -250,13 +358,30 @@ class Scheduler(AppService):
)
self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
self.scheduler.add_listener(job_missed_listener, EVENT_JOB_MISSED)
self.scheduler.add_listener(job_max_instances_listener, EVENT_JOB_MAX_INSTANCES)
self.scheduler.start()
# Keep the service running since BackgroundScheduler doesn't block
super().run_service()
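A rough standalone sketch of how these APScheduler settings fit together, assuming APScheduler 3.x (the job, worker count, and cron expression are illustrative):

import time
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED, EVENT_JOB_MISSED
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

scheduler = BackgroundScheduler(
    executors={"default": ThreadPoolExecutor(max_workers=4)},
    job_defaults={
        "coalesce": True,  # collapse a backlog of missed runs into a single run
        "max_instances": 1000,  # don't skip a run because the previous one is still going
        "misfire_grace_time": None,  # run missed jobs no matter how late they are
    },
)

def tick():
    print("tick", time.strftime("%H:%M:%S"))

scheduler.add_listener(
    lambda event: print("job event:", event.job_id),
    EVENT_JOB_EXECUTED | EVENT_JOB_ERROR | EVENT_JOB_MISSED,
)
scheduler.add_job(tick, CronTrigger.from_crontab("* * * * *"), id="tick", replace_existing=True)
scheduler.start()  # BackgroundScheduler returns immediately...
time.sleep(120)  # ...so the process must keep itself alive (run_service() does that here).
scheduler.shutdown(wait=True)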
def cleanup(self):
super().cleanup()
logger.info("⏳ Shutting down scheduler...")
if self.scheduler:
self.scheduler.shutdown(wait=False)
logger.info("⏳ Shutting down scheduler...")
self.scheduler.shutdown(wait=True)
global _event_loop
if _event_loop:
logger.info("⏳ Closing event loop...")
_event_loop.call_soon_threadsafe(_event_loop.stop)
global _event_loop_thread
if _event_loop_thread:
logger.info("⏳ Waiting for event loop thread to finish...")
_event_loop_thread.join(timeout=SCHEDULER_OPERATION_TIMEOUT_SECONDS)
logger.info("Scheduler cleanup complete.")
@expose
def add_graph_execution_schedule(
@@ -269,6 +394,18 @@ class Scheduler(AppService):
input_credentials: dict[str, CredentialsMetaInput],
name: Optional[str] = None,
) -> GraphExecutionJobInfo:
# Validate the graph before scheduling to prevent runtime failures
# We don't need the return value, just want the validation to run
run_async(
execution_utils.validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
graph_inputs=input_data,
graph_version=graph_version,
graph_credentials_inputs=input_credentials,
)
)
job_args = GraphExecutionJobArgs(
user_id=user_id,
graph_id=graph_id,


@@ -1,10 +1,9 @@
import pytest
from backend.data import db
from backend.executor.scheduler import SchedulerClient
from backend.server.model import CreateGraph
from backend.usecases.sample import create_test_graph, create_test_user
from backend.util.service import get_service_client
from backend.util.clients import get_scheduler_client
from backend.util.test import SpinTestServer
@@ -17,7 +16,7 @@ async def test_agent_schedule(server: SpinTestServer):
user_id=test_user.id,
)
scheduler = get_service_client(SchedulerClient)
scheduler = get_scheduler_client()
schedules = await scheduler.get_execution_schedules(test_graph.id, test_user.id)
assert len(schedules) == 0


@@ -4,52 +4,36 @@ import threading
import time
from collections import defaultdict
from concurrent.futures import Future
from typing import TYPE_CHECKING, Any, Callable, Optional, cast
from typing import Any, Optional
from autogpt_libs.utils.cache import thread_cached
from pydantic import BaseModel, JsonValue
from pydantic import BaseModel, JsonValue, ValidationError
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.block import (
Block,
BlockData,
BlockInput,
BlockSchema,
BlockType,
get_block,
)
from backend.data.block import Block, BlockData, BlockInput, BlockType, get_block
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCostType
from backend.data.db import prisma
from backend.data.execution import (
AsyncRedisExecutionEventBus,
ExecutionStatus,
GraphExecutionStats,
GraphExecutionWithNodes,
RedisExecutionEventBus,
)
from backend.data.graph import GraphModel, Node
from backend.data.model import CredentialsMetaInput
from backend.data.rabbitmq import (
AsyncRabbitMQ,
Exchange,
ExchangeType,
Queue,
RabbitMQConfig,
SyncRabbitMQ,
from backend.data.rabbitmq import Exchange, ExchangeType, Queue, RabbitMQConfig
from backend.util.clients import (
get_async_execution_event_bus,
get_async_execution_queue,
get_database_manager_async_client,
get_integration_credentials_store,
)
from backend.util.exceptions import NotFoundError
from backend.util.exceptions import GraphValidationError, NotFoundError
from backend.util.logging import TruncatedLogger
from backend.util.mock import MockObject
from backend.util.service import get_service_client
from backend.util.settings import Config
from backend.util.type import convert
if TYPE_CHECKING:
from backend.executor import DatabaseManagerAsyncClient, DatabaseManagerClient
from backend.integrations.credentials_store import IntegrationCredentialsStore
config = Config()
logger = TruncatedLogger(logging.getLogger(__name__), prefix="[GraphExecutorUtil]")
@@ -86,51 +70,6 @@ class LogMetadata(TruncatedLogger):
)
@thread_cached
def get_execution_event_bus() -> RedisExecutionEventBus:
return RedisExecutionEventBus()
@thread_cached
def get_async_execution_event_bus() -> AsyncRedisExecutionEventBus:
return AsyncRedisExecutionEventBus()
@thread_cached
def get_execution_queue() -> SyncRabbitMQ:
client = SyncRabbitMQ(create_execution_queue_config())
client.connect()
return client
@thread_cached
async def get_async_execution_queue() -> AsyncRabbitMQ:
client = AsyncRabbitMQ(create_execution_queue_config())
await client.connect()
return client
@thread_cached
def get_integration_credentials_store() -> "IntegrationCredentialsStore":
from backend.integrations.credentials_store import IntegrationCredentialsStore
return IntegrationCredentialsStore()
@thread_cached
def get_db_client() -> "DatabaseManagerClient":
from backend.executor import DatabaseManagerClient
return get_service_client(DatabaseManagerClient)
@thread_cached
def get_db_async_client() -> "DatabaseManagerAsyncClient":
from backend.executor import DatabaseManagerAsyncClient
return get_service_client(DatabaseManagerAsyncClient)
# ============ Execution Cost Helpers ============ #
@@ -457,7 +396,7 @@ def validate_exec(
# Last validation: Validate the input values against the schema.
if error := schema.get_mismatch_error(data):
error_message = f"{error_prefix} {error}"
logger.error(error_message)
logger.warning(error_message)
return None, error_message
return data, node_block.name
@@ -467,47 +406,65 @@ async def _validate_node_input_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
):
"""Checks all credentials for all nodes of the graph"""
) -> dict[str, dict[str, str]]:
"""
Checks all credentials for all nodes of the graph and returns structured errors.
Returns:
dict[node_id, dict[field_name, error_message]]: Credential validation errors per node
"""
credential_errors: dict[str, dict[str, str]] = defaultdict(dict)
for node in graph.nodes:
block = node.block
# Find any fields of type CredentialsMetaInput
credentials_fields = cast(
type[BlockSchema], block.input_schema
).get_credentials_fields()
credentials_fields = block.input_schema.get_credentials_fields()
if not credentials_fields:
continue
for field_name, credentials_meta_type in credentials_fields.items():
if (
nodes_input_masks
and (node_input_mask := nodes_input_masks.get(node.id))
and field_name in node_input_mask
):
credentials_meta = credentials_meta_type.model_validate(
node_input_mask[field_name]
)
elif field_name in node.input_default:
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
else:
raise ValueError(
f"Credentials absent for {block.name} node #{node.id} "
f"input '{field_name}'"
)
try:
if (
nodes_input_masks
and (node_input_mask := nodes_input_masks.get(node.id))
and field_name in node_input_mask
):
credentials_meta = credentials_meta_type.model_validate(
node_input_mask[field_name]
)
elif field_name in node.input_default:
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
else:
# Missing credentials
credential_errors[node.id][
field_name
] = "These credentials are required"
continue
except ValidationError as e:
credential_errors[node.id][field_name] = f"Invalid credentials: {e}"
continue
# Fetch the corresponding Credentials and perform sanity checks
credentials = await get_integration_credentials_store().get_creds_by_id(
user_id, credentials_meta.id
)
if not credentials:
raise ValueError(
f"Unknown credentials #{credentials_meta.id} "
f"for node #{node.id} input '{field_name}'"
try:
# Fetch the corresponding Credentials and perform sanity checks
credentials = await get_integration_credentials_store().get_creds_by_id(
user_id, credentials_meta.id
)
except Exception as e:
# Handle any errors fetching credentials
credential_errors[node.id][
field_name
] = f"Credentials not available: {e}"
continue
if not credentials:
credential_errors[node.id][
field_name
] = f"Unknown credentials #{credentials_meta.id}"
continue
if (
credentials.provider != credentials_meta.provider
or credentials.type != credentials_meta.type
@@ -518,10 +475,12 @@ async def _validate_node_input_credentials(
f"{credentials_meta.type}<>{credentials.type};"
f"{credentials_meta.provider}<>{credentials.provider}"
)
raise ValueError(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch"
)
credential_errors[node.id][
field_name
] = "Invalid credentials: type/provider mismatch"
continue
return credential_errors
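The return value is a plain nested dict keyed by node ID and field name; an illustrative shape (the node IDs are made up, the messages are the ones produced above):

credential_errors = {
    "node-123": {"credentials": "These credentials are required"},
    "node-456": {"credentials": "Invalid credentials: type/provider mismatch"},
}

# Callers can turn this into user-facing messages without parsing exception strings:
for node_id, field_errors in credential_errors.items():
    for field_name, message in field_errors.items():
        print(f"node {node_id} / {field_name}: {message}")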
def make_node_credentials_input_map(
@@ -559,7 +518,37 @@ def make_node_credentials_input_map(
return result
async def construct_node_execution_input(
async def validate_graph_with_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph including credentials and return structured errors per node.
Returns:
dict[node_id, dict[field_name, error_message]]: Validation errors per node
"""
# Get input validation errors
node_input_errors = GraphModel.validate_graph_get_errors(
graph, for_run=True, nodes_input_masks=nodes_input_masks
)
# Get credential input/availability/validation errors
node_credential_input_errors = await _validate_node_input_credentials(
graph, user_id, nodes_input_masks
)
# Merge credential errors with structural errors
for node_id, field_errors in node_credential_input_errors.items():
if node_id not in node_input_errors:
node_input_errors[node_id] = {}
node_input_errors[node_id].update(field_errors)
return node_input_errors
async def _construct_starting_node_execution_input(
graph: GraphModel,
user_id: str,
graph_inputs: BlockInput,
@@ -581,8 +570,17 @@ async def construct_node_execution_input(
list[tuple[str, BlockInput]]: A list of tuples, each containing the node ID and
the corresponding input data for that node.
"""
graph.validate_graph(for_run=True, nodes_input_masks=nodes_input_masks)
await _validate_node_input_credentials(graph, user_id, nodes_input_masks)
# Use new validation function that includes credentials
validation_errors = await validate_graph_with_credentials(
graph, user_id, nodes_input_masks
)
n_error_nodes = len(validation_errors)
n_errors = sum(len(errors) for errors in validation_errors.values())
if validation_errors:
raise GraphValidationError(
f"Graph validation failed: {n_errors} issues on {n_error_nodes} nodes",
node_errors=validation_errors,
)
nodes_input = []
for node in graph.starting_nodes:
@@ -617,6 +615,67 @@ async def construct_node_execution_input(
return nodes_input
async def validate_and_construct_node_execution_input(
graph_id: str,
user_id: str,
graph_inputs: BlockInput,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], dict[str, dict[str, JsonValue]]]:
"""
Public wrapper that handles graph fetching, credential mapping, and validation+construction.
This centralizes the logic used by both scheduler validation and actual execution.
Args:
graph_id: The ID of the graph to validate/construct.
user_id: The ID of the user.
graph_inputs: The input data for the graph execution.
graph_version: The version of the graph to use.
graph_credentials_inputs: Credentials inputs to use.
nodes_input_masks: Node inputs to use.
Returns:
tuple[GraphModel, list[tuple[str, BlockInput]], dict[str, dict[str, JsonValue]]]: The graph model, the starting-node execution inputs, and the merged node input masks.
Raises:
NotFoundError: If the graph is not found.
GraphValidationError: If the graph has validation issues.
ValueError: If there are other validation errors.
"""
if prisma.is_connected():
gdb = graph_db
else:
gdb = get_database_manager_async_client()
graph: GraphModel | None = await gdb.get_graph(
graph_id=graph_id,
user_id=user_id,
version=graph_version,
include_subgraphs=True,
)
if not graph:
raise NotFoundError(f"Graph #{graph_id} not found.")
nodes_input_masks = _merge_nodes_input_masks(
(
make_node_credentials_input_map(graph, graph_credentials_inputs)
if graph_credentials_inputs
else {}
),
nodes_input_masks or {},
)
starting_nodes_input = await _construct_starting_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=graph_inputs,
nodes_input_masks=nodes_input_masks,
)
return graph, starting_nodes_input, nodes_input_masks
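A hedged caller-side sketch of this entry point, importing the module the same way the tests do; it assumes GraphValidationError exposes the node_errors mapping it is constructed with above:

import asyncio

from backend.executor import utils as execution_utils
from backend.util.exceptions import GraphValidationError, NotFoundError

async def preflight(graph_id: str, user_id: str, inputs: dict) -> bool:
    """Return True if the graph would pass validation with the given inputs."""
    try:
        await execution_utils.validate_and_construct_node_execution_input(
            graph_id=graph_id,
            user_id=user_id,
            graph_inputs=inputs,
        )
        return True
    except NotFoundError:
        print(f"Graph #{graph_id} not found for this user")
    except GraphValidationError as e:
        # e.node_errors: dict[node_id, dict[field_name, error_message]] (assumed attribute)
        for node_id, field_errors in e.node_errors.items():
            print(f"{node_id}: {field_errors}")
    return False

# asyncio.run(preflight("graph-id", "user-id", {}))  # illustrative IDs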
def _merge_nodes_input_masks(
overrides_map_1: dict[str, dict[str, JsonValue]],
overrides_map_2: dict[str, dict[str, JsonValue]],
@@ -633,11 +692,6 @@ def _merge_nodes_input_masks(
# ============ Execution Queue Helpers ============ #
class CancelExecutionEvent(BaseModel):
graph_exec_id: str
GRAPH_EXECUTION_EXCHANGE = Exchange(
name="graph_execution",
type=ExchangeType.DIRECT,
@@ -655,6 +709,11 @@ GRAPH_EXECUTION_CANCEL_EXCHANGE = Exchange(
)
GRAPH_EXECUTION_CANCEL_QUEUE_NAME = "graph_execution_cancel_queue"
# Graceful shutdown timeout constants
# Agent executions can run for up to 1 day, so we need a graceful shutdown period
# that allows long-running executions to complete naturally
GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS = 24 * 60 * 60 # 1 day to complete active executions
def create_execution_queue_config() -> RabbitMQConfig:
"""
@@ -669,13 +728,14 @@ def create_execution_queue_config() -> RabbitMQConfig:
durable=True,
auto_delete=False,
arguments={
# x-consumer-timeout (0 = disabled)
# x-consumer-timeout (1 day, in milliseconds)
# Problem: Default 30-minute consumer timeout kills long-running graph executions
# Original error: "Consumer acknowledgement timed out after 1800000 ms (30 minutes)"
# Solution: Disable consumer timeout entirely - let graphs run indefinitely
# Safety: Heartbeat mechanism now handles dead consumer detection instead
# Use case: Graph executions that take hours to complete (AI model training, etc.)
"x-consumer-timeout": 0,
"x-consumer-timeout": GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS
* 1000,
},
)
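For reference, the same queue argument expressed directly against RabbitMQ, as a sketch using the pika client (pika, the localhost broker, and the queue name are assumptions; the repo goes through its own RabbitMQConfig wrapper, and x-consumer-timeout is specified in milliseconds):

import pika

GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS = 24 * 60 * 60  # 1 day, as defined above

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(
    queue="graph_execution_queue",  # illustrative name
    durable=True,
    auto_delete=False,
    arguments={
        # Consumers holding a delivery longer than this (in ms) are closed by the broker.
        "x-consumer-timeout": GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS * 1000,
    },
)
connection.close()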
cancel_queue = Queue(
@@ -692,6 +752,10 @@ def create_execution_queue_config() -> RabbitMQConfig:
)
class CancelExecutionEvent(BaseModel):
graph_exec_id: str
async def stop_graph_execution(
user_id: str,
graph_exec_id: str,
@@ -705,7 +769,7 @@ async def stop_graph_execution(
3. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`.
"""
queue_client = await get_async_execution_queue()
db = execution_db if prisma.is_connected() else get_db_async_client()
db = execution_db if prisma.is_connected() else get_database_manager_async_client()
await queue_client.publish_message(
routing_key="",
message=CancelExecutionEvent(graph_exec_id=graph_exec_id).model_dump_json(),
@@ -737,51 +801,28 @@ async def stop_graph_execution(
ExecutionStatus.QUEUED,
ExecutionStatus.INCOMPLETE,
]:
break
# If the graph is still on the queue, we can prevent them from being executed
# by setting the status to TERMINATED.
graph_exec.status = ExecutionStatus.TERMINATED
await asyncio.gather(
# Update graph execution status
db.update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.TERMINATED,
),
# Publish graph execution event
get_async_execution_event_bus().publish(graph_exec),
)
return
if graph_exec.status == ExecutionStatus.RUNNING:
await asyncio.sleep(0.1)
# Set the termination status if the graph is not stopped after the timeout.
if graph_exec := await db.get_graph_execution_meta(
execution_id=graph_exec_id, user_id=user_id
):
# If the graph is still on the queue, we can prevent them from being executed
# by setting the status to TERMINATED.
node_execs = await db.get_node_executions(
graph_exec_id=graph_exec_id,
statuses=[
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
include_exec_data=False,
)
graph_exec.status = ExecutionStatus.TERMINATED
for node_exec in node_execs:
node_exec.status = ExecutionStatus.TERMINATED
await asyncio.gather(
# Update node execution statuses
db.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in node_execs],
ExecutionStatus.TERMINATED,
),
# Publish node execution events
*[
get_async_execution_event_bus().publish(node_exec)
for node_exec in node_execs
],
)
await asyncio.gather(
# Update graph execution status
db.update_graph_execution_stats(
graph_exec_id=graph_exec_id,
status=ExecutionStatus.TERMINATED,
),
# Publish graph execution event
get_async_execution_event_bus().publish(graph_exec),
)
raise TimeoutError(
f"Graph execution #{graph_exec_id} will need to take longer than {wait_timeout} seconds to stop. "
f"You can check the status of the execution in the UI or try again later."
)
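A minimal caller-side sketch of this contract (the function is awaited with the user and execution IDs shown in its signature; a TimeoutError only means the stop is still in progress, not that it failed):

from backend.executor.utils import stop_graph_execution

async def cancel_execution(user_id: str, graph_exec_id: str) -> None:
    try:
        await stop_graph_execution(user_id=user_id, graph_exec_id=graph_exec_id)
        print(f"Execution #{graph_exec_id} terminated")
    except TimeoutError as e:
        # The cancel request was published but the execution is still winding down;
        # surface the message so the user can check back later.
        print(e)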
async def add_graph_execution(
@@ -811,61 +852,62 @@ async def add_graph_execution(
ValueError: If the graph is not found or if there are validation errors.
"""
if prisma.is_connected():
gdb = graph_db
edb = execution_db
else:
gdb = get_db_async_client()
edb = get_db_async_client()
edb = get_database_manager_async_client()
graph: GraphModel | None = await gdb.get_graph(
graph_id=graph_id,
user_id=user_id,
version=graph_version,
include_subgraphs=True,
)
if not graph:
raise NotFoundError(f"Graph #{graph_id} not found.")
nodes_input_masks = _merge_nodes_input_masks(
(
make_node_credentials_input_map(graph, graph_credentials_inputs)
if graph_credentials_inputs
else {}
),
nodes_input_masks or {},
)
starting_nodes_input = await construct_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=inputs or {},
nodes_input_masks=nodes_input_masks,
)
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
graph, starting_nodes_input, nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
graph_inputs=inputs or {},
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
)
)
graph_exec = None
try:
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
queue = await get_async_execution_queue()
graph_exec_entry = graph_exec.to_graph_execution_entry()
if nodes_input_masks:
graph_exec_entry.nodes_input_masks = nodes_input_masks
logger.info(
f"Created graph execution #{graph_exec.id} for graph "
f"#{graph_id} with {len(starting_nodes_input)} starting nodes. "
f"Now publishing to execution queue."
)
await queue.publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
logger.info(f"Published execution {graph_exec.id} to RabbitMQ queue")
bus = get_async_execution_event_bus()
await bus.publish(graph_exec)
return graph_exec
except Exception as e:
logger.error(f"Unable to publish graph #{graph_id} exec #{graph_exec.id}: {e}")
except BaseException as e:
err = str(e) or type(e).__name__
if not graph_exec:
logger.error(f"Unable to execute graph #{graph_id} failed: {err}")
raise
logger.error(
f"Unable to publish graph #{graph_id} exec #{graph_exec.id}: {err}"
)
await edb.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
@@ -873,7 +915,7 @@ async def add_graph_execution(
await edb.update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
stats=GraphExecutionStats(error=err),
)
raise
@@ -888,13 +930,9 @@ class ExecutionOutputEntry(BaseModel):
class NodeExecutionProgress:
def __init__(
self,
on_done_task: Callable[[str, object], None],
):
def __init__(self):
self.output: dict[str, list[ExecutionOutputEntry]] = defaultdict(list)
self.tasks: dict[str, Future] = {}
self.on_done_task = on_done_task
self._lock = threading.Lock()
def add_task(self, node_exec_id: str, task: Future):
@@ -934,7 +972,9 @@ class NodeExecutionProgress:
except TimeoutError:
pass
except Exception as e:
logger.error(f"Task for exec ID {exec_id} failed with error: {str(e)}")
logger.error(
f"Task for exec ID {exec_id} failed with error: {e.__class__.__name__} {str(e)}"
)
pass
return self.is_done(0)
@@ -952,7 +992,7 @@ class NodeExecutionProgress:
cancelled_ids.append(task_id)
return cancelled_ids
def wait_for_cancellation(self, timeout: float = 5.0):
def wait_for_done(self, timeout: float = 5.0):
"""
Wait for all tasks to finish (completed or cancelled).
@@ -962,9 +1002,12 @@ class NodeExecutionProgress:
start_time = time.time()
while time.time() - start_time < timeout:
# Check if all tasks are done (either completed or cancelled)
if all(task.done() for task in self.tasks.values()):
return True
while self.pop_output():
pass
if self.is_done():
return
time.sleep(0.1) # Small delay to avoid busy waiting
raise TimeoutError(
@@ -983,11 +1026,7 @@ class NodeExecutionProgress:
if self.output[exec_id]:
return False
if task := self.tasks.pop(exec_id):
try:
self.on_done_task(exec_id, task.result())
except Exception as e:
logger.error(f"Task for exec ID {exec_id} failed with error: {str(e)}")
self.tasks.pop(exec_id)
return True
def _next_exec(self) -> str | None:


@@ -5,7 +5,6 @@ from contextlib import asynccontextmanager
from datetime import datetime, timedelta, timezone
from typing import Optional
from autogpt_libs.utils.cache import thread_cached
from autogpt_libs.utils.synchronize import AsyncRedisKeyedMutex
from pydantic import SecretStr
@@ -183,6 +182,15 @@ zerobounce_credentials = APIKeyCredentials(
expires_at=None,
)
enrichlayer_credentials = APIKeyCredentials(
id="d9fce73a-6c1d-4e8b-ba2e-12a456789def",
provider="enrichlayer",
api_key=SecretStr(settings.secrets.enrichlayer_api_key),
title="Use Credits for Enrichlayer",
expires_at=None,
)
llama_api_credentials = APIKeyCredentials(
id="d44045af-1c33-4833-9e19-752313214de2",
provider="llama_api",
@@ -191,6 +199,14 @@ llama_api_credentials = APIKeyCredentials(
expires_at=None,
)
v0_credentials = APIKeyCredentials(
id="c4e6d1a0-3b5f-4789-a8e2-9b123456789f",
provider="v0",
api_key=SecretStr(settings.secrets.v0_api_key),
title="Use Credits for v0 by Vercel",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -204,6 +220,7 @@ DEFAULT_CREDENTIALS = [
jina_credentials,
unreal_credentials,
open_router_credentials,
enrichlayer_credentials,
fal_credentials,
exa_credentials,
e2b_credentials,
@@ -214,6 +231,8 @@ DEFAULT_CREDENTIALS = [
smartlead_credentials,
zerobounce_credentials,
google_maps_credentials,
llama_api_credentials,
v0_credentials,
]
@@ -229,17 +248,15 @@ class IntegrationCredentialsStore:
return self._locks
@property
@thread_cached
def db_manager(self):
if prisma.is_connected():
from backend.data import user
return user
else:
from backend.executor.database import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
from backend.util.clients import get_database_manager_async_client
return get_service_client(DatabaseManagerAsyncClient)
return get_database_manager_async_client()
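This "local data layer if Prisma is connected, otherwise RPC client" dispatch also appears in add_graph_execution and validate_and_construct_node_execution_input above; schematically, with stand-in classes, since only the selection logic matters here:

import asyncio

class _LocalDB:  # stands in for the in-process data module (e.g. backend.data.user)
    async def get_user_integrations(self, user_id: str) -> str:
        return f"local:{user_id}"

class _RemoteDBClient:  # stands in for the DatabaseManager async RPC client
    async def get_user_integrations(self, user_id: str) -> str:
        return f"rpc:{user_id}"

def _prisma_is_connected() -> bool:  # placeholder for prisma.is_connected()
    return False

def get_db():
    # Same selection shape as the db_manager property above.
    return _LocalDB() if _prisma_is_connected() else _RemoteDBClient()

print(asyncio.run(get_db().get_user_integrations("user-123")))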
# =============== USER-MANAGED CREDENTIALS =============== #
async def add_creds(self, user_id: str, credentials: Credentials) -> None:
@@ -282,6 +299,8 @@ class IntegrationCredentialsStore:
all_credentials.append(unreal_credentials)
if settings.secrets.open_router_api_key:
all_credentials.append(open_router_credentials)
if settings.secrets.enrichlayer_api_key:
all_credentials.append(enrichlayer_credentials)
if settings.secrets.fal_api_key:
all_credentials.append(fal_credentials)
if settings.secrets.exa_api_key:
@@ -363,21 +382,6 @@ class IntegrationCredentialsStore:
# ============== SYSTEM-MANAGED CREDENTIALS ============== #
async def get_ayrshare_profile_key(self, user_id: str) -> SecretStr | None:
"""Get the Ayrshare profile key for a user.
The profile key is used to authenticate API requests to Ayrshare's social media posting service.
See https://www.ayrshare.com/docs/apis/profiles/overview for more details.
Args:
user_id: The ID of the user to get the profile key for
Returns:
The profile key as a SecretStr if set, None otherwise
"""
user_integrations = await self._get_user_integrations(user_id)
return user_integrations.managed_credentials.ayrshare_profile_key
async def set_ayrshare_profile_key(self, user_id: str, profile_key: str) -> None:
"""Set the Ayrshare profile key for a user.


@@ -25,6 +25,7 @@ class ProviderName(str, Enum):
GROQ = "groq"
HTTP = "http"
HUBSPOT = "hubspot"
ENRICHLAYER = "enrichlayer"
IDEOGRAM = "ideogram"
JINA = "jina"
LLAMA_API = "llama_api"
@@ -47,6 +48,7 @@ class ProviderName(str, Enum):
TWITTER = "twitter"
TODOIST = "todoist"
UNREAL_SPEECH = "unreal_speech"
V0 = "v0"
ZEROBOUNCE = "zerobounce"
@classmethod


@@ -8,10 +8,11 @@ from pydantic import BaseModel
from backend.data.block import get_block
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.util.clients import (
get_database_manager_client,
get_notification_manager_client,
)
from backend.util.metrics import sentry_capture_error
from backend.util.service import get_service_client
from backend.util.settings import Config
logger = logging.getLogger(__name__)
@@ -40,7 +41,7 @@ class BlockErrorMonitor:
def __init__(self, include_top_blocks: int | None = None):
self.config = config
self.notification_client = get_service_client(NotificationManagerClient)
self.notification_client = get_notification_manager_client()
self.include_top_blocks = (
include_top_blocks
if include_top_blocks is not None
@@ -107,7 +108,7 @@ class BlockErrorMonitor:
) -> dict[str, BlockStatsWithSamples]:
"""Get block execution stats using efficient SQL aggregation."""
result = execution_utils.get_db_client().get_block_error_stats(
result = get_database_manager_client().get_block_error_stats(
start_time, end_time
)
@@ -197,7 +198,7 @@ class BlockErrorMonitor:
) -> list[str]:
"""Get error samples for a specific block - just a few recent ones."""
# Only fetch a small number of recent failed executions for this specific block
executions = execution_utils.get_db_client().get_node_executions(
executions = get_database_manager_client().get_node_executions(
block_ids=[block_id],
statuses=[ExecutionStatus.FAILED],
created_time_gte=start_time,

Some files were not shown because too many files have changed in this diff.