Compare commits


119 Commits

Author SHA1 Message Date
Nicholas Tindle
155b496678 wip 2025-10-03 13:25:39 -05:00
Nicholas Tindle
f64c309fd4 feat: baseline impl 2025-09-25 18:24:43 -05:00
Nicholas Tindle
03863219a3 feat: baseline backend for optional 2025-09-25 16:24:19 -05:00
Reinier van der Leer
0b267f573e feat(blocks): Improve JSON generation+parsing in AI Structured Response block (#10960)
The AI Structured Response Generator block currently doesn't support
responses that aren't pure JSON. This prohibits multi-step prompting
because reasoning content is not allowed in the response, which in turn
limits performance.

### Changes 🏗️

- Adjust prompt to enclose JSON in pre-defined tags so we can extract it
from a response that isn't pure JSON
- Adjust the mechanism to extract and parse JSON (see the sketch below)
- Add `force_json_output` input (advanced, default `False`)
- Update incorrect `max_output_tokens` values for Claude 4 and 3.7 to
prevent responses from being cut off due to `max_tokens`
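
A minimal sketch of the tag-based extraction approach, assuming hypothetical `<json>` tags (the block's actual pre-defined tags may differ):

```python
import json
import re

# Hypothetical tag pair; the block's actual pre-defined tags may differ.
JSON_TAG = re.compile(r"<json>(.*?)</json>", re.DOTALL)

def extract_json(response: str) -> dict:
    """Pull a JSON object out of a response that may also contain reasoning text."""
    match = JSON_TAG.search(response)
    candidate = match.group(1) if match else response  # fall back for pure-JSON responses
    return json.loads(candidate.strip())
```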

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] LLMs correctly follow response generation instructions
- [x] LLMs follow system response format instructions even if user
prompt contains conflicting instructions
  - [x] JSON is extracted from response successfully
  - [x] `force_json_output` works (at least for models that support it)

Tested with Claude 4 Sonnet, various GPT models, and Llama 3.3 70B.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-25 17:42:25 +00:00
Reinier van der Leer
7bd571d9ce fix(blocks): Default disable HTML escaping in all blocks with templating features (#10955)
- Resolves #10954

Unnecessary escaping distorts content and so should be disabled wherever
the output isn't used in HTML.

### Changes 🏗️

- Disable HTML escaping on prompt value insertion in AI blocks
- Make HTML escaping optional in text formatting and output blocks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `SandboxedEnvironment(autoescape=False).from_string(template_str).render(values)` doesn't escape characters with HTML entities (see the sketch below)
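
A minimal sketch of the behavior verified above, using the public Jinja2 sandbox API:

```python
from jinja2.sandbox import SandboxedEnvironment

# With autoescape disabled, special characters are rendered verbatim
# instead of being converted to HTML entities like &lt; and &amp;.
env = SandboxedEnvironment(autoescape=False)
rendered = env.from_string("Hello {{ name }}").render(name="<World & Co>")
assert rendered == "Hello <World & Co>"
```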
2025-09-25 12:04:38 +00:00
Zamil Majdy
7a331651ba feat(backend): enhance database indexes for AgentGraph and AgentGraphExecution performance (#10985)
## Summary

Enhances database performance by improving indexes on `AgentGraph` and
`AgentGraphExecution` tables for better query efficiency.

### Changes 🏗️

- **Database Schema**: Updated Prisma schema to enhance database indexes
- Modified `AgentGraph` index from `[userId, isActive]` to `[userId,
isActive, id, version]` for better compound query performance
- Enhanced `AgentGraphExecution` index from `[userId]` to `[userId,
isDeleted, createdAt]` to support filtered queries with sorting
- **Migration**: Auto-generated Prisma migration to implement the index
changes
- Drops existing indexes: `AgentGraph_userId_isActive_idx` and
`AgentGraphExecution_userId_idx`
- Creates new compound indexes:
`AgentGraph_userId_isActive_id_version_idx` and
`AgentGraphExecution_userId_isDeleted_createdAt_idx`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified migration runs successfully
  - [x] Confirmed database queries continue to work with new indexes
  - [x] Tested that existing functionality remains unaffected

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 09:05:53 +00:00
Zamil Majdy
5bc69adc33 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-09-25 16:09:01 +07:00
Krzysztof Czerwinski
f4bcc8494f hotfix: Fix Agent node missing inputs and outputs (#10987)
Restore `include=AGENT_GRAPH_INCLUDE` that is needed to build schema
from the nodes.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I/O is back on the Agent node
2025-09-25 08:56:32 +00:00
Zamil Majdy
4c000086e6 feat(backend): implement clean k6 load testing infrastructure (#10978)
## Summary

Implement comprehensive k6 load testing infrastructure for the AutoGPT
Platform with clean file organization, unified test runner, and cloud
integration.

## Key Features

### 🗂️ Clean File Organization
- **tests/basic/**: Simple validation tests (connectivity, single
endpoints)
- **tests/api/**: Core functionality tests (API endpoints, graph
execution)
- **tests/marketplace/**: User-facing feature tests (public/library
access)
- **tests/comprehensive/**: End-to-end scenario tests (complete user
journeys)
- **orchestrator/**: Advanced test orchestration for full suites

### 🚀 Unified Test Runner
- **Single entry point**: `run-tests.js` for both local and cloud
execution
- **7 available tests**: From basic connectivity to comprehensive
platform journeys
- **Flexible execution**: Run individual tests, comma-separated lists,
or all tests
- **Auto-configuration**: Different VU/duration settings for local vs
cloud execution

### 🔐 Advanced Authentication
- **Pre-authenticated tokens**: 24-hour JWT tokens eliminate Supabase
rate limiting
- **Configurable generation**: Default 10 tokens, scalable to 150+ for
high concurrency
- **Graceful handling**: Proper auth failure detection and recovery
- **ES module compatible**: Modern JavaScript with full import/export
support

### ☁️ k6 Cloud Integration
- **Cloud execution**: Tests run on k6 cloud infrastructure for
consistent results
- **Real-time monitoring**: Live dashboards with performance metrics
- **URL tracking**: Automatic test result URL capture and storage
- **Sequential orchestration**: Proper delays between tests for resource
management

## Test Coverage

### Performance Validated
- **Core API**: 100 VUs successfully testing `/api/credits`,
`/api/graphs`, `/api/blocks`, `/api/executions`
- **Graph Execution**: 80 VUs for complete workflow pipeline testing
- **Marketplace**: 150 VUs for public browsing, 100 VUs for
authenticated library operations
- **Authentication**: 150+ concurrent users with pre-authenticated token
scaling

### User Journey Simulation
- **Dashboard workflows**: Credits checking, graph management, execution
monitoring
- **Marketplace browsing**: Public search, agent discovery, category
filtering
- **Library operations**: Agent adding, favoriting, forking, detailed
views
- **Complete workflows**: End-to-end platform usage with realistic user
behavior

## Technical Implementation

### ES Module Compatibility
- Full ES module support with modern JavaScript imports/exports
- Proper module execution patterns for Node.js compatibility
- Clean separation between CommonJS legacy and modern ES modules

### Error Handling & Monitoring  
- **Separate metrics**: HTTP status, authentication, JSON validation,
overall success
- **Graceful degradation**: Auth failures don't crash VUs, proper error
tracking
- **Performance thresholds**: Configurable P95/P99 latency and error
rate limits
- **Custom counters**: Track operation types, success rates, user
journey completion

### Infrastructure Benefits
- **Rate limit protection**: Pre-auth tokens prevent Supabase auth
bottlenecks
- **Scalable testing**: Support for 150+ concurrent users with proper
token management
- **Cloud consistency**: Tests run on dedicated k6 cloud servers for
reliable results
- **Development workflow**: Local execution for debugging, cloud for
performance validation

## Usage

### Quick Start
```bash
# Setup and verification
export SUPABASE_SERVICE_KEY="your-service-key"
node generate-tokens.js
node run-tests.js verify

# Local testing (development)
node run-tests.js run core-api-test DEV

# Cloud testing (performance)
node run-tests.js cloud all DEV
```

### NPM Scripts
```bash
npm run verify    # Quick setup check
npm test         # All tests locally  
npm run cloud    # All tests in k6 cloud
```

## Validation Results

- **Authentication**: 100% success rate with fresh 24-hour tokens
- **File Structure**: All imports and references verified correct
- **Test Execution**: All 7 tests execute successfully with proper metrics
- **Cloud Integration**: k6 cloud execution working with proper credentials
- **Documentation**: Complete README with usage examples and troubleshooting

## Files Changed

### Core Infrastructure
- `run-tests.js`: Unified test runner supporting local/cloud execution
- `generate-tokens.js`: ES module compatible token generation with
24-hour expiry
- `README.md`: Comprehensive documentation with updated file references

### Organized Test Structure  
- `tests/basic/connectivity-test.js`: Basic connectivity validation
- `tests/basic/single-endpoint-test.js`: Individual API endpoint testing
- `tests/api/core-api-test.js`: Core authenticated API endpoints
- `tests/api/graph-execution-test.js`: Graph workflow pipeline testing  
- `tests/marketplace/public-access-test.js`: Public marketplace browsing
- `tests/marketplace/library-access-test.js`: Authenticated
marketplace/library operations
- `tests/comprehensive/platform-journey-test.js`: Complete user journey
simulation

### Configuration
- `configs/environment.js`: Environment URLs and performance settings
- `package.json`: NPM scripts and dependencies for unified workflow

This infrastructure provides a solid foundation for continuous
performance monitoring and load testing of the AutoGPT Platform.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-25 12:51:54 +07:00
Krzysztof Czerwinski
9c6cc5b29d Merge branch 'dev' 2025-09-25 13:28:17 +09:00
Toran Bruce Richards
b34973ca47 feat: Add 'depth' parameter to DataForSEO Related Keywords block (#10983)
Fixes #10982

The DataForSEO Related Keywords block was missing the `depth` parameter,
which is a critical parameter that controls the comprehensiveness of
keyword research. The depth parameter determines the number of related
keywords returned by the API, ranging from 1 keyword at depth 0 to
approximately 4680 keywords at depth 4.

### Changes 🏗️

- Added `depth` parameter to the DataForSEO Related Keywords block as an
integer input field (range 0-4)
- Added `depth` parameter to the `related_keywords` method signature in
the API client
- Updated the API client to include the depth parameter in the request
payload when provided (sketched below)
- Added documentation explaining the depth parameter's effect on the
number of returned keywords
- Fixed missing parameter in function signature that was causing runtime
errors
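
A hedged sketch of the optional-parameter pattern described above; the class, method, and endpoint names are approximations, not the actual client code:

```python
class DataForSeoClient:
    """Illustrative client stub; real names and endpoints may differ."""

    async def related_keywords(
        self, keyword: str, location_name: str, depth: int | None = None
    ) -> dict:
        payload: dict = {"keyword": keyword, "location_name": location_name}
        if depth is not None:
            # Only send depth when provided; the valid range is 0-4.
            payload["depth"] = depth
        return await self._post("keywords_data/related_keywords", payload)

    async def _post(self, path: str, payload: dict) -> dict:
        ...  # HTTP plumbing omitted
```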

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the depth parameter appears correctly in the block UI
with appropriate range validation (0-4)
  - [x] Confirmed the parameter is passed correctly to the API client
- [x] Tested that omitting the depth parameter doesn't break existing
functionality (defaults to None)
- [x] Verified the implementation follows the existing pattern for
optional parameters in the DataForSEO blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: No configuration changes were required for this feature addition.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-09-24 21:29:47 +00:00
Nicholas Tindle
2bc6a56877 fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary
- Fixed "Timeout context manager should be used inside a task" error
occurring intermittently in FileInput blocks when downloading files from
Google Cloud Storage
- Implemented proper async session management for GCS client to ensure
operations run within correct task context
- Added comprehensive logging to help diagnose and monitor the issue in
production

## Changes
### Core Fix
- Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh
GCS client and session for each download operation (see the sketch after this list)
- This ensures the aiohttp session is always created within the proper
async task context, preventing the timeout error
- The fix trades a small amount of efficiency for reliability, but only
affects download operations
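
A minimal sketch of the per-download session approach, assuming the `gcloud-aio-storage` client (names approximate):

```python
import aiohttp
from gcloud.aio.storage import Storage

async def retrieve_file_gcs(bucket: str, blob_name: str) -> bytes:
    # Creating the session inside the coroutine binds it to the task that
    # performs the download, avoiding "Timeout context manager should be
    # used inside a task" errors from a session created in another context.
    async with aiohttp.ClientSession() as session:
        storage = Storage(session=session)
        return await storage.download(bucket, blob_name)
```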

### Logging Enhancements
- Added detailed logging in `store_media_file()` to track execution
context and async task state
- Enhanced `scan_content_safe()` to specifically catch and log timeout
errors with CRITICAL level
- Added context logging in virus scanner around `asyncio.create_task()`
calls
- Upgraded key debug logs to info level for visibility in production

### Code Quality
- Fixed unbound variable issue where `async_client` could be referenced
before initialization
- Replaced bare `except:` clauses with proper exception handling
- Fixed unused parameters warning in `__aexit__` method

## Testing
- The timeout error was occurring intermittently in production when
FileInput blocks processed GCS files
- With these changes, the error should be eliminated as the session is
always created in the correct context
- Comprehensive logging allows monitoring of the fix effectiveness in
production


## Context
The root cause was that `gcloud-aio-storage` was creating its internal
aiohttp session/timeout context outside of an async task context when
called by the executor. This happened intermittently depending on how
the executor scheduled block execution.

## Related Issues
- Addresses timeout errors reported in FileInput block execution
- Improves reliability of file uploads from the platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test a multiple file input agent and it works
  - [x] Test the agent that is causing the failure and it works

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 16:21:41 -05:00
Nicholas Tindle
87c773d03a fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
2025-09-24 21:06:51 +00:00
Swifty
ebeefc96e8 feat(backend): implement caching layer for store API endpoints (Part 1) (#10975)
## Summary
This PR introduces comprehensive caching for the Store API endpoints to
improve performance and reduce database load. This is **Part 1** in a
series of PRs to add comprehensive caching across our entire API.

### Key improvements:
- Implements a caching layer using the existing `@cached` decorator from
`autogpt_libs.utils.cache` (sketched below)
- Reduces database queries by 80-90% for frequently accessed public data
- Built-in thundering herd protection prevents database overload during
cache expiry
- Selective cache invalidation ensures data freshness when mutations
occur
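
A rough sketch of the decorator usage; the TTL argument name and the `fetch_agents_from_db` helper are assumptions, not the library's exact API:

```python
from autogpt_libs.utils.cache import cached  # existing decorator per this PR

# TTL argument name is an assumption; the real @cached signature may differ.
@cached(ttl_seconds=120)
async def list_store_agents(page: int = 1) -> list[dict]:
    # Expensive DB query; within the TTL window repeat calls hit the cache,
    # and thundering-herd protection serializes concurrent cache misses.
    return await fetch_agents_from_db(page)  # hypothetical helper
```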

## Details

### Cached endpoints with TTLs:
- **Public data (2-10 min TTL):**
  - `/agents` - Store agents list (2 min)
  - `/agents/{username}/{agent_name}` - Agent details (5 min)
  - `/graph/{store_listing_version_id}` - Agent graphs (10 min)
  - `/agents/{store_listing_version_id}` - Agent by version (10 min)
  - `/creators` - Creators list (5 min)
  - `/creator/{username}` - Creator details (5 min)

- **User-specific data (1-5 min TTL):**
  - `/profile` - User profiles (5 min)
  - `/myagents` - User's own agents (1 min)
  - `/submissions` - User's submissions (1 min)

### Cache invalidation strategy:
- Profile updates → clear user's profile cache
- New reviews → clear specific agent cache + agents list
- New submissions → clear agents list + user's caches
- Submission edits → clear related version caches

### Cache management endpoints:
- `GET /cache/info` - Monitor cache statistics
- `POST /cache/clear` - Clear all caches
- `POST /cache/clear/{cache_name}` - Clear specific cache

## Changes  
- Added caching decorators to all suitable GET endpoints in store routes
- Implemented cache invalidation on data mutations (POST/PUT/DELETE)
- Added cache management endpoints for monitoring and manual clearing
- Created comprehensive test suite for cache_delete functionality
- Verified thundering herd protection works correctly

## Testing
- Created comprehensive test suite (`test_cache_delete.py`) validating:
  - Selective cache deletion works correctly
  - Cache entries are properly invalidated on mutations
  - Other cache entries remain unaffected
  - cache_info() accurately reflects state
- Tested thundering herd protection with concurrent requests
- Verified all endpoints return correct data with and without cache

## Checklist
- [x] I have self-reviewed this PR's diff, line by line
- [x] I have updated and tested the software architecture documentation
(if applicable)
- [x] I have run the agent to verify that it still works (if applicable)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 10:01:52 +00:00
Nicholas Tindle
83fe8d5b94 fix(backend): make preset migration not crash the system (#10966)
For block developers, a block may exist in the database but not in the
currently checked-out code:
> Create a block in one branch, test it, then switch to another branch
that doesn't have the block.

This migration will prevent startup in that case.

### Changes 🏗️
Adds a try/except around the migration so a failure no longer blocks
startup (see the sketch below).
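
A minimal sketch of the guard, with a hypothetical name for the migration function:

```python
import logging

logger = logging.getLogger(__name__)

try:
    migrate_presets()  # hypothetical name for the preset migration
except Exception:
    # A block missing from the current branch shouldn't abort startup;
    # log the failure and continue instead.
    logger.exception("Preset migration failed; continuing startup")
```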

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test that startup actually works

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 14:23:22 +07:00
Zamil Majdy
50689218ed feat(backend): implement comprehensive load testing performance fixes + database health improvements (#10965) 2025-09-24 14:22:57 +07:00
Aayush Shah
ddff09a8e4 feat(blocks): add NotionReadPage block (#10760)
Introduces a Notion Read Page block that fetches a page by ID via the
Notion REST API. This is a first step toward Notion integration in the
AutoGPT Platform.
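
For context, a minimal sketch of the underlying REST call (not the block's actual code):

```python
import requests

def read_notion_page(page_id: str, token: str) -> dict:
    """Fetch a Notion page object by ID via the public REST API."""
    response = requests.get(
        f"https://api.notion.com/v1/pages/{page_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": "2022-06-28",  # pinned API version
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```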

Motivation: Notion was not integrated yet. I'm starting with a small
block to add capability incrementally.

### Notes
- I referred to the Todoist block implementation as a reference since
I’m a beginner.
- This is my first PR here  
- The block passed `docker compose run --rm rest_server pytest -q`
successfully


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test plan
- [x] Ran `docker compose run --rm rest_server pytest -q
backend/blocks/test/test_block.py -k notion`
- [x] Confirmed tests passed (2 passed, 652 deselected, warnings only).
- [x] Ran `poetry run format` to fix linting and formatting issues

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2025-09-19 18:54:47 +00:00
Ubbe
0c363a1cea fix(frontend): force dynamic rendering on marketplace (#10957)
## Changes 🏗️

When building on Vercel:
```
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
Failed to get server auth token: Error: Dynamic server usage: Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error
    at r (.next/server/chunks/8450.js:22:7298)
    at n (.next/server/chunks/4735.js:1:37020)
    at g (.next/server/chunks/555.js:1:31925)
    at m (.next/server/chunks/555.js:1:87056)
    at h (.next/server/chunks/555.js:1:932)
    at k (.next/server/chunks/555.js:1:25195)
    at queryFn (.next/server/chunks/555.js:1:25590)
    at Object.f [as fn] (.next/server/chunks/2744.js:1:316625)
    at q (.next/server/chunks/2744.js:1:312288)
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
```
That's because the `/marketplace` page prefetches the store agents data
on the server, and that query uses `cookies` for auth. In theory, those
endpoints can be called without auth, but being logged in may affect the
results.

The simplest fix for now is to tell Next.js not to statically render the
page and instead render it on the fly with caching. According to AI, we
shouldn't see much of a difference performance-wise:

> Short answer: Usually no noticeable slowdown. You’ll trade a small
TTFB increase (server renders per request) for correct behavior with
cookies. Overall interactivity stays the same since we still dehydrate
React Query data.
>
> Why it’s fine: the server already had to fetch marketplace data; doing
it at request time vs build time is roughly the same cost for users.
Hydration uses the prefetched data, avoiding extra client round-trips.
>
> If you want extra speed: if those endpoints don’t need auth, we can
skip reading cookies during server prefetch and enable ISR (e.g.,
revalidate=60) for partial caching. Or move the cookie-dependent parts
to the client and keep the page static.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Page load marketplace is fine and not slow 
  - [x] No build cookies errors 


### For configuration changes:

None
2025-09-19 08:24:08 +00:00
Ubbe
e5d870a348 refactor(frontend): move old components to __legacy__ (#10953)
## Changes 🏗️

Moving non-design-system (old) components to a `components/__legacy__`
folder 📁 so it is more obvious to developers that they should not
import them or use them in new features. What is now top-level in
`/components` is what is actively maintained.

Document some existing components like `<Alert />`. More on this coming
in follow-up PRs.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and types pass on the CI
  - [x] Run app locally, click around, looks good 

### For configuration changes:

None
2025-09-18 21:37:43 +00:00
Reinier van der Leer
3f19cba28f fix(frontend/builder): Fix moved blocks disappearing on save (#10951)
- Resolves #10926
- Fixes a bug introduced in #10779

### Changes 🏗️

- Fix `.metadata.position` in graph save payload
- Make node reconciliation after graph save more robust

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Moved nodes don't disappear on graph save
2025-09-18 13:34:06 +00:00
Reinier van der Leer
a978e91271 fix(ci, backend): Update Redis image & amend config to work with it (#10952)
CI is currently broken because Bitnami has pulled all `bitnami/redis`
images.
The current official Redis image on Docker Hub is `redis`.

### Changes 🏗️

- Replace `bitnami/redis:6.2` by `redis:latest` in Backend CI workflow
file
- Make `REDIS_PASSWORD` optional in the backend settings

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI no longer broken
2025-09-18 13:02:49 +00:00
Ubbe
f283e6c514 refactor(frontend): cleanup of components folder (2/3) (#10942)
## Changes 🏗️

Following up my initial PR to tidy up the `components` folder
https://github.com/Significant-Gravitas/AutoGPT/pull/10940.

This is mostly moving files around and renaming some + documenting them
on the design system as needed. Should be pretty safe as long as types
on the CI pass.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Click around, looks ok
  - [x] Test and types pass on the CI  

### For configuration changes:

None
2025-09-18 16:21:18 +09:00
Ubbe
9fc2101e7e refactor(frontend): tidy up on components folder (#10940)
## Changes 🏗️

Re-organise the `components` folder, moving things which are not re-used
across screens or part of the design system out of it.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] It works and test/types pass CI wise 

### For configuration changes:

None
2025-09-17 12:56:49 +00:00
Bentlybro
634f826d82 Merge branch 'master' into dev 2025-09-17 11:35:29 +01:00
Ubbe
6d6bf308fc fix(frontend): marketplace page load and caching (#10934)
## Changes 🏗️

### **Server-Side:**
- **ISR Cache**: Page cached for 60 seconds, served instantly
- **Prefetch**: All API calls made on server, not client
- **Static Generation**: HTML pre-rendered with data
- **Streaming**: Loading states show immediately

### **Client-Side:**
- **No API Calls**: Data hydrated from server cache
- **Fast Hydration**: React Query uses prefetched data
- **Smart Caching**: 60s stale time prevents unnecessary requests
- **Progressive Loading**: Suspense boundaries for better UX

### **🔄 Caching Strategy:**

1. **Server**: ISR cache (60s) → API calls → Static HTML
2. **CDN**: Cached HTML served instantly
3. **Client**: Hydrated data from server → No additional API calls
4. **Background**: ISR regenerates stale pages automatically

### **🎯 Result:**
- **First Visit**: Instant HTML + hydrated data (no client API calls)
- **Subsequent Visits**: Instant cached page
- **Background Updates**: Automatic revalidation every 60s
- **Optimal Performance**: Server-side rendering + client-side caching

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Marketplace page loads are faster 

### For configuration changes:

None
2025-09-17 07:23:43 +00:00
Nicholas Tindle
dd84fb5c66 feat(platform): Add public share links for agent run results (#10938)
This PR adds the ability for users to share their agent run results
publicly via shareable links. Users can generate a public link that
allows anyone to view the outputs of a specific agent execution without
requiring authentication. This feature enables users to share their
agent results with clients, colleagues, or the community.


https://github.com/user-attachments/assets/5508f430-07d0-4cd3-87bc-301b0b005cce


### Changes 🏗️

#### Backend Changes
- **Database Schema**: Added share tracking fields to
`AgentGraphExecution` model in Prisma schema (see the sketch after this list):
  - `isShared`: Boolean flag to track if execution is shared
  - `shareToken`: Unique token for the share URL
  - `sharedAt`: Timestamp when sharing was enabled

- **API Endpoints**: Added three new REST endpoints in
`/backend/backend/server/routers/v1.py`:
- `POST /graphs/{graph_id}/executions/{graph_exec_id}/share`: Enable
sharing for an execution
- `DELETE /graphs/{graph_id}/executions/{graph_exec_id}/share`: Disable
sharing
- `GET /share/{share_token}`: Retrieve shared execution data (public
endpoint)

- **Data Models**:
- Created `SharedExecutionResponse` model for public-safe execution data
- Added `ShareRequest` and `ShareResponse` Pydantic models for type-safe
API responses
  - Updated `GraphExecutionMeta` to include share status fields

- **Security**:
- All share management endpoints verify user ownership before allowing
changes
- Public endpoint only exposes OUTPUT block data, no intermediate
execution details
  - Share tokens are UUIDs for security
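
A minimal sketch of enabling sharing with the schema fields listed above (persistence details omitted):

```python
import uuid
from datetime import datetime, timezone

def enable_sharing(execution) -> str:
    """Mark an execution as shared and return its public token."""
    execution.isShared = True
    execution.shareToken = str(uuid.uuid4())  # unguessable UUID token
    execution.sharedAt = datetime.now(timezone.utc)
    return execution.shareToken
```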

#### Frontend Changes
- **ShareButton Component**
(`/frontend/src/components/ShareButton.tsx`):
  - Modal dialog for managing share settings
  - Copy-to-clipboard functionality for share links
  - Clear warnings about public accessibility
  - Uses Orval-generated API hooks for enable/disable operations

- **Share Page**
(`/frontend/src/app/(no-navbar)/share/[token]/page.tsx`):
  - Clean, navigation-free page for viewing shared executions
- Reuses existing `RunOutputs` component for consistent output rendering
  - Proper error handling for invalid/disabled share links
  - Loading states during data fetch

- **API Integration**:
- Fixed custom mutator to properly set Content-Type headers for POST
requests with empty bodies
  - Generated TypeScript types via Orval for type-safe API calls

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Enable sharing for an agent execution and verify share link is
generated
  - [x] Copy share link and verify it copies to clipboard
- [x] Open share link in incognito/private browser and verify outputs
are displayed
  - [x] Disable sharing and verify share link returns 404
- [x] Try to enable/disable sharing for another user's execution (should
fail with 404)
  - [x] Verify share page shows proper loading and error states
- [x] Test that only OUTPUT blocks are shown in shared view, no
intermediate data
2025-09-17 06:21:33 +00:00
Zamil Majdy
33679f3ffe feat(platform): Add instructions field to agent submissions (#10931)
## Summary

Added an optional "Instructions" field for agent submissions to help
users understand how to run agents and what to expect.

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/015c4f0b-4bdd-48df-af30-9e52ad283e8b"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/3242cee8-a4ad-4536-bc12-64b491a8ef68"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a9b63e1c-94c0-41a4-a44f-b9f98e446793"
/>


### Changes Made

**Backend:**
- Added `instructions` field to `AgentGraph` and `StoreListingVersion`
database models
- Updated `StoreSubmission`, `LibraryAgent`, and related Pydantic models
- Modified store submission API routes to handle instructions parameter
- Updated all database functions to properly save/retrieve instructions
field
- Added graceful handling for cases where database doesn't yet have the
field

**Frontend:**
- Added instructions field to agent submission flow (PublishAgentModal)
- Positioned below "Recommended Schedule" section as specified
- Added instructions display in library/run flow (RunAgentModal)  
- Positioned above credentials section with informative blue styling
- Added proper form validation with 2000 character limit
- Updated all TypeScript types and API client interfaces

### Key Features

- Optional field - fully backward compatible
- Proper positioning in both submission and run flows
- Character limit validation (2000 chars)
- User-friendly display with "How to use this agent" styling
- Only shows when instructions are provided

### Testing

- Verified Pydantic model validation works correctly
- Confirmed schema validation enforces character limits
- Tested graceful handling of missing database fields
- Code formatting and linting completed

## Test plan

- [ ] Test agent submission with instructions field
- [ ] Test agent submission without instructions (backward
compatibility)
- [ ] Verify instructions display correctly in run modal
- [ ] Test character limit validation
- [ ] Verify database migrations work properly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-17 03:55:45 +00:00
Abhimanyu Yadav
fc8c5ccbb6 feat(backend): enhance agent retrieval logic in store agent page (#10933)
This PR enhances the agent retrieval logic in the store database to
ensure accurate fetching of the latest approved agent versions. The
changes address scenarios where agents may have multiple versions with
different approval statuses.

## 🔧 Changes Made

### Enhanced Agent Retrieval Logic (`get_store_agent_details`)
- **Active Version Priority**: Added logic to prioritize fetching agents
based on the `activeVersionId` when available
- **Fallback to Latest Approved**: When no active version is set, the
system now falls back to the latest approved version, sorted by version
number descending (see the sketch below)
- **Improved Accuracy**: Ensures users always see the most relevant
agent version based on the current store listing state
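
A sketch of the selection rule, with hypothetical field names:

```python
def pick_version(listing):
    """Prefer the active version; otherwise the latest approved one."""
    if listing.active_version_id is not None:
        return next(
            (v for v in listing.versions if v.id == listing.active_version_id),
            None,
        )
    approved = [v for v in listing.versions if v.is_approved]
    return max(approved, key=lambda v: v.version, default=None)
```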

### Improved Agent Filtering (`get_my_agents`)
- **Enhanced Store Listing Filter**: Modified the filter to only include
store listings that have at least one available version
- **Nested Version Check**: Added nested filtering to check for
`isAvailable: true` in the versions, preventing empty or unavailable
listings from appearing

## Testing Checklist

- [x] Test fetching agent details with an active version set
- [x] Test fetching agent details without an active version (should fall
back to latest approved)
- [x] Test `get_my_agents` returns only agents with available store
listing versions
- [x] Verify no agents with only unavailable versions appear in results
- [x] Test with agents having multiple versions with different approval
statuses
2025-09-17 02:57:39 +00:00
Reinier van der Leer
7d2ab61546 feat(platform): Disable Trigger Setup through Builder (#10418)
We want users to set up triggers through the Library rather than the
Builder.

- Resolves #10413


https://github.com/user-attachments/assets/515ed80d-6569-4e26-862f-2a663115218c

### Changes 🏗️

- Update node UI to push users to Library for trigger set-up and
management
  - Add note redirecting to Library for trigger set-up
  - Remove webhook status indicator and webhook URL section
- Add `libraryAgent: LibraryAgent` to `BuilderContext` for access inside
`CustomNode`
  - Move library agent loader from `FlowEditor` to `useAgentGraph`

- Implement `migrate_legacy_triggered_graphs` migrator function
- Remove `on_node_activate` hook (which previously handled webhook
setup)
- Propagate `created_at` from DB to `GraphModel` and
`LibraryAgentPreset` models

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing node triggers are converted to triggered presets (visible
in the Library)
    - [x] Converted triggered presets work
  - [x] Trigger node inputs are disabled and handles are hidden
- [x] Trigger node message links to the correct Library Agent when saved
2025-09-16 22:52:51 +00:00
Reinier van der Leer
c2f11dbcfa fix(blocks): Fix feedback loops in AI Structured Response Generator (#10932)
Improve the overall reliability of the AI Structured Response Generator
block from ~40% to ~100%. This block has been giving me a lot of hassle
over the past week and this improvement is an easy win.

- Resolves #10916

### Changes 🏗️

- Improve reliability of AI Structured Response Generator block
  - Fix feedback loops (total success rate ~40% -> 100%; see the sketch below)
  - Improve system prompt (one-shot success rate ~40% -> ~76%)
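
A rough sketch of the feedback-loop mechanism, assuming a generic `llm_call(messages)` helper:

```python
import json

def generate_structured(prompt: str, llm_call, max_retries: int = 3) -> dict:
    """Retry with error feedback until the model returns valid JSON."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_retries):
        reply = llm_call(messages)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as e:
            # Feed the decode error back so the model can correct itself.
            messages.append({"role": "assistant", "content": reply})
            messages.append(
                {"role": "user", "content": f"Invalid JSON ({e}); please retry."}
            )
    raise ValueError("No valid JSON after retries")
```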

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] JSON decode errors are turned into a useful feedback message
  - [x] LLM effectively corrects itself based on the feedback message
2025-09-16 22:50:29 +00:00
Nicholas Tindle
f82adeb959 feat(library): Add agent favoriting functionality (#10828)
### Need 💡

This PR introduces the ability for users to "favorite" agents in the
library view, enhancing agent discoverability and organization.
Favorited agents will be visually marked with a heart icon and
prioritized in the library list, appearing at the top. This feature is
distinct from pinning specific agent runs.

### Changes 🏗️

*   **Backend:**
* Updated `LibraryAgent` model in `backend/server/v2/library/model.py`
to include the `is_favorite` field when fetching from the database.
*   **Frontend:**
* Updated `LibraryAgent` TypeScript type in
`autogpt-server-api/types.ts` to include `is_favorite`.
* Modified `LibraryAgentCard.tsx` to display a clickable heart icon,
indicating the favorite status.
* Implemented a click handler on the heart icon to toggle the
`is_favorite` status via an API call, including loading states and toast
notifications.
* Updated `useLibraryAgentList.ts` to implement client-side sorting,
ensuring favorited agents appear at the top of the list.
* Updated `openapi.json` to include `is_favorite` in the `LibraryAgent`
schema and regenerated frontend API types.
    *   Installed `@orval/core` for API generation.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify that the heart icon is displayed correctly on
`LibraryAgentCard` for both favorited (filled red) and unfavorited
(outlined gray) agents.
  - [x] Click the heart icon on an unfavorited agent:
    - [x] Confirm the icon changes to filled red.
    - [x] Verify a "Added to favorites" toast notification appears.
    - [x] Confirm the agent moves to the top of the library list.
- [x] Check that the agent card does not navigate to the agent details
page.
  - [x] Click the heart icon on a favorited agent:
    - [x] Confirm the icon changes to outlined gray.
    - [x] Verify a "Removed from favorites" toast notification appears.
- [x] Confirm the agent's position adjusts in the list (no longer at the
very top unless other sorting criteria apply).
- [x] Check that the agent card does not navigate to the agent details
page.
- [x] Test the loading state: rapidly click the heart icon and observe
the `opacity-50 cursor-not-allowed` styling.
- [x] Verify that the sorting correctly places all favorited agents at
the top, maintaining their original relative order within the favorited
group, and the same for unfavorited agents.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-16 22:43:50 +00:00
Nicholas Tindle
6f08a1cca7 fix: the api key credentials weren't registering correctly (#10936) 2025-09-16 13:27:04 -05:00
Ubbe
1ddf92eed4 fix(frontend): new agent run page design refinements (#10924)
## Changes 🏗️

Implements all the following changes...

1. Reduced the margins between the runs on the left-hand side by around `6px`
2. Make agent inputs full width
3. Make "Schedule setup" section displayed in a second modal
4. When an agent is running, we should not show the "Delete agent"
button
5. Copy changes around the actions for agent/runs
6. Large button height should be `52px`
7. Fix margins between + New Run button and the runs & scheduled menu
8. Make border white on cards

Also... 
- improve the naming of some components to better reflect their
context/usage
- show on the inputs section when an agent is using already API keys or
credentials
- fix runs/schedules not auto-selecting once created

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent runs page enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-16 14:34:52 +00:00
Reinier van der Leer
4c0dd27157 dx(platform): Add manual dispatch to deploy workflows (#10918)
When deploying from the infra repo, migrations aren't run, which can
cause issues. We need to be able to manually dispatch deployments from
this repo so that the migrations are run as well.

### Changes 🏗️

- add manual dispatch to deploy workflows

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Either it works or it doesn't but this PR won't break anything
existing
2025-09-16 15:57:56 +02:00
Zamil Majdy
17fcf68f2e feat: Separate OpenAI key for smart agent execution summary and other internal AI calls (#10930)
### Changes 🏗️

Separate the API key for internal usage (smart agent execution summary)
and block usage.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manual test after deployment
2025-09-16 09:14:27 +00:00
Reinier van der Leer
381558342a fix(frontend/builder): Fix moved blocks disappearing on no-op save (#10927)
- Resolves #10926

### Changes 🏗️

- Fix save no-op if graph has no changes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving a graph after only moving nodes doesn't make those nodes
disappear
2025-09-16 08:15:10 +00:00
Zamil Majdy
1fdc02467b feat(backend): Add comprehensive Prometheus instrumentation for observability (#10923)
## Summary
- Implement comprehensive Prometheus metrics instrumentation for all
FastAPI services
- Add custom business metrics for graph/block executions
- Enable dual publishing to both Grafana Cloud and internal Prometheus

## Related Infrastructure PR
-
https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/214

## Changes

### 📊 Metrics Infrastructure
- Added `prometheus-fastapi-instrumentator` dependency for automatic
HTTP metrics
- Created centralized `instrumentation.py` module for consistent metrics
across services
- Instrumented REST API, WebSocket, and External API services

### 📈 Automatic HTTP Metrics
All FastAPI services now automatically collect:
- **Request latency**: Histogram with custom buckets (10ms to 60s)
- **Request/response size**: Track payload sizes
- **Request counts**: By method, endpoint, and status code
- **Active requests**: Real-time count of in-progress requests
- **Error rates**: 4xx and 5xx responses

### 🎯 Custom Business Metrics
Added domain-specific metrics (see the sketch after this list):
- **Graph executions**: Count by status (success/error/validation_error)
- **Block executions**: Count and duration by block_type and status
- **WebSocket connections**: Active connection gauge
- **Database queries**: Duration histogram by operation and table
- **RabbitMQ messages**: Count by queue and status
- **Authentication**: Attempts by method and status
- **API key usage**: By provider and block type
- **Rate limiting**: Hit count by endpoint
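
Illustrative metric definitions with `prometheus_client`; the metric names are assumptions, not necessarily the PR's exact ones:

```python
from prometheus_client import Counter, Histogram

graph_executions = Counter(
    "graph_executions_total",
    "Graph executions by outcome",
    ["status"],  # success / error / validation_error
)
block_execution_seconds = Histogram(
    "block_execution_seconds",
    "Block execution duration by block type and status",
    ["block_type", "status"],
)

graph_executions.labels(status="success").inc()
block_execution_seconds.labels(block_type="llm", status="success").observe(0.42)
```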

### 🔌 Service Endpoints
Each service exposes metrics at `/metrics`:
- REST API (port 8006): `/metrics`
- WebSocket (port 8001): `/metrics`
- External API: `/external-api/metrics`
- Executor (port 8002): Already had metrics, now enhanced

### 🏷️ Kubernetes Integration
Updated Helm charts with pod annotations:
```yaml
prometheus.io/scrape: "true"
prometheus.io/port: "8006"  # or appropriate port
prometheus.io/path: "/metrics"
```

## Testing
- [x] Install dependencies: `poetry install`
- [x] Run services: `poetry run serve`
- [x] Check metrics endpoints are accessible
- [x] Verify metrics are being collected
- [x] Confirm Grafana Agent can scrape metrics
- [x] Test graph/block execution tracking
- [x] Verify WebSocket connection metrics

## Performance Impact
- Minimal overhead (~1-2ms per request)
- Metrics are collected asynchronously
- Can be disabled via `ENABLE_METRICS=false` env var

## Next Steps
1. Deploy to dev environment
2. Configure Grafana Cloud dashboards
3. Set up alerting rules based on metrics
4. Add more custom business metrics as needed

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-16 12:58:04 +07:00
Nicholas Tindle
f262bb9307 fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
- Added `user_timezone` parameter to `add_graph_execution_schedule()`
method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC (see the sketch after this list)
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
- Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display
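
A minimal sketch of the DST-aware trigger with APScheduler's `CronTrigger` (names approximate):

```python
from zoneinfo import ZoneInfo
from apscheduler.triggers.cron import CronTrigger

# With an explicit timezone, "0 14 * * *" fires at 14:00 local time on
# both sides of a DST transition; with UTC the wall-clock time drifts.
user_tz = ZoneInfo("America/New_York")
trigger = CronTrigger.from_crontab("0 14 * * *", timezone=user_tz)
```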

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
- Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 18:18:03 -05:00
Nicholas Tindle
5a6978b07d feat(frontend): Add expandable view for block output (#10773)
### Need for these changes 💥


https://github.com/user-attachments/assets/5b9007a1-0c49-44c6-9e8b-52bf23eec72c


Users currently cannot view the full output result from a block when
inspecting the Output Data History panel or node previews, as the
content is clipped. This makes debugging and analysis of complex outputs
difficult, forcing users to copy data to external editors. This feature
improves developer efficiency and user experience, especially for blocks
with large or nested responses, and reintroduces a highly requested
functionality that existed previously.

### Changes 🏗️

* **New `ExpandableOutputDialog` component:** Introduced a reusable
modal dialog (`ExpandableOutputDialog.tsx`) designed to display
complete, untruncated output data.
* **`DataTable.tsx` enhancement:** Added an "Expand" button (Maximize2
icon) to each data entry in the Output Data History panel. This button
appears on hover and opens the `ExpandableOutputDialog` for a full view
of the data.
* **`NodeOutputs.tsx` enhancement:** Integrated the "Expand" button into
node output previews, allowing users to view full output data directly
from the node details.
* The `ExpandableOutputDialog` provides a large, scrollable content
area, displaying individual items in organized cards, with options to
copy individual items or all data, along with execution ID and pin name
metadata.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Navigate to an agent session with executed blocks.
  - [x] Open the Output Data History panel.
  - [x] Hover over a data entry to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full, untruncated content.
  - [x] Verify scrolling works for large outputs within the dialog.
  - [x] Test "Copy Item" and "Copy All" buttons within the dialog.
  - [x] Navigate to a custom node in the graph.
  - [x] Inspect a node's output (if applicable).
  - [x] Hover over the output data to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full content.

---
Linear Issue:
[OPEN-2593](https://linear.app/autogpt/issue/OPEN-2593/add-expandable-view-for-full-block-output-preview)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-15 14:19:40 +00:00
Nicholas Tindle
339ec733cb fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
- Added `user_timezone` parameter to `add_graph_execution_schedule()`
method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC (see the sketch below)
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
- Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display
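
A minimal sketch of the backend behaviour described above, assuming APScheduler's `CronTrigger`; `build_cron_trigger` is a hypothetical helper for illustration, not the actual `scheduler.py` code:

```python
import logging
from zoneinfo import ZoneInfo

from apscheduler.triggers.cron import CronTrigger

logger = logging.getLogger(__name__)


def build_cron_trigger(cron: str, user_timezone: str | None) -> CronTrigger:
    """Build a CronTrigger in the user's timezone, falling back to UTC."""
    if not user_timezone:
        # Mirrors the documented fallback: warn and default to UTC
        logger.warning("No timezone provided for schedule; defaulting to UTC")
        user_timezone = "UTC"
    return CronTrigger.from_crontab(cron, timezone=ZoneInfo(user_timezone))


# A daily 2:00 PM job that keeps firing at 2:00 PM local time across DST:
trigger = build_cron_trigger("0 14 * * *", "America/New_York")
```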

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
- Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 06:15:52 +00:00
Ubbe
6575b655f0 fix(frontend): improve agent runs page loading state (#10914)
## Changes 🏗️


https://github.com/user-attachments/assets/356e5364-45be-4f6e-bd1c-cc8e42bf294d

It also tidies up some of the logic around hooks. I also added an
`okData` helper to avoid having to type-cast (`as`) so much with the
generated types (given the `response` is a union depending on `status:
200 | 400 | 401` ...).

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run PR locally with the `new-agent-runs` flag enabled
  - [x] Check the nice loading state 

### For configuration changes:

None
2025-09-15 04:56:26 +00:00
Ubbe
7c2df24d7c fix(frontend): delete actions behind dialogs in agent runs view (#10915)
## Changes 🏗️

<img width="800" height="630" alt="Screenshot 2025-09-12 at 17 38 34"
src="https://github.com/user-attachments/assets/103d7e10-e924-4831-b0e7-b7df608a205f"
/>

<img width="800" height="524" alt="Screenshot 2025-09-12 at 17 38 30"
src="https://github.com/user-attachments/assets/aeec2ac7-4bea-4ec9-be0c-4491104733cb"
/>

<img width="800" height="750" alt="Screenshot 2025-09-12 at 17 38 26"
src="https://github.com/user-attachments/assets/e0b28097-8352-4431-ae4a-9dc3e3bcf9eb"
/>

- All the `Delete` actions on the new Agent Library Runs page should be
behind confirmation dialogs
- Re-arrange the file structure a bit 💆🏽 
- Make the buttons' min-width a bit more generous

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above 

#### For configuration changes:

None
2025-09-15 04:55:58 +00:00
Reinier van der Leer
23eafa178c fix(backend/db): Unbreak store materialized views refresh job (#10906)
- Resolves #10898

### Changes 🏗️

- Fix and re-create `refresh_store_materialized_views` DB function and
its pg_cron job
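
For reference, a minimal sketch of what re-creating such a function and its pg_cron job can look like, assuming `psycopg`; the connection string, view name, and schedule are placeholders, not the actual migration:

```python
import psycopg

DDL = """
CREATE OR REPLACE FUNCTION refresh_store_materialized_views() RETURNS void AS $$
BEGIN
  -- placeholder view name; the real migration targets the store views
  REFRESH MATERIALIZED VIEW CONCURRENTLY "StoreAgent";
END;
$$ LANGUAGE plpgsql;
"""

with psycopg.connect("postgresql://localhost/autogpt") as conn:
    conn.execute(DDL)
    # pg_cron: (re-)schedule the job; the 15-minute cadence is an assumption
    conn.execute(
        "SELECT cron.schedule('refresh_store_views', '*/15 * * * *',"
        " 'SELECT refresh_store_materialized_views()')"
    )
```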

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without issues (locally)
  - [x] Refresh function can be run without issues (locally)
2025-09-14 23:31:18 +00:00
Zamil Majdy
27fccdbf31 fix(backend/executor): Make graph execution status transitions atomic and enforce state machine (#10863)
## Summary
- Fixed race condition issues in `update_graph_execution_stats` function
- Implemented atomic status transitions using database-level constraints
- Added state machine enforcement to prevent invalid status transitions
- Eliminated code duplication and improved error handling

## Problem
The `update_graph_execution_stats` function had race condition
vulnerabilities where concurrent status updates could cause invalid
transitions like RUNNING → QUEUED. The function was not durable and
could result in executions moving backwards in their lifecycle, causing
confusion and potential system inconsistencies.

## Root Cause Analysis
1. **Race Conditions**: The function used a broad OR clause that allowed
updates from multiple source statuses without validating the specific
transition
2. **No Atomicity**: No atomic check to ensure the status hadn't changed
between read and write operations
3. **Missing State Machine**: No enforcement of valid state transitions
according to execution lifecycle rules

## Solution Implementation

### 1. Atomic Status Transitions
- Use database-level atomicity by including the current allowed source
statuses in the WHERE clause during updates
- This ensures only valid transitions can occur at the database level

### 2. State Machine Enforcement
Define valid transitions as a module constant
`VALID_STATUS_TRANSITIONS`:
- `INCOMPLETE` → `QUEUED`, `RUNNING`, `FAILED`, `TERMINATED`
- `QUEUED` → `RUNNING`, `FAILED`, `TERMINATED`  
- `RUNNING` → `COMPLETED`, `TERMINATED`, `FAILED`
- `TERMINATED` → `RUNNING` (for resuming halted execution)
- `COMPLETED` and `FAILED` are terminal states with no allowed
transitions
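
A minimal, self-contained sketch of this state machine, mirroring the transitions listed above; the enum and helper are illustrative, not the actual executor code:

```python
from enum import Enum


class ExecutionStatus(str, Enum):
    INCOMPLETE = "INCOMPLETE"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    TERMINATED = "TERMINATED"
    FAILED = "FAILED"


S = ExecutionStatus

# Source status -> statuses it may transition to (terminal states map to none)
VALID_STATUS_TRANSITIONS: dict[S, set[S]] = {
    S.INCOMPLETE: {S.QUEUED, S.RUNNING, S.FAILED, S.TERMINATED},
    S.QUEUED: {S.RUNNING, S.FAILED, S.TERMINATED},
    S.RUNNING: {S.COMPLETED, S.TERMINATED, S.FAILED},
    S.TERMINATED: {S.RUNNING},  # resuming halted execution
    S.COMPLETED: set(),
    S.FAILED: set(),
}


def allowed_sources(new_status: S) -> set[S]:
    """Source statuses from which `new_status` is reachable.

    Filtering the UPDATE's WHERE clause on this set is what makes the
    transition atomic at the database level.
    """
    return {src for src, targets in VALID_STATUS_TRANSITIONS.items()
            if new_status in targets}


# The invalid regression from the problem statement is rejected:
assert S.RUNNING not in allowed_sources(S.QUEUED)
```

With this filter in the actual update, a concurrent writer that has already advanced the row simply makes the UPDATE match zero rows, rather than regressing the status.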

### 3. Improved Error Handling
- Early validation with clear error messages for invalid parameters
- Graceful handling when transitions fail - return current state instead
of None
- Proper logging of invalid transition attempts

### 4. Code Quality Improvements
- Eliminated code duplication in fetch logic
- Added proper type hints and casting
- Made status transitions constant for better maintainability

## Benefits
- **Prevents Invalid Regressions**: No more RUNNING → QUEUED transitions
- **Atomic Operations**: Database-level consistency guarantees
- **Clear Error Messages**: Better debugging and monitoring
- **Maintainable Code**: Clean logic flow without duplication
- **Race Condition Safe**: Handles concurrent updates gracefully

## Test Plan
- [x] Function imports and basic structure validation
- [x] Code formatting and linting checks pass
- [x] Type checking passes for modified files
- [x] Pre-commit hooks validation

## Technical Details
The key insight is using the database query itself to enforce valid
transitions by filtering on allowed source statuses in the WHERE clause.
This makes the operation truly atomic and eliminates the race condition
window that existed in the previous implementation.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-14 23:31:02 +00:00
Reinier van der Leer
fb8fbc9d1f fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 19:42:16 +02:00
Reinier van der Leer
6a86e70fd6 fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

### Changes 🏗️

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 17:11:31 +00:00
Ubbe
6a2d7e0fb0 fix(frontend): handle avatar missing images better (#10903)
## Changes 🏗️

I think this helps `next/image` be more tolerant when optimising
images from certain origins, according to Claude.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Deploy preview to dev
  - [x] Verify avatar images load better 

### For configuration changes:

None
2025-09-12 02:56:24 +00:00
Nicholas Tindle
3d6ea3088e fix(backend): Add Airtable record normalization and upsert features (#10908)
Introduces normalization of Airtable record outputs to include all
fields with appropriate empty values and optional field metadata.
Enhances record creation to support finding existing records by
specified fields and updating them if found, enabling upsert-like
behavior. Updates block schemas and logic for list, get, and create
operations to support these new features.

### Changes 🏗️
- Normalizes the responses of the Airtable blocks so all fields are present with appropriate empty values
- Lets the create operation find existing records by specified fields and update them instead of creating duplicates (upsert-like behavior; see the sketch below)
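
A rough sketch of the upsert-like flow under stated assumptions: the Airtable REST endpoints (`filterByFormula`, record POST/PATCH) are real, but `upsert_record` and its naive formula quoting are illustrative, not the block's actual implementation:

```python
import requests

API = "https://api.airtable.com/v0"


def upsert_record(token: str, base_id: str, table: str,
                  fields: dict, match_fields: list[str]) -> dict:
    """Create a record, or update the first existing record whose
    `match_fields` values equal the new values."""
    headers = {"Authorization": f"Bearer {token}"}
    # Naive formula building: assumes string values with no quotes to escape
    formula = "AND(" + ",".join(
        f"{{{name}}}='{fields[name]}'" for name in match_fields) + ")"
    found = requests.get(
        f"{API}/{base_id}/{table}",
        headers=headers,
        params={"filterByFormula": formula, "maxRecords": 1},
        timeout=30,
    ).json().get("records", [])
    if found:  # update instead of creating a duplicate
        url = f"{API}/{base_id}/{table}/{found[0]['id']}"
        resp = requests.patch(url, headers=headers,
                              json={"fields": fields}, timeout=30)
    else:
        resp = requests.post(f"{API}/{base_id}/{table}", headers=headers,
                             json={"fields": fields}, timeout=30)
    return resp.json()
```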

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test that it doesn't break existing agents
  - [x] Test that the results for checkboxes are returned
2025-09-11 20:26:34 +00:00
Nicholas Tindle
64b4480b1e Merge branch 'master' into dev 2025-09-11 15:07:22 -05:00
Swifty
f490b01abb feat(frontend): Add Vercel Analytics and Speed Insights (#10904)
## Summary
- Added Vercel Analytics for tracking page views and user interactions
- Added Vercel Speed Insights for monitoring Web Vitals and performance
metrics
- Fixed incorrect placement of SpeedInsights component (was between html
and head tags)

## Changes
- Import Analytics and SpeedInsights components from Vercel packages
- Place both components correctly within the body tag
- Ensure proper HTML structure and Next.js best practices

## Test plan
- [x] Verify components are imported correctly
- [x] Confirm no HTML validation errors
- [x] Test that analytics work when deployed to Vercel
- [x] Verify Speed Insights metrics are being collected
2025-09-11 10:58:11 +00:00
Bentlybro
e56a4a135d Revert "fix(backend): Add Airtable record normalization + find/create base (#10891)"
This reverts commit 5da41e0753.
2025-09-10 14:36:40 +01:00
Ubbe
e70c970ab6 feat(frontend): new <Avatar /> component using next/image (#10897)
## Changes 🏗️

<img width="800" height="648" alt="Screenshot 2025-09-10 at 22 00 01"
src="https://github.com/user-attachments/assets/eb396d62-01f2-45e5-9150-4e01dfcb71d0"
/><br />

Adds a new `<Avatar />` component and uses that across the app. Is a
copy of
[shadcn/avatar](https://duckduckgo.com/?q=shadcn+avatar&t=brave&ia=web)
with the following modifications:
- renders images with
[`next/image`](https://duckduckgo.com/?q=next+image&t=brave&ia=web) by
default
- this ensures avatars rendered on the app are optimised and resized ✔️
- it will work as long as all the domains are white-listed in
`nextjs.config.mjs`
- allows bypassing it and using a normal `<img />` tag via an `as` prop if
needed
  - sometimes we might need to render images from a dynamic cdn 🤷🏽‍♂️   

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] ...

### For configuration changes:

None
2025-09-10 13:25:09 +00:00
Abhimanyu Yadav
3bbce71678 feat(builder): Block menu redesign - part 3 (#10864)
### Changes 🏗️

#### Block Menu Redesign - Part 3

This PR continues the block menu redesign effort, implementing the new
content sections and improving the overall user experience. The changes
focus on better organization, pagination, error handling, and visual
consistency.

#### Key Features Implemented:

**1. New Content Organization**
- **All Blocks Content**: Complete listing of all available blocks with
category-based organization and infinite scroll support
(`AllBlocksContent/`)
- **My Agents Content**: Display and manage user's own agents with
pagination (`MyAgentsContent/`)
- **Marketplace Agents Content**: Browse and add marketplace agents with
improved loading states (`MarketplaceAgentsContent/`)
- **Integration Blocks**: Dedicated view for integration-specific blocks
with better filtering (`IntegrationBlocks/`)
- **Suggestion Content**: Smart suggestions based on user context and
search history (`SuggestionContent/`)
- **Integrations Content**: Browse available integrations in a dedicated
view (`IntegrationsContent/`)

**2. Enhanced UI Components**
- **Paginated Lists**: New pagination components for blocks and
integrations (`PaginatedBlocksContent/`, `PaginatedIntegrationList/`)
- **Block List**: Reusable block list component with consistent styling
(`BlockList/`)
- **Improved Error Handling**: Comprehensive error states with retry
functionality across all content types
- **Loading States**: Skeleton loaders for better perceived performance

**3. Infrastructure Improvements**
- **Centralized Styles**: New `style.ts` file for consistent styling
across components
- **Better State Management**: Enhanced context provider with improved
menu state handling
- **Mock Flag Support**: Added feature flags for testing new block
features
- **Default State Enum**: Refactored to use enums for menu default
states

**4. Visual Assets**
- Added 50+ new integration icons/logos for better visual representation
- Updated existing integration images for consistency

**5. Code Quality**
- Improved error handling with proper error cards and retry mechanisms
- Consistent formatting and import organization
- Enhanced TypeScript types and interfaces
- Better separation of concerns with dedicated hooks for each content
type

#### Technical Details:
- **Files Changed**: 96 files
- **Additions**: 1,380 lines
- **Deletions**: 162 lines
- **New Components**: 10+ new React components with dedicated hooks
- **Integration Icons**: 50+ new PNG images for various integrations

#### Breaking Changes:
None - All changes are backwards compatible

---

### Test Plan 📋

- [x] Create a new agent and verify all blocks are accessible
- [x] Test infinite scroll in "All Blocks" view
- [x] Verify pagination works correctly in marketplace agents view
- [x] Test error states by simulating network failures
- [x] Check that all new integration icons display correctly
- [x] Test adding agents from marketplace view
- [x] Ensure skeleton loaders appear during data fetching

> Generated by claude
2025-09-10 11:58:07 +00:00
Abhimanyu Yadav
34fbf4377f fix(frontend): allow lazy loading of images (#10895)
The `next/image` component has built-in lazy loading enabled, but in some
components we were bypassing it with the `priority` flag, so this PR
reverts that.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Lazy loading is working perfectly locally.
2025-09-10 11:38:37 +00:00
dependabot[bot]
f682ef885a chore(frontend/deps-dev): Bump 16 dev dependencies to newer minor versions (#10837)
Bumps the development-dependencies group with 16 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
|
[@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests)
| `4.1.0` | `4.1.1` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.2`
| `1.55.0` |
|
[@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding)
| `9.1.2` | `9.1.4` |
|
[@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs)
| `9.1.2` | `9.1.4` |
|
[@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query)
| `5.83.1` | `5.86.0` |
|
[@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools)
| `5.84.2` | `5.86.0` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `24.2.1` | `24.3.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.3` |
`13.1.4` |
| [concurrently](https://github.com/open-cli-tools/concurrently) |
`9.2.0` | `9.2.1` |
|
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
| `15.4.6` | `15.5.2` |
|
[eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin)
| `9.1.2` | `9.1.4` |
| [msw](https://github.com/mswjs/msw) | `2.10.4` | `2.11.1` |
|
[storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core)
| `9.1.2` | `9.1.4` |


Updates `@chromatic-com/storybook` from 4.1.0 to 4.1.1
- 🐛 Bug fix: Broaden version-range for storybook peerDependency ([#389](https://redirect.github.com/chromaui/addon-visual-tests/pull/389), thanks [@ndelangen](https://github.com/ndelangen))
- Full diff: https://github.com/chromaui/addon-visual-tests/compare/v4.1.0...v4.1.1

Updates `@playwright/test` from 1.54.2 to 1.55.0
- New API: [`testStepInfo.titlePath`](https://playwright.dev/docs/api/class-teststepinfo#test-step-info-title-path) returns the full title path starting from the test file, including test and step titles
- Codegen: can now generate automatic `toBeVisible()` assertions for common UI interactions (enable in the Codegen settings UI)
- ⚠️ Breaking: dropped support for Chromium extension manifest v2
- Added support for Debian 13 "Trixie"
- Browser versions: Chromium 140.0.7339.16, Mozilla Firefox 141.0, WebKit 26.0 (also tested against Google Chrome 139 and Microsoft Edge 139)
- Full diff: https://github.com/microsoft/playwright/compare/v1.54.2...v1.55.0

Updates `@storybook/addon-a11y` from 9.1.2 to 9.1.4
9.1.4:
- Angular: Properly merge builder options and browserTarget options ([#32272](https://redirect.github.com/storybookjs/storybook/pull/32272), thanks [@kroeder](https://github.com/kroeder))
- Core: Optimize bundlesize by reusing internal/babel in mocking-utils ([#32350](https://redirect.github.com/storybookjs/storybook/pull/32350), thanks [@ndelangen](https://github.com/ndelangen))
- Svelte & Vue: Add framework-specific `docgen` option to disable docgen processing ([#32319](https://redirect.github.com/storybookjs/storybook/pull/32319), thanks [@copilot-swe-agent](https://github.com/copilot-swe-agent))
- Svelte: Support `@sveltejs/vite-plugin-svelte` v6 ([#32320](https://redirect.github.com/storybookjs/storybook/pull/32320), thanks [@JReinhold](https://github.com/JReinhold))

9.1.3:
- Docs: Move button in ArgsTable heading to fix screenreader announcements ([#32238](https://redirect.github.com/storybookjs/storybook/pull/32238), thanks [@Sidnioulz](https://github.com/Sidnioulz))
- Mock: Catch errors when transforming preview files ([#32216](https://redirect.github.com/storybookjs/storybook/pull/32216), thanks [@valentinpalkovic](https://github.com/valentinpalkovic))
- Next.js: Fix version mismatch error in Webpack ([#32306](https://redirect.github.com/storybookjs/storybook/pull/32306), thanks [@valentinpalkovic](https://github.com/valentinpalkovic))
- Telemetry: Disambiguate traffic coming from error/upgrade links ([#32287](https://redirect.github.com/storybookjs/storybook/pull/32287), thanks [@shilman](https://github.com/shilman))
- Telemetry: Disambiguate unattributed traffic from Onboarding ([#32286](https://redirect.github.com/storybookjs/storybook/pull/32286), thanks [@shilman](https://github.com/shilman))

Updates `@storybook/addon-docs` from 9.1.2 to 9.1.4
(release notes identical to `@storybook/addon-a11y` above)

Updates `@storybook/addon-links` from 9.1.2 to 9.1.4
(release notes identical to `@storybook/addon-a11y` above)

Updates `@storybook/addon-onboarding` from 9.1.2 to 9.1.4
(release notes identical to `@storybook/addon-a11y` above)

Updates `@storybook/nextjs` from 9.1.2 to 9.1.4
(release notes identical to `@storybook/addon-a11y` above; commits additionally include "Next.js: Avoid multiple webpack versions at runtime")

Updates `@tanstack/eslint-plugin-query` from 5.83.1 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/eslint-plugin-query</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li>See full diff in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/eslint-plugin-query">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/react-query-devtools` from 5.84.2 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/react-query-devtools</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li><a
href="0a35234ce4"><code>0a35234</code></a>
release: v5.85.9</li>
<li><a
href="aec19c93a5"><code>aec19c9</code></a>
release: v5.85.8</li>
<li><a
href="a978b34b1b"><code>a978b34</code></a>
release: v5.85.7</li>
<li><a
href="b998f68a57"><code>b998f68</code></a>
release: v5.85.6</li>
<li><a
href="0a0e01443d"><code>0a0e014</code></a>
release: v5.85.5</li>
<li><a
href="ef0a9d2c7e"><code>ef0a9d2</code></a>
release: v5.85.4</li>
<li><a
href="b6516bd25e"><code>b6516bd</code></a>
release: v5.85.3</li>
<li><a
href="e14f39c6ee"><code>e14f39c</code></a>
release: v5.85.2</li>
<li>Additional commits viewable in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/react-query-devtools">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/node` from 24.2.1 to 24.3.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />

Updates `chromatic` from 13.1.3 to 13.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/releases">chromatic's
releases</a>.</em></p>
<blockquote>
<h2>v13.1.4</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/blob/main/CHANGELOG.md">chromatic's
changelog</a>.</em></p>
<blockquote>
<h1>v13.1.4 (Fri Aug 29 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c85259331b"><code>c852593</code></a>
Bump version to: 13.1.4 [skip ci]</li>
<li><a
href="aae7c41b86"><code>aae7c41</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="062d1fadee"><code>062d1fa</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1202">#1202</a>
from chromaui/feat_fix_links</li>
<li><a
href="15f72ae8ea"><code>15f72ae</code></a>
Removes references to the viewlayer used by noCSFGlobs</li>
<li><a
href="1a0e200bce"><code>1a0e200</code></a>
Remove viewlayer from nocsf file</li>
<li><a
href="f4c52ab0af"><code>f4c52ab</code></a>
Attempt to fix linting errors</li>
<li><a
href="0db944b07c"><code>0db944b</code></a>
Feat:Fix outdated and incorrect links in the CLI</li>
<li><a
href="8879737ab0"><code>8879737</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1201">#1201</a>
from chromaui/jmhobbs/cap-3287-chromatic-cicli-issue...</li>
<li><a
href="d3c25a21a1"><code>d3c25a2</code></a>
Improve links during onboarding with system errors</li>
<li><a
href="cce1b29cdf"><code>cce1b29</code></a>
Show setup URL on build errors when onboarding.</li>
<li>See full diff in <a
href="https://github.com/chromaui/chromatic-cli/compare/v13.1.3...v13.1.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `concurrently` from 9.2.0 to 9.2.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/open-cli-tools/concurrently/releases">concurrently's
releases</a>.</em></p>
<blockquote>
<h2>v9.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>chore: update eslint-plugin-simple-import-sort from v10 to v12 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/551">open-cli-tools/concurrently#551</a></li>
<li>chore: update eslint-config-prettier from v9 to v10 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/552">open-cli-tools/concurrently#552</a></li>
<li>Remove lodash by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/555">open-cli-tools/concurrently#555</a></li>
<li>chore: update coveralls-next from v4 to v5 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/557">open-cli-tools/concurrently#557</a></li>
<li>Replace jest with vitest by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/554">open-cli-tools/concurrently#554</a></li>
<li>Upgrade to pnpm v10 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/558">open-cli-tools/concurrently#558</a></li>
<li>chore: remove unused eslint-plugin-jest by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/559">open-cli-tools/concurrently#559</a></li>
<li>Minor dependency updates by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/560">open-cli-tools/concurrently#560</a></li>
<li>Migrate to ESLint v9 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/561">open-cli-tools/concurrently#561</a></li>
<li>Update shell-quote to 1.8.3 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/562">open-cli-tools/concurrently#562</a></li>
<li>Full coverage by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/563">open-cli-tools/concurrently#563</a></li>
<li>Update GH actions/workflows, enable NPM provenance by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/564">open-cli-tools/concurrently#564</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="414cd016c6"><code>414cd01</code></a>
9.2.1</li>
<li><a
href="0dfedb028c"><code>0dfedb0</code></a>
Update GH actions/workflows, enable npm provenance (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/564">#564</a>)</li>
<li><a
href="ee81511999"><code>ee81511</code></a>
Remove obsolete tsdk config</li>
<li><a
href="09d3d7b11f"><code>09d3d7b</code></a>
Full coverage (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/563">#563</a>)</li>
<li><a
href="8cfc6a6cb4"><code>8cfc6a6</code></a>
Update shell-quote to 1.8.3 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/562">#562</a>)</li>
<li><a
href="4c403f8b01"><code>4c403f8</code></a>
Migrate to ESLint v9 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/561">#561</a>)</li>
<li><a
href="8bfcaf7828"><code>8bfcaf7</code></a>
Minor dependency updates (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/560">#560</a>)</li>
<li><a
href="389fec4830"><code>389fec4</code></a>
Enable watch mode &amp; coverage for unit tests by default</li>
<li><a
href="7993ce6817"><code>7993ce6</code></a>
chore: remove unused eslint-plugin-jest (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/559">#559</a>)</li>
<li><a
href="58300f45eb"><code>58300f4</code></a>
Remove obsolete .npmrc file</li>
<li>Additional commits viewable in <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-config-next` from 15.4.6 to 15.5.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: disable unknownatrules lint rule entirely (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83059">#83059</a>)</li>
<li>revert: add ?dpl to fonts in /_next/static/media (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83062">#83062</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a> and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: aliased navigations should apply scroll handling (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82900">#82900</a>)</li>
<li>Turbopack: fix invalid NFT entry with file behind symlink (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82887">#82887</a>)</li>
<li>fix: typesafe linking to route handlers and pages API routes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82858">#82858</a>)</li>
<li>fix: change &quot;noUnknownAtRules&quot; to &quot;warn&quot; for
Biome (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82974">#82974</a>)</li>
<li>fix: add path normalization to getRelativePath for Windows (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82918">#82918</a>)</li>
<li>feat: add typesafety with config.typedRoutes to redirect() and
permanentRedirect() (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82860">#82860</a>)</li>
<li>fix: avoid importing types that will be unused (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82856">#82856</a>)</li>
<li>fix: update the config.api.responseLimit type (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82852">#82852</a>)</li>
<li>fix: update validation return types (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82854">#82854</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a>, <a
href="https://github.com/mischnic"><code>@​mischnic</code></a>, and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1-canary.26</h2>
<h3>Core Changes</h3>
<ul>
<li>Upgrade React from <code>2805f0ed-20250903</code> to
<code>3302d1f7-20250903</code>: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83400">#83400</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>Fixed broken link in backend-for-frontend.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83310">#83310</a></li>
<li>Update next-response.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83302">#83302</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/geraldochristiano"><code>@​geraldochristiano</code></a>
and <a href="https://github.com/blntgvn42"><code>@​blntgvn42</code></a>
for helping!</p>
<h2>v15.5.1-canary.25</h2>
<h3>Core Changes</h3>
<ul>
<li>fix(ppr): fix prerender info matching for rewritten paths: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83359">#83359</a></li>
<li>fix: shrink bottom stack container height: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83377">#83377</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="497ec6aa08"><code>497ec6a</code></a>
v15.5.2</li>
<li><a
href="cc68ced552"><code>cc68ced</code></a>
v15.5.1</li>
<li><a
href="7e08c8223d"><code>7e08c82</code></a>
v15.5.0</li>
<li>...

_Description has been truncated_

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 10:21:53 +00:00
Reinier van der Leer
2ffd249aac fix(backend/external-api): Improve security & reliability of API key storage (#10796)
Our API key generation, storage, and verification system has a couple of
issues that need to be ironed out before full-scale deployment.

### Changes 🏗️

- Move from unsalted SHA256 to salted Scrypt hashing for API keys
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
- [refactor(backend): Clean up API key DB & API code
(#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
  - Rename models and properties in `backend.data.api_key` for clarity
- Eliminate redundant/custom/boilerplate error handling/wrapping in API
key endpoint call stack
- Remove redundant/inaccurate `response_model` declarations from API key
endpoints

Dependencies for `autogpt_libs`:
- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency
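
For illustration, a minimal sketch of the salted Scrypt scheme using the newly added `cryptography` dependency; the cost parameters and function names below are assumptions, not the PR's exact code:

```python
import os

from cryptography.exceptions import InvalidKey
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def _kdf(salt: bytes) -> Scrypt:
    # Scrypt objects are single-use, so build a fresh one per operation
    return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)


def hash_api_key(api_key: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a unique salt per key defeats precomputed tables
    return salt, _kdf(salt).derive(api_key.encode())


def verify_api_key(api_key: str, salt: bytes, stored: bytes) -> bool:
    try:
        _kdf(salt).verify(api_key.encode(), stored)
        return True
    except InvalidKey:
        return False
```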

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Performing these actions through the UI (still) works:
    - [x] Creating an API key
    - [x] Listing owned API keys
    - [x] Deleting an owned API key
  - [x] Newly created API key can be used in Swagger UI
  - [x] Existing API key can be used in Swagger UI
  - [x] Existing API key is re-encrypted with salt on use
2025-09-10 09:34:49 +00:00
Ubbe
986245ec43 feat(frontend): run agent page improvements (#10879)
## Changes 🏗️

- Add all the cron scheduling options ( _yearly, monthly, weekly,
custom, etc..._ ) using the new Design System components
- Add missing agent/run actions: export agent + delete agent

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `new-agent-runs` enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-10 07:44:51 +00:00
Bentlybro
f89717153f Merge branch 'master' into dev 2025-09-10 08:52:35 +01:00
Nicholas Tindle
5da41e0753 fix(backend): Add Airtable record normalization + find/create base (#10891)
## Summary
Fixes a critical issue with the Airtable API where empty/false fields are
completely omitted from responses, causing inconsistent data structures.
Also improves the create base block to prevent duplicate bases.

<!-- Clearly explain the need for these changes: -->
The Airtable API has a problematic behavior where it omits fields with
"empty" values from responses:
- Unchecked checkboxes are missing entirely instead of returning `false`
- Empty number fields are missing instead of returning `0`
- This makes it impossible to distinguish between "field doesn't exist"
and "field is false/empty"
- Users were getting inconsistent record structures that broke their
workflows

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

#### 1. **Added Record Normalization**
(`backend/blocks/airtable/_api.py`)
- New `get_table_schema()` function to fetch table field definitions
- New `get_empty_value_for_field()` to determine appropriate empty
values per field type
- New `normalize_records()` to fill in missing fields with proper
defaults:
  - Checkbox → `false`
  - Number/Currency/Percent/Duration/Rating → `0`
  - Text fields → `""`
  - Multiple selects/attachments/collaborators → `[]`
  - Dates/Single selects → `null`
- New `get_base_tables()` to fetch tables for a base
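
For illustration, a minimal sketch of what this normalization does; the field-type names and signatures below are assumptions, not the exact code in `backend/blocks/airtable/_api.py`:

```python
# Assumed mapping from Airtable field type to an appropriate empty value
EMPTY_VALUES = {
    "checkbox": False,
    "number": 0, "currency": 0, "percent": 0, "duration": 0, "rating": 0,
    "singleLineText": "", "multilineText": "",
    "multipleSelects": [], "multipleAttachments": [], "multipleCollaborators": [],
    "date": None, "singleSelect": None,
}


def normalize_records(records: list[dict], table_schema: list[dict]) -> list[dict]:
    """Fill in fields Airtable omitted so every record has the same shape."""
    for record in records:
        fields = record.setdefault("fields", {})
        for fld in table_schema:
            if fld["name"] not in fields:
                empty = EMPTY_VALUES.get(fld["type"])
                # copy list defaults so records don't share one mutable object
                fields[fld["name"]] = list(empty) if isinstance(empty, list) else empty
    return records
```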

#### 2. **Enhanced List and Get Record Blocks**
(`backend/blocks/airtable/records.py`)
- Added `normalize_output` parameter (defaults to `true`) - ensures all
fields are present
- Added `include_field_metadata` parameter to optionally include field
type information
- When normalization is enabled, fetches schema once and normalizes all
records
- Can be disabled by setting `normalize_output=false` for raw Airtable
response

#### 3. **Simplified Create Records Block**
- Added `skip_normalization` parameter (default `false`) - normalized
output by default
- Records now always include all fields with proper empty values

#### 4. **Enhanced Create Base Block**
(`backend/blocks/airtable/bases.py`)
- Added `find_existing` parameter (defaults to `true`) to prevent
duplicate bases
- When finding an existing base, now fetches and returns table
information
- Added `was_created` output field to indicate whether base was created
or found

### Testing 📋

- All Airtable block tests pass
- Tested normalization with records containing missing checkbox fields
- Verified all field types get appropriate empty values
- Tested create base find-or-create functionality
- Ran `poetry run format` and `poetry run lint`

### Migration Guide

This update makes the blocks behave more predictably:
- **List/Get Records**: All fields are now included by default. Set
`normalize_output: false` if you need the raw Airtable response
- **Create Records**: Simply creates records, no more upsert confusion
- **Create Base**: Prevents duplicate bases by default. Set
`find_existing: false` to force creation

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes were required - all changes are code-only.
2025-09-10 04:57:26 +00:00
Nicholas Tindle
cddeb185a8 feat(blocks): Add Gmail Draft Reply Block (#10882)
### Changes 🏗️

This PR adds a new `GmailDraftReplyBlock` that enables creating draft
replies to existing Gmail email threads. This block complements the
existing Gmail blocks by providing specialized functionality for
replying within email conversations.

**New Block: GmailDraftReplyBlock**
- **Purpose**: Creates draft replies to Gmail threads with intelligent
content type detection
- **Key Features**:
  - Automatic HTML detection: draft replies containing HTML tags are formatted as text/html
  - No hard-wrap for plain text: plain-text draft replies preserve natural line flow (prevents the 78-character hard-wrap issue)
  - Manual content type override: use the content_type parameter to force a specific format ("auto", "plain", or "html")
  - Reply-all functionality: option to reply to all original recipients
  - Thread preservation: maintains proper email threading with In-Reply-To and References headers
  - Full Unicode/emoji support with UTF-8 encoding
  - File attachment support

**Implementation Details**:
- Retrieves parent message metadata to build proper reply context
- Automatically determines recipients based on reply mode (reply vs
reply-all)
- Adds "Re:" prefix to subject if not already present
- Maintains email thread continuity with proper headers
- Supports OAuth scopes: `gmail.modify` and `gmail.readonly`
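
For illustration, a standalone sketch of the reply construction described above; the PR's `_make_mime_text` helper handles the real formatting, and the dict keys here are assumptions:

```python
import re
from email.mime.text import MIMEText


def build_reply_mime(parent: dict, body: str, content_type: str = "auto") -> MIMEText:
    if content_type == "auto":
        # crude HTML sniff: any tag-like token switches us to text/html
        content_type = "html" if re.search(r"<[a-zA-Z][^>]*>", body) else "plain"
    msg = MIMEText(body, content_type, "utf-8")  # UTF-8 keeps emoji intact
    subject = parent.get("subject", "")
    msg["Subject"] = subject if subject.lower().startswith("re:") else f"Re: {subject}"
    # Threading headers keep the draft inside the original conversation
    msg["In-Reply-To"] = parent["message_id_header"]
    msg["References"] = (parent.get("references", "") + " " + parent["message_id_header"]).strip()
    msg["To"] = ", ".join(parent.get("reply_recipients", []))
    return msg
```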

**Inputs**:
- `threadId`: Thread ID to reply in
- `parentMessageId`: ID of the message being replied to  
- `to`, `cc`, `bcc`: Optional recipient lists
- `replyAll`: Boolean to reply to all original recipients
- `subject`: Optional custom subject
- `body`: Email body (plain text or HTML)
- `content_type`: Optional content type override
- `attachments`: Optional file attachments

**Outputs**:
- `draftId`: Created draft ID
- `messageId`: Draft message ID
- `threadId`: Thread ID
- `status`: Draft creation status
- `error`: Error message if any

Closes #10846

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  - [x] Block includes test input/output configuration
  - [x] Mock test handler implemented for unit testing
  - [x] Proper error handling included
  - [x] OAuth authentication properly configured
- [x] Content type detection logic tested (auto-detects HTML vs plain
text)
  - [x] Threading headers properly maintained for email conversations

<details>
  <summary>Additional Testing Notes</summary>
  
  - The block uses the existing Gmail authentication infrastructure
  - Test credentials and mock outputs are configured for CI/CD
- The `_make_mime_text` helper function ensures proper content
formatting
  - Reply-all logic properly deduplicates recipients
</details>

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes required - uses existing Google OAuth
configuration.

<details>
  <summary>Configuration Compatibility</summary>

  - Uses existing `GOOGLE_OAUTH_IS_CONFIGURED` flag
  - Leverages existing Google OAuth credentials system
  - No new environment variables or services required
</details>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2025-09-10 04:24:15 +00:00
Reinier van der Leer
08a3fd6d26 Revert "fix(backend): Improve GitHub PR URL validation and API URL generation" (#10890)
Reverts Significant-Gravitas/AutoGPT#10317

The changes omit the hostname from the "prepared" URL, breaking some
GitHub blocks.
2025-09-09 21:47:51 +00:00
Nicholas Tindle
39b30bc82c ci: Add 'Bash(gh pr edit:*)' to allowed tools for claude (#10885) 2025-09-09 12:36:46 -05:00
Reinier van der Leer
2df0e2b750 fix(backend/api): Fix & add tests for APIKeyAuthenticator (#10881)
- Resolves #10875

### Changes 🏗️

- Fix use of `super().__call__` in `APIKeyAuthenticator.__call__`
- Fix non-ASCII API key validation
- Add tests for `APIKeyAuthenticator`
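
A minimal sketch of the non-ASCII guard implied by the fix; the authenticator's real structure is an assumption here:

```python
from fastapi import HTTPException


def check_api_key_format(key: str) -> str:
    # Reject rather than crash when a header smuggles in non-ASCII bytes
    if not key.isascii():
        raise HTTPException(status_code=401, detail="Invalid API key")
    return key
```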

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test implementations have been verified manually
  - [x] All the new tests pass
2025-09-09 16:09:51 +00:00
Ubbe
925f249ce1 feat(frontend): use new output renderers on new run page (#10876)
## Changes 🏗️

Integrating the great work @ntindle did on the rich agent output
renderers into the new Agent run page in the library 💜

- Implemented enhanced output rendering in `<RunDetails />` using the
shared output-renderers
- Added `<RunOutputs />` sub-component at
`RunDetails/components/RunOutputs.tsx` that:
- [x] builds items from `run.outputs`, extracts metadata, picks a
renderer via `globalRegistry`, and falls back to `TextRenderer`
- [x] renders `<OutputActions />` for copy/download and a list of
`<OutputItems />`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run an agent on the view which outputs rich content
- [x] See the output use the new renderers; for example, code is
highlighted

### For configuration changes:

None
2025-09-09 05:44:37 +00:00
Ubbe
e8cf3edbf4 feat(frontend): add timezone to new library agent page (#10874)
## Changes 🏗️

<img width="800" height="756" alt="Screenshot 2025-09-09 at 14 03 24"
src="https://github.com/user-attachments/assets/65f3e3a8-1ce0-491c-824a-601a494d3a36"
/>

<img width="600" height="493" alt="Screenshot 2025-09-09 at 14 03 28"
src="https://github.com/user-attachments/assets/457b37a3-6b3b-49b8-912c-c72cf06e8e58"
/>

Following the nice changes @ntindle did regarding timezones, bring them
into the new page:
- display the timezone when scheduling an agent on the new modal
- display the timezone for a schedule on the new schedule details view

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `agent-new-runs` flag ON
  - [x] Open an agent on the new page
- [x] On the new modal, create a schedule; it displays the timezone alert
  - [x] Once created, on the schedule view, it displays the timezone   

### For configuration changes:

None
2025-09-09 05:37:48 +00:00
Nicholas Tindle
dc03ea718c fix(backend): Report process errors to Sentry before cleanup (#10873)
Adds error reporting to Sentry for exceptions (excluding
KeyboardInterrupt and SystemExit) before executing process cleanup.
Silently ignores failures if Sentry is unavailable.
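
A minimal sketch of this behavior, assuming the standard `sentry_sdk` API; the services' actual cleanup hook is not shown here:

```python
import sentry_sdk


def report_then_cleanup(exc: BaseException, cleanup) -> None:
    if not isinstance(exc, (KeyboardInterrupt, SystemExit)):
        try:
            sentry_sdk.capture_exception(exc)  # best-effort reporting
        except Exception:
            pass  # silently ignore if Sentry is unavailable
    cleanup()  # cleanup always runs, reported or not
```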

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- Adds cleanup for Sentry
- Adds the ability to disable Sentry
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test all services with manual exception raising
  - [x] Remove those exceptions
  - [x] Make sure they show up in sentry
2025-09-09 04:56:51 +00:00
Nicholas Tindle
dbee580d80 build(frontend): Increase build process memory limit to 4 GiB (#10861)
Increase memory limit of frontend build process to 4GiB to fix
out-of-memory build failures
2025-09-08 05:04:17 +00:00
Nicholas Tindle
0325ec0a2c fix(frontend): Fix environment variable handling in Docker builds for dev/prod deployments (#10859)
<!-- Clearly explain the need for these changes: -->
Sentry was not being enabled in dev/prod deployments because environment
variables were being incorrectly overwritten during the Docker build
process.

### Changes 🏗️

- Fixed Dockerfile environment variable merging logic to prevent
`.env.default` from overwriting `.env.production` values
- Added `NODE_ENV=production` to build stage to ensure Next.js looks for
production env files
- Updated env file merging to only run when not in CI/CD (when
`.env.production` doesn't exist)
- When `.env.production` exists (CI/CD), now merges defaults with
production values properly

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] Verify local Docker builds still work with `docker compose up`
- [ ] Verify dev deployment has `NEXT_PUBLIC_APP_ENV=dev` in built
JavaScript
- [ ] Verify prod deployment has `NEXT_PUBLIC_APP_ENV=prod` in built
JavaScript
- [ ] Verify Sentry is enabled in dev/prod deployments
(`isProdOrDev=true`)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Technical Details

**Root Cause:**
1. CI/CD workflow creates `.env.production` with correct values (e.g.,
`NEXT_PUBLIC_APP_ENV=dev`)
2. Dockerfile's env merging logic always created `.env` from
`.env.default`
3. Next.js loads `.env.production` first, then `.env` second
4. Since `.env` is loaded after `.env.production`, it overwrites the
values
5. `.env.default` has `NEXT_PUBLIC_APP_ENV=local`, causing `getAppEnv()`
to return "local" instead of "dev"/"prod"
6. This made `isProdOrDev` evaluate to `false`, disabling Sentry

**Solution:**
The Dockerfile now checks if `.env.production` exists:
- If yes (CI/CD): Merges `.env.default` + `.env.production` →
`.env.production` (production values take precedence)
- If no (local): Merges `.env.default` + `.env` → `.env` (user values
take precedence)

This ensures production deployments get the correct environment
variables while preserving local development workflow.
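
For illustration, the precedence rule reduces to a dict merge where later values win; a minimal Python sketch of the idea (the real logic lives in the Dockerfile):

```python
def merge_env(defaults: dict[str, str], overrides: dict[str, str]) -> dict[str, str]:
    # the later dict wins, so override values take precedence over defaults
    return {**defaults, **overrides}


merged = merge_env(
    {"NEXT_PUBLIC_APP_ENV": "local"},  # from .env.default
    {"NEXT_PUBLIC_APP_ENV": "dev"},    # from .env.production (CI/CD)
)
assert merged["NEXT_PUBLIC_APP_ENV"] == "dev"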

🤖 Description generated + Investigation assisted with [Claude
Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-06 17:05:45 +00:00
Ubbe
3952a1a226 feat(frontend): smol improvements on new run view (#10854)
## Changes 🏗️

<img width="800" height="790" alt="Screenshot 2025-09-05 at 17 22 36"
src="https://github.com/user-attachments/assets/8b22424c-1968-4c4f-9eed-3d5d5185751d"
/>

- Make a nicer empty state and display it when there are no
runs/schedules
- Rename search param to `executionId` to mirror what was on the old
page
- Reduce polling while an execution is happening to 1.5s (3s is maybe too
slow...)
- Make sure the run details page also updates when a run is happening

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Tested the above 

### For configuration changes:

None

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-09-06 09:02:25 +00:00
Krzysztof Czerwinski
cfc975d39b feat(backend): Type for API block data response (#10763)
Moving to auto-generated frontend types caused returned blocks data to
no longer have proper typing.

### Changes 🏗️

- Add `BlockInfo` model and `get_info` function that returns it to the
`Block` class, including costs
- Move `BlockCost` and `BlockCostType` to `block.py` to prevent circular
imports
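
A hypothetical sketch of the shape such a model might take; every field name here beyond `costs` is an assumption:

```python
from pydantic import BaseModel


class BlockCost(BaseModel):
    cost_amount: int
    cost_type: str  # stand-in for the real BlockCostType enum


class BlockInfo(BaseModel):
    id: str
    name: str
    costs: list[BlockCost] = []


class Block:
    def __init__(self, id: str, name: str, costs: list[BlockCost]):
        self.id, self.name, self.costs = id, name, costs

    def get_info(self) -> BlockInfo:
        # one serializable payload, costs included, for the generated frontend types
        return BlockInfo(id=self.id, name=self.name, costs=self.costs)
```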

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Endpoints using the new type work correctly

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-06 04:21:48 +00:00
Zamil Majdy
46e0f6cc45 feat(platform): Add recommended run schedule for agent execution (#10827)
## Summary

<img width="1000" alt="Screenshot 2025-09-02 at 9 46 49 PM"
src="https://github.com/user-attachments/assets/d78100c7-7974-4d37-a788-757764d8b6b7"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 24 PM"
src="https://github.com/user-attachments/assets/cd092963-8e26-4198-b65a-4416b2307a50"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 22 30 PM"
src="https://github.com/user-attachments/assets/e16b3bdb-c48c-4dec-9281-b2a35b3e21d0"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 38 PM"
src="https://github.com/user-attachments/assets/11d74a39-f4b4-4fce-8d30-0e6a925f3a9b"
/>

• Added recommended schedule cron expression as an optional input
throughout the platform
• Implemented complete data flow from builder → store submission → agent
library → run page
• Fixed UI layout issues including button text overflow and ensured
proper component reusability

## Changes

### Backend
- Added `recommended_schedule_cron` field to `AgentGraph` schema and
database migration
- Updated API models (`LibraryAgent`, `MyAgent`,
`StoreSubmissionRequest`) to include the new field
- Enhanced store submission approval flow to persist recommended
schedule to database

### Frontend
- Added recommended schedule input to builder page (SaveControl
component) with overflow-safe styling
- Updated store submission modal (PublishAgentModal) with schedule
configuration
- Enhanced agent run page with schedule tip display and pre-filled
schedule dialog
- Refactored `CronSchedulerDialog` with discriminated union types for
better reusability
- Fixed layout issues including button text truncation and popover width
constraints
- Implemented robust cron expression parsing with 100% reversibility
between UI and cron format
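
For illustration, a minimal sketch of what reversible parsing means here, assuming standard 5-field cron syntax; the platform's actual parser is not shown in this PR:

```python
CRON_FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")


def parse_cron(expr: str) -> dict[str, str]:
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(parts)}")
    return dict(zip(CRON_FIELDS, parts))


def format_cron(fields: dict[str, str]) -> str:
    return " ".join(fields[name] for name in CRON_FIELDS)


# 100% reversibility: parsed state round-trips back to the identical expression
assert format_cron(parse_cron("0 9 * * 1")) == "0 9 * * 1"
```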

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-06 03:57:03 +02:00
Nicholas Tindle
c03af5c196 feat(frontend): allow sentry disable in dev (#10858)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Vercel is logging things it shouldn't

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manually verified in vercel
2025-09-05 20:29:39 +00:00
Nicholas Tindle
00cbfb8f80 feat(frontend): re-enable sentry in dev (#10857)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- We want sentry to actually work so we can do testing
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] we're just re-enabling; it works in prod
2025-09-05 19:38:10 +00:00
Ubbe
3beafae955 feat(frontend): new agent library run page (#10835)
## Changes 🏗️

This is the new **Agent Library Run** page. Sorry in advance for the
massive PR 🙏🏽. I got carried away and it has been tricky to split it
(_maybe I abused the agent too much_ 🤔)

<img width="800" height="1085" alt="Screenshot 2025-09-04 at 13 58 33"
src="https://github.com/user-attachments/assets/b709edb9-d2b5-48ad-a04d-dddf10c89af3"
/>

<img width="800" height="338" alt="Screenshot 2025-09-04 at 13 54 51"
src="https://github.com/user-attachments/assets/efa28be2-d2dd-477f-af13-33ddd1d639dd"
/>

<img width="800" height="598" alt="Screenshot 2025-09-04 at 13 54 18"
src="https://github.com/user-attachments/assets/806ab620-3492-4c5b-b4e2-f17b89756dd8"
/>

- Schedules are now on the sidebar tabbed along with runs
- The whole UI has been updated to match the new designs and design
system
- There is no more "run draft" view as the modal is in charge of new
runs now 💪🏽
- The page is responsive and mobile friendly 📱 


https://github.com/user-attachments/assets/0e483062-0e50-4fa6-aaad-a1f6766df931

### Safety

I understand this is a lot of changes. However is all behind a feature
flag, `new-agent-runs`, when OFF it will display the old library agent
view. The old library agent view can still be accessed under:
`/library/legacy/{id}` for reference 👍🏽

### Testing

I haven't added any tests for now... 💆🏽 I want to get this enabled on dev so
we can start running our agents there through the new page and modal and
start catching edge-cases.

Tests will come later in the form of E2E for the happy paths, and
probably I will introduce [Vitest](https://vitest.dev/) + [Testing
Library](https://testing-library.com/) for the finer details...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the above

### For configuration changes:

None, the feature flag is already configured 🙏🏽

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-05 06:39:54 +00:00
seer-by-sentry[bot]
9cd186a2f3 fix(backend): Improve GitHub PR URL validation and API URL generation (#10317)
### Changes 🏗️

Fixes
[AUTOGPT-SERVER-4EN](https://sentry.io/organizations/significant-gravitas/issues/6731949478/).
The issue was that: Issue URL passed to PR file reader, regex failed,
leading to issue API call, returning object iterated as keys, causing
AttributeError.

- Refactor `prepare_pr_api_url` to improve validation of GitHub PR URLs.
- Update regex to specifically target github.com URLs.
- Raise ValueError with a descriptive message for invalid URLs.
- Correctly construct the API URL using the extracted repository path.
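
A hedged sketch of the intended validation; the exact pattern in the PR is not shown here, so this regex and the URL construction are assumptions:

```python
import re

PR_URL_RE = re.compile(
    r"^https://github\.com/(?P<repo>[^/]+/[^/]+)/pull/(?P<number>\d+)"
)


def prepare_pr_api_url(pr_url: str) -> str:
    match = PR_URL_RE.match(pr_url)
    if match is None:
        raise ValueError(f"Not a valid GitHub PR URL: {pr_url!r}")
    # build the API URL from the extracted repository path and PR number
    return f"https://api.github.com/repos/{match['repo']}/pulls/{match['number']}"
```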

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 265077

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6731949478/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test plan:
- [x] Provide an invalid GitHub PR URL and verify that a ValueError is
raised with a descriptive message.
- [x] Provide a valid GitHub PR URL and verify that the API URL is
correctly constructed.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
2025-09-05 06:35:45 +00:00
Nicholas Tindle
dcf26bd3d4 ci: update Claude code action (#10851)
<!-- Clearly explain the need for these changes: -->
Claude code now uses prompt not system prompt
### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Swaps to prompt from custom system prompt
### Checklist 📋

#### For code changes:
N/A
2025-09-05 06:33:09 +00:00
Bently
b97f097c9d feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images
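
For illustration, a call to this model through the public `replicate` Python client might look like the following; the input key names are assumptions, not the block's actual schema:

```python
import replicate

output = replicate.run(
    "google/gemini-2.5-flash-image",
    input={
        "prompt": "Blend the style of the second image into the first",
        "image_input": ["https://example.com/a.png", "https://example.com/b.png"],
        "output_format": "png",  # the block defaults to png
    },
)
print(output)  # URL of the generated image
```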

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-05 06:31:12 +01:00
Reinier van der Leer
ce24975a9d fix(backend): Unbreak get_webhook query (#10850)
- Resolves #10849

### Changes 🏗️

- Use `AGENT_PRESET_INCLUDE` in `INTEGRATION_WEBHOOK_INCLUDE` so the
`AgentPreset.from_db(..)` doesn't break

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Webhook ingress works
2025-09-04 19:00:05 +00:00
Bently
75c90e49ce feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-04 18:51:32 +00:00
Zamil Majdy
2e38f132e7 feat(backend/store): implement sub-agent approval support (#10842)
## Summary
Implement comprehensive sub-agent approval flow following the business
requirements from the flow diagram.

<img width="1956" height="1448" alt="image"
src="https://github.com/user-attachments/assets/8de35e5b-9d3e-4dc2-bff0-47b49dbebc83"
/>


### Key Features
- **Auto-approve sub-agents** when main agent is approved
- **Handle all scenarios**: new listings, existing versions, missing versions
- **Transaction safety** with atomic operations via database transactions
- **Parallel processing** using asyncio.gather for performance optimization
- **Hidden from store** with isAvailable=false for all sub-agents

### Implementation Details
- **Replaced** `_get_missing_sub_store_listing` with comprehensive
`_handle_sub_agent_approvals`
- **Added** `_approve_sub_agent` function with early returns for clean,
readable code flow
- **Used** `transaction()` context manager to ensure data consistency
across operations
- **Process sub-agents in parallel** while maintaining transaction
integrity

### Business Logic Flow
1. **Check if sub-agent is already listed** in store
2. **If not listed**: create new store listing with `isAvailable=false`
3. **If listed but not approved**: approve the correct version  
4. **If correct version not listed**: create store listing version and
approve it
5. **If already approved**: no action needed (early return)

All sub-agents remain **hidden from public store** while being
internally approved for system use.
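
A hedged, self-contained sketch of the early-return flow above; the real implementation uses Prisma models and a `transaction()` context manager, both elided here:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class ListingVersion:
    version: int
    approved: bool = False


@dataclass
class Listing:
    agent_id: str
    is_available: bool = False  # sub-agents stay hidden from the public store
    versions: dict[int, ListingVersion] = field(default_factory=dict)


async def approve_sub_agent(agent_id: str, version: int, listings: dict[str, Listing]) -> None:
    listing = listings.setdefault(agent_id, Listing(agent_id))          # steps 1-2: list if missing
    lv = listing.versions.setdefault(version, ListingVersion(version))  # step 4: add missing version
    if lv.approved:
        return  # step 5: already approved, nothing to do
    lv.approved = True  # step 3: approve the correct version


asyncio.run(approve_sub_agent("sub-agent-1", 2, {}))
```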

## Files Changed
- `backend/server/v2/store/db.py` - Core implementation of sub-agent
approval logic

## Test Plan
- [ ] Verify main agent approval triggers sub-agent approvals
- [ ] Test all sub-agent scenarios: new, existing unapproved, existing
approved
- [ ] Confirm sub-agents remain hidden (`isAvailable=false`)
- [ ] Validate transaction rollback on failures
- [ ] Check parallel processing works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 17:59:37 +00:00
Nicholas Tindle
5be6987d58 ci: Add model argument to Claude workflow (#10843) 2025-09-04 10:22:53 -05:00
Nicholas Tindle
b59592be9b feat(ci): Claude workflow for Dependabot PR analysis (#10841)
Created workflow to analyze Dependabot PRs with Claude, including
detailed dependency analysis and changelog review.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds a workflow for Claude to analyze Dependabot PRs
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
N/A

#### For configuration changes:
N/A

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-09-04 14:48:07 +00:00
dependabot[bot]
7ea17df9ed chore(frontend/deps): Bump recharts from 2.15.3 to 3.1.2 in /autogpt_platform/frontend (#10807)
Bumps [recharts](https://github.com/recharts/recharts) from 2.15.3 to
3.1.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/recharts/recharts/releases">recharts's
releases</a>.</em></p>
## v3.1.2

## What's Changed

### Fix

- `Label`/Polar Charts: `Label` viewbox should now be present in polar charts; addresses recharts/recharts#6030, by @PavelVanecek in recharts/recharts#6180
- Add LRU cache for string size measurements (#3955) by @shreedharbhat98 in recharts/recharts#6176

**Full Changelog**: https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2

## v3.1.1

## What's Changed

### Fix

- `General`: Don't apply duplicate IDs in the DOM by @PavelVanecek in recharts/recharts#6111
- `Stacked Area/Bar`: give all graphical items their own unique identifier and use that to select stacked data. Fixes an issue where stacked charts could not be created from the graphical item `data` prop (recharts/recharts#6073) by @PavelVanecek
- `Stacked Area/Bar`: exclude stacked axis domain when not relevant for the axis by @rinkstiekema in recharts/recharts#6162; fixes an issue where numeric stacked charts would not render correctly
- `Area Chart`: ranged area chart - show active dot on both points instead of just the top one by @sroy8091 in recharts/recharts#6116; fixes #6080
- `Polar Charts/Label`: fix `Label` in polar charts by @PavelVanecek in recharts/recharts#6126
- `Scatter/ErrorBar`: choose implicit Scatter ErrorBar direction based on chart layout (to be the same as 2.x) by @PavelVanecek in recharts/recharts#6159
- `X/YAxis`/Reference Components: allow axis values and reference items to render when there is no data but a domain/explicit ticks are set, by @ethphan in recharts/recharts#6161
- `X/YAxis`: pass axis padding info to custom tick components by @shreedharbhat98 in recharts/recharts#6163

### Chore / Testing

- Good progress on our journey to enable `strictNullChecks`
- Addition of Playwright visual regression tests to CI
- Split `Animate` into `JavascriptAnimate` and `CSSTransitionAnimate` by @PavelVanecek in recharts/recharts#6175

## New Contributors

- @sroy8091 made their first contribution in recharts/recharts#6116
- @ethphan made their first contribution in recharts/recharts#6161

**Full Changelog**: https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1

## v3.1.0

## What's Changed

Bug fixes (old and new) and a few new hooks post 3.0 launch!

### Feat

More hooks!

... (truncated)
</details>
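
The LRU cache entry above amounts to memoizing expensive text-measurement calls. A minimal TypeScript sketch of the idea, using a `Map`'s insertion order for eviction (hypothetical names, not recharts' actual implementation):

```ts
// Minimal LRU cache: a Map preserves insertion order, so the first key
// is always the least recently used.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.maxSize) {
      // Evict the least recently used entry.
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}

// Hypothetical usage: cache text measurements keyed by text + font.
const sizeCache = new LruCache<{ width: number; height: number }>(2000);
function measureText(text: string, font: string) {
  const key = `${font}|${text}`;
  let size = sizeCache.get(key);
  if (!size) {
    size = { width: text.length * 7, height: 14 }; // stand-in for real DOM measurement
    sizeCache.set(key, size);
  }
  return size;
}
```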
<details>
<summary>Commits</summary>
- `079545c` 3.1.2
- `c6c26b5` Label viewbox in polar charts (#6180)
- `6c4acb7` Add CompositeAnimationManager to allow testing for multiple concurrent animat...
- `e56913d` feat: implement LRU cache for string size measurements (#3955) (#6176)
- `cf64890` 3.1.1
- `c4ec635` Split Animate into JavascriptAnimate and CSSTransitionAnimate (#6175)
- `e243a12` Add animation tests for Rectangle (#6174)
- `a1e07aa` docs: convert class components to functional components (#6172)
- `1f8d4cb` fix: Exclude stacked axis domain when not relevant for axis (#6162)
- `fe28f2e` chore(deps-dev): bump the storybook group with 8 updates (#6166)
- Additional commits viewable in [compare view](https://github.com/recharts/recharts/compare/v2.15.3...v3.1.2)
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=recharts&package-manager=npm_and_yarn&previous-version=2.15.3&new-version=3.1.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:22:30 +00:00
Reinier van der Leer
22e692bdda fix(frontend): Update in-view run with real-time updates (#10839)
- Resolves #10838

### Changes 🏗️

- Update `selectedRun` with received graph execution update if
applicable
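
A sketch of that update logic, with assumed state and payload shapes:

```ts
// Assumed shapes: the in-view run and incoming updates share an execution ID.
type GraphExecutionUpdate = { id: string; status: string; outputs?: unknown };

function onExecutionUpdate(
  update: GraphExecutionUpdate,
  selectedRun: GraphExecutionUpdate | null,
  setSelectedRun: (run: GraphExecutionUpdate) => void,
): void {
  // Only patch the run currently in view; other runs are handled elsewhere.
  if (selectedRun && update.id === selectedRun.id) {
    setSelectedRun({ ...selectedRun, ...update });
  }
}
```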

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Agent outputs appear in real-time
2025-09-04 13:43:44 +00:00
Nicholas Tindle
901e9eba5d feat(frontend): Enhanced output rendering system for agent runs (#10819)
## Summary
Introduces a modular, extensible output renderer system supporting
multiple content types (text, code, images, videos, JSON, markdown) for
agent run outputs. The system includes smart clipboard operations,
concatenated downloads, and rich markdown rendering with LaTeX math and
video embedding support.

## Changes 🏗️

### Core Output Rendering System
- **Added extensible renderer architecture**
(`output-renderers/types.ts`)
  - Plugin-based system with priority ordering
  - Registry pattern for automatic renderer selection
  - Support for custom metadata and MIME types
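
The registry pattern described here can be sketched in a few lines; the shapes below are illustrative, not the actual contents of `output-renderers/types.ts`:

```ts
// Illustrative shapes; the real types live in output-renderers/types.ts.
interface OutputRenderer {
  name: string;
  priority: number; // higher-priority renderers are tried first
  canRender(value: unknown, mimeType?: string): boolean;
  render(value: unknown): string; // stand-in for returning a React node
}

const registry: OutputRenderer[] = [];

function registerRenderer(renderer: OutputRenderer): void {
  registry.push(renderer);
  registry.sort((a, b) => b.priority - a.priority);
}

// Automatic selection: first renderer (in priority order) that claims the value.
function selectRenderer(value: unknown, mimeType?: string): OutputRenderer | undefined {
  return registry.find((r) => r.canRender(value, mimeType));
}

registerRenderer({
  name: "json",
  priority: 10,
  canRender: (v) => typeof v === "object" && v !== null,
  render: (v) => JSON.stringify(v, null, 2),
});
registerRenderer({
  name: "text",
  priority: 0,
  canRender: () => true, // catch-all fallback
  render: (v) => String(v),
});
```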

### Output Renderers
- **TextRenderer**: Plain text with proper formatting and line breaks
- **CodeRenderer**: Syntax-highlighted code blocks with language
detection
- **JSONRenderer**: Collapsible, formatted JSON with syntax highlighting
- **ImageRenderer**: Image display with support for URLs, data URIs, and
file uploads
- **VideoRenderer**: Embedded video player for YouTube, Vimeo, and
direct video files
- **MarkdownRenderer**: Rich markdown with:
  - GitHub Flavored Markdown (tables, task lists, strikethrough)
- LaTeX math rendering via KaTeX (inline `$...$` and display `$$...$$`)
  - Syntax highlighting via highlight.js
  - Video embedding (YouTube/Vimeo URLs auto-convert to embeds)
  - Clickable heading anchors
  - Dark mode support
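
Given the dependencies listed later in this description, the MarkdownRenderer plausibly wires the remark/rehype plugins into `react-markdown` roughly like this (a sketch, not the component's exact code):

```tsx
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import remarkMath from "remark-math";
import rehypeKatex from "rehype-katex";
import rehypeHighlight from "rehype-highlight";
import rehypeSlug from "rehype-slug";
import rehypeAutolinkHeadings from "rehype-autolink-headings";
import "katex/dist/katex.min.css";

// GFM + math on the remark side; KaTeX, syntax highlighting,
// and linkable headings on the rehype side.
export function MarkdownOutput({ content }: { content: string }) {
  return (
    <ReactMarkdown
      remarkPlugins={[remarkGfm, remarkMath]}
      rehypePlugins={[rehypeKatex, rehypeHighlight, rehypeSlug, rehypeAutolinkHeadings]}
    >
      {content}
    </ReactMarkdown>
  );
}
```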

### User Interface Components
- **OutputItem**: Individual output display with renderer selection
- **OutputActions**: Hover-based action buttons for:
  - Copy to clipboard with smart MIME type detection
- Download with intelligent concatenation (text files merge, binaries
separate)
  - Share functionality (placeholder for future implementation)
- **AgentRunOutputView**: Main output view component with feature flag
integration

### Clipboard & Download Features
- Smart clipboard operations using native ClipboardItem API
- MIME type detection and browser capability checking
- Fallback strategies for unsupported content types
- Concatenated downloads for text-based outputs
- Individual downloads for binary content
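
A sketch of the copy path with capability checking and a plain-text fallback (assumed function shape, not the exact implementation):

```ts
// Try a rich clipboard write first; fall back to plain text if the
// browser rejects the MIME type or lacks ClipboardItem support.
async function copyOutput(content: Blob, fallbackText: string): Promise<void> {
  try {
    if (typeof ClipboardItem === "undefined") throw new Error("ClipboardItem unsupported");
    const mimeType = content.type || "text/plain";
    await navigator.clipboard.write([new ClipboardItem({ [mimeType]: content })]);
  } catch {
    await navigator.clipboard.writeText(fallbackText);
  }
}
```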

### Feature Flag Integration
- Added `ENABLE_ENHANCED_OUTPUT_HANDLING` flag to LaunchDarkly
- Backwards compatible with existing output display
- Graceful fallback for disabled feature flag
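
The gate itself is presumably a small branch on the flag; a sketch with assumed names (LaunchDarkly's React SDK camel-cases flag keys, so the exact key is a guess):

```tsx
import * as React from "react";
import { useFlags } from "launchdarkly-react-client-sdk";

// Hypothetical view components standing in for the real ones.
declare function EnhancedOutputView(props: { outputs: unknown[] }): React.ReactElement;
declare function LegacyOutputView(props: { outputs: unknown[] }): React.ReactElement;

export function OutputsSection({ outputs }: { outputs: unknown[] }) {
  // LaunchDarkly camel-cases flag keys; the exact key name is an assumption.
  const { enableEnhancedOutputHandling } = useFlags();
  // Graceful fallback: an unset or disabled flag renders the original view.
  return enableEnhancedOutputHandling ? (
    <EnhancedOutputView outputs={outputs} />
  ) : (
    <LegacyOutputView outputs={outputs} />
  );
}
```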

### Styling & UX
- Max width constraints (950px card, 660px content)
- Hover-based action buttons for clean interface
- Dark mode support across all renderers
- Responsive design for various content types
- Loading states and error handling

## Test Plan 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test Scenarios:
- [x] **Basic Output Rendering**
  - [x] Execute agent with text output - verify proper formatting
  - [x] Execute agent with JSON output - verify collapsible tree view
  - [x] Execute agent with code output - verify syntax highlighting
  
- [x] **Rich Content**
  - [x] Test markdown rendering with headers, lists, tables
  - [x] Test LaTeX math expressions (inline and display)
  - [x] Test code blocks within markdown
  - [x] Test task lists and strikethrough
  
- [x] **Media Handling**
  - [x] Upload and display PNG/JPEG images
  - [x] Test video URL embedding (YouTube/Vimeo)
  - [x] Test direct video file playback
  
- [x] **Clipboard Operations**
  - [x] Copy plain text output
  - [x] Copy formatted code
  - [x] Copy JSON data
  - [x] Copy markdown content
  - [x] Verify fallback for unsupported MIME types
  
- [x] **Download Functionality**
  - [x] Download single text output
  - [x] Download multiple text outputs (verify concatenation)
  - [x] Download mixed content (verify separate files)
  - [x] Download images and binary content
  
- [x] **Feature Flag**
  - [x] Enable flag - verify enhanced rendering
  - [x] Disable flag - verify fallback to original view
  - [x] Check backwards compatibility
 
  
- [x] **Edge Cases**
  - [x] Large JSON objects (performance)
  - [x] Very long text outputs
  - [x] Mixed content types in single run
  - [x] Malformed markdown
  - [x] Invalid video URLs

## Dependencies Added
- `react-markdown` (9.0.3) - Already present
- `remark-gfm` (4.0.1) - GitHub Flavored Markdown
- `remark-math` (6.0.0) - LaTeX math support
- `rehype-katex` (7.0.1) - Math rendering
- `katex` (0.16.22) - Math typesetting
- `rehype-highlight` (7.0.2) - Syntax highlighting
- `highlight.js` (11.11.1) - Highlighting library
- `rehype-slug` (6.0.0) - Heading anchors
- `rehype-autolink-headings` (7.1.0) - Clickable headings

## Notes
- Mermaid diagram support was attempted but removed due to compatibility
issues
- Share functionality is stubbed out for future implementation
- PNG file upload rendering issue has logging in place for debugging
- All components follow existing UI patterns and use Tailwind CSS

## Screenshots
<img width="1656" height="1250" alt="image"
src="https://github.com/user-attachments/assets/af7542fe-db89-4521-aaf5-19e33d48a409"
/>
## Related Issues
- Implements SECRT-1209

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-04 13:37:26 +00:00
dependabot[bot]
bbd6709bd6 chore(backend/deps): Bump cryptography from 43.0.3 to 45.0.7 in /autogpt_platform/backend (#10810)
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.3
to 45.0.7.
<details>
<summary>Changelog</summary>
*Sourced from [cryptography's changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst).*

**45.0.7 - 2025-09-01**

- Added a function to support an upcoming `pyOpenSSL` release.

**45.0.6 - 2025-08-05**

- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.2.

**45.0.5 - 2025-07-02**

- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.1.

**45.0.4 - 2025-06-09**

- Fixed decrypting PKCS#8 files encrypted with SHA1-RC4. (This is not considered secure, and is supported only for backwards compatibility.)

**45.0.3 - 2025-05-25**

- Fixed decrypting PKCS#8 files encrypted with long salts (this impacts keys encrypted by Bouncy Castle).
- Fixed decrypting PKCS#8 files encrypted with DES-CBC-MD5. While wildly insecure, this remains prevalent.

**45.0.2 - 2025-05-17**

- Fixed using `mypy` with `cryptography` on older versions of Python.

**45.0.1 - 2025-05-17**

- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.0.

... (truncated)
</details>
<details>
<summary>Commits</summary>
- `f52a3e1` prep for a 45.0.7 release (#13378)
- `66198c2` Bump for release (#13249)
- `3e53a23` Bump for 45.0.5 release (#13135)
- `678c0c5` prepare for 45.0.4 release (#13058)
- `5038495` backports for 45.0.3 release (#12979)
- `f81c075` Backport mypy fixes for release (#12930)
- `8ea28e0` bump for 45.0.1 (#12922)
- `6784097` bump for 45 release (#12886)
- `2d9c1c9` bump MSRV to 1.74 (#12919)
- `6c18874` Bump BoringSSL, OpenSSL, AWS-LC in CI (#12918)
- Additional commits viewable in [compare view](https://github.com/pyca/cryptography/compare/43.0.3...45.0.7)
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=43.0.3&new-version=45.0.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:35:07 +00:00
dependabot[bot]
3ec1721d6d chore(libs/deps-dev): Bump ruff from 0.12.9 to 0.12.11 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10804)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.12.9 to 0.12.11
<details>
<summary>Release notes</summary>
*Sourced from [ruff's releases](https://github.com/astral-sh/ruff/releases).*

## 0.12.11

## Release Notes

### Preview features

- [`airflow`] Extend `AIR311` and `AIR312` rules (#20082)
- [`airflow`] Replace wrong path `airflow.io.storage` with `airflow.io.store` (`AIR311`) (#20081)
- [`flake8-async`] Implement `blocking-http-call-httpx-in-async-function` (`ASYNC212`) (#20091)
- [`flake8-logging-format`] Add auto-fix for f-string logging calls (`G004`) (#19303)
- [`flake8-use-pathlib`] Add autofix for `PTH211` (#20009)
- [`flake8-use-pathlib`] Make `PTH100` fix unsafe because it can change behavior (#20100)

### Bug fixes

- [`pyflakes`, `pylint`] Fix false positives caused by `__class__` cell handling (`F841`, `PLE0117`) (#20048)
- [`pyflakes`] Fix `allowed-unused-imports` matching for top-level modules (`F401`) (#20115)
- [`ruff`] Fix false positive for t-strings in `default-factory-kwarg` (`RUF026`) (#20032)
- [`ruff`] Preserve relative whitespace in multi-line expressions (`RUF033`) (#19647)

### Rule changes

- [`ruff`] Handle empty t-strings in `unnecessary-empty-iterable-within-deque-call` (`RUF037`) (#20045)

### Documentation

- Fix incorrect `D413` links in docstrings convention FAQ (#20089)
- [`flake8-use-pathlib`] Update links to the table showing the correspondence between `os` and `pathlib` (#20103)

## Contributors

@AlexWaygood, @Avasam, @BurntSushi, @Gankra, @Glyphack, @JelleZijlstra, @Lee-W, @MatthewMckee4, @MichaReiser, @PrettyWood, @Renkai, @TaKO8Ki, @amyreese, @carljm, @chirizxc, @danparizher, @dhruvmanila, @dylwil3, @github-actions, @hamirmahal

... (truncated)
</details>
<details>
<summary>Changelog</summary>
*Sourced from [ruff's changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md).*

## 0.12.11

*(Same entries as the release notes quoted above: Preview features, Bug fixes, Rule changes, and Documentation.)*

## 0.12.10

### Preview features

- [`flake8-simplify`] Implement fix for `maxsplit` without separator (`SIM905`) (#19851)
- [`flake8-use-pathlib`] Add fixes for `PTH102` and `PTH103` (#19514)

### Bug fixes

- [`isort`] Handle multiple continuation lines after module docstring (`I002`) (#19818)
- [`pyupgrade`] Avoid reporting `__future__` features as unnecessary when they are used (`UP010`) (#19769)
- [`pyupgrade`] Handle nested `Optional`s (`UP045`) (#19770)

### Rule changes

- [`pycodestyle`] Make `E731` fix unsafe instead of display-only for class assignments (#19700)
- [`pyflakes`] Add secondary annotation showing previous definition (`F811`) (#19900)

### Documentation

- Fix description of global config file discovery strategy (#19188)
- Update outdated links to https://typing.python.org/en/latest/source/stubs.html (#19992)
- [`flake8-annotations`] Remove unused import in example (`ANN401`) (#20000)
</details>
<details>
<summary>Commits</summary>
- `c2bc15b` Bump 0.12.11 (#20136)
- `e586f6d` [ty] Benchmarks for problematic implicit instance attributes cases (#20133)
- `76a6b7e` [`pyflakes`] Fix `allowed-unused-imports` matching for top-level modules (`F4...
- `1ce6571` Move GitLab output rendering to `ruff_db` (#20117)
- `d9aaacd` [ty] Evaluate reachability of non-definitely-bound to Ambiguous (#19579)
- `18eaa65` [ty] Introduce a representation for the top/bottom materialization of an inva...
- `af259fa` [`flake8-async`] Implement `blocking-http-call-httpx` (`ASYNC212`) (#20091)
- `d75ef38` [ty] print diagnostics with fully qualified name to disambiguate some cases (...
- `89ca493` [`ruff`] Preserve relative whitespace in multi-line expressions (`RUF033`) (#...
- `4b80f5f` [ty] Optimize TDD atom ordering (#20098)
- Additional commits viewable in [compare view](https://github.com/astral-sh/ruff/compare/0.12.9...0.12.11)
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.12.9&new-version=0.12.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:26:59 +00:00
Swifty
8c8a2ab0c2 feat(blocks): Add Bannerbear API block for text overlay on images (#10768)
## Summary
- Implemented a new Bannerbear API block that enables adding text
overlays to images using template designs
- Block supports customizable text styling (color, font, size, weight,
alignment)
- Always uses synchronous API mode for immediate image generation
results


[agent_ead942d9-58a2-4be6-bdb3-99010c489466.json](https://github.com/user-attachments/files/22027352/agent_ead942d9-58a2-4be6-bdb3-99010c489466.json)

<img width="140" height="572" alt="Screenshot 2025-08-28 at 16 28 35"
src="https://github.com/user-attachments/assets/096b532b-31dc-4ca6-bd68-c00b7594426c"
/>


## Features
- **Text overlay capabilities**: Add multiple text layers to images
using Bannerbear templates
- **Customizable styling**: Support for color, font family, font size,
font weight, and text alignment
- **Image support**: Optional ability to add images to templates
- **Smart field handling**: Only sends non-empty optional parameters to
the API
- **Webhook & metadata**: Advanced options for webhook notifications and
custom metadata

## Implementation Details
- Created provider configuration with API key authentication
- Implemented `BannerbearTextOverlayBlock` with proper input/output
schemas
- Extracted API calls to private method `_make_api_request()` for test
mocking support
- Follows SDK guide patterns and integrates with AutoGPT platform

## Use Case
This block will be used in the Ad generator agent for creating dynamic
marketing materials and social media graphics with text overlays.

## Test plan
- [x] Block imports successfully
- [x] Block instantiates with unique ID
- [x] Code passes linting and formatting checks
- [x] Manual testing with actual Bannerbear API key
- [x] Integration testing with Ad generator agent
2025-09-04 13:15:00 +00:00
Krzysztof Czerwinski
4041e1f39c feat(backend/infra): Cleanup platform and db docker compose files (#10830)
Supabase's `db/docker/docker-compose.yml` overrides env vars set in the `autogpt_platform/.env` file. This PR fixes that and simplifies the compose files further.

### Changes 🏗️

`autogpt_platform/docker-compose.platform.yml`:
- Move hardcoded `DATABASE_URL` and `DIRECT_URL` to `x-backend-env` on
top as it repeats for most services.
- Remove `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from
`rabbitmq` service and use env files instead

`autogpt_platform/db/docker/docker-compose.yml`:
- Remove hardcoded env vars from `x-supabase-env` - these are already
defined in `.env`
- Remove env vars from services that are already defined in `.env` files

*Changes to db compose file only affect self-hosted Supabase*

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Platform, db works when self-hosting
2025-09-04 12:01:49 +00:00
Nicholas Tindle
7cbdc1ad1a security(frontend): Upgrade next from 15.4.6 to 15.4.7 (#10820)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:--------:|:------|
| High | Server-side Request Forgery (SSRF)<br/>[SNYK-JS-NEXT-12299318](https://snyk.io/vuln/SNYK-JS-NEXT-12299318) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr)
📜 [Customise PR templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr/settings)
📚 [Read about Snyk's upgrade logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Server-side Request Forgery (SSRF)](https://learn.snyk.io/lesson/ssrf-server-side-request-forgery/?loc=fix-pr)


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-04 09:18:12 +00:00
Reinier van der Leer
2a19aa0ed3 fix(frontend/library): Show total runs count above runs list (#10832)
- Resolves #10831

### Changes 🏗️

- Show number of total runs instead of currently loaded runs
- Show loading spinner instead of zero while loading

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Counter shows number of total runs, even if it exceeds number of
currently loaded items
2025-09-04 06:43:19 +00:00
Nicholas Tindle
6d39dfe382 feat(frontend): Add admin button to user profile dropdown (#10774)
<!-- Clearly explain the need for these changes: -->

### Need 💡

This PR addresses Linear issue
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)
by adding an "Admin" button to the user account dropdown menu. This
button is only visible to users with an "admin" role and provides direct
navigation to the admin marketplace management page, making existing
admin functionalities accessible from the new UI.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Added Admin Icon**: Integrated `IconSliders` into the `IconType`
enum and `getAccountMenuOptionIcon` function.
- **Dynamic Menu Generation**: Introduced
`getAccountMenuItems(userRole?: string)` to dynamically construct the
account menu. This function conditionally adds an "Admin" menu item
(linking to `/admin/marketplace`) if the `userRole` is "admin".
- **Navbar Integration**: Updated `NavbarView.tsx` to utilize the
`useSupabase` hook to retrieve the current user's role and then render
the account menu using the new dynamic `getAccountMenuItems` function
for both desktop and mobile views.
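
A sketch of the conditional menu construction (item shape and non-admin entries simplified; the `/admin/marketplace` link and role check follow the description above):

```ts
type MenuItem = { icon: string; text: string; href: string };

function getAccountMenuItems(userRole?: string): MenuItem[] {
  const items: MenuItem[] = [
    // Existing entries, simplified; hrefs here are placeholders.
    { icon: "IconEdit", text: "Edit profile", href: "/profile" },
    { icon: "IconLogOut", text: "Log out", href: "/logout" },
  ];
  // The "Admin" entry is only added for users with the admin role.
  if (userRole === "admin") {
    items.unshift({ icon: "IconSliders", text: "Admin", href: "/admin/marketplace" });
  }
  return items;
}
```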

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Log in as an admin user and verify the "Admin" button appears in
the account dropdown.
- [x] Click the "Admin" button and confirm navigation to
`/admin/marketplace`.
- [x] Log in as a non-admin user and verify the "Admin" button does not
appear in the account dropdown.
- [x] Verify all other existing menu items (e.g., "Edit profile", "Log
out") function correctly for both admin and non-admin users.
  - [x] Test the above scenarios on both desktop and mobile views.

---
Linear Issue:
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-03 09:26:24 +00:00
Ubbe
57ecc10535 fix(frontend): typed inputs in new run modal (#10799)
## Changes 🏗️

###  Make the **Credentials Inputs** show up on the new Run Agent Modal
<img width="450" height="784" alt="Screenshot 2025-09-02 at 00 54 19"
src="https://github.com/user-attachments/assets/26ad8242-a1bc-45f6-9149-a3d207683679"
/>


### Fixes on other modals...

<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 40"
src="https://github.com/user-attachments/assets/fa2f9ed9-207b-4599-9e60-3e37c4be6ea9"
/>
 
<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 44"
src="https://github.com/user-attachments/assets/92e062a7-f161-423e-b6c9-f998fbdef102"
/>

<img width="450" height="634" alt="Screenshot 2025-09-02 at 00 47 06"
src="https://github.com/user-attachments/assets/93009809-0df0-44c5-b2d2-a9aa0f501312"
/>


- Always use buttons/inputs in small size (due to the tight space)
- Use components from the design system
- Always use pretty scrollbars
- Configure [`tailwind-scrollbar`](https://github.com/adoxography/tailwind-scrollbar) for pretty scrollbars
- Prevent dialog content from overflowing when scrollable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent run modal flag enabled
  - [x] Check the above 

#### For configuration changes:

None
2025-09-03 06:58:07 +00:00
Reinier van der Leer
4928ce3f90 feat(library): Create presets from runs (#10823)
- Resolves #9307

### Changes 🏗️

- feat(library): Create presets from runs
  - Prevent creating preset from run with unknown credentials
- Fix running presets with credentials
  - Add `credential_inputs` parameter to `execute_preset` endpoint

API:
- Return `GraphExecutionMeta` from `*/execute` endpoints

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent that *does not* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
- Go to `/library/agents/[id]` for an agent that *does* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Error toast appears with descriptive message
- Initiate a new run; once finished, click "Create preset from run";
fill out the dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
2025-09-03 01:26:12 +00:00
Reinier van der Leer
e16e69ca55 feat(library, executor): Make "Run Again" work with credentials (#10821)
- Resolves [OPEN-2549: Make "Run again" work with credentials in
`AgentRunDetailsView`](https://linear.app/autogpt/issue/OPEN-2549/make-run-again-work-with-credentials-in-agentrundetailsview)
- Resolves #10237

### Changes 🏗️

- feat(frontend/library): Make "Run Again" button work for runs with
credentials
- feat(backend/executor): Store passed-in credentials on
`GraphExecution`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` for an agent with credentials inputs
  - Run the agent manually
    - [x] -> runs successfully
- [x] -> "Run again" shows among the action buttons on the newly created
run
  - Click "Run again"
    - [x] -> runs successfully
2025-09-02 18:34:56 +00:00
Nicholas Tindle
4bcc73f784 ci: Update GitHub Actions workflow for Claude integration (#10822) 2025-09-02 17:06:49 +01:00
Nicholas Tindle
c8240a4d6b ci: Modify Claude workflow permissions and action version (#10818) 2025-09-02 08:43:54 -07:00
Ubbe
f669db4a10 fix(frontend): prevent using mock feature flags (#10792)
## Changes 🏗️

Make sure `NEXT_PUBLIC_PW_TEST` is set only when running Playwright.
This forces the app to use "mock" feature flags, so the tests run stably and predictably despite changes on LaunchDarkly.
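
The gate boils down to an environment check along these lines (illustrative; the actual mock-flag plumbing may differ):

```ts
// When Playwright sets NEXT_PUBLIC_PW_TEST, serve deterministic mock flags
// instead of live LaunchDarkly values. The mock values here are placeholders.
const isPlaywrightRun = process.env.NEXT_PUBLIC_PW_TEST === "true";

export function resolveFlags(liveFlags: Record<string, boolean>): Record<string, boolean> {
  return isPlaywrightRun ? { enableEnhancedOutputHandling: true } : liveFlags;
}
```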

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x]  should not have `PW_TEST=true` ...

### For configuration changes:

None
2025-09-02 14:18:18 +00:00
Reinier van der Leer
0e755a5c85 feat(platform/library): Support UX for manual-setup triggers (#10309)
- Resolves #10234

### Preview

#### Manual setup triggers
![preview of the setup screen for manual-setup
triggers](https://github.com/user-attachments/assets/295d2968-ad11-4291-b360-2eb2acb03397)
![preview of the view for an active manual-setup
trigger](https://github.com/user-attachments/assets/d0ae2246-2305-48f5-aea8-8adb37336401)

#### Auto-setup triggers
![preview of the view for an active auto-setup
trigger](https://github.com/user-attachments/assets/63856311-fc99-450c-ae1f-86951e40dc26)

### Changes 🏗️

- Add "Trigger status" section to `AgentRunDraftView`
- Add `AgentPreset.webhook`, so we can show webhook URL in library
  - Add `AGENT_PRESET_INCLUDE` to `backend.data.includes`
- Add `BaseGraph.trigger_setup_info` (computed field)
- Rename `LibraryAgentTriggerInfo` to `GraphTriggerInfo`; move to
`backend.data.graph`

Refactor:
- Move contents of `@/components/agents/` to
`@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/`
- Fix small type difference between legacy & generated
`LibraryAgent.image_url`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setting up GitHub trigger works
  - [x] Setting up manual trigger works
  - [x] Enabling/disabling manual trigger through Library works
2025-09-02 10:23:32 +00:00
Reinier van der Leer
dfdc71f97f feat(backend/external-api): Make API key auth work in Swagger UI (#10783)
![Swagger UI API key auth
dialog](https://github.com/user-attachments/assets/02026802-51f9-410d-bdb8-53840d5eb17b)

- Resolves #10782

### Changes 🏗️

- Use `Security(..)` for security dependencies
- Minor tweaks to auth mechanism (similar to #10720)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API key auth feature appears in Swagger UI
  - [ ] API key auth *works* in Swagger UI (@ntindle wanna test this?)
2025-09-02 09:24:14 +00:00
Krzysztof Czerwinski
def008408c feat(frontend): Preserve openapi.json spec file on generation failure (#10791)
The `openapi.json` file is cleared when the script fails to retrieve the API spec from the server. This shouldn't happen, and it breaks building docker containers.

### Changes 🏗️

Use temp file during generation to prevent actual file clearing on
failure.
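
This is the classic write-then-rename pattern; a Node sketch under assumed paths:

```ts
import { writeFileSync, renameSync } from "node:fs";

async function generateSpec(fetchSpec: () => Promise<string>): Promise<void> {
  const target = "src/api/openapi.json"; // assumed location
  const tmp = `${target}.tmp`;
  const spec = await fetchSpec(); // throws on failure, leaving `target` untouched
  writeFileSync(tmp, spec);
  renameSync(tmp, target); // swap in only after generation fully succeeded
}
```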

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Spec file doesn't get cleared on failure
  - [x] Spec file is correctly generated
  - [x] Works when frontend is run in docker container
2025-09-02 01:37:54 +00:00
Swifty
916d0adabb feat(frontend): Add graph search functionality to builder (#10776)
## Summary
- Added search functionality to find nodes in the graph by block type,
node ID, and input/output names
- Search icon added to both new and old control panels
- Implemented node highlighting on hover and navigation on click


https://github.com/user-attachments/assets/8cc69186-5582-446d-b2cd-601de992144f



## Changes
- Created `GraphSearchMenu` component for the new control panel
- Created `GraphSearchControl` component for the old control panel  
- Added `GraphSearchContent` component with search UI similar to
BlockMenu
- Implemented `useGraphSearch` hook with fuzzy search logic
- Added node highlighting without viewport movement on hover
- Added node navigation with centering and highlighting on selection

## Features
- Search by block type name, node ID, or input/output field names
- Real-time filtering with keyboard navigation support
- Visual feedback with node highlighting on hover
- Click to navigate and center on selected node
- Consistent styling with BlockMenu including category colors
- Works in both old and new control panels
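
A simplified, non-React sketch of the kind of matching `useGraphSearch` performs across block type, node ID, and input/output names (subsequence fuzzy match; the real scoring may differ):

```ts
type GraphNode = {
  id: string;
  blockType: string;
  inputNames: string[];
  outputNames: string[];
};

// Simple subsequence match: every query character must appear in order.
function fuzzyMatch(query: string, target: string): boolean {
  let i = 0;
  const q = query.toLowerCase();
  const t = target.toLowerCase();
  for (const ch of t) {
    if (ch === q[i]) i++;
    if (i === q.length) return true;
  }
  return q.length === 0;
}

function searchNodes(nodes: GraphNode[], query: string): GraphNode[] {
  if (!query) return [];
  return nodes.filter(
    (n) =>
      fuzzyMatch(query, n.blockType) ||
      fuzzyMatch(query, n.id) ||
      [...n.inputNames, ...n.outputNames].some((name) => fuzzyMatch(query, name)),
  );
}
```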

## Test plan
- [x] Test search functionality in both old and new control panels
- [x] Verify search by block type name works
- [x] Verify search by node ID works  
- [x] Verify search by input/output names works
- [x] Test keyboard navigation (arrow keys and enter)
- [x] Verify node highlighting on hover
- [x] Verify node navigation on click
- [x] Check popover alignment with control panel top
2025-09-01 18:42:13 +00:00
Abhimanyu Yadav
417ee7f0e1 feat(frontend): redesign-block-menu-part-2 (#10793)
In this PR, I have added:

- a search input
- conditional rendering of the search page and the default page
- a sidebar for the default page (with the correct data)

### Screenshot

<img width="1512" height="982" alt="Screenshot 2025-09-01 at 12 28
34 PM"
src="https://github.com/user-attachments/assets/891ab99f-dde5-47b8-a980-a700845f10c2"
/>

#### Checklist:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-09-01 10:04:29 +00:00
Abhimanyu Yadav
ae4c9897b4 refactor(frontend): Revamp marketplace agent page data fetching and structure (#10756)
- Updated the agent page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainAgentPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

> It’s important to note that I haven’t changed any UI in this, as it’s
out of scope for this PR.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have manually tested both `Add to Library` and `Download`
functions, and they are working correctly.
  - [x] All fetching functions are working perfectly.
  - [x] All end-to-end tests are also working correctly.
2025-09-01 05:40:01 +00:00
Ubbe
7544028b94 fix(frontend): use new dialog in schedule agent (#10786)
## Changes 🏗️

Should fix the issue where sometimes the schedule modal wouldn't appear
when clicking on the CTA.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Set up schedules multiple times; they look good on the modal
  - [x] Run the agent from monitor, and confirm it executes correctly

#### For configuration changes:

None
2025-08-30 04:47:12 +00:00
Ubbe
ad49946890 feat(frontend): New Run Agent Modal (2/2) (#10769)
## Changes 🏗️

<img width="400" height="821" alt="Screenshot 2025-08-28 at 23 57 41"
src="https://github.com/user-attachments/assets/f5f7c0a6-0b87-4c1f-b644-3ee2ddd1db95"
/>

<img width="400" height="822" alt="Screenshot 2025-08-28 at 23 57 47"
src="https://github.com/user-attachments/assets/120dbb60-d9e1-4a4a-a593-971badb4a97a"
/>

This is the final piece of work on the new **Run Agent Modal**... It is all behind a feature flag, so I'm relatively comfortable it is safe. The idea is to test with the team once it lands in dev, trying different combinations of agent inputs / credentials and schedules...

I have moved and tidied a lot of the original logic around running agents. Most importantly, I have made all the dynamic inputs adhere to the design system.

### AI changes summary  

- Allow running schedules in the main modal body
- Integrate and tidy old logic around dynamic run agent inputs 
- Integrate and tidy old logic around credentials inputs
- Refactor: `<TypeBasedInputs />` to use Design System components
(`atoms/Input`, `atoms/Select`, `molecules/MultiToggle`, and native
date/time picker via `<Input />` using the browser's date picker )
- Added support for `type="date"` and `type="datetime-local"` to `<Input
/>` ( _for the above_ )
- On the `<Select />` component:
  - added `size` prop (`small` | `medium`).
- added rich items: `icon`, `disabled`, `separator`, `onSelect`, and
`renderItem` prop.
- stories updated/added for size variants, icons/separators, and custom
rendering.
- Added and documented to the design system:
  - `molecules/TimePicker` + story.
- `atoms/FileInput`: added `accept` and `maxFileSize` props; story
documents constraints.
- `atoms/Progress` stories (Basic, CustomMax, Sizes, Live) with
fixed-width container.
  - `atoms/Switch` stories (Basic, Disabled, WithLabel).
  - `molecules/Dialog` story: Modal-over-Modal example.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open Storybook and verify new/updated stories render correctly.
  - [x] In app, validate modals open/close correctly using DS `Dialog`.
- [x] Validate DS Select rich items (icon, separator, disabled, action)
behave as expected.
  - [x] Run lints and ensure no errors.
  - [x] Manually test File upload constraints (type/size) and progress.

### For configuration changes:
None
2025-08-29 11:38:36 +00:00
Reinier van der Leer
b333d56492 fix(frontend): Fix checking of Date and NaN input values (#10781)
Date values were being rejected as "empty" by the run input form.

![screenshot demonstrating the
issue](https://github.com/user-attachments/assets/cf89414d-9b27-4da5-8a4a-64d9439c7084)

### Changes 🏗️

- Specifically handle `Date` type values in `isEmpty`
- Specifically handle `NaN` values in `isEmpty`
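
A sketch of the adjusted check (simplified; the real `isEmpty` covers more cases, and whether `NaN` counts as empty is assumed here):

```ts
function isEmpty(value: unknown): boolean {
  if (value === null || value === undefined) return true;
  // A Date has no enumerable keys, so a naive object check would wrongly
  // report it as empty; only an invalid Date counts as empty here.
  if (value instanceof Date) return Number.isNaN(value.getTime());
  if (typeof value === "number") return Number.isNaN(value); // NaN treated as empty (assumption)
  if (typeof value === "string") return value.trim().length === 0;
  if (Array.isArray(value)) return value.length === 0;
  if (typeof value === "object") return Object.keys(value).length === 0;
  return false;
}
```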

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Date values are no longer rejected as "empty"
2025-08-29 11:03:15 +00:00
Abhimanyu Yadav
b23cb14e49 test(frontend): fix library flaky e2e test (#10764)
A test in one of my PRs is failing with an error like this…

<img width="1044" height="452" alt="Screenshot 2025-08-28 at 9 39 07 AM"
src="https://github.com/user-attachments/assets/9c8b8996-50a2-44c6-8a2c-c3904f07ced5"
/>

That’s why I fixed it.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All E2E tests are now working correctly.
2025-08-29 10:19:01 +00:00
Swifty
e71a44521a feat(frontend): Add editable node titles with hover pencil icon (#10779)
## Summary
- Adds ability to edit custom node titles by clicking a pencil icon that
appears on hover
- Custom titles are saved in node metadata and persist across saves
- Original node type is shown in tooltip when hovering over custom
titles



https://github.com/user-attachments/assets/a0a41ac9-1ffb-44c8-9e1c-f4c42e032b49



## Changes
- **CustomNode.tsx**: 
  - Added inline title editing with pencil icon on hover
  - Implemented state management for title editing mode
  - Added tooltip to show original node type for custom titles
  - Prevents custom names from being copied when duplicating nodes

- **useAgentGraph.tsx**:
  - Updated graph save/load logic to preserve metadata, including custom
titles
  - Ensures metadata persistence through all node operations

## Technical Details
- Uses existing `metadata` JSON field in AgentNode model (no database
changes needed)
- Stores custom title in `metadata.customized_name` (see the sketch after
this list)
- Backward compatible - nodes without custom titles display normally
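
For illustration, a minimal sketch of the resulting node shape; only
`metadata.customized_name` is established by this PR, the surrounding keys
are hypothetical placeholders:

```python
# Hypothetical node payload after renaming (only metadata.customized_name
# comes from this PR; the other fields are illustrative placeholders).
node = {
    "id": "node-123",
    "block_id": "block-abc",
    "metadata": {
        "position": {"x": 120, "y": 240},        # assumed pre-existing metadata
        "customized_name": "Fetch weekly KPIs",  # custom title shown in the UI
    },
}

# Duplicating a node keeps all metadata except the custom name:
duplicate = {
    **node,
    "id": "node-456",
    "metadata": {
        k: v for k, v in node["metadata"].items() if k != "customized_name"
    },
}
```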

## Test Plan
- [x] Hover over node title shows pencil icon
- [x] Click pencil icon to edit title
- [x] Press Enter or blur to save, Escape to cancel
- [x] Custom title persists after saving graph
- [x] Tooltip shows original node type when hovering over custom title
- [x] Copying node doesn't copy custom name
- [x] Backward compatible with existing graphs
2025-08-29 10:02:35 +00:00
Swifty
15aa091f65 feat(frontend): add drag and drop functionality from blocks menu to flow canvas (#10777)
Closes #1055
2025-08-29 10:55:43 +02:00
Swifty
82618aede0 Merge remote-tracking branch 'origin/dev' 2025-08-27 11:39:16 +02:00
Swifty
c4483fa6c7 Merge branch 'dev' 2025-08-20 15:22:08 +02:00
Swifty
c2af8c1a6a Reapply "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 260dd526c9.
2025-08-20 15:13:46 +02:00
Nicholas Tindle
483c399812 Revert "feat(platform): add ability to reject approved agents in admin dashboard"
This reverts commit 92515b3683.
2025-08-19 00:49:30 -05:00
Nicholas Tindle
260dd526c9 Revert "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 105d5dc7e9, reversing
changes made to 92515b3683.
2025-08-19 00:49:10 -05:00
Nicholas Tindle
75a159db01 Revert "style(frontend): format code and improve dialog titles in approve-reject buttons"
This reverts commit 62032e6584.
2025-08-19 00:48:22 -05:00
Nicholas Tindle
62032e6584 style(frontend): format code and improve dialog titles in approve-reject buttons 2025-08-18 23:32:43 -05:00
Nicholas Tindle
105d5dc7e9 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard 2025-08-18 22:59:27 -05:00
Nicholas Tindle
92515b3683 feat(platform): add ability to reject approved agents in admin dashboard
- Modified backend review_store_submission to handle rejecting approved agents
- Added logic to update StoreListing when rejecting approved agents
- Updated UI to show "Revoke" button for approved agents
- Only shows approve button for pending agents
- Updates dialog text appropriately for revoking vs rejecting

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-18 10:35:25 -05:00
662 changed files with 31487 additions and 8473 deletions

View File

@@ -0,0 +1,97 @@
name: Auto Fix CI Failures
on:
workflow_run:
workflows: ["CI"]
types:
- completed
permissions:
contents: write
pull-requests: write
actions: read
issues: write
id-token: write # Required for OIDC token exchange
jobs:
auto-fix:
if: |
github.event.workflow_run.conclusion == 'failure' &&
github.event.workflow_run.pull_requests[0] &&
!startsWith(github.event.workflow_run.head_branch, 'claude-auto-fix-ci-')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup git identity
run: |
git config --global user.email "claude[bot]@users.noreply.github.com"
git config --global user.name "claude[bot]"
- name: Create fix branch
id: branch
run: |
BRANCH_NAME="claude-auto-fix-ci-${{ github.event.workflow_run.head_branch }}-${{ github.run_id }}"
git checkout -b "$BRANCH_NAME"
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
- name: Get CI failure details
id: failure_details
uses: actions/github-script@v7
with:
script: |
const run = await github.rest.actions.getWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const failedJobs = jobs.data.jobs.filter(job => job.conclusion === 'failure');
let errorLogs = [];
for (const job of failedJobs) {
const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
job_id: job.id
});
errorLogs.push({
jobName: job.name,
logs: logs.data
});
}
return {
runUrl: run.data.html_url,
failedJobs: failedJobs.map(j => j.name),
errorLogs: errorLogs
};
- name: Fix CI failures with Claude
id: claude
uses: anthropics/claude-code-action@v1
with:
prompt: |
/fix-ci
Failed CI Run: ${{ fromJSON(steps.failure_details.outputs.result).runUrl }}
Failed Jobs: ${{ join(fromJSON(steps.failure_details.outputs.result).failedJobs, ', ') }}
PR Number: ${{ github.event.workflow_run.pull_requests[0].number }}
Branch Name: ${{ steps.branch.outputs.branch_name }}
Base Branch: ${{ github.event.workflow_run.head_branch }}
Repository: ${{ github.repository }}
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

379
.github/workflows/claude-dependabot.yml vendored Normal file
View File

@@ -0,0 +1,379 @@
# Claude Dependabot PR Review Workflow
#
# This workflow automatically runs Claude analysis on Dependabot PRs to:
# - Identify dependency changes and their versions
# - Look up changelogs for updated packages
# - Assess breaking changes and security impacts
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured
name: Claude Dependabot PR Review
on:
pull_request:
types: [opened, synchronize]
jobs:
dependabot-review:
# Only run on Dependabot PRs
if: github.actor == 'dependabot[bot]'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Dependabot Analysis
id: claude_review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |
You are Claude, an AI assistant specialized in reviewing Dependabot dependency update PRs.
Your primary tasks are:
1. **Analyze the dependency changes** in this Dependabot PR
2. **Look up changelogs** for all updated dependencies to understand what changed
3. **Identify breaking changes** and assess potential impact on the AutoGPT codebase
4. **Provide actionable recommendations** for the development team
## Analysis Process:
1. **Identify Changed Dependencies**:
- Use git diff to see what dependencies were updated
- Parse package.json, poetry.lock, requirements files, etc.
- List all package versions: old → new
2. **Changelog Research**:
- For each updated dependency, look up its changelog/release notes
- Use WebFetch to access GitHub releases, NPM package pages, and PyPI project pages; the PR description should also have some details
- Focus on versions between the old and new versions
- Identify: breaking changes, deprecations, security fixes, new features
3. **Breaking Change Assessment**:
- Categorize changes: BREAKING, MAJOR, MINOR, PATCH, SECURITY
- Assess impact on AutoGPT's usage patterns
- Check if AutoGPT uses affected APIs/features
- Look for migration guides or upgrade instructions
4. **Codebase Impact Analysis**:
- Search the AutoGPT codebase for usage of changed APIs
- Identify files that might be affected by breaking changes
- Check test files for deprecated usage patterns
- Look for configuration changes needed
## Output Format:
Provide a comprehensive review comment with:
### 🔍 Dependency Analysis Summary
- List of updated packages with version changes
- Overall risk assessment (LOW/MEDIUM/HIGH)
### 📋 Detailed Changelog Review
For each updated dependency:
- **Package**: name (old_version → new_version)
- **Changes**: Summary of key changes
- **Breaking Changes**: List any breaking changes
- **Security Fixes**: Note security improvements
- **Migration Notes**: Any upgrade steps needed
### ⚠️ Impact Assessment
- **Breaking Changes Found**: Yes/No with details
- **Affected Files**: List AutoGPT files that may need updates
- **Test Impact**: Any tests that may need updating
- **Configuration Changes**: Required config updates
### 🛠️ Recommendations
- **Action Required**: What the team should do
- **Testing Focus**: Areas to test thoroughly
- **Follow-up Tasks**: Any additional work needed
- **Merge Recommendation**: APPROVE/REVIEW_NEEDED/HOLD
### 📚 Useful Links
- Links to relevant changelogs, migration guides, documentation
Be thorough but concise. Focus on actionable insights that help the development team make informed decisions about the dependency updates.

View File

@@ -30,18 +30,296 @@ jobs:
github.event.issue.author_association == 'COLLABORATOR'
)
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout repository
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus
additional_permissions: |
actions: read

View File

@@ -5,6 +5,13 @@ on:
branches: [ dev ]
paths:
- 'autogpt_platform/**'
workflow_dispatch:
inputs:
git_ref:
description: 'Git ref (branch/tag) of AutoGPT to deploy'
required: true
default: 'master'
type: string
permissions:
contents: 'read'
@@ -19,6 +26,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.inputs.git_ref || github.ref_name }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -48,4 +57,4 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_dev
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: '{"ref": "${{ github.event.inputs.git_ref || github.ref }}", "repository": "${{ github.repository }}"}'

View File

@@ -3,6 +3,7 @@ name: AutoGPT Platform - Deploy Prod Environment
on:
release:
types: [published]
workflow_dispatch:
permissions:
contents: 'read'
@@ -17,6 +18,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.ref_name || 'master' }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -36,7 +39,7 @@ jobs:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
DIRECT_URL: ${{ secrets.BACKEND_DATABASE_URL }}
trigger:
needs: migrate
runs-on: ubuntu-latest
@@ -47,4 +50,5 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_prod
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: |
{"ref": "${{ github.ref_name || 'master' }}", "repository": "${{ github.repository }}"}

View File

@@ -37,9 +37,7 @@ jobs:
services:
redis:
image: bitnami/redis:6.2
env:
REDIS_PASSWORD: testpassword
image: redis:latest
ports:
- 6379:6379
rabbitmq:
@@ -204,7 +202,6 @@ jobs:
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!
env:

View File

@@ -160,7 +160,7 @@ jobs:
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml up -d
NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache

View File

@@ -61,24 +61,27 @@ poetry run pytest path/to/test.py --snapshot-update
```bash
# Install dependencies
cd frontend && npm install
cd frontend && pnpm i
# Start development server
npm run dev
pnpm dev
# Run E2E tests
npm run test
pnpm test
# Run Storybook for component development
npm run storybook
pnpm storybook
# Build production
npm run build
pnpm build
# Type checking
npm run types
pnpm types
```
We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components.
## Architecture Overview
### Backend Architecture

View File

@@ -1,35 +0,0 @@
import hashlib
import secrets
from typing import NamedTuple
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
raw: str
prefix: str
postfix: str
hash: str
class APIKeyManager:
PREFIX: str = "agpt_"
PREFIX_LENGTH: int = 8
POSTFIX_LENGTH: int = 8
def generate_api_key(self) -> APIKeyContainer:
"""Generate a new API key with all its parts."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
return APIKeyContainer(
raw=raw_key,
prefix=raw_key[: self.PREFIX_LENGTH],
postfix=raw_key[-self.POSTFIX_LENGTH :],
hash=hashlib.sha256(raw_key.encode()).hexdigest(),
)
def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)

View File

@@ -0,0 +1,78 @@
import hashlib
import secrets
from typing import NamedTuple
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
key: str
head: str
tail: str
hash: str
salt: str
class APIKeySmith:
PREFIX: str = "agpt_"
HEAD_LENGTH: int = 8
TAIL_LENGTH: int = 8
def generate_key(self) -> APIKeyContainer:
"""Generate a new API key with secure hashing."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
hash, salt = self.hash_key(raw_key)
return APIKeyContainer(
key=raw_key,
head=raw_key[: self.HEAD_LENGTH],
tail=raw_key[-self.TAIL_LENGTH :],
hash=hash,
salt=salt,
)
def verify_key(
self, provided_key: str, known_hash: str, known_salt: str | None = None
) -> bool:
"""
Verify an API key against a known hash (+ salt).
Supports verifying both legacy SHA256 and secure Scrypt hashes.
"""
if not provided_key.startswith(self.PREFIX):
return False
# Handle legacy SHA256 hashes (migration support)
if known_salt is None:
legacy_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(legacy_hash, known_hash)
try:
salt_bytes = bytes.fromhex(known_salt)
provided_hash = self._hash_key_with_salt(provided_key, salt_bytes)
return secrets.compare_digest(provided_hash, known_hash)
except (ValueError, TypeError):
return False
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()
def _generate_salt(self) -> bytes:
"""Generate a random salt for hashing."""
return secrets.token_bytes(32)
def _hash_key_with_salt(self, raw_key: str, salt: bytes) -> str:
"""Hash API key using Scrypt with salt."""
kdf = Scrypt(
length=32,
salt=salt,
n=2**14, # CPU/memory cost parameter
r=8, # Block size parameter
p=1, # Parallelization parameter
)
key_hash = kdf.derive(raw_key.encode())
return key_hash.hex()
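
For reference, a minimal usage sketch of the class above, including the
legacy-to-Scrypt upgrade path (module path taken from the tests that follow;
the persistence step is left to the caller):

```python
from autogpt_libs.api_key.keysmith import APIKeySmith

smith = APIKeySmith()

# Issuing: show the user `key` once; persist only hash, salt, head, and tail.
issued = smith.generate_key()
assert smith.verify_key(issued.key, issued.hash, issued.salt)

def verify_and_upgrade(
    provided: str, stored_hash: str, stored_salt: str | None
) -> tuple[str, str] | None:
    """Return a (hash, salt) pair to persist, or None if the key is invalid."""
    if not smith.verify_key(provided, stored_hash, stored_salt):
        return None
    if stored_salt is None:          # legacy SHA256 row: rehash with Scrypt now
        return smith.hash_key(provided)
    return stored_hash, stored_salt  # already Scrypt-hashed: keep as-is
```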

View File

@@ -0,0 +1,79 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
def test_generate_api_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
assert key.key.startswith(keysmith.PREFIX)
assert key.head == key.key[: keysmith.HEAD_LENGTH]
assert key.tail == key.key[-keysmith.TAIL_LENGTH :]
assert len(key.hash) == 64 # 32 bytes hex encoded
assert len(key.salt) == 64 # 32 bytes hex encoded
def test_verify_new_secure_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test correct key validates
assert keysmith.verify_key(key.key, key.hash, key.salt) is True
# Test wrong key fails
wrong_key = f"{keysmith.PREFIX}wrongkey123"
assert keysmith.verify_key(wrong_key, key.hash, key.salt) is False
def test_verify_legacy_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}legacykey123"
legacy_hash = hashlib.sha256(legacy_key.encode()).hexdigest()
# Test legacy key validates without salt
assert keysmith.verify_key(legacy_key, legacy_hash) is True
# Test wrong legacy key fails
wrong_key = f"{keysmith.PREFIX}wronglegacy"
assert keysmith.verify_key(wrong_key, legacy_hash) is False
def test_rehash_existing_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}migratekey123"
# Migrate the legacy key
new_hash, new_salt = keysmith.hash_key(legacy_key)
# Verify migrated key works
assert keysmith.verify_key(legacy_key, new_hash, new_salt) is True
# Verify different key fails with migrated hash
wrong_key = f"{keysmith.PREFIX}wrongkey"
assert keysmith.verify_key(wrong_key, new_hash, new_salt) is False
def test_invalid_key_prefix():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test key without proper prefix fails
invalid_key = "invalid_prefix_key"
assert keysmith.verify_key(invalid_key, key.hash, key.salt) is False
def test_secure_hash_requires_salt():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Secure hash without salt should fail
assert keysmith.verify_key(key.key, key.hash) is False
def test_invalid_salt_format():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Invalid salt format should fail gracefully
assert keysmith.verify_key(key.key, key.hash, "invalid_hex") is False

View File

@@ -1,3 +1,5 @@
from typing import Optional
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
@@ -13,8 +15,8 @@ class RateLimitSettings(BaseSettings):
default="6379", description="Redis port", validation_alias="REDIS_PORT"
)
redis_password: str = Field(
default="password",
redis_password: Optional[str] = Field(
default=None,
description="Redis password",
validation_alias="REDIS_PASSWORD",
)

View File

@@ -11,7 +11,7 @@ class RateLimiter:
self,
redis_host: str = RATE_LIMIT_SETTINGS.redis_host,
redis_port: str = RATE_LIMIT_SETTINGS.redis_port,
redis_password: str = RATE_LIMIT_SETTINGS.redis_password,
redis_password: str | None = RATE_LIMIT_SETTINGS.redis_password,
requests_per_minute: int = RATE_LIMIT_SETTINGS.requests_per_minute,
):
self.redis = Redis(
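
Taken together with the settings change above, this lets the limiter talk to
an unauthenticated Redis (such as the plain `redis:latest` service the CI now
uses). A minimal sketch of the idea, assuming redis-py's standard client:

```python
from redis import Redis

def make_redis(host: str, port: str, password: str | None) -> Redis:
    # redis-py treats password=None as "no AUTH", so the same code path
    # serves both password-protected and open Redis instances.
    return Redis(host=host, port=int(port), password=password)
```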

View File

@@ -1,90 +1,68 @@
import asyncio
import inspect
import logging
import threading
import time
from functools import wraps
from typing import (
Awaitable,
Any,
Callable,
ParamSpec,
Protocol,
Tuple,
TypeVar,
cast,
overload,
runtime_checkable,
)
P = ParamSpec("P")
R = TypeVar("R")
R_co = TypeVar("R_co", covariant=True)
logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
def _make_hashable_key(
args: tuple[Any, ...], kwargs: dict[str, Any]
) -> tuple[Any, ...]:
"""
Convert args and kwargs into a hashable cache key.
Handles unhashable types like dict, list, set by converting them to
their sorted string representations.
"""
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def make_hashable(obj: Any) -> Any:
"""Recursively convert an object to a hashable representation."""
if isinstance(obj, dict):
# Sort dict items to ensure consistent ordering
return (
"__dict__",
tuple(sorted((k, make_hashable(v)) for k, v in obj.items())),
)
elif isinstance(obj, (list, tuple)):
return ("__list__", tuple(make_hashable(item) for item in obj))
elif isinstance(obj, set):
return ("__set__", tuple(sorted(make_hashable(item) for item in obj)))
elif hasattr(obj, "__dict__"):
# Handle objects with __dict__ attribute
return ("__obj__", obj.__class__.__name__, make_hashable(obj.__dict__))
else:
# For basic hashable types (str, int, bool, None, etc.)
try:
hash(obj)
return obj
except TypeError:
# Fallback: convert to string representation
return ("__str__", str(obj))
def thread_cached(
func: Callable[P, R] | Callable[P, Awaitable[R]],
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = await cast(Callable[P, Awaitable[R]], func)(
*args, **kwargs
)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
if clear := getattr(func, "clear_cache", None):
clear()
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
hashable_args = tuple(make_hashable(arg) for arg in args)
hashable_kwargs = tuple(sorted((k, make_hashable(v)) for k, v in kwargs.items()))
return (hashable_args, hashable_kwargs)
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
"""Protocol for async functions with cache management methods."""
class CachedFunction(Protocol[P, R_co]):
"""Protocol for cached functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
@@ -94,101 +72,169 @@ class AsyncCachedFunction(Protocol[P, R_co]):
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
def cache_delete(self, *args: P.args, **kwargs: P.kwargs) -> bool:
"""Delete a specific cache entry by its arguments. Returns True if entry existed."""
return False
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
"""Call the cached function."""
return None # type: ignore
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
def cached(
*,
maxsize: int = 128,
ttl_seconds: int | None = None,
) -> Callable[[Callable], CachedFunction]:
"""
TTL (Time To Live) cache decorator for async functions.
Thundering herd safe cache decorator for both sync and async functions.
Similar to functools.lru_cache but works with async functions and includes optional TTL.
Uses double-checked locking to prevent multiple threads/coroutines from
executing the expensive operation simultaneously during cache misses.
Args:
func: The function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
ttl_seconds: Time to live in seconds. If None, entries never expire (like lru_cache)
ttl_seconds: Time to live in seconds. If None, entries never expire
Returns:
Decorator function
Decorated function or decorator
Example:
# With TTL
@async_ttl_cache(maxsize=1000, ttl_seconds=300)
async def api_call(param: str) -> dict:
@cache() # Default: maxsize=128, no TTL
def expensive_sync_operation(param: str) -> dict:
return {"result": param}
# Without TTL (permanent cache like lru_cache)
@async_ttl_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
@cache() # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
@cache(maxsize=1000, ttl_seconds=300) # Custom maxsize and TTL
def another_operation(param: str) -> dict:
return {"result": param}
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
def decorator(target_func):
# Cache storage and locks
cache_storage = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
if inspect.iscoroutinefunction(target_func):
# Async function with asyncio.Lock
cache_lock = asyncio.Lock()
# Check if we have a valid cached entry
if key in cache_storage:
if ttl_seconds is None:
# No TTL - return cached result directly
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
else:
# With TTL - check expiration
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
@wraps(target_func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
async with cache_lock:
# Double-check: another coroutine might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
# Expired entry
del cache_storage[key]
logger.debug(
f"Cache entry expired for {async_func.__name__}"
)
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss or expired - fetch fresh data
logger.debug(
f"Cache miss for {async_func.__name__} with key: {str(key)[:50]}"
)
result = await async_func(*args, **kwargs)
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = await target_func(*args, **kwargs)
# Store in cache
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Simple cleanup when cache gets too large
if len(cache_storage) > maxsize:
# Remove oldest entries (simple FIFO cleanup)
cutoff = maxsize // 2
oldest_keys = list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
logger.debug(
f"Cache cleanup: removed {len(oldest_keys)} entries for {async_func.__name__}"
)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
return result
# Add cache management methods (similar to functools.lru_cache)
wrapper = async_wrapper
else:
# Sync function with threading.Lock
cache_lock = threading.Lock()
@wraps(target_func)
def sync_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
with cache_lock:
# Double-check: another thread might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = target_func(*args, **kwargs)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
wrapper = sync_wrapper
# Add cache management methods
def cache_clear() -> None:
cache_storage.clear()
@@ -199,68 +245,84 @@ def async_ttl_cache(
"ttl_seconds": ttl_seconds,
}
# Attach methods to wrapper
def cache_delete(*args, **kwargs) -> bool:
"""Delete a specific cache entry. Returns True if entry existed."""
key = _make_hashable_key(args, kwargs)
if key in cache_storage:
del cache_storage[key]
return True
return False
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
setattr(wrapper, "cache_delete", cache_delete)
return cast(AsyncCachedFunction[P, R], wrapper)
return cast(CachedFunction, wrapper)
return decorator
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
def thread_cached(func):
"""
Process-level cache decorator for async functions (no TTL).
Thread-local cache decorator for both sync and async functions.
Similar to functools.lru_cache but works with async functions.
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Each thread gets its own cache, which is useful for request-scoped caching
in web applications where you want to cache within a single request but
not across requests.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
func: The function to cache
Returns:
Decorated function or decorator
Decorated function with thread-local caching
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
@thread_cached
def expensive_operation(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
@thread_cached # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
@wraps(func)
async def async_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = await func(*args, **kwargs)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)
@wraps(func)
def sync_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
"""Clear thread-local cache for a function."""
if clear := getattr(func, "clear_cache", None):
clear()
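
A short usage sketch of the unified `cached` decorator above, showing the
dict-argument support provided by `_make_hashable_key` and per-entry eviction
via `cache_delete` (names from the diff; the workload is invented):

```python
from autogpt_libs.utils.cache import cached

@cached(maxsize=64, ttl_seconds=30)
def lookup(filters: dict) -> str:
    # Unhashable arguments are fine: _make_hashable_key converts the dict
    # into a sorted, hashable representation before the cache lookup.
    return ",".join(f"{k}={v}" for k, v in sorted(filters.items()))

lookup({"a": 1, "b": 2})               # miss: executes the function
lookup({"b": 2, "a": 1})               # hit: same logical key after sorting
lookup.cache_delete({"a": 1, "b": 2})  # evict just this entry
print(lookup.cache_info())  # e.g. {'size': 0, 'maxsize': 64, 'ttl_seconds': 30}
```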

View File

@@ -16,12 +16,7 @@ from unittest.mock import Mock
import pytest
from autogpt_libs.utils.cache import (
async_cache,
async_ttl_cache,
clear_thread_cache,
thread_cached,
)
from autogpt_libs.utils.cache import cached, clear_thread_cache, thread_cached
class TestThreadCached:
@@ -330,102 +325,202 @@ class TestThreadCached:
assert mock.call_count == 2
class TestAsyncTTLCache:
"""Tests for the @async_ttl_cache decorator."""
class TestCache:
"""Tests for the unified @cache decorator (works for both sync and async)."""
@pytest.mark.asyncio
async def test_basic_caching(self):
"""Test basic caching functionality."""
def test_basic_sync_caching(self):
"""Test basic sync caching functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def cached_function(x: int, y: int = 0) -> int:
@cached()
def expensive_sync_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
return x + y
# First call
result1 = expensive_sync_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = expensive_sync_function(1, 2)
assert result2 == 3
assert call_count == 1
# Different args - should call function again
result3 = expensive_sync_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_basic_async_caching(self):
"""Test basic async caching functionality."""
call_count = 0
@cached()
async def expensive_async_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
result1 = await expensive_async_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
result2 = await expensive_async_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
assert call_count == 1
# Different args - should call function again
result3 = await cached_function(2, 3)
result3 = await expensive_async_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_ttl_expiration(self):
"""Test that cache entries expire after TTL."""
def test_sync_thundering_herd_protection(self):
"""Test that concurrent sync calls don't cause thundering herd."""
call_count = 0
results = []
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def short_lived_cache(x: int) -> int:
@cached()
def slow_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 2
time.sleep(0.1) # Simulate expensive operation
return x * x
def worker():
result = slow_function(5)
results.append(result)
# Launch multiple concurrent threads
with ThreadPoolExecutor(max_workers=5) as executor:
futures = [executor.submit(worker) for _ in range(5)]
for future in futures:
future.result()
# All results should be the same
assert all(result == 25 for result in results)
# Only one thread should have executed the expensive operation
assert call_count == 1
@pytest.mark.asyncio
async def test_async_thundering_herd_protection(self):
"""Test that concurrent async calls don't cause thundering herd."""
call_count = 0
@cached()
async def slow_async_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.1) # Simulate expensive operation
return x * x
# Launch concurrent coroutines
tasks = [slow_async_function(7) for _ in range(5)]
results = await asyncio.gather(*tasks)
# All results should be the same
assert all(result == 49 for result in results)
# Only one coroutine should have executed the expensive operation
assert call_count == 1
def test_ttl_functionality(self):
"""Test TTL functionality with sync function."""
call_count = 0
@cached(maxsize=10, ttl_seconds=1) # Short TTL
def ttl_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 3
# First call
result1 = await short_lived_cache(5)
assert result1 == 10
result1 = ttl_function(3)
assert result1 == 9
assert call_count == 1
# Second call immediately - should use cache
result2 = await short_lived_cache(5)
assert result2 == 10
result2 = ttl_function(3)
assert result2 == 9
assert call_count == 1
# Wait for TTL to expire
time.sleep(1.1)
# Third call after expiration - should call function again
result3 = ttl_function(3)
assert result3 == 9
assert call_count == 2
@pytest.mark.asyncio
async def test_async_ttl_functionality(self):
"""Test TTL functionality with async function."""
call_count = 0
@cached(maxsize=10, ttl_seconds=1) # Short TTL
async def async_ttl_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x * 4
# First call
result1 = await async_ttl_function(3)
assert result1 == 12
assert call_count == 1
# Second call immediately - should use cache
result2 = await async_ttl_function(3)
assert result2 == 12
assert call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Third call after expiration - should call function again
result3 = await short_lived_cache(5)
assert result3 == 10
result3 = await async_ttl_function(3)
assert result3 == 12
assert call_count == 2
@pytest.mark.asyncio
async def test_cache_info(self):
def test_cache_info(self):
"""Test cache info functionality."""
@async_ttl_cache(maxsize=5, ttl_seconds=300)
async def info_test_function(x: int) -> int:
@cached(maxsize=10, ttl_seconds=60)
def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] == 300
assert info["maxsize"] == 10
assert info["ttl_seconds"] == 60
# Add an entry
await info_test_function(1)
info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
@pytest.mark.asyncio
async def test_cache_clear(self):
def test_cache_clear(self):
"""Test cache clearing functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def clearable_function(x: int) -> int:
@cached()
def clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 4
# First call
result1 = await clearable_function(2)
result1 = clearable_function(2)
assert result1 == 8
assert call_count == 1
# Second call - should use cache
result2 = await clearable_function(2)
result2 = clearable_function(2)
assert result2 == 8
assert call_count == 1
@@ -433,273 +528,149 @@ class TestAsyncTTLCache:
clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await clearable_function(2)
result3 = clearable_function(2)
assert result3 == 8
assert call_count == 2
@pytest.mark.asyncio
async def test_maxsize_cleanup(self):
"""Test that cache cleans up when maxsize is exceeded."""
async def test_async_cache_clear(self):
"""Test cache clearing functionality with async function."""
call_count = 0
@async_ttl_cache(maxsize=3, ttl_seconds=60)
async def size_limited_function(x: int) -> int:
@cached()
async def async_clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x * 5
# First call
result1 = await async_clearable_function(2)
assert result1 == 10
assert call_count == 1
# Second call - should use cache
result2 = await async_clearable_function(2)
assert result2 == 10
assert call_count == 1
# Clear cache
async_clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await async_clearable_function(2)
assert result3 == 10
assert call_count == 2
@pytest.mark.asyncio
async def test_async_function_returns_results_not_coroutines(self):
"""Test that cached async functions return actual results, not coroutines."""
call_count = 0
@cached()
async def async_result_function(x: int) -> str:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return f"result_{x}"
# First call
result1 = await async_result_function(1)
assert result1 == "result_1"
assert isinstance(result1, str) # Should be string, not coroutine
assert call_count == 1
# Second call - should return cached result (string), not coroutine
result2 = await async_result_function(1)
assert result2 == "result_1"
assert isinstance(result2, str) # Should be string, not coroutine
assert call_count == 1 # Function should not be called again
# Verify results are identical
assert result1 is result2 # Should be same cached object
def test_cache_delete(self):
"""Test selective cache deletion functionality."""
call_count = 0
@cached()
def deletable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 6
# First call for x=1
result1 = deletable_function(1)
assert result1 == 6
assert call_count == 1
# First call for x=2
result2 = deletable_function(2)
assert result2 == 12
assert call_count == 2
# Second calls - should use cache
assert deletable_function(1) == 6
assert deletable_function(2) == 12
assert call_count == 2
# Delete specific entry for x=1
was_deleted = deletable_function.cache_delete(1)
assert was_deleted is True
# Call with x=1 should execute function again
result3 = deletable_function(1)
assert result3 == 6
assert call_count == 3
# Call with x=2 should still use cache
assert deletable_function(2) == 12
assert call_count == 3
# Try to delete non-existent entry
was_deleted = deletable_function.cache_delete(99)
assert was_deleted is False
@pytest.mark.asyncio
async def test_async_cache_delete(self):
"""Test selective cache deletion functionality with async function."""
call_count = 0
@cached()
async def async_deletable_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x * 7
# First call for x=1
result1 = await async_deletable_function(1)
assert result1 == 7
assert call_count == 1
# First call for x=2
result2 = await async_deletable_function(2)
assert result2 == 14
assert call_count == 2
# Second calls - should use cache
assert await async_deletable_function(1) == 7
assert await async_deletable_function(2) == 14
assert call_count == 2
# Delete specific entry for x=1
was_deleted = async_deletable_function.cache_delete(1)
assert was_deleted is True
# Call with x=1 should execute function again
result3 = await async_deletable_function(1)
assert result3 == 7
assert call_count == 3
# Call with x=2 should still use cache
assert await async_deletable_function(2) == 14
assert call_count == 3
# Try to delete non-existent entry
was_deleted = async_deletable_function.cache_delete(99)
assert was_deleted is False
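Taken together, these tests pin down the public contract of the @cached decorator: per-argument memoization for both sync and async callables, optional TTL expiry, and cache_info / cache_delete / cache_clear helpers on the wrapped function. A minimal usage sketch — the import path matches the one used by the block loader later in this diff; the function body and argument values are illustrative only:

import asyncio
from autogpt_libs.utils.cache import cached

@cached(maxsize=128, ttl_seconds=300)
async def fetch_profile(user_id: str) -> dict:
    # Expensive lookup; the result is cached per argument tuple for 5 minutes.
    await asyncio.sleep(0.1)
    return {"id": user_id}

async def main():
    await fetch_profile("u1")          # miss: executes the function
    await fetch_profile("u1")          # hit: served from cache, no re-execution
    fetch_profile.cache_delete("u1")   # evict a single entry by its arguments
    fetch_profile.cache_clear()        # evict everything
    print(fetch_profile.cache_info())  # {"size": ..., "maxsize": 128, "ttl_seconds": 300}

asyncio.run(main())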

View File

@@ -1002,6 +1002,18 @@ dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "nodeenv"
version = "1.9.1"
description = "Node.js virtual environment builder"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["dev"]
files = [
{file = "nodeenv-1.9.1-py2.py3-none-any.whl", hash = "sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9"},
{file = "nodeenv-1.9.1.tar.gz", hash = "sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f"},
]
[[package]]
name = "opentelemetry-api"
version = "1.35.0"
@@ -1347,6 +1359,27 @@ files = [
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pyright"
version = "1.1.404"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
]
[package.dependencies]
nodeenv = ">=1.6.0"
typing-extensions = ">=4.1"
[package.extras]
all = ["nodejs-wheel-binaries", "twine (>=3.4.1)"]
dev = ["twine (>=3.4.1)"]
nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.4.1"
@@ -1534,31 +1567,31 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.12.9"
version = "0.12.11"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.9-py3-none-linux_armv6l.whl", hash = "sha256:fcebc6c79fcae3f220d05585229463621f5dbf24d79fdc4936d9302e177cfa3e"},
{file = "ruff-0.12.9-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:aed9d15f8c5755c0e74467731a007fcad41f19bcce41cd75f768bbd687f8535f"},
{file = "ruff-0.12.9-py3-none-macosx_11_0_arm64.whl", hash = "sha256:5b15ea354c6ff0d7423814ba6d44be2807644d0c05e9ed60caca87e963e93f70"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d596c2d0393c2502eaabfef723bd74ca35348a8dac4267d18a94910087807c53"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1b15599931a1a7a03c388b9c5df1bfa62be7ede6eb7ef753b272381f39c3d0ff"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3d02faa2977fb6f3f32ddb7828e212b7dd499c59eb896ae6c03ea5c303575756"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:17d5b6b0b3a25259b69ebcba87908496e6830e03acfb929ef9fd4c58675fa2ea"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:72db7521860e246adbb43f6ef464dd2a532ef2ef1f5dd0d470455b8d9f1773e0"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a03242c1522b4e0885af63320ad754d53983c9599157ee33e77d748363c561ce"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fc83e4e9751e6c13b5046d7162f205d0a7bac5840183c5beebf824b08a27340"},
{file = "ruff-0.12.9-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:881465ed56ba4dd26a691954650de6ad389a2d1fdb130fe51ff18a25639fe4bb"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:43f07a3ccfc62cdb4d3a3348bf0588358a66da756aa113e071b8ca8c3b9826af"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:07adb221c54b6bba24387911e5734357f042e5669fa5718920ee728aba3cbadc"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f5cd34fabfdea3933ab85d72359f118035882a01bff15bd1d2b15261d85d5f66"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:f6be1d2ca0686c54564da8e7ee9e25f93bdd6868263805f8c0b8fc6a449db6d7"},
{file = "ruff-0.12.9-py3-none-win32.whl", hash = "sha256:cc7a37bd2509974379d0115cc5608a1a4a6c4bff1b452ea69db83c8855d53f93"},
{file = "ruff-0.12.9-py3-none-win_amd64.whl", hash = "sha256:6fb15b1977309741d7d098c8a3cb7a30bc112760a00fb6efb7abc85f00ba5908"},
{file = "ruff-0.12.9-py3-none-win_arm64.whl", hash = "sha256:63c8c819739d86b96d500cce885956a1a48ab056bbcbc61b747ad494b2485089"},
{file = "ruff-0.12.9.tar.gz", hash = "sha256:fbd94b2e3c623f659962934e52c2bea6fc6da11f667a427a368adaf3af2c866a"},
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
]
[[package]]
@@ -1740,7 +1773,6 @@ files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
]
markers = {dev = "python_version < \"3.11\""}
[[package]]
name = "typing-inspection"
@@ -1897,4 +1929,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "ef7818fba061cea2841c6d7ca4852acde83e4f73b32fca1315e58660002bb0d0"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"

View File

@@ -9,6 +9,7 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
@@ -21,11 +22,12 @@ supabase = "^2.16.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.12.9"
pyright = "^1.1.404"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
[build-system]
requires = ["poetry-core"]

View File

@@ -21,7 +21,7 @@ PRISMA_SCHEMA="postgres/schema.prisma"
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
# REDIS_PASSWORD=
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
@@ -66,6 +66,11 @@ NVIDIA_API_KEY=
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# Notion OAuth App server credentials - https://developers.notion.com/docs/authorization
# Configure a public integration
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):

View File

@@ -9,4 +9,12 @@ secrets/*
!secrets/.gitkeep
*.ignore.*
*.ign.*
# Load test results and reports
load-tests/*_RESULTS.md
load-tests/*_REPORT.md
load-tests/results/
load-tests/*.json
load-tests/*.log
load-tests/node_modules/*

View File

@@ -9,8 +9,15 @@ WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
# Install Node.js repository key and setup
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y curl ca-certificates gnupg \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
# Update package list and install Python, Node.js, and build dependencies
RUN apt-get update \
&& apt-get install -y \
python3.13 \
python3.13-dev \
@@ -20,7 +27,9 @@ RUN apt-get update --allow-releaseinfo-change --fix-missing \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client \
nodejs \
&& rm -rf /var/lib/apt/lists/*
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
@@ -54,13 +63,18 @@ ENV PATH=/opt/poetry/bin:$PATH
# Install Python without upgrading system-managed packages
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
# Copy Node.js installation for Prisma
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm
COPY --from=builder /usr/bin/npx /usr/bin/npx
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"

View File

@@ -1,4 +1,3 @@
import importlib
import logging
import os
@@ -6,6 +5,8 @@ import re
from pathlib import Path
from typing import TYPE_CHECKING, TypeVar
from autogpt_libs.utils.cache import cached
logger = logging.getLogger(__name__)
@@ -15,7 +16,7 @@ if TYPE_CHECKING:
T = TypeVar("T")
@cached()
def load_all_blocks() -> dict[str, type["Block"]]:
from backend.data.block import Block
from backend.util.settings import Config

View File

@@ -1,8 +1,6 @@
import logging
from typing import Any, Optional
from backend.data.block import (
Block,
BlockCategory,
@@ -12,7 +10,7 @@ from backend.data.block import (
BlockType,
get_block,
)
from backend.data.execution import ExecutionStatus
from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -33,7 +31,7 @@ class AgentExecutorBlock(Block):
input_schema: dict = SchemaField(description="Input schema for the graph")
output_schema: dict = SchemaField(description="Output schema for the graph")
nodes_input_masks: Optional[NodesInputMasks] = SchemaField(
default=None, hidden=True
)

View File

@@ -0,0 +1,154 @@
from enum import Enum
from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType
class GeminiImageModel(str, Enum):
NANO_BANANA = "google/nano-banana"
class OutputFormat(str, Enum):
JPG = "jpg"
PNG = "png"
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
class AIImageCustomizerBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
description="Replicate API key with permissions for Google Gemini image models",
)
prompt: str = SchemaField(
description="A text description of the image you want to generate",
title="Prompt",
)
model: GeminiImageModel = SchemaField(
description="The AI model to use for image generation and editing",
default=GeminiImageModel.NANO_BANANA,
title="Model",
)
images: list[MediaFileType] = SchemaField(
description="Optional list of input images to reference or modify",
default=[],
title="Input Images",
)
output_format: OutputFormat = SchemaField(
description="Format of the output image",
default=OutputFormat.PNG,
title="Output Format",
)
class Output(BlockSchema):
image_url: MediaFileType = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
id="d76bbe4c-930e-4894-8469-b66775511f71",
description=(
"Generate and edit custom images using Google's Nano-Banana model from Gemini 2.5. "
"Provide a prompt and optional reference images to create or modify images."
),
categories={BlockCategory.AI, BlockCategory.MULTIMEDIA},
input_schema=AIImageCustomizerBlock.Input,
output_schema=AIImageCustomizerBlock.Output,
test_input={
"prompt": "Make the scene more vibrant and colorful",
"model": GeminiImageModel.NANO_BANANA,
"images": [],
"output_format": OutputFormat.JPG,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("image_url", "https://replicate.delivery/generated-image.jpg"),
],
test_mock={
"run_model": lambda *args, **kwargs: MediaFileType(
"https://replicate.delivery/generated-image.jpg"
),
},
test_credentials=TEST_CREDENTIALS,
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
try:
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.value,
prompt=input_data.prompt,
images=input_data.images,
output_format=input_data.output_format.value,
)
yield "image_url", result
except Exception as e:
yield "error", str(e)
async def run_model(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
images: list[MediaFileType],
output_format: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params: dict = {
"prompt": prompt,
"output_format": output_format,
}
# Add images to input if provided (API expects "image_input" parameter)
if images:
input_params["image_input"] = [str(img) for img in images]
output: FileOutput | str = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
if isinstance(output, FileOutput):
return MediaFileType(output.url)
if isinstance(output, str):
return MediaFileType(output)
raise ValueError("No output received from the model")

View File

@@ -661,6 +661,167 @@ async def update_field(
#################################################################
async def get_table_schema(
credentials: Credentials,
base_id: str,
table_id_or_name: str,
) -> dict:
"""
Get the schema for a specific table, including all field definitions.
Args:
credentials: Airtable API credentials
base_id: The base ID
table_id_or_name: The table ID or name
Returns:
Dict containing table schema with fields information
"""
# First get all tables to find the right one
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
tables = data.get("tables", [])
# Find the matching table
for table in tables:
if table.get("id") == table_id_or_name or table.get("name") == table_id_or_name:
return table
raise ValueError(f"Table '{table_id_or_name}' not found in base '{base_id}'")
def get_empty_value_for_field(field_type: str) -> Any:
"""
Return the appropriate empty value for a given Airtable field type.
Args:
field_type: The Airtable field type
Returns:
The appropriate empty value for that field type
"""
# Fields that should be false when empty
if field_type == "checkbox":
return False
# Fields that should be empty arrays
if field_type in [
"multipleSelects",
"multipleRecordLinks",
"multipleAttachments",
"multipleLookupValues",
"multipleCollaborators",
]:
return []
# Fields that should be 0 when empty (numeric types)
if field_type in [
"number",
"percent",
"currency",
"rating",
"duration",
"count",
"autoNumber",
]:
return 0
# Fields that should be empty strings
if field_type in [
"singleLineText",
"multilineText",
"email",
"url",
"phoneNumber",
"richText",
"barcode",
]:
return ""
# Everything else gets null (dates, single selects, formulas, etc.)
return None
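A few concrete mappings, taken directly from the branches above:

assert get_empty_value_for_field("checkbox") is False
assert get_empty_value_for_field("multipleSelects") == []
assert get_empty_value_for_field("currency") == 0
assert get_empty_value_for_field("email") == ""
assert get_empty_value_for_field("date") is None  # falls through to the default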
async def normalize_records(
records: list[dict],
table_schema: dict,
include_field_metadata: bool = False,
) -> dict:
"""
Normalize Airtable records to include all fields with proper empty values.
Args:
records: List of record objects from Airtable API
table_schema: Table schema containing field definitions
include_field_metadata: Whether to include field metadata in response
Returns:
Dict with normalized records and optionally field metadata
"""
fields = table_schema.get("fields", [])
# Normalize each record
normalized_records = []
for record in records:
normalized = {
"id": record.get("id"),
"createdTime": record.get("createdTime"),
"fields": {},
}
# Add existing fields
existing_fields = record.get("fields", {})
# Add all fields from schema, using empty values for missing ones
for field in fields:
field_name = field["name"]
field_type = field["type"]
if field_name in existing_fields:
# Field exists, use its value
normalized["fields"][field_name] = existing_fields[field_name]
else:
# Field is missing, add appropriate empty value
normalized["fields"][field_name] = get_empty_value_for_field(field_type)
normalized_records.append(normalized)
# Build result dictionary
if include_field_metadata:
field_metadata = {}
for field in fields:
metadata = {"type": field["type"], "id": field["id"]}
# Add type-specific metadata
options = field.get("options", {})
if field["type"] == "currency" and "symbol" in options:
metadata["symbol"] = options["symbol"]
metadata["precision"] = options.get("precision", 2)
elif field["type"] == "duration" and "durationFormat" in options:
metadata["format"] = options["durationFormat"]
elif field["type"] == "percent" and "precision" in options:
metadata["precision"] = options["precision"]
elif (
field["type"] in ["singleSelect", "multipleSelects"]
and "choices" in options
):
metadata["choices"] = [choice["name"] for choice in options["choices"]]
elif field["type"] == "rating" and "max" in options:
metadata["max"] = options["max"]
metadata["icon"] = options.get("icon", "star")
metadata["color"] = options.get("color", "yellowBright")
field_metadata[field["name"]] = metadata
return {"records": normalized_records, "field_metadata": field_metadata}
else:
return {"records": normalized_records}
async def list_records(
credentials: Credentials,
base_id: str,
@@ -1249,3 +1410,26 @@ async def list_bases(
)
return response.json()
async def get_base_tables(
credentials: Credentials,
base_id: str,
) -> list[dict]:
"""
Get all tables for a specific base.
Args:
credentials: Airtable API credentials
base_id: The ID of the base
Returns:
list[dict]: List of table objects with their schemas
"""
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
return data.get("tables", [])

View File

@@ -14,13 +14,13 @@ from backend.sdk import (
SchemaField,
)
from ._api import create_base, get_base_tables, list_bases
from ._config import airtable
class AirtableCreateBaseBlock(Block):
"""
Creates a new base in an Airtable workspace, or returns existing base if one with the same name exists.
"""
class Input(BlockSchema):
@@ -31,6 +31,10 @@ class AirtableCreateBaseBlock(Block):
description="The workspace ID where the base will be created"
)
name: str = SchemaField(description="The name of the new base")
find_existing: bool = SchemaField(
description="If true, return existing base with same name instead of creating duplicate",
default=True,
)
tables: list[dict] = SchemaField(
description="At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties",
default=[
@@ -50,14 +54,18 @@ class AirtableCreateBaseBlock(Block):
)
class Output(BlockSchema):
base_id: str = SchemaField(description="The ID of the created base")
base_id: str = SchemaField(description="The ID of the created or found base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
was_created: bool = SchemaField(
description="True if a new base was created, False if existing was found",
default=True,
)
def __init__(self):
super().__init__(
id="f59b88a8-54ce-4676-a508-fd614b4e8dce",
description="Create a new base in Airtable",
description="Create or find a base in Airtable",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
@@ -66,6 +74,31 @@ class AirtableCreateBaseBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# If find_existing is true, check if a base with this name already exists
if input_data.find_existing:
# List all bases to check for existing one with same name
# Note: Airtable API doesn't have a direct search, so we need to list and filter
existing_bases = await list_bases(credentials)
for base in existing_bases.get("bases", []):
if base.get("name") == input_data.name:
# Base already exists, return it
base_id = base.get("id")
yield "base_id", base_id
yield "was_created", False
# Get the tables for this base
try:
tables = await get_base_tables(credentials, base_id)
yield "tables", tables
for table in tables:
yield "table", table
except Exception:
# If we can't get tables, return empty list
yield "tables", []
return
# No existing base found or find_existing is false, create new one
data = await create_base(
credentials,
input_data.workspace_id,
@@ -74,6 +107,7 @@ class AirtableCreateBaseBlock(Block):
)
yield "base_id", data.get("id", None)
yield "was_created", True
yield "tables", data.get("tables", [])
for table in data.get("tables", []):
yield "table", table

View File

@@ -2,7 +2,7 @@
Airtable record operation blocks.
"""
from typing import Optional, cast
from backend.sdk import (
APIKeyCredentials,
@@ -18,7 +18,9 @@ from ._api import (
create_record,
delete_multiple_records,
get_record,
get_table_schema,
list_records,
normalize_records,
update_multiple_records,
)
from ._config import airtable
@@ -54,12 +56,24 @@ class AirtableListRecordsBlock(Block):
return_fields: list[str] = SchemaField(
description="Specific fields to return (comma-separated)", default=[]
)
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
)
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -73,6 +87,7 @@ class AirtableListRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
data = await list_records(
credentials,
input_data.base_id,
@@ -88,8 +103,33 @@ class AirtableListRecordsBlock(Block):
fields=input_data.return_fields if input_data.return_fields else None,
)
yield "records", data.get("records", [])
yield "offset", data.get("offset", None)
records = data.get("records", [])
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
records,
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
yield "records", normalized_data["records"]
yield "offset", data.get("offset", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "records", records
yield "offset", data.get("offset", None)
class AirtableGetRecordBlock(Block):
@@ -104,11 +144,23 @@ class AirtableGetRecordBlock(Block):
base_id: str = SchemaField(description="The Airtable base ID")
table_id_or_name: str = SchemaField(description="Table ID or name")
record_id: str = SchemaField(description="The record ID to retrieve")
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -122,6 +174,7 @@ class AirtableGetRecordBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
record = await get_record(
credentials,
input_data.base_id,
@@ -129,9 +182,34 @@ class AirtableGetRecordBlock(Block):
input_data.record_id,
)
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the single record (wrap in list and unwrap result)
normalized_data = await normalize_records(
[record],
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
normalized_record = normalized_data["records"][0]
yield "id", normalized_record.get("id", None)
yield "fields", normalized_record.get("fields", None)
yield "created_time", normalized_record.get("createdTime", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
class AirtableCreateRecordsBlock(Block):
@@ -148,6 +226,10 @@ class AirtableCreateRecordsBlock(Block):
records: list[dict] = SchemaField(
description="Array of records to create (each with 'fields' object)"
)
skip_normalization: bool = SchemaField(
description="Skip output normalization to get raw Airtable response (faster but may have missing fields)",
default=False,
)
typecast: bool = SchemaField(
description="Automatically convert string values to appropriate types",
default=False,
@@ -173,7 +255,7 @@ class AirtableCreateRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# The create_record API expects records in a specific format
data = await create_record(
credentials,
input_data.base_id,
@@ -182,8 +264,22 @@ class AirtableCreateRecordsBlock(Block):
typecast=input_data.typecast if input_data.typecast else None,
return_fields_by_field_id=input_data.return_fields_by_field_id,
)
result_records = cast(list[dict], data.get("records", []))
yield "records", data.get("records", [])
# Normalize output unless explicitly disabled
if not input_data.skip_normalization and result_records:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
result_records, table_schema, include_field_metadata=False
)
result_records = normalized_data["records"]
yield "records", result_records
details = data.get("details", None)
if details:
yield "details", details

View File

@@ -0,0 +1,3 @@
from .text_overlay import BannerbearTextOverlayBlock
__all__ = ["BannerbearTextOverlayBlock"]

View File

@@ -0,0 +1,8 @@
from backend.sdk import BlockCostType, ProviderBuilder
bannerbear = (
ProviderBuilder("bannerbear")
.with_api_key("BANNERBEAR_API_KEY", "Bannerbear API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,239 @@
import uuid
from typing import TYPE_CHECKING, Any, Dict, List
if TYPE_CHECKING:
pass
from pydantic import SecretStr
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import bannerbear
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="bannerbear",
api_key=SecretStr("mock-bannerbear-api-key"),
title="Mock Bannerbear API Key",
)
class TextModification(BlockSchema):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
text: str = SchemaField(description="The text content to add to this layer")
color: str = SchemaField(
description="Hex color code for the text (e.g., '#FF0000')",
default="",
advanced=True,
)
font_family: str = SchemaField(
description="Font family to use for the text",
default="",
advanced=True,
)
font_size: int = SchemaField(
description="Font size in pixels",
default=0,
advanced=True,
)
font_weight: str = SchemaField(
description="Font weight (e.g., 'bold', 'normal')",
default="",
advanced=True,
)
text_align: str = SchemaField(
description="Text alignment (left, center, right)",
default="",
advanced=True,
)
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
template_id: str = SchemaField(
description="The unique ID of your Bannerbear template"
)
project_id: str = SchemaField(
description="Optional: Project ID (required when using Master API Key)",
default="",
advanced=True,
)
text_modifications: List[TextModification] = SchemaField(
description="List of text layers to modify in the template"
)
image_url: str = SchemaField(
description="Optional: URL of an image to use in the template",
default="",
advanced=True,
)
image_layer_name: str = SchemaField(
description="Optional: Name of the image layer in the template",
default="photo",
advanced=True,
)
webhook_url: str = SchemaField(
description="Optional: URL to receive webhook notification when image is ready",
default="",
advanced=True,
)
metadata: str = SchemaField(
description="Optional: Custom metadata to attach to the image",
default="",
advanced=True,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
image_url: str = SchemaField(
description="URL of the generated image (if synchronous) or placeholder"
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="c7d3a5c2-05fc-450e-8dce-3b0e04626009",
description="Add text overlay to images using Bannerbear templates. Perfect for creating social media graphics, marketing materials, and dynamic image content.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"template_id": "jJWBKNELpQPvbX5R93Gk",
"text_modifications": [
{
"name": "headline",
"text": "Amazing Product Launch!",
"color": "#FF0000",
},
{
"name": "subtitle",
"text": "50% OFF Today Only",
},
],
"credentials": {
"provider": "bannerbear",
"id": str(uuid.uuid4()),
"type": "api_key",
},
},
test_output=[
("success", True),
("image_url", "https://cdn.bannerbear.com/test-image.jpg"),
("uid", "test-uid-123"),
("status", "completed"),
],
test_mock={
"_make_api_request": lambda *args, **kwargs: {
"uid": "test-uid-123",
"status": "completed",
"image_url": "https://cdn.bannerbear.com/test-image.jpg",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def _make_api_request(self, payload: dict, api_key: str) -> dict:
"""Make the actual API request to Bannerbear. This is separated for easy mocking in tests."""
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
response = await Requests().post(
"https://sync.api.bannerbear.com/v2/images",
headers=headers,
json=payload,
)
if response.status in [200, 201, 202]:
return response.json()
else:
error_msg = f"API request failed with status {response.status}"
if response.text:
try:
error_data = response.json()
error_msg = (
f"{error_msg}: {error_data.get('message', response.text)}"
)
except Exception:
error_msg = f"{error_msg}: {response.text}"
raise Exception(error_msg)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Build the modifications array
modifications = []
# Add text modifications
for text_mod in input_data.text_modifications:
mod_data: Dict[str, Any] = {
"name": text_mod.name,
"text": text_mod.text,
}
# Add optional text styling parameters only if they have values
if text_mod.color and text_mod.color.strip():
mod_data["color"] = text_mod.color
if text_mod.font_family and text_mod.font_family.strip():
mod_data["font_family"] = text_mod.font_family
if text_mod.font_size and text_mod.font_size > 0:
mod_data["font_size"] = text_mod.font_size
if text_mod.font_weight and text_mod.font_weight.strip():
mod_data["font_weight"] = text_mod.font_weight
if text_mod.text_align and text_mod.text_align.strip():
mod_data["text_align"] = text_mod.text_align
modifications.append(mod_data)
# Add image modification if provided and not empty
if input_data.image_url and input_data.image_url.strip():
modifications.append(
{
"name": input_data.image_layer_name,
"image_url": input_data.image_url,
}
)
# Build the request payload - only include non-empty optional fields
payload = {
"template": input_data.template_id,
"modifications": modifications,
}
# Add project_id if provided (required for Master API keys)
if input_data.project_id and input_data.project_id.strip():
payload["project_id"] = input_data.project_id
if input_data.webhook_url and input_data.webhook_url.strip():
payload["webhook_url"] = input_data.webhook_url
if input_data.metadata and input_data.metadata.strip():
payload["metadata"] = input_data.metadata
# Make the API request using the private method
data = await self._make_api_request(
payload, credentials.api_key.get_secret_value()
)
# Synchronous request - image should be ready
yield "success", True
yield "image_url", data.get("image_url", "")
yield "uid", data.get("uid", "")
yield "status", data.get("status", "completed")

View File

@@ -113,6 +113,7 @@ class DataForSeoClient:
include_serp_info: bool = False,
include_clickstream_data: bool = False,
limit: int = 100,
depth: Optional[int] = None,
) -> List[Dict[str, Any]]:
"""
Get related keywords from DataForSEO Labs.
@@ -125,6 +126,7 @@ class DataForSeoClient:
include_serp_info: Include SERP data
include_clickstream_data: Include clickstream metrics
limit: Maximum number of results (up to 3000)
depth: Keyword search depth (0-4), controls number of returned keywords
Returns:
API response with related keywords
@@ -148,6 +150,8 @@ class DataForSeoClient:
task_data["include_clickstream_data"] = include_clickstream_data
if limit is not None:
task_data["limit"] = limit
if depth is not None:
task_data["depth"] = depth
payload = [task_data]

View File

@@ -78,6 +78,12 @@ class DataForSeoRelatedKeywordsBlock(Block):
ge=1,
le=3000,
)
depth: int = SchemaField(
description="Keyword search depth (0-4). Controls the number of returned keywords: 0=1 keyword, 1=~8 keywords, 2=~72 keywords, 3=~584 keywords, 4=~4680 keywords",
default=1,
ge=0,
le=4,
)
class Output(BlockSchema):
related_keywords: List[RelatedKeyword] = SchemaField(
@@ -154,6 +160,7 @@ class DataForSeoRelatedKeywordsBlock(Block):
include_serp_info=input_data.include_serp_info,
include_clickstream_data=input_data.include_clickstream_data,
limit=input_data.limit,
depth=input_data.depth,
)
async def run(
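The specific counts quoted in the depth description follow from each keyword fanning out to roughly eight related keywords per level, so depth d returns about sum(8**k for k in 1..d) results before the limit cap is applied — a quick sanity check of the documented numbers, not DataForSEO's exact algorithm:

for d in range(5):
    print(d, 1 if d == 0 else sum(8**k for k in range(1, d + 1)))
# -> 0: 1, 1: 8, 2: 72, 3: 584, 4: 4680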

View File

@@ -1094,6 +1094,117 @@ class GmailGetThreadBlock(GmailBase):
return thread
async def _build_reply_message(
service, input_data, graph_exec_id: str, user_id: str
) -> tuple[str, str]:
"""
Builds a reply MIME message for Gmail threads.
Returns:
tuple: (base64-encoded raw message, threadId)
"""
# Get parent message for reply context
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
)
# Build headers dictionary, preserving all values for duplicate headers
headers = {}
for h in parent.get("payload", {}).get("headers", []):
name = h["name"].lower()
value = h["value"]
if name in headers:
# For duplicate headers, keep the first occurrence (most relevant for reply context)
continue
headers[name] = value
# Determine recipients if not specified
if not (input_data.to or input_data.cc or input_data.bcc):
if input_data.replyAll:
recipients = [parseaddr(headers.get("from", ""))[1]]
recipients += [addr for _, addr in getaddresses([headers.get("to", "")])]
recipients += [addr for _, addr in getaddresses([headers.get("cc", "")])]
# Use dict.fromkeys() for O(n) deduplication while preserving order
input_data.to = list(dict.fromkeys(filter(None, recipients)))
else:
# Check Reply-To header first, fall back to From header
reply_to = headers.get("reply-to", "")
from_addr = headers.get("from", "")
sender = parseaddr(reply_to if reply_to else from_addr)[1]
input_data.to = [sender] if sender else []
# Set subject with Re: prefix if not already present
if input_data.subject:
subject = input_data.subject
else:
parent_subject = headers.get("subject", "").strip()
# Only add "Re:" if not already present (case-insensitive check)
if parent_subject.lower().startswith("re:"):
subject = parent_subject
else:
subject = f"Re: {parent_subject}" if parent_subject else "Re:"
# Build references header for proper threading
references = headers.get("references", "").split()
if headers.get("message-id"):
references.append(headers["message-id"])
# Create MIME message
msg = MIMEMultipart()
if input_data.to:
msg["To"] = ", ".join(input_data.to)
if input_data.cc:
msg["Cc"] = ", ".join(input_data.cc)
if input_data.bcc:
msg["Bcc"] = ", ".join(input_data.bcc)
msg["Subject"] = subject
if headers.get("message-id"):
msg["In-Reply-To"] = headers["message-id"]
if references:
msg["References"] = " ".join(references)
# Use the helper function for consistent content type handling
msg.attach(_make_mime_text(input_data.body, input_data.content_type))
# Handle attachments
for attach in input_data.attachments:
local_path = await store_media_file(
user_id=user_id,
graph_exec_id=graph_exec_id,
file=attach,
return_content=False,
)
abs_path = get_exec_file_path(graph_exec_id, local_path)
part = MIMEBase("application", "octet-stream")
with open(abs_path, "rb") as f:
part.set_payload(f.read())
encoders.encode_base64(part)
part.add_header(
"Content-Disposition", f"attachment; filename={Path(abs_path).name}"
)
msg.attach(part)
# Encode message
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
return raw, input_data.threadId
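For reference, this is how the References/In-Reply-To chain grows per reply; the message IDs below are hypothetical, and the snippet mirrors the header logic above:

parent_headers = {  # hypothetical values from the metadata fetch above
    "message-id": "<msg-2@mail.example>",
    "references": "<msg-1@mail.example>",
}
references = parent_headers["references"].split()
references.append(parent_headers["message-id"])
# The reply is sent with:
#   In-Reply-To: <msg-2@mail.example>
#   References:  <msg-1@mail.example> <msg-2@mail.example>
assert " ".join(references) == "<msg-1@mail.example> <msg-2@mail.example>"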
class GmailReplyBlock(GmailBase):
"""
Replies to Gmail threads with intelligent content type detection.
@@ -1230,93 +1341,146 @@ class GmailReplyBlock(GmailBase):
async def _reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
# Send the message
return await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"threadId": input_data.threadId, "raw": raw})
.send(userId="me", body={"threadId": thread_id, "raw": raw})
.execute()
)
class GmailDraftReplyBlock(GmailBase):
"""
Creates draft replies to Gmail threads with intelligent content type detection.
Features:
- Automatic HTML detection: Draft replies containing HTML tags are formatted as text/html
- No hard-wrap for plain text: Plain text draft replies preserve natural line flow
- Manual content type override: Use content_type parameter to force specific format
- Reply-all functionality: Option to reply to all original recipients
- Thread preservation: Maintains proper email threading with headers
- Full Unicode/emoji support with UTF-8 encoding
"""
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.modify",
"https://www.googleapis.com/auth/gmail.readonly",
]
)
threadId: str = SchemaField(description="Thread ID to reply in")
parentMessageId: str = SchemaField(
description="ID of the message being replied to"
)
to: list[str] = SchemaField(description="To recipients", default_factory=list)
cc: list[str] = SchemaField(description="CC recipients", default_factory=list)
bcc: list[str] = SchemaField(description="BCC recipients", default_factory=list)
replyAll: bool = SchemaField(
description="Reply to all original recipients", default=False
)
subject: str = SchemaField(description="Email subject", default="")
body: str = SchemaField(description="Email body (plain text or HTML)")
content_type: Optional[Literal["auto", "plain", "html"]] = SchemaField(
description="Content type: 'auto' (default - detects HTML), 'plain', or 'html'",
default=None,
advanced=True,
)
attachments: list[MediaFileType] = SchemaField(
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
draftId: str = SchemaField(description="Created draft ID")
messageId: str = SchemaField(description="Draft message ID")
threadId: str = SchemaField(description="Thread ID")
status: str = SchemaField(description="Draft creation status")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
id="d7a9f3e2-8b4c-4d6f-9e1a-3c5b7f8d2a6e",
description="Create draft replies to Gmail threads with automatic HTML detection and proper text formatting. Plain text draft replies maintain natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and formatted correctly.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailDraftReplyBlock.Input,
output_schema=GmailDraftReplyBlock.Output,
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
test_input={
"threadId": "t1",
"parentMessageId": "m1",
"body": "Thanks for your message. I'll review and get back to you.",
"replyAll": False,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("draftId", "draft1"),
("messageId", "m2"),
("threadId", "t1"),
("status", "draft_created"),
],
test_mock={
"_create_draft_reply": lambda *args, **kwargs: {
"id": "draft1",
"message": {"id": "m2", "threadId": "t1"},
}
},
)
async def run(
self,
input_data: Input,
*,
credentials: GoogleCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
service = self._build_service(credentials, **kwargs)
draft = await self._create_draft_reply(
service,
input_data,
graph_exec_id,
user_id,
)
yield "draftId", draft["id"]
yield "messageId", draft["message"]["id"]
yield "threadId", draft["message"].get("threadId", input_data.threadId)
yield "status", "draft_created"
async def _create_draft_reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
# Create draft with proper thread association
draft = await asyncio.to_thread(
lambda: service.users()
.drafts()
.create(
userId="me",
body={
"message": {
"threadId": thread_id,
"raw": raw,
}
},
)
.execute()
)
return draft
class GmailGetProfileBlock(GmailBase):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(

View File

@@ -10,7 +10,6 @@ from backend.util.settings import Config
from backend.util.text import TextFormatter
from backend.util.type import LongTextType, MediaFileType, ShortTextType
config = Config()
@@ -132,6 +131,11 @@ class AgentOutputBlock(Block):
default="",
advanced=True,
)
escape_html: bool = SchemaField(
default=False,
advanced=True,
description="Whether to escape special characters in the inserted values to be HTML-safe. Enable for HTML output, disable for plain text.",
)
advanced: bool = SchemaField(
description="Whether to treat the output as advanced.",
default=False,
@@ -193,6 +197,7 @@ class AgentOutputBlock(Block):
"""
if input_data.format:
try:
formatter = TextFormatter(autoescape=input_data.escape_html)
yield "output", formatter.format_string(
input_data.format, {input_data.name: input_data.value}
)
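# Illustration of the new `escape_html` toggle (assumed Jinja-style behavior):
#
#     TextFormatter(autoescape=True).format_string("{{ name }}", {"name": "<b>hi</b>"})
#     # -> '&lt;b&gt;hi&lt;/b&gt;'  (HTML-safe output)
#     TextFormatter(autoescape=False).format_string("{{ name }}", {"name": "<b>hi</b>"})
#     # -> '<b>hi</b>'  (default: values inserted unescaped)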

View File

@@ -1,5 +1,9 @@
# This file contains a lot of prompt block strings that would trigger "line too long"
# flake8: noqa: E501
import ast
import logging
import re
import secrets
from abc import ABC
from enum import Enum, EnumMeta
from json import JSONDecodeError
@@ -27,7 +31,7 @@ from backend.util.prompt import compress_prompt, estimate_token_count
from backend.util.text import TextFormatter
logger = TruncatedLogger(logging.getLogger(__name__), "[LLM-Block]")
fmt = TextFormatter(autoescape=False)
LLMProviderName = Literal[
ProviderName.AIML_API,
@@ -204,13 +208,13 @@ MODEL_METADATA = {
"anthropic", 200000, 32000
), # claude-opus-4-1-20250805
LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 32000
), # claude-4-opus-20250514
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 64000
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 64000
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata(
"anthropic", 200000, 8192
@@ -382,7 +386,9 @@ def extract_openai_tool_calls(response) -> list[ToolContentBlock] | None:
return None
def get_parallel_tool_calls_param(
llm_model: LlmModel, parallel_tool_calls: bool | None
):
"""Get the appropriate parallel_tool_calls parameter for OpenAI-compatible APIs."""
if llm_model.startswith("o") or parallel_tool_calls is None:
return openai.NOT_GIVEN
@@ -393,8 +399,8 @@ async def llm_call(
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
max_tokens: int | None,
force_json_output: bool = False,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
parallel_tool_calls=None,
@@ -407,7 +413,7 @@ async def llm_call(
credentials: The API key credentials to use.
llm_model: The LLM model to use.
prompt: The prompt to send to the LLM.
force_json_output: Whether the response should be in JSON format.
max_tokens: The maximum number of tokens to generate in the chat completion.
tools: The tools to use in the chat completion.
ollama_host: The host for ollama to use.
@@ -446,7 +452,7 @@ async def llm_call(
llm_model, parallel_tool_calls
)
if force_json_output:
response_format = {"type": "json_object"}
response = await oai_client.chat.completions.create(
@@ -559,7 +565,7 @@ async def llm_call(
raise ValueError("Groq does not support tools.")
client = AsyncGroq(api_key=credentials.api_key.get_secret_value())
response_format = {"type": "json_object"} if json_format else None
response_format = {"type": "json_object"} if force_json_output else None
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
@@ -717,7 +723,7 @@ async def llm_call(
)
response_format = None
if force_json_output:
response_format = {"type": "json_object"}
parallel_tool_calls_param = get_parallel_tool_calls_param(
@@ -780,6 +786,17 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
description="The language model to use for answering the prompt.",
advanced=False,
)
force_json_output: bool = SchemaField(
title="Restrict LLM to pure JSON output",
default=False,
description=(
"Whether to force the LLM to produce a JSON-only response. "
"This can increase the block's reliability, "
"but may also reduce the quality of the response "
"because it prohibits the LLM from reasoning "
"before providing its JSON response."
),
)
credentials: AICredentials = AICredentialsField()
sys_prompt: str = SchemaField(
title="System Prompt",
@@ -848,17 +865,18 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
"llm_call": lambda *args, **kwargs: LLMResponse(
raw_response="",
prompt=[""],
response=(
'<json_output id="test123456">{\n'
' "key1": "key1Value",\n'
' "key2": "key2Value"\n'
"}</json_output>"
),
tool_calls=None,
prompt_tokens=0,
completion_tokens=0,
reasoning=None,
),
"get_collision_proof_output_tag_id": lambda *args: "test123456",
},
)
@@ -867,9 +885,9 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
max_tokens: int | None,
force_json_output: bool = False,
compress_prompt_to_fit: bool = True,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
) -> LLMResponse:
@@ -882,8 +900,8 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
max_tokens=max_tokens,
force_json_output=force_json_output,
tools=tools,
ollama_host=ollama_host,
compress_prompt_to_fit=compress_prompt_to_fit,
@@ -895,10 +913,6 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(f"Calling LLM with input data: {input_data}")
prompt = [json.to_dict(p) for p in input_data.conversation_history]
values = input_data.prompt_values
if values:
input_data.prompt = fmt.format_string(input_data.prompt, values)
@@ -907,27 +921,15 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
if input_data.sys_prompt:
prompt.append({"role": "system", "content": input_data.sys_prompt})
# Use a one-time unique tag to prevent collisions with user/LLM content
output_tag_id = self.get_collision_proof_output_tag_id()
output_tag_start = f'<json_output id="{output_tag_id}">'
if input_data.expected_format:
sys_prompt = self.response_format_instructions(
input_data.expected_format,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append({"role": "system", "content": sys_prompt})
@@ -945,18 +947,21 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
except JSONDecodeError as e:
return f"JSON decode error: {e}"
logger.debug(f"LLM request: {prompt}")
retry_prompt = ""
error_feedback_message = ""
llm_model = input_data.model
for retry_count in range(input_data.retry):
logger.debug(f"LLM request: {prompt}")
try:
llm_response = await self.llm_call(
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
compress_prompt_to_fit=input_data.compress_prompt_to_fit,
force_json_output=(
input_data.force_json_output
and bool(input_data.expected_format)
),
ollama_host=input_data.ollama_host,
max_tokens=input_data.max_tokens,
)
@@ -970,16 +975,55 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(f"LLM attempt-{retry_count} response: {response_text}")
if input_data.expected_format:
try:
response_obj = self.get_json_from_response(
response_text,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
except (ValueError, JSONDecodeError) as parse_error:
censored_response = re.sub(r"[A-Za-z0-9]", "*", response_text)
response_snippet = (
f"{censored_response[:50]}...{censored_response[-30:]}"
)
logger.warning(
f"Error getting JSON from LLM response: {parse_error}\n\n"
f"Response start+end: `{response_snippet}`"
)
prompt.append({"role": "assistant", "content": response_text})
error_feedback_message = self.invalid_response_feedback(
parse_error,
was_parseable=False,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append(
{"role": "user", "content": error_feedback_message}
)
continue
# Handle object response for `force_json_output`+`list_result`
if input_data.list_result and isinstance(response_obj, dict):
if "results" in response_obj:
response_obj = response_obj.get("results", [])
elif len(response_obj) == 1:
response_obj = list(response_obj.values())
if "results" in response_obj and isinstance(
response_obj["results"], list
):
response_obj = response_obj["results"]
else:
error_feedback_message = (
"Expected an array of objects in the 'results' key, "
f"but got: {response_obj}"
)
prompt.append(
{"role": "assistant", "content": response_text}
)
prompt.append(
{"role": "user", "content": error_feedback_message}
)
continue
response_error = "\n".join(
validation_errors = "\n".join(
[
validation_error
for response_item in (
@@ -991,7 +1035,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
]
)
if not validation_errors:
self.merge_stats(
NodeExecutionStats(
llm_call_count=retry_count + 1,
@@ -1001,6 +1045,16 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
yield "response", response_obj
yield "prompt", self.prompt
return
prompt.append({"role": "assistant", "content": response_text})
error_feedback_message = self.invalid_response_feedback(
validation_errors,
was_parseable=True,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append({"role": "user", "content": error_feedback_message})
else:
self.merge_stats(
NodeExecutionStats(
@@ -1011,21 +1065,6 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
yield "response", {"response": response_text}
yield "prompt", self.prompt
return
except Exception as e:
logger.exception(f"Error calling LLM: {e}")
if (
@@ -1038,9 +1077,133 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(
f"Reducing max_tokens to {input_data.max_tokens} for next attempt"
)
retry_prompt = f"Error calling LLM: {e}"
# Don't add retry prompt for token limit errors,
# just retry with lower maximum output tokens
raise RuntimeError(retry_prompt)
error_feedback_message = f"Error calling LLM: {e}"
raise RuntimeError(error_feedback_message)
def response_format_instructions(
self,
expected_object_format: dict[str, str],
*,
list_mode: bool,
pure_json_mode: bool,
output_tag_start: str,
) -> str:
expected_output_format = json.dumps(expected_object_format, indent=2)
output_type = "object" if not list_mode else "array"
outer_output_type = "object" if pure_json_mode else output_type
if output_type == "array":
indented_obj_format = expected_output_format.replace("\n", "\n ")
expected_output_format = f"[\n {indented_obj_format},\n ...\n]"
if pure_json_mode:
indented_list_format = expected_output_format.replace("\n", "\n ")
expected_output_format = (
"{\n"
' "reasoning": "... (optional)",\n' # for better performance
f' "results": {indented_list_format}\n'
"}"
)
# Preserve indentation in prompt
expected_output_format = expected_output_format.replace("\n", "\n|")
# Prepare prompt
if not pure_json_mode:
expected_output_format = (
f"{output_tag_start}\n{expected_output_format}\n</json_output>"
)
instructions = f"""
|In your response you MUST include a valid JSON {outer_output_type} strictly following this format:
|{expected_output_format}
|
|If you cannot provide all the keys, you MUST provide an empty string for the values you cannot answer.
""".strip()
if not pure_json_mode:
instructions += f"""
|
|You MUST enclose your final JSON answer in {output_tag_start}...</json_output> tags, even if the user specifies a different tag.
|There MUST be exactly ONE {output_tag_start}...</json_output> block in your response, which MUST ONLY contain the JSON {outer_output_type} and nothing else. Other text outside this block is allowed.
""".strip()
return trim_prompt(instructions)
def invalid_response_feedback(
self,
error,
*,
was_parseable: bool,
list_mode: bool,
pure_json_mode: bool,
output_tag_start: str,
) -> str:
outer_output_type = "object" if not list_mode or pure_json_mode else "array"
if was_parseable:
complaint = f"Your previous response did not match the expected {outer_output_type} format."
else:
complaint = f"Your previous response did not contain a parseable JSON {outer_output_type}."
indented_parse_error = str(error).replace("\n", "\n|")
instruction = (
f"Please provide a {output_tag_start}...</json_output> block containing a"
if not pure_json_mode
else "Please provide a"
) + f" valid JSON {outer_output_type} that matches the expected format."
return trim_prompt(
f"""
|{complaint}
|
|{indented_parse_error}
|
|{instruction}
"""
)
def get_json_from_response(
self, response_text: str, *, pure_json_mode: bool, output_tag_start: str
) -> dict[str, Any] | list[dict[str, Any]]:
if pure_json_mode:
# Handle pure JSON responses
try:
return json.loads(response_text)
except JSONDecodeError as first_parse_error:
# If that didn't work, try finding the { and } to deal with possible ```json fences etc.
json_start = response_text.find("{")
json_end = response_text.rfind("}")
try:
return json.loads(response_text[json_start : json_end + 1])
except JSONDecodeError:
# Raise the original error, as it's more likely to be relevant
raise first_parse_error from None
if output_tag_start not in response_text:
raise ValueError(
"Response does not contain the expected "
f"{output_tag_start}...</json_output> block."
)
json_output = (
response_text.split(output_tag_start, 1)[1]
.rsplit("</json_output>", 1)[0]
.strip()
)
return json.loads(json_output)
def get_collision_proof_output_tag_id(self) -> str:
return secrets.token_hex(8)
def trim_prompt(s: str) -> str:
"""Removes indentation up to and including `|` from a multi-line prompt."""
lines = s.strip().split("\n")
return "\n".join([line.strip().lstrip("|") for line in lines])
class AITextGeneratorBlock(AIBlockBase):

View File

@@ -0,0 +1,536 @@
"""
Notion API helper functions and client for making authenticated requests.
"""
from typing import Any, Dict, List, Optional
from backend.data.model import OAuth2Credentials
from backend.util.request import Requests
NOTION_VERSION = "2022-06-28"
class NotionAPIException(Exception):
"""Exception raised for Notion API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class NotionClient:
"""Client for interacting with the Notion API."""
def __init__(self, credentials: OAuth2Credentials):
self.credentials = credentials
self.headers = {
"Authorization": credentials.auth_header(),
"Notion-Version": NOTION_VERSION,
"Content-Type": "application/json",
}
self.requests = Requests()
async def get_page(self, page_id: str) -> dict:
"""
Fetch a page by ID.
Args:
page_id: The ID of the page to fetch.
Returns:
The page object from Notion API.
"""
url = f"https://api.notion.com/v1/pages/{page_id}"
response = await self.requests.get(url, headers=self.headers)
if not response.ok:
raise NotionAPIException(
f"Failed to fetch page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def get_blocks(self, block_id: str, recursive: bool = True) -> List[dict]:
"""
Fetch all blocks from a page or block.
Args:
block_id: The ID of the page or block to fetch children from.
recursive: Whether to fetch nested blocks recursively.
Returns:
List of block objects.
"""
blocks = []
cursor = None
while True:
url = f"https://api.notion.com/v1/blocks/{block_id}/children"
params = {"page_size": 100}
if cursor:
params["start_cursor"] = cursor
response = await self.requests.get(url, headers=self.headers, params=params)
if not response.ok:
raise NotionAPIException(
f"Failed to fetch blocks: {response.status} - {response.text()}",
response.status,
)
data = response.json()
current_blocks = data.get("results", [])
# If recursive, fetch children for blocks that have them
if recursive:
for block in current_blocks:
if block.get("has_children"):
block["children"] = await self.get_blocks(
block["id"], recursive=True
)
blocks.extend(current_blocks)
if not data.get("has_more"):
break
cursor = data.get("next_cursor")
return blocks
async def query_database(
self,
database_id: str,
filter_obj: Optional[dict] = None,
sorts: Optional[List[dict]] = None,
page_size: int = 100,
) -> dict:
"""
Query a database with optional filters and sorts.
Args:
database_id: The ID of the database to query.
filter_obj: Optional filter object for the query.
sorts: Optional list of sort objects.
page_size: Number of results per page.
Returns:
Query results including pages and pagination info.
"""
url = f"https://api.notion.com/v1/databases/{database_id}/query"
payload: Dict[str, Any] = {"page_size": page_size}
if filter_obj:
payload["filter"] = filter_obj
if sorts:
payload["sorts"] = sorts
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Failed to query database: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def create_page(
self,
parent: dict,
properties: dict,
children: Optional[List[dict]] = None,
icon: Optional[dict] = None,
cover: Optional[dict] = None,
) -> dict:
"""
Create a new page.
Args:
parent: Parent object (page_id or database_id).
properties: Page properties.
children: Optional list of block children.
icon: Optional icon object.
cover: Optional cover object.
Returns:
The created page object.
"""
url = "https://api.notion.com/v1/pages"
payload: Dict[str, Any] = {"parent": parent, "properties": properties}
if children:
payload["children"] = children
if icon:
payload["icon"] = icon
if cover:
payload["cover"] = cover
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Failed to create page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def update_page(self, page_id: str, properties: dict) -> dict:
"""
Update a page's properties.
Args:
page_id: The ID of the page to update.
properties: Properties to update.
Returns:
The updated page object.
"""
url = f"https://api.notion.com/v1/pages/{page_id}"
response = await self.requests.patch(
url, headers=self.headers, json={"properties": properties}
)
if not response.ok:
raise NotionAPIException(
f"Failed to update page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def append_blocks(self, block_id: str, children: List[dict]) -> dict:
"""
Append blocks to a page or block.
Args:
block_id: The ID of the page or block to append to.
children: List of block objects to append.
Returns:
Response with the created blocks.
"""
url = f"https://api.notion.com/v1/blocks/{block_id}/children"
response = await self.requests.patch(
url, headers=self.headers, json={"children": children}
)
if not response.ok:
raise NotionAPIException(
f"Failed to append blocks: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def search(
self,
query: str = "",
filter_obj: Optional[dict] = None,
sort: Optional[dict] = None,
page_size: int = 100,
) -> dict:
"""
Search for pages and databases.
Args:
query: Search query text.
filter_obj: Optional filter object.
sort: Optional sort object.
page_size: Number of results per page.
Returns:
Search results.
"""
url = "https://api.notion.com/v1/search"
payload: Dict[str, Any] = {"page_size": page_size}
if query:
payload["query"] = query
if filter_obj:
payload["filter"] = filter_obj
if sort:
payload["sort"] = sort
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Search failed: {response.status} - {response.text()}", response.status
)
return response.json()
# Conversion helper functions
def parse_rich_text(rich_text_array: List[dict]) -> str:
"""
Extract plain text from a Notion rich text array.
Args:
rich_text_array: Array of rich text objects from Notion.
Returns:
Plain text string.
"""
if not rich_text_array:
return ""
text_parts = []
for text_obj in rich_text_array:
if "plain_text" in text_obj:
text_parts.append(text_obj["plain_text"])
return "".join(text_parts)
def rich_text_to_markdown(rich_text_array: List[dict]) -> str:
"""
Convert Notion rich text array to markdown with formatting.
Args:
rich_text_array: Array of rich text objects from Notion.
Returns:
Markdown formatted string.
"""
if not rich_text_array:
return ""
markdown_parts = []
for text_obj in rich_text_array:
text = text_obj.get("plain_text", "")
annotations = text_obj.get("annotations", {})
# Apply formatting based on annotations
if annotations.get("code"):
text = f"`{text}`"
else:
if annotations.get("bold"):
text = f"**{text}**"
if annotations.get("italic"):
text = f"*{text}*"
if annotations.get("strikethrough"):
text = f"~~{text}~~"
if annotations.get("underline"):
text = f"<u>{text}</u>"
# Handle links
if text_obj.get("href"):
text = f"[{text}]({text_obj['href']})"
markdown_parts.append(text)
return "".join(markdown_parts)
def block_to_markdown(block: dict, indent_level: int = 0) -> str:
"""
Convert a single Notion block to markdown.
Args:
block: Block object from Notion API.
indent_level: Current indentation level for nested blocks.
Returns:
Markdown string representation of the block.
"""
block_type = block.get("type")
indent = " " * indent_level
markdown_lines = []
# Handle different block types
if block_type == "paragraph":
text = rich_text_to_markdown(block["paragraph"].get("rich_text", []))
if text:
markdown_lines.append(f"{indent}{text}")
elif block_type == "heading_1":
text = parse_rich_text(block["heading_1"].get("rich_text", []))
markdown_lines.append(f"{indent}# {text}")
elif block_type == "heading_2":
text = parse_rich_text(block["heading_2"].get("rich_text", []))
markdown_lines.append(f"{indent}## {text}")
elif block_type == "heading_3":
text = parse_rich_text(block["heading_3"].get("rich_text", []))
markdown_lines.append(f"{indent}### {text}")
elif block_type == "bulleted_list_item":
text = rich_text_to_markdown(block["bulleted_list_item"].get("rich_text", []))
markdown_lines.append(f"{indent}- {text}")
elif block_type == "numbered_list_item":
text = rich_text_to_markdown(block["numbered_list_item"].get("rich_text", []))
# Note: This is simplified - proper numbering would need context
markdown_lines.append(f"{indent}1. {text}")
elif block_type == "to_do":
text = rich_text_to_markdown(block["to_do"].get("rich_text", []))
checked = "x" if block["to_do"].get("checked") else " "
markdown_lines.append(f"{indent}- [{checked}] {text}")
elif block_type == "toggle":
text = rich_text_to_markdown(block["toggle"].get("rich_text", []))
markdown_lines.append(f"{indent}<details>")
markdown_lines.append(f"{indent}<summary>{text}</summary>")
markdown_lines.append(f"{indent}")
# Process children if they exist
if block.get("children"):
for child in block["children"]:
child_markdown = block_to_markdown(child, indent_level + 1)
if child_markdown:
markdown_lines.append(child_markdown)
markdown_lines.append(f"{indent}</details>")
elif block_type == "code":
code = parse_rich_text(block["code"].get("rich_text", []))
language = block["code"].get("language", "")
markdown_lines.append(f"{indent}```{language}")
markdown_lines.append(f"{indent}{code}")
markdown_lines.append(f"{indent}```")
elif block_type == "quote":
text = rich_text_to_markdown(block["quote"].get("rich_text", []))
markdown_lines.append(f"{indent}> {text}")
elif block_type == "divider":
markdown_lines.append(f"{indent}---")
elif block_type == "image":
image = block["image"]
url = image.get("external", {}).get("url") or image.get("file", {}).get(
"url", ""
)
caption = parse_rich_text(image.get("caption", []))
alt_text = caption if caption else "Image"
markdown_lines.append(f"{indent}![{alt_text}]({url})")
if caption:
markdown_lines.append(f"{indent}*{caption}*")
elif block_type == "video":
video = block["video"]
url = video.get("external", {}).get("url") or video.get("file", {}).get(
"url", ""
)
caption = parse_rich_text(video.get("caption", []))
markdown_lines.append(f"{indent}[Video]({url})")
if caption:
markdown_lines.append(f"{indent}*{caption}*")
elif block_type == "file":
file = block["file"]
url = file.get("external", {}).get("url") or file.get("file", {}).get("url", "")
caption = parse_rich_text(file.get("caption", []))
name = caption if caption else "File"
markdown_lines.append(f"{indent}[{name}]({url})")
elif block_type == "bookmark":
url = block["bookmark"].get("url", "")
caption = parse_rich_text(block["bookmark"].get("caption", []))
markdown_lines.append(f"{indent}[{caption if caption else url}]({url})")
elif block_type == "equation":
expression = block["equation"].get("expression", "")
markdown_lines.append(f"{indent}$${expression}$$")
elif block_type == "callout":
text = rich_text_to_markdown(block["callout"].get("rich_text", []))
icon = block["callout"].get("icon", {})
if icon.get("emoji"):
markdown_lines.append(f"{indent}> {icon['emoji']} {text}")
else:
markdown_lines.append(f"{indent}> {text}")
elif block_type == "child_page":
title = block["child_page"].get("title", "Untitled")
markdown_lines.append(f"{indent}📄 [{title}](notion://page/{block['id']})")
elif block_type == "child_database":
title = block["child_database"].get("title", "Untitled Database")
markdown_lines.append(f"{indent}🗂️ [{title}](notion://database/{block['id']})")
elif block_type == "table":
# Tables are complex - for now just indicate there's a table
markdown_lines.append(
f"{indent}[Table with {block['table'].get('table_width', 0)} columns]"
)
elif block_type == "column_list":
# Process columns
if block.get("children"):
markdown_lines.append(f"{indent}<div style='display: flex'>")
for column in block["children"]:
markdown_lines.append(f"{indent}<div style='flex: 1'>")
if column.get("children"):
for child in column["children"]:
child_markdown = block_to_markdown(child, indent_level + 1)
if child_markdown:
markdown_lines.append(child_markdown)
markdown_lines.append(f"{indent}</div>")
markdown_lines.append(f"{indent}</div>")
# Handle children for blocks that haven't been processed yet
elif block.get("children") and block_type not in ["toggle", "column_list"]:
for child in block["children"]:
child_markdown = block_to_markdown(child, indent_level)
if child_markdown:
markdown_lines.append(child_markdown)
return "\n".join(markdown_lines) if markdown_lines else ""
def blocks_to_markdown(blocks: List[dict]) -> str:
"""
Convert a list of Notion blocks to a markdown document.
Args:
blocks: List of block objects from Notion API.
Returns:
Complete markdown document as a string.
"""
markdown_parts = []
for i, block in enumerate(blocks):
markdown = block_to_markdown(block)
if markdown:
markdown_parts.append(markdown)
# Add spacing between top-level blocks (except lists)
if i < len(blocks) - 1:
next_type = blocks[i + 1].get("type", "")
current_type = block.get("type", "")
# Don't add extra spacing between list items
list_types = {"bulleted_list_item", "numbered_list_item", "to_do"}
if not (current_type in list_types and next_type in list_types):
markdown_parts.append("")
return "\n".join(markdown_parts)
def extract_page_title(page: dict) -> str:
"""
Extract the title from a Notion page object.
Args:
page: Page object from Notion API.
Returns:
Page title as a string.
"""
properties = page.get("properties", {})
# Find the title property (it has type "title")
for prop_name, prop_value in properties.items():
if prop_value.get("type") == "title":
return parse_rich_text(prop_value.get("title", []))
return "Untitled"

View File

@@ -0,0 +1,42 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput, OAuth2Credentials
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
secrets = Secrets()
NOTION_OAUTH_IS_CONFIGURED = bool(
secrets.notion_client_id and secrets.notion_client_secret
)
NotionCredentials = OAuth2Credentials
NotionCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.NOTION], Literal["oauth2"]
]
def NotionCredentialsField() -> NotionCredentialsInput:
"""Creates a Notion OAuth2 credentials field."""
return CredentialsField(
description="Connect your Notion account. Ensure the pages/databases are shared with the integration."
)
# Test credentials for Notion OAuth2
TEST_CREDENTIALS = OAuth2Credentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="notion",
access_token=SecretStr("test_access_token"),
title="Mock Notion OAuth",
scopes=["read_content", "insert_content", "update_content"],
username="testuser",
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}

View File

@@ -0,0 +1,360 @@
from __future__ import annotations
from typing import Any, Dict, List, Optional
from pydantic import model_validator
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionCreatePageBlock(Block):
"""Create a new page in Notion with content."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
parent_page_id: Optional[str] = SchemaField(
description="Parent page ID to create the page under. Either this OR parent_database_id is required.",
default=None,
)
parent_database_id: Optional[str] = SchemaField(
description="Parent database ID to create the page in. Either this OR parent_page_id is required.",
default=None,
)
title: str = SchemaField(
description="Title of the new page",
)
content: Optional[str] = SchemaField(
description="Content for the page. Can be plain text or markdown - will be converted to Notion blocks.",
default=None,
)
properties: Optional[Dict[str, Any]] = SchemaField(
description="Additional properties for database pages (e.g., {'Status': 'In Progress', 'Priority': 'High'})",
default=None,
)
icon_emoji: Optional[str] = SchemaField(
description="Emoji to use as the page icon (e.g., '📄', '🚀')", default=None
)
@model_validator(mode="after")
def validate_parent(self):
"""Ensure either parent_page_id or parent_database_id is provided."""
if not self.parent_page_id and not self.parent_database_id:
raise ValueError(
"Either parent_page_id or parent_database_id must be provided"
)
if self.parent_page_id and self.parent_database_id:
raise ValueError(
"Only one of parent_page_id or parent_database_id should be provided, not both"
)
return self
class Output(BlockSchema):
page_id: str = SchemaField(description="ID of the created page.")
page_url: str = SchemaField(description="URL of the created page.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="c15febe0-66ce-4c6f-aebd-5ab351653804",
description="Create a new page in Notion. Requires EITHER a parent_page_id OR parent_database_id. Supports markdown content.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionCreatePageBlock.Input,
output_schema=NotionCreatePageBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"parent_page_id": "00000000-0000-0000-0000-000000000000",
"title": "Test Page",
"content": "This is test content.",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("page_id", "12345678-1234-1234-1234-123456789012"),
(
"page_url",
"https://notion.so/Test-Page-12345678123412341234123456789012",
),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"create_page": lambda *args, **kwargs: (
"12345678-1234-1234-1234-123456789012",
"https://notion.so/Test-Page-12345678123412341234123456789012",
)
},
)
@staticmethod
def _markdown_to_blocks(content: str) -> List[dict]:
"""Convert markdown content to Notion block objects."""
if not content:
return []
blocks = []
lines = content.split("\n")
i = 0
while i < len(lines):
line = lines[i]
# Skip empty lines
if not line.strip():
i += 1
continue
# Headings
if line.startswith("### "):
blocks.append(
{
"type": "heading_3",
"heading_3": {
"rich_text": [
{"type": "text", "text": {"content": line[4:].strip()}}
]
},
}
)
elif line.startswith("## "):
blocks.append(
{
"type": "heading_2",
"heading_2": {
"rich_text": [
{"type": "text", "text": {"content": line[3:].strip()}}
]
},
}
)
elif line.startswith("# "):
blocks.append(
{
"type": "heading_1",
"heading_1": {
"rich_text": [
{"type": "text", "text": {"content": line[2:].strip()}}
]
},
}
)
# Bullet points
elif line.strip().startswith("- "):
blocks.append(
{
"type": "bulleted_list_item",
"bulleted_list_item": {
"rich_text": [
{
"type": "text",
"text": {"content": line.strip()[2:].strip()},
}
]
},
}
)
# Numbered list
elif line.strip() and line.strip()[0].isdigit() and ". " in line:
content_start = line.find(". ") + 2
blocks.append(
{
"type": "numbered_list_item",
"numbered_list_item": {
"rich_text": [
{
"type": "text",
"text": {"content": line[content_start:].strip()},
}
]
},
}
)
# Code block
elif line.strip().startswith("```"):
code_lines = []
language = line[3:].strip() or "plain text"
i += 1
while i < len(lines) and not lines[i].strip().startswith("```"):
code_lines.append(lines[i])
i += 1
blocks.append(
{
"type": "code",
"code": {
"rich_text": [
{
"type": "text",
"text": {"content": "\n".join(code_lines)},
}
],
"language": language,
},
}
)
# Quote
elif line.strip().startswith("> "):
blocks.append(
{
"type": "quote",
"quote": {
"rich_text": [
{
"type": "text",
"text": {"content": line.strip()[2:].strip()},
}
]
},
}
)
# Horizontal rule
elif line.strip() in ["---", "***", "___"]:
blocks.append({"type": "divider", "divider": {}})
# Regular paragraph
else:
# Parse for basic markdown formatting
text_content = line.strip()
# Basic markdown emphasis (**bold**, *italic*) is not parsed into
# annotations yet; pass the whole line through as plain rich text.
rich_text = [{"type": "text", "text": {"content": text_content}}]
blocks.append(
{"type": "paragraph", "paragraph": {"rich_text": rich_text}}
)
i += 1
return blocks
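# For example, the parser above turns a small snippet into Notion blocks:
#
#     _markdown_to_blocks("# Title\n- item")
#     # -> [
#     #   {"type": "heading_1", "heading_1": {"rich_text": [{"type": "text", "text": {"content": "Title"}}]}},
#     #   {"type": "bulleted_list_item", "bulleted_list_item": {"rich_text": [{"type": "text", "text": {"content": "item"}}]}},
#     # ]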
@staticmethod
def _build_properties(
title: str, additional_properties: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""Build properties object for page creation."""
properties: Dict[str, Any] = {
"title": {"title": [{"type": "text", "text": {"content": title}}]}
}
if additional_properties:
for key, value in additional_properties.items():
if key.lower() == "title":
continue # Skip title as we already have it
# Try to intelligently map property types
if isinstance(value, bool):
properties[key] = {"checkbox": value}
elif isinstance(value, (int, float)):
properties[key] = {"number": value}
elif isinstance(value, list):
# Assume multi-select
properties[key] = {
"multi_select": [{"name": str(item)} for item in value]
}
elif isinstance(value, str):
# Could be select, rich_text, or other types
# For simplicity, try common patterns
if key.lower() in ["status", "priority", "type", "category"]:
properties[key] = {"select": {"name": value}}
elif key.lower() in ["url", "link"]:
properties[key] = {"url": value}
elif key.lower() in ["email"]:
properties[key] = {"email": value}
else:
properties[key] = {
"rich_text": [{"type": "text", "text": {"content": value}}]
}
return properties
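# e.g. (derived from the mapping above):
#
#     _build_properties("Spec", {"Status": "In Progress", "Estimate": 3, "Tags": ["a", "b"]})
#     # -> {
#     #   "title": {"title": [{"type": "text", "text": {"content": "Spec"}}]},
#     #   "Status": {"select": {"name": "In Progress"}},  # key name triggers select
#     #   "Estimate": {"number": 3},
#     #   "Tags": {"multi_select": [{"name": "a"}, {"name": "b"}]},
#     # }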
@staticmethod
async def create_page(
credentials: OAuth2Credentials,
title: str,
parent_page_id: Optional[str] = None,
parent_database_id: Optional[str] = None,
content: Optional[str] = None,
properties: Optional[Dict[str, Any]] = None,
icon_emoji: Optional[str] = None,
) -> tuple[str, str]:
"""
Create a new Notion page.
Returns:
Tuple of (page_id, page_url)
"""
if not parent_page_id and not parent_database_id:
raise ValueError(
"Either parent_page_id or parent_database_id must be provided"
)
if parent_page_id and parent_database_id:
raise ValueError(
"Only one of parent_page_id or parent_database_id should be provided, not both"
)
client = NotionClient(credentials)
# Build parent object
if parent_page_id:
parent = {"type": "page_id", "page_id": parent_page_id}
else:
parent = {"type": "database_id", "database_id": parent_database_id}
# Build properties
page_properties = NotionCreatePageBlock._build_properties(title, properties)
# Convert content to blocks if provided
children = None
if content:
children = NotionCreatePageBlock._markdown_to_blocks(content)
# Build icon if provided
icon = None
if icon_emoji:
icon = {"type": "emoji", "emoji": icon_emoji}
# Create the page
result = await client.create_page(
parent=parent, properties=page_properties, children=children, icon=icon
)
page_id = result.get("id", "")
page_url = result.get("url", "")
if not page_id or not page_url:
raise ValueError("Failed to get page ID or URL from Notion response")
return page_id, page_url
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
page_id, page_url = await self.create_page(
credentials,
input_data.title,
input_data.parent_page_id,
input_data.parent_database_id,
input_data.content,
input_data.properties,
input_data.icon_emoji,
)
yield "page_id", page_id
yield "page_url", page_url
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -0,0 +1,285 @@
from __future__ import annotations
from typing import Any, Dict, List, Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, parse_rich_text
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadDatabaseBlock(Block):
"""Query a Notion database and retrieve entries with their properties."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
database_id: str = SchemaField(
description="Notion database ID. Must be accessible by the connected integration.",
)
filter_property: Optional[str] = SchemaField(
description="Property name to filter by (e.g., 'Status', 'Priority')",
default=None,
)
filter_value: Optional[str] = SchemaField(
description="Value to filter for in the specified property", default=None
)
sort_property: Optional[str] = SchemaField(
description="Property name to sort by", default=None
)
sort_direction: Optional[str] = SchemaField(
description="Sort direction: 'ascending' or 'descending'",
default="ascending",
)
limit: int = SchemaField(
description="Maximum number of entries to retrieve",
default=100,
ge=1,
le=100,
)
class Output(BlockSchema):
entries: List[Dict[str, Any]] = SchemaField(
description="List of database entries with their properties."
)
entry: Dict[str, Any] = SchemaField(
description="Individual database entry (yields one per entry found)."
)
entry_ids: List[str] = SchemaField(
description="List of entry IDs for batch operations."
)
entry_id: str = SchemaField(
description="Individual entry ID (yields one per entry found)."
)
count: int = SchemaField(description="Number of entries retrieved.")
database_title: str = SchemaField(description="Title of the database.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="fcd53135-88c9-4ba3-be50-cc6936286e6c",
description="Query a Notion database with optional filtering and sorting, returning structured entries.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadDatabaseBlock.Input,
output_schema=NotionReadDatabaseBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"database_id": "00000000-0000-0000-0000-000000000000",
"limit": 10,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"entries",
[{"Name": "Test Entry", "Status": "Active", "_id": "test-123"}],
),
("entry_ids", ["test-123"]),
(
"entry",
{"Name": "Test Entry", "Status": "Active", "_id": "test-123"},
),
("entry_id", "test-123"),
("count", 1),
("database_title", "Test Database"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"query_database": lambda *args, **kwargs: (
[{"Name": "Test Entry", "Status": "Active", "_id": "test-123"}],
1,
"Test Database",
)
},
)
@staticmethod
def _parse_property_value(prop: dict) -> Any:
"""Parse a Notion property value into a simple Python type."""
prop_type = prop.get("type")
if prop_type == "title":
return parse_rich_text(prop.get("title", []))
elif prop_type == "rich_text":
return parse_rich_text(prop.get("rich_text", []))
elif prop_type == "number":
return prop.get("number")
elif prop_type == "select":
select = prop.get("select")
return select.get("name") if select else None
elif prop_type == "multi_select":
return [item.get("name") for item in prop.get("multi_select", [])]
elif prop_type == "date":
date = prop.get("date")
if date:
return date.get("start")
return None
elif prop_type == "checkbox":
return prop.get("checkbox", False)
elif prop_type == "url":
return prop.get("url")
elif prop_type == "email":
return prop.get("email")
elif prop_type == "phone_number":
return prop.get("phone_number")
elif prop_type == "people":
return [
person.get("name", person.get("id"))
for person in prop.get("people", [])
]
elif prop_type == "files":
files = prop.get("files", [])
return [
f.get(
"name",
f.get("external", {}).get("url", f.get("file", {}).get("url")),
)
for f in files
]
elif prop_type == "relation":
return [rel.get("id") for rel in prop.get("relation", [])]
elif prop_type == "formula":
formula = prop.get("formula", {})
return formula.get(formula.get("type"))
elif prop_type == "rollup":
rollup = prop.get("rollup", {})
return rollup.get(rollup.get("type"))
elif prop_type == "created_time":
return prop.get("created_time")
elif prop_type == "created_by":
return prop.get("created_by", {}).get(
"name", prop.get("created_by", {}).get("id")
)
elif prop_type == "last_edited_time":
return prop.get("last_edited_time")
elif prop_type == "last_edited_by":
return prop.get("last_edited_by", {}).get(
"name", prop.get("last_edited_by", {}).get("id")
)
else:
# Return the raw value for unknown types
return prop
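# e.g. (per the branches above):
#
#     _parse_property_value({"type": "select", "select": {"name": "Active"}})           # -> "Active"
#     _parse_property_value({"type": "date", "date": {"start": "2025-01-01"}})          # -> "2025-01-01"
#     _parse_property_value({"type": "multi_select", "multi_select": [{"name": "a"}]})  # -> ["a"]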
@staticmethod
def _build_filter(property_name: str, value: str) -> dict:
"""Build a simple filter object for a property."""
# This is a simplified filter - in reality, you'd need to know the property type
# For now, we'll try common filter types
return {
"or": [
{"property": property_name, "rich_text": {"contains": value}},
{"property": property_name, "title": {"contains": value}},
{"property": property_name, "select": {"equals": value}},
{"property": property_name, "multi_select": {"contains": value}},
]
}
@staticmethod
async def query_database(
credentials: OAuth2Credentials,
database_id: str,
filter_property: Optional[str] = None,
filter_value: Optional[str] = None,
sort_property: Optional[str] = None,
sort_direction: str = "ascending",
limit: int = 100,
) -> tuple[List[Dict[str, Any]], int, str]:
"""
Query a Notion database and parse the results.
Returns:
Tuple of (entries_list, count, database_title)
"""
client = NotionClient(credentials)
# Build filter if specified
filter_obj = None
if filter_property and filter_value:
filter_obj = NotionReadDatabaseBlock._build_filter(
filter_property, filter_value
)
# Build sorts if specified
sorts = None
if sort_property:
sorts = [{"property": sort_property, "direction": sort_direction}]
# Query the database
result = await client.query_database(
database_id, filter_obj=filter_obj, sorts=sorts, page_size=limit
)
# Parse the entries
entries = []
for page in result.get("results", []):
entry = {}
properties = page.get("properties", {})
for prop_name, prop_value in properties.items():
entry[prop_name] = NotionReadDatabaseBlock._parse_property_value(
prop_value
)
# Add metadata
entry["_id"] = page.get("id")
entry["_url"] = page.get("url")
entry["_created_time"] = page.get("created_time")
entry["_last_edited_time"] = page.get("last_edited_time")
entries.append(entry)
# Get database title (we need to make a separate call for this)
try:
database_url = f"https://api.notion.com/v1/databases/{database_id}"
db_response = await client.requests.get(
database_url, headers=client.headers
)
if db_response.ok:
db_data = db_response.json()
db_title = parse_rich_text(db_data.get("title", []))
else:
db_title = "Unknown Database"
except Exception:
db_title = "Unknown Database"
return entries, len(entries), db_title
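# Illustrative call (IDs and property names assumed):
#
#     entries, count, title = await NotionReadDatabaseBlock.query_database(
#         creds, "...database-id...", filter_property="Status",
#         filter_value="Active", sort_property="Name", limit=10,
#     )
#     # entries[0] maps property names to plain values, plus the metadata
#     # keys "_id", "_url", "_created_time" and "_last_edited_time".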
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
entries, count, db_title = await self.query_database(
credentials,
input_data.database_id,
input_data.filter_property,
input_data.filter_value,
input_data.sort_property,
input_data.sort_direction or "ascending",
input_data.limit,
)
# Yield the complete list for batch operations
yield "entries", entries
# Extract and yield IDs as a list for batch operations
entry_ids = [entry["_id"] for entry in entries if "_id" in entry]
yield "entry_ids", entry_ids
# Yield each individual entry and its ID for single connections
for entry in entries:
yield "entry", entry
if "_id" in entry:
yield "entry_id", entry["_id"]
yield "count", count
yield "database_title", db_title
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -0,0 +1,64 @@
from __future__ import annotations
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadPageBlock(Block):
"""Read a Notion page by ID and return its raw JSON."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
page_id: str = SchemaField(
description="Notion page ID. Must be accessible by the connected integration. You can get this from the page URL notion.so/A-Page-586edd711467478da59fe3ce29a1ffab would be 586edd711467478da59fe35e29a1ffab",
)
class Output(BlockSchema):
page: dict = SchemaField(description="Raw Notion page JSON.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="5246cc1d-34b7-452b-8fc5-3fb25fd8f542",
description="Read a Notion page by its ID and return its raw JSON.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadPageBlock.Input,
output_schema=NotionReadPageBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"page_id": "00000000-0000-0000-0000-000000000000",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[("page", dict)],
test_credentials=TEST_CREDENTIALS,
test_mock={
"get_page": lambda *args, **kwargs: {"object": "page", "id": "mocked"}
},
)
@staticmethod
async def get_page(credentials: OAuth2Credentials, page_id: str) -> dict:
client = NotionClient(credentials)
return await client.get_page(page_id)
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
page = await self.get_page(credentials, input_data.page_id)
yield "page", page
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -0,0 +1,109 @@
from __future__ import annotations
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, blocks_to_markdown, extract_page_title
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadPageMarkdownBlock(Block):
"""Read a Notion page and convert it to clean Markdown format."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
page_id: str = SchemaField(
description="Notion page ID. Must be accessible by the connected integration. You can get this from the page URL notion.so/A-Page-586edd711467478da59fe35e29a1ffab would be 586edd711467478da59fe35e29a1ffab",
)
include_title: bool = SchemaField(
description="Whether to include the page title as a header in the markdown",
default=True,
)
class Output(BlockSchema):
markdown: str = SchemaField(description="Page content in Markdown format.")
title: str = SchemaField(description="Page title.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="d1312c4d-fae2-4e70-893d-f4d07cce1d4e",
description="Read a Notion page and convert it to Markdown format with proper formatting for headings, lists, links, and rich text.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadPageMarkdownBlock.Input,
output_schema=NotionReadPageMarkdownBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"page_id": "00000000-0000-0000-0000-000000000000",
"include_title": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("markdown", "# Test Page\n\nThis is test content."),
("title", "Test Page"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"get_page_markdown": lambda *args, **kwargs: (
"# Test Page\n\nThis is test content.",
"Test Page",
)
},
)
@staticmethod
async def get_page_markdown(
credentials: OAuth2Credentials, page_id: str, include_title: bool = True
) -> tuple[str, str]:
"""
Get a Notion page and convert it to markdown.
Args:
credentials: OAuth2 credentials for Notion.
page_id: The ID of the page to fetch.
include_title: Whether to include the page title in the markdown.
Returns:
Tuple of (markdown_content, title)
"""
client = NotionClient(credentials)
# Get page metadata
page = await client.get_page(page_id)
title = extract_page_title(page)
# Get all blocks from the page
blocks = await client.get_blocks(page_id, recursive=True)
# Convert blocks to markdown
content_markdown = blocks_to_markdown(blocks)
# Combine title and content if requested
if include_title and title:
full_markdown = f"# {title}\n\n{content_markdown}"
else:
full_markdown = content_markdown
return full_markdown, title
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
markdown, title = await self.get_page_markdown(
credentials, input_data.page_id, input_data.include_title
)
yield "markdown", markdown
yield "title", title
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -0,0 +1,225 @@
from __future__ import annotations
from typing import List, Optional
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, extract_page_title, parse_rich_text
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionSearchResult(BaseModel):
"""Typed model for Notion search results."""
id: str
type: str # 'page' or 'database'
title: str
url: str
created_time: Optional[str] = None
last_edited_time: Optional[str] = None
parent_type: Optional[str] = None # 'page', 'database', or 'workspace'
parent_id: Optional[str] = None
icon: Optional[str] = None # emoji icon if present
is_inline: Optional[bool] = None # for databases only
class NotionSearchBlock(Block):
"""Search across your Notion workspace for pages and databases."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
query: str = SchemaField(
description="Search query text. Leave empty to get all accessible pages/databases.",
default="",
)
filter_type: Optional[str] = SchemaField(
description="Filter results by type: 'page' or 'database'. Leave empty for both.",
default=None,
)
limit: int = SchemaField(
description="Maximum number of results to return", default=20, ge=1, le=100
)
class Output(BlockSchema):
results: List[NotionSearchResult] = SchemaField(
description="List of search results with title, type, URL, and metadata."
)
result: NotionSearchResult = SchemaField(
description="Individual search result (yields one per result found)."
)
result_ids: List[str] = SchemaField(
description="List of IDs from search results for batch operations."
)
count: int = SchemaField(description="Number of results found.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="313515dd-9848-46ea-9cd6-3c627c892c56",
description="Search your Notion workspace for pages and databases by text query.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.SEARCH},
input_schema=NotionSearchBlock.Input,
output_schema=NotionSearchBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"query": "project",
"limit": 5,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"results",
[
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
)
],
),
("result_ids", ["123"]),
(
"result",
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
),
),
("count", 1),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"search_workspace": lambda *args, **kwargs: (
[
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
)
],
1,
)
},
)
@staticmethod
async def search_workspace(
credentials: OAuth2Credentials,
query: str = "",
filter_type: Optional[str] = None,
limit: int = 20,
) -> tuple[List[NotionSearchResult], int]:
"""
Search the Notion workspace.
Returns:
Tuple of (results_list, count)
"""
client = NotionClient(credentials)
# Build filter if type is specified
filter_obj = None
if filter_type:
filter_obj = {"property": "object", "value": filter_type}
# Execute search
response = await client.search(
query=query, filter_obj=filter_obj, page_size=limit
)
# Parse results
results = []
for item in response.get("results", []):
result_data = {
"id": item.get("id", ""),
"type": item.get("object", ""),
"url": item.get("url", ""),
"created_time": item.get("created_time"),
"last_edited_time": item.get("last_edited_time"),
"title": "", # Will be set below
}
# Extract title based on type
if item.get("object") == "page":
# For pages, get the title from properties
result_data["title"] = extract_page_title(item)
# Add parent info
parent = item.get("parent", {})
if parent.get("type") == "page_id":
result_data["parent_type"] = "page"
result_data["parent_id"] = parent.get("page_id")
elif parent.get("type") == "database_id":
result_data["parent_type"] = "database"
result_data["parent_id"] = parent.get("database_id")
elif parent.get("type") == "workspace":
result_data["parent_type"] = "workspace"
# Add icon if present
icon = item.get("icon")
if icon and icon.get("type") == "emoji":
result_data["icon"] = icon.get("emoji")
elif item.get("object") == "database":
# For databases, get title from the title array
result_data["title"] = parse_rich_text(item.get("title", []))
# Add database-specific metadata
result_data["is_inline"] = item.get("is_inline", False)
# Add parent info
parent = item.get("parent", {})
if parent.get("type") == "page_id":
result_data["parent_type"] = "page"
result_data["parent_id"] = parent.get("page_id")
elif parent.get("type") == "workspace":
result_data["parent_type"] = "workspace"
# Add icon if present
icon = item.get("icon")
if icon and icon.get("type") == "emoji":
result_data["icon"] = icon.get("emoji")
results.append(NotionSearchResult(**result_data))
return results, len(results)
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
results, count = await self.search_workspace(
credentials, input_data.query, input_data.filter_type, input_data.limit
)
# Yield the complete list for batch operations
yield "results", results
# Extract and yield IDs as a list for batch operations
result_ids = [r.id for r in results]
yield "result_ids", result_ids
# Yield each individual result for single connections
for result in results:
yield "result", result
yield "count", count
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -14,6 +14,7 @@ from backend.data.block import (
BlockType,
)
from backend.data.model import NodeExecutionStats, SchemaField
from backend.data.optional_block import get_optional_config
from backend.util import json
from backend.util.clients import get_database_manager_async_client
@@ -388,7 +389,9 @@ class SmartDecisionMakerBlock(Block):
return {"type": "function", "function": tool_function}
@staticmethod
async def _create_function_signature(node_id: str) -> list[dict[str, Any]]:
async def _create_function_signature(
node_id: str, user_id: str | None = None, check_optional: bool = True
) -> list[dict[str, Any]]:
"""
Creates function signatures for tools linked to a specified node within a graph.
@@ -398,6 +401,8 @@ class SmartDecisionMakerBlock(Block):
Args:
node_id: The node_id for which to create function signatures.
user_id: The ID of the user, used for checking credential-based optional blocks.
check_optional: Whether to check and skip optional blocks based on their conditions.
Returns:
list[dict[str, Any]]: A list of dictionaries, each representing a function signature
@@ -429,6 +434,41 @@ class SmartDecisionMakerBlock(Block):
if not sink_node:
raise ValueError(f"Sink node not found: {links[0].sink_id}")
# todo: use the renamed value of metadata when available
# Check if this node is marked as optional and should be skipped
optional_config = get_optional_config(sink_node.metadata)
if optional_config and optional_config.enabled and check_optional:
# Check conditions to determine if block should be skipped
skip_block = False
# Check credential availability if configured
if optional_config.conditions.on_missing_credentials and user_id:
# Get credential fields from the block
sink_block = sink_node.block
if hasattr(sink_block, "input_schema"):
creds_fields = sink_block.input_schema.get_credentials_fields()
if creds_fields:
# For Smart Decision Maker, we simplify by assuming
# credentials might be missing if optional is enabled
# Full check happens at execution time
logger.debug(
f"Optional block {sink_node.id} may be skipped based on credentials"
)
# Continue to exclude from available tools
continue
# If other conditions exist but can't be checked now, be conservative
if (
optional_config.conditions.input_flag
or optional_config.conditions.kv_flag
):
# These runtime flags can't be checked here, so exclude the block
logger.debug(
f"Optional block {sink_node.id} excluded due to runtime conditions"
)
continue
if sink_node.block_id == AgentExecutorBlock().id:
return_tool_functions.append(
await SmartDecisionMakerBlock._create_agent_function_signature(
@@ -456,7 +496,9 @@ class SmartDecisionMakerBlock(Block):
user_id: str,
**kwargs,
) -> BlockOutput:
tool_functions = await self._create_function_signature(node_id)
tool_functions = await self._create_function_signature(
node_id, user_id=user_id, check_optional=True
)
yield "tool_functions", json.dumps(tool_functions)
input_data.conversation_history = input_data.conversation_history or []
@@ -523,7 +565,6 @@ class SmartDecisionMakerBlock(Block):
credentials=credentials,
llm_model=input_data.model,
prompt=prompt,
json_format=False,
max_tokens=input_data.max_tokens,
tools=tool_functions,
ollama_host=input_data.ollama_host,
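The exclusion policy in this hunk is deliberately conservative: any skip condition that cannot be evaluated while building tool signatures removes the tool from the list. A standalone sketch of that predicate, using simplified stand-ins for the repo's OptionalBlockConfig/OptionalBlockConditions (an approximation, not the exact classes):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Conditions:  # simplified stand-in for OptionalBlockConditions
    on_missing_credentials: bool = False
    kv_flag: Optional[str] = None

@dataclass
class OptionalConfig:  # simplified stand-in for OptionalBlockConfig
    enabled: bool = False
    conditions: Conditions = field(default_factory=Conditions)

def should_exclude_tool(cfg: Optional[OptionalConfig], has_user_id: bool) -> bool:
    """Exclude a tool whose skip conditions exist but can't be checked yet."""
    if cfg is None or not cfg.enabled:
        return False
    if cfg.conditions.on_missing_credentials and has_user_id:
        return True  # the full credential check is deferred to execution time
    # Runtime-only flags can't be evaluated while building signatures
    return cfg.conditions.kv_flag is not None

assert should_exclude_tool(
    OptionalConfig(enabled=True, conditions=Conditions(kv_flag="beta")),
    has_user_id=False,
)
assert not should_exclude_tool(None, has_user_id=True)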

View File

@@ -30,7 +30,6 @@ class TestLLMStatsTracking:
credentials=llm.TEST_CREDENTIALS,
llm_model=llm.LlmModel.GPT4O,
prompt=[{"role": "user", "content": "Hello"}],
json_format=False,
max_tokens=100,
)
@@ -42,6 +41,8 @@ class TestLLMStatsTracking:
@pytest.mark.asyncio
async def test_ai_structured_response_block_tracks_stats(self):
"""Test that AIStructuredResponseGeneratorBlock correctly tracks stats."""
from unittest.mock import patch
import backend.blocks.llm as llm
block = llm.AIStructuredResponseGeneratorBlock()
@@ -51,7 +52,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"key1": "value1", "key2": "value2"}',
response='<json_output id="test123456">{"key1": "value1", "key2": "value2"}</json_output>',
tool_calls=None,
prompt_tokens=15,
completion_tokens=25,
@@ -69,10 +70,12 @@ class TestLLMStatsTracking:
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Check stats
assert block.execution_stats.input_token_count == 15
@@ -143,7 +146,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"wrong": "format"}',
response='<json_output id="test123456">{"wrong": "format"}</json_output>',
tool_calls=None,
prompt_tokens=10,
completion_tokens=15,
@@ -154,7 +157,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"key1": "value1", "key2": "value2"}',
response='<json_output id="test123456">{"key1": "value1", "key2": "value2"}</json_output>',
tool_calls=None,
prompt_tokens=20,
completion_tokens=25,
@@ -173,10 +176,12 @@ class TestLLMStatsTracking:
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Check stats - should accumulate both calls
# For 2 attempts: attempt 1 (failed) + attempt 2 (success) = 2 total
@@ -269,7 +274,8 @@ class TestLLMStatsTracking:
mock_response.choices = [
MagicMock(
message=MagicMock(
content='{"summary": "Test chunk summary"}', tool_calls=None
content='<json_output id="test123456">{"summary": "Test chunk summary"}</json_output>',
tool_calls=None,
)
)
]
@@ -277,7 +283,7 @@ class TestLLMStatsTracking:
mock_response.choices = [
MagicMock(
message=MagicMock(
content='{"final_summary": "Test final summary"}',
content='<json_output id="test123456">{"final_summary": "Test final summary"}</json_output>',
tool_calls=None,
)
)
@@ -298,11 +304,13 @@ class TestLLMStatsTracking:
max_tokens=1000, # Large enough to avoid chunking
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
print(f"Actual calls made: {call_count}")
print(f"Block stats: {block.execution_stats}")
@@ -457,7 +465,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"result": "test"}',
response='<json_output id="test123456">{"result": "test"}</json_output>',
tool_calls=None,
prompt_tokens=10,
completion_tokens=20,
@@ -476,10 +484,12 @@ class TestLLMStatsTracking:
# Run the block
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Block finished - now grab and assert stats
assert block.execution_stats is not None

View File

@@ -35,20 +35,19 @@ async def execute_graph(
logger.info("Input data: %s", input_data)
# --- Test adding new executions --- #
response = await agent_server.test_execute_graph(
graph_exec = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
graph_exec_id = response.graph_exec_id
logger.info("Created execution with ID: %s", graph_exec_id)
logger.info("Created execution with ID: %s", graph_exec.id)
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec_id, 30)
result = await wait_execution(test_user.id, graph_exec.id, 30)
logger.info("Execution completed with %d results", len(result))
return graph_exec_id
return graph_exec.id
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -172,6 +172,11 @@ class FillTextTemplateBlock(Block):
format: str = SchemaField(
description="Template to format the text using `values`. Use Jinja2 syntax."
)
escape_html: bool = SchemaField(
default=False,
advanced=True,
description="Whether to escape special characters in the inserted values to be HTML-safe. Enable for HTML output, disable for plain text.",
)
class Output(BlockSchema):
output: str = SchemaField(description="Formatted text")
@@ -205,6 +210,7 @@ class FillTextTemplateBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
formatter = text.TextFormatter(autoescape=input_data.escape_html)
yield "output", formatter.format_string(input_data.format, input_data.values)

View File

@@ -1,5 +1,8 @@
from backend.server.v2.library.model import LibraryAgentPreset
from .graph import NodeModel
from .integrations import Webhook # noqa: F401
# Resolve Webhook <- NodeModel forward reference
# Resolve Webhook forward references
NodeModel.model_rebuild()
LibraryAgentPreset.model_rebuild()

View File

@@ -1,57 +1,31 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import List, Optional
from typing import Optional
from autogpt_libs.api_key.key_manager import APIKeyManager
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.errors import PrismaError
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import (
APIKeyCreateInput,
APIKeyUpdateInput,
APIKeyWhereInput,
APIKeyWhereUniqueInput,
)
from pydantic import BaseModel
from prisma.types import APIKeyWhereUniqueInput
from pydantic import BaseModel, Field
from backend.data.db import BaseDbModel
from backend.util.exceptions import NotAuthorizedError, NotFoundError
logger = logging.getLogger(__name__)
keysmith = APIKeySmith()
# Some basic exceptions
class APIKeyError(Exception):
"""Base exception for API key operations"""
pass
class APIKeyNotFoundError(APIKeyError):
"""Raised when an API key is not found"""
pass
class APIKeyPermissionError(APIKeyError):
"""Raised when there are permission issues with API key operations"""
pass
class APIKeyValidationError(APIKeyError):
"""Raised when API key validation fails"""
pass
class APIKey(BaseDbModel):
class APIKeyInfo(BaseModel):
id: str
name: str
prefix: str
key: str
status: APIKeyStatus = APIKeyStatus.ACTIVE
permissions: List[APIKeyPermission]
postfix: str
head: str = Field(
description=f"The first {APIKeySmith.HEAD_LENGTH} characters of the key"
)
tail: str = Field(
description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key"
)
status: APIKeyStatus
permissions: list[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
@@ -60,266 +34,211 @@ class APIKey(BaseDbModel):
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKey(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
key=api_key.key,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKey from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfo(
id=api_key.id,
name=api_key.name,
head=api_key.head,
tail=api_key.tail,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
class APIKeyWithoutHash(BaseModel):
id: str
name: str
prefix: str
postfix: str
status: APIKeyStatus
permissions: List[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime]
revoked_at: Optional[datetime]
description: Optional[str]
user_id: str
class APIKeyInfoWithHash(APIKeyInfo):
hash: str
salt: str | None = None # None for legacy keys
def match(self, plaintext_key: str) -> bool:
"""Returns whether the given key matches this API key object."""
return keysmith.verify_key(plaintext_key, self.hash, self.salt)
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKeyWithoutHash(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKeyWithoutHash from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfoWithHash(
**APIKeyInfo.from_db(api_key).model_dump(),
hash=api_key.hash,
salt=api_key.salt,
)
def without_hash(self) -> APIKeyInfo:
return APIKeyInfo(**self.model_dump(exclude={"hash", "salt"}))
async def generate_api_key(
async def create_api_key(
name: str,
user_id: str,
permissions: List[APIKeyPermission],
permissions: list[APIKeyPermission],
description: Optional[str] = None,
) -> tuple[APIKeyWithoutHash, str]:
) -> tuple[APIKeyInfo, str]:
"""
Generate a new API key and store it in the database.
Returns the API key object (without hash) and the plain text key.
"""
try:
api_manager = APIKeyManager()
key = api_manager.generate_api_key()
generated_key = keysmith.generate_key()
api_key = await PrismaAPIKey.prisma().create(
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
prefix=key.prefix,
postfix=key.postfix,
key=key.hash,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
)
api_key_without_hash = APIKeyWithoutHash.from_db(api_key)
return api_key_without_hash, key.raw
except PrismaError as e:
logger.error(f"Database error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
return APIKeyInfo.from_db(saved_key_obj), generated_key.key
async def validate_api_key(plain_text_key: str) -> Optional[APIKey]:
async def get_active_api_keys_by_head(head: str) -> list[APIKeyInfoWithHash]:
results = await PrismaAPIKey.prisma().find_many(
where={"head": head, "status": APIKeyStatus.ACTIVE}
)
return [APIKeyInfoWithHash.from_db(key) for key in results]
async def validate_api_key(plaintext_key: str) -> Optional[APIKeyInfo]:
"""
Validate an API key and return the API key object if valid.
Validate an API key and return the API key object if valid and active.
"""
try:
if not plain_text_key.startswith(APIKeyManager.PREFIX):
if not plaintext_key.startswith(APIKeySmith.PREFIX):
logger.warning("Invalid API key format")
return None
prefix = plain_text_key[: APIKeyManager.PREFIX_LENGTH]
api_manager = APIKeyManager()
head = plaintext_key[: APIKeySmith.HEAD_LENGTH]
potential_matches = await get_active_api_keys_by_head(head)
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(prefix=prefix, status=(APIKeyStatus.ACTIVE))
matched_api_key = next(
(pm for pm in potential_matches if pm.match(plaintext_key)),
None,
)
if not api_key:
logger.warning(f"No active API key found with prefix {prefix}")
if not matched_api_key:
# API key not found or invalid
return None
is_valid = api_manager.verify_api_key(plain_text_key, api_key.key)
if not is_valid:
logger.warning("API key verification failed")
return None
return APIKey.from_db(api_key)
except Exception as e:
logger.error(f"Error validating API key: {str(e)}")
raise APIKeyValidationError(f"Failed to validate API key: {str(e)}")
async def revoke_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to revoke this API key."
# Migrate legacy keys to secure format on successful validation
if matched_api_key.salt is None:
matched_api_key = await _migrate_key_to_secure_hash(
plaintext_key, matched_api_key
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(
status=APIKeyStatus.REVOKED, revokedAt=datetime.now(timezone.utc)
),
)
return matched_api_key.without_hash()
except Exception as e:
logger.error(f"Error while validating API key: {e}")
raise RuntimeError("Failed to validate API key") from e
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
async def _migrate_key_to_secure_hash(
plaintext_key: str, key_obj: APIKeyInfoWithHash
) -> APIKeyInfoWithHash:
"""Replace the SHA256 hash of a legacy API key with a salted Scrypt hash."""
try:
new_hash, new_salt = keysmith.hash_key(plaintext_key)
await PrismaAPIKey.prisma().update(
where={"id": key_obj.id}, data={"hash": new_hash, "salt": new_salt}
)
logger.info(f"Migrated legacy API key #{key_obj.id} to secure format")
# Update the API key object with new values for return
key_obj.hash = new_hash
key_obj.salt = new_salt
except Exception as e:
logger.error(f"Failed to migrate legacy API key #{key_obj.id}: {e}")
return key_obj
async def revoke_api_key(key_id: str, user_id: str) -> APIKeyInfo:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to revoke this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={
"status": APIKeyStatus.REVOKED,
"revokedAt": datetime.now(timezone.utc),
},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to revoke.")
return APIKeyInfo.from_db(updated_api_key)
async def list_user_api_keys(user_id: str) -> list[APIKeyInfo]:
api_keys = await PrismaAPIKey.prisma().find_many(
where={"userId": user_id}, order={"createdAt": "desc"}
)
return [APIKeyInfo.from_db(key) for key in api_keys]
async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo:
selector: APIKeyWhereUniqueInput = {"id": key_id}
api_key = await PrismaAPIKey.prisma().find_unique(where=selector)
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to suspend this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where=selector, data={"status": APIKeyStatus.SUSPENDED}
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to suspend.")
return APIKeyInfo.from_db(updated_api_key)
def has_permission(api_key: APIKeyInfo, required_permission: APIKeyPermission) -> bool:
return required_permission in api_key.permissions
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]:
api_key = await PrismaAPIKey.prisma().find_first(
where={"id": key_id, "userId": user_id}
)
if not api_key:
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
async def list_user_api_keys(user_id: str) -> List[APIKeyWithoutHash]:
try:
where_clause: APIKeyWhereInput = {"userId": user_id}
api_keys = await PrismaAPIKey.prisma().find_many(
where=where_clause, order={"createdAt": "desc"}
)
return [APIKeyWithoutHash.from_db(key) for key in api_keys]
except PrismaError as e:
logger.error(f"Database error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
async def suspend_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to suspend this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(status=APIKeyStatus.SUSPENDED),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
def has_permission(api_key: APIKey, required_permission: APIKeyPermission) -> bool:
try:
return required_permission in api_key.permissions
except Exception as e:
logger.error(f"Error checking API key permissions: {str(e)}")
return False
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(id=key_id, userId=user_id)
)
if not api_key:
return None
return APIKeyWithoutHash.from_db(api_key)
except PrismaError as e:
logger.error(f"Database error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
return APIKeyInfo.from_db(api_key)
async def update_api_key_permissions(
key_id: str, user_id: str, permissions: List[APIKeyPermission]
) -> Optional[APIKeyWithoutHash]:
key_id: str, user_id: str, permissions: list[APIKeyPermission]
) -> APIKeyInfo:
"""
Update the permissions of an API key.
"""
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if api_key is None:
raise APIKeyNotFoundError("No such API key found.")
if api_key is None:
raise NotFoundError("No such API key found.")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to update this API key."
)
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to update this API key.")
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(permissions=permissions),
)
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={"permissions": permissions},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to update.")
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
return APIKeyInfo.from_db(updated_api_key)
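The validation path above stores no plaintext: keys are looked up by an indexed `head`, then verified against a salted hash. A self-contained sketch of the same flow using hashlib.scrypt; the prefix, head/tail lengths, and scrypt parameters are illustrative guesses, not APIKeySmith's actual constants:

import hashlib
import os
import secrets

HEAD_LENGTH, TAIL_LENGTH = 8, 4  # assumed values for illustration

def hash_key(plaintext: str, salt: bytes) -> bytes:
    return hashlib.scrypt(plaintext.encode(), salt=salt, n=2**14, r=8, p=1)

def generate_key() -> dict:
    key = "agpt_" + secrets.token_urlsafe(32)  # shown to the user exactly once
    salt = os.urandom(16)
    return {
        "key": key,
        "head": key[:HEAD_LENGTH],   # indexed, used to fetch candidate rows
        "tail": key[-TAIL_LENGTH:],  # safe to display in the UI
        "salt": salt,
        "hash": hash_key(key, salt), # the only secret material persisted
    }

def verify(plaintext: str, stored: dict) -> bool:
    return secrets.compare_digest(hash_key(plaintext, stored["salt"]), stored["hash"])

record = generate_key()
assert verify(record["key"], record)
assert not verify("agpt_wrong-key", record)

The legacy-migration branch then simply re-hashes the presented plaintext with a fresh salt once it verifies against the old unsalted SHA256 digest.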

View File

@@ -1,4 +1,3 @@
import functools
import inspect
import logging
import os
@@ -8,6 +7,7 @@ from enum import Enum
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Generic,
Optional,
@@ -20,6 +20,7 @@ from typing import (
import jsonref
import jsonschema
from autogpt_libs.utils.cache import cached
from prisma.models import AgentBlock
from prisma.types import AgentBlockCreateInput
from pydantic import BaseModel
@@ -44,9 +45,10 @@ if TYPE_CHECKING:
app_config = Config()
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
BlockOutput = AsyncGen[BlockData, None] # Output: 1 output pin produces n data.
BlockOutputEntry = tuple[str, Any] # Output data should be a tuple of (name, value).
BlockOutput = AsyncGen[BlockOutputEntry, None] # Output: 1 output pin produces n data.
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]
CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict.
@@ -89,6 +91,45 @@ class BlockCategory(Enum):
return {"category": self.name, "description": self.value}
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
class BlockInfo(BaseModel):
id: str
name: str
inputSchema: dict[str, Any]
outputSchema: dict[str, Any]
costs: list[BlockCost]
description: str
categories: list[dict[str, str]]
contributors: list[dict[str, Any]]
staticOutput: bool
uiType: str
class BlockSchema(BaseModel):
cached_jsonschema: ClassVar[dict[str, Any]]
@@ -306,7 +347,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_schema: Type[BlockSchemaInputType] = EmptySchema,
output_schema: Type[BlockSchemaOutputType] = EmptySchema,
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockData | list[BlockData] | None = None,
test_output: BlockTestOutput | list[BlockTestOutput] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials | dict[str, Credentials]] = None,
disabled: bool = False,
@@ -452,6 +493,24 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
"uiType": self.block_type.value,
}
def get_info(self) -> BlockInfo:
from backend.data.credit import get_block_cost
return BlockInfo(
id=self.id,
name=self.name,
inputSchema=self.input_schema.jsonschema(),
outputSchema=self.output_schema.jsonschema(),
costs=get_block_cost(self),
description=self.description,
categories=[category.dict() for category in self.categories],
contributors=[
contributor.model_dump() for contributor in self.contributors
],
staticOutput=self.static_output,
uiType=self.block_type.value,
)
async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
raise ValueError(
@@ -663,7 +722,7 @@ def get_block(block_id: str) -> Block[BlockSchema, BlockSchema] | None:
return cls() if cls else None
@functools.cache
@cached()
def get_webhook_block_ids() -> Sequence[str]:
return [
id
@@ -672,7 +731,7 @@ def get_webhook_block_ids() -> Sequence[str]:
]
@functools.cache
@cached()
def get_io_block_ids() -> Sequence[str]:
return [
id

View File

@@ -29,8 +29,7 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.data.block import Block
from backend.data.cost import BlockCost, BlockCostType
from backend.data.block import Block, BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,

View File

@@ -1,32 +0,0 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from backend.data.block import BlockInput
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)

View File

@@ -2,7 +2,7 @@ import logging
from abc import ABC, abstractmethod
from collections import defaultdict
from datetime import datetime, timezone
from typing import Any, cast
from typing import TYPE_CHECKING, Any, cast
import stripe
from prisma import Json
@@ -23,7 +23,6 @@ from pydantic import BaseModel
from backend.data import db
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.data.model import (
AutoTopUpConfig,
RefundRequest,
@@ -41,6 +40,9 @@ from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.data.block import Block, BlockCost
settings = Settings()
stripe.api_key = settings.secrets.stripe_api_key
logger = logging.getLogger(__name__)
@@ -997,10 +999,14 @@ def get_user_credit_model() -> UserCreditBase:
return UserCredit()
def get_block_costs() -> dict[str, list[BlockCost]]:
def get_block_costs() -> dict[str, list["BlockCost"]]:
return {block().id: costs for block, costs in BLOCK_COSTS.items()}
def get_block_cost(block: "Block") -> list["BlockCost"]:
return BLOCK_COSTS.get(block.__class__, [])
async def get_stripe_customer_id(user_id: str) -> str:
user = await get_user_by_id(user_id)

View File

@@ -83,7 +83,7 @@ async def disconnect():
# Transaction timeout constant (in milliseconds)
TRANSACTION_TIMEOUT = 15000 # 15 seconds - Increased from 5s to prevent timeout errors
TRANSACTION_TIMEOUT = 30000 # 30 seconds - Increased from 15s to prevent timeout errors during graph creation under load
@asynccontextmanager

View File

@@ -11,11 +11,14 @@ from typing import (
Generator,
Generic,
Literal,
Mapping,
Optional,
TypeVar,
cast,
overload,
)
from prisma import Json
from prisma.enums import AgentExecutionStatus
from prisma.models import (
AgentGraphExecution,
@@ -24,7 +27,6 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -60,7 +62,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
graph_execution_include,
)
from .model import GraphExecutionStats, NodeExecutionStats
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
T = TypeVar("T")
@@ -87,6 +89,37 @@ class BlockErrorStats(BaseModel):
ExecutionStatus = AgentExecutionStatus
NodeInputMask = Mapping[str, JsonValue]
NodesInputMasks = Mapping[str, NodeInputMask]
# destination status -> allowed source statuses
VALID_STATUS_TRANSITIONS = {
ExecutionStatus.QUEUED: [
ExecutionStatus.INCOMPLETE,
],
ExecutionStatus.RUNNING: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.TERMINATED, # For resuming halted execution
],
ExecutionStatus.COMPLETED: [
ExecutionStatus.RUNNING,
],
ExecutionStatus.FAILED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
ExecutionStatus.TERMINATED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
ExecutionStatus.SKIPPED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
],
}
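Since the mapping reads destination -> allowed sources, the guard can be checked as a pure function before any database write. A minimal sketch (the Status enum is a stand-in for prisma's AgentExecutionStatus):

from enum import Enum

class Status(str, Enum):  # stand-in for prisma.enums.AgentExecutionStatus
    INCOMPLETE = "INCOMPLETE"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"

TRANSITIONS = {  # destination status -> allowed source statuses
    Status.QUEUED: [Status.INCOMPLETE],
    Status.RUNNING: [Status.INCOMPLETE, Status.QUEUED],
    Status.COMPLETED: [Status.RUNNING],
}

def can_transition(current: Status, new: Status) -> bool:
    return current in TRANSITIONS.get(new, [])

assert can_transition(Status.QUEUED, Status.RUNNING)
assert not can_transition(Status.COMPLETED, Status.RUNNING)

In `update_graph_execution_stats` below, the same rule is enforced atomically by folding the allowed source statuses into the UPDATE's WHERE clause.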
class GraphExecutionMeta(BaseDbModel):
@@ -94,10 +127,15 @@ class GraphExecutionMeta(BaseDbModel):
user_id: str
graph_id: str
graph_version: int
preset_id: Optional[str] = None
inputs: Optional[BlockInput] # no default -> required in the OpenAPI spec
credential_inputs: Optional[dict[str, CredentialsMetaInput]]
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
status: ExecutionStatus
started_at: datetime
ended_at: datetime
is_shared: bool = False
share_token: Optional[str] = None
class Stats(BaseModel):
model_config = ConfigDict(
@@ -179,6 +217,18 @@ class GraphExecutionMeta(BaseDbModel):
user_id=_graph_exec.userId,
graph_id=_graph_exec.agentGraphId,
graph_version=_graph_exec.agentGraphVersion,
inputs=cast(BlockInput | None, _graph_exec.inputs),
credential_inputs=(
{
name: CredentialsMetaInput.model_validate(cmi)
for name, cmi in cast(dict, _graph_exec.credentialInputs).items()
}
if _graph_exec.credentialInputs
else None
),
nodes_input_masks=cast(
dict[str, BlockInput] | None, _graph_exec.nodesInputMasks
),
preset_id=_graph_exec.agentPresetId,
status=ExecutionStatus(_graph_exec.executionStatus),
started_at=start_time,
@@ -202,11 +252,13 @@ class GraphExecutionMeta(BaseDbModel):
if stats
else None
),
is_shared=_graph_exec.isShared,
share_token=_graph_exec.shareToken,
)
class GraphExecution(GraphExecutionMeta):
inputs: BlockInput
inputs: BlockInput # type: ignore - incompatible override is intentional
outputs: CompletedBlockOutput
@staticmethod
@@ -226,15 +278,18 @@ class GraphExecution(GraphExecutionMeta):
)
inputs = {
**{
# inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
},
**(
graph_exec.inputs
or {
# fallback: extract inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
}
),
**{
# input from webhook-triggered block
"payload": exec.input_data["payload"]
@@ -252,14 +307,13 @@ class GraphExecution(GraphExecutionMeta):
if (
block := get_block(exec.block_id)
) and block.block_type == BlockType.OUTPUT:
outputs[exec.input_data["name"]].append(
exec.input_data.get("value", None)
)
outputs[exec.input_data["name"]].append(exec.input_data.get("value"))
return GraphExecution(
**{
field_name: getattr(graph_exec, field_name)
for field_name in GraphExecutionMeta.model_fields
if field_name != "inputs"
},
inputs=inputs,
outputs=outputs,
@@ -292,13 +346,17 @@ class GraphExecutionWithNodes(GraphExecution):
node_executions=node_executions,
)
def to_graph_execution_entry(self, user_context: "UserContext"):
def to_graph_execution_entry(
self,
user_context: "UserContext",
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
):
return GraphExecutionEntry(
user_id=self.user_id,
graph_id=self.graph_id,
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks={}, # FIXME: store credentials on AgentGraphExecution
nodes_input_masks=compiled_nodes_input_masks,
user_context=user_context,
)
@@ -315,10 +373,9 @@ class NodeExecutionResult(BaseModel):
input_data: BlockInput
output_data: CompletedBlockOutput
add_time: datetime
queue_time: datetime | None = None
start_time: datetime | None = None
end_time: datetime | None = None
stats: NodeExecutionStats | None = None
queue_time: datetime | None
start_time: datetime | None
end_time: datetime | None
@staticmethod
def from_db(_node_exec: AgentNodeExecution, user_id: Optional[str] = None):
@@ -336,7 +393,7 @@ class NodeExecutionResult(BaseModel):
else:
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, type[Any])
input_data[data.name] = type_utils.convert(data.data, JsonValue)
output_data: CompletedBlockOutput = defaultdict(list)
@@ -345,7 +402,7 @@ class NodeExecutionResult(BaseModel):
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
output_data[data.name].append(type_utils.convert(data.data, JsonValue))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
@@ -370,7 +427,6 @@ class NodeExecutionResult(BaseModel):
queue_time=_node_exec.queuedTime,
start_time=_node_exec.startedTime,
end_time=_node_exec.endedTime,
stats=stats,
)
def to_node_execution_entry(
@@ -541,9 +597,12 @@ async def get_graph_execution(
async def create_graph_execution(
graph_id: str,
graph_version: int,
starting_nodes_input: list[tuple[str, BlockInput]],
starting_nodes_input: list[tuple[str, BlockInput]], # list[(node_id, BlockInput)]
inputs: Mapping[str, JsonValue],
user_id: str,
preset_id: str | None = None,
preset_id: Optional[str] = None,
credential_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> GraphExecutionWithNodes:
"""
Create a new AgentGraphExecution record.
@@ -551,11 +610,18 @@ async def create_graph_execution(
The id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data=AgentGraphExecutionCreateInput(
agentGraphId=graph_id,
agentGraphVersion=graph_version,
executionStatus=ExecutionStatus.QUEUED,
NodeExecutions={
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.INCOMPLETE,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
@@ -571,9 +637,9 @@ async def create_graph_execution(
for node_id, node_input in starting_nodes_input
]
},
userId=user_id,
agentPresetId=preset_id,
),
"userId": user_id,
"agentPresetId": preset_id,
},
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -584,7 +650,7 @@ async def upsert_execution_input(
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: Any,
input_data: JsonValue,
node_exec_id: str | None = None,
) -> tuple[str, BlockInput]:
"""
@@ -633,7 +699,7 @@ async def upsert_execution_input(
)
return existing_execution.id, {
**{
input_data.name: type_utils.convert(input_data.data, type[Any])
input_data.name: type_utils.convert(input_data.data, JsonValue)
for input_data in existing_execution.Input or []
},
input_name: input_data,
@@ -656,42 +722,6 @@ async def upsert_execution_input(
)
async def create_node_execution(
node_exec_id: str,
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: Any,
) -> None:
"""Create a new node execution with the first input."""
json_input_data = SafeJson(input_data)
await AgentNodeExecution.prisma().create(
data=AgentNodeExecutionCreateInput(
id=node_exec_id,
agentNodeId=node_id,
agentGraphExecutionId=graph_exec_id,
executionStatus=ExecutionStatus.INCOMPLETE,
Input={"create": {"name": input_name, "data": json_input_data}},
)
)
async def add_input_to_node_execution(
node_exec_id: str,
input_name: str,
input_data: Any,
) -> None:
"""Add an input to an existing node execution."""
json_input_data = SafeJson(input_data)
await AgentNodeExecutionInputOutput.prisma().create(
data=AgentNodeExecutionInputOutputCreateInput(
name=input_name,
data=json_input_data,
referencedByInputExecId=node_exec_id,
)
)
async def upsert_execution_output(
node_exec_id: str,
output_name: str,
@@ -730,6 +760,11 @@ async def update_graph_execution_stats(
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
) -> GraphExecution | None:
if not status and not stats:
raise ValueError(
f"Must provide either status or stats to update for execution {graph_exec_id}"
)
update_data: AgentGraphExecutionUpdateManyMutationInput = {}
if stats:
@@ -741,20 +776,25 @@ async def update_graph_execution_stats(
if status:
update_data["executionStatus"] = status
updated_count = await AgentGraphExecution.prisma().update_many(
where={
"id": graph_exec_id,
"OR": [
{"executionStatus": ExecutionStatus.RUNNING},
{"executionStatus": ExecutionStatus.QUEUED},
# Terminated graph can be resumed.
{"executionStatus": ExecutionStatus.TERMINATED},
],
},
where_clause: AgentGraphExecutionWhereInput = {"id": graph_exec_id}
if status:
if allowed_from := VALID_STATUS_TRANSITIONS.get(status, []):
# Add OR clause to check if current status is one of the allowed source statuses
where_clause["AND"] = [
{"id": graph_exec_id},
{"OR": [{"executionStatus": s} for s in allowed_from]},
]
else:
raise ValueError(
f"Status {status} cannot be set via update for execution {graph_exec_id}. "
f"This status can only be set at creation or is not a valid target status."
)
await AgentGraphExecution.prisma().update_many(
where=where_clause,
data=update_data,
)
if updated_count == 0:
return None
graph_exec = await AgentGraphExecution.prisma().find_unique_or_raise(
where={"id": graph_exec_id},
@@ -762,6 +802,7 @@ async def update_graph_execution_stats(
[*get_io_block_ids(), *get_webhook_block_ids()]
),
)
return GraphExecution.from_db(graph_exec)
@@ -926,7 +967,7 @@ class GraphExecutionEntry(BaseModel):
graph_exec_id: str
graph_id: str
graph_version: int
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None
nodes_input_masks: Optional[NodesInputMasks] = None
user_context: UserContext
@@ -988,6 +1029,18 @@ class NodeExecutionEvent(NodeExecutionResult):
)
class SharedExecutionResponse(BaseModel):
"""Public-safe response for shared executions"""
id: str
graph_name: str
graph_description: Optional[str]
status: ExecutionStatus
created_at: datetime
outputs: CompletedBlockOutput # Only the final outputs, no intermediate data
# Deliberately exclude: user_id, inputs, credentials, node details
ExecutionEvent = Annotated[
GraphExecutionEvent | NodeExecutionEvent, Field(discriminator="event_type")
]
@@ -1165,3 +1218,98 @@ async def get_block_error_stats(
)
for row in result
]
async def update_graph_execution_share_status(
execution_id: str,
user_id: str,
is_shared: bool,
share_token: str | None,
shared_at: datetime | None,
) -> None:
"""Update the sharing status of a graph execution."""
await AgentGraphExecution.prisma().update(
where={"id": execution_id},
data={
"isShared": is_shared,
"shareToken": share_token,
"sharedAt": shared_at,
},
)
async def get_graph_execution_by_share_token(
share_token: str,
) -> SharedExecutionResponse | None:
"""Get a shared execution with limited public-safe data."""
execution = await AgentGraphExecution.prisma().find_first(
where={
"shareToken": share_token,
"isShared": True,
"isDeleted": False,
},
include={
"AgentGraph": True,
"NodeExecutions": {
"include": {
"Output": True,
"Node": {
"include": {
"AgentBlock": True,
}
},
},
},
},
)
if not execution:
return None
# Extract outputs from OUTPUT blocks only (consistent with GraphExecution.from_db)
outputs: CompletedBlockOutput = defaultdict(list)
if execution.NodeExecutions:
for node_exec in execution.NodeExecutions:
if node_exec.Node and node_exec.Node.agentBlockId:
# Get the block definition to check its type
block = get_block(node_exec.Node.agentBlockId)
if block and block.block_type == BlockType.OUTPUT:
# For OUTPUT blocks, the data is stored in executionData or Input
# The executionData contains the structured input with 'name' and 'value' fields
if hasattr(node_exec, "executionData") and node_exec.executionData:
exec_data = type_utils.convert(
node_exec.executionData, dict[str, Any]
)
if "name" in exec_data:
name = exec_data["name"]
value = exec_data.get("value")
outputs[name].append(value)
elif node_exec.Input:
# Build input_data from Input relation
input_data = {}
for data in node_exec.Input:
if data.name and data.data is not None:
input_data[data.name] = type_utils.convert(
data.data, JsonValue
)
if "name" in input_data:
name = input_data["name"]
value = input_data.get("value")
outputs[name].append(value)
return SharedExecutionResponse(
id=execution.id,
graph_name=(
execution.AgentGraph.name
if (execution.AgentGraph and execution.AgentGraph.name)
else "Untitled Agent"
),
graph_description=(
execution.AgentGraph.description if execution.AgentGraph else None
),
status=ExecutionStatus(execution.executionStatus),
created_at=execution.createdAt,
outputs=outputs,
)
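The shared-execution endpoint surfaces only values that passed through OUTPUT blocks, mirroring `GraphExecution.from_db`. A toy version of that filter with simplified data shapes (the dict records stand in for node-execution rows):

from collections import defaultdict

node_execs = [
    {"block_type": "OUTPUT", "input_data": {"name": "answer", "value": 42}},
    {"block_type": "STANDARD", "input_data": {"name": "internal", "value": 0}},
    {"block_type": "OUTPUT", "input_data": {"name": "answer", "value": 43}},
]

outputs: dict[str, list] = defaultdict(list)
for ne in node_execs:
    # Keep only OUTPUT-block values; intermediate node data never leaves the server
    if ne["block_type"] == "OUTPUT" and "name" in ne["input_data"]:
        outputs[ne["input_data"]["name"]].append(ne["input_data"].get("value"))

assert dict(outputs) == {"answer": [42, 43]}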

View File

@@ -1,6 +1,7 @@
import logging
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, Literal, Optional, cast
from prisma.enums import SubmissionStatus
@@ -12,7 +13,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import Field, JsonValue, create_model
from pydantic import BaseModel, Field, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -28,12 +29,14 @@ from backend.data.model import (
from backend.integrations.providers import ProviderName
from backend.util import type as type_utils
from backend.util.json import SafeJson
from backend.util.models import Pagination
from .block import Block, BlockInput, BlockSchema, BlockType, get_block, get_blocks
from .db import BaseDbModel, query_raw_with_schema, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
if TYPE_CHECKING:
from .execution import NodesInputMasks
from .integrations import Webhook
logger = logging.getLogger(__name__)
@@ -159,6 +162,8 @@ class BaseGraph(BaseDbModel):
is_active: bool = True
name: str
description: str
instructions: str | None = None
recommended_schedule_cron: str | None = None
nodes: list[Node] = []
links: list[Link] = []
forked_from_id: str | None = None
@@ -205,6 +210,35 @@ class BaseGraph(BaseDbModel):
None,
)
@computed_field
@property
def trigger_setup_info(self) -> "GraphTriggerInfo | None":
if not (
self.webhook_input_node
and (trigger_block := self.webhook_input_node.block).webhook_config
):
return None
return GraphTriggerInfo(
provider=trigger_block.webhook_config.provider,
config_schema={
**(json_schema := trigger_block.input_schema.jsonschema()),
"properties": {
pn: sub_schema
for pn, sub_schema in json_schema["properties"].items()
if not is_credentials_field_name(pn)
},
"required": [
pn
for pn in json_schema.get("required", [])
if not is_credentials_field_name(pn)
],
},
credentials_input_name=next(
iter(trigger_block.input_schema.get_credentials_fields()), None
),
)
@staticmethod
def _generate_schema(
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
@@ -238,6 +272,14 @@ class BaseGraph(BaseDbModel):
}
class GraphTriggerInfo(BaseModel):
provider: ProviderName
config_schema: dict[str, Any] = Field(
description="Input schema for the trigger block"
)
credentials_input_name: Optional[str]
class Graph(BaseGraph):
sub_graphs: list[BaseGraph] = [] # Flattened sub-graphs
@@ -342,6 +384,8 @@ class GraphModel(Graph):
user_id: str
nodes: list[NodeModel] = [] # type: ignore
created_at: datetime
@property
def starting_nodes(self) -> list[NodeModel]:
outbound_nodes = {link.sink_id for link in self.links}
@@ -354,6 +398,10 @@ class GraphModel(Graph):
if node.id not in outbound_nodes or node.id in input_nodes
]
@property
def webhook_input_node(self) -> NodeModel | None: # type: ignore
return cast(NodeModel, super().webhook_input_node)
def meta(self) -> "GraphMeta":
"""
Returns a GraphMeta object with metadata about the graph.
@@ -414,7 +462,7 @@ class GraphModel(Graph):
def validate_graph(
self,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
):
"""
Validate graph structure and raise `ValueError` on issues.
@@ -428,7 +476,7 @@ class GraphModel(Graph):
def _validate_graph(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> None:
errors = GraphModel._validate_graph_get_errors(
graph, for_run, nodes_input_masks
@@ -442,7 +490,7 @@ class GraphModel(Graph):
def validate_graph_get_errors(
self,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -464,7 +512,7 @@ class GraphModel(Graph):
def _validate_graph_get_errors(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -655,9 +703,12 @@ class GraphModel(Graph):
version=graph.version,
forked_from_id=graph.forkedFromId,
forked_from_version=graph.forkedFromVersion,
created_at=graph.createdAt,
is_active=graph.isActive,
name=graph.name or "",
description=graph.description or "",
instructions=graph.instructions,
recommended_schedule_cron=graph.recommendedScheduleCron,
nodes=[NodeModel.from_db(node, for_export) for node in graph.Nodes or []],
links=list(
{
@@ -696,6 +747,13 @@ class GraphMeta(Graph):
return GraphMeta(**graph.model_dump())
class GraphsPaginated(BaseModel):
"""Response schema for paginated graphs."""
graphs: list[GraphMeta]
pagination: Pagination
# --------------------- CRUD functions --------------------- #
@@ -724,31 +782,42 @@ async def set_node_webhook(node_id: str, webhook_id: str | None) -> NodeModel:
return NodeModel.from_db(node)
async def list_graphs(
async def list_graphs_paginated(
user_id: str,
page: int = 1,
page_size: int = 25,
filter_by: Literal["active"] | None = "active",
) -> list[GraphMeta]:
) -> GraphsPaginated:
"""
Retrieves graph metadata objects.
Default behaviour is to get all currently active graphs.
Retrieves paginated graph metadata objects.
Args:
user_id: The ID of the user that owns the graphs.
page: Page number (1-based).
page_size: Number of graphs per page.
filter_by: An optional filter to select only active graphs.
user_id: The ID of the user that owns the graph.
Returns:
list[GraphMeta]: A list of objects representing the retrieved graphs.
GraphsPaginated: Paginated list of graph metadata.
"""
where_clause: AgentGraphWhereInput = {"userId": user_id}
if filter_by == "active":
where_clause["isActive"] = True
# Get total count
total_count = await AgentGraph.prisma().count(where=where_clause)
total_pages = (total_count + page_size - 1) // page_size
# Get paginated results
offset = (page - 1) * page_size
graphs = await AgentGraph.prisma().find_many(
where=where_clause,
distinct=["id"],
order={"version": "desc"},
include=AGENT_GRAPH_INCLUDE,
skip=offset,
take=page_size,
)
graph_models: list[GraphMeta] = []
@@ -762,7 +831,15 @@ async def list_graphs(
logger.error(f"Error processing graph {graph.id}: {e}")
continue
return graph_models
return GraphsPaginated(
graphs=graph_models,
pagination=Pagination(
total_items=total_count,
total_pages=total_pages,
current_page=page,
page_size=page_size,
),
)
async def get_graph_metadata(graph_id: str, version: int | None = None) -> Graph | None:
@@ -1045,6 +1122,7 @@ async def __create_graph(tx, graph: Graph, user_id: str):
version=graph.version,
name=graph.name,
description=graph.description,
recommendedScheduleCron=graph.recommended_schedule_cron,
isActive=graph.is_active,
userId=user_id,
forkedFromId=graph.forked_from_id,
@@ -1103,6 +1181,7 @@ def make_graph_model(creatable_graph: Graph, user_id: str) -> GraphModel:
return GraphModel(
**creatable_graph.model_dump(exclude={"nodes"}),
user_id=user_id,
created_at=datetime.now(tz=timezone.utc),
nodes=[
NodeModel(
**creatable_node.model_dump(),

View File

@@ -59,9 +59,15 @@ def graph_execution_include(
}
AGENT_PRESET_INCLUDE: prisma.types.AgentPresetInclude = {
"InputPresets": True,
"Webhook": True,
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE},
"AgentPresets": {"include": {"InputPresets": True}},
"AgentPresets": {"include": AGENT_PRESET_INCLUDE},
}

View File

@@ -0,0 +1,59 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field
class ConditionOperator(str, Enum):
AND = "and"
OR = "or"
class OptionalBlockConditions(BaseModel):
"""Conditions that determine when a block should be skipped"""
on_missing_credentials: bool = Field(
default=False,
description="Skip block if any required credentials are missing",
)
check_skip_input: bool = Field(
default=True,
description="Check the standard 'skip' input to control skip behavior",
)
kv_flag: Optional[str] = Field(
default=None,
description="Key-value store flag name that controls skip behavior",
)
operator: ConditionOperator = Field(
default=ConditionOperator.OR,
description="Logical operator for combining conditions (AND/OR)",
)
class OptionalBlockConfig(BaseModel):
"""Configuration for making a block optional/skippable"""
enabled: bool = Field(
default=False,
description="Whether this block can be optionally skipped",
)
conditions: OptionalBlockConditions = Field(
default_factory=OptionalBlockConditions,
description="Conditions that trigger skipping",
)
skip_message: Optional[str] = Field(
default=None,
description="Custom message to log when block is skipped",
)
def get_optional_config(node_metadata: dict) -> Optional[OptionalBlockConfig]:
"""Extract optional block configuration from node metadata"""
if "optional" not in node_metadata:
return None
optional_data = node_metadata.get("optional", {})
if not optional_data:
return None
return OptionalBlockConfig(**optional_data)
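Appended here purely for illustration, a few calls showing how the helper behaves on node metadata shaped like the schema above (the metadata values are made up):

metadata = {
    "optional": {
        "enabled": True,
        "conditions": {"on_missing_credentials": True, "operator": "and"},
        "skip_message": "Skipped: no credentials configured",
    }
}

config = get_optional_config(metadata)
assert config is not None and config.enabled
assert config.conditions.on_missing_credentials
assert config.conditions.operator is ConditionOperator.AND

# A missing key or an empty dict both yield no config
assert get_optional_config({}) is None
assert get_optional_config({"optional": {}}) is None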

View File

@@ -1,8 +1,7 @@
import logging
import os
from functools import cache
from autogpt_libs.utils.cache import thread_cached
from autogpt_libs.utils.cache import cached, thread_cached
from dotenv import load_dotenv
from redis import Redis
from redis.asyncio import Redis as AsyncRedis
@@ -13,7 +12,7 @@ load_dotenv()
HOST = os.getenv("REDIS_HOST", "localhost")
PORT = int(os.getenv("REDIS_PORT", "6379"))
PASSWORD = os.getenv("REDIS_PASSWORD", "password")
PASSWORD = os.getenv("REDIS_PASSWORD", None)
logger = logging.getLogger(__name__)
@@ -35,7 +34,7 @@ def disconnect():
get_redis().close()
-@cache
+@cached()
def get_redis() -> Redis:
return connect()
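
Swapping `functools.cache` for the library's `cached()` presumably trades an unbounded, process-lifetime cache for one with size and TTL bounds plus per-key invalidation; `cached(maxsize=..., ttl_seconds=...)` and `.cache_delete(...)` both appear later in this diff. A toy approximation of those semantics, not the real autogpt_libs implementation:

```python
import time
from collections import OrderedDict
from functools import wraps

# Toy TTL+LRU cache approximating cached(maxsize=..., ttl_seconds=...);
# positional-args-only for brevity, and not the real library code.
def cached(maxsize: int = 128, ttl_seconds: float | None = None):
    def decorator(func):
        store: OrderedDict = OrderedDict()  # key -> (expires_at, value)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and (hit[0] is None or hit[0] > now):
                store.move_to_end(args)      # refresh LRU recency
                return hit[1]
            value = func(*args)
            expires_at = now + ttl_seconds if ttl_seconds else None
            store[args] = (expires_at, value)
            if len(store) > maxsize:
                store.popitem(last=False)    # evict least recently used
            return value

        # Per-key invalidation, analogous to cache_delete() used below.
        wrapper.cache_delete = lambda *args: store.pop(args, None)
        return wrapper

    return decorator
```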

View File

@@ -7,6 +7,7 @@ from typing import Optional, cast
from urllib.parse import quote_plus
from autogpt_libs.auth.models import DEFAULT_USER_ID
from autogpt_libs.utils.cache import cached
from fastapi import HTTPException
from prisma.enums import NotificationType
from prisma.models import User as PrismaUser
@@ -23,7 +24,11 @@ from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
# Cache decorator alias for consistent user lookup caching
cache_user_lookup = cached(maxsize=1000, ttl_seconds=300)
@cache_user_lookup
async def get_or_create_user(user_data: dict) -> User:
try:
user_id = user_data.get("sub")
@@ -49,6 +54,7 @@ async def get_or_create_user(user_data: dict) -> User:
raise DatabaseError(f"Failed to get or create user {user_data}: {e}") from e
@cache_user_lookup
async def get_user_by_id(user_id: str) -> User:
user = await prisma.user.find_unique(where={"id": user_id})
if not user:
@@ -64,6 +70,7 @@ async def get_user_email_by_id(user_id: str) -> Optional[str]:
raise DatabaseError(f"Failed to get user email for user {user_id}: {e}") from e
@cache_user_lookup
async def get_user_by_email(email: str) -> Optional[User]:
try:
user = await prisma.user.find_unique(where={"email": email})
@@ -74,7 +81,17 @@ async def get_user_by_email(email: str) -> Optional[User]:
async def update_user_email(user_id: str, email: str):
try:
# Get old email first for cache invalidation
old_user = await prisma.user.find_unique(where={"id": user_id})
old_email = old_user.email if old_user else None
await prisma.user.update(where={"id": user_id}, data={"email": email})
# Selectively invalidate only the specific user entries
get_user_by_id.cache_delete(user_id)
if old_email:
get_user_by_email.cache_delete(old_email)
get_user_by_email.cache_delete(email)
except Exception as e:
raise DatabaseError(
f"Failed to update user email for user {user_id}: {e}"
@@ -114,6 +131,8 @@ async def update_user_integrations(user_id: str, data: UserIntegrations):
where={"id": user_id},
data={"integrations": encrypted_data},
)
# Invalidate cache for this user
get_user_by_id.cache_delete(user_id)
async def migrate_and_encrypt_user_integrations():
@@ -285,6 +304,10 @@ async def update_user_notification_preference(
)
if not user:
raise ValueError(f"User not found with ID: {user_id}")
# Invalidate cache for this user since notification preferences are part of user data
get_user_by_id.cache_delete(user_id)
preferences: dict[NotificationType, bool] = {
NotificationType.AGENT_RUN: user.notifyOnAgentRun or True,
NotificationType.ZERO_BALANCE: user.notifyOnZeroBalance or True,
@@ -323,6 +346,8 @@ async def set_user_email_verification(user_id: str, verified: bool) -> None:
where={"id": user_id},
data={"emailVerified": verified},
)
# Invalidate cache for this user
get_user_by_id.cache_delete(user_id)
except Exception as e:
raise DatabaseError(
f"Failed to set email verification status for user {user_id}: {e}"
@@ -407,6 +432,10 @@ async def update_user_timezone(user_id: str, timezone: str) -> User:
)
if not user:
raise ValueError(f"User not found with ID: {user_id}")
# Invalidate cache for this user
get_user_by_id.cache_delete(user_id)
return User.from_db(user)
except Exception as e:
raise DatabaseError(f"Failed to update timezone for user {user_id}: {e}") from e

View File

@@ -107,7 +107,7 @@ async def generate_activity_status_for_execution(
# Check if we have OpenAI API key
try:
settings = Settings()
-if not settings.secrets.openai_api_key:
+if not settings.secrets.openai_internal_api_key:
logger.debug(
"OpenAI API key not configured, skipping activity status generation"
)
@@ -115,7 +115,7 @@ async def generate_activity_status_for_execution(
# Get all node executions for this graph execution
node_executions = await db_client.get_node_executions(
-graph_exec_id=graph_exec_id, include_exec_data=True
+graph_exec_id, include_exec_data=True
)
# Get graph metadata and full graph structure for name, description, and links
@@ -187,7 +187,7 @@ async def generate_activity_status_for_execution(
credentials = APIKeyCredentials(
id="openai",
provider="openai",
-api_key=SecretStr(settings.secrets.openai_api_key),
+api_key=SecretStr(settings.secrets.openai_internal_api_key),
title="System OpenAI",
)
@@ -423,7 +423,6 @@ async def _call_llm_direct(
credentials=credentials,
llm_model=LlmModel.GPT4O_MINI,
prompt=prompt,
-json_format=False,
max_tokens=150,
compress_prompt_to_fit=True,
)

View File

@@ -468,7 +468,7 @@ class TestGenerateActivityStatusForExecution:
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_settings.return_value.secrets.openai_internal_api_key = "test_key"
mock_llm.return_value = (
"I analyzed your data and provided the requested insights."
)
@@ -520,7 +520,7 @@ class TestGenerateActivityStatusForExecution:
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_settings.return_value.secrets.openai_api_key = ""
mock_settings.return_value.secrets.openai_internal_api_key = ""
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
@@ -546,7 +546,7 @@ class TestGenerateActivityStatusForExecution:
"backend.executor.activity_status_generator.is_feature_enabled",
return_value=True,
):
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_settings.return_value.secrets.openai_internal_api_key = "test_key"
result = await generate_activity_status_for_execution(
graph_exec_id="test_exec",
@@ -581,7 +581,7 @@ class TestGenerateActivityStatusForExecution:
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_settings.return_value.secrets.openai_internal_api_key = "test_key"
mock_llm.return_value = "Agent completed execution."
result = await generate_activity_status_for_execution(
@@ -633,7 +633,7 @@ class TestIntegration:
):
mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id)
mock_settings.return_value.secrets.openai_api_key = "test_key"
mock_settings.return_value.secrets.openai_internal_api_key = "test_key"
mock_response = LLMResponse(
raw_response={},

View File

@@ -4,13 +4,12 @@ from typing import Callable, Concatenate, ParamSpec, TypeVar, cast
from backend.data import db
from backend.data.credit import UsageTransactionMetadata, get_user_credit_model
from backend.data.execution import (
add_input_to_node_execution,
create_graph_execution,
create_node_execution,
get_block_error_stats,
get_execution_kv_data,
get_graph_execution_meta,
get_graph_executions,
get_latest_node_execution,
get_node_execution,
get_node_executions,
set_execution_kv_data,
@@ -18,6 +17,7 @@ from backend.data.execution import (
update_graph_execution_stats,
update_node_execution_status,
update_node_execution_status_batch,
upsert_execution_input,
upsert_execution_output,
)
from backend.data.generate_data import get_user_execution_summary_data
@@ -85,6 +85,16 @@ class DatabaseManager(AppService):
async def health_check(self) -> str:
if not db.is_connected():
raise UnhealthyServiceError("Database is not connected")
try:
# Test actual database connectivity by executing a simple query
# This will fail if Prisma query engine is not responding
result = await db.query_raw_with_schema("SELECT 1 as health_check")
if not result or result[0].get("health_check") != 1:
raise UnhealthyServiceError("Database query test failed")
except Exception as e:
raise UnhealthyServiceError(f"Database health check failed: {e}")
return await super().health_check()
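
The probe above distinguishes "connected" from "able to answer queries" by round-tripping `SELECT 1` through the query engine. A standalone sketch of the same probe shape, using asyncpg as a stand-in driver (an assumption; the service itself goes through Prisma's `query_raw_with_schema`):

```python
# Minimal sketch of the "SELECT 1" liveness probe added above, with
# asyncpg standing in for the Prisma query engine.
import asyncio
import asyncpg  # third-party; pip install asyncpg

async def db_health_check(dsn: str) -> None:
    conn = await asyncpg.connect(dsn)
    try:
        row = await conn.fetchrow("SELECT 1 AS health_check")
        if row is None or row["health_check"] != 1:
            raise RuntimeError("Database query test failed")
    finally:
        await conn.close()

# asyncio.run(db_health_check("postgresql://user:pass@localhost/db"))
```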
@classmethod
@@ -105,13 +115,13 @@ class DatabaseManager(AppService):
create_graph_execution = _(create_graph_execution)
get_node_execution = _(get_node_execution)
get_node_executions = _(get_node_executions)
get_latest_node_execution = _(get_latest_node_execution)
update_node_execution_status = _(update_node_execution_status)
update_node_execution_status_batch = _(update_node_execution_status_batch)
update_graph_execution_start_time = _(update_graph_execution_start_time)
update_graph_execution_stats = _(update_graph_execution_stats)
upsert_execution_input = _(upsert_execution_input)
upsert_execution_output = _(upsert_execution_output)
create_node_execution = _(create_node_execution)
add_input_to_node_execution = _(add_input_to_node_execution)
get_execution_kv_data = _(get_execution_kv_data)
set_execution_kv_data = _(set_execution_kv_data)
get_block_error_stats = _(get_block_error_stats)
@@ -171,12 +181,10 @@ class DatabaseManagerClient(AppServiceClient):
get_graph_executions = _(d.get_graph_executions)
get_graph_execution_meta = _(d.get_graph_execution_meta)
get_node_executions = _(d.get_node_executions)
create_node_execution = _(d.create_node_execution)
update_node_execution_status = _(d.update_node_execution_status)
update_graph_execution_start_time = _(d.update_graph_execution_start_time)
update_graph_execution_stats = _(d.update_graph_execution_stats)
upsert_execution_output = _(d.upsert_execution_output)
add_input_to_node_execution = _(d.add_input_to_node_execution)
# Graphs
get_graph_metadata = _(d.get_graph_metadata)
@@ -191,6 +199,14 @@ class DatabaseManagerClient(AppServiceClient):
# User Emails
get_user_email_by_id = _(d.get_user_email_by_id)
# Library
list_library_agents = _(d.list_library_agents)
add_store_agent_to_library = _(d.add_store_agent_to_library)
# Store
get_store_agents = _(d.get_store_agents)
get_store_agent_details = _(d.get_store_agent_details)
class DatabaseManagerAsyncClient(AppServiceClient):
d = DatabaseManager
@@ -201,12 +217,16 @@ class DatabaseManagerAsyncClient(AppServiceClient):
create_graph_execution = d.create_graph_execution
get_connected_output_nodes = d.get_connected_output_nodes
get_latest_node_execution = d.get_latest_node_execution
get_graph = d.get_graph
get_graph_metadata = d.get_graph_metadata
get_graph_execution_meta = d.get_graph_execution_meta
get_node = d.get_node
get_node_execution = d.get_node_execution
get_node_executions = d.get_node_executions
get_user_integrations = d.get_user_integrations
upsert_execution_input = d.upsert_execution_input
upsert_execution_output = d.upsert_execution_output
update_graph_execution_stats = d.update_graph_execution_stats
update_node_execution_status = d.update_node_execution_status
update_node_execution_status_batch = d.update_node_execution_status_batch

View File

@@ -1,154 +0,0 @@
import logging
import threading
from collections import OrderedDict
from functools import wraps
from typing import TYPE_CHECKING, Any, Optional
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.data.model import GraphExecutionStats, NodeExecutionStats
if TYPE_CHECKING:
from backend.executor import DatabaseManagerClient
logger = logging.getLogger(__name__)
def with_lock(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
with self._lock:
return func(self, *args, **kwargs)
return wrapper
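
For reference, the removed `with_lock` helper is the standard method-level RLock decorator; because the lock is re-entrant, locked methods can safely call each other. A standalone sketch of the pattern:

```python
import threading
from functools import wraps

# Same pattern as the deleted with_lock: every decorated method runs
# under the instance's re-entrant lock.
def with_lock(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        with self._lock:
            return func(self, *args, **kwargs)
    return wrapper

class Counter:
    def __init__(self) -> None:
        self._lock = threading.RLock()
        self._value = 0

    @with_lock
    def increment(self) -> int:
        self._value += 1
        return self._value

    @with_lock
    def add(self, n: int) -> int:
        for _ in range(n):
            self.increment()  # re-entrant: safe while already locked
        return self._value
```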
class ExecutionCache:
def __init__(self, graph_exec_id: str, db_client: "DatabaseManagerClient"):
self._lock = threading.RLock()
self._graph_exec_id = graph_exec_id
self._graph_stats: GraphExecutionStats = GraphExecutionStats()
self._node_executions: OrderedDict[str, NodeExecutionResult] = OrderedDict()
for execution in db_client.get_node_executions(self._graph_exec_id):
self._node_executions[execution.node_exec_id] = execution
@with_lock
def get_node_execution(self, node_exec_id: str) -> Optional[NodeExecutionResult]:
execution = self._node_executions.get(node_exec_id)
return execution.model_copy(deep=True) if execution else None
@with_lock
def get_latest_node_execution(self, node_id: str) -> Optional[NodeExecutionResult]:
for execution in reversed(self._node_executions.values()):
if (
execution.node_id == node_id
and execution.status != ExecutionStatus.INCOMPLETE
):
return execution.model_copy(deep=True)
return None
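
The reverse scan works because `OrderedDict` preserves insertion order, so the last match for a node is its newest execution:

```python
from collections import OrderedDict

# Why get_latest_node_execution iterates in reverse: insertion order is
# creation order, so the last matching entry is the most recent one.
runs = OrderedDict()
runs["e1"] = {"node_id": "n1", "status": "COMPLETED"}
runs["e2"] = {"node_id": "n2", "status": "RUNNING"}
runs["e3"] = {"node_id": "n1", "status": "RUNNING"}

latest_n1 = next(
    (r for r in reversed(runs.values()) if r["node_id"] == "n1"), None
)
assert latest_n1 is not None and latest_n1["status"] == "RUNNING"  # e3 wins
```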
@with_lock
def get_node_executions(
self,
*,
statuses: Optional[list] = None,
block_ids: Optional[list] = None,
node_id: Optional[str] = None,
):
results = []
for execution in self._node_executions.values():
if statuses and execution.status not in statuses:
continue
if block_ids and execution.block_id not in block_ids:
continue
if node_id and execution.node_id != node_id:
continue
results.append(execution.model_copy(deep=True))
return results
@with_lock
def update_node_execution_status(
self,
exec_id: str,
status: ExecutionStatus,
execution_data: Optional[dict] = None,
stats: Optional[dict] = None,
):
if exec_id not in self._node_executions:
raise RuntimeError(f"Execution {exec_id} not found in cache")
execution = self._node_executions[exec_id]
execution.status = status
if execution_data:
execution.input_data.update(execution_data)
if stats:
execution.stats = execution.stats or NodeExecutionStats()
current_stats = execution.stats.model_dump()
current_stats.update(stats)
execution.stats = NodeExecutionStats.model_validate(current_stats)
@with_lock
def upsert_execution_output(
self, node_exec_id: str, output_name: str, output_data: Any
) -> NodeExecutionResult:
if node_exec_id not in self._node_executions:
raise RuntimeError(f"Execution {node_exec_id} not found in cache")
execution = self._node_executions[node_exec_id]
if output_name not in execution.output_data:
execution.output_data[output_name] = []
execution.output_data[output_name].append(output_data)
return execution
@with_lock
def update_graph_stats(
self, status: Optional[ExecutionStatus] = None, stats: Optional[dict] = None
):
if status is not None:
pass  # status is persisted by the DB layer; the cache tracks stats only
if stats is not None:
current_stats = self._graph_stats.model_dump()
current_stats.update(stats)
self._graph_stats = GraphExecutionStats.model_validate(current_stats)
@with_lock
def update_graph_start_time(self):
"""Update graph start time (handled by database persistence)."""
pass
@with_lock
def find_incomplete_execution_for_input(
self, node_id: str, input_name: str
) -> tuple[str, NodeExecutionResult] | None:
for exec_id, execution in self._node_executions.items():
if (
execution.node_id == node_id
and execution.status == ExecutionStatus.INCOMPLETE
and input_name not in execution.input_data
):
return exec_id, execution
return None
@with_lock
def add_node_execution(
self, node_exec_id: str, execution: NodeExecutionResult
) -> None:
self._node_executions[node_exec_id] = execution
@with_lock
def update_execution_input(
self, exec_id: str, input_name: str, input_data: Any
) -> dict:
if exec_id not in self._node_executions:
raise RuntimeError(f"Execution {exec_id} not found in cache")
execution = self._node_executions[exec_id]
execution.input_data[input_name] = input_data
return execution.input_data.copy()
def finalize(self) -> None:
with self._lock:
self._node_executions.clear()
self._graph_stats = GraphExecutionStats()

View File

@@ -1,355 +0,0 @@
"""Test execution creation with proper ID generation and persistence."""
import asyncio
import threading
import uuid
from datetime import datetime
import pytest
from backend.data.execution import ExecutionStatus
from backend.executor.execution_data import ExecutionDataClient
@pytest.fixture
def event_loop():
loop = asyncio.new_event_loop()
yield loop
loop.close()
@pytest.fixture
async def execution_client_with_mock_db(event_loop):
"""Create an ExecutionDataClient with proper database records."""
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock, patch
from prisma.models import AgentGraph, AgentGraphExecution, User
from backend.data.execution import ExecutionStatus, GraphExecutionMeta
# Create test database records to satisfy foreign key constraints
try:
await User.prisma().create(
data={
"id": "test_user_123",
"email": "test@example.com",
"name": "Test User",
}
)
await AgentGraph.prisma().create(
data={
"id": "test_graph_456",
"version": 1,
"userId": "test_user_123",
"name": "Test Graph",
"description": "Test graph for execution tests",
}
)
from prisma.enums import AgentExecutionStatus
await AgentGraphExecution.prisma().create(
data={
"id": "test_graph_exec_id",
"userId": "test_user_123",
"agentGraphId": "test_graph_456",
"agentGraphVersion": 1,
"executionStatus": AgentExecutionStatus.RUNNING,
}
)
except Exception:
# Records might already exist; that's fine
pass
# Mock the graph execution metadata - align with assertions below
mock_graph_meta = GraphExecutionMeta(
id="test_graph_exec_id",
user_id="test_user_123",
graph_id="test_graph_456",
graph_version=1,
status=ExecutionStatus.RUNNING,
started_at=datetime.now(timezone.utc),
ended_at=datetime.now(timezone.utc),
stats=None,
)
# Create client with ThreadPoolExecutor and graph metadata (constructed inside patch)
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=1)
# Storage for tracking created executions
created_executions = []
async def mock_create_node_execution(
node_exec_id, node_id, graph_exec_id, input_name, input_data
):
"""Mock execution creation that records what was created."""
created_executions.append(
{
"node_exec_id": node_exec_id,
"node_id": node_id,
"graph_exec_id": graph_exec_id,
"input_name": input_name,
"input_data": input_data,
}
)
return node_exec_id
def sync_mock_create_node_execution(
node_exec_id, node_id, graph_exec_id, input_name, input_data
):
"""Mock sync execution creation that records what was created."""
created_executions.append(
{
"node_exec_id": node_exec_id,
"node_id": node_id,
"graph_exec_id": graph_exec_id,
"input_name": input_name,
"input_data": input_data,
}
)
return node_exec_id
# Prepare mock async and sync DB clients
async_mock_client = AsyncMock()
async_mock_client.create_node_execution = mock_create_node_execution
sync_mock_client = MagicMock()
sync_mock_client.create_node_execution = sync_mock_create_node_execution
# Mock graph execution for return values
from backend.data.execution import GraphExecutionMeta
mock_graph_update = GraphExecutionMeta(
id="test_graph_exec_id",
user_id="test_user_123",
graph_id="test_graph_456",
graph_version=1,
status=ExecutionStatus.RUNNING,
started_at=datetime.now(timezone.utc),
ended_at=datetime.now(timezone.utc),
stats=None,
)
# No-ops for other sync methods used by the client during tests
sync_mock_client.add_input_to_node_execution.side_effect = lambda **kwargs: None
sync_mock_client.update_node_execution_status.side_effect = (
lambda *args, **kwargs: None
)
sync_mock_client.upsert_execution_output.side_effect = lambda **kwargs: None
sync_mock_client.update_graph_execution_stats.side_effect = (
lambda *args, **kwargs: mock_graph_update
)
sync_mock_client.update_graph_execution_start_time.side_effect = (
lambda *args, **kwargs: mock_graph_update
)
thread = threading.Thread(target=event_loop.run_forever, daemon=True)
thread.start()
with patch(
"backend.executor.execution_data.get_database_manager_async_client",
return_value=async_mock_client,
), patch(
"backend.executor.execution_data.get_database_manager_client",
return_value=sync_mock_client,
), patch(
"backend.executor.execution_data.get_execution_event_bus"
), patch(
"backend.executor.execution_data.non_blocking_persist", lambda func: func
):
# Now construct the client under the patch so it captures the mocked clients
client = ExecutionDataClient(executor, "test_graph_exec_id", mock_graph_meta)
# Store the mocks for the test to access if needed
setattr(client, "_test_async_client", async_mock_client)
setattr(client, "_test_sync_client", sync_mock_client)
setattr(client, "_created_executions", created_executions)
yield client
# Cleanup test database records
try:
await AgentGraphExecution.prisma().delete_many(
where={"id": "test_graph_exec_id"}
)
await AgentGraph.prisma().delete_many(where={"id": "test_graph_456"})
await User.prisma().delete_many(where={"id": "test_user_123"})
except Exception:
# Cleanup may fail if records don't exist
pass
# Cleanup
event_loop.call_soon_threadsafe(event_loop.stop)
thread.join(timeout=1)
class TestExecutionCreation:
"""Test execution creation with proper ID generation and persistence."""
async def test_execution_creation_with_valid_ids(
self, execution_client_with_mock_db
):
"""Test that execution creation generates and persists valid IDs."""
client = execution_client_with_mock_db
node_id = "test_node_789"
input_name = "test_input"
input_data = "test_value"
block_id = "test_block_abc"
# This should trigger execution creation since cache is empty
exec_id, input_dict = client.upsert_execution_input(
node_id=node_id,
input_name=input_name,
input_data=input_data,
block_id=block_id,
)
# Verify execution ID is valid UUID
try:
uuid.UUID(exec_id)
except ValueError:
pytest.fail(f"Generated execution ID '{exec_id}' is not a valid UUID")
# Verify execution was created in cache with complete data
assert exec_id in client._cache._node_executions
cached_execution = client._cache._node_executions[exec_id]
# Check all required fields have valid values
assert cached_execution.user_id == "test_user_123"
assert cached_execution.graph_id == "test_graph_456"
assert cached_execution.graph_version == 1
assert cached_execution.graph_exec_id == "test_graph_exec_id"
assert cached_execution.node_exec_id == exec_id
assert cached_execution.node_id == node_id
assert cached_execution.block_id == block_id
assert cached_execution.status == ExecutionStatus.INCOMPLETE
assert cached_execution.input_data == {input_name: input_data}
assert isinstance(cached_execution.add_time, datetime)
# Verify execution was persisted to database with our generated ID
created_executions = getattr(client, "_created_executions", [])
assert len(created_executions) == 1
created = created_executions[0]
assert created["node_exec_id"] == exec_id # Our generated ID was used
assert created["node_id"] == node_id
assert created["graph_exec_id"] == "test_graph_exec_id"
assert created["input_name"] == input_name
assert created["input_data"] == input_data
# Verify input dict returned correctly
assert input_dict == {input_name: input_data}
async def test_execution_reuse_vs_creation(self, execution_client_with_mock_db):
"""Test that execution reuse works and creation only happens when needed."""
client = execution_client_with_mock_db
node_id = "reuse_test_node"
block_id = "reuse_test_block"
# Create first execution
exec_id_1, input_dict_1 = client.upsert_execution_input(
node_id=node_id,
input_name="input_1",
input_data="value_1",
block_id=block_id,
)
# This should reuse the existing INCOMPLETE execution
exec_id_2, input_dict_2 = client.upsert_execution_input(
node_id=node_id,
input_name="input_2",
input_data="value_2",
block_id=block_id,
)
# Should reuse the same execution
assert exec_id_1 == exec_id_2
assert input_dict_2 == {"input_1": "value_1", "input_2": "value_2"}
# Only one execution should be created in database
created_executions = getattr(client, "_created_executions", [])
assert len(created_executions) == 1
# Verify cache has the merged inputs
cached_execution = client._cache._node_executions[exec_id_1]
assert cached_execution.input_data == {
"input_1": "value_1",
"input_2": "value_2",
}
# Now complete the execution and try to add another input
client.update_node_status_and_publish(
exec_id=exec_id_1, status=ExecutionStatus.COMPLETED
)
# Verify the execution status was actually updated in the cache
updated_execution = client._cache._node_executions[exec_id_1]
assert (
updated_execution.status == ExecutionStatus.COMPLETED
), f"Expected COMPLETED but got {updated_execution.status}"
# This should create a NEW execution since the first is no longer INCOMPLETE
exec_id_3, input_dict_3 = client.upsert_execution_input(
node_id=node_id,
input_name="input_3",
input_data="value_3",
block_id=block_id,
)
# Should be a different execution
assert exec_id_3 != exec_id_1
assert input_dict_3 == {"input_3": "value_3"}
# Verify cache behavior: should have two different executions in cache now
cached_executions = client._cache._node_executions
assert len(cached_executions) == 2
assert exec_id_1 in cached_executions
assert exec_id_3 in cached_executions
# First execution should be COMPLETED
assert cached_executions[exec_id_1].status == ExecutionStatus.COMPLETED
# Third execution should be INCOMPLETE (newly created)
assert cached_executions[exec_id_3].status == ExecutionStatus.INCOMPLETE
async def test_multiple_nodes_get_different_execution_ids(
self, execution_client_with_mock_db
):
"""Test that different nodes get different execution IDs."""
client = execution_client_with_mock_db
# Create executions for different nodes
exec_id_a, _ = client.upsert_execution_input(
node_id="node_a",
input_name="test_input",
input_data="test_value",
block_id="block_a",
)
exec_id_b, _ = client.upsert_execution_input(
node_id="node_b",
input_name="test_input",
input_data="test_value",
block_id="block_b",
)
# Should be different executions with different IDs
assert exec_id_a != exec_id_b
# Both should be valid UUIDs
uuid.UUID(exec_id_a)
uuid.UUID(exec_id_b)
# Both should be in cache
cached_executions = client._cache._node_executions
assert len(cached_executions) == 2
assert exec_id_a in cached_executions
assert exec_id_b in cached_executions
# Both should have correct node IDs
assert cached_executions[exec_id_a].node_id == "node_a"
assert cached_executions[exec_id_b].node_id == "node_b"
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -1,338 +0,0 @@
import logging
import threading
import uuid
from concurrent.futures import Executor
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, Callable, ParamSpec, TypeVar, cast
from backend.data import redis_client as redis
from backend.data.credit import UsageTransactionMetadata
from backend.data.execution import (
ExecutionStatus,
GraphExecutionMeta,
NodeExecutionResult,
)
from backend.data.graph import Node
from backend.data.model import GraphExecutionStats
from backend.executor.execution_cache import ExecutionCache
from backend.util.clients import (
get_database_manager_async_client,
get_database_manager_client,
get_execution_event_bus,
)
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.executor import DatabaseManagerAsyncClient, DatabaseManagerClient
settings = Settings()
logger = logging.getLogger(__name__)
P = ParamSpec("P")
T = TypeVar("T")
def non_blocking_persist(func: Callable[P, T]) -> Callable[P, None]:
from functools import wraps
@wraps(func)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> None:
# First argument is always self for methods - access through cast for typing
self = cast("ExecutionDataClient", args[0])
future = self._executor.submit(func, *args, **kwargs)
self._pending_tasks.add(future)
return wrapper
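
`non_blocking_persist` turns each persistence method into fire-and-forget work on the instance's executor while tracking the `Future` for the later flush in `finalize_execution`. A self-contained sketch of that pattern (the names here are illustrative, not the deleted module's API):

```python
from concurrent.futures import Future, ThreadPoolExecutor

class Persister:
    def __init__(self) -> None:
        self._executor = ThreadPoolExecutor(max_workers=1)
        self._pending: set[Future] = set()

    def persist(self, record: dict) -> None:
        # Returns immediately; the write runs on the worker thread.
        self._pending.add(self._executor.submit(self._write, record))

    def _write(self, record: dict) -> None:
        ...  # a real implementation would hit the database here

    def flush(self, timeout: float = 30.0) -> None:
        # Surface any background failures before shutting down.
        errors = []
        for future in list(self._pending):
            try:
                future.result(timeout=timeout)
            except Exception as e:
                errors.append(e)
            finally:
                self._pending.discard(future)
        if errors:
            raise RuntimeError(f"{len(errors)} background writes failed")
```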
class ExecutionDataClient:
def __init__(
self, executor: Executor, graph_exec_id: str, graph_metadata: GraphExecutionMeta
):
self._executor = executor
self._graph_exec_id = graph_exec_id
self._cache = ExecutionCache(graph_exec_id, self.db_client_sync)
self._pending_tasks = set()
self._graph_metadata = graph_metadata
self.graph_lock = threading.RLock()
def finalize_execution(self, timeout: float = 30.0):
logger.info(f"Flushing db writes for execution {self._graph_exec_id}")
exceptions = []
# Wait for all pending database operations to complete
logger.debug(
f"Waiting for {len(self._pending_tasks)} pending database operations"
)
for future in list(self._pending_tasks):
try:
future.result(timeout=timeout)
except Exception as e:
logger.error(f"Background database operation failed: {e}")
exceptions.append(e)
finally:
self._pending_tasks.discard(future)
self._cache.finalize()
if exceptions:
logger.error(f"Background persistence failed with {len(exceptions)} errors")
raise RuntimeError(
f"Background persistence failed with {len(exceptions)} errors: {exceptions}"
)
@property
def db_client_async(self) -> "DatabaseManagerAsyncClient":
return get_database_manager_async_client()
@property
def db_client_sync(self) -> "DatabaseManagerClient":
return get_database_manager_client()
@property
def event_bus(self):
return get_execution_event_bus()
async def get_node(self, node_id: str) -> Node:
return await self.db_client_async.get_node(node_id)
def spend_credits(
self,
user_id: str,
cost: int,
metadata: UsageTransactionMetadata,
) -> int:
return self.db_client_sync.spend_credits(
user_id=user_id, cost=cost, metadata=metadata
)
def get_graph_execution_meta(
self, user_id: str, execution_id: str
) -> GraphExecutionMeta | None:
return self.db_client_sync.get_graph_execution_meta(
user_id=user_id, execution_id=execution_id
)
def get_graph_metadata(
self, graph_id: str, graph_version: int | None = None
) -> Any:
return self.db_client_sync.get_graph_metadata(graph_id, graph_version)
def get_credits(self, user_id: str) -> int:
return self.db_client_sync.get_credits(user_id)
def get_user_email_by_id(self, user_id: str) -> str | None:
return self.db_client_sync.get_user_email_by_id(user_id)
def get_latest_node_execution(self, node_id: str) -> NodeExecutionResult | None:
return self._cache.get_latest_node_execution(node_id)
def get_node_execution(self, node_exec_id: str) -> NodeExecutionResult | None:
return self._cache.get_node_execution(node_exec_id)
def get_node_executions(
self,
*,
node_id: str | None = None,
statuses: list[ExecutionStatus] | None = None,
block_ids: list[str] | None = None,
) -> list[NodeExecutionResult]:
return self._cache.get_node_executions(
statuses=statuses, block_ids=block_ids, node_id=node_id
)
def update_node_status_and_publish(
self,
exec_id: str,
status: ExecutionStatus,
execution_data: dict | None = None,
stats: dict[str, Any] | None = None,
):
self._cache.update_node_execution_status(exec_id, status, execution_data, stats)
self._persist_node_status_to_db(exec_id, status, execution_data, stats)
def upsert_execution_input(
self, node_id: str, input_name: str, input_data: Any, block_id: str
) -> tuple[str, dict]:
# Validate input parameters to prevent foreign key constraint errors
if not node_id or not isinstance(node_id, str):
raise ValueError(f"Invalid node_id: {node_id}")
if not self._graph_exec_id or not isinstance(self._graph_exec_id, str):
raise ValueError(f"Invalid graph_exec_id: {self._graph_exec_id}")
if not block_id or not isinstance(block_id, str):
raise ValueError(f"Invalid block_id: {block_id}")
# UPDATE: Try to find an existing incomplete execution for this node and input
if result := self._cache.find_incomplete_execution_for_input(
node_id, input_name
):
exec_id, _ = result
updated_input_data = self._cache.update_execution_input(
exec_id, input_name, input_data
)
self._persist_add_input_to_db(exec_id, input_name, input_data)
return exec_id, updated_input_data
# CREATE: No suitable execution found, create new one
node_exec_id = str(uuid.uuid4())
logger.debug(
f"Creating new execution {node_exec_id} for node {node_id} "
f"in graph execution {self._graph_exec_id}"
)
new_execution = NodeExecutionResult(
user_id=self._graph_metadata.user_id,
graph_id=self._graph_metadata.graph_id,
graph_version=self._graph_metadata.graph_version,
graph_exec_id=self._graph_exec_id,
node_exec_id=node_exec_id,
node_id=node_id,
block_id=block_id,
status=ExecutionStatus.INCOMPLETE,
input_data={input_name: input_data},
output_data={},
add_time=datetime.now(timezone.utc),
)
self._cache.add_node_execution(node_exec_id, new_execution)
self._persist_new_node_execution_to_db(
node_exec_id, node_id, input_name, input_data
)
return node_exec_id, {input_name: input_data}
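
`upsert_execution_input` is a find-or-create upsert keyed on `(node_id, input_name)`: reuse an INCOMPLETE execution that is still missing this input, otherwise mint a new UUID-keyed record. A compressed standalone model of the control flow, with plain dicts in place of the cache and DB:

```python
import uuid

# Find-or-create upsert, as in upsert_execution_input above.
executions: dict[str, dict] = {}  # exec_id -> {"node_id", "status", "inputs"}

def upsert_input(node_id: str, name: str, value: object) -> tuple[str, dict]:
    for exec_id, ex in executions.items():
        if (
            ex["node_id"] == node_id
            and ex["status"] == "INCOMPLETE"
            and name not in ex["inputs"]
        ):
            ex["inputs"][name] = value          # UPDATE path
            return exec_id, dict(ex["inputs"])
    exec_id = str(uuid.uuid4())                 # CREATE path
    executions[exec_id] = {
        "node_id": node_id,
        "status": "INCOMPLETE",
        "inputs": {name: value},
    }
    return exec_id, {name: value}

e1, _ = upsert_input("n1", "a", 1)
e2, merged = upsert_input("n1", "b", 2)
assert e1 == e2 and merged == {"a": 1, "b": 2}  # second input reuses e1
```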
def upsert_execution_output(
self, node_exec_id: str, output_name: str, output_data: Any
):
self._cache.upsert_execution_output(node_exec_id, output_name, output_data)
self._persist_execution_output_to_db(node_exec_id, output_name, output_data)
def update_graph_stats_and_publish(
self,
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
) -> None:
stats_dict = stats.model_dump() if stats else None
self._cache.update_graph_stats(status=status, stats=stats_dict)
self._persist_graph_stats_to_db(status=status, stats=stats)
def update_graph_start_time_and_publish(self) -> None:
self._cache.update_graph_start_time()
self._persist_graph_start_time_to_db()
@non_blocking_persist
def _persist_node_status_to_db(
self,
exec_id: str,
status: ExecutionStatus,
execution_data: dict | None = None,
stats: dict[str, Any] | None = None,
):
exec_update = self.db_client_sync.update_node_execution_status(
exec_id, status, execution_data, stats
)
self.event_bus.publish(exec_update)
@non_blocking_persist
def _persist_add_input_to_db(
self, node_exec_id: str, input_name: str, input_data: Any
):
self.db_client_sync.add_input_to_node_execution(
node_exec_id=node_exec_id,
input_name=input_name,
input_data=input_data,
)
@non_blocking_persist
def _persist_execution_output_to_db(
self, node_exec_id: str, output_name: str, output_data: Any
):
self.db_client_sync.upsert_execution_output(
node_exec_id=node_exec_id,
output_name=output_name,
output_data=output_data,
)
if exec_update := self.get_node_execution(node_exec_id):
self.event_bus.publish(exec_update)
@non_blocking_persist
def _persist_graph_stats_to_db(
self,
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
):
graph_update = self.db_client_sync.update_graph_execution_stats(
self._graph_exec_id, status, stats
)
if not graph_update:
raise RuntimeError(
f"Failed to update graph execution stats for {self._graph_exec_id}"
)
self.event_bus.publish(graph_update)
@non_blocking_persist
def _persist_graph_start_time_to_db(self):
graph_update = self.db_client_sync.update_graph_execution_start_time(
self._graph_exec_id
)
if not graph_update:
raise RuntimeError(
f"Failed to update graph execution start time for {self._graph_exec_id}"
)
self.event_bus.publish(graph_update)
async def generate_activity_status(
self,
graph_id: str,
graph_version: int,
execution_stats: GraphExecutionStats,
user_id: str,
execution_status: ExecutionStatus,
) -> str | None:
from backend.executor.activity_status_generator import (
generate_activity_status_for_execution,
)
return await generate_activity_status_for_execution(
graph_exec_id=self._graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_stats=execution_stats,
db_client=self.db_client_async,
user_id=user_id,
execution_status=execution_status,
)
@non_blocking_persist
def _send_execution_update(self, execution: NodeExecutionResult):
"""Send execution update to event bus."""
try:
self.event_bus.publish(execution)
except Exception as e:
logger.warning(f"Failed to send execution update: {e}")
@non_blocking_persist
def _persist_new_node_execution_to_db(
self, node_exec_id: str, node_id: str, input_name: str, input_data: Any
):
try:
self.db_client_sync.create_node_execution(
node_exec_id=node_exec_id,
node_id=node_id,
graph_exec_id=self._graph_exec_id,
input_name=input_name,
input_data=input_data,
)
except Exception as e:
logger.error(
f"Failed to create node execution {node_exec_id} for node {node_id} "
f"in graph execution {self._graph_exec_id}: {e}"
)
raise
def increment_execution_count(self, user_id: str) -> int:
r = redis.get_redis()
k = f"uec:{user_id}"
counter = cast(int, r.incr(k))
if counter == 1:
r.expire(k, settings.config.execution_counter_expiration_time)
return counter
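
`increment_execution_count` relies on the INCR-then-EXPIRE idiom: the first increment in a window creates the key and arms its TTL, so the count resets automatically once the window lapses. A minimal sketch of the same pattern; one known caveat of the idiom is that a crash between INCR and EXPIRE leaves a key with no TTL:

```python
from redis import Redis  # third-party; pip install redis

# The INCR + first-hit EXPIRE idiom used by increment_execution_count.
def count_in_window(r: Redis, user_id: str, window_seconds: int) -> int:
    key = f"uec:{user_id}"
    counter = int(r.incr(key))
    if counter == 1:
        r.expire(key, window_seconds)  # arm the TTL on first increment only
    return counter

# e.g. with a local Redis:
# r = Redis(host="localhost", port=6379)
# count_in_window(r, "user-123", 3600)
```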

View File

@@ -1,668 +0,0 @@
"""Test suite for ExecutionDataClient."""
import asyncio
import threading
import pytest
from backend.data.execution import ExecutionStatus
from backend.executor.execution_data import ExecutionDataClient
@pytest.fixture
def event_loop():
loop = asyncio.new_event_loop()
yield loop
loop.close()
@pytest.fixture
def execution_client(event_loop):
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock, patch
from backend.data.execution import ExecutionStatus, GraphExecutionMeta
mock_graph_meta = GraphExecutionMeta(
id="test_graph_exec_id",
user_id="test_user_id",
graph_id="test_graph_id",
graph_version=1,
status=ExecutionStatus.RUNNING,
started_at=datetime.now(timezone.utc),
ended_at=datetime.now(timezone.utc),
stats=None,
)
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=1)
# Mock all database operations to prevent connection attempts
async_mock_client = AsyncMock()
sync_mock_client = MagicMock()
# Mock all database methods to return None or empty results
sync_mock_client.get_node_executions.return_value = []
sync_mock_client.create_node_execution.return_value = None
sync_mock_client.add_input_to_node_execution.return_value = None
sync_mock_client.update_node_execution_status.return_value = None
sync_mock_client.upsert_execution_output.return_value = None
sync_mock_client.update_graph_execution_stats.return_value = mock_graph_meta
sync_mock_client.update_graph_execution_start_time.return_value = mock_graph_meta
# Mock event bus to prevent connection attempts
mock_event_bus = MagicMock()
mock_event_bus.publish.return_value = None
thread = threading.Thread(target=event_loop.run_forever, daemon=True)
thread.start()
with patch(
"backend.executor.execution_data.get_database_manager_async_client",
return_value=async_mock_client,
), patch(
"backend.executor.execution_data.get_database_manager_client",
return_value=sync_mock_client,
), patch(
"backend.executor.execution_data.get_execution_event_bus",
return_value=mock_event_bus,
), patch(
"backend.executor.execution_data.non_blocking_persist", lambda func: func
):
client = ExecutionDataClient(executor, "test_graph_exec_id", mock_graph_meta)
yield client
event_loop.call_soon_threadsafe(event_loop.stop)
thread.join(timeout=1)
class TestExecutionDataClient:
async def test_update_node_status_writes_to_cache_immediately(
self, execution_client
):
"""Test that node status updates are immediately visible in cache."""
# First create an execution to update
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
status = ExecutionStatus.RUNNING
execution_data = {"step": "processing"}
stats = {"duration": 5.2}
# Update status of existing execution
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=status,
execution_data=execution_data,
stats=stats,
)
# Verify immediate visibility in cache
cached_exec = execution_client.get_node_execution(node_exec_id)
assert cached_exec is not None
assert cached_exec.status == status
# execution_data should be merged with existing input_data, not replace it
expected_input_data = {"test_input": "test_value", "step": "processing"}
assert cached_exec.input_data == expected_input_data
def test_update_node_status_execution_not_found_raises_error(
self, execution_client
):
"""Test that updating non-existent execution raises error instead of creating it."""
non_existent_id = "does-not-exist"
with pytest.raises(
RuntimeError, match="Execution does-not-exist not found in cache"
):
execution_client.update_node_status_and_publish(
exec_id=non_existent_id, status=ExecutionStatus.COMPLETED
)
async def test_upsert_execution_output_writes_to_cache_immediately(
self, execution_client
):
"""Test that output updates are immediately visible in cache."""
# First create an execution
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
output_name = "result"
output_data = {"answer": 42, "confidence": 0.95}
# Update to RUNNING status first
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"input": "test"},
)
execution_client.upsert_execution_output(
node_exec_id=node_exec_id, output_name=output_name, output_data=output_data
)
# Check output through the node execution
cached_exec = execution_client.get_node_execution(node_exec_id)
assert cached_exec is not None
assert output_name in cached_exec.output_data
assert cached_exec.output_data[output_name] == [output_data]
async def test_get_node_execution_reads_from_cache(self, execution_client):
"""Test that get_node_execution returns cached data immediately."""
# First create an execution to work with
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
# Then update its status
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.COMPLETED,
execution_data={"result": "success"},
)
result = execution_client.get_node_execution(node_exec_id)
assert result is not None
assert result.status == ExecutionStatus.COMPLETED
# execution_data gets merged with existing input_data
expected_input_data = {"test_input": "test_value", "result": "success"}
assert result.input_data == expected_input_data
async def test_get_latest_node_execution_reads_from_cache(self, execution_client):
"""Test that get_latest_node_execution returns cached data."""
node_id = "node-1"
# First create an execution for this node
node_exec_id, _ = execution_client.upsert_execution_input(
node_id=node_id,
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
# Then update its status to make it non-INCOMPLETE (so it's returned by get_latest)
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"from": "cache"},
)
result = execution_client.get_latest_node_execution(node_id)
assert result is not None
assert result.status == ExecutionStatus.RUNNING
# execution_data gets merged with existing input_data
expected_input_data = {"test_input": "test_value", "from": "cache"}
assert result.input_data == expected_input_data
async def test_get_node_executions_sync_filters_correctly(self, execution_client):
# Create executions with different statuses
executions = [
(ExecutionStatus.RUNNING, "block-a"),
(ExecutionStatus.COMPLETED, "block-a"),
(ExecutionStatus.FAILED, "block-b"),
(ExecutionStatus.RUNNING, "block-b"),
]
exec_ids = []
for i, (status, block_id) in enumerate(executions):
# First create the execution
exec_id, _ = execution_client.upsert_execution_input(
node_id=f"node-{i}",
input_name="test_input",
input_data="test_value",
block_id=block_id,
)
exec_ids.append(exec_id)
# Then update its status and metadata
execution_client.update_node_status_and_publish(
exec_id=exec_id, status=status, execution_data={"block": block_id}
)
# Update cached execution with graph_exec_id and block_id for filtering
# Note: In a real implementation, these would be set during creation
# For test purposes, we'll skip this manual update since the filtering
# logic should work with the data as created
# Test status filtering
running_execs = execution_client.get_node_executions(
statuses=[ExecutionStatus.RUNNING]
)
assert len(running_execs) == 2
assert all(e.status == ExecutionStatus.RUNNING for e in running_execs)
# Test block_id filtering
block_a_execs = execution_client.get_node_executions(block_ids=["block-a"])
assert len(block_a_execs) == 2
assert all(e.block_id == "block-a" for e in block_a_execs)
# Test combined filtering
running_block_b = execution_client.get_node_executions(
statuses=[ExecutionStatus.RUNNING], block_ids=["block-b"]
)
assert len(running_block_b) == 1
assert running_block_b[0].status == ExecutionStatus.RUNNING
assert running_block_b[0].block_id == "block-b"
async def test_write_then_read_consistency(self, execution_client):
"""Test critical race condition scenario: immediate read after write."""
# First create an execution to work with
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="consistency-test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
# Write status
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"step": 1},
)
# Write output
execution_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name="intermediate",
output_data={"progress": 50},
)
# Update status again
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.COMPLETED,
execution_data={"step": 2},
)
# All changes should be immediately visible
cached_exec = execution_client.get_node_execution(node_exec_id)
assert cached_exec is not None
assert cached_exec.status == ExecutionStatus.COMPLETED
# execution_data gets merged with existing input_data - step 2 overwrites step 1
expected_input_data = {"test_input": "test_value", "step": 2}
assert cached_exec.input_data == expected_input_data
# Output should be visible in execution record
assert cached_exec.output_data["intermediate"] == [{"progress": 50}]
async def test_concurrent_operations_are_thread_safe(self, execution_client):
"""Test that concurrent operations don't corrupt cache."""
num_threads = 3 # Reduced for simpler test
operations_per_thread = 5 # Reduced for simpler test
# Create all executions upfront
created_exec_ids = []
for thread_id in range(num_threads):
for i in range(operations_per_thread):
exec_id, _ = execution_client.upsert_execution_input(
node_id=f"node-{thread_id}-{i}",
input_name="test_input",
input_data="test_value",
block_id=f"block-{thread_id}-{i}",
)
created_exec_ids.append((exec_id, thread_id, i))
def worker(thread_data):
"""Perform multiple operations from a thread."""
thread_id, ops = thread_data
for i, (exec_id, _, _) in enumerate(ops):
# Status updates
execution_client.update_node_status_and_publish(
exec_id=exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"thread": thread_id, "op": i},
)
# Output updates (use just one exec_id per thread for outputs)
if i == 0: # Only add outputs to first execution of each thread
execution_client.upsert_execution_output(
node_exec_id=exec_id,
output_name=f"output_{i}",
output_data={"thread": thread_id, "value": i},
)
# Organize executions by thread
thread_data = []
for tid in range(num_threads):
thread_ops = [
exec_data for exec_data in created_exec_ids if exec_data[1] == tid
]
thread_data.append((tid, thread_ops))
# Start multiple threads
threads = []
for data in thread_data:
thread = threading.Thread(target=worker, args=(data,))
threads.append(thread)
thread.start()
# Wait for completion
for thread in threads:
thread.join()
# Verify data integrity
expected_executions = num_threads * operations_per_thread
all_executions = execution_client.get_node_executions()
assert len(all_executions) == expected_executions
# Verify outputs - only first execution of each thread should have outputs
output_count = 0
for execution in all_executions:
if execution.output_data:
output_count += 1
assert output_count == num_threads # One output per thread
async def test_sync_and_async_versions_consistent(self, execution_client):
"""Test that sync and async versions of output operations behave the same."""
# First create the execution
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="sync-async-test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"input": "test"},
)
execution_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name="sync_result",
output_data={"method": "sync"},
)
execution_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name="async_result",
output_data={"method": "async"},
)
cached_exec = execution_client.get_node_execution(node_exec_id)
assert cached_exec is not None
assert "sync_result" in cached_exec.output_data
assert "async_result" in cached_exec.output_data
assert cached_exec.output_data["sync_result"] == [{"method": "sync"}]
assert cached_exec.output_data["async_result"] == [{"method": "async"}]
async def test_finalize_execution_completes_and_clears_cache(
self, execution_client
):
"""Test that finalize_execution waits for background tasks and clears cache."""
# First create the execution
node_exec_id, _ = execution_client.upsert_execution_input(
node_id="pending-test-node",
input_name="test_input",
input_data="test_value",
block_id="test-block",
)
# Trigger some background operations
execution_client.update_node_status_and_publish(
exec_id=node_exec_id, status=ExecutionStatus.RUNNING
)
execution_client.upsert_execution_output(
node_exec_id=node_exec_id, output_name="test", output_data={"value": 1}
)
# Wait for background tasks - may fail in test environment due to DB issues
try:
execution_client.finalize_execution(timeout=5.0)
except RuntimeError as e:
# In test environment, background DB operations may fail, but cache should still be cleared
assert "Background persistence failed" in str(e)
# Cache should be cleared regardless of background task failures
all_executions = execution_client.get_node_executions()
assert len(all_executions) == 0 # Cache should be cleared
async def test_manager_usage_pattern(self, execution_client):
# Create executions first
node_exec_id_1, _ = execution_client.upsert_execution_input(
node_id="node-1",
input_name="input1",
input_data="data1",
block_id="block-1",
)
node_exec_id_2, _ = execution_client.upsert_execution_input(
node_id="node-2",
input_name="input_from_node1",
input_data="value1",
block_id="block-2",
)
# Simulate manager.py workflow
# 1. Start execution
execution_client.update_node_status_and_publish(
exec_id=node_exec_id_1,
status=ExecutionStatus.RUNNING,
execution_data={"input": "data1"},
)
# 2. Node produces output
execution_client.upsert_execution_output(
node_exec_id=node_exec_id_1,
output_name="result",
output_data={"output": "value1"},
)
# 3. Complete first node
execution_client.update_node_status_and_publish(
exec_id=node_exec_id_1, status=ExecutionStatus.COMPLETED
)
# 4. Start second node (would read output from first)
execution_client.update_node_status_and_publish(
exec_id=node_exec_id_2,
status=ExecutionStatus.RUNNING,
execution_data={"input_from_node1": "value1"},
)
# 5. Manager queries for executions
all_executions = execution_client.get_node_executions()
running_executions = execution_client.get_node_executions(
statuses=[ExecutionStatus.RUNNING]
)
completed_executions = execution_client.get_node_executions(
statuses=[ExecutionStatus.COMPLETED]
)
# Verify manager can see all data immediately
assert len(all_executions) == 2
assert len(running_executions) == 1
assert len(completed_executions) == 1
# Verify output is accessible
exec_1 = execution_client.get_node_execution(node_exec_id_1)
assert exec_1 is not None
assert exec_1.output_data["result"] == [{"output": "value1"}]
def test_stats_handling_in_update_node_status(self, execution_client):
"""Test that stats parameter is properly handled in update_node_status_and_publish."""
# Create a fake execution directly in cache to avoid database issues
from datetime import datetime, timezone
from backend.data.execution import NodeExecutionResult
node_exec_id = "test-stats-exec-id"
fake_execution = NodeExecutionResult(
user_id="test-user",
graph_id="test-graph",
graph_version=1,
graph_exec_id="test-graph-exec",
node_exec_id=node_exec_id,
node_id="stats-test-node",
block_id="test-block",
status=ExecutionStatus.INCOMPLETE,
input_data={"test_input": "test_value"},
output_data={},
add_time=datetime.now(timezone.utc),
queue_time=None,
start_time=None,
end_time=None,
stats=None,
)
# Add directly to cache
execution_client._cache.add_node_execution(node_exec_id, fake_execution)
stats = {"token_count": 150, "processing_time": 2.5}
# Update status with stats
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.RUNNING,
execution_data={"input": "test"},
stats=stats,
)
# Verify execution was updated and stats are stored
execution = execution_client.get_node_execution(node_exec_id)
assert execution is not None
assert execution.status == ExecutionStatus.RUNNING
# Stats should be stored in proper stats field
assert execution.stats is not None
stats_dict = execution.stats.model_dump()
# Only check the fields we set, ignore defaults
assert stats_dict["token_count"] == 150
assert stats_dict["processing_time"] == 2.5
# Update with additional stats
additional_stats = {"error_count": 0}
execution_client.update_node_status_and_publish(
exec_id=node_exec_id,
status=ExecutionStatus.COMPLETED,
stats=additional_stats,
)
# Stats should be merged
execution = execution_client.get_node_execution(node_exec_id)
assert execution is not None
assert execution.status == ExecutionStatus.COMPLETED
stats_dict = execution.stats.model_dump()
# Check the merged stats
assert stats_dict["token_count"] == 150
assert stats_dict["processing_time"] == 2.5
assert stats_dict["error_count"] == 0
async def test_upsert_execution_input_scenarios(self, execution_client):
"""Test different scenarios of upsert_execution_input - create vs update."""
node_id = "test-node"
graph_exec_id = (
"test_graph_exec_id" # Must match the ExecutionDataClient's scope
)
# Scenario 1: Create new execution when none exists
exec_id_1, input_data_1 = execution_client.upsert_execution_input(
node_id=node_id,
input_name="first_input",
input_data="value1",
block_id="test-block",
)
# Should create new execution
execution = execution_client.get_node_execution(exec_id_1)
assert execution is not None
assert execution.status == ExecutionStatus.INCOMPLETE
assert execution.node_id == node_id
assert execution.graph_exec_id == graph_exec_id
assert input_data_1 == {"first_input": "value1"}
# Scenario 2: Add input to existing INCOMPLETE execution
exec_id_2, input_data_2 = execution_client.upsert_execution_input(
node_id=node_id,
input_name="second_input",
input_data="value2",
block_id="test-block",
)
# Should use same execution
assert exec_id_2 == exec_id_1
assert input_data_2 == {"first_input": "value1", "second_input": "value2"}
# Verify execution has both inputs
execution = execution_client.get_node_execution(exec_id_1)
assert execution is not None
assert execution.input_data == {
"first_input": "value1",
"second_input": "value2",
}
# Scenario 3: Create new execution when existing is not INCOMPLETE
execution_client.update_node_status_and_publish(
exec_id=exec_id_1, status=ExecutionStatus.RUNNING
)
exec_id_3, input_data_3 = execution_client.upsert_execution_input(
node_id=node_id,
input_name="third_input",
input_data="value3",
block_id="test-block",
)
# Should create new execution
assert exec_id_3 != exec_id_1
execution_3 = execution_client.get_node_execution(exec_id_3)
assert execution_3 is not None
assert input_data_3 == {"third_input": "value3"}
# Verify we now have 2 executions
all_executions = execution_client.get_node_executions()
assert len(all_executions) == 2
def test_graph_stats_operations(self, execution_client):
"""Test graph-level stats and start time operations."""
# Test update_graph_stats_and_publish
from backend.data.model import GraphExecutionStats
stats = GraphExecutionStats(
walltime=10.5, cputime=8.2, node_count=5, node_error_count=1
)
execution_client.update_graph_stats_and_publish(
status=ExecutionStatus.RUNNING, stats=stats
)
# Verify stats are stored in cache
cached_stats = execution_client._cache._graph_stats
assert cached_stats.walltime == 10.5
execution_client.update_graph_start_time_and_publish()
cached_stats = execution_client._cache._graph_stats
assert cached_stats.walltime == 10.5
def test_public_methods_accessible(self, execution_client):
"""Test that public methods are accessible."""
assert hasattr(execution_client._cache, "update_node_execution_status")
assert hasattr(execution_client._cache, "upsert_execution_output")
assert hasattr(execution_client._cache, "add_node_execution")
assert hasattr(execution_client._cache, "find_incomplete_execution_for_input")
assert hasattr(execution_client._cache, "update_execution_input")
assert hasattr(execution_client, "upsert_execution_input")
assert hasattr(execution_client, "update_node_status_and_publish")
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -5,31 +5,15 @@ import threading
import time
from collections import defaultdict
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Any, Optional, TypeVar, cast
from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Any, Optional, TypeVar, cast
from pika.adapters.blocking_connection import BlockingChannel
from pika.spec import Basic, BasicProperties
from prometheus_client import Gauge, start_http_server
from pydantic import JsonValue
from redis.asyncio.lock import Lock as RedisLock
from backend.blocks.agent import AgentExecutorBlock
from backend.blocks.io import AgentOutputBlock
from backend.data.block import (
BlockData,
BlockInput,
BlockOutput,
BlockSchema,
get_block,
)
from backend.data.credit import UsageTransactionMetadata
from backend.data.execution import (
ExecutionQueue,
ExecutionStatus,
GraphExecutionEntry,
NodeExecutionEntry,
UserContext,
)
from backend.data.graph import Link, Node
from backend.blocks.persistence import get_storage_key
from backend.data.model import GraphExecutionStats, NodeExecutionStats
from backend.data.notifications import (
AgentRunData,
@@ -39,14 +23,47 @@ from backend.data.notifications import (
ZeroBalanceData,
)
from backend.data.rabbitmq import SyncRabbitMQ
from backend.executor.execution_data import ExecutionDataClient
from backend.executor.activity_status_generator import (
generate_activity_status_for_execution,
)
from backend.executor.utils import LogMetadata
from backend.notifications.notifications import queue_notification
from backend.util.clients import get_database_manager_async_client
from backend.util.exceptions import InsufficientBalanceError, ModerationError
if TYPE_CHECKING:
from backend.executor import DatabaseManagerClient, DatabaseManagerAsyncClient
from prometheus_client import Gauge, start_http_server
from backend.blocks.agent import AgentExecutorBlock
from backend.data import redis_client as redis
from backend.data.block import (
BlockInput,
BlockOutput,
BlockOutputEntry,
BlockSchema,
get_block,
)
from backend.data.credit import UsageTransactionMetadata
from backend.data.execution import (
ExecutionQueue,
ExecutionStatus,
GraphExecution,
GraphExecutionEntry,
NodeExecutionEntry,
NodeExecutionResult,
NodesInputMasks,
UserContext,
)
from backend.data.graph import Link, Node
from backend.data.optional_block import get_optional_config
from backend.executor.utils import (
GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS,
GRAPH_EXECUTION_CANCEL_QUEUE_NAME,
GRAPH_EXECUTION_QUEUE_NAME,
CancelExecutionEvent,
ExecutionOutputEntry,
LogMetadata,
NodeExecutionProgress,
block_usage_cost,
create_execution_queue_config,
@@ -55,17 +72,20 @@ from backend.executor.utils import (
validate_exec,
)
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.notifications.notifications import queue_notification
from backend.server.v2.AutoMod.manager import automod_manager
from backend.util import json
from backend.util.clients import get_notification_manager_client
from backend.util.clients import (
get_async_execution_event_bus,
get_database_manager_client,
get_execution_event_bus,
get_notification_manager_client,
)
from backend.util.decorator import (
async_error_logged,
async_time_measured,
error_logged,
time_measured,
)
from backend.util.exceptions import InsufficientBalanceError, ModerationError
from backend.util.file import clean_exec_files
from backend.util.logging import TruncatedLogger, configure_logging
from backend.util.metrics import DiscordChannel
@@ -108,18 +128,99 @@ def execute_graph(
T = TypeVar("T")
async def should_skip_node(
node: Node,
creds_manager: IntegrationCredentialsManager,
user_id: str,
user_context: UserContext,
input_data: BlockInput,
graph_id: str,
) -> tuple[bool, str]:
"""
Check if a node should be skipped based on optional configuration.
Returns:
Tuple of (should_skip, skip_reason)
"""
optional_config = get_optional_config(node.metadata)
if not optional_config or not optional_config.enabled:
return False, ""
conditions = optional_config.conditions
skip_reasons = []
conditions_met = []
# Check credential availability
if conditions.on_missing_credentials:
node_block = node.block
input_model = cast(type[BlockSchema], node_block.input_schema)
for field_name, input_type in input_model.get_credentials_fields().items():
if field_name in input_data:
credentials_meta = input_type(**input_data[field_name])
# Check if credentials exist without acquiring lock
if not await creds_manager.exists(user_id, credentials_meta.id):
skip_reasons.append(f"Missing credentials: {field_name}")
conditions_met.append(True)
break
else:
# All credentials exist
conditions_met.append(False)
# Check standard skip_run_block input (automatically added for optional blocks)
if conditions.check_skip_input and "skip_run_block" in input_data:
skip_value = input_data.get("skip_run_block", False)
if skip_value is True: # Skip if input is True
skip_reasons.append("Skip input is true")
conditions_met.append(True)
else:
conditions_met.append(False)
# Check key-value flag
if conditions.kv_flag:
# Determine storage key (assume within_agent scope for now)
storage_key = get_storage_key(conditions.kv_flag, "within_agent", graph_id)
# Retrieve the value from KV store
kv_value = await get_database_manager_async_client().get_execution_kv_data(
user_id=user_id,
key=storage_key,
)
# Skip if flag is True (treat missing/None as False)
if kv_value is True:
skip_reasons.append(f"KV flag '{conditions.kv_flag}' is true")
conditions_met.append(True)
else:
conditions_met.append(False)
# Apply logical operator
if not conditions_met:
return False, ""
if conditions.operator == "and":
should_skip = all(conditions_met)
else: # OR
should_skip = any(conditions_met)
skip_message = optional_config.skip_message or "; ".join(skip_reasons)
return should_skip, skip_message
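# Illustration of the operator semantics above (condition values hypothetical):
#
#   conditions_met = [True, False]  # e.g. credentials missing, KV flag unset
#   all(conditions_met)  # False -> operator="and": node still runs
#   any(conditions_met)  # True  -> operator="or":  node is skipped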
async def execute_node(
node: Node,
creds_manager: IntegrationCredentialsManager,
data: NodeExecutionEntry,
execution_stats: NodeExecutionStats | None = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> BlockOutput:
"""
Execute a node in the graph. This will trigger a block execution on a node,
persist the execution result, and return the subsequent node to be executed.
Args:
node: The graph node to execute.
creds_manager: The manager to acquire and release credentials.
data: The execution data for executing the current node.
execution_stats: The execution statistics to be updated.
@@ -216,20 +317,21 @@ async def execute_node(
async def _enqueue_next_nodes(
execution_data_client: ExecutionDataClient,
db_client: "DatabaseManagerAsyncClient",
node: Node,
output: BlockData,
output: BlockOutputEntry,
user_id: str,
graph_exec_id: str,
graph_id: str,
log_metadata: LogMetadata,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]],
nodes_input_masks: Optional[NodesInputMasks],
user_context: UserContext,
) -> list[NodeExecutionEntry]:
async def add_enqueued_execution(
node_exec_id: str, node_id: str, block_id: str, data: BlockInput
) -> NodeExecutionEntry:
execution_data_client.update_node_status_and_publish(
await async_update_node_execution_status(
db_client=db_client,
exec_id=node_exec_id,
status=ExecutionStatus.QUEUED,
execution_data=data,
@@ -262,22 +364,21 @@ async def _enqueue_next_nodes(
next_data = parse_execution_output(output, next_output_name)
if next_data is None and output_name != next_output_name:
return enqueued_executions
next_node = await execution_data_client.get_node(next_node_id)
next_node = await db_client.get_node(next_node_id)
# Multiple nodes can register the same next node, so this needs to be atomic
# to avoid the same execution being enqueued multiple times,
# or the same input being consumed multiple times.
with execution_data_client.graph_lock:
async with synchronized(f"upsert_input-{next_node_id}-{graph_exec_id}"):
# Add output data to the earliest incomplete execution, or create a new one.
next_node_exec_id, next_node_input = (
execution_data_client.upsert_execution_input(
node_id=next_node_id,
input_name=next_input_name,
input_data=next_data,
block_id=next_node.block_id,
)
next_node_exec_id, next_node_input = await db_client.upsert_execution_input(
node_id=next_node_id,
graph_exec_id=graph_exec_id,
input_name=next_input_name,
input_data=next_data,
)
execution_data_client.update_node_status_and_publish(
await async_update_node_execution_status(
db_client=db_client,
exec_id=next_node_exec_id,
status=ExecutionStatus.INCOMPLETE,
)
@@ -289,8 +390,8 @@ async def _enqueue_next_nodes(
if link.is_static and link.sink_name not in next_node_input
}
if static_link_names and (
latest_execution := execution_data_client.get_latest_node_execution(
next_node_id
latest_execution := await db_client.get_latest_node_execution(
next_node_id, graph_exec_id
)
):
for name in static_link_names:
@@ -329,8 +430,9 @@ async def _enqueue_next_nodes(
# If link is static, there could be some incomplete executions waiting for it.
# Load them, complete the missing input data, and try to re-enqueue them.
for iexec in execution_data_client.get_node_executions(
for iexec in await db_client.get_node_executions(
node_id=next_node_id,
graph_exec_id=graph_exec_id,
statuses=[ExecutionStatus.INCOMPLETE],
):
idata = iexec.input_data
@@ -394,15 +496,12 @@ class ExecutionProcessor:
9. Node executor enqueues the next executed nodes to the node execution queue.
"""
# Current execution data client (scoped to current graph execution)
execution_data: ExecutionDataClient
@async_error_logged(swallow=True)
async def on_node_execution(
self,
node_exec: NodeExecutionEntry,
node_exec_progress: NodeExecutionProgress,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]],
nodes_input_masks: Optional[NodesInputMasks],
graph_stats_pair: tuple[GraphExecutionStats, threading.Lock],
) -> NodeExecutionStats:
log_metadata = LogMetadata(
@@ -414,7 +513,8 @@ class ExecutionProcessor:
node_id=node_exec.node_id,
block_name="-",
)
node = await self.execution_data.get_node(node_exec.node_id)
db_client = get_db_async_client()
node = await db_client.get_node(node_exec.node_id)
execution_stats = NodeExecutionStats()
timing_info, status = await self._on_node_execution(
@@ -422,6 +522,7 @@ class ExecutionProcessor:
node_exec=node_exec,
node_exec_progress=node_exec_progress,
stats=execution_stats,
db_client=db_client,
log_metadata=log_metadata,
nodes_input_masks=nodes_input_masks,
)
@@ -445,12 +546,15 @@ class ExecutionProcessor:
if node_error and not isinstance(node_error, str):
node_stats["error"] = str(node_error) or node_stats.__class__.__name__
self.execution_data.update_node_status_and_publish(
await async_update_node_execution_status(
db_client=db_client,
exec_id=node_exec.node_exec_id,
status=status,
stats=node_stats,
)
self.execution_data.update_graph_stats_and_publish(
await async_update_graph_execution_state(
db_client=db_client,
graph_exec_id=node_exec.graph_exec_id,
stats=graph_stats,
)
@@ -463,17 +567,22 @@ class ExecutionProcessor:
node_exec: NodeExecutionEntry,
node_exec_progress: NodeExecutionProgress,
stats: NodeExecutionStats,
db_client: "DatabaseManagerAsyncClient",
log_metadata: LogMetadata,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> ExecutionStatus:
status = ExecutionStatus.RUNNING
async def persist_output(output_name: str, output_data: Any) -> None:
self.execution_data.upsert_execution_output(
await db_client.upsert_execution_output(
node_exec_id=node_exec.node_exec_id,
output_name=output_name,
output_data=output_data,
)
if exec_update := await db_client.get_node_execution(
node_exec.node_exec_id
):
await send_async_execution_update(exec_update)
node_exec_progress.add_output(
ExecutionOutputEntry(
@@ -484,8 +593,31 @@ class ExecutionProcessor:
)
try:
# Check if node should be skipped
should_skip, skip_reason = await should_skip_node(
node=node,
creds_manager=self.creds_manager,
user_id=node_exec.user_id,
user_context=node_exec.user_context,
input_data=node_exec.inputs,
graph_id=node_exec.graph_id,
)
if should_skip:
log_metadata.info(
f"Skipping node execution {node_exec.node_exec_id}: {skip_reason}"
)
await async_update_node_execution_status(
db_client=db_client,
exec_id=node_exec.node_exec_id,
status=ExecutionStatus.SKIPPED,
stats={"skip_reason": skip_reason},
)
return ExecutionStatus.SKIPPED
log_metadata.info(f"Start node execution {node_exec.node_exec_id}")
self.execution_data.update_node_status_and_publish(
await async_update_node_execution_status(
db_client=db_client,
exec_id=node_exec.node_exec_id,
status=ExecutionStatus.RUNNING,
)
@@ -546,8 +678,6 @@ class ExecutionProcessor:
self.node_evaluation_thread = threading.Thread(
target=self.node_evaluation_loop.run_forever, daemon=True
)
# single thread executor
self.execution_data_executor = ThreadPoolExecutor(max_workers=1)
self.node_execution_thread.start()
self.node_evaluation_thread.start()
logger.info(f"[GraphExecutor] {self.tid} started")
@@ -567,13 +697,9 @@ class ExecutionProcessor:
node_eid="*",
block_name="-",
)
db_client = get_db_client()
# Get graph execution metadata first via sync client
from backend.util.clients import get_database_manager_client
db_client_sync = get_database_manager_client()
exec_meta = db_client_sync.get_graph_execution_meta(
exec_meta = db_client.get_graph_execution_meta(
user_id=graph_exec.user_id,
execution_id=graph_exec.graph_exec_id,
)
@@ -583,15 +709,12 @@ class ExecutionProcessor:
)
return
# Create scoped ExecutionDataClient for this graph execution with metadata
self.execution_data = ExecutionDataClient(
self.execution_data_executor, graph_exec.graph_exec_id, exec_meta
)
if exec_meta.status == ExecutionStatus.QUEUED:
if exec_meta.status in [ExecutionStatus.QUEUED, ExecutionStatus.INCOMPLETE]:
log_metadata.info(f"⚙️ Starting graph execution #{graph_exec.graph_exec_id}")
exec_meta.status = ExecutionStatus.RUNNING
self.execution_data.update_graph_start_time_and_publish()
send_execution_update(
db_client.update_graph_execution_start_time(graph_exec.graph_exec_id)
)
elif exec_meta.status == ExecutionStatus.RUNNING:
log_metadata.info(
f"⚙️ Graph execution #{graph_exec.graph_exec_id} is already running, continuing where it left off."
@@ -601,7 +724,9 @@ class ExecutionProcessor:
log_metadata.info(
f"⚙️ Graph execution #{graph_exec.graph_exec_id} was disturbed, continuing where it left off."
)
self.execution_data.update_graph_stats_and_publish(
update_graph_execution_state(
db_client=db_client,
graph_exec_id=graph_exec.graph_exec_id,
status=ExecutionStatus.RUNNING,
)
else:
@@ -632,10 +757,12 @@ class ExecutionProcessor:
# Activity status handling
activity_status = asyncio.run_coroutine_threadsafe(
self.execution_data.generate_activity_status(
generate_activity_status_for_execution(
graph_exec_id=graph_exec.graph_exec_id,
graph_id=graph_exec.graph_id,
graph_version=graph_exec.graph_version,
execution_stats=exec_stats,
db_client=get_db_async_client(),
user_id=graph_exec.user_id,
execution_status=status,
),
@@ -650,14 +777,15 @@ class ExecutionProcessor:
)
# Communication handling
self._handle_agent_run_notif(graph_exec, exec_stats)
self._handle_agent_run_notif(db_client, graph_exec, exec_stats)
finally:
self.execution_data.update_graph_stats_and_publish(
update_graph_execution_state(
db_client=db_client,
graph_exec_id=graph_exec.graph_exec_id,
status=exec_meta.status,
stats=exec_stats,
)
self.execution_data.finalize_execution()
def _charge_usage(
self,
@@ -666,6 +794,7 @@ class ExecutionProcessor:
) -> tuple[int, int]:
total_cost = 0
remaining_balance = 0
db_client = get_db_client()
block = get_block(node_exec.block_id)
if not block:
logger.error(f"Block {node_exec.block_id} not found.")
@@ -675,7 +804,7 @@ class ExecutionProcessor:
block=block, input_data=node_exec.inputs
)
if cost > 0:
remaining_balance = self.execution_data.spend_credits(
remaining_balance = db_client.spend_credits(
user_id=node_exec.user_id,
cost=cost,
metadata=UsageTransactionMetadata(
@@ -693,7 +822,7 @@ class ExecutionProcessor:
cost, usage_count = execution_usage_cost(execution_count)
if cost > 0:
remaining_balance = self.execution_data.spend_credits(
remaining_balance = db_client.spend_credits(
user_id=node_exec.user_id,
cost=cost,
metadata=UsageTransactionMetadata(
@@ -726,6 +855,7 @@ class ExecutionProcessor:
"""
execution_status: ExecutionStatus = ExecutionStatus.RUNNING
error: Exception | None = None
db_client = get_db_client()
execution_stats_lock = threading.Lock()
# State holders ----------------------------------------------------
@@ -736,7 +866,7 @@ class ExecutionProcessor:
execution_queue = ExecutionQueue[NodeExecutionEntry]()
try:
if self.execution_data.get_credits(graph_exec.user_id) <= 0:
if db_client.get_credits(graph_exec.user_id) <= 0:
raise InsufficientBalanceError(
user_id=graph_exec.user_id,
message="You have no credits left to run an agent.",
@@ -748,7 +878,7 @@ class ExecutionProcessor:
try:
if moderation_error := asyncio.run_coroutine_threadsafe(
automod_manager.moderate_graph_execution_inputs(
db_client=self.execution_data.db_client_async,
db_client=get_db_async_client(),
graph_exec=graph_exec,
),
self.node_evaluation_loop,
@@ -763,34 +893,16 @@ class ExecutionProcessor:
# ------------------------------------------------------------
# Prepopulate queue ---------------------------------------
# ------------------------------------------------------------
queued_executions = self.execution_data.get_node_executions(
for node_exec in db_client.get_node_executions(
graph_exec.graph_exec_id,
statuses=[
ExecutionStatus.RUNNING,
ExecutionStatus.QUEUED,
ExecutionStatus.TERMINATED,
],
)
log_metadata.info(
f"Pre-populating queue with {len(queued_executions)} executions from cache"
)
for i, node_exec in enumerate(queued_executions):
log_metadata.info(
f" [{i}] {node_exec.node_exec_id}: status={node_exec.status}, node={node_exec.node_id}"
)
try:
node_entry = node_exec.to_node_execution_entry(
graph_exec.user_context
)
execution_queue.add(node_entry)
log_metadata.info(" Added to execution queue successfully")
except Exception as e:
log_metadata.error(f" Failed to add to execution queue: {e}")
log_metadata.info(
f"Execution queue populated with {len(queued_executions)} executions"
)
):
node_entry = node_exec.to_node_execution_entry(graph_exec.user_context)
execution_queue.add(node_entry)
# ------------------------------------------------------------
# Main dispatch / polling loop -----------------------------
@@ -810,14 +922,13 @@ class ExecutionProcessor:
try:
cost, remaining_balance = self._charge_usage(
node_exec=queued_node_exec,
execution_count=self.execution_data.increment_execution_count(
graph_exec.user_id
),
execution_count=increment_execution_count(graph_exec.user_id),
)
with execution_stats_lock:
execution_stats.cost += cost
# Check if we crossed the low balance threshold
self._handle_low_balance(
db_client=db_client,
user_id=graph_exec.user_id,
current_balance=remaining_balance,
transaction_cost=cost,
@@ -825,17 +936,19 @@ class ExecutionProcessor:
except InsufficientBalanceError as balance_error:
error = balance_error # Set error to trigger FAILED status
node_exec_id = queued_node_exec.node_exec_id
self.execution_data.upsert_execution_output(
db_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name="error",
output_data=str(error),
)
self.execution_data.update_node_status_and_publish(
update_node_execution_status(
db_client=db_client,
exec_id=node_exec_id,
status=ExecutionStatus.FAILED,
)
self._handle_insufficient_funds_notif(
db_client,
graph_exec.user_id,
graph_exec.graph_id,
error,
@@ -922,13 +1035,12 @@ class ExecutionProcessor:
time.sleep(0.1)
# loop done --------------------------------------------------
# Background task finalization moved to finally block
# Output moderation
try:
if moderation_error := asyncio.run_coroutine_threadsafe(
automod_manager.moderate_graph_execution_outputs(
db_client=self.execution_data.db_client_async,
db_client=get_db_async_client(),
graph_exec_id=graph_exec.graph_exec_id,
user_id=graph_exec.user_id,
graph_id=graph_exec.graph_id,
@@ -982,6 +1094,7 @@ class ExecutionProcessor:
error=error,
graph_exec_id=graph_exec.graph_exec_id,
log_metadata=log_metadata,
db_client=db_client,
)
@error_logged(swallow=True)
@@ -994,6 +1107,7 @@ class ExecutionProcessor:
error: Exception | None,
graph_exec_id: str,
log_metadata: LogMetadata,
db_client: "DatabaseManagerClient",
) -> None:
"""
Clean up running node executions and evaluations when graph execution ends.
@@ -1027,7 +1141,8 @@ class ExecutionProcessor:
)
while queued_execution := execution_queue.get_or_none():
self.execution_data.update_node_status_and_publish(
update_node_execution_status(
db_client=db_client,
exec_id=queued_execution.node_exec_id,
status=execution_status,
stats={"error": str(error)} if error else None,
@@ -1042,7 +1157,7 @@ class ExecutionProcessor:
node_id: str,
graph_exec: GraphExecutionEntry,
log_metadata: LogMetadata,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]],
nodes_input_masks: Optional[NodesInputMasks],
execution_queue: ExecutionQueue[NodeExecutionEntry],
) -> None:
"""Process a node's output, update its status, and enqueue next nodes.
@@ -1055,10 +1170,12 @@ class ExecutionProcessor:
nodes_input_masks: Optional map of node input overrides
execution_queue: Queue to add next executions to
"""
db_client = get_db_async_client()
log_metadata.debug(f"Enqueue nodes for {node_id}: {output}")
for next_execution in await _enqueue_next_nodes(
execution_data_client=self.execution_data,
db_client=db_client,
node=output.node,
output=output.data,
user_id=graph_exec.user_id,
@@ -1072,13 +1189,15 @@ class ExecutionProcessor:
def _handle_agent_run_notif(
self,
db_client: "DatabaseManagerClient",
graph_exec: GraphExecutionEntry,
exec_stats: GraphExecutionStats,
):
metadata = self.execution_data.get_graph_metadata(
metadata = db_client.get_graph_metadata(
graph_exec.graph_id, graph_exec.graph_version
)
outputs = self.execution_data.get_node_executions(
outputs = db_client.get_node_executions(
graph_exec.graph_exec_id,
block_ids=[AgentOutputBlock().id],
)
@@ -1107,12 +1226,13 @@ class ExecutionProcessor:
def _handle_insufficient_funds_notif(
self,
db_client: "DatabaseManagerClient",
user_id: str,
graph_id: str,
e: InsufficientBalanceError,
):
shortfall = abs(e.amount) - e.balance
metadata = self.execution_data.get_graph_metadata(graph_id)
metadata = db_client.get_graph_metadata(graph_id)
base_url = (
settings.config.frontend_base_url or settings.config.platform_base_url
)
@@ -1131,7 +1251,7 @@ class ExecutionProcessor:
)
try:
user_email = self.execution_data.get_user_email_by_id(user_id)
user_email = db_client.get_user_email_by_id(user_id)
alert_message = (
f"❌ **Insufficient Funds Alert**\n"
@@ -1153,6 +1273,7 @@ class ExecutionProcessor:
def _handle_low_balance(
self,
db_client: "DatabaseManagerClient",
user_id: str,
current_balance: int,
transaction_cost: int,
@@ -1181,7 +1302,7 @@ class ExecutionProcessor:
)
try:
user_email = self.execution_data.get_user_email_by_id(user_id)
user_email = db_client.get_user_email_by_id(user_id)
alert_message = (
f"⚠️ **Low Balance Alert**\n"
f"User: {user_email or user_id}\n"
@@ -1559,3 +1680,117 @@ class ExecutionManager(AppProcess):
)
logger.info(f"{prefix} ✅ Finished GraphExec cleanup")
# ------- UTILITIES ------- #
def get_db_client() -> "DatabaseManagerClient":
return get_database_manager_client()
def get_db_async_client() -> "DatabaseManagerAsyncClient":
return get_database_manager_async_client()
@func_retry
async def send_async_execution_update(
entry: GraphExecution | NodeExecutionResult | None,
) -> None:
if entry is None:
return
await get_async_execution_event_bus().publish(entry)
@func_retry
def send_execution_update(entry: GraphExecution | NodeExecutionResult | None):
if entry is None:
return
return get_execution_event_bus().publish(entry)
async def async_update_node_execution_status(
db_client: "DatabaseManagerAsyncClient",
exec_id: str,
status: ExecutionStatus,
execution_data: BlockInput | None = None,
stats: dict[str, Any] | None = None,
) -> NodeExecutionResult:
"""Sets status and fetches+broadcasts the latest state of the node execution"""
exec_update = await db_client.update_node_execution_status(
exec_id, status, execution_data, stats
)
await send_async_execution_update(exec_update)
return exec_update
def update_node_execution_status(
db_client: "DatabaseManagerClient",
exec_id: str,
status: ExecutionStatus,
execution_data: BlockInput | None = None,
stats: dict[str, Any] | None = None,
) -> NodeExecutionResult:
"""Sets status and fetches+broadcasts the latest state of the node execution"""
exec_update = db_client.update_node_execution_status(
exec_id, status, execution_data, stats
)
send_execution_update(exec_update)
return exec_update
async def async_update_graph_execution_state(
db_client: "DatabaseManagerAsyncClient",
graph_exec_id: str,
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
) -> GraphExecution | None:
"""Sets status and fetches+broadcasts the latest state of the graph execution"""
graph_update = await db_client.update_graph_execution_stats(
graph_exec_id, status, stats
)
if graph_update:
await send_async_execution_update(graph_update)
else:
logger.error(f"Failed to update graph execution stats for {graph_exec_id}")
return graph_update
def update_graph_execution_state(
db_client: "DatabaseManagerClient",
graph_exec_id: str,
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
) -> GraphExecution | None:
"""Sets status and fetches+broadcasts the latest state of the graph execution"""
graph_update = db_client.update_graph_execution_stats(graph_exec_id, status, stats)
if graph_update:
send_execution_update(graph_update)
else:
logger.error(f"Failed to update graph execution stats for {graph_exec_id}")
return graph_update
@asynccontextmanager
async def synchronized(key: str, timeout: int = 60):
r = await redis.get_redis_async()
lock: RedisLock = r.lock(f"lock:{key}", timeout=timeout)
try:
await lock.acquire()
yield
finally:
if await lock.locked() and await lock.owned():
await lock.release()
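# Usage sketch (mirrors the call site in _enqueue_next_nodes above):
# serialize concurrent input upserts for one node within a graph execution.
#
#   async with synchronized(f"upsert_input-{node_id}-{graph_exec_id}"):
#       ...  # critical section: read-modify-write of execution inputs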
def increment_execution_count(user_id: str) -> int:
"""
Increment the execution count for a given user;
this is used to charge the user for the execution cost.
"""
r = redis.get_redis()
k = f"uec:{user_id}" # User Execution Count global key
counter = cast(int, r.incr(k))
if counter == 1:
r.expire(k, settings.config.execution_counter_expiration_time)
return counter
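# Example (hypothetical user ID): the first call in a window creates the
# Redis key and starts its TTL; subsequent calls reuse it until expiry.
#
#   increment_execution_count("user-123")  # -> 1, TTL set
#   increment_execution_count("user-123")  # -> 2, same window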

View File

@@ -32,17 +32,13 @@ async def test_handle_low_balance_threshold_crossing(server: SpinTestServer):
mock_settings.config.low_balance_threshold = 500 # $5 threshold
mock_settings.config.frontend_base_url = "https://test.com"
# Initialize the execution processor
execution_processor.on_graph_executor_start()
# Mock the execution_data attribute since it's created in on_graph_execution
mock_execution_data = MagicMock()
execution_processor.execution_data = mock_execution_data
mock_execution_data.get_user_email_by_id.return_value = "test@example.com"
# Create mock database client
mock_db_client = MagicMock()
mock_db_client.get_user_email_by_id.return_value = "test@example.com"
# Test the low balance handler
execution_processor._handle_low_balance(
db_client=mock_db_client,
user_id=user_id,
current_balance=current_balance,
transaction_cost=transaction_cost,
@@ -66,19 +62,6 @@ async def test_handle_low_balance_threshold_crossing(server: SpinTestServer):
assert "$4.00" in discord_message
assert "$6.00" in discord_message
# Cleanup execution processor threads
try:
execution_processor.node_execution_loop.call_soon_threadsafe(
execution_processor.node_execution_loop.stop
)
execution_processor.node_evaluation_loop.call_soon_threadsafe(
execution_processor.node_evaluation_loop.stop
)
execution_processor.node_execution_thread.join(timeout=1)
execution_processor.node_evaluation_thread.join(timeout=1)
except Exception:
pass # Ignore cleanup errors
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_low_balance_no_notification_when_not_crossing(
@@ -107,17 +90,12 @@ async def test_handle_low_balance_no_notification_when_not_crossing(
mock_get_client.return_value = mock_client
mock_settings.config.low_balance_threshold = 500 # $5 threshold
# Initialize the execution processor
execution_processor.on_graph_executor_start()
# Mock the execution_data attribute since it's created in on_graph_execution
mock_execution_data = MagicMock()
execution_processor.execution_data = mock_execution_data
mock_execution_data.get_user_email_by_id.return_value = "test@example.com"
# Create mock database client
mock_db_client = MagicMock()
# Test the low balance handler
execution_processor._handle_low_balance(
db_client=mock_db_client,
user_id=user_id,
current_balance=current_balance,
transaction_cost=transaction_cost,
@@ -127,19 +105,6 @@ async def test_handle_low_balance_no_notification_when_not_crossing(
mock_queue_notif.assert_not_called()
mock_client.discord_system_alert.assert_not_called()
# Cleanup execution processor threads
try:
execution_processor.node_execution_loop.call_soon_threadsafe(
execution_processor.node_execution_loop.stop
)
execution_processor.node_evaluation_loop.call_soon_threadsafe(
execution_processor.node_evaluation_loop.stop
)
execution_processor.node_execution_thread.join(timeout=1)
execution_processor.node_evaluation_thread.join(timeout=1)
except Exception:
pass # Ignore cleanup errors
@pytest.mark.asyncio(loop_scope="session")
async def test_handle_low_balance_no_duplicate_when_already_below(
@@ -168,17 +133,12 @@ async def test_handle_low_balance_no_duplicate_when_already_below(
mock_get_client.return_value = mock_client
mock_settings.config.low_balance_threshold = 500 # $5 threshold
# Initialize the execution processor
execution_processor.on_graph_executor_start()
# Mock the execution_data attribute since it's created in on_graph_execution
mock_execution_data = MagicMock()
execution_processor.execution_data = mock_execution_data
mock_execution_data.get_user_email_by_id.return_value = "test@example.com"
# Create mock database client
mock_db_client = MagicMock()
# Test the low balance handler
execution_processor._handle_low_balance(
db_client=mock_db_client,
user_id=user_id,
current_balance=current_balance,
transaction_cost=transaction_cost,
@@ -187,16 +147,3 @@ async def test_handle_low_balance_no_duplicate_when_already_below(
# Verify no notification was sent (user was already below threshold)
mock_queue_notif.assert_not_called()
mock_client.discord_system_alert.assert_not_called()
# Cleanup execution processor threads
try:
execution_processor.node_execution_loop.call_soon_threadsafe(
execution_processor.node_execution_loop.stop
)
execution_processor.node_evaluation_loop.call_soon_threadsafe(
execution_processor.node_evaluation_loop.stop
)
execution_processor.node_execution_thread.join(timeout=1)
execution_processor.node_evaluation_thread.join(timeout=1)
except Exception:
pass # Ignore cleanup errors

View File

@@ -35,21 +35,20 @@ async def execute_graph(
logger.info(f"Input data: {input_data}")
# --- Test adding new executions --- #
response = await agent_server.test_execute_graph(
graph_exec = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
graph_exec_id = response.graph_exec_id
logger.info(f"Created execution with ID: {graph_exec_id}")
logger.info(f"Created execution with ID: {graph_exec.id}")
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec_id, 30)
result = await wait_execution(test_user.id, graph_exec.id, 30)
logger.info(f"Execution completed with {len(result)} results")
assert len(result) == num_execs
return graph_exec_id
return graph_exec.id
async def assert_sample_graph_executions(
@@ -379,7 +378,7 @@ async def test_execute_preset(server: SpinTestServer):
# Verify execution
assert result is not None
graph_exec_id = result["id"]
graph_exec_id = result.id
# Wait for execution to complete
executions = await wait_execution(test_user.id, graph_exec_id)
@@ -468,7 +467,7 @@ async def test_execute_preset_with_clash(server: SpinTestServer):
# Verify execution
assert result is not None, "Result must not be None"
graph_exec_id = result["id"]
graph_exec_id = result.id
# Wait for execution to complete
executions = await wait_execution(test_user.id, graph_exec_id)

View File

@@ -191,15 +191,22 @@ class GraphExecutionJobInfo(GraphExecutionJobArgs):
id: str
name: str
next_run_time: str
timezone: str = Field(default="UTC", description="Timezone used for scheduling")
@staticmethod
def from_db(
job_args: GraphExecutionJobArgs, job_obj: JobObj
) -> "GraphExecutionJobInfo":
# Extract timezone from the trigger if it's a CronTrigger
timezone_str = "UTC"
if hasattr(job_obj.trigger, "timezone"):
timezone_str = str(job_obj.trigger.timezone)
return GraphExecutionJobInfo(
id=job_obj.id,
name=job_obj.name,
next_run_time=job_obj.next_run_time.isoformat(),
timezone=timezone_str,
**job_args.model_dump(),
)
@@ -395,6 +402,7 @@ class Scheduler(AppService):
input_data: BlockInput,
input_credentials: dict[str, CredentialsMetaInput],
name: Optional[str] = None,
user_timezone: str | None = None,
) -> GraphExecutionJobInfo:
# Validate the graph before scheduling to prevent runtime failures
# We don't need the return value, just want the validation to run
@@ -408,7 +416,18 @@ class Scheduler(AppService):
)
)
logger.info(f"Scheduling job for user {user_id} in UTC (cron: {cron})")
# Use provided timezone or default to UTC
# Note: Timezone should be passed from the client to avoid database lookups
if not user_timezone:
user_timezone = "UTC"
logger.warning(
f"No timezone provided for user {user_id}, using UTC for scheduling. "
f"Client should pass user's timezone for correct scheduling."
)
logger.info(
f"Scheduling job for user {user_id} with timezone {user_timezone} (cron: {cron})"
)
job_args = GraphExecutionJobArgs(
user_id=user_id,
@@ -422,12 +441,12 @@ class Scheduler(AppService):
execute_graph,
kwargs=job_args.model_dump(),
name=name,
trigger=CronTrigger.from_crontab(cron, timezone="UTC"),
trigger=CronTrigger.from_crontab(cron, timezone=user_timezone),
jobstore=Jobstores.EXECUTION.value,
replace_existing=True,
)
logger.info(
f"Added job {job.id} with cron schedule '{cron}' in UTC, input data: {input_data}"
f"Added job {job.id} with cron schedule '{cron}' in timezone {user_timezone}, input data: {input_data}"
)
return GraphExecutionJobInfo.from_db(job_args, job)
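# Hedged illustration of the timezone-aware trigger above, using the standard
# APScheduler API (cron expression and timezone are examples):
#
#   from apscheduler.triggers.cron import CronTrigger
#   trigger = CronTrigger.from_crontab("0 9 * * *", timezone="America/Chicago")
#   # fires at 09:00 Chicago time instead of 09:00 UTC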

View File

@@ -4,20 +4,27 @@ import threading
import time
from collections import defaultdict
from concurrent.futures import Future
from typing import Any, Optional
from typing import Any, Mapping, Optional, cast
from pydantic import BaseModel, JsonValue, ValidationError
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.block import Block, BlockData, BlockInput, BlockType, get_block
from backend.data.block import (
Block,
BlockCostType,
BlockInput,
BlockOutputEntry,
BlockType,
get_block,
)
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCostType
from backend.data.db import prisma
from backend.data.execution import (
ExecutionStatus,
GraphExecutionStats,
GraphExecutionWithNodes,
NodesInputMasks,
UserContext,
)
from backend.data.graph import GraphModel, Node
@@ -239,7 +246,7 @@ def _tokenise(path: str) -> list[tuple[str, str]] | None:
# --------------------------------------------------------------------------- #
def parse_execution_output(output: BlockData, name: str) -> Any | None:
def parse_execution_output(output: BlockOutputEntry, name: str) -> JsonValue | None:
"""
Retrieve a nested value out of `output` using the flattened *name*.
@@ -263,7 +270,7 @@ def parse_execution_output(output: BlockData, name: str) -> Any | None:
if tokens is None:
return None
cur: Any = data
cur: JsonValue = data
for delim, ident in tokens:
if delim == LIST_SPLIT:
# list[index]
@@ -428,7 +435,7 @@ def validate_exec(
async def _validate_node_input_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> dict[str, dict[str, str]]:
"""
Checks all credentials for all nodes of the graph and returns structured errors.
@@ -508,8 +515,8 @@ async def _validate_node_input_credentials(
def make_node_credentials_input_map(
graph: GraphModel,
graph_credentials_input: dict[str, CredentialsMetaInput],
) -> dict[str, dict[str, JsonValue]]:
graph_credentials_input: Mapping[str, CredentialsMetaInput],
) -> NodesInputMasks:
"""
Maps credentials for an execution to the correct nodes.
@@ -544,8 +551,8 @@ def make_node_credentials_input_map(
async def validate_graph_with_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> Mapping[str, Mapping[str, str]]:
"""
Validate graph including credentials and return structured errors per node.
@@ -575,7 +582,7 @@ async def _construct_starting_node_execution_input(
graph: GraphModel,
user_id: str,
graph_inputs: BlockInput,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> list[tuple[str, BlockInput]]:
"""
Validates and prepares the input data for executing a graph.
@@ -616,7 +623,7 @@ async def _construct_starting_node_execution_input(
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
input_name = node.input_default.get("name")
input_name = cast(str | None, node.input_default.get("name"))
if input_name and input_name in graph_inputs:
input_data = {"value": graph_inputs[input_name]}
@@ -643,9 +650,9 @@ async def validate_and_construct_node_execution_input(
user_id: str,
graph_inputs: BlockInput,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], dict[str, dict[str, JsonValue]]]:
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], NodesInputMasks]:
"""
Public wrapper that handles graph fetching, credential mapping, and validation+construction.
This centralizes the logic used by both scheduler validation and actual execution.
@@ -659,7 +666,9 @@ async def validate_and_construct_node_execution_input(
nodes_input_masks: Node inputs to use.
Returns:
tuple[GraphModel, list[tuple[str, BlockInput]]]: Graph model and list of tuples for node execution input.
GraphModel: Full graph object for the given `graph_id`.
list[tuple[node_id, BlockInput]]: Starting node IDs with corresponding inputs.
dict[str, BlockInput]: Node input masks including all passed-in credentials.
Raises:
NotFoundError: If the graph is not found.
@@ -700,11 +709,11 @@ async def validate_and_construct_node_execution_input(
def _merge_nodes_input_masks(
overrides_map_1: dict[str, dict[str, JsonValue]],
overrides_map_2: dict[str, dict[str, JsonValue]],
) -> dict[str, dict[str, JsonValue]]:
overrides_map_1: NodesInputMasks,
overrides_map_2: NodesInputMasks,
) -> NodesInputMasks:
"""Perform a per-node merge of input overrides"""
result = overrides_map_1.copy()
result = dict(overrides_map_1)
for node_id, overrides2 in overrides_map_2.items():
if node_id in result:
result[node_id] = {**result[node_id], **overrides2}
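# Worked example of the per-node merge (mask values hypothetical); the
# second map wins on key conflicts within a node:
#
#   _merge_nodes_input_masks(
#       {"node1": {"a": 1, "b": 2}},
#       {"node1": {"b": 3}, "node2": {"c": 4}},
#   )
#   # -> {"node1": {"a": 1, "b": 3}, "node2": {"c": 4}}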
@@ -854,8 +863,8 @@ async def add_graph_execution(
inputs: Optional[BlockInput] = None,
preset_id: Optional[str] = None,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> GraphExecutionWithNodes:
"""
Adds a graph execution to the queue and returns the execution entry.
@@ -879,7 +888,7 @@ async def add_graph_execution(
else:
edb = get_database_manager_async_client()
graph, starting_nodes_input, nodes_input_masks = (
graph, starting_nodes_input, compiled_nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
@@ -892,37 +901,43 @@ async def add_graph_execution(
graph_exec = None
try:
# Sanity check: running add_graph_execution with the properties of
# the graph_exec created here should create the same execution again.
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
inputs=inputs or {},
credential_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
# Fetch user context for the graph execution
user_context = await get_user_context(user_id)
queue = await get_async_execution_queue()
graph_exec_entry = graph_exec.to_graph_execution_entry(user_context)
if nodes_input_masks:
graph_exec_entry.nodes_input_masks = nodes_input_masks
graph_exec_entry = graph_exec.to_graph_execution_entry(
user_context=await get_user_context(user_id),
compiled_nodes_input_masks=compiled_nodes_input_masks,
)
logger.info(
f"Created graph execution #{graph_exec.id} for graph "
f"#{graph_id} with {len(starting_nodes_input)} starting nodes. "
f"Now publishing to execution queue."
)
await queue.publish_message(
exec_queue = await get_async_execution_queue()
await exec_queue.publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
logger.info(f"Published execution {graph_exec.id} to RabbitMQ queue")
bus = get_async_execution_event_bus()
await bus.publish(graph_exec)
graph_exec.status = ExecutionStatus.QUEUED
await edb.update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=graph_exec.status,
)
await get_async_execution_event_bus().publish(graph_exec)
return graph_exec
except BaseException as e:
@@ -952,7 +967,7 @@ async def add_graph_execution(
class ExecutionOutputEntry(BaseModel):
node: Node
node_exec_id: str
data: BlockData
data: BlockOutputEntry
class NodeExecutionProgress:

View File

@@ -1,6 +1,7 @@
from typing import cast
import pytest
from pytest_mock import MockerFixture
from backend.executor.utils import merge_execution_input, parse_execution_output
from backend.util.mock import MockObject
@@ -276,3 +277,147 @@ def test_merge_execution_input():
result = merge_execution_input(data)
assert "mixed" in result
assert result["mixed"].attr[0]["key"] == "value3"
@pytest.mark.asyncio
async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
"""
Verify that calling the function with its own output creates the same execution again.
"""
from backend.data.execution import GraphExecutionWithNodes
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import add_graph_execution
from backend.integrations.providers import ProviderName
# Mock data
graph_id = "test-graph-id"
user_id = "test-user-id"
inputs = {"test_input": "test_value"}
preset_id = "test-preset-id"
graph_version = 1
graph_credentials_inputs = {
"cred_key": CredentialsMetaInput(
id="cred-id", provider=ProviderName("test_provider"), type="oauth2"
)
}
nodes_input_masks = {"node1": {"input1": "masked_value"}}
# Mock the graph object returned by validate_and_construct_node_execution_input
mock_graph = mocker.MagicMock()
mock_graph.version = graph_version
# Mock the starting nodes input and compiled nodes input masks
starting_nodes_input = [
("node1", {"input1": "value1"}),
("node2", {"input1": "value2"}),
]
compiled_nodes_input_masks = {"node1": {"input1": "compiled_mask"}}
# Mock the graph execution object
mock_graph_exec = mocker.MagicMock(spec=GraphExecutionWithNodes)
mock_graph_exec.id = "execution-id-123"
mock_graph_exec.node_executions = [] # Add this to avoid AttributeError
mock_graph_exec.to_graph_execution_entry.return_value = mocker.MagicMock()
# Mock user context
mock_user_context = {"user_id": user_id, "context": "test_context"}
# Mock the queue and event bus
mock_queue = mocker.AsyncMock()
mock_event_bus = mocker.MagicMock()
mock_event_bus.publish = mocker.AsyncMock()
# Setup mocks
mock_validate = mocker.patch(
"backend.executor.utils.validate_and_construct_node_execution_input"
)
mock_edb = mocker.patch("backend.executor.utils.execution_db")
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_get_user_context = mocker.patch("backend.executor.utils.get_user_context")
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
# Setup mock returns
mock_validate.return_value = (
mock_graph,
starting_nodes_input,
compiled_nodes_input_masks,
)
mock_prisma.is_connected.return_value = True
mock_edb.create_graph_execution = mocker.AsyncMock(return_value=mock_graph_exec)
mock_edb.update_graph_execution_stats = mocker.AsyncMock(
return_value=mock_graph_exec
)
mock_edb.update_node_execution_status_batch = mocker.AsyncMock()
mock_get_user_context.return_value = mock_user_context
mock_get_queue.return_value = mock_queue
mock_get_event_bus.return_value = mock_event_bus
# Call the function - first execution
result1 = await add_graph_execution(
graph_id=graph_id,
user_id=user_id,
inputs=inputs,
preset_id=preset_id,
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
)
# Store the parameters used in the first call to create_graph_execution
first_call_kwargs = mock_edb.create_graph_execution.call_args[1]
# Verify the create_graph_execution was called with correct parameters
mock_edb.create_graph_execution.assert_called_once_with(
user_id=user_id,
graph_id=graph_id,
graph_version=mock_graph.version,
inputs=inputs,
credential_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
# Set up the graph execution mock to have properties we can extract
mock_graph_exec.graph_id = graph_id
mock_graph_exec.user_id = user_id
mock_graph_exec.graph_version = graph_version
mock_graph_exec.inputs = inputs
mock_graph_exec.credential_inputs = graph_credentials_inputs
mock_graph_exec.nodes_input_masks = nodes_input_masks
mock_graph_exec.preset_id = preset_id
# Create a second mock execution for the sanity check
mock_graph_exec_2 = mocker.MagicMock(spec=GraphExecutionWithNodes)
mock_graph_exec_2.id = "execution-id-456"
mock_graph_exec_2.to_graph_execution_entry.return_value = mocker.MagicMock()
# Reset mocks and set up for second call
mock_edb.create_graph_execution.reset_mock()
mock_edb.create_graph_execution.return_value = mock_graph_exec_2
mock_validate.reset_mock()
# Sanity check: call add_graph_execution with properties from first result
# This should create the same execution parameters
result2 = await add_graph_execution(
graph_id=mock_graph_exec.graph_id,
user_id=mock_graph_exec.user_id,
inputs=mock_graph_exec.inputs,
preset_id=mock_graph_exec.preset_id,
graph_version=mock_graph_exec.graph_version,
graph_credentials_inputs=mock_graph_exec.credential_inputs,
nodes_input_masks=mock_graph_exec.nodes_input_masks,
)
# Verify that create_graph_execution was called with identical parameters
second_call_kwargs = mock_edb.create_graph_execution.call_args[1]
# The sanity check: both calls should use identical parameters
assert first_call_kwargs == second_call_kwargs
# Both executions should succeed (though they create different objects)
assert result1 == mock_graph_exec
assert result2 == mock_graph_exec_2

View File

@@ -1,13 +1,14 @@
import functools
from typing import TYPE_CHECKING
from autogpt_libs.utils.cache import cached
if TYPE_CHECKING:
from ..providers import ProviderName
from ._base import BaseWebhooksManager
# --8<-- [start:load_webhook_managers]
@functools.cache
@cached()
def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]:
webhook_managers = {}

View File

@@ -7,10 +7,9 @@ from backend.data.graph import set_node_webhook
from backend.integrations.creds_manager import IntegrationCredentialsManager
from . import get_webhook_manager, supports_webhooks
from .utils import setup_webhook_for_block
if TYPE_CHECKING:
from backend.data.graph import BaseGraph, GraphModel, Node, NodeModel
from backend.data.graph import BaseGraph, GraphModel, NodeModel
from backend.data.model import Credentials
from ._base import BaseWebhooksManager
@@ -43,32 +42,19 @@ async def _on_graph_activate(graph: "BaseGraph", user_id: str) -> "BaseGraph": .
async def _on_graph_activate(graph: "BaseGraph | GraphModel", user_id: str):
get_credentials = credentials_manager.cached_getter(user_id)
updated_nodes = []
for new_node in graph.nodes:
block_input_schema = cast(BlockSchema, new_node.block.input_schema)
node_credentials = None
if (
# Webhook-triggered blocks are only allowed to have 1 credentials input
(
creds_field_name := next(
iter(block_input_schema.get_credentials_fields()), None
for creds_field_name in block_input_schema.get_credentials_fields().keys():
# Prevent saving graph with non-existent credentials
if (
creds_meta := new_node.input_default.get(creds_field_name)
) and not await get_credentials(creds_meta["id"]):
raise ValueError(
f"Node #{new_node.id} input '{creds_field_name}' updated with "
f"non-existent credentials #{creds_meta['id']}"
)
)
and (creds_meta := new_node.input_default.get(creds_field_name))
and not (node_credentials := await get_credentials(creds_meta["id"]))
):
raise ValueError(
f"Node #{new_node.id} input '{creds_field_name}' updated with "
f"non-existent credentials #{creds_meta['id']}"
)
updated_node = await on_node_activate(
user_id, graph.id, new_node, credentials=node_credentials
)
updated_nodes.append(updated_node)
graph.nodes = updated_nodes
return graph
@@ -85,20 +71,14 @@ async def on_graph_deactivate(graph: "GraphModel", user_id: str):
block_input_schema = cast(BlockSchema, node.block.input_schema)
node_credentials = None
if (
# Webhook-triggered blocks are only allowed to have 1 credentials input
(
creds_field_name := next(
iter(block_input_schema.get_credentials_fields()), None
for creds_field_name in block_input_schema.get_credentials_fields().keys():
if (creds_meta := node.input_default.get(creds_field_name)) and not (
node_credentials := await get_credentials(creds_meta["id"])
):
logger.warning(
f"Node #{node.id} input '{creds_field_name}' referenced "
f"non-existent credentials #{creds_meta['id']}"
)
)
and (creds_meta := node.input_default.get(creds_field_name))
and not (node_credentials := await get_credentials(creds_meta["id"]))
):
logger.error(
f"Node #{node.id} input '{creds_field_name}' referenced non-existent "
f"credentials #{creds_meta['id']}"
)
updated_node = await on_node_deactivate(
user_id, node, credentials=node_credentials
@@ -109,32 +89,6 @@ async def on_graph_deactivate(graph: "GraphModel", user_id: str):
return graph
async def on_node_activate(
user_id: str,
graph_id: str,
node: "Node",
*,
credentials: Optional["Credentials"] = None,
) -> "Node":
"""Hook to be called when the node is activated/created"""
if node.block.webhook_config:
new_webhook, feedback = await setup_webhook_for_block(
user_id=user_id,
trigger_block=node.block,
trigger_config=node.input_default,
for_graph_id=graph_id,
)
if new_webhook:
node = await set_node_webhook(node.id, new_webhook.id)
else:
logger.debug(
f"Node #{node.id} does not have everything for a webhook: {feedback}"
)
return node
async def on_node_deactivate(
user_id: str,
node: "NodeModel",

View File

@@ -4,7 +4,6 @@ from typing import TYPE_CHECKING, Optional, cast
from pydantic import JsonValue
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.providers import ProviderName
from backend.util.settings import Config
from . import get_webhook_manager, supports_webhooks
@@ -13,6 +12,7 @@ if TYPE_CHECKING:
from backend.data.block import Block, BlockSchema
from backend.data.integrations import Webhook
from backend.data.model import Credentials
from backend.integrations.providers import ProviderName
logger = logging.getLogger(__name__)
app_config = Config()
@@ -20,7 +20,7 @@ credentials_manager = IntegrationCredentialsManager()
# TODO: add test to assert this matches the actual API route
def webhook_ingress_url(provider_name: ProviderName, webhook_id: str) -> str:
def webhook_ingress_url(provider_name: "ProviderName", webhook_id: str) -> str:
return (
f"{app_config.platform_base_url}/api/integrations/{provider_name.value}"
f"/webhooks/{webhook_id}/ingress"
@@ -144,3 +144,69 @@ async def setup_webhook_for_block(
)
logger.debug(f"Acquired webhook: {webhook}")
return webhook, None
async def migrate_legacy_triggered_graphs():
from prisma.models import AgentGraph
from backend.data.graph import AGENT_GRAPH_INCLUDE, GraphModel, set_node_webhook
from backend.data.model import is_credentials_field_name
from backend.server.v2.library.db import create_preset
from backend.server.v2.library.model import LibraryAgentPresetCreatable
triggered_graphs = [
GraphModel.from_db(_graph)
for _graph in await AgentGraph.prisma().find_many(
where={
"isActive": True,
"Nodes": {"some": {"NOT": [{"webhookId": None}]}},
},
include=AGENT_GRAPH_INCLUDE,
)
]
n_migrated_webhooks = 0
for graph in triggered_graphs:
try:
if not (
(trigger_node := graph.webhook_input_node) and trigger_node.webhook_id
):
continue
# Use trigger node's inputs for the preset
preset_credentials = {
field_name: creds_meta
for field_name, creds_meta in trigger_node.input_default.items()
if is_credentials_field_name(field_name)
}
preset_inputs = {
field_name: value
for field_name, value in trigger_node.input_default.items()
if not is_credentials_field_name(field_name)
}
# Create a triggered preset for the graph
await create_preset(
graph.user_id,
LibraryAgentPresetCreatable(
graph_id=graph.id,
graph_version=graph.version,
inputs=preset_inputs,
credentials=preset_credentials,
name=graph.name,
description=graph.description,
webhook_id=trigger_node.webhook_id,
is_active=True,
),
)
# Detach webhook from the graph node
await set_node_webhook(trigger_node.id, None)
n_migrated_webhooks += 1
except Exception as e:
logger.error(f"Failed to migrate graph #{graph.id} trigger to preset: {e}")
continue
logger.info(f"Migrated {n_migrated_webhooks} node triggers to triggered presets")

View File

@@ -0,0 +1,287 @@
"""
Prometheus instrumentation for FastAPI services.
This module provides centralized metrics collection and instrumentation
for all FastAPI services in the AutoGPT platform.
"""
import logging
from typing import Optional
from fastapi import FastAPI
from prometheus_client import Counter, Gauge, Histogram, Info
from prometheus_fastapi_instrumentator import Instrumentator, metrics
logger = logging.getLogger(__name__)
# Custom business metrics with controlled cardinality
GRAPH_EXECUTIONS = Counter(
"autogpt_graph_executions_total",
"Total number of graph executions",
labelnames=[
"status"
], # Removed graph_id and user_id to prevent cardinality explosion
)
GRAPH_EXECUTIONS_BY_USER = Counter(
"autogpt_graph_executions_by_user_total",
"Total number of graph executions by user (sampled)",
labelnames=["status"], # Only status, user_id tracked separately when needed
)
BLOCK_EXECUTIONS = Counter(
"autogpt_block_executions_total",
"Total number of block executions",
labelnames=["block_type", "status"], # block_type is bounded
)
BLOCK_DURATION = Histogram(
"autogpt_block_duration_seconds",
"Duration of block executions in seconds",
labelnames=["block_type"],
buckets=[0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30, 60],
)
WEBSOCKET_CONNECTIONS = Gauge(
"autogpt_websocket_connections_total",
"Total number of active WebSocket connections",
# Removed user_id label - track total only to prevent cardinality explosion
)
SCHEDULER_JOBS = Gauge(
"autogpt_scheduler_jobs",
"Current number of scheduled jobs",
labelnames=["job_type", "status"],
)
DATABASE_QUERIES = Histogram(
"autogpt_database_query_duration_seconds",
"Duration of database queries in seconds",
labelnames=["operation", "table"],
buckets=[0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
)
RABBITMQ_MESSAGES = Counter(
"autogpt_rabbitmq_messages_total",
"Total number of RabbitMQ messages",
labelnames=["queue", "status"],
)
AUTHENTICATION_ATTEMPTS = Counter(
"autogpt_auth_attempts_total",
"Total number of authentication attempts",
labelnames=["method", "status"],
)
API_KEY_USAGE = Counter(
"autogpt_api_key_usage_total",
"API key usage by provider",
labelnames=["provider", "block_type", "status"],
)
# Function/operation level metrics with controlled cardinality
GRAPH_OPERATIONS = Counter(
"autogpt_graph_operations_total",
"Graph operations by type",
labelnames=["operation", "status"], # create, update, delete, execute, etc.
)
USER_OPERATIONS = Counter(
"autogpt_user_operations_total",
"User operations by type",
labelnames=["operation", "status"], # login, register, update_profile, etc.
)
RATE_LIMIT_HITS = Counter(
"autogpt_rate_limit_hits_total",
"Number of rate limit hits",
labelnames=["endpoint"], # Removed user_id to prevent cardinality explosion
)
SERVICE_INFO = Info(
"autogpt_service",
"Service information",
)
def instrument_fastapi(
app: FastAPI,
service_name: str,
expose_endpoint: bool = True,
endpoint: str = "/metrics",
include_in_schema: bool = False,
excluded_handlers: Optional[list] = None,
) -> Instrumentator:
"""
Instrument a FastAPI application with Prometheus metrics.
Args:
app: FastAPI application instance
service_name: Name of the service for metrics labeling
expose_endpoint: Whether to expose /metrics endpoint
endpoint: Path for metrics endpoint
include_in_schema: Whether to include metrics endpoint in OpenAPI schema
excluded_handlers: List of paths to exclude from metrics
Returns:
Configured Instrumentator instance
"""
# Set service info
try:
from importlib.metadata import version
service_version = version("autogpt-platform-backend")
except Exception:
service_version = "unknown"
SERVICE_INFO.info(
{
"service": service_name,
"version": service_version,
}
)
# Create instrumentator with default metrics
instrumentator = Instrumentator(
should_group_status_codes=True,
should_ignore_untemplated=True,
should_respect_env_var=True,
should_instrument_requests_inprogress=True,
excluded_handlers=excluded_handlers or ["/health", "/readiness"],
env_var_name="ENABLE_METRICS",
inprogress_name="autogpt_http_requests_inprogress",
inprogress_labels=True,
)
# Add default HTTP metrics
instrumentator.add(
metrics.default(
metric_namespace="autogpt",
metric_subsystem=service_name.replace("-", "_"),
)
)
# Add request size metrics
instrumentator.add(
metrics.request_size(
metric_namespace="autogpt",
metric_subsystem=service_name.replace("-", "_"),
)
)
# Add response size metrics
instrumentator.add(
metrics.response_size(
metric_namespace="autogpt",
metric_subsystem=service_name.replace("-", "_"),
)
)
# Add latency metrics with custom buckets for better granularity
instrumentator.add(
metrics.latency(
metric_namespace="autogpt",
metric_subsystem=service_name.replace("-", "_"),
buckets=[0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30, 60],
)
)
# Add combined size metrics (request + response content bytes)
instrumentator.add(
metrics.combined_size(
metric_namespace="autogpt",
metric_subsystem=service_name.replace("-", "_"),
)
)
# Instrument the app
instrumentator.instrument(app)
# Expose metrics endpoint if requested
if expose_endpoint:
instrumentator.expose(
app,
endpoint=endpoint,
include_in_schema=include_in_schema,
tags=["monitoring"] if include_in_schema else None,
)
logger.info(f"Metrics endpoint exposed at {endpoint} for {service_name}")
return instrumentator
def record_graph_execution(graph_id: str, status: str, user_id: str):
"""Record a graph execution event.
Args:
graph_id: Graph identifier (kept for future sampling/debugging)
status: Execution status (success/error/validation_error)
user_id: User identifier (kept for future sampling/debugging)
"""
# Track overall executions without high-cardinality labels
GRAPH_EXECUTIONS.labels(status=status).inc()
# Optionally track per-user executions (implement sampling if needed)
# For now, just track status to avoid cardinality explosion
GRAPH_EXECUTIONS_BY_USER.labels(status=status).inc()
def record_block_execution(block_type: str, status: str, duration: float):
"""Record a block execution event with duration."""
BLOCK_EXECUTIONS.labels(block_type=block_type, status=status).inc()
BLOCK_DURATION.labels(block_type=block_type).observe(duration)
def update_websocket_connections(user_id: str, delta: int):
"""Update the number of active WebSocket connections.
Args:
user_id: User identifier (kept for future sampling/debugging)
delta: Change in connection count (+1 for connect, -1 for disconnect)
"""
# Track total connections without user_id to prevent cardinality explosion
if delta > 0:
WEBSOCKET_CONNECTIONS.inc(delta)
else:
WEBSOCKET_CONNECTIONS.dec(abs(delta))
def record_database_query(operation: str, table: str, duration: float):
"""Record a database query with duration."""
DATABASE_QUERIES.labels(operation=operation, table=table).observe(duration)
def record_rabbitmq_message(queue: str, status: str):
"""Record a RabbitMQ message event."""
RABBITMQ_MESSAGES.labels(queue=queue, status=status).inc()
def record_authentication_attempt(method: str, status: str):
"""Record an authentication attempt."""
AUTHENTICATION_ATTEMPTS.labels(method=method, status=status).inc()
def record_api_key_usage(provider: str, block_type: str, status: str):
"""Record API key usage by provider and block."""
API_KEY_USAGE.labels(provider=provider, block_type=block_type, status=status).inc()
def record_rate_limit_hit(endpoint: str, user_id: str):
"""Record a rate limit hit.
Args:
endpoint: API endpoint that was rate limited
user_id: User identifier (kept for future sampling/debugging)
"""
RATE_LIMIT_HITS.labels(endpoint=endpoint).inc()
def record_graph_operation(operation: str, status: str):
"""Record a graph operation (create, update, delete, execute, etc.)."""
GRAPH_OPERATIONS.labels(operation=operation, status=status).inc()
def record_user_operation(operation: str, status: str):
"""Record a user operation (login, register, etc.)."""
USER_OPERATIONS.labels(operation=operation, status=status).inc()
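Putting the module to use takes two steps: instrument the app once at startup, then call the record_* helpers from handlers. A minimal sketch (service name and route are illustrative; note that collection also honors the ENABLE_METRICS environment variable via should_respect_env_var):

```python
# Illustrative wiring sketch; the service name and route are made up.
import time

from fastapi import FastAPI

from backend.monitoring.instrumentation import (
    instrument_fastapi,
    record_block_execution,
)

app = FastAPI()

# Registers the default HTTP metrics and exposes GET /metrics.
instrument_fastapi(app, service_name="example-service")


@app.post("/run-block")
async def run_block() -> dict[str, str]:
    start = time.time()
    try:
        ...  # run the block here
        record_block_execution("ExampleBlock", "success", time.time() - start)
        return {"status": "ok"}
    except Exception:
        record_block_execution("ExampleBlock", "error", time.time() - start)
        raise
```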

View File

@@ -63,7 +63,7 @@ except ImportError:
# Cost System
try:
from backend.data.cost import BlockCost, BlockCostType
from backend.data.block import BlockCost, BlockCostType
except ImportError:
from backend.data.block_cost_config import BlockCost, BlockCostType

View File

@@ -8,7 +8,7 @@ from typing import Callable, List, Optional, Type
from pydantic import SecretStr
from backend.data.cost import BlockCost, BlockCostType
from backend.data.block import BlockCost, BlockCostType
from backend.data.model import (
APIKeyCredentials,
Credentials,

View File

@@ -8,9 +8,8 @@ BLOCK_COSTS configuration used by the execution system.
import logging
from typing import List, Type
from backend.data.block import Block
from backend.data.block import Block, BlockCost
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.sdk.registry import AutoRegistry
logger = logging.getLogger(__name__)

View File

@@ -7,7 +7,7 @@ from typing import Any, Callable, List, Optional, Set, Type
from pydantic import BaseModel, SecretStr
from backend.data.cost import BlockCost
from backend.data.block import BlockCost
from backend.data.model import (
APIKeyCredentials,
Credentials,

View File

@@ -6,10 +6,10 @@ import logging
import threading
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Type
from pydantic import BaseModel, SecretStr
from pydantic import BaseModel
from backend.blocks.basic import Block
from backend.data.model import APIKeyCredentials, Credentials
from backend.data.model import Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks._base import BaseWebhooksManager
@@ -17,6 +17,8 @@ from backend.integrations.webhooks._base import BaseWebhooksManager
if TYPE_CHECKING:
from backend.sdk.provider import Provider
logger = logging.getLogger(__name__)
class SDKOAuthCredentials(BaseModel):
"""OAuth credentials configuration for SDK providers."""
@@ -102,21 +104,8 @@ class AutoRegistry:
"""Register an environment variable as an API key for a provider."""
with cls._lock:
cls._api_key_mappings[provider] = env_var_name
# Dynamically check if the env var exists and create credential
import os
api_key = os.getenv(env_var_name)
if api_key:
credential = APIKeyCredentials(
id=f"{provider}-default",
provider=provider,
api_key=SecretStr(api_key),
title=f"Default {provider} credentials",
)
# Check if credential already exists to avoid duplicates
if not any(c.id == credential.id for c in cls._default_credentials):
cls._default_credentials.append(credential)
# Note: The credential itself is created by ProviderBuilder.with_api_key()
# We only store the mapping here to avoid duplication
@classmethod
def get_all_credentials(cls) -> List[Credentials]:
@@ -210,3 +199,43 @@ class AutoRegistry:
webhooks.load_webhook_managers = patched_load
except Exception as e:
logging.warning(f"Failed to patch webhook managers: {e}")
# Patch credentials store to include SDK-registered credentials
try:
import sys
from typing import Any
# Get the module from sys.modules to respect mocking
if "backend.integrations.credentials_store" in sys.modules:
creds_store: Any = sys.modules["backend.integrations.credentials_store"]
else:
import backend.integrations.credentials_store
creds_store: Any = backend.integrations.credentials_store
if hasattr(creds_store, "IntegrationCredentialsStore"):
store_class = creds_store.IntegrationCredentialsStore
if hasattr(store_class, "get_all_creds"):
original_get_all_creds = store_class.get_all_creds
async def patched_get_all_creds(self, user_id: str):
# Get original credentials
original_creds = await original_get_all_creds(self, user_id)
# Add SDK-registered credentials
sdk_creds = cls.get_all_credentials()
# Combine credentials, avoiding duplicates by ID
existing_ids = {c.id for c in original_creds}
for cred in sdk_creds:
if cred.id not in existing_ids:
original_creds.append(cred)
return original_creds
store_class.get_all_creds = patched_get_all_creds
logger.info(
"Successfully patched IntegrationCredentialsStore.get_all_creds"
)
except Exception as e:
logging.warning(f"Failed to patch credentials store: {e}")

View File

@@ -88,6 +88,7 @@ async def test_send_graph_execution_result(
user_id="user-1",
graph_id="test_graph",
graph_version=1,
preset_id=None,
status=ExecutionStatus.COMPLETED,
started_at=datetime.now(tz=timezone.utc),
ended_at=datetime.now(tz=timezone.utc),
@@ -101,6 +102,8 @@ async def test_send_graph_execution_result(
"input_1": "some input value :)",
"input_2": "some *other* input value",
},
credential_inputs=None,
nodes_input_masks=None,
outputs={
"the_output": ["some output value"],
"other_output": ["sike there was another output"],

View File

@@ -1,5 +1,6 @@
from fastapi import FastAPI
from backend.monitoring.instrumentation import instrument_fastapi
from backend.server.middleware.security import SecurityHeadersMiddleware
from .routes.v1 import v1_router
@@ -13,3 +14,12 @@ external_app = FastAPI(
external_app.add_middleware(SecurityHeadersMiddleware)
external_app.include_router(v1_router, prefix="/v1")
# Add Prometheus instrumentation
instrument_fastapi(
external_app,
service_name="external-api",
expose_endpoint=True,
endpoint="/metrics",
include_in_schema=True,
)

View File

@@ -1,16 +1,14 @@
from fastapi import Depends, HTTPException, Request
from fastapi import HTTPException, Security
from fastapi.security import APIKeyHeader
from prisma.enums import APIKeyPermission
from backend.data.api_key import has_permission, validate_api_key
from backend.data.api_key import APIKeyInfo, has_permission, validate_api_key
api_key_header = APIKeyHeader(name="X-API-Key")
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
async def require_api_key(request: Request):
async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo:
"""Base middleware for API key authentication"""
api_key = await api_key_header(request)
if api_key is None:
raise HTTPException(status_code=401, detail="Missing API key")
@@ -19,18 +17,19 @@ async def require_api_key(request: Request):
if not api_key_obj:
raise HTTPException(status_code=401, detail="Invalid API key")
request.state.api_key = api_key_obj
return api_key_obj
def require_permission(permission: APIKeyPermission):
"""Dependency function for checking specific permissions"""
async def check_permission(api_key=Depends(require_api_key)):
async def check_permission(
api_key: APIKeyInfo = Security(require_api_key),
) -> APIKeyInfo:
if not has_permission(api_key, permission):
raise HTTPException(
status_code=403,
detail=f"API key missing required permission: {permission}",
detail=f"API key lacks the required permission '{permission}'",
)
return api_key

View File

@@ -2,14 +2,14 @@ import logging
from collections import defaultdict
from typing import Annotated, Any, Optional, Sequence
from fastapi import APIRouter, Body, Depends, HTTPException
from fastapi import APIRouter, Body, HTTPException, Security
from prisma.enums import AgentExecutionStatus, APIKeyPermission
from typing_extensions import TypedDict
import backend.data.block
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import APIKey
from backend.data.api_key import APIKeyInfo
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
@@ -47,9 +47,9 @@ class GraphExecutionResult(TypedDict):
@v1_router.get(
path="/blocks",
tags=["blocks"],
dependencies=[Depends(require_permission(APIKeyPermission.READ_BLOCK))],
dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))],
)
def get_graph_blocks() -> Sequence[dict[Any, Any]]:
async def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in backend.data.block.get_blocks().values()]
return [b.to_dict() for b in blocks if not b.disabled]
@@ -57,12 +57,12 @@ def get_graph_blocks() -> Sequence[dict[Any, Any]]:
@v1_router.post(
path="/blocks/{block_id}/execute",
tags=["blocks"],
dependencies=[Depends(require_permission(APIKeyPermission.EXECUTE_BLOCK))],
dependencies=[Security(require_permission(APIKeyPermission.EXECUTE_BLOCK))],
)
async def execute_graph_block(
block_id: str,
data: BlockInput,
api_key: APIKey = Depends(require_permission(APIKeyPermission.EXECUTE_BLOCK)),
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_BLOCK)),
) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
if not obj:
@@ -82,7 +82,7 @@ async def execute_graph(
graph_id: str,
graph_version: int,
node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
api_key: APIKey = Depends(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
) -> dict[str, Any]:
try:
graph_exec = await add_graph_execution(
@@ -104,7 +104,7 @@ async def execute_graph(
async def get_graph_execution_results(
graph_id: str,
graph_exec_id: str,
api_key: APIKey = Depends(require_permission(APIKeyPermission.READ_GRAPH)),
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.READ_GRAPH)),
) -> GraphExecutionResult:
graph = await graph_db.get_graph(graph_id, user_id=api_key.user_id)
if not graph:

View File

@@ -81,6 +81,10 @@ class SecurityHeadersMiddleware(BaseHTTPMiddleware):
response.headers["X-XSS-Protection"] = "1; mode=block"
response.headers["Referrer-Policy"] = "strict-origin-when-cross-origin"
# Add noindex header for shared execution pages
if "/public/shared" in request.url.path:
response.headers["X-Robots-Tag"] = "noindex, nofollow"
# Default: Disable caching for all endpoints
# Only allow caching for explicitly permitted paths
if not self.is_cacheable_path(request.url.path):
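A quick way to sanity-check the new header is a throwaway app with the middleware attached. This sketch assumes SecurityHeadersMiddleware needs no constructor arguments beyond the app, matching how it is registered elsewhere in this changeset:

```python
# Sketch: verify the noindex header on shared-execution paths.
from fastapi import FastAPI
from fastapi.testclient import TestClient

from backend.server.middleware.security import SecurityHeadersMiddleware

app = FastAPI()
app.add_middleware(SecurityHeadersMiddleware)


@app.get("/public/shared/{token}")
async def shared(token: str) -> dict[str, str]:
    return {"token": token}


client = TestClient(app)
resp = client.get("/public/shared/some-token")
assert resp.headers["X-Robots-Tag"] == "noindex, nofollow"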

View File

@@ -3,7 +3,7 @@ from typing import Any, Optional
import pydantic
from backend.data.api_key import APIKeyPermission, APIKeyWithoutHash
from backend.data.api_key import APIKeyInfo, APIKeyPermission
from backend.data.graph import Graph
from backend.util.timezone_name import TimeZoneName
@@ -34,10 +34,6 @@ class WSSubscribeGraphExecutionsRequest(pydantic.BaseModel):
graph_id: str
class ExecuteGraphResponse(pydantic.BaseModel):
graph_exec_id: str
class CreateGraph(pydantic.BaseModel):
graph: Graph
@@ -49,7 +45,7 @@ class CreateAPIKeyRequest(pydantic.BaseModel):
class CreateAPIKeyResponse(pydantic.BaseModel):
api_key: APIKeyWithoutHash
api_key: APIKeyInfo
plain_text_key: str

View File

@@ -12,11 +12,13 @@ from autogpt_libs.auth import add_auth_responses_to_openapi
from autogpt_libs.auth import verify_settings as verify_auth_settings
from fastapi.exceptions import RequestValidationError
from fastapi.routing import APIRoute
from prisma.errors import PrismaError
import backend.data.block
import backend.data.db
import backend.data.graph
import backend.data.user
import backend.integrations.webhooks.utils
import backend.server.routers.postmark.postmark
import backend.server.routers.v1
import backend.server.v2.admin.credit_admin_routes
@@ -35,10 +37,12 @@ import backend.util.settings
from backend.blocks.llm import LlmModel
from backend.data.model import Credentials
from backend.integrations.providers import ProviderName
from backend.monitoring.instrumentation import instrument_fastapi
from backend.server.external.api import external_app
from backend.server.middleware.security import SecurityHeadersMiddleware
from backend.util import json
from backend.util.cloud_storage import shutdown_cloud_storage_handler
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly
from backend.util.service import UnhealthyServiceError
@@ -76,6 +80,8 @@ async def lifespan_context(app: fastapi.FastAPI):
await backend.data.user.migrate_and_encrypt_user_integrations()
await backend.data.graph.fix_llm_provider_credentials()
await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)
await backend.integrations.webhooks.utils.migrate_legacy_triggered_graphs()
with launch_darkly_context():
yield
@@ -137,6 +143,16 @@ app.add_middleware(SecurityHeadersMiddleware)
# Add 401 responses to authenticated endpoints in OpenAPI spec
add_auth_responses_to_openapi(app)
# Add Prometheus instrumentation
instrument_fastapi(
app,
service_name="rest-api",
expose_endpoint=True,
endpoint="/metrics",
include_in_schema=settings.config.app_env
== backend.util.settings.AppEnvironment.LOCAL,
)
def handle_internal_http_error(status_code: int = 500, log_error: bool = True):
def handler(request: fastapi.Request, exc: Exception):
@@ -195,10 +211,14 @@ async def validation_error_handler(
)
app.add_exception_handler(PrismaError, handle_internal_http_error(500))
app.add_exception_handler(NotFoundError, handle_internal_http_error(404, False))
app.add_exception_handler(NotAuthorizedError, handle_internal_http_error(403, False))
app.add_exception_handler(RequestValidationError, validation_error_handler)
app.add_exception_handler(pydantic.ValidationError, validation_error_handler)
app.add_exception_handler(ValueError, handle_internal_http_error(400))
app.add_exception_handler(Exception, handle_internal_http_error(500))
app.include_router(backend.server.routers.v1.v1_router, tags=["v1"], prefix="/api")
app.include_router(
backend.server.v2.store.routes.router, tags=["v2"], prefix="/api/store"
@@ -363,6 +383,7 @@ class AgentServer(backend.util.service.AppProcess):
preset_id=preset_id,
user_id=user_id,
inputs=inputs or {},
credential_inputs={},
)
@staticmethod

View File

@@ -1,14 +1,17 @@
import asyncio
import base64
import logging
import time
import uuid
from collections import defaultdict
from datetime import datetime
from datetime import datetime, timezone
from typing import Annotated, Any, Sequence
import pydantic
import stripe
from autogpt_libs.auth import get_user_id, requires_user
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from autogpt_libs.utils.cache import cached
from fastapi import (
APIRouter,
Body,
@@ -27,30 +30,19 @@ from typing_extensions import Optional, TypedDict
import backend.server.integrations.router
import backend.server.routers.analytics
import backend.server.v2.library.db as library_db
from backend.data import api_key as api_key_db
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import (
APIKeyError,
APIKeyNotFoundError,
APIKeyPermissionError,
APIKeyWithoutHash,
generate_api_key,
get_api_key_by_id,
list_user_api_keys,
revoke_api_key,
suspend_api_key,
update_api_key_permissions,
)
from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks
from backend.data.credit import (
AutoTopUpConfig,
RefundRequest,
TransactionHistory,
get_auto_top_up,
get_block_costs,
get_user_credit_model,
set_auto_top_up,
)
from backend.data.execution import UserContext
from backend.data.model import CredentialsMetaInput
from backend.data.notifications import NotificationPreference, NotificationPreferenceDTO
from backend.data.onboarding import (
@@ -74,11 +66,15 @@ from backend.integrations.webhooks.graph_lifecycle_hooks import (
on_graph_activate,
on_graph_deactivate,
)
from backend.monitoring.instrumentation import (
record_block_execution,
record_graph_execution,
record_graph_operation,
)
from backend.server.model import (
CreateAPIKeyRequest,
CreateAPIKeyResponse,
CreateGraph,
ExecuteGraphResponse,
RequestTopUp,
SetGraphActiveVersion,
TimezoneResponse,
@@ -91,7 +87,6 @@ from backend.util.cloud_storage import get_cloud_storage_handler
from backend.util.exceptions import GraphValidationError, NotFoundError
from backend.util.settings import Settings
from backend.util.timezone_utils import (
convert_cron_to_utc,
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
@@ -109,6 +104,7 @@ def _create_file_size_error(size_bytes: int, max_size_mb: int) -> HTTPException:
settings = Settings()
logger = logging.getLogger(__name__)
_user_credit_model = get_user_credit_model()
# Define the API routes
@@ -177,7 +173,6 @@ async def get_user_timezone_route(
summary="Update user timezone",
tags=["auth"],
dependencies=[Security(requires_user)],
response_model=TimezoneResponse,
)
async def update_user_timezone_route(
user_id: Annotated[str, Security(get_user_id)], request: UpdateTimezoneRequest
@@ -268,18 +263,37 @@ async def is_onboarding_enabled():
########################################################
@cached()
def _get_cached_blocks() -> Sequence[dict[Any, Any]]:
"""
Get cached blocks with thundering herd protection.
Uses the @cached() decorator to prevent multiple concurrent requests
from all executing the expensive block loading operation.
"""
from backend.data.credit import get_block_cost
block_classes = get_blocks()
result = []
for block_class in block_classes.values():
block_instance = block_class()
if not block_instance.disabled:
# Get costs for this specific block class without creating another instance
costs = get_block_cost(block_instance)
result.append({**block_instance.to_dict(), "costs": costs})
return result
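The @cached() decorator is what supplies the thundering-herd protection mentioned in the docstring. As a rough illustration of the underlying pattern (not the autogpt_libs implementation), double-checked locking around the expensive loader looks like this:

```python
# Illustrative pattern only: first caller computes the value while
# concurrent callers wait on the lock instead of recomputing.
import threading
from typing import Any, Callable


def cached_once(loader: Callable[[], Any]) -> Callable[[], Any]:
    lock = threading.Lock()
    sentinel = object()
    value: Any = sentinel

    def wrapper() -> Any:
        nonlocal value
        if value is sentinel:  # fast path: no lock once populated
            with lock:
                if value is sentinel:  # re-check under the lock
                    value = loader()
        return value

    return wrapper
```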
@v1_router.get(
path="/blocks",
summary="List available blocks",
tags=["blocks"],
dependencies=[Security(requires_user)],
)
def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in get_blocks().values()]
costs = get_block_costs()
return [
{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks if not b.disabled
]
async def get_graph_blocks() -> Sequence[dict[Any, Any]]:
return _get_cached_blocks()
@v1_router.post(
@@ -288,15 +302,45 @@ def get_graph_blocks() -> Sequence[dict[Any, Any]]:
tags=["blocks"],
dependencies=[Security(requires_user)],
)
async def execute_graph_block(block_id: str, data: BlockInput) -> CompletedBlockOutput:
async def execute_graph_block(
block_id: str, data: BlockInput, user_id: Annotated[str, Security(get_user_id)]
) -> CompletedBlockOutput:
obj = get_block(block_id)
if not obj:
raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.")
output = defaultdict(list)
async for name, data in obj.execute(data):
output[name].append(data)
return output
# Get user context for block execution
user = await get_user_by_id(user_id)
if not user:
raise HTTPException(status_code=404, detail="User not found.")
user_context = UserContext(timezone=user.timezone)
start_time = time.time()
try:
output = defaultdict(list)
async for name, data in obj.execute(
data,
user_context=user_context,
user_id=user_id,
# Note: graph_exec_id and graph_id are not available for direct block execution
):
output[name].append(data)
# Record successful block execution with duration
duration = time.time() - start_time
block_type = obj.__class__.__name__
record_block_execution(
block_type=block_type, status="success", duration=duration
)
return output
except Exception:
# Record failed block execution
duration = time.time() - start_time
block_type = obj.__class__.__name__
record_block_execution(block_type=block_type, status="error", duration=duration)
raise
@v1_router.post(
@@ -589,7 +633,13 @@ class DeleteGraphResponse(TypedDict):
async def list_graphs(
user_id: Annotated[str, Security(get_user_id)],
) -> Sequence[graph_db.GraphMeta]:
return await graph_db.list_graphs(filter_by="active", user_id=user_id)
paginated_result = await graph_db.list_graphs_paginated(
user_id=user_id,
page=1,
page_size=250,
filter_by="active",
)
return paginated_result.graphs
@v1_router.get(
@@ -783,7 +833,7 @@ async def execute_graph(
],
graph_version: Optional[int] = None,
preset_id: Optional[str] = None,
) -> ExecuteGraphResponse:
) -> execution_db.GraphExecutionMeta:
current_balance = await _user_credit_model.get_credits(user_id)
if current_balance <= 0:
raise HTTPException(
@@ -792,7 +842,7 @@ async def execute_graph(
)
try:
graph_exec = await execution_utils.add_graph_execution(
result = await execution_utils.add_graph_execution(
graph_id=graph_id,
user_id=user_id,
inputs=inputs,
@@ -800,8 +850,16 @@ async def execute_graph(
graph_version=graph_version,
graph_credentials_inputs=credentials_inputs,
)
return ExecuteGraphResponse(graph_exec_id=graph_exec.id)
# Record successful graph execution
record_graph_execution(graph_id=graph_id, status="success", user_id=user_id)
record_graph_operation(operation="execute", status="success")
return result
except GraphValidationError as e:
# Record failed graph execution
record_graph_execution(
graph_id=graph_id, status="validation_error", user_id=user_id
)
record_graph_operation(operation="execute", status="validation_error")
# Return structured validation errors that the frontend can parse
raise HTTPException(
status_code=400,
@@ -812,6 +870,11 @@ async def execute_graph(
"node_errors": e.node_errors,
},
)
except Exception:
# Record any other failures
record_graph_execution(graph_id=graph_id, status="error", user_id=user_id)
record_graph_operation(operation="execute", status="error")
raise
@v1_router.post(
@@ -865,7 +928,12 @@ async def _stop_graph_run(
async def list_graphs_executions(
user_id: Annotated[str, Security(get_user_id)],
) -> list[execution_db.GraphExecutionMeta]:
return await execution_db.get_graph_executions(user_id=user_id)
paginated_result = await execution_db.get_graph_executions_paginated(
user_id=user_id,
page=1,
page_size=250,
)
return paginated_result.executions
@v1_router.get(
@@ -936,6 +1004,99 @@ async def delete_graph_execution(
)
class ShareRequest(pydantic.BaseModel):
"""Optional request body for share endpoint."""
pass # Empty body is fine
class ShareResponse(pydantic.BaseModel):
"""Response from share endpoints."""
share_url: str
share_token: str
@v1_router.post(
"/graphs/{graph_id}/executions/{graph_exec_id}/share",
dependencies=[Security(requires_user)],
)
async def enable_execution_sharing(
graph_id: Annotated[str, Path],
graph_exec_id: Annotated[str, Path],
user_id: Annotated[str, Security(get_user_id)],
_body: ShareRequest = Body(default=ShareRequest()),
) -> ShareResponse:
"""Enable sharing for a graph execution."""
# Verify the execution belongs to the user
execution = await execution_db.get_graph_execution(
user_id=user_id, execution_id=graph_exec_id
)
if not execution:
raise HTTPException(status_code=404, detail="Execution not found")
# Generate a unique share token
share_token = str(uuid.uuid4())
# Update the execution with share info
await execution_db.update_graph_execution_share_status(
execution_id=graph_exec_id,
user_id=user_id,
is_shared=True,
share_token=share_token,
shared_at=datetime.now(timezone.utc),
)
# Return the share URL
frontend_url = Settings().config.frontend_base_url or "http://localhost:3000"
share_url = f"{frontend_url}/share/{share_token}"
return ShareResponse(share_url=share_url, share_token=share_token)
@v1_router.delete(
"/graphs/{graph_id}/executions/{graph_exec_id}/share",
status_code=HTTP_204_NO_CONTENT,
dependencies=[Security(requires_user)],
)
async def disable_execution_sharing(
graph_id: Annotated[str, Path],
graph_exec_id: Annotated[str, Path],
user_id: Annotated[str, Security(get_user_id)],
) -> None:
"""Disable sharing for a graph execution."""
# Verify the execution belongs to the user
execution = await execution_db.get_graph_execution(
user_id=user_id, execution_id=graph_exec_id
)
if not execution:
raise HTTPException(status_code=404, detail="Execution not found")
# Remove share info
await execution_db.update_graph_execution_share_status(
execution_id=graph_exec_id,
user_id=user_id,
is_shared=False,
share_token=None,
shared_at=None,
)
@v1_router.get("/public/shared/{share_token}")
async def get_shared_execution(
share_token: Annotated[
str,
Path(regex=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"),
],
) -> execution_db.SharedExecutionResponse:
"""Get a shared graph execution by share token (no auth required)."""
execution = await execution_db.get_graph_execution_by_share_token(share_token)
if not execution:
raise HTTPException(status_code=404, detail="Shared execution not found")
return execution
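From a client's point of view the sharing flow is: POST to create a share link, distribute the returned share_url, GET the public endpoint without credentials, and DELETE to revoke. A hedged sketch, with base URL, IDs, and the auth header as placeholders:

```python
import httpx

# Placeholders: base URL, JWT, and IDs depend on the deployment.
BASE = "http://localhost:8006/api"
HEADERS = {"Authorization": "Bearer <jwt>"}

with httpx.Client(base_url=BASE, headers=HEADERS) as client:
    # Enable sharing; the response carries the public URL and token
    share = client.post(
        "/graphs/<graph_id>/executions/<graph_exec_id>/share"
    ).json()

    # The token alone grants read access, so no auth header is needed
    shared = httpx.get(f"{BASE}/public/shared/{share['share_token']}")

    # Revoke again; the endpoint returns 204 No Content
    client.delete("/graphs/<graph_id>/executions/<graph_exec_id>/share")
```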
########################################################
##################### Schedules ########################
########################################################
@@ -947,6 +1108,10 @@ class ScheduleCreationRequest(pydantic.BaseModel):
cron: str
inputs: dict[str, Any]
credentials: dict[str, CredentialsMetaInput] = pydantic.Field(default_factory=dict)
timezone: Optional[str] = pydantic.Field(
default=None,
description="User's timezone for scheduling (e.g., 'America/New_York'). If not provided, will use user's saved timezone or UTC.",
)
@v1_router.post(
@@ -971,26 +1136,22 @@ async def create_graph_execution_schedule(
detail=f"Graph #{graph_id} v{schedule_params.graph_version} not found.",
)
user = await get_user_by_id(user_id)
user_timezone = get_user_timezone_or_utc(user.timezone if user else None)
# Convert cron expression from user timezone to UTC
try:
utc_cron = convert_cron_to_utc(schedule_params.cron, user_timezone)
except ValueError as e:
raise HTTPException(
status_code=400,
detail=f"Invalid cron expression for timezone {user_timezone}: {e}",
)
# Use timezone from request if provided, otherwise fetch from user profile
if schedule_params.timezone:
user_timezone = schedule_params.timezone
else:
user = await get_user_by_id(user_id)
user_timezone = get_user_timezone_or_utc(user.timezone if user else None)
result = await get_scheduler_client().add_execution_schedule(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
name=schedule_params.name,
cron=utc_cron, # Send UTC cron to scheduler
cron=schedule_params.cron,
input_data=schedule_params.inputs,
input_credentials=schedule_params.credentials,
user_timezone=user_timezone,
)
# Convert the next_run_time back to user timezone for display
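With the new timezone field, clients can pin a schedule's timezone explicitly rather than relying on the saved profile value, and the cron expression now stays in that timezone since conversion happens scheduler-side. An illustrative request payload (field names per ScheduleCreationRequest above; values are made up):

```python
# Illustrative body for the schedule-creation route; cron is interpreted
# in the given timezone now that no client-side UTC conversion happens.
payload = {
    "name": "Nightly digest",
    "graph_version": 1,
    "cron": "0 9 * * *",             # 09:00 local time
    "inputs": {"topic": "AI news"},
    "credentials": {},
    "timezone": "America/New_York",  # optional; falls back to profile/UTC
}
```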
@@ -1012,24 +1173,11 @@ async def list_graph_execution_schedules(
user_id: Annotated[str, Security(get_user_id)],
graph_id: str = Path(),
) -> list[scheduler.GraphExecutionJobInfo]:
schedules = await get_scheduler_client().get_execution_schedules(
return await get_scheduler_client().get_execution_schedules(
user_id=user_id,
graph_id=graph_id,
)
# Get user timezone for conversion
user = await get_user_by_id(user_id)
user_timezone = get_user_timezone_or_utc(user.timezone if user else None)
# Convert next_run_time to user timezone for display
for schedule in schedules:
if schedule.next_run_time:
schedule.next_run_time = convert_utc_time_to_user_timezone(
schedule.next_run_time, user_timezone
)
return schedules
@v1_router.get(
path="/schedules",
@@ -1040,20 +1188,7 @@ async def list_graph_execution_schedules(
async def list_all_graphs_execution_schedules(
user_id: Annotated[str, Security(get_user_id)],
) -> list[scheduler.GraphExecutionJobInfo]:
schedules = await get_scheduler_client().get_execution_schedules(user_id=user_id)
# Get user timezone for conversion
user = await get_user_by_id(user_id)
user_timezone = get_user_timezone_or_utc(user.timezone if user else None)
# Convert UTC next_run_time to user timezone for display
for schedule in schedules:
if schedule.next_run_time:
schedule.next_run_time = convert_utc_time_to_user_timezone(
schedule.next_run_time, user_timezone
)
return schedules
return await get_scheduler_client().get_execution_schedules(user_id=user_id)
@v1_router.delete(
@@ -1084,7 +1219,6 @@ async def delete_graph_execution_schedule(
@v1_router.post(
"/api-keys",
summary="Create new API key",
response_model=CreateAPIKeyResponse,
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
@@ -1092,128 +1226,73 @@ async def create_api_key(
request: CreateAPIKeyRequest, user_id: Annotated[str, Security(get_user_id)]
) -> CreateAPIKeyResponse:
"""Create a new API key"""
try:
api_key, plain_text = await generate_api_key(
name=request.name,
user_id=user_id,
permissions=request.permissions,
description=request.description,
)
return CreateAPIKeyResponse(api_key=api_key, plain_text_key=plain_text)
except APIKeyError as e:
logger.error(
"Could not create API key for user %s: %s. Review input and permissions.",
user_id,
e,
)
raise HTTPException(
status_code=400,
detail={"message": str(e), "hint": "Verify request payload and try again."},
)
api_key_info, plain_text_key = await api_key_db.create_api_key(
name=request.name,
user_id=user_id,
permissions=request.permissions,
description=request.description,
)
return CreateAPIKeyResponse(api_key=api_key_info, plain_text_key=plain_text_key)
@v1_router.get(
"/api-keys",
summary="List user API keys",
response_model=list[APIKeyWithoutHash] | dict[str, str],
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
async def get_api_keys(
user_id: Annotated[str, Security(get_user_id)],
) -> list[APIKeyWithoutHash]:
) -> list[api_key_db.APIKeyInfo]:
"""List all API keys for the user"""
try:
return await list_user_api_keys(user_id)
except APIKeyError as e:
logger.error("Failed to list API keys for user %s: %s", user_id, e)
raise HTTPException(
status_code=400,
detail={"message": str(e), "hint": "Check API key service availability."},
)
return await api_key_db.list_user_api_keys(user_id)
@v1_router.get(
"/api-keys/{key_id}",
summary="Get specific API key",
response_model=APIKeyWithoutHash,
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
async def get_api_key(
key_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> APIKeyWithoutHash:
) -> api_key_db.APIKeyInfo:
"""Get a specific API key"""
try:
api_key = await get_api_key_by_id(key_id, user_id)
if not api_key:
raise HTTPException(status_code=404, detail="API key not found")
return api_key
except APIKeyError as e:
logger.error("Error retrieving API key %s for user %s: %s", key_id, user_id, e)
raise HTTPException(
status_code=400,
detail={"message": str(e), "hint": "Ensure the key ID is correct."},
)
api_key = await api_key_db.get_api_key_by_id(key_id, user_id)
if not api_key:
raise HTTPException(status_code=404, detail="API key not found")
return api_key
@v1_router.delete(
"/api-keys/{key_id}",
summary="Revoke API key",
response_model=APIKeyWithoutHash,
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
async def delete_api_key(
key_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> Optional[APIKeyWithoutHash]:
) -> api_key_db.APIKeyInfo:
"""Revoke an API key"""
try:
return await revoke_api_key(key_id, user_id)
except APIKeyNotFoundError:
raise HTTPException(status_code=404, detail="API key not found")
except APIKeyPermissionError:
raise HTTPException(status_code=403, detail="Permission denied")
except APIKeyError as e:
logger.error("Failed to revoke API key %s for user %s: %s", key_id, user_id, e)
raise HTTPException(
status_code=400,
detail={
"message": str(e),
"hint": "Verify permissions or try again later.",
},
)
return await api_key_db.revoke_api_key(key_id, user_id)
@v1_router.post(
"/api-keys/{key_id}/suspend",
summary="Suspend API key",
response_model=APIKeyWithoutHash,
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
async def suspend_key(
key_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> Optional[APIKeyWithoutHash]:
) -> api_key_db.APIKeyInfo:
"""Suspend an API key"""
try:
return await suspend_api_key(key_id, user_id)
except APIKeyNotFoundError:
raise HTTPException(status_code=404, detail="API key not found")
except APIKeyPermissionError:
raise HTTPException(status_code=403, detail="Permission denied")
except APIKeyError as e:
logger.error("Failed to suspend API key %s for user %s: %s", key_id, user_id, e)
raise HTTPException(
status_code=400,
detail={"message": str(e), "hint": "Check user permissions and retry."},
)
return await api_key_db.suspend_api_key(key_id, user_id)
@v1_router.put(
"/api-keys/{key_id}/permissions",
summary="Update key permissions",
response_model=APIKeyWithoutHash,
tags=["api-keys"],
dependencies=[Security(requires_user)],
)
@@ -1221,22 +1300,8 @@ async def update_permissions(
key_id: str,
request: UpdatePermissionsRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> Optional[APIKeyWithoutHash]:
) -> api_key_db.APIKeyInfo:
"""Update API key permissions"""
try:
return await update_api_key_permissions(key_id, user_id, request.permissions)
except APIKeyNotFoundError:
raise HTTPException(status_code=404, detail="API key not found")
except APIKeyPermissionError:
raise HTTPException(status_code=403, detail="Permission denied")
except APIKeyError as e:
logger.error(
"Failed to update permissions for API key %s of user %s: %s",
key_id,
user_id,
e,
)
raise HTTPException(
status_code=400,
detail={"message": str(e), "hint": "Ensure permissions list is valid."},
)
return await api_key_db.update_api_key_permissions(
key_id, user_id, request.permissions
)
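The try/except boilerplate disappears because the domain exceptions now flow to the global handlers registered during app setup (NotFoundError → 404, NotAuthorizedError → 403, ValueError → 400), so each route body reduces to the happy path. An illustrative client call against the create endpoint, with URL, JWT, and permission values as placeholders:

```python
import httpx

# Placeholders: base URL, JWT, and permission names are illustrative.
resp = httpx.post(
    "http://localhost:8006/api/api-keys",
    headers={"Authorization": "Bearer <jwt>"},
    json={
        "name": "CI key",
        "permissions": ["EXECUTE_GRAPH"],
        "description": "Key used by the CI pipeline",
    },
)
created = resp.json()
plain_key = created["plain_text_key"]  # returned only once; store it now
key_meta = created["api_key"]          # APIKeyInfo: metadata without the hash
```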

View File

@@ -1,4 +1,5 @@
import json
from datetime import datetime
from io import BytesIO
from unittest.mock import AsyncMock, Mock, patch
@@ -109,8 +110,8 @@ def test_get_graph_blocks(
# Mock block costs
mocker.patch(
"backend.server.routers.v1.get_block_costs",
return_value={"test-block": [{"cost": 10, "type": "credit"}]},
"backend.data.credit.get_block_cost",
return_value=[{"cost": 10, "type": "credit"}],
)
response = client.get("/blocks")
@@ -146,6 +147,15 @@ def test_execute_graph_block(
return_value=mock_block,
)
# Mock user for user_context
mock_user = Mock()
mock_user.timezone = "UTC"
mocker.patch(
"backend.server.routers.v1.get_user_by_id",
return_value=mock_user,
)
request_data = {
"input_name": "test_input",
"input_value": "test_value",
@@ -265,11 +275,12 @@ def test_get_graphs(
name="Test Graph",
description="A test graph",
user_id=test_user_id,
created_at=datetime(2025, 9, 4, 13, 37),
)
mocker.patch(
"backend.server.routers.v1.graph_db.list_graphs",
return_value=[mock_graph],
"backend.data.graph.list_graphs_paginated",
return_value=Mock(graphs=[mock_graph]),
)
response = client.get("/graphs")
@@ -299,6 +310,7 @@ def test_get_graph(
name="Test Graph",
description="A test graph",
user_id=test_user_id,
created_at=datetime(2025, 9, 4, 13, 37),
)
mocker.patch(
@@ -348,6 +360,7 @@ def test_delete_graph(
name="Test Graph",
description="A test graph",
user_id=test_user_id,
created_at=datetime(2025, 9, 4, 13, 37),
)
mocker.patch(

View File

@@ -3,8 +3,9 @@ API Key authentication utilities for FastAPI applications.
"""
import inspect
import logging
import secrets
from typing import Any, Callable, Optional
from typing import Any, Awaitable, Callable, Optional
from fastapi import HTTPException, Request
from fastapi.security import APIKeyHeader
@@ -12,6 +13,8 @@ from starlette.status import HTTP_401_UNAUTHORIZED
from backend.util.exceptions import MissingConfigError
logger = logging.getLogger(__name__)
class APIKeyAuthenticator(APIKeyHeader):
"""
@@ -51,7 +54,8 @@ class APIKeyAuthenticator(APIKeyHeader):
header_name (str): The name of the header containing the API key
expected_token (Optional[str]): The expected API key value for simple token matching
validator (Optional[Callable]): Custom validation function that takes an API key
string and returns a boolean or object. Can be async.
string and returns a truthy value if and only if the passed string is a
valid API key. Can be async.
status_if_missing (int): HTTP status code to use for validation errors
message_if_invalid (str): Error message to return when validation fails
"""
@@ -60,7 +64,9 @@ class APIKeyAuthenticator(APIKeyHeader):
self,
header_name: str,
expected_token: Optional[str] = None,
validator: Optional[Callable[[str], bool]] = None,
validator: Optional[
Callable[[str], Any] | Callable[[str], Awaitable[Any]]
] = None,
status_if_missing: int = HTTP_401_UNAUTHORIZED,
message_if_invalid: str = "Invalid API key",
):
@@ -75,7 +81,7 @@ class APIKeyAuthenticator(APIKeyHeader):
self.message_if_invalid = message_if_invalid
async def __call__(self, request: Request) -> Any:
api_key = await super()(request)
api_key = await super().__call__(request)
if api_key is None:
raise HTTPException(
status_code=self.status_if_missing, detail="No API key in request"
@@ -106,4 +112,9 @@ class APIKeyAuthenticator(APIKeyHeader):
f"{self.__class__.__name__}.expected_token is not set; "
"either specify it or provide a custom validator"
)
return secrets.compare_digest(api_key, self.expected_token)
try:
return secrets.compare_digest(api_key, self.expected_token)
except TypeError as e:
# If value is not an ASCII string, compare_digest raises a TypeError
logger.warning(f"{self.model.name} API key check failed: {e}")
return False
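Because APIKeyAuthenticator subclasses APIKeyHeader, an instance drops straight into FastAPI's Security() dependency system. A minimal wiring sketch (header name, token, and route are made up):

```python
from fastapi import FastAPI, Security

from backend.server.utils.api_key_auth import APIKeyAuthenticator

# Header name and token below are made-up values for illustration.
internal_auth = APIKeyAuthenticator(
    header_name="X-Internal-Key", expected_token="s3cret"
)

app = FastAPI()


@app.get("/internal/status")
async def internal_status(authorized: bool = Security(internal_auth)) -> dict:
    # With the default validator this resolves to True on a matching key;
    # a custom validator's return value (e.g. a user record) lands here instead.
    return {"ok": True}
```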

View File

@@ -0,0 +1,537 @@
"""
Unit tests for APIKeyAuthenticator class.
"""
from unittest.mock import Mock, patch
import pytest
from fastapi import HTTPException, Request
from starlette.status import HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN
from backend.server.utils.api_key_auth import APIKeyAuthenticator
from backend.util.exceptions import MissingConfigError
@pytest.fixture
def mock_request():
"""Create a mock request object."""
request = Mock(spec=Request)
request.state = Mock()
request.headers = {}
return request
@pytest.fixture
def api_key_auth():
"""Create a basic APIKeyAuthenticator instance."""
return APIKeyAuthenticator(
header_name="X-API-Key", expected_token="test-secret-token"
)
@pytest.fixture
def api_key_auth_custom_validator():
"""Create APIKeyAuthenticator with custom validator."""
def custom_validator(api_key: str) -> bool:
return api_key == "custom-valid-key"
return APIKeyAuthenticator(header_name="X-API-Key", validator=custom_validator)
@pytest.fixture
def api_key_auth_async_validator():
"""Create APIKeyAuthenticator with async custom validator."""
async def async_validator(api_key: str) -> bool:
return api_key == "async-valid-key"
return APIKeyAuthenticator(header_name="X-API-Key", validator=async_validator)
@pytest.fixture
def api_key_auth_object_validator():
"""Create APIKeyAuthenticator that returns objects from validator."""
async def object_validator(api_key: str):
if api_key == "user-key":
return {"user_id": "123", "permissions": ["read", "write"]}
return None
return APIKeyAuthenticator(header_name="X-API-Key", validator=object_validator)
# ========== Basic Initialization Tests ========== #
def test_init_with_expected_token():
"""Test initialization with expected token."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="test-token")
assert auth.model.name == "X-API-Key"
assert auth.expected_token == "test-token"
assert auth.custom_validator is None
assert auth.status_if_missing == HTTP_401_UNAUTHORIZED
assert auth.message_if_invalid == "Invalid API key"
def test_init_with_custom_validator():
"""Test initialization with custom validator."""
def validator(key: str) -> bool:
return True
auth = APIKeyAuthenticator(header_name="Authorization", validator=validator)
assert auth.model.name == "Authorization"
assert auth.expected_token is None
assert auth.custom_validator == validator
assert auth.status_if_missing == HTTP_401_UNAUTHORIZED
assert auth.message_if_invalid == "Invalid API key"
def test_init_with_custom_parameters():
"""Test initialization with custom status and message."""
auth = APIKeyAuthenticator(
header_name="X-Custom-Key",
expected_token="token",
status_if_missing=HTTP_403_FORBIDDEN,
message_if_invalid="Access denied",
)
assert auth.model.name == "X-Custom-Key"
assert auth.status_if_missing == HTTP_403_FORBIDDEN
assert auth.message_if_invalid == "Access denied"
def test_scheme_name_generation():
"""Test that scheme_name is generated correctly."""
auth = APIKeyAuthenticator(header_name="X-Custom-Header", expected_token="token")
assert auth.scheme_name == "APIKeyAuthenticator-X-Custom-Header"
# ========== Authentication Flow Tests ========== #
@pytest.mark.asyncio
async def test_api_key_missing(api_key_auth, mock_request):
"""Test behavior when API key is missing from request."""
# Mock the parent class method to return None (no API key)
with pytest.raises(HTTPException) as exc_info:
await api_key_auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "No API key in request"
@pytest.mark.asyncio
async def test_api_key_valid(api_key_auth, mock_request):
"""Test behavior with valid API key."""
# Mock the parent class to return the API key
with patch.object(
api_key_auth.__class__.__bases__[0],
"__call__",
return_value="test-secret-token",
):
result = await api_key_auth(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_api_key_invalid(api_key_auth, mock_request):
"""Test behavior with invalid API key."""
# Mock the parent class to return an invalid API key
with patch.object(
api_key_auth.__class__.__bases__[0], "__call__", return_value="invalid-token"
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
# ========== Custom Validator Tests ========== #
@pytest.mark.asyncio
async def test_custom_status_and_message(mock_request):
"""Test custom status code and message."""
auth = APIKeyAuthenticator(
header_name="X-API-Key",
expected_token="valid-token",
status_if_missing=HTTP_403_FORBIDDEN,
message_if_invalid="Access forbidden",
)
# Test missing key
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_403_FORBIDDEN
assert exc_info.value.detail == "No API key in request"
# Test invalid key
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value="invalid-token"
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_403_FORBIDDEN
assert exc_info.value.detail == "Access forbidden"
@pytest.mark.asyncio
async def test_custom_sync_validator(api_key_auth_custom_validator, mock_request):
"""Test with custom synchronous validator."""
# Mock the parent class to return the API key
with patch.object(
api_key_auth_custom_validator.__class__.__bases__[0],
"__call__",
return_value="custom-valid-key",
):
result = await api_key_auth_custom_validator(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_custom_sync_validator_invalid(
api_key_auth_custom_validator, mock_request
):
"""Test custom synchronous validator with invalid key."""
with patch.object(
api_key_auth_custom_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_custom_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_custom_async_validator(api_key_auth_async_validator, mock_request):
"""Test with custom async validator."""
with patch.object(
api_key_auth_async_validator.__class__.__bases__[0],
"__call__",
return_value="async-valid-key",
):
result = await api_key_auth_async_validator(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_custom_async_validator_invalid(
api_key_auth_async_validator, mock_request
):
"""Test custom async validator with invalid key."""
with patch.object(
api_key_auth_async_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_async_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_validator_returns_object(api_key_auth_object_validator, mock_request):
"""Test validator that returns an object instead of boolean."""
with patch.object(
api_key_auth_object_validator.__class__.__bases__[0],
"__call__",
return_value="user-key",
):
result = await api_key_auth_object_validator(mock_request)
expected_result = {"user_id": "123", "permissions": ["read", "write"]}
assert result == expected_result
# Verify the object is stored in request state
assert mock_request.state.api_key == expected_result
@pytest.mark.asyncio
async def test_validator_returns_none(api_key_auth_object_validator, mock_request):
"""Test validator that returns None (falsy)."""
with patch.object(
api_key_auth_object_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_object_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_validator_database_lookup_simulation(mock_request):
"""Test simulation of database lookup validator."""
# Simulate database records
valid_api_keys = {
"key123": {"user_id": "user1", "active": True},
"key456": {"user_id": "user2", "active": False},
}
async def db_validator(api_key: str):
record = valid_api_keys.get(api_key)
return record if record and record["active"] else None
auth = APIKeyAuthenticator(header_name="X-API-Key", validator=db_validator)
# Test valid active key
with patch.object(auth.__class__.__bases__[0], "__call__", return_value="key123"):
result = await auth(mock_request)
assert result == {"user_id": "user1", "active": True}
assert mock_request.state.api_key == {"user_id": "user1", "active": True}
# Test inactive key
mock_request.state = Mock() # Reset state
with patch.object(auth.__class__.__bases__[0], "__call__", return_value="key456"):
with pytest.raises(HTTPException):
await auth(mock_request)
# Test non-existent key
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value="nonexistent"
):
with pytest.raises(HTTPException):
await auth(mock_request)
# ========== Default Validator Tests ========== #
@pytest.mark.asyncio
async def test_default_validator_key_valid(api_key_auth):
"""Test default validator with valid token."""
result = await api_key_auth.default_validator("test-secret-token")
assert result is True
@pytest.mark.asyncio
async def test_default_validator_key_invalid(api_key_auth):
"""Test default validator with invalid token."""
result = await api_key_auth.default_validator("wrong-token")
assert result is False
@pytest.mark.asyncio
async def test_default_validator_missing_expected_token():
"""Test default validator when expected_token is not set."""
auth = APIKeyAuthenticator(header_name="X-API-Key")
with pytest.raises(MissingConfigError) as exc_info:
await auth.default_validator("any-token")
assert "expected_token is not set" in str(exc_info.value)
assert "either specify it or provide a custom validator" in str(exc_info.value)
@pytest.mark.asyncio
async def test_default_validator_uses_constant_time_comparison(api_key_auth):
"""
Test that default validator uses secrets.compare_digest for timing attack protection
"""
with patch("secrets.compare_digest") as mock_compare:
mock_compare.return_value = True
await api_key_auth.default_validator("test-token")
mock_compare.assert_called_once_with("test-token", "test-secret-token")
@pytest.mark.asyncio
async def test_api_key_empty(mock_request):
"""Test behavior with empty string API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
with patch.object(auth.__class__.__bases__[0], "__call__", return_value=""):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_whitespace_only(mock_request):
"""Test behavior with whitespace-only API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=" \t\n "
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_very_long(mock_request):
"""Test behavior with extremely long API key (potential DoS protection)."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Create a very long API key (10MB)
long_api_key = "a" * (10 * 1024 * 1024)
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=long_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_null_bytes(mock_request):
"""Test behavior with API key containing null bytes."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
api_key_with_null = "valid\x00token"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key_with_null
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_control_characters(mock_request):
"""Test behavior with API key containing control characters."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# API key with various control characters
api_key_with_control = "valid\r\n\t\x1b[31mtoken"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key_with_control
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_unicode_characters(mock_request):
"""Test behavior with Unicode characters in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# API key with Unicode characters
unicode_api_key = "validтокен🔑"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=unicode_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_unicode_characters_normalization_attack(mock_request):
"""Test that Unicode normalization doesn't bypass validation."""
# Create auth with composed Unicode character
auth = APIKeyAuthenticator(
header_name="X-API-Key", expected_token="café" # é is composed
)
# Try with decomposed version (c + a + f + e + ´)
decomposed_key = "cafe\u0301" # é as combining character
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=decomposed_key
):
# Should fail because secrets.compare_digest doesn't normalize
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_binary_data(mock_request):
"""Test behavior with binary data in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Binary data that might cause encoding issues
binary_api_key = bytes([0xFF, 0xFE, 0xFD, 0xFC, 0x80, 0x81]).decode(
"latin1", errors="ignore"
)
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=binary_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_regex_dos_attack_pattern(mock_request):
"""Test behavior with API key of repeated characters (pattern attack)."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Pattern that might cause regex DoS in poorly implemented validators
repeated_key = "a" * 1000 + "b" * 1000 + "c" * 1000
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=repeated_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_keys_with_newline_variations(mock_request):
"""Test different newline characters in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
newline_variations = [
"valid\ntoken", # Unix newline
"valid\r\ntoken", # Windows newline
"valid\rtoken", # Mac newline
"valid\x85token", # NEL (Next Line)
"valid\x0Btoken", # Vertical Tab
"valid\x0Ctoken", # Form Feed
]
for api_key in newline_variations:
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
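All of the negative cases above reduce to one invariant: the key is compared verbatim, in constant time, with no trimming, decoding, or normalization. A minimal sketch of that validation pattern (illustrative only; the real `APIKeyAuthenticator` lives in the auth library and its internals are not shown in this diff):

import secrets
from fastapi import HTTPException
from starlette.status import HTTP_401_UNAUTHORIZED

def validate_api_key(provided: str | None, expected: str) -> None:
    # Encode to bytes: compare_digest only accepts ASCII str, and keys may not be.
    # compare_digest is constant-time in the content (though not the length) of its inputs.
    if provided is None or not secrets.compare_digest(
        provided.encode(), expected.encode()
    ):
        raise HTTPException(status_code=HTTP_401_UNAUTHORIZED, detail="Invalid API key")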


@@ -147,10 +147,8 @@ class AutoModManager:
return None
# Get completed executions and collect outputs
-completed_executions = await db_client.get_node_executions( # type: ignore
-graph_exec_id=graph_exec_id,
-statuses=[ExecutionStatus.COMPLETED],
-include_exec_data=True,
+completed_executions = await db_client.get_node_executions(
+graph_exec_id, statuses=[ExecutionStatus.COMPLETED], include_exec_data=True
)
if not completed_executions:
@@ -220,7 +218,7 @@ class AutoModManager:
):
"""Update node execution statuses for frontend display when moderation fails"""
# Import here to avoid circular imports
-from backend.util.clients import get_async_execution_event_bus
+from backend.executor.manager import send_async_execution_update
if moderation_type == "input":
# For input moderation, mark queued/running/incomplete nodes as failed
@@ -234,10 +232,8 @@ class AutoModManager:
target_statuses = [ExecutionStatus.COMPLETED]
# Get the executions that need to be updated
-executions_to_update = await db_client.get_node_executions( # type: ignore
-graph_exec_id=graph_exec_id,
-statuses=target_statuses,
-include_exec_data=True,
+executions_to_update = await db_client.get_node_executions(
+graph_exec_id, statuses=target_statuses, include_exec_data=True
)
if not executions_to_update:
@@ -280,12 +276,10 @@ class AutoModManager:
updated_execs = await asyncio.gather(*exec_updates)
# Send all websocket updates in parallel
-event_bus = get_async_execution_event_bus()
await asyncio.gather(
*[
-event_bus.publish(updated_exec)
+send_async_execution_update(updated_exec)
for updated_exec in updated_execs
if updated_exec is not None
]
)


@@ -1,18 +1,16 @@
import functools
import logging
from datetime import datetime, timedelta, timezone
import prisma
+from autogpt_libs.utils.cache import cached
import backend.data.block
from backend.blocks import load_all_blocks
from backend.blocks.llm import LlmModel
-from backend.data.block import Block, BlockCategory, BlockSchema
-from backend.data.credit import get_block_costs
+from backend.data.block import Block, BlockCategory, BlockInfo, BlockSchema
from backend.integrations.providers import ProviderName
from backend.server.v2.builder.model import (
BlockCategoryResponse,
-BlockData,
BlockResponse,
BlockType,
CountResponse,
@@ -25,7 +23,7 @@ from backend.util.models import Pagination
logger = logging.getLogger(__name__)
llm_models = [name.name.lower().replace("_", " ") for name in LlmModel]
_static_counts_cache: dict | None = None
-_suggested_blocks: list[BlockData] | None = None
+_suggested_blocks: list[BlockInfo] | None = None
def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]:
@@ -53,7 +51,7 @@ def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse
# Append if the category has fewer than the specified number of blocks
if len(categories[category].blocks) < category_blocks:
-categories[category].blocks.append(block.to_dict())
+categories[category].blocks.append(block.get_info())
# Sort categories by name
return sorted(categories.values(), key=lambda x: x.name)
@@ -109,10 +107,8 @@ def get_blocks(
take -= 1
blocks.append(block)
-costs = get_block_costs()
return BlockResponse(
-blocks=[{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks],
+blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
@@ -174,11 +170,9 @@ def search_blocks(
take -= 1
blocks.append(block)
-costs = get_block_costs()
return SearchBlocksResponse(
blocks=BlockResponse(
-blocks=[{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks],
+blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
@@ -302,7 +296,7 @@ def _matches_llm_model(schema_cls: type[BlockSchema], query: str) -> bool:
return False
-@functools.cache
+@cached()
def _get_all_providers() -> dict[ProviderName, Provider]:
providers: dict[ProviderName, Provider] = {}
@@ -323,7 +317,7 @@ def _get_all_providers() -> dict[ProviderName, Provider]:
return providers
-async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
+async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]:
global _suggested_blocks
if _suggested_blocks is not None and len(_suggested_blocks) >= count:
@@ -351,7 +345,7 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
# Get the top blocks based on execution count
# But ignore Input and Output blocks
-blocks: list[tuple[BlockData, int]] = []
+blocks: list[tuple[BlockInfo, int]] = []
for block_type in load_all_blocks().values():
block: Block[BlockSchema, BlockSchema] = block_type()
@@ -366,7 +360,7 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
(row["execution_count"] for row in results if row["block_id"] == block.id),
0,
)
-blocks.append((block.to_dict(), execution_count))
+blocks.append((block.get_info(), execution_count))
# Sort blocks by execution count
blocks.sort(key=lambda x: x[1], reverse=True)


@@ -1,9 +1,10 @@
-from typing import Any, Literal
+from typing import Literal
from pydantic import BaseModel
import backend.server.v2.library.model as library_model
import backend.server.v2.store.model as store_model
+from backend.data.block import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
@@ -16,29 +17,27 @@ FilterType = Literal[
BlockType = Literal["all", "input", "action", "output"]
-BlockData = dict[str, Any]
# Suggestions
class SuggestionsResponse(BaseModel):
otto_suggestions: list[str]
recent_searches: list[str]
providers: list[ProviderName]
-top_blocks: list[BlockData]
+top_blocks: list[BlockInfo]
# All blocks
class BlockCategoryResponse(BaseModel):
name: str
total_blocks: int
-blocks: list[BlockData]
+blocks: list[BlockInfo]
model_config = {"use_enum_values": False} # <== use enum names like "AI"
# Input/Action/Output and see all for block categories
class BlockResponse(BaseModel):
-blocks: list[BlockData]
+blocks: list[BlockInfo]
pagination: Pagination
@@ -71,7 +70,7 @@ class SearchBlocksResponse(BaseModel):
class SearchResponse(BaseModel):
-items: list[BlockData | library_model.LibraryAgent | store_model.StoreAgent]
+items: list[BlockInfo | library_model.LibraryAgent | store_model.StoreAgent]
total_items: dict[FilterType, int]
page: int
more_pages: bool


@@ -16,7 +16,7 @@ import backend.server.v2.store.media as store_media
from backend.data.block import BlockInput
from backend.data.db import transaction
from backend.data.execution import get_graph_execution
-from backend.data.includes import library_agent_include
+from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import on_graph_activate
@@ -144,6 +144,92 @@ async def list_library_agents(
raise store_exceptions.DatabaseError("Failed to fetch library agents") from e
async def list_favorite_library_agents(
user_id: str,
page: int = 1,
page_size: int = 50,
) -> library_model.LibraryAgentResponse:
"""
Retrieves a paginated list of favorite LibraryAgent records for a given user.
Args:
user_id: The ID of the user whose favorite LibraryAgents we want to retrieve.
page: Current page (1-indexed).
page_size: Number of items per page.
Returns:
A LibraryAgentResponse containing the list of favorite agents and pagination details.
Raises:
DatabaseError: If there is an issue fetching from Prisma.
"""
logger.debug(
f"Fetching favorite library agents for user_id={user_id}, "
f"page={page}, page_size={page_size}"
)
if page < 1 or page_size < 1:
logger.warning(f"Invalid pagination: page={page}, page_size={page_size}")
raise store_exceptions.DatabaseError("Invalid pagination input")
where_clause: prisma.types.LibraryAgentWhereInput = {
"userId": user_id,
"isDeleted": False,
"isArchived": False,
"isFavorite": True, # Only fetch favorites
}
# Sort favorites by updated date descending
order_by: prisma.types.LibraryAgentOrderByInput = {"updatedAt": "desc"}
try:
library_agents = await prisma.models.LibraryAgent.prisma().find_many(
where=where_clause,
include=library_agent_include(user_id),
order=order_by,
skip=(page - 1) * page_size,
take=page_size,
)
agent_count = await prisma.models.LibraryAgent.prisma().count(
where=where_clause
)
logger.debug(
f"Retrieved {len(library_agents)} favorite library agents for user #{user_id}"
)
# Only pass valid agents to the response
valid_library_agents: list[library_model.LibraryAgent] = []
for agent in library_agents:
try:
library_agent = library_model.LibraryAgent.from_db(agent)
valid_library_agents.append(library_agent)
except Exception as e:
# Skip this agent if there was an error
logger.error(
f"Error parsing LibraryAgent #{agent.id} from DB item: {e}"
)
continue
# Return the response with only valid agents
return library_model.LibraryAgentResponse(
agents=valid_library_agents,
pagination=Pagination(
total_items=agent_count,
total_pages=(agent_count + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
),
)
except prisma.errors.PrismaError as e:
logger.error(f"Database error fetching favorite library agents: {e}")
raise store_exceptions.DatabaseError(
"Failed to fetch favorite library agents"
) from e
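The `total_pages` expression above is the standard integer ceiling-division idiom, equivalent to `math.ceil` without floating point:

import math
agent_count, page_size = 23, 15
total_pages = (agent_count + page_size - 1) // page_size
assert total_pages == math.ceil(agent_count / page_size) == 2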
async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent:
"""
Get a specific agent from the user's library.
@@ -617,7 +703,7 @@ async def list_presets(
where=query_filter,
skip=(page - 1) * page_size,
take=page_size,
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
total_items = await prisma.models.AgentPreset.prisma().count(where=query_filter)
total_pages = (total_items + page_size - 1) // page_size
@@ -662,7 +748,7 @@ async def get_preset(
try:
preset = await prisma.models.AgentPreset.prisma().find_unique(
where={"id": preset_id},
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
if not preset or preset.userId != user_id or preset.isDeleted:
return None
@@ -709,15 +795,12 @@ async def create_preset(
)
for name, data in {
**preset.inputs,
-**{
-key: creds_meta.model_dump(exclude_none=True)
-for key, creds_meta in preset.credentials.items()
-},
+**preset.credentials,
}.items()
]
},
),
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
return library_model.LibraryAgentPreset.from_db(new_preset)
except prisma.errors.PrismaError as e:
@@ -747,6 +830,25 @@ async def create_preset_from_graph_execution(
if not graph_execution:
raise NotFoundError(f"Graph execution #{graph_exec_id} not found")
# Sanity check: credential inputs must be available if required for this preset
if graph_execution.credential_inputs is None:
graph = await graph_db.get_graph(
graph_id=graph_execution.graph_id,
version=graph_execution.graph_version,
user_id=graph_execution.user_id,
include_subgraphs=True,
)
if not graph:
raise NotFoundError(
f"Graph #{graph_execution.graph_id} not found or not accessible"
)
elif len(graph.aggregate_credentials_inputs()) > 0:
raise ValueError(
f"Graph execution #{graph_exec_id} can't be turned into a preset "
"because it was run before this feature existed "
"and so the input credentials were not saved."
)
logger.debug(
f"Creating preset for user #{user_id} from graph execution #{graph_exec_id}",
)
@@ -754,7 +856,7 @@ async def create_preset_from_graph_execution(
user_id=user_id,
preset=library_model.LibraryAgentPresetCreatable(
inputs=graph_execution.inputs,
-credentials={}, # FIXME
+credentials=graph_execution.credential_inputs or {},
graph_id=graph_execution.graph_id,
graph_version=graph_execution.graph_version,
name=create_request.name,
@@ -834,7 +936,7 @@ async def update_preset(
updated = await prisma.models.AgentPreset.prisma(tx).update(
where={"id": preset_id},
data=update_data,
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
if not updated:
raise RuntimeError(f"AgentPreset #{preset_id} vanished while updating")
@@ -849,7 +951,7 @@ async def set_preset_webhook(
) -> library_model.LibraryAgentPreset:
current = await prisma.models.AgentPreset.prisma().find_unique(
where={"id": preset_id},
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
if not current or current.userId != user_id:
raise NotFoundError(f"Preset #{preset_id} not found")
@@ -861,7 +963,7 @@ async def set_preset_webhook(
if webhook_id
else {"Webhook": {"disconnect": True}}
),
-include={"InputPresets": True},
+include=AGENT_PRESET_INCLUDE,
)
if not updated:
raise RuntimeError(f"AgentPreset #{preset_id} vanished while updating")
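`AGENT_PRESET_INCLUDE` itself is not shown in this diff. Judging by `LibraryAgentPreset.from_db` below, which raises unless both `InputPresets` and `Webhook` are loaded, it is presumably a shared Prisma include mapping along these lines (hypothetical reconstruction, not the actual definition):

# Hypothetical reconstruction; the real constant is defined in backend.data.includes.
AGENT_PRESET_INCLUDE = {
    "InputPresets": True,  # from_db() raises ValueError if these are missing
    "Webhook": True,  # from_db() raises if webhookId is set but Webhook isn't loaded
}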


@@ -1,6 +1,6 @@
import datetime
from enum import Enum
-from typing import Any, Optional
+from typing import TYPE_CHECKING, Any, Optional
import prisma.enums
import prisma.models
@@ -9,9 +9,11 @@ import pydantic
import backend.data.block as block_model
import backend.data.graph as graph_model
from backend.data.model import CredentialsMetaInput, is_credentials_field_name
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
if TYPE_CHECKING:
from backend.data.integrations import Webhook
class LibraryAgentStatus(str, Enum):
COMPLETED = "COMPLETED" # All runs completed
@@ -20,14 +22,6 @@ class LibraryAgentStatus(str, Enum):
ERROR = "ERROR" # Agent is in an error state
-class LibraryAgentTriggerInfo(pydantic.BaseModel):
-provider: ProviderName
-config_schema: dict[str, Any] = pydantic.Field(
-description="Input schema for the trigger block"
-)
-credentials_input_name: Optional[str]
class LibraryAgent(pydantic.BaseModel):
"""
Represents an agent in the library, including metadata for display and
@@ -49,6 +43,7 @@ class LibraryAgent(pydantic.BaseModel):
name: str
description: str
instructions: str | None = None
input_schema: dict[str, Any] # Should be BlockIOObjectSubSchema in frontend
output_schema: dict[str, Any]
@@ -59,7 +54,7 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
-trigger_setup_info: Optional[LibraryAgentTriggerInfo] = None
+trigger_setup_info: Optional[graph_model.GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
new_output: bool
@@ -70,6 +65,12 @@ class LibraryAgent(pydantic.BaseModel):
# Indicates if this agent is the latest version
is_latest_version: bool
# Whether the agent is marked as favorite by the user
is_favorite: bool
# Recommended schedule cron (from marketplace agents)
recommended_schedule_cron: str | None = None
@staticmethod
def from_db(
agent: prisma.models.LibraryAgent,
@@ -126,39 +127,19 @@ class LibraryAgent(pydantic.BaseModel):
updated_at=updated_at,
name=graph.name,
description=graph.description,
+instructions=graph.instructions,
input_schema=graph.input_schema,
output_schema=graph.output_schema,
credentials_input_schema=(
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
-trigger_setup_info=(
-LibraryAgentTriggerInfo(
-provider=trigger_block.webhook_config.provider,
-config_schema={
-**(json_schema := trigger_block.input_schema.jsonschema()),
-"properties": {
-pn: sub_schema
-for pn, sub_schema in json_schema["properties"].items()
-if not is_credentials_field_name(pn)
-},
-"required": [
-pn
-for pn in json_schema.get("required", [])
-if not is_credentials_field_name(pn)
-],
-},
-credentials_input_name=next(
-iter(trigger_block.input_schema.get_credentials_fields()), None
-),
-)
-if graph.webhook_input_node
-and (trigger_block := graph.webhook_input_node.block).webhook_config
-else None
-),
+trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,
is_latest_version=is_latest_version,
+is_favorite=agent.isFavorite,
+recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron,
)
@@ -282,12 +263,21 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
id: str
user_id: str
created_at: datetime.datetime
updated_at: datetime.datetime
webhook: "Webhook | None"
@classmethod
def from_db(cls, preset: prisma.models.AgentPreset) -> "LibraryAgentPreset":
from backend.data.integrations import Webhook
if preset.InputPresets is None:
raise ValueError("InputPresets must be included in AgentPreset query")
if preset.webhookId and preset.Webhook is None:
raise ValueError(
"Webhook must be included in AgentPreset query when webhookId is set"
)
input_data: block_model.BlockInput = {}
input_credentials: dict[str, CredentialsMetaInput] = {}
@@ -303,6 +293,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
return cls(
id=preset.id,
user_id=preset.userId,
created_at=preset.createdAt,
updated_at=preset.updatedAt,
graph_id=preset.agentGraphId,
graph_version=preset.agentGraphVersion,
@@ -312,6 +303,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
inputs=input_data,
credentials=input_credentials,
webhook_id=preset.webhookId,
webhook=Webhook.from_db(preset.Webhook) if preset.Webhook else None,
)


@@ -79,6 +79,54 @@ async def list_library_agents(
) from e
@router.get(
"/favorites",
summary="List Favorite Library Agents",
responses={
500: {"description": "Server error", "content": {"application/json": {}}},
},
)
async def list_favorite_library_agents(
user_id: str = Security(autogpt_auth_lib.get_user_id),
page: int = Query(
1,
ge=1,
description="Page number to retrieve (must be >= 1)",
),
page_size: int = Query(
15,
ge=1,
description="Number of agents per page (must be >= 1)",
),
) -> library_model.LibraryAgentResponse:
"""
Get all favorite agents in the user's library.
Args:
user_id: ID of the authenticated user.
page: Page number to retrieve.
page_size: Number of agents per page.
Returns:
A LibraryAgentResponse containing favorite agents and pagination metadata.
Raises:
HTTPException: If a server/database error occurs.
"""
try:
return await library_db.list_favorite_library_agents(
user_id=user_id,
page=page,
page_size=page_size,
)
except Exception as e:
logger.error(f"Could not list favorite library agents for user #{user_id}: {e}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e),
) from e
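Note that the static `/favorites` route is registered before the dynamic `/{library_agent_id}` route below. FastAPI matches routes in declaration order, so reversing them would send `GET /agents/favorites` to `get_library_agent` with `library_agent_id="favorites"`. A self-contained sketch of the ordering behavior (illustrative names; the prefix mirrors this router's `/agents` mount):

from fastapi import APIRouter, FastAPI
from fastapi.testclient import TestClient

router = APIRouter()

@router.get("/favorites")
def favorites():
    return {"kind": "favorites"}

@router.get("/{item_id}")
def item(item_id: str):
    return {"kind": "item", "id": item_id}

app = FastAPI()
app.include_router(router, prefix="/agents")
client = TestClient(app)

# The static route wins because it is declared first; reversing the
# declaration order would route "favorites" into {item_id} instead.
assert client.get("/agents/favorites").json() == {"kind": "favorites"}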
@router.get("/{library_agent_id}", summary="Get Library Agent")
async def get_library_agent(
library_agent_id: str,


@@ -6,8 +6,10 @@ from fastapi import APIRouter, Body, HTTPException, Query, Security, status
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
from backend.data.execution import GraphExecutionMeta
from backend.data.graph import get_graph
from backend.data.integrations import get_webhook
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import add_graph_execution, make_node_credentials_input_map
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks import get_webhook_manager
@@ -369,48 +371,41 @@ async def execute_preset(
preset_id: str,
user_id: str = Security(autogpt_auth_lib.get_user_id),
inputs: dict[str, Any] = Body(..., embed=True, default_factory=dict),
-) -> dict[str, Any]: # FIXME: add proper return type
+credential_inputs: dict[str, CredentialsMetaInput] = Body(
+..., embed=True, default_factory=dict
+),
+) -> GraphExecutionMeta:
"""
Execute a preset given graph parameters, returning the execution ID on success.
Args:
-preset_id (str): ID of the preset to execute.
-user_id (str): ID of the authenticated user.
-inputs (dict[str, Any]): Optionally, additional input data for the graph execution.
+preset_id: ID of the preset to execute.
+user_id: ID of the authenticated user.
+inputs: Optionally, inputs to override the preset for execution.
+credential_inputs: Optionally, credentials to override the preset for execution.
Returns:
-{id: graph_exec_id}: A response containing the execution ID.
+GraphExecutionMeta: Object representing the created execution.
Raises:
HTTPException: If the preset is not found or an error occurs while executing the preset.
"""
-try:
-preset = await db.get_preset(user_id, preset_id)
-if not preset:
-raise HTTPException(
-status_code=status.HTTP_404_NOT_FOUND,
-detail=f"Preset #{preset_id} not found",
-)
-# Merge input overrides with preset inputs
-merged_node_input = preset.inputs | inputs
-execution = await add_graph_execution(
-user_id=user_id,
-graph_id=preset.graph_id,
-graph_version=preset.graph_version,
-preset_id=preset_id,
-inputs=merged_node_input,
-)
-logger.debug(f"Execution added: {execution} with input: {merged_node_input}")
-return {"id": execution.id}
-except HTTPException:
-raise
-except Exception as e:
-logger.exception("Preset execution failed for user %s: %s", user_id, e)
+preset = await db.get_preset(user_id, preset_id)
+if not preset:
raise HTTPException(
-status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-detail=str(e),
+status_code=status.HTTP_404_NOT_FOUND,
+detail=f"Preset #{preset_id} not found",
)
+# Merge input overrides with preset inputs
+merged_node_input = preset.inputs | inputs
+merged_credential_inputs = preset.credentials | credential_inputs
+return await add_graph_execution(
+user_id=user_id,
+graph_id=preset.graph_id,
+graph_version=preset.graph_version,
+preset_id=preset_id,
+inputs=merged_node_input,
+graph_credentials_inputs=merged_credential_inputs,
+)
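The merge semantics rely on Python's dict union operator: when keys collide, the right-hand operand wins, so per-request overrides take precedence over the stored preset values:

preset_inputs = {"query": "cats", "limit": 5}
request_overrides = {"limit": 10}
merged = preset_inputs | request_overrides
assert merged == {"query": "cats", "limit": 10}  # right-hand side wins on conflicts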


@@ -50,9 +50,11 @@ async def test_get_library_agents_success(
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
can_access_graph=True,
is_latest_version=True,
is_favorite=False,
updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0),
),
library_model.LibraryAgent(
@@ -69,9 +71,11 @@ async def test_get_library_agents_success(
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
can_access_graph=False,
is_latest_version=True,
is_favorite=False,
updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0),
),
],
@@ -119,6 +123,76 @@ def test_get_library_agents_error(mocker: pytest_mock.MockFixture, test_user_id:
)
@pytest.mark.asyncio
async def test_get_favorite_library_agents_success(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
mocked_value = library_model.LibraryAgentResponse(
agents=[
library_model.LibraryAgent(
id="test-agent-1",
graph_id="test-agent-1",
graph_version=1,
name="Favorite Agent 1",
description="Test Favorite Description 1",
image_url=None,
creator_name="Test Creator",
creator_image_url="",
input_schema={"type": "object", "properties": {}},
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
can_access_graph=True,
is_latest_version=True,
is_favorite=True,
updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0),
),
],
pagination=Pagination(
total_items=1, total_pages=1, current_page=1, page_size=15
),
)
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
)
mock_db_call.return_value = mocked_value
response = client.get("/agents/favorites")
assert response.status_code == 200
data = library_model.LibraryAgentResponse.model_validate(response.json())
assert len(data.agents) == 1
assert data.agents[0].is_favorite is True
assert data.agents[0].name == "Favorite Agent 1"
mock_db_call.assert_called_once_with(
user_id=test_user_id,
page=1,
page_size=15,
)
def test_get_favorite_library_agents_error(
mocker: pytest_mock.MockFixture, test_user_id: str
):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.list_favorite_library_agents"
)
mock_db_call.side_effect = Exception("Test error")
response = client.get("/agents/favorites")
assert response.status_code == 500
mock_db_call.assert_called_once_with(
user_id=test_user_id,
page=1,
page_size=15,
)
def test_add_agent_to_library_success(
mocker: pytest_mock.MockFixture, test_user_id: str
):
@@ -139,6 +213,7 @@ def test_add_agent_to_library_success(
new_output=False,
can_access_graph=True,
is_latest_version=True,
is_favorite=False,
updated_at=FIXED_NOW,
)


@@ -1,3 +1,4 @@
import asyncio
import logging
from datetime import datetime, timezone
@@ -9,6 +10,7 @@ import prisma.types
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
from backend.data.db import transaction
from backend.data.graph import (
GraphMeta,
GraphModel,
@@ -70,7 +72,7 @@ async def get_store_agents(
)
sanitized_query = sanitize_query(search_query)
-where_clause = {}
+where_clause: prisma.types.StoreAgentWhereInput = {"is_available": True}
if featured:
where_clause["featured"] = featured
if creators:
@@ -94,15 +96,13 @@ async def get_store_agents(
try:
agents = await prisma.models.StoreAgent.prisma().find_many(
-where=prisma.types.StoreAgentWhereInput(**where_clause),
+where=where_clause,
order=order_by,
skip=(page - 1) * page_size,
take=page_size,
)
-total = await prisma.models.StoreAgent.prisma().count(
-where=prisma.types.StoreAgentWhereInput(**where_clause)
-)
+total = await prisma.models.StoreAgent.prisma().count(where=where_clause)
total_pages = (total + page_size - 1) // page_size
store_agents: list[backend.server.v2.store.model.StoreAgent] = []
@@ -183,6 +183,36 @@ async def get_store_agent_details(
store_listing.hasApprovedVersion if store_listing else False
)
if active_version_id:
agent_by_active = await prisma.models.StoreAgent.prisma().find_first(
where={"storeListingVersionId": active_version_id}
)
if agent_by_active:
agent = agent_by_active
elif store_listing:
latest_approved = (
await prisma.models.StoreListingVersion.prisma().find_first(
where={
"storeListingId": store_listing.id,
"submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
},
order=[{"version": "desc"}],
)
)
if latest_approved:
agent_latest = await prisma.models.StoreAgent.prisma().find_first(
where={"storeListingVersionId": latest_approved.id}
)
if agent_latest:
agent = agent_latest
if store_listing and store_listing.ActiveVersion:
recommended_schedule_cron = (
store_listing.ActiveVersion.recommendedScheduleCron
)
else:
recommended_schedule_cron = None
logger.debug(f"Found agent details for {username}/{agent_name}")
return backend.server.v2.store.model.StoreAgentDetails(
store_listing_version_id=agent.storeListingVersionId,
@@ -190,8 +220,8 @@ async def get_store_agent_details(
agent_name=agent.agent_name,
agent_video=agent.agent_video or "",
agent_image=agent.agent_image,
-creator=agent.creator_username,
-creator_avatar=agent.creator_avatar,
+creator=agent.creator_username or "",
+creator_avatar=agent.creator_avatar or "",
sub_heading=agent.sub_heading,
description=agent.description,
categories=agent.categories,
@@ -201,6 +231,7 @@ async def get_store_agent_details(
last_updated=agent.updated_at,
active_version_id=active_version_id,
has_approved_version=has_approved_version,
recommended_schedule_cron=recommended_schedule_cron,
)
except backend.server.v2.store.exceptions.AgentNotFoundError:
raise
@@ -263,8 +294,8 @@ async def get_store_agent_by_version_id(
agent_name=agent.agent_name,
agent_video=agent.agent_video or "",
agent_image=agent.agent_image,
-creator=agent.creator_username,
-creator_avatar=agent.creator_avatar,
+creator=agent.creator_username or "",
+creator_avatar=agent.creator_avatar or "",
sub_heading=agent.sub_heading,
description=agent.description,
categories=agent.categories,
@@ -468,6 +499,7 @@ async def get_store_submissions(
sub_heading=sub.sub_heading,
slug=sub.slug,
description=sub.description,
instructions=getattr(sub, "instructions", None),
image_urls=sub.image_urls or [],
date_submitted=sub.date_submitted or datetime.now(tz=timezone.utc),
status=sub.status,
@@ -559,9 +591,11 @@ async def create_store_submission(
video_url: str | None = None,
image_urls: list[str] = [],
description: str = "",
instructions: str | None = None,
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Initial Submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Create the first (and only) store listing and thus submission as a normal user
@@ -629,6 +663,7 @@ async def create_store_submission(
video_url=video_url,
image_urls=image_urls,
description=description,
instructions=instructions,
sub_heading=sub_heading,
categories=categories,
changes_summary=changes_summary,
@@ -650,11 +685,13 @@ async def create_store_submission(
videoUrl=video_url,
imageUrls=image_urls,
description=description,
instructions=instructions,
categories=categories,
subHeading=sub_heading,
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(tz=timezone.utc),
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
)
]
},
@@ -679,6 +716,7 @@ async def create_store_submission(
slug=slug,
sub_heading=sub_heading,
description=description,
instructions=instructions,
image_urls=image_urls,
date_submitted=listing.createdAt,
status=prisma.enums.SubmissionStatus.PENDING,
@@ -710,6 +748,8 @@ async def edit_store_submission(
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Update submission",
recommended_schedule_cron: str | None = None,
instructions: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Edit an existing store listing submission.
@@ -789,6 +829,8 @@ async def edit_store_submission(
sub_heading=sub_heading,
categories=categories,
changes_summary=changes_summary,
recommended_schedule_cron=recommended_schedule_cron,
instructions=instructions,
)
# For PENDING submissions, we can update the existing version
@@ -804,6 +846,8 @@ async def edit_store_submission(
categories=categories,
subHeading=sub_heading,
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
instructions=instructions,
),
)
@@ -822,6 +866,7 @@ async def edit_store_submission(
sub_heading=sub_heading,
slug=current_version.StoreListing.slug,
description=description,
instructions=instructions,
image_urls=image_urls,
date_submitted=updated_version.submittedAt or updated_version.createdAt,
status=updated_version.submissionStatus,
@@ -863,9 +908,11 @@ async def create_store_version(
video_url: str | None = None,
image_urls: list[str] = [],
description: str = "",
instructions: str | None = None,
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Initial submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Create a new version for an existing store listing
@@ -930,11 +977,13 @@ async def create_store_version(
videoUrl=video_url,
imageUrls=image_urls,
description=description,
instructions=instructions,
categories=categories,
subHeading=sub_heading,
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(),
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
storeListingId=store_listing_id,
)
)
@@ -950,6 +999,7 @@ async def create_store_version(
slug=listing.slug,
sub_heading=sub_heading,
description=description,
instructions=instructions,
image_urls=image_urls,
date_submitted=datetime.now(),
status=prisma.enums.SubmissionStatus.PENDING,
@@ -1126,7 +1176,20 @@ async def get_my_agents(
try:
search_filter: prisma.types.LibraryAgentWhereInput = {
"userId": user_id,
-"AgentGraph": {"is": {"StoreListings": {"none": {"isDeleted": False}}}},
+"AgentGraph": {
+"is": {
+"StoreListings": {
+"none": {
+"isDeleted": False,
+"Versions": {
+"some": {
+"isAvailable": True,
+}
+},
+}
+}
+}
+},
"isArchived": False,
"isDeleted": False,
}
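# Reading of the filter above: keep library agents whose graph has no
# non-deleted store listing with at least one available version, i.e.
# agents that are not already live in the marketplace.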
@@ -1150,6 +1213,7 @@ async def get_my_agents(
last_edited=graph.updatedAt or graph.createdAt,
description=graph.description or "",
agent_image=library_agent.imageUrl,
recommended_schedule_cron=graph.recommendedScheduleCron,
)
for library_agent in library_agents
if (graph := library_agent.AgentGraph)
@@ -1200,40 +1264,103 @@ async def get_agent(store_listing_version_id: str) -> GraphModel:
#####################################################
-async def _get_missing_sub_store_listing(
-graph: prisma.models.AgentGraph,
-) -> list[prisma.models.AgentGraph]:
-"""
-Agent graph can have sub-graphs, and those sub-graphs also need to be store listed.
-This method fetches the sub-graphs, and returns the ones not listed in the store.
-"""
-sub_graphs = await get_sub_graphs(graph)
-if not sub_graphs:
-return []
+async def _approve_sub_agent(
+tx,
+sub_graph: prisma.models.AgentGraph,
+main_agent_name: str,
+main_agent_version: int,
+main_agent_user_id: str,
+) -> None:
+"""Approve a single sub-agent by creating/updating store listings as needed"""
+heading = f"Sub-agent of {main_agent_name} v{main_agent_version}"
-# Fetch all the sub-graphs that are listed, and return the ones missing.
-store_listed_sub_graphs = {
-(listing.agentGraphId, listing.agentGraphVersion)
-for listing in await prisma.models.StoreListingVersion.prisma().find_many(
-where={
-"OR": [
-{
-"agentGraphId": sub_graph.id,
-"agentGraphVersion": sub_graph.version,
-}
-for sub_graph in sub_graphs
-],
-"submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
-"isDeleted": False,
-}
+# Find existing listing for this sub-agent
+listing = await prisma.models.StoreListing.prisma(tx).find_first(
+where={"agentGraphId": sub_graph.id, "isDeleted": False},
+include={"Versions": True},
)
+# Early return: Create new listing if none exists
+if not listing:
+await prisma.models.StoreListing.prisma(tx).create(
+data=prisma.types.StoreListingCreateInput(
+slug=f"sub-agent-{sub_graph.id[:8]}",
+agentGraphId=sub_graph.id,
+agentGraphVersion=sub_graph.version,
+owningUserId=main_agent_user_id,
+hasApprovedVersion=True,
+Versions={
+"create": [
+_create_sub_agent_version_data(
+sub_graph, heading, main_agent_name
+)
+]
+},
+)
+)
-}
+return
-return [
-sub_graph
-for sub_graph in sub_graphs
-if (sub_graph.id, sub_graph.version) not in store_listed_sub_graphs
-]
+# Find version matching this sub-graph
+matching_version = next(
+(
+v
+for v in listing.Versions or []
+if v.agentGraphId == sub_graph.id
+and v.agentGraphVersion == sub_graph.version
+),
+None,
+)
+# Early return: Approve existing version if found and not already approved
+if matching_version:
+if matching_version.submissionStatus == prisma.enums.SubmissionStatus.APPROVED:
+return # Already approved, nothing to do
+await prisma.models.StoreListingVersion.prisma(tx).update(
+where={"id": matching_version.id},
+data={
+"submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
+"reviewedAt": datetime.now(tz=timezone.utc),
+},
+)
+await prisma.models.StoreListing.prisma(tx).update(
+where={"id": listing.id}, data={"hasApprovedVersion": True}
+)
+return
+# Create new version if no matching version found
+next_version = max((v.version for v in listing.Versions or []), default=0) + 1
+await prisma.models.StoreListingVersion.prisma(tx).create(
+data={
+**_create_sub_agent_version_data(sub_graph, heading, main_agent_name),
+"version": next_version,
+"storeListingId": listing.id,
+}
+)
+await prisma.models.StoreListing.prisma(tx).update(
+where={"id": listing.id}, data={"hasApprovedVersion": True}
+)
+def _create_sub_agent_version_data(
+sub_graph: prisma.models.AgentGraph, heading: str, main_agent_name: str
+) -> prisma.types.StoreListingVersionCreateInput:
+"""Create store listing version data for a sub-agent"""
+return prisma.types.StoreListingVersionCreateInput(
+agentGraphId=sub_graph.id,
+agentGraphVersion=sub_graph.version,
+name=sub_graph.name or heading,
+submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
+subHeading=heading,
+description=(
+f"{heading}: {sub_graph.description}" if sub_graph.description else heading
+),
+changesSummary=f"Auto-approved as sub-agent of {main_agent_name}",
+isAvailable=False,
+submittedAt=datetime.now(tz=timezone.utc),
+imageUrls=[], # Sub-agents don't need images
+categories=[], # Sub-agents don't need categories
+)
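One detail worth noting in the version-bump logic above: `max(..., default=0)` makes the computation safe for a listing that has no versions yet, so numbering starts at 1:

existing_versions: list[int] = []  # a listing with no versions yet
next_version = max(existing_versions, default=0) + 1
assert next_version == 1  # bare max() would raise ValueError on an empty sequence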
async def review_store_submission(
@@ -1271,33 +1398,46 @@ async def review_store_submission(
# If approving, update the listing to indicate it has an approved version
if is_approved and store_listing_version.AgentGraph:
-heading = f"Sub-graph of {store_listing_version.name}v{store_listing_version.agentGraphVersion}"
-sub_store_listing_versions = [
-prisma.types.StoreListingVersionCreateWithoutRelationsInput(
-agentGraphId=sub_graph.id,
-agentGraphVersion=sub_graph.version,
-name=sub_graph.name or heading,
-submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
-subHeading=heading,
-description=f"{heading}: {sub_graph.description}",
-changesSummary=f"This listing is added as a {heading} / #{store_listing_version.agentGraphId}.",
-isAvailable=False, # Hide sub-graphs from the store by default.
-submittedAt=datetime.now(tz=timezone.utc),
+async with transaction() as tx:
+# Handle sub-agent approvals in transaction
+await asyncio.gather(
+*[
+_approve_sub_agent(
+tx,
+sub_graph,
+store_listing_version.name,
+store_listing_version.agentGraphVersion,
+store_listing_version.StoreListing.owningUserId,
+)
+for sub_graph in await get_sub_graphs(
+store_listing_version.AgentGraph
+)
+]
+)
-for sub_graph in await _get_missing_sub_store_listing(
-store_listing_version.AgentGraph
-)
-]
-await prisma.models.StoreListing.prisma().update(
-where={"id": store_listing_version.StoreListing.id},
-data={
-"hasApprovedVersion": True,
-"ActiveVersion": {"connect": {"id": store_listing_version_id}},
-"Versions": {"create": sub_store_listing_versions},
-},
-)
+# Update the AgentGraph with store listing data
+await prisma.models.AgentGraph.prisma().update(
+where={
+"graphVersionId": {
+"id": store_listing_version.agentGraphId,
+"version": store_listing_version.agentGraphVersion,
+}
+},
+data={
+"name": store_listing_version.name,
+"description": store_listing_version.description,
+"recommendedScheduleCron": store_listing_version.recommendedScheduleCron,
+"instructions": store_listing_version.instructions,
+},
+)
+await prisma.models.StoreListing.prisma(tx).update(
+where={"id": store_listing_version.StoreListing.id},
+data={
+"hasApprovedVersion": True,
+"ActiveVersion": {"connect": {"id": store_listing_version_id}},
+},
+)
# If rejecting an approved agent, update the StoreListing accordingly
if is_rejecting_approved:
@@ -1453,6 +1593,7 @@ async def review_store_submission(
else ""
),
description=submission.description,
instructions=submission.instructions,
image_urls=submission.imageUrls or [],
date_submitted=submission.submittedAt or submission.createdAt,
status=submission.submissionStatus,
@@ -1588,6 +1729,7 @@ async def get_admin_listings_with_versions(
sub_heading=version.subHeading,
slug=listing.slug,
description=version.description,
instructions=version.instructions,
image_urls=version.imageUrls or [],
date_submitted=version.submittedAt or version.createdAt,
status=version.submissionStatus,

Some files were not shown because too many files have changed in this diff Show More