Compare commits


159 Commits

Author SHA1 Message Date
Zamil Majdy
d42e144322 style: Format code after orjson integration
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 07:20:10 +00:00
Zamil Majdy
4f9ff74d02 feat(backend): Add orjson support to SafeJson for improved performance
- Add orjson as dependency for faster JSON serialization/deserialization
- Implement graceful fallback to standard json if orjson unavailable
- Maintain full backward compatibility with existing code
- Support for orjson-specific optimizations (indentation, etc.)
- Performance improvements for all database operations using SafeJson

orjson provides significant performance benefits, especially for large JSON payloads
common in AI-generated content and execution outputs.
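
A minimal sketch of the fallback pattern, assuming an illustrative `safe_json_dumps` helper (the real SafeJson implementation handles more cases):

```python
import json

try:
    import orjson  # fast Rust-based JSON library

    def safe_json_dumps(data) -> str:
        # orjson returns bytes and supports a limited set of options,
        # e.g. 2-space indentation via OPT_INDENT_2
        return orjson.dumps(data, option=orjson.OPT_INDENT_2).decode("utf-8")

except ImportError:
    # Graceful fallback to the standard library if orjson is unavailable
    def safe_json_dumps(data) -> str:
        return json.dumps(data, indent=2)
```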

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 07:16:47 +00:00
Zamil Majdy
cca84fabe3 fix(backend): Sanitize null bytes in SafeJson to prevent PostgreSQL 22P05 errors
- Add _sanitize_null_bytes function to recursively remove \u0000 characters from data structures
- Implement sanitization in SafeJson before serialization to prevent database storage issues
- Add alerting for excessive retry attempts (50+ retries) in retry mechanisms
- This fixes the root cause of 99+ retry loops that occurred when nodes generated content with null bytes

Fixes issue where AIStructuredResponseGeneratorBlock output containing null bytes
would cause PostgreSQL "unsupported Unicode escape sequence" errors, leading to
excessive retries and failed node executions.
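
A sketch of how the sanitization might look; the real `_sanitize_null_bytes` may differ in detail:

```python
def _sanitize_null_bytes(data):
    """Recursively strip \\u0000 characters, which PostgreSQL cannot store."""
    if isinstance(data, str):
        return data.replace("\u0000", "")
    if isinstance(data, dict):
        return {
            _sanitize_null_bytes(k): _sanitize_null_bytes(v)
            for k, v in data.items()
        }
    if isinstance(data, list):
        return [_sanitize_null_bytes(item) for item in data]
    return data  # numbers, booleans, None pass through unchanged
```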

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 07:01:33 +00:00
Reinier van der Leer
0b267f573e feat(blocks): Improve JSON generation+parsing in AI Structured Response block (#10960)
The AI Structured Response Generator block currently doesn't support
responses that aren't pure JSON. This prohibits multi-step prompting
because reasoning content is not allowed in the response, which in turn
limits performance.

### Changes 🏗️

- Adjust prompt to enclose JSON in pre-defined tags so we can extract it
from a response that isn't pure JSON
- Adjust mechanism to extract and parse JSON
- Add `force_json_output` input (advanced, default `False`)
- Update incorrect `max_output_tokens` values for Claude 4 and 3.7 to
prevent responses from being cut off due to `max_tokens`
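
A minimal sketch of the tag-based extraction described in the changes above; the actual pre-defined tags and parsing mechanism in the block are assumptions here:

```python
import json
import re

# Hypothetical tags; the block defines its own pre-defined markers.
JSON_PATTERN = re.compile(r"<json>(.*?)</json>", re.DOTALL)

def extract_json(response: str) -> dict:
    match = JSON_PATTERN.search(response)
    # If no tags are found, fall back to treating the whole response as JSON
    payload = match.group(1) if match else response
    return json.loads(payload.strip())
```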

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] LLMs correctly follow response generation instructions
- [x] LLMs follow system response format instructions even if user
prompt contains conflicting instructions
  - [x] JSON is extracted from response successfully
  - [x] `force_json_output` works (at least for models that support it)

Tested with Claude 4 Sonnet, various GPT models, and Llama 3.3 70B.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-25 17:42:25 +00:00
Reinier van der Leer
7bd571d9ce fix(blocks): Default disable HTML escaping in all blocks with templating features (#10955)
- Resolves #10954

Unnecessary escaping distorts content and so should be disabled wherever
the output isn't used in HTML.

### Changes 🏗️

- Disable HTML escaping on prompt value insertion in AI blocks
- Make HTML escaping optional in text formatting and output blocks
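
For reference, a small example of the behavior this changes, using Jinja2's `SandboxedEnvironment` as in the test plan below:

```python
from jinja2.sandbox import SandboxedEnvironment

template_str = "Prompt: {{ user_input }}"
values = {"user_input": 'He said "5 > 3" & left'}

# With autoescape=True these characters would become HTML entities
# (&quot;, &gt;, &amp;), distorting the prompt content.
env = SandboxedEnvironment(autoescape=False)
print(env.from_string(template_str).render(values))
# -> Prompt: He said "5 > 3" & left
```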

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x]
`SandboxedEnvironment(autoescape=False).from_string(template_str).render(values)`
doesn't escape characters with HTML entities
2025-09-25 12:04:38 +00:00
Zamil Majdy
7a331651ba feat(backend): enhance database indexes for AgentGraph and AgentGraphExecution performance (#10985)
## Summary

Enhances database performance by improving indexes on `AgentGraph` and
`AgentGraphExecution` tables for better query efficiency.

### Changes 🏗️

- **Database Schema**: Updated Prisma schema to enhance database indexes
- Modified `AgentGraph` index from `[userId, isActive]` to `[userId,
isActive, id, version]` for better compound query performance
- Enhanced `AgentGraphExecution` index from `[userId]` to `[userId,
isDeleted, createdAt]` to support filtered queries with sorting
- **Migration**: Auto-generated Prisma migration to implement the index
changes
- Drops existing indexes: `AgentGraph_userId_isActive_idx` and
`AgentGraphExecution_userId_idx`
- Creates new compound indexes:
`AgentGraph_userId_isActive_id_version_idx` and
`AgentGraphExecution_userId_isDeleted_createdAt_idx`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified migration runs successfully
  - [x] Confirmed database queries continue to work with new indexes
  - [x] Tested that existing functionality remains unaffected

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 09:05:53 +00:00
Zamil Majdy
5bc69adc33 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-09-25 16:09:01 +07:00
Krzysztof Czerwinski
f4bcc8494f hotfix: Fix Agent node missing inputs and outputs (#10987)
Restore `include=AGENT_GRAPH_INCLUDE` that is needed to build schema
from the nodes.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I/O is back on the Agent node
2025-09-25 08:56:32 +00:00
Zamil Majdy
4c000086e6 feat(backend): implement clean k6 load testing infrastructure (#10978)
## Summary

Implement comprehensive k6 load testing infrastructure for the AutoGPT
Platform with clean file organization, unified test runner, and cloud
integration.

## Key Features

### 🗂️ Clean File Organization
- **tests/basic/**: Simple validation tests (connectivity, single
endpoints)
- **tests/api/**: Core functionality tests (API endpoints, graph
execution)
- **tests/marketplace/**: User-facing feature tests (public/library
access)
- **tests/comprehensive/**: End-to-end scenario tests (complete user
journeys)
- **orchestrator/**: Advanced test orchestration for full suites

### 🚀 Unified Test Runner
- **Single entry point**: `run-tests.js` for both local and cloud
execution
- **7 available tests**: From basic connectivity to comprehensive
platform journeys
- **Flexible execution**: Run individual tests, comma-separated lists,
or all tests
- **Auto-configuration**: Different VU/duration settings for local vs
cloud execution

### 🔐 Advanced Authentication
- **Pre-authenticated tokens**: 24-hour JWT tokens eliminate Supabase
rate limiting
- **Configurable generation**: Default 10 tokens, scalable to 150+ for
high concurrency
- **Graceful handling**: Proper auth failure detection and recovery
- **ES module compatible**: Modern JavaScript with full import/export
support

### ☁️ k6 Cloud Integration
- **Cloud execution**: Tests run on k6 cloud infrastructure for
consistent results
- **Real-time monitoring**: Live dashboards with performance metrics
- **URL tracking**: Automatic test result URL capture and storage
- **Sequential orchestration**: Proper delays between tests for resource
management

## Test Coverage

### Performance Validated
- **Core API**: 100 VUs successfully testing `/api/credits`,
`/api/graphs`, `/api/blocks`, `/api/executions`
- **Graph Execution**: 80 VUs for complete workflow pipeline testing
- **Marketplace**: 150 VUs for public browsing, 100 VUs for
authenticated library operations
- **Authentication**: 150+ concurrent users with pre-authenticated token
scaling

### User Journey Simulation
- **Dashboard workflows**: Credits checking, graph management, execution
monitoring
- **Marketplace browsing**: Public search, agent discovery, category
filtering
- **Library operations**: Agent adding, favoriting, forking, detailed
views
- **Complete workflows**: End-to-end platform usage with realistic user
behavior

## Technical Implementation

### ES Module Compatibility
- Full ES module support with modern JavaScript imports/exports
- Proper module execution patterns for Node.js compatibility
- Clean separation between CommonJS legacy and modern ES modules

### Error Handling & Monitoring  
- **Separate metrics**: HTTP status, authentication, JSON validation,
overall success
- **Graceful degradation**: Auth failures don't crash VUs, proper error
tracking
- **Performance thresholds**: Configurable P95/P99 latency and error
rate limits
- **Custom counters**: Track operation types, success rates, user
journey completion

### Infrastructure Benefits
- **Rate limit protection**: Pre-auth tokens prevent Supabase auth
bottlenecks
- **Scalable testing**: Support for 150+ concurrent users with proper
token management
- **Cloud consistency**: Tests run on dedicated k6 cloud servers for
reliable results
- **Development workflow**: Local execution for debugging, cloud for
performance validation

## Usage

### Quick Start
```bash
# Setup and verification
export SUPABASE_SERVICE_KEY="your-service-key"
node generate-tokens.js
node run-tests.js verify

# Local testing (development)
node run-tests.js run core-api-test DEV

# Cloud testing (performance)
node run-tests.js cloud all DEV
```

### NPM Scripts
```bash
npm run verify    # Quick setup check
npm test         # All tests locally  
npm run cloud    # All tests in k6 cloud
```

## Validation Results

- **Authentication**: 100% success rate with fresh 24-hour tokens
- **File Structure**: All imports and references verified correct
- **Test Execution**: All 7 tests execute successfully with proper metrics
- **Cloud Integration**: k6 cloud execution working with proper credentials
- **Documentation**: Complete README with usage examples and troubleshooting

## Files Changed

### Core Infrastructure
- `run-tests.js`: Unified test runner supporting local/cloud execution
- `generate-tokens.js`: ES module compatible token generation with
24-hour expiry
- `README.md`: Comprehensive documentation with updated file references

### Organized Test Structure  
- `tests/basic/connectivity-test.js`: Basic connectivity validation
- `tests/basic/single-endpoint-test.js`: Individual API endpoint testing
- `tests/api/core-api-test.js`: Core authenticated API endpoints
- `tests/api/graph-execution-test.js`: Graph workflow pipeline testing  
- `tests/marketplace/public-access-test.js`: Public marketplace browsing
- `tests/marketplace/library-access-test.js`: Authenticated
marketplace/library operations
- `tests/comprehensive/platform-journey-test.js`: Complete user journey
simulation

### Configuration
- `configs/environment.js`: Environment URLs and performance settings
- `package.json`: NPM scripts and dependencies for unified workflow

This infrastructure provides a solid foundation for continuous
performance monitoring and load testing of the AutoGPT Platform.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-25 12:51:54 +07:00
Krzysztof Czerwinski
9c6cc5b29d Merge branch 'dev' 2025-09-25 13:28:17 +09:00
Toran Bruce Richards
b34973ca47 feat: Add 'depth' parameter to DataForSEO Related Keywords block (#10983)
Fixes #10982

<!-- Clearly explain the need for these changes: -->
The DataForSEO Related Keywords block was missing the `depth` parameter,
which is a critical parameter that controls the comprehensiveness of
keyword research. The depth parameter determines the number of related
keywords returned by the API, ranging from 1 keyword at depth 0 to
approximately 4680 keywords at depth 4.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Added `depth` parameter to the DataForSEO Related Keywords block as an
integer input field (range 0-4)
- Added `depth` parameter to the `related_keywords` method signature in
the API client
- Updated the API client to include the depth parameter in the request
payload when provided
- Added documentation explaining the depth parameter's effect on the
number of returned keywords
- Fixed missing parameter in function signature that was causing runtime
errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified the depth parameter appears correctly in the block UI
with appropriate range validation (0-4)
  - [x] Confirmed the parameter is passed correctly to the API client
- [x] Tested that omitting the depth parameter doesn't break existing
functionality (defaults to None)
- [x] Verified the implementation follows the existing pattern for
optional parameters in the DataForSEO blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: No configuration changes were required for this feature addition.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-09-24 21:29:47 +00:00
Nicholas Tindle
2bc6a56877 fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary
- Fixed "Timeout context manager should be used inside a task" error
occurring intermittently in FileInput blocks when downloading files from
Google Cloud Storage
- Implemented proper async session management for GCS client to ensure
operations run within correct task context
- Added comprehensive logging to help diagnose and monitor the issue in
production

## Changes
### Core Fix
- Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh
GCS client and session for each download operation
- This ensures the aiohttp session is always created within the proper
async task context, preventing the timeout error
- The fix trades a small amount of efficiency for reliability, but only
affects download operations

### Logging Enhancements
- Added detailed logging in `store_media_file()` to track execution
context and async task state
- Enhanced `scan_content_safe()` to specifically catch and log timeout
errors with CRITICAL level
- Added context logging in virus scanner around `asyncio.create_task()`
calls
- Upgraded key debug logs to info level for visibility in production

### Code Quality
- Fixed unbound variable issue where `async_client` could be referenced
before initialization
- Replaced bare `except:` clauses with proper exception handling
- Fixed unused parameters warning in `__aexit__` method

## Testing
- The timeout error was occurring intermittently in production when
FileInput blocks processed GCS files
- With these changes, the error should be eliminated as the session is
always created in the correct context
- Comprehensive logging allows monitoring of the fix effectiveness in
production


## Context
The root cause was that `gcloud-aio-storage` was creating its internal
aiohttp session/timeout context outside of an async task context when
called by the executor. This happened intermittently depending on how
the executor scheduled block execution.
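
Under that explanation, the core fix looks roughly like the sketch below — the client and its aiohttp session are created inside the coroutine, so the timeout context is entered within the running task (names are illustrative):

```python
import aiohttp
from gcloud.aio.storage import Storage

async def retrieve_file_gcs(bucket: str, blob_name: str) -> bytes:
    # Create the session (and thus the timeout context) inside the
    # coroutine, so it is bound to the currently running asyncio task.
    async with aiohttp.ClientSession() as session:
        client = Storage(session=session)
        return await client.download(bucket, blob_name)
```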

## Related Issues
- Addresses timeout errors reported in FileInput block execution
- Improves reliability of file uploads from the platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test a multiple file input agent and it works
  - [x] Test the agent that is causing the failure and it works

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 16:21:41 -05:00
Nicholas Tindle
87c773d03a fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary
- Fixed "Timeout context manager should be used inside a task" error
occurring intermittently in FileInput blocks when downloading files from
Google Cloud Storage
- Implemented proper async session management for GCS client to ensure
operations run within correct task context
- Added comprehensive logging to help diagnose and monitor the issue in
production

## Changes
### Core Fix
- Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh
GCS client and session for each download operation
- This ensures the aiohttp session is always created within the proper
async task context, preventing the timeout error
- The fix trades a small amount of efficiency for reliability, but only
affects download operations

### Logging Enhancements
- Added detailed logging in `store_media_file()` to track execution
context and async task state
- Enhanced `scan_content_safe()` to specifically catch and log timeout
errors with CRITICAL level
- Added context logging in virus scanner around `asyncio.create_task()`
calls
- Upgraded key debug logs to info level for visibility in production

### Code Quality
- Fixed unbound variable issue where `async_client` could be referenced
before initialization
- Replaced bare `except:` clauses with proper exception handling
- Fixed unused parameters warning in `__aexit__` method

## Testing
- The timeout error was occurring intermittently in production when
FileInput blocks processed GCS files
- With these changes, the error should be eliminated as the session is
always created in the correct context
- Comprehensive logging allows monitoring of the fix effectiveness in
production


## Context
The root cause was that `gcloud-aio-storage` was creating its internal
aiohttp session/timeout context outside of an async task context when
called by the executor. This happened intermittently depending on how
the executor scheduled block execution.

## Related Issues
- Addresses timeout errors reported in FileInput block execution
- Improves reliability of file uploads from the platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test a multiple file input agent and it works
  - [x] Test the agent that is causing the failure and it works

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 21:06:51 +00:00
Swifty
ebeefc96e8 feat(backend): implement caching layer for store API endpoints (Part 1) (#10975)
## Summary
This PR introduces comprehensive caching for the Store API endpoints to
improve performance and reduce database load. This is **Part 1** in a
series of PRs to add comprehensive caching across our entire API.

### Key improvements:
- Implements caching layer using the existing `@cached` decorator from
`autogpt_libs.utils.cache`
- Reduces database queries by 80-90% for frequently accessed public data
- Built-in thundering herd protection prevents database overload during
cache expiry
- Selective cache invalidation ensures data freshness when mutations
occur

## Details

### Cached endpoints with TTLs:
- **Public data (5-10 min TTL):**
  - `/agents` - Store agents list (2 min)
  - `/agents/{username}/{agent_name}` - Agent details (5 min)
  - `/graph/{store_listing_version_id}` - Agent graphs (10 min)
  - `/agents/{store_listing_version_id}` - Agent by version (10 min)
  - `/creators` - Creators list (5 min)
  - `/creator/{username}` - Creator details (5 min)

- **User-specific data (1 min TTL):**
  - `/profile` - User profiles (5 min)
  - `/myagents` - User's own agents (1 min)
  - `/submissions` - User's submissions (1 min)

### Cache invalidation strategy:
- Profile updates → clear user's profile cache
- New reviews → clear specific agent cache + agents list
- New submissions → clear agents list + user's caches
- Submission edits → clear related version caches

### Cache management endpoints:
- `GET /cache/info` - Monitor cache statistics
- `POST /cache/clear` - Clear all caches
- `POST /cache/clear/{cache_name}` - Clear specific cache
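
For illustration, a self-contained TTL-cache decorator in the same spirit; the real `@cached` decorator in `autogpt_libs.utils.cache` has its own signature and adds thundering-herd protection:

```python
import time
from functools import wraps

def cached(ttl_seconds: int):
    """Illustrative TTL cache decorator for async endpoints."""
    def decorator(fn):
        store: dict = {}

        @wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit and time.monotonic() - hit[1] < ttl_seconds:
                return hit[0]  # serve from cache within the TTL
            result = await fn(*args, **kwargs)
            store[key] = (result, time.monotonic())
            return result

        wrapper.cache_clear = store.clear  # hook for invalidation on mutations
        return wrapper
    return decorator
```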

## Changes  
<!-- REQUIRED: Bullet point summary of changes -->
- Added caching decorators to all suitable GET endpoints in store routes
- Implemented cache invalidation on data mutations (POST/PUT/DELETE)
- Added cache management endpoints for monitoring and manual clearing
- Created comprehensive test suite for cache_delete functionality
- Verified thundering herd protection works correctly

## Testing
<!-- How to test your changes -->
- Created comprehensive test suite (`test_cache_delete.py`) validating:
  - Selective cache deletion works correctly
  - Cache entries are properly invalidated on mutations
  - Other cache entries remain unaffected
  - `cache_info()` accurately reflects state
- Tested thundering herd protection with concurrent requests
- Verified all endpoints return correct data with and without cache

## Checklist
<!-- REQUIRED: Be sure to check these off before marking the PR ready
for review. -->
- [x] I have self-reviewed this PR's diff, line by line
- [x] I have updated and tested the software architecture documentation
(if applicable)
- [x] I have run the agent to verify that it still works (if applicable)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 10:01:52 +00:00
Nicholas Tindle
83fe8d5b94 fix(backend): make preset migration not crash the system (#10966)
<!-- Clearly explain the need for these changes: -->
For those who develop blocks, a block may exist in the database but not
in the currently checked-out code:
> Create a block in one branch, test it, then switch to another branch
that the block is not in

This migration will prevent startup in that case.

### Changes 🏗️
Adds a try/except around the migration (see the sketch below)
<!-- Concisely describe all of the changes made in this pull request:
-->
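
A minimal sketch of the guard, with a hypothetical stand-in for the migration entry point:

```python
import logging

logger = logging.getLogger(__name__)

def migrate_presets() -> None:
    """Hypothetical stand-in for the preset migration entry point."""
    ...

try:
    migrate_presets()
except Exception as e:
    # A block referenced by a preset may not exist in this branch's code;
    # log the failure and continue startup instead of crashing.
    logger.error("Preset migration failed, continuing startup: %s", e)
```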

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test that startup actually works

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 14:23:22 +07:00
Zamil Majdy
50689218ed feat(backend): implement comprehensive load testing performance fixes + database health improvements (#10965) 2025-09-24 14:22:57 +07:00
Aayush Shah
ddff09a8e4 feat(blocks): add NotionReadPage block (#10760)
Introduces a Notion Read Page block that fetches a page by ID via the
Notion REST API. This is a first step toward Notion integration in the
AutoGPT Platform.

Motivation - Notion was not integrated yet. I'm starting with a small
block to add capability incrementally.
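
The underlying API call is roughly the following — a sketch against the public Notion REST API, independent of the platform's block interface:

```python
import requests

def read_notion_page(page_id: str, token: str) -> dict:
    # Retrieve page properties via the public Notion REST API
    response = requests.get(
        f"https://api.notion.com/v1/pages/{page_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": "2022-06-28",  # required API version header
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```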

### Notes
- I referred to the Todoist block implementation as a reference since
I’m a beginner.
- This is my first PR here  
- The block passed `docker compose run --rm rest_server pytest -q`
successfully

<!-- Clearly explain the need for these changes: -->

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test plan
- [x] Ran `docker compose run --rm rest_server pytest -q
backend/blocks/test/test_block.py -k notion`
- [x] Confirmed tests passed (2 passed, 652 deselected, warnings only).
- [x] Ran `poetry run format` to fix linters and tests

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2025-09-19 18:54:47 +00:00
Ubbe
0c363a1cea fix(frontend): force dynamic rendering on marketplace (#10957)
## Changes 🏗️

When building on Vercel:
```
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
Failed to get server auth token: Error: Dynamic server usage: Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error
    at r (.next/server/chunks/8450.js:22:7298)
    at n (.next/server/chunks/4735.js:1:37020)
    at g (.next/server/chunks/555.js:1:31925)
    at m (.next/server/chunks/555.js:1:87056)
    at h (.next/server/chunks/555.js:1:932)
    at k (.next/server/chunks/555.js:1:25195)
    at queryFn (.next/server/chunks/555.js:1:25590)
    at Object.f [as fn] (.next/server/chunks/2744.js:1:316625)
    at q (.next/server/chunks/2744.js:1:312288)
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
```
That's because the `/marketplace` page prefetches the store agents data
on the server, and that query uses `cookies` for auth. In theory, those
endpoints can be called without auth, but I think being logged in
affects the results.

The simplest fix for now is to tell Next.js not to try to statically
render the page, and instead render it on the fly with caching.
According to AI we shouldn't see much difference performance-wise:

> Short answer: Usually no noticeable slowdown. You’ll trade a small
TTFB increase (server renders per request) for correct behavior with
cookies. Overall interactivity stays the same since we still dehydrate
React Query data.
Why it’s fine:
Server already had to fetch marketplace data; doing it at request-time
vs build-time is roughly the same cost for users.
Hydration uses the prefetched data, avoiding extra client round-trips.
If you want extra speed:
If those endpoints don’t need auth, we can skip reading cookies during
server prefetch and enable ISR (e.g., revalidate=60) for partial
caching.
Or move the cookie-dependent parts to the client and keep the page
static.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Marketplace page loads fine and is not slow
  - [x] No build cookie errors


### For configuration changes:

None
2025-09-19 08:24:08 +00:00
Ubbe
e5d870a348 refactor(frontend): move old components to __legacy__ (#10953)
## Changes 🏗️

Moving non-design-system (old) components to a `components/__legacy__`
folder 📁 so it is more obvious to developers that they should not
import or use them in new features. What is now top-level in
`/components` is what is actively maintained.

Document some existing components like `<Alert />`. More on this coming
in follow-up PRs.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and types pass on the CI
  - [x] Run app locally, click around, looks good 

### For configuration changes:

None
2025-09-18 21:37:43 +00:00
Reinier van der Leer
3f19cba28f fix(frontend/builder): Fix moved blocks disappearing on save (#10951)
- Resolves #10926
- Fixes a bug introduced in #10779

### Changes 🏗️

- Fix `.metadata.position` in graph save payload
- Make node reconciliation after graph save more robust

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Moved nodes don't disappear on graph save
2025-09-18 13:34:06 +00:00
Reinier van der Leer
a978e91271 fix(ci, backend): Update Redis image & amend config to work with it (#10952)
CI is currently broken because Bitnami has pulled all `bitnami/redis`
images.
The current official Redis image on Docker Hub is `redis`.

### Changes 🏗️

- Replace `bitnami/redis:6.2` by `redis:latest` in Backend CI workflow
file
- Make `REDIS_PASSWORD` optional in the backend settings

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI no longer broken
2025-09-18 13:02:49 +00:00
Ubbe
f283e6c514 refactor(frontend): cleanup of components folder (2/3) (#10942)
## Changes 🏗️

Following up my initial PR to tidy up the `components` folder
https://github.com/Significant-Gravitas/AutoGPT/pull/10940.

This is mostly moving files around and renaming some + documenting them
on the design system as needed. Should be pretty safe as long as types
on the CI pass.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Click around, looks ok
  - [x] Test and types pass on the CI  

### For configuration changes:

None
2025-09-18 16:21:18 +09:00
Ubbe
9fc2101e7e refactor(frontend): tidy up on components folder (#10940)
## Changes 🏗️

Re-organise the `components` folder, moving things which are not re-used
across screens or part of the design system out of it.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] It works and test/types pass CI wise 

### For configuration changes:

None
2025-09-17 12:56:49 +00:00
Bentlybro
634f826d82 Merge branch 'master' into dev 2025-09-17 11:35:29 +01:00
Ubbe
6d6bf308fc fix(frontend): marketplace page load and caching (#10934)
## Changes 🏗️

### **Server-Side:**
- **ISR Cache**: Page cached for 60 seconds, served instantly
- **Prefetch**: All API calls made on server, not client
- **Static Generation**: HTML pre-rendered with data
- **Streaming**: Loading states show immediately

### **Client-Side:**
- **No API Calls**: Data hydrated from server cache
- **Fast Hydration**: React Query uses prefetched data
- **Smart Caching**: 60s stale time prevents unnecessary requests
- **Progressive Loading**: Suspense boundaries for better UX

### **🔄 Caching Strategy:**

1. **Server**: ISR cache (60s) → API calls → Static HTML
2. **CDN**: Cached HTML served instantly
3. **Client**: Hydrated data from server → No additional API calls
4. **Background**: ISR regenerates stale pages automatically

### **🎯 Result:**
- **First Visit**: Instant HTML + hydrated data (no client API calls)
- **Subsequent Visits**: Instant cached page
- **Background Updates**: Automatic revalidation every 60s
- **Optimal Performance**: Server-side rendering + client-side caching

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Marketplace page loads are faster 

### For configuration changes:

None
2025-09-17 07:23:43 +00:00
Nicholas Tindle
dd84fb5c66 feat(platform): Add public share links for agent run results (#10938)
<!-- Clearly explain the need for these changes: -->
This PR adds the ability for users to share their agent run results
publicly via shareable links. Users can generate a public link that
allows anyone to view the outputs of a specific agent execution without
requiring authentication. This feature enables users to share their
agent results with clients, colleagues, or the community.


https://github.com/user-attachments/assets/5508f430-07d0-4cd3-87bc-301b0b005cce


### Changes 🏗️

#### Backend Changes
- **Database Schema**: Added share tracking fields to
`AgentGraphExecution` model in Prisma schema:
  - `isShared`: Boolean flag to track if execution is shared
  - `shareToken`: Unique token for the share URL
  - `sharedAt`: Timestamp when sharing was enabled

- **API Endpoints**: Added three new REST endpoints in
`/backend/backend/server/routers/v1.py`:
- `POST /graphs/{graph_id}/executions/{graph_exec_id}/share`: Enable
sharing for an execution
- `DELETE /graphs/{graph_id}/executions/{graph_exec_id}/share`: Disable
sharing
- `GET /share/{share_token}`: Retrieve shared execution data (public
endpoint)

- **Data Models**:
- Created `SharedExecutionResponse` model for public-safe execution data
- Added `ShareRequest` and `ShareResponse` Pydantic models for type-safe
API responses
  - Updated `GraphExecutionMeta` to include share status fields

- **Security**:
- All share management endpoints verify user ownership before allowing
changes
- Public endpoint only exposes OUTPUT block data, no intermediate
execution details
  - Share tokens are UUIDs for security
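
A minimal FastAPI-style sketch of the share-enable endpoint described above (names and persistence are illustrative):

```python
import uuid

from fastapi import APIRouter

router = APIRouter()

@router.post("/graphs/{graph_id}/executions/{graph_exec_id}/share")
async def enable_sharing(graph_id: str, graph_exec_id: str) -> dict:
    # The real endpoint verifies the caller owns this execution first.
    share_token = str(uuid.uuid4())  # unguessable public token
    # ...persist isShared=True, shareToken, sharedAt on the execution...
    return {"share_url": f"/share/{share_token}"}
```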

#### Frontend Changes
- **ShareButton Component**
(`/frontend/src/components/ShareButton.tsx`):
  - Modal dialog for managing share settings
  - Copy-to-clipboard functionality for share links
  - Clear warnings about public accessibility
  - Uses Orval-generated API hooks for enable/disable operations

- **Share Page**
(`/frontend/src/app/(no-navbar)/share/[token]/page.tsx`):
  - Clean, navigation-free page for viewing shared executions
- Reuses existing `RunOutputs` component for consistent output rendering
  - Proper error handling for invalid/disabled share links
  - Loading states during data fetch

- **API Integration**:
- Fixed custom mutator to properly set Content-Type headers for POST
requests with empty bodies
  - Generated TypeScript types via Orval for type-safe API calls

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Test plan: -->
- [x] Enable sharing for an agent execution and verify share link is
generated
  - [x] Copy share link and verify it copies to clipboard
- [x] Open share link in incognito/private browser and verify outputs
are displayed
  - [x] Disable sharing and verify share link returns 404
- [x] Try to enable/disable sharing for another user's execution (should
fail with 404)
  - [x] Verify share page shows proper loading and error states
- [x] Test that only OUTPUT blocks are shown in shared view, no
intermediate data
2025-09-17 06:21:33 +00:00
Zamil Majdy
33679f3ffe feat(platform): Add instructions field to agent submissions (#10931)
## Summary

Added an optional "Instructions" field for agent submissions to help
users understand how to run agents and what to expect.

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/015c4f0b-4bdd-48df-af30-9e52ad283e8b"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/3242cee8-a4ad-4536-bc12-64b491a8ef68"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a9b63e1c-94c0-41a4-a44f-b9f98e446793"
/>


### Changes Made

**Backend:**
- Added `instructions` field to `AgentGraph` and `StoreListingVersion`
database models
- Updated `StoreSubmission`, `LibraryAgent`, and related Pydantic models
- Modified store submission API routes to handle instructions parameter
- Updated all database functions to properly save/retrieve instructions
field
- Added graceful handling for cases where database doesn't yet have the
field
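
On the validation side, the 2000-character limit can be expressed directly on the Pydantic model — a sketch with an assumed model name:

```python
from pydantic import BaseModel, Field

class StoreSubmissionRequest(BaseModel):  # assumed model name
    # Optional; only shown in the run flow when provided
    instructions: str | None = Field(default=None, max_length=2000)
```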

**Frontend:**
- Added instructions field to agent submission flow (PublishAgentModal)
- Positioned below "Recommended Schedule" section as specified
- Added instructions display in library/run flow (RunAgentModal)  
- Positioned above credentials section with informative blue styling
- Added proper form validation with 2000 character limit
- Updated all TypeScript types and API client interfaces

### Key Features

- Optional field - fully backward compatible
- Proper positioning in both submission and run flows
- Character limit validation (2000 chars)
- User-friendly display with "How to use this agent" styling
- Only shows when instructions are provided

### Testing

- Verified Pydantic model validation works correctly
- Confirmed schema validation enforces character limits
- Tested graceful handling of missing database fields
- Code formatting and linting completed

## Test plan

- [ ] Test agent submission with instructions field
- [ ] Test agent submission without instructions (backward
compatibility)
- [ ] Verify instructions display correctly in run modal
- [ ] Test character limit validation
- [ ] Verify database migrations work properly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-17 03:55:45 +00:00
Abhimanyu Yadav
fc8c5ccbb6 feat(backend): enhance agent retrieval logic in store agent page (#10933)
This PR enhances the agent retrieval logic in the store database to
ensure accurate fetching of the latest approved agent versions. The
changes address scenarios where agents may have multiple versions with
different approval statuses.

## 🔧 Changes Made

### Enhanced Agent Retrieval Logic (`get_store_agent_details`)
- **Active Version Priority**: Added logic to prioritize fetching agents
based on the `activeVersionId` when available
- **Fallback to Latest Approved**: When no active version is set, the
system now falls back to the latest approved version (sorted by version
number descending)
- **Improved Accuracy**: Ensures users always see the most relevant
agent version based on the current store listing state
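
The selection logic amounts to something like this sketch (data model names are illustrative):

```python
def pick_version(listing):
    """Return the store listing version to display (illustrative models)."""
    # Prefer the explicitly activated version when one is set
    if listing.active_version_id is not None:
        return next(
            (v for v in listing.versions if v.id == listing.active_version_id),
            None,
        )
    # Otherwise fall back to the latest approved version
    approved = [v for v in listing.versions if v.status == "APPROVED"]
    return max(approved, key=lambda v: v.version, default=None)
```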

### Improved Agent Filtering (`get_my_agents`)
- **Enhanced Store Listing Filter**: Modified the filter to only include
store listings that have at least one available version
- **Nested Version Check**: Added nested filtering to check for
`isAvailable: true` in the versions, preventing empty or unavailable
listings from appearing

## Testing Checklist

- [x] Test fetching agent details with an active version set
- [x] Test fetching agent details without an active version (should fall
back to latest approved)
- [x] Test `get_my_agents` returns only agents with available store
listing versions
- [x] Verify no agents with only unavailable versions appear in results
- [x] Test with agents having multiple versions with different approval
statuses
2025-09-17 02:57:39 +00:00
Reinier van der Leer
7d2ab61546 feat(platform): Disable Trigger Setup through Builder (#10418)
We want users to set up triggers through the Library rather than the
Builder.

- Resolves #10413


https://github.com/user-attachments/assets/515ed80d-6569-4e26-862f-2a663115218c

### Changes 🏗️

- Update node UI to push users to Library for trigger set-up and
management
  - Add note redirecting to Library for trigger set-up
  - Remove webhook status indicator and webhook URL section
- Add `libraryAgent: LibraryAgent` to `BuilderContext` for access inside
`CustomNode`
  - Move library agent loader from `FlowEditor` to `useAgentGraph`

- Implement `migrate_legacy_triggered_graphs` migrator function
- Remove `on_node_activate` hook (which previously handled webhook
setup)
- Propagate `created_at` from DB to `GraphModel` and
`LibraryAgentPreset` models

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing node triggers are converted to triggered presets (visible
in the Library)
    - [x] Converted triggered presets work
  - [x] Trigger node inputs are disabled and handles are hidden
- [x] Trigger node message links to the correct Library Agent when saved
2025-09-16 22:52:51 +00:00
Reinier van der Leer
c2f11dbcfa fix(blocks): Fix feedback loops in AI Structured Response Generator (#10932)
Improve the overall reliability of the AI Structured Response Generator
block from ~40% to ~100%. This block has been giving me a lot of hassle
over the past week and this improvement is an easy win.

- Resolves #10916

### Changes 🏗️

- Improve reliability of AI Structured Response Generator block
  - Fix feedback loops (total success rate ~40% -> 100%)
  - Improve system prompt (one-shot success rate ~40% -> ~76%)
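
The feedback-loop fix boils down to a retry pattern like the sketch below; the block's actual prompt handling and messages differ:

```python
import json

def generate_structured(llm_call, prompt: str, max_retries: int = 3) -> dict:
    """Retry JSON generation, feeding decode errors back to the model."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_retries):
        response = llm_call(messages)
        try:
            return json.loads(response)
        except json.JSONDecodeError as e:
            # Turn the decode error into feedback the model can act on
            messages.append({"role": "assistant", "content": response})
            messages.append(
                {"role": "user",
                 "content": f"Response was not valid JSON ({e}). Please correct it."}
            )
    raise ValueError("No valid JSON after retries")
```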

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] JSON decode errors are turned into a useful feedback message
  - [x] LLM effectively corrects itself based on the feedback message
2025-09-16 22:50:29 +00:00
Nicholas Tindle
f82adeb959 feat(library): Add agent favoriting functionality (#10828)
### Need 💡

This PR introduces the ability for users to "favorite" agents in the
library view, enhancing agent discoverability and organization.
Favorited agents will be visually marked with a heart icon and
prioritized in the library list, appearing at the top. This feature is
distinct from pinning specific agent runs.

### Changes 🏗️

*   **Backend:**
* Updated `LibraryAgent` model in `backend/server/v2/library/model.py`
to include the `is_favorite` field when fetching from the database.
*   **Frontend:**
* Updated `LibraryAgent` TypeScript type in
`autogpt-server-api/types.ts` to include `is_favorite`.
* Modified `LibraryAgentCard.tsx` to display a clickable heart icon,
indicating the favorite status.
* Implemented a click handler on the heart icon to toggle the
`is_favorite` status via an API call, including loading states and toast
notifications.
* Updated `useLibraryAgentList.ts` to implement client-side sorting,
ensuring favorited agents appear at the top of the list.
* Updated `openapi.json` to include `is_favorite` in the `LibraryAgent`
schema and regenerated frontend API types.
    *   Installed `@orval/core` for API generation.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify that the heart icon is displayed correctly on
`LibraryAgentCard` for both favorited (filled red) and unfavorited
(outlined gray) agents.
  - [x] Click the heart icon on an unfavorited agent:
    - [x] Confirm the icon changes to filled red.
    - [x] Verify a "Added to favorites" toast notification appears.
    - [x] Confirm the agent moves to the top of the library list.
- [x] Check that the agent card does not navigate to the agent details
page.
  - [x] Click the heart icon on a favorited agent:
    - [x] Confirm the icon changes to outlined gray.
    - [x] Verify a "Removed from favorites" toast notification appears.
- [x] Confirm the agent's position adjusts in the list (no longer at the
very top unless other sorting criteria apply).
- [x] Check that the agent card does not navigate to the agent details
page.
- [x] Test the loading state: rapidly click the heart icon and observe
the `opacity-50 cursor-not-allowed` styling.
- [x] Verify that the sorting correctly places all favorited agents at
the top, maintaining their original relative order within the favorited
group, and the same for unfavorited agents.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---
<a
href="https://cursor.com/background-agent?bcId=bc-43e8f98c-e4ea-4149-afc8-5eea3d1ab439">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg">
<img alt="Open in Cursor" src="https://cursor.com/open-in-cursor.svg">
  </picture>
</a>
<a
href="https://cursor.com/agents?id=bc-43e8f98c-e4ea-4149-afc8-5eea3d1ab439">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg">
    <img alt="Open in Web" src="https://cursor.com/open-in-web.svg">
  </picture>
</a>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-16 22:43:50 +00:00
Nicholas Tindle
6f08a1cca7 fix: the api key credentials weren't registering correctly (#10936) 2025-09-16 13:27:04 -05:00
Ubbe
1ddf92eed4 fix(frontend): new agent run page design refinements (#10924)
## Changes 🏗️

Implements all the following changes...

1. The margins between the runs, on the left hand side.. reduced them
around `6px` ?
2. Make agent inputs full width
3. Make "Schedule setup" section displayed in a second modal
4. When an agent is running, we should not show the "Delete agent"
button
5. Copy changes around the actions for agent/runs
6. Large button height should be `52px`
7. Fix margins between + New Run button and the runs & scheduled menu
8. Make border white on cards

Also... 
- improve the naming of some components to reflect better their
context/usage
- show on the inputs section when an agent is using already API keys or
credentials
- fix runs/schedules not auto-selecting once created

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent runs page enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-16 14:34:52 +00:00
Reinier van der Leer
4c0dd27157 dx(platform): Add manual dispatch to deploy workflows (#10918)
When deploying from the infra repo, migrations aren't run which can
cause issues. We need to be able to manually dispatch deployment from
this repo so that the migrations are run as well.

### Changes 🏗️

- add manual dispatch to deploy workflows

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Either it works or it doesn't, but this PR won't break anything
existing
2025-09-16 15:57:56 +02:00
Zamil Majdy
17fcf68f2e feat: Separate OpenAI key for smart agent execution summary and other internal AI calls (#10930)
### Changes 🏗️

Separate the API key for internal usage (smart agent execution summary)
and block usage.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual test after deployment
2025-09-16 09:14:27 +00:00
Reinier van der Leer
381558342a fix(frontend/builder): Fix moved blocks disappearing on no-op save (#10927)
- Resolves #10926

### Changes 🏗️

- Fix save no-op if graph has no changes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving a graph after only moving nodes doesn't make those nodes
disappear
2025-09-16 08:15:10 +00:00
Zamil Majdy
1fdc02467b feat(backend): Add comprehensive Prometheus instrumentation for observability (#10923)
## Summary
- Implement comprehensive Prometheus metrics instrumentation for all
FastAPI services
- Add custom business metrics for graph/block executions
- Enable dual publishing to both Grafana Cloud and internal Prometheus

## Related Infrastructure PR
-
https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/214

## Changes

### 📊 Metrics Infrastructure
- Added `prometheus-fastapi-instrumentator` dependency for automatic
HTTP metrics
- Created centralized `instrumentation.py` module for consistent metrics
across services
- Instrumented REST API, WebSocket, and External API services

### 📈 Automatic HTTP Metrics
All FastAPI services now automatically collect:
- **Request latency**: Histogram with custom buckets (10ms to 60s)
- **Request/response size**: Track payload sizes
- **Request counts**: By method, endpoint, and status code
- **Active requests**: Real-time count of in-progress requests
- **Error rates**: 4xx and 5xx responses

### 🎯 Custom Business Metrics
Added domain-specific metrics:
- **Graph executions**: Count by status (success/error/validation_error)
- **Block executions**: Count and duration by block_type and status
- **WebSocket connections**: Active connection gauge
- **Database queries**: Duration histogram by operation and table
- **RabbitMQ messages**: Count by queue and status
- **Authentication**: Attempts by method and status
- **API key usage**: By provider and block type
- **Rate limiting**: Hit count by endpoint

### 🔌 Service Endpoints
Each service exposes metrics at `/metrics`:
- REST API (port 8006): `/metrics`
- WebSocket (port 8001): `/metrics`
- External API: `/external-api/metrics`
- Executor (port 8002): Already had metrics, now enhanced
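
Wiring this up is roughly as follows, using the real `prometheus-fastapi-instrumentator` and `prometheus_client` APIs (the metric name is illustrative):

```python
from fastapi import FastAPI
from prometheus_client import Counter
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Automatic HTTP metrics plus a /metrics endpoint
Instrumentator().instrument(app).expose(app, endpoint="/metrics")

# Example custom business metric: graph executions by status
graph_executions = Counter(
    "graph_executions_total",
    "Graph executions by status",
    ["status"],
)
graph_executions.labels(status="success").inc()
```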

### 🏷️ Kubernetes Integration
Updated Helm charts with pod annotations:
```yaml
prometheus.io/scrape: "true"
prometheus.io/port: "8006"  # or appropriate port
prometheus.io/path: "/metrics"
```

## Testing
- [x] Install dependencies: `poetry install`
- [x] Run services: `poetry run serve`
- [x] Check metrics endpoints are accessible
- [x] Verify metrics are being collected
- [x] Confirm Grafana Agent can scrape metrics
- [x] Test graph/block execution tracking
- [x] Verify WebSocket connection metrics

## Performance Impact
- Minimal overhead (~1-2ms per request)
- Metrics are collected asynchronously
- Can be disabled via `ENABLE_METRICS=false` env var

## Next Steps
1. Deploy to dev environment
2. Configure Grafana Cloud dashboards
3. Set up alerting rules based on metrics
4. Add more custom business metrics as needed

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-16 12:58:04 +07:00
Nicholas Tindle
f262bb9307 fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
- Added `user_timezone` parameter to `add_graph_execution_schedule()`
method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
- Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
- Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter
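
The core of the backend change looks roughly like this sketch — APScheduler's `CronTrigger` accepts a timezone, while the surrounding names are illustrative:

```python
from zoneinfo import ZoneInfo

from apscheduler.triggers.cron import CronTrigger

def build_trigger(cron: str, user_timezone: str | None) -> CronTrigger:
    # Fall back to UTC (the real code logs a warning) if no timezone is set
    tz = ZoneInfo(user_timezone or "UTC")
    # "0 14 * * *" then fires at 14:00 local time, tracking DST transitions
    return CronTrigger.from_crontab(cron, timezone=tz)
```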

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 18:18:03 -05:00
Nicholas Tindle
5a6978b07d feat(frontend): Add expandable view for block output (#10773)
### Need for these changes 💥


https://github.com/user-attachments/assets/5b9007a1-0c49-44c6-9e8b-52bf23eec72c


Users currently cannot view the full output result from a block when
inspecting the Output Data History panel or node previews, as the
content is clipped. This makes debugging and analysis of complex outputs
difficult, forcing users to copy data to external editors. This feature
improves developer efficiency and user experience, especially for blocks
with large or nested responses, and reintroduces a highly requested
functionality that existed previously.

### Changes 🏗️

* **New `ExpandableOutputDialog` component:** Introduced a reusable
modal dialog (`ExpandableOutputDialog.tsx`) designed to display
complete, untruncated output data.
* **`DataTable.tsx` enhancement:** Added an "Expand" button (Maximize2
icon) to each data entry in the Output Data History panel. This button
appears on hover and opens the `ExpandableOutputDialog` for a full view
of the data.
* **`NodeOutputs.tsx` enhancement:** Integrated the "Expand" button into
node output previews, allowing users to view full output data directly
from the node details.
* The `ExpandableOutputDialog` provides a large, scrollable content
area, displaying individual items in organized cards, with options to
copy individual items or all data, along with execution ID and pin name
metadata.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Navigate to an agent session with executed blocks.
  - [x] Open the Output Data History panel.
  - [x] Hover over a data entry to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full, untruncated content.
  - [x] Verify scrolling works for large outputs within the dialog.
  - [x] Test "Copy Item" and "Copy All" buttons within the dialog.
  - [x] Navigate to a custom node in the graph.
  - [x] Inspect a node's output (if applicable).
  - [x] Hover over the output data to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full content.

---
Linear Issue:
[OPEN-2593](https://linear.app/autogpt/issue/OPEN-2593/add-expandable-view-for-full-block-output-preview)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-15 14:19:40 +00:00
Nicholas Tindle
339ec733cb fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
  - Added `user_timezone` parameter to `add_graph_execution_schedule()` method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
  - Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
  - Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter
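
To make the backend behavior above concrete, here is a minimal sketch of how a timezone-aware trigger can be built with APScheduler's `CronTrigger` (the helper name, signature, and fallback logic are illustrative, not the platform's exact code):

```python
# Minimal sketch (not the platform's exact code): build a CronTrigger in the
# user's timezone, falling back to UTC with a warning when none is available.
import logging
from zoneinfo import ZoneInfo

from apscheduler.triggers.cron import CronTrigger

logger = logging.getLogger(__name__)


def build_cron_trigger(cron: str, user_timezone: str | None) -> CronTrigger:
    if not user_timezone:
        logger.warning("No timezone for user; defaulting schedule to UTC")
        user_timezone = "UTC"
    # The cron expression is interpreted in the given timezone, so a
    # "daily at 14:00" schedule keeps firing at 14:00 local time even
    # across DST transitions.
    return CronTrigger.from_crontab(cron, timezone=ZoneInfo(user_timezone))
```

With the trigger pinned to e.g. `America/New_York`, APScheduler recomputes `next_run_time` in that zone, which is what keeps a 2:00 PM schedule at 2:00 PM local time across DST boundaries.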

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 06:15:52 +00:00
Ubbe
6575b655f0 fix(frontend): improve agent runs page loading state (#10914)
## Changes 🏗️


https://github.com/user-attachments/assets/356e5364-45be-4f6e-bd1c-cc8e42bf294d

And also tidy up some of the logic around hooks. I also added an
`okData` helper to avoid having to type-cast (`as`) so much with the
generated types (given the `response` is a union depending on `status:
200 | 400 | 401` ... )

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run PR locally with the `new-agent-runs` flag enabled
  - [x] Check the nice loading state 

### For configuration changes:

None
2025-09-15 04:56:26 +00:00
Ubbe
7c2df24d7c fix(frontend): delete actions behind dialogs in agent runs view (#10915)
## Changes 🏗️

<img width="800" height="630" alt="Screenshot 2025-09-12 at 17 38 34"
src="https://github.com/user-attachments/assets/103d7e10-e924-4831-b0e7-b7df608a205f"
/>

<img width="800" height="524" alt="Screenshot 2025-09-12 at 17 38 30"
src="https://github.com/user-attachments/assets/aeec2ac7-4bea-4ec9-be0c-4491104733cb"
/>

<img width="800" height="750" alt="Screenshot 2025-09-12 at 17 38 26"
src="https://github.com/user-attachments/assets/e0b28097-8352-4431-ae4a-9dc3e3bcf9eb"
/>

- All the `Delete` actions on the new Agent Library Runs page should be
behind confirmation dialogs
- Re-arrange the file structure a bit 💆🏽 
- Make the buttons' min-width a bit more generous

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above 

### For configuration changes:

None
2025-09-15 04:55:58 +00:00
Reinier van der Leer
23eafa178c fix(backend/db): Unbreak store materialized views refresh job (#10906)
- Resolves #10898

### Changes 🏗️

- Fix and re-create `refresh_store_materialized_views` DB function and
its pg_cron job

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without issues (locally)
  - [x] Refresh function can be run without issues (locally)
2025-09-14 23:31:18 +00:00
Zamil Majdy
27fccdbf31 fix(backend/executor): Make graph execution status transitions atomic and enforce state machine (#10863)
## Summary
- Fixed race condition issues in `update_graph_execution_stats` function
- Implemented atomic status transitions using database-level constraints
- Added state machine enforcement to prevent invalid status transitions
- Eliminated code duplication and improved error handling

## Problem
The `update_graph_execution_stats` function had race condition
vulnerabilities where concurrent status updates could cause invalid
transitions like RUNNING → QUEUED. The function was not durable and
could result in executions moving backwards in their lifecycle, causing
confusion and potential system inconsistencies.

## Root Cause Analysis
1. **Race Conditions**: The function used a broad OR clause that allowed
updates from multiple source statuses without validating the specific
transition
2. **No Atomicity**: No atomic check to ensure the status hadn't changed
between read and write operations
3. **Missing State Machine**: No enforcement of valid state transitions
according to execution lifecycle rules

## Solution Implementation

### 1. Atomic Status Transitions
- Use database-level atomicity by including the current allowed source
statuses in the WHERE clause during updates
- This ensures only valid transitions can occur at the database level

### 2. State Machine Enforcement
Define valid transitions as a module constant
`VALID_STATUS_TRANSITIONS`:
- `INCOMPLETE` → `QUEUED`, `RUNNING`, `FAILED`, `TERMINATED`
- `QUEUED` → `RUNNING`, `FAILED`, `TERMINATED`  
- `RUNNING` → `COMPLETED`, `TERMINATED`, `FAILED`
- `TERMINATED` → `RUNNING` (for resuming halted execution)
- `COMPLETED` and `FAILED` are terminal states with no allowed
transitions
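
Written out as the module constant, the table above might look like this (a sketch; the `ExecutionStatus` enum name and values are assumed from the description, not copied from the codebase):

```python
# Sketch of the state machine as a module constant, mapping each status to
# the set of statuses it may transition to. ExecutionStatus is an assumed
# stand-in for the platform's actual status enum.
from enum import Enum


class ExecutionStatus(str, Enum):
    INCOMPLETE = "INCOMPLETE"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    TERMINATED = "TERMINATED"
    FAILED = "FAILED"


VALID_STATUS_TRANSITIONS: dict[ExecutionStatus, set[ExecutionStatus]] = {
    ExecutionStatus.INCOMPLETE: {
        ExecutionStatus.QUEUED,
        ExecutionStatus.RUNNING,
        ExecutionStatus.FAILED,
        ExecutionStatus.TERMINATED,
    },
    ExecutionStatus.QUEUED: {
        ExecutionStatus.RUNNING,
        ExecutionStatus.FAILED,
        ExecutionStatus.TERMINATED,
    },
    ExecutionStatus.RUNNING: {
        ExecutionStatus.COMPLETED,
        ExecutionStatus.TERMINATED,
        ExecutionStatus.FAILED,
    },
    # TERMINATED can resume; COMPLETED and FAILED are terminal.
    ExecutionStatus.TERMINATED: {ExecutionStatus.RUNNING},
    ExecutionStatus.COMPLETED: set(),
    ExecutionStatus.FAILED: set(),
}
```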

### 3. Improved Error Handling
- Early validation with clear error messages for invalid parameters
- Graceful handling when transitions fail - return current state instead
of None
- Proper logging of invalid transition attempts

### 4. Code Quality Improvements
- Eliminated code duplication in fetch logic
- Added proper type hints and casting
- Made status transitions constant for better maintainability

## Benefits
- **Prevents Invalid Regressions**: No more RUNNING → QUEUED transitions
- **Atomic Operations**: Database-level consistency guarantees
- **Clear Error Messages**: Better debugging and monitoring
- **Maintainable Code**: Clean logic flow without duplication
- **Race Condition Safe**: Handles concurrent updates gracefully

## Test Plan
- [x] Function imports and basic structure validation
- [x] Code formatting and linting checks pass
- [x] Type checking passes for modified files
- [x] Pre-commit hooks validation

## Technical Details
The key insight is using the database query itself to enforce valid
transitions by filtering on allowed source statuses in the WHERE clause.
This makes the operation truly atomic and eliminates the race condition
window that existed in the previous implementation.
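
As a rough illustration of that pattern, using the `VALID_STATUS_TRANSITIONS` sketch above (the table and column names and the raw-SQL client are assumptions for illustration, not the actual implementation):

```python
# Sketch of an atomic status transition: the UPDATE's WHERE clause only
# matches rows whose current status is a valid source for the new status,
# so an invalid transition simply updates zero rows.
async def try_set_status(db, execution_id: str, new_status: ExecutionStatus) -> bool:
    allowed_sources = [
        status.value
        for status, targets in VALID_STATUS_TRANSITIONS.items()
        if new_status in targets
    ]
    row_count = await db.execute(
        'UPDATE "AgentGraphExecution"'
        ' SET "executionStatus" = $1'
        ' WHERE "id" = $2 AND "executionStatus" = ANY($3)',
        new_status.value,
        execution_id,
        allowed_sources,
    )  # db.execute is assumed here to return the number of affected rows
    return row_count == 1  # False -> caller re-fetches and returns current state
```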

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-14 23:31:02 +00:00
Reinier van der Leer
fb8fbc9d1f fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 19:42:16 +02:00
Reinier van der Leer
6a86e70fd6 fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

### Changes 🏗️

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 17:11:31 +00:00
Ubbe
6a2d7e0fb0 fix(frontend): handle avatar missing images better (#10903)
## Changes 🏗️

According to Claude, this should make `next/image` more tolerant when
optimising images from certain origins.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Deploy preview to dev
  - [x] Verify avatar images load better 

### For configuration changes:

None
2025-09-12 02:56:24 +00:00
Nicholas Tindle
3d6ea3088e fix(backend): Add Airtable record normalization and upsert features (#10908)
Introduces normalization of Airtable record outputs to include all
fields with appropriate empty values and optional field metadata.
Enhances record creation to support finding existing records by
specified fields and updating them if found, enabling upsert-like
behavior. Updates block schemas and logic for the list, get, and create
operations to support these new features.

### Changes 🏗️
- Allows normalization of the responses of the Airtable blocks (sketched below)
- Allows the create-base operation to find bases that were already made
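
A hedged sketch of the normalization idea (Airtable's record shape `{"id": ..., "fields": {...}}` is real; the field-type-to-empty-value mapping and helper name are illustrative assumptions):

```python
# Sketch: ensure every field from the table schema is present in a record's
# output, substituting a type-appropriate empty value when Airtable omits it.
# The EMPTY_VALUES mapping is an illustrative assumption, not the block's code.
EMPTY_VALUES = {
    "checkbox": False,
    "number": None,
    "multipleSelects": [],
    "singleLineText": "",
}


def normalize_record(record: dict, table_fields: list[dict]) -> dict:
    fields = dict(record.get("fields", {}))
    for field in table_fields:  # each field: {"name": ..., "type": ...}
        fields.setdefault(field["name"], EMPTY_VALUES.get(field["type"]))
    return {**record, "fields": fields}
```

This is presumably why the test plan below checks that checkbox results are returned: Airtable omits empty fields from `fields`, so unchecked checkboxes would come back as `False` instead of missing.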

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test that it doesn't break existing agents
  - [x] Test that the results for checkboxes are returned
2025-09-11 20:26:34 +00:00
Nicholas Tindle
64b4480b1e Merge branch 'master' into dev 2025-09-11 15:07:22 -05:00
Swifty
f490b01abb feat(frontend): Add Vercel Analytics and Speed Insights (#10904)
## Summary
- Added Vercel Analytics for tracking page views and user interactions
- Added Vercel Speed Insights for monitoring Web Vitals and performance
metrics
- Fixed incorrect placement of SpeedInsights component (was between html
and head tags)

## Changes
- Import Analytics and SpeedInsights components from Vercel packages
- Place both components correctly within the body tag
- Ensure proper HTML structure and Next.js best practices

## Test plan
- [x] Verify components are imported correctly
- [x] Confirm no HTML validation errors
- [x] Test that analytics work when deployed to Vercel
- [x] Verify Speed Insights metrics are being collected
2025-09-11 10:58:11 +00:00
Bentlybro
e56a4a135d Revert "fix(backend): Add Airtable record normalization + find/create base (#10891)"
This reverts commit 5da41e0753.
2025-09-10 14:36:40 +01:00
Ubbe
e70c970ab6 feat(frontend): new <Avatar /> component using next/image (#10897)
## Changes 🏗️

<img width="800" height="648" alt="Screenshot 2025-09-10 at 22 00 01"
src="https://github.com/user-attachments/assets/eb396d62-01f2-45e5-9150-4e01dfcb71d0"
/><br />

Adds a new `<Avatar />` component and uses it across the app. It is a
copy of
[shadcn/avatar](https://duckduckgo.com/?q=shadcn+avatar&t=brave&ia=web)
with the following modifications:
- renders images with
[`next/image`](https://duckduckgo.com/?q=next+image&t=brave&ia=web) by
default
  - this ensures avatars rendered on the app are optimised and resized ✔️
  - it will work as long as all the domains are white-listed in
`next.config.mjs`
- allows bypassing it and using a normal `<img />` tag via an `as` prop
if needed
  - sometimes we might need to render images from a dynamic CDN 🤷🏽‍♂️

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] ...

### For configuration changes:

None
2025-09-10 13:25:09 +00:00
Abhimanyu Yadav
3bbce71678 feat(builder): Block menu redesign - part 3 (#10864)
### Changes 🏗️

#### Block Menu Redesign - Part 3

This PR continues the block menu redesign effort, implementing the new
content sections and improving the overall user experience. The changes
focus on better organization, pagination, error handling, and visual
consistency.

#### Key Features Implemented:

**1. New Content Organization**
- **All Blocks Content**: Complete listing of all available blocks with
category-based organization and infinite scroll support
(`AllBlocksContent/`)
- **My Agents Content**: Display and manage user's own agents with
pagination (`MyAgentsContent/`)
- **Marketplace Agents Content**: Browse and add marketplace agents with
improved loading states (`MarketplaceAgentsContent/`)
- **Integration Blocks**: Dedicated view for integration-specific blocks
with better filtering (`IntegrationBlocks/`)
- **Suggestion Content**: Smart suggestions based on user context and
search history (`SuggestionContent/`)
- **Integrations Content**: Browse available integrations in a dedicated
view (`IntegrationsContent/`)

**2. Enhanced UI Components**
- **Paginated Lists**: New pagination components for blocks and
integrations (`PaginatedBlocksContent/`, `PaginatedIntegrationList/`)
- **Block List**: Reusable block list component with consistent styling
(`BlockList/`)
- **Improved Error Handling**: Comprehensive error states with retry
functionality across all content types
- **Loading States**: Skeleton loaders for better perceived performance

**3. Infrastructure Improvements**
- **Centralized Styles**: New `style.ts` file for consistent styling
across components
- **Better State Management**: Enhanced context provider with improved
menu state handling
- **Mock Flag Support**: Added feature flags for testing new block
features
- **Default State Enum**: Refactored to use enums for menu default
states

**4. Visual Assets**
- Added 50+ new integration icons/logos for better visual representation
- Updated existing integration images for consistency

**5. Code Quality**
- Improved error handling with proper error cards and retry mechanisms
- Consistent formatting and import organization
- Enhanced TypeScript types and interfaces
- Better separation of concerns with dedicated hooks for each content
type

#### Technical Details:
- **Files Changed**: 96 files
- **Additions**: 1,380 lines
- **Deletions**: 162 lines
- **New Components**: 10+ new React components with dedicated hooks
- **Integration Icons**: 50+ new PNG images for various integrations

#### Breaking Changes:
None - All changes are backwards compatible

---

### Test Plan 📋

- [x] Create a new agent and verify all blocks are accessible
- [x] Test infinite scroll in "All Blocks" view
- [x] Verify pagination works correctly in marketplace agents view
- [x] Test error states by simulating network failures
- [x] Check that all new integration icons display correctly
- [x] Test adding agents from marketplace view
- [x] Ensure skeleton loaders appear during data fetching

> Generated by claude
2025-09-10 11:58:07 +00:00
Abhimanyu Yadav
34fbf4377f fix(frontend): allow lazy loading of images (#10895)
The `next/image` component has built-in lazy loading enabled, but in
some components we were bypassing it with the `priority` flag, so this
PR reverts that.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Lazy loading is working perfectly locally.
2025-09-10 11:38:37 +00:00
dependabot[bot]
f682ef885a chore(frontend/deps-dev): Bump 16 dev dependencies to newer minor versions (#10837)
Bumps the development-dependencies group with 16 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
|
[@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests)
| `4.1.0` | `4.1.1` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.2`
| `1.55.0` |
|
[@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding)
| `9.1.2` | `9.1.4` |
|
[@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs)
| `9.1.2` | `9.1.4` |
|
[@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query)
| `5.83.1` | `5.86.0` |
|
[@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools)
| `5.84.2` | `5.86.0` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `24.2.1` | `24.3.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.3` |
`13.1.4` |
| [concurrently](https://github.com/open-cli-tools/concurrently) |
`9.2.0` | `9.2.1` |
|
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
| `15.4.6` | `15.5.2` |
|
[eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin)
| `9.1.2` | `9.1.4` |
| [msw](https://github.com/mswjs/msw) | `2.10.4` | `2.11.1` |
|
[storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core)
| `9.1.2` | `9.1.4` |


Updates `@chromatic-com/storybook` from 4.1.0 to 4.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/releases"><code>@​chromatic-com/storybook</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v4.1.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/389">#389</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e38f120b09"><code>e38f120</code></a>
Bump version to: 4.1.1 [skip ci]</li>
<li><a
href="adb8570cbb"><code>adb8570</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="9e38298e21"><code>9e38298</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/390">#390</a>
from chromaui/next</li>
<li><a
href="7b3a184d0a"><code>7b3a184</code></a>
Merge branch 'main' into next</li>
<li><a
href="2faff8b888"><code>2faff8b</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/389">#389</a>
from chromaui/ndelangen/sb10-beta-compatibility</li>
<li><a
href="51015a75f2"><code>51015a7</code></a>
Update yarn.lock to add support for Storybook 10.0.0-0</li>
<li><a
href="6f050adca2"><code>6f050ad</code></a>
Update package.json</li>
<li>See full diff in <a
href="https://github.com/chromaui/addon-visual-tests/compare/v4.1.0...v4.1.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `@playwright/test` from 1.54.2 to 1.55.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/microsoft/playwright/releases"><code>@​playwright/test</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v1.55.0</h2>
<h2>New APIs</h2>
<ul>
<li>New Property <a
href="https://playwright.dev/docs/api/class-teststepinfo#test-step-info-title-path">testStepInfo.titlePath</a>
Returns the full title path starting from the test file, including test
and step titles.</li>
</ul>
<h2>Codegen</h2>
<ul>
<li>Automatic <code>toBeVisible()</code> assertions: Codegen can now
generate automatic <code>toBeVisible()</code> assertions for common UI
interactions. This feature can be enabled in the Codegen settings
UI.</li>
</ul>
<h2>Breaking Changes</h2>
<ul>
<li>⚠️ Dropped support for Chromium extension manifest v2.</li>
</ul>
<h2>Miscellaneous</h2>
<ul>
<li>Added support for Debian 13 &quot;Trixie&quot;.</li>
</ul>
<h2>Browser Versions</h2>
<ul>
<li>Chromium 140.0.7339.16</li>
<li>Mozilla Firefox 141.0</li>
<li>WebKit 26.0</li>
</ul>
<p>This version was also tested against the following stable
channels:</p>
<ul>
<li>Google Chrome 139</li>
<li>Microsoft Edge 139</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f992162f04"><code>f992162</code></a>
chore: mark v1.55.0 (<a
href="https://redirect.github.com/microsoft/playwright/issues/37121">#37121</a>)</li>
<li><a
href="4a92ea0025"><code>4a92ea0</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37113">#37113</a>):
docs: add release-notes for v1.55</li>
<li><a
href="aa05507bba"><code>aa05507</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37114">#37114</a>):
test: move browser._launchServer in child process</li>
<li><a
href="27ae7dc639"><code>27ae7dc</code></a>
test: tree gardening (<a
href="https://redirect.github.com/microsoft/playwright/issues/37107">#37107</a>)</li>
<li><a
href="cd09d859a9"><code>cd09d85</code></a>
test: unflake &quot;should pick element&quot; (<a
href="https://redirect.github.com/microsoft/playwright/issues/37103">#37103</a>)</li>
<li><a
href="72e47728ba"><code>72e4772</code></a>
chore(trace-viewer): remove unused code (<a
href="https://redirect.github.com/microsoft/playwright/issues/37097">#37097</a>)</li>
<li><a
href="5b8c7d648a"><code>5b8c7d6</code></a>
chore(dotnet): float is non-nullable (<a
href="https://redirect.github.com/microsoft/playwright/issues/37095">#37095</a>)</li>
<li><a
href="c7bf035c35"><code>c7bf035</code></a>
test(webkit): closing dialog &gt; contenteditable (<a
href="https://redirect.github.com/microsoft/playwright/issues/37084">#37084</a>)</li>
<li><a
href="9fd6986f8d"><code>9fd6986</code></a>
test: skip debug-controller tests in driver mode (<a
href="https://redirect.github.com/microsoft/playwright/issues/37090">#37090</a>)</li>
<li><a
href="4c2f44d591"><code>4c2f44d</code></a>
test(bidi): use the nightly channel only for Firefox in CI (<a
href="https://redirect.github.com/microsoft/playwright/issues/37086">#37086</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/microsoft/playwright/compare/v1.54.2...v1.55.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-a11y` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-a11y</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/a11y">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-docs` from 9.1.2 to 9.1.4
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="0f86613a92"><code>0f86613</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32287">#32287</a>
from storybookjs/shilman/error-utm</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li><a
href="f8ff03a47d"><code>f8ff03a</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32238">#32238</a>
from storybookjs/sidnioulz/issue-31436-table</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/docs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-links` from 9.1.2 to 9.1.4
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/links">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-onboarding` from 9.1.2 to 9.1.4
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/onboarding">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/nextjs` from 9.1.2 to 9.1.4
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="1dba82472a"><code>1dba824</code></a>
Next.js: Avoid multiple webpack versions at runtime</li>
<li><a
href="a0151efbce"><code>a0151ef</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32306">#32306</a>
from storybookjs/valentin/fix-nextjs-webpack-error</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="bbb4ffe209"><code>bbb4ffe</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32286">#32286</a>
from storybookjs/shilman/configure-utm</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/frameworks/nextjs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/eslint-plugin-query` from 5.83.1 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/eslint-plugin-query</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li>See full diff in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/eslint-plugin-query">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/react-query-devtools` from 5.84.2 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/react-query-devtools</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li><a
href="0a35234ce4"><code>0a35234</code></a>
release: v5.85.9</li>
<li><a
href="aec19c93a5"><code>aec19c9</code></a>
release: v5.85.8</li>
<li><a
href="a978b34b1b"><code>a978b34</code></a>
release: v5.85.7</li>
<li><a
href="b998f68a57"><code>b998f68</code></a>
release: v5.85.6</li>
<li><a
href="0a0e01443d"><code>0a0e014</code></a>
release: v5.85.5</li>
<li><a
href="ef0a9d2c7e"><code>ef0a9d2</code></a>
release: v5.85.4</li>
<li><a
href="b6516bd25e"><code>b6516bd</code></a>
release: v5.85.3</li>
<li><a
href="e14f39c6ee"><code>e14f39c</code></a>
release: v5.85.2</li>
<li>Additional commits viewable in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/react-query-devtools">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/node` from 24.2.1 to 24.3.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />

Updates `chromatic` from 13.1.3 to 13.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/releases">chromatic's
releases</a>.</em></p>
<blockquote>
<h2>v13.1.4</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/blob/main/CHANGELOG.md">chromatic's
changelog</a>.</em></p>
<blockquote>
<h1>v13.1.4 (Fri Aug 29 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c85259331b"><code>c852593</code></a>
Bump version to: 13.1.4 [skip ci]</li>
<li><a
href="aae7c41b86"><code>aae7c41</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="062d1fadee"><code>062d1fa</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1202">#1202</a>
from chromaui/feat_fix_links</li>
<li><a
href="15f72ae8ea"><code>15f72ae</code></a>
Removes references to the viewlayer used by noCSFGlobs</li>
<li><a
href="1a0e200bce"><code>1a0e200</code></a>
Remove viewlayer from nocsf file</li>
<li><a
href="f4c52ab0af"><code>f4c52ab</code></a>
Attempt to fix linting errors</li>
<li><a
href="0db944b07c"><code>0db944b</code></a>
Feat:Fix outdated and incorrect links in the CLI</li>
<li><a
href="8879737ab0"><code>8879737</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1201">#1201</a>
from chromaui/jmhobbs/cap-3287-chromatic-cicli-issue...</li>
<li><a
href="d3c25a21a1"><code>d3c25a2</code></a>
Improve links during onboarding with system errors</li>
<li><a
href="cce1b29cdf"><code>cce1b29</code></a>
Show setup URL on build errors when onboarding.</li>
<li>See full diff in <a
href="https://github.com/chromaui/chromatic-cli/compare/v13.1.3...v13.1.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `concurrently` from 9.2.0 to 9.2.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/open-cli-tools/concurrently/releases">concurrently's
releases</a>.</em></p>
<blockquote>
<h2>v9.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>chore: update eslint-plugin-simple-import-sort from v10 to v12 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/551">open-cli-tools/concurrently#551</a></li>
<li>chore: update eslint-config-prettier from v9 to v10 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/552">open-cli-tools/concurrently#552</a></li>
<li>Remove lodash by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/555">open-cli-tools/concurrently#555</a></li>
<li>chore: update coveralls-next from v4 to v5 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/557">open-cli-tools/concurrently#557</a></li>
<li>Replace jest with vitest by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/554">open-cli-tools/concurrently#554</a></li>
<li>Upgrade to pnpm v10 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/558">open-cli-tools/concurrently#558</a></li>
<li>chore: remove unused eslint-plugin-jest by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/559">open-cli-tools/concurrently#559</a></li>
<li>Minor dependency updates by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/560">open-cli-tools/concurrently#560</a></li>
<li>Migrate to ESLint v9 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/561">open-cli-tools/concurrently#561</a></li>
<li>Update shell-quote to 1.8.3 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/562">open-cli-tools/concurrently#562</a></li>
<li>Full coverage by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/563">open-cli-tools/concurrently#563</a></li>
<li>Update GH actions/workflows, enable NPM provenance by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/564">open-cli-tools/concurrently#564</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="414cd016c6"><code>414cd01</code></a>
9.2.1</li>
<li><a
href="0dfedb028c"><code>0dfedb0</code></a>
Update GH actions/workflows, enable npm provenance (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/564">#564</a>)</li>
<li><a
href="ee81511999"><code>ee81511</code></a>
Remove obsolete tsdk config</li>
<li><a
href="09d3d7b11f"><code>09d3d7b</code></a>
Full coverage (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/563">#563</a>)</li>
<li><a
href="8cfc6a6cb4"><code>8cfc6a6</code></a>
Update shell-quote to 1.8.3 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/562">#562</a>)</li>
<li><a
href="4c403f8b01"><code>4c403f8</code></a>
Migrate to ESLint v9 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/561">#561</a>)</li>
<li><a
href="8bfcaf7828"><code>8bfcaf7</code></a>
Minor dependency updates (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/560">#560</a>)</li>
<li><a
href="389fec4830"><code>389fec4</code></a>
Enable watch mode &amp; coverage for unit tests by default</li>
<li><a
href="7993ce6817"><code>7993ce6</code></a>
chore: remove unused eslint-plugin-jest (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/559">#559</a>)</li>
<li><a
href="58300f45eb"><code>58300f4</code></a>
Remove obsolete .npmrc file</li>
<li>Additional commits viewable in <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-config-next` from 15.4.6 to 15.5.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: disable unknownatrules lint rule entirely (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83059">#83059</a>)</li>
<li>revert: add ?dpl to fonts in /_next/static/media (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83062">#83062</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a> and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: aliased navigations should apply scroll handling (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82900">#82900</a>)</li>
<li>Turbopack: fix invalid NFT entry with file behind symlink (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82887">#82887</a>)</li>
<li>fix: typesafe linking to route handlers and pages API routes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82858">#82858</a>)</li>
<li>fix: change &quot;noUnknownAtRules&quot; to &quot;warn&quot; for
Biome (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82974">#82974</a>)</li>
<li>fix: add path normalization to getRelativePath for Windows (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82918">#82918</a>)</li>
<li>feat: add typesafety with config.typedRoutes to redirect() and
permanentRedirect() (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82860">#82860</a>)</li>
<li>fix: avoid importing types that will be unused (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82856">#82856</a>)</li>
<li>fix: update the config.api.responseLimit type (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82852">#82852</a>)</li>
<li>fix: update validation return types (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82854">#82854</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a>, <a
href="https://github.com/mischnic"><code>@​mischnic</code></a>, and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1-canary.26</h2>
<h3>Core Changes</h3>
<ul>
<li>Upgrade React from <code>2805f0ed-20250903</code> to
<code>3302d1f7-20250903</code>: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83400">#83400</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>Fixed broken link in backend-for-frontend.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83310">#83310</a></li>
<li>Update next-response.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83302">#83302</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/geraldochristiano"><code>@​geraldochristiano</code></a>
and <a href="https://github.com/blntgvn42"><code>@​blntgvn42</code></a>
for helping!</p>
<h2>v15.5.1-canary.25</h2>
<h3>Core Changes</h3>
<ul>
<li>fix(ppr): fix prerender info matching for rewritten paths: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83359">#83359</a></li>
<li>fix: shrink bottom stack container height: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83377">#83377</a></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="497ec6aa08"><code>497ec6a</code></a>
v15.5.2</li>
<li><a
href="cc68ced552"><code>cc68ced</code></a>
v15.5.1</li>
<li><a
href="7e08c8223d"><code>7e08c82</code></a>
v15.5.0</li>
<li>...</li>
</ul>
</details>

_Description has been truncated_

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 10:21:53 +00:00
Reinier van der Leer
2ffd249aac fix(backend/external-api): Improve security & reliability of API key storage (#10796)
Our API key generation, storage, and verification system has a couple of
issues that need to be ironed out before full-scale deployment.

### Changes 🏗️

- Move from unsalted SHA256 to salted Scrypt hashing for API keys
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
- [refactor(backend): Clean up API key DB & API code
(#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
  - Rename models and properties in `backend.data.api_key` for clarity
- Eliminate redundant/custom/boilerplate error handling/wrapping in API
key endpoint call stack
- Remove redundant/inaccurate `response_model` declarations from API key
endpoints
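
For illustration, a minimal sketch of the salted-Scrypt approach described above, using the `cryptography` package (the function names and KDF parameters here are assumptions, not the PR's actual code):

```python
import os

from cryptography.exceptions import InvalidKey
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def hash_api_key(raw_key: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage; each key gets a fresh random salt."""
    salt = os.urandom(16)
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return salt, kdf.derive(raw_key.encode())


def verify_api_key(raw_key: str, salt: bytes, stored_hash: bytes) -> bool:
    """Re-derive with the stored salt and compare; InvalidKey means mismatch."""
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    try:
        kdf.verify(raw_key.encode(), stored_hash)
        return True
    except InvalidKey:
        return False
```

Unlike unsalted SHA256, a per-key salt means identical keys produce different hashes, and Scrypt's work factor makes brute-forcing leaked hashes far more expensive.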

Dependencies for `autogpt_libs`:
- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Performing these actions through the UI (still) works:
    - [x] Creating an API key
    - [x] Listing owned API keys
    - [x] Deleting an owned API key
  - [x] Newly created API key can be used in Swagger UI
  - [x] Existing API key can be used in Swagger UI
  - [x] Existing API key is re-hashed with salt on use
2025-09-10 09:34:49 +00:00
Ubbe
986245ec43 feat(frontend): run agent page improvements (#10879)
## Changes 🏗️

- Add all the cron scheduling options (_yearly, monthly, weekly,
custom, etc..._) using the new Design System components
- Add missing agent/run actions: export agent + delete agent

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `new-agent-runs` enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-10 07:44:51 +00:00
Bentlybro
f89717153f Merge branch 'master' into dev 2025-09-10 08:52:35 +01:00
Nicholas Tindle
5da41e0753 fix(backend): Add Airtable record normalization + find/create base (#10891)
## Summary
Fixes critical issue with Airtable API where empty/false fields are
completely omitted from responses, causing inconsistent data structures.
Also improves the create base block to prevent duplicate bases.

The Airtable API has a problematic behavior where it omits fields with
"empty" values from responses:
- Unchecked checkboxes are missing entirely instead of returning `false`
- Empty number fields are missing instead of returning `0`
- This makes it impossible to distinguish between "field doesn't exist"
and "field is false/empty"
- Users were getting inconsistent record structures that broke their
workflows

### Changes 🏗️


#### 1. **Added Record Normalization**
(`backend/blocks/airtable/_api.py`)
- New `get_table_schema()` function to fetch table field definitions
- New `get_empty_value_for_field()` to determine appropriate empty
values per field type
- New `normalize_records()` to fill in missing fields with proper
defaults:
  - Checkbox → `false`
  - Number/Currency/Percent/Duration/Rating → `0`
  - Text fields → `""`
  - Multiple selects/attachments/collaborators → `[]`
  - Dates/Single selects → `null`
- New `get_base_tables()` to fetch tables for a base
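
As a rough sketch of the normalization idea (the field-type names and helper shapes below are illustrative assumptions, not the actual `_api.py` code):

```python
import copy

# Assumed mapping from Airtable field types to empty values,
# mirroring the defaults listed above.
EMPTY_VALUES = {
    "checkbox": False,
    "number": 0, "currency": 0, "percent": 0, "duration": 0, "rating": 0,
    "singleLineText": "", "multilineText": "",
    "multipleSelects": [], "multipleAttachments": [], "multipleCollaborators": [],
    "date": None, "singleSelect": None,
}


def normalize_records(records: list[dict], schema: list[dict]) -> list[dict]:
    """Fill in fields the Airtable API omitted with type-appropriate defaults."""
    for record in records:
        fields = record.setdefault("fields", {})
        for field in schema:  # schema entries: {"name": ..., "type": ...}
            if field["name"] not in fields:
                # copy so records don't end up sharing the same list instance
                fields[field["name"]] = copy.copy(EMPTY_VALUES.get(field["type"]))
    return records
```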

#### 2. **Enhanced List and Get Record Blocks**
(`backend/blocks/airtable/records.py`)
- Added `normalize_output` parameter (defaults to `true`) - ensures all
fields are present
- Added `include_field_metadata` parameter to optionally include field
type information
- When normalization is enabled, fetches schema once and normalizes all
records
- Can be disabled by setting `normalize_output=false` for raw Airtable
response

#### 3. **Simplified Create Records Block**
- Added `skip_normalization` parameter (default `false`) - normalized
output by default
- Records now always include all fields with proper empty values

#### 4. **Enhanced Create Base Block**
(`backend/blocks/airtable/bases.py`)
- Added `find_existing` parameter (defaults to `true`) to prevent
duplicate bases
- When finding an existing base, now fetches and returns table
information
- Added `was_created` output field to indicate whether base was created
or found

### Testing 📋

- [x] All Airtable block tests pass
- [x] Tested normalization with records containing missing checkbox fields
- [x] Verified all field types get appropriate empty values
- [x] Tested create base find-or-create functionality
- [x] Ran `poetry run format` and `poetry run lint`

### Migration Guide

This update makes the blocks behave more predictably:
- **List/Get Records**: All fields are now included by default. Set
`normalize_output: false` if you need the raw Airtable response
- **Create Records**: Simply creates records, no more upsert confusion
- **Create Base**: Prevents duplicate bases by default. Set
`find_existing: false` to force creation

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes were required - all changes are code-only.
2025-09-10 04:57:26 +00:00
Nicholas Tindle
cddeb185a8 feat(blocks): Add Gmail Draft Reply Block (#10882)
### Changes 🏗️

This PR adds a new `GmailDraftReplyBlock` that enables creating draft
replies to existing Gmail email threads. This block complements the
existing Gmail blocks by providing specialized functionality for
replying within email conversations.

**New Block: GmailDraftReplyBlock**
- **Purpose**: Creates draft replies to Gmail threads with intelligent
content type detection
- **Key Features**:
  - Automatic HTML detection: draft replies containing HTML tags are
formatted as text/html
  - No hard-wrap for plain text: plain-text draft replies preserve
natural line flow (prevents the 78-character hard-wrap issue)
  - Manual content type override: use the `content_type` parameter to
force a specific format ("auto", "plain", or "html")
  - Reply-all functionality: option to reply to all original recipients
  - Thread preservation: maintains proper email threading with
In-Reply-To and References headers
  - Full Unicode/emoji support with UTF-8 encoding
  - File attachment support

**Implementation Details**:
- Retrieves parent message metadata to build proper reply context
- Automatically determines recipients based on reply mode (reply vs
reply-all)
- Adds "Re:" prefix to subject if not already present
- Maintains email thread continuity with proper headers
- Supports OAuth scopes: `gmail.modify` and `gmail.readonly`
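
A minimal sketch of the reply-building logic described above, using the standard-library `email.mime` module (names are illustrative, not the block's actual code):

```python
from email.mime.text import MIMEText


def build_reply(body: str, parent_subject: str, parent_message_id: str,
                references: str = "") -> MIMEText:
    # "auto" content type detection: crude HTML check, otherwise plain text
    subtype = "html" if "<" in body and ">" in body else "plain"
    msg = MIMEText(body, subtype, "utf-8")  # UTF-8 keeps emoji intact

    # Add "Re:" only if the subject doesn't already carry it
    subject = parent_subject
    if not subject.lower().startswith("re:"):
        subject = f"Re: {subject}"
    msg["Subject"] = subject

    # Threading headers keep the draft inside the original conversation
    msg["In-Reply-To"] = parent_message_id
    msg["References"] = f"{references} {parent_message_id}".strip()
    return msg
```

The Gmail API additionally needs the draft created with the original `threadId` so it lands in the right conversation.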

**Inputs**:
- `threadId`: Thread ID to reply in
- `parentMessageId`: ID of the message being replied to  
- `to`, `cc`, `bcc`: Optional recipient lists
- `replyAll`: Boolean to reply to all original recipients
- `subject`: Optional custom subject
- `body`: Email body (plain text or HTML)
- `content_type`: Optional content type override
- `attachments`: Optional file attachments

**Outputs**:
- `draftId`: Created draft ID
- `messageId`: Draft message ID
- `threadId`: Thread ID
- `status`: Draft creation status
- `error`: Error message if any

Closes #10846

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  - [x] Block includes test input/output configuration
  - [x] Mock test handler implemented for unit testing
  - [x] Proper error handling included
  - [x] OAuth authentication properly configured
- [x] Content type detection logic tested (auto-detects HTML vs plain
text)
  - [x] Threading headers properly maintained for email conversations

<details>
  <summary>Additional Testing Notes</summary>
  
  - The block uses the existing Gmail authentication infrastructure
  - Test credentials and mock outputs are configured for CI/CD
- The `_make_mime_text` helper function ensures proper content
formatting
  - Reply-all logic properly deduplicates recipients
</details>

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes required - uses existing Google OAuth
configuration.

<details>
  <summary>Configuration Compatibility</summary>

  - Uses existing `GOOGLE_OAUTH_IS_CONFIGURED` flag
  - Leverages existing Google OAuth credentials system
  - No new environment variables or services required
</details>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2025-09-10 04:24:15 +00:00
Reinier van der Leer
08a3fd6d26 Revert "fix(backend): Improve GitHub PR URL validation and API URL generation" (#10890)
Reverts Significant-Gravitas/AutoGPT#10317

The changes omit the hostname from the "prepared" URL, breaking some
GitHub blocks.
2025-09-09 21:47:51 +00:00
Nicholas Tindle
39b30bc82c ci: Add 'Bash(gh pr edit:*)' to allowed tools for claude (#10885) 2025-09-09 12:36:46 -05:00
Reinier van der Leer
2df0e2b750 fix(backend/api): Fix & add tests for APIKeyAuthenticator (#10881)
- Resolves #10875

### Changes 🏗️

- Fix use of `super().__call__` in `APIKeyAuthenticator.__call__`
- Fix non-ASCII API key validation
- Add tests for `APIKeyAuthenticator`
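
For context, a hedged sketch of the corrected pattern (the class shape is an assumption based on FastAPI's `APIKeyHeader`, not the repo's actual implementation):

```python
from fastapi import HTTPException, Request
from fastapi.security import APIKeyHeader


class APIKeyAuthenticator(APIKeyHeader):
    async def __call__(self, request: Request) -> str:
        # super().__call__ extracts the key from the configured header;
        # it must be awaited, not called like a plain method.
        key = await super().__call__(request)
        if not key or not key.isascii():
            # Reject non-ASCII keys cleanly instead of crashing validation
            raise HTTPException(status_code=401, detail="Invalid API key")
        return key
```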

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test implementations have been verified manually
  - [x] All the new tests pass
2025-09-09 16:09:51 +00:00
Ubbe
925f249ce1 feat(frontend): use new output renderers on new run page (#10876)
## Changes 🏗️

Integrating the great work @ntindle did on the rich agent output
renderers into the new Agent run page in the library 💜

- Implemented enhanced output rendering in `<RunDetails />` using the
shared output-renderers
- Added `<RunOutputs />` sub-component at
`RunDetails/components/RunOutputs.tsx` that:
- [x] builds items from `run.outputs`, extracts metadata, picks a
renderer via `globalRegistry`, and falls back to `TextRenderer`
- [x] renders `<OutputActions />` for copy/download and a list of
`<OutputItems />`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run an agent on the view which outputs rich content
- [x] See the output use the new renderers, for example code is
highlighted

### For configuration changes:

None
2025-09-09 05:44:37 +00:00
Ubbe
e8cf3edbf4 feat(frontend): add timezone to new library agent page (#10874)
## Changes 🏗️

<img width="800" height="756" alt="Screenshot 2025-09-09 at 14 03 24"
src="https://github.com/user-attachments/assets/65f3e3a8-1ce0-491c-824a-601a494d3a36"
/>

<img width="600" height="493" alt="Screenshot 2025-09-09 at 14 03 28"
src="https://github.com/user-attachments/assets/457b37a3-6b3b-49b8-912c-c72cf06e8e58"
/>

Following the nice changes @ntindle did regarding timezones, bring them
into the new page:
- display the timezone when scheduling an agent on the new modal
- display the timezone for a schedule on the new schedule details view

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `agent-new-runs` flag ON
  - [x] Open an agent on the new page
- [x] On the new modal, create a schedule, it displays the timezone alert
  - [x] Once created, on the schedule view, it displays the timezone   

### For configuration changes:

None
2025-09-09 05:37:48 +00:00
Nicholas Tindle
dc03ea718c fix(backend): Report process errors to Sentry before cleanup (#10873)
Adds error reporting to Sentry for exceptions (excluding
KeyboardInterrupt and SystemExit) before executing process cleanup.
Silently ignores failures if Sentry is unavailable.
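
A minimal sketch of that behavior, assuming the `sentry_sdk` client (the process/cleanup wiring here is hypothetical):

```python
import sentry_sdk


def run_process(target, cleanup):
    """Run `target`; report unexpected errors to Sentry before `cleanup`."""
    try:
        target()
    except (KeyboardInterrupt, SystemExit):
        raise  # intentional shutdowns are not reported
    except Exception as exc:
        try:
            sentry_sdk.capture_exception(exc)  # report before cleanup runs
        except Exception:
            pass  # silently ignore if Sentry is unavailable
        raise
    finally:
        cleanup()
```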


### Changes 🏗️
- Adds cleanup handling for Sentry
- Adds the option to disable Sentry

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all services with manual exception raising
  - [x] Remove those exceptions
  - [x] Make sure they show up in Sentry
2025-09-09 04:56:51 +00:00
Nicholas Tindle
dbee580d80 build(frontend): Increase build process memory limit to 4 GiB (#10861)
Increase memory limit of frontend build process to 4GiB to fix
out-of-memory build failures
2025-09-08 05:04:17 +00:00
Nicholas Tindle
0325ec0a2c fix(frontend): Fix environment variable handling in Docker builds for dev/prod deployments (#10859)
Sentry was not being enabled in dev/prod deployments because environment
variables were being incorrectly overwritten during the Docker build
process.

### Changes 🏗️

- Fixed Dockerfile environment variable merging logic to prevent
`.env.default` from overwriting `.env.production` values
- Added `NODE_ENV=production` to build stage to ensure Next.js looks for
production env files
- Updated env file merging to only run when not in CI/CD (when
`.env.production` doesn't exist)
- When `.env.production` exists (CI/CD), now merges defaults with
production values properly

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Verify local Docker builds still work with `docker compose up`
- [ ] Verify dev deployment has `NEXT_PUBLIC_APP_ENV=dev` in built
JavaScript
- [ ] Verify prod deployment has `NEXT_PUBLIC_APP_ENV=prod` in built
JavaScript
- [ ] Verify Sentry is enabled in dev/prod deployments
(`isProdOrDev=true`)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Technical Details

**Root Cause:**
1. CI/CD workflow creates `.env.production` with correct values (e.g.,
`NEXT_PUBLIC_APP_ENV=dev`)
2. Dockerfile's env merging logic always created `.env` from
`.env.default`
3. Next.js loads `.env.production` first, then `.env` second
4. Since `.env` is loaded after `.env.production`, it overwrites the
values
5. `.env.default` has `NEXT_PUBLIC_APP_ENV=local`, causing `getAppEnv()`
to return "local" instead of "dev"/"prod"
6. This made `isProdOrDev` evaluate to `false`, disabling Sentry

**Solution:**
The Dockerfile now checks if `.env.production` exists:
- If yes (CI/CD): Merges `.env.default` + `.env.production` →
`.env.production` (production values take precedence)
- If no (local): Merges `.env.default` + `.env` → `.env` (user values
take precedence)

This ensures production deployments get the correct environment
variables while preserving local development workflow.

🤖 Description generated + Investigation assisted with [Claude
Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-06 17:05:45 +00:00
Ubbe
3952a1a226 feat(frontend): smol improvements on new run view (#10854)
## Changes 🏗️

<img width="800" height="790" alt="Screenshot 2025-09-05 at 17 22 36"
src="https://github.com/user-attachments/assets/8b22424c-1968-4c4f-9eed-3d5d5185751d"
/>

- Make a nicer empty state and display it when there are no
runs/schedules
- Rename search param to `executionId` to mirror what was on the old
page
- Reduce polling during an active execution to 1.5s (3s is maybe too
slow...)
- Make sure the run details page also updates when a run is happening

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Tested the above 

### For configuration changes:

None

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-09-06 09:02:25 +00:00
Krzysztof Czerwinski
cfc975d39b feat(backend): Type for API block data response (#10763)
Moving to auto-generated frontend types caused returned blocks data to
no longer have proper typing.

### Changes 🏗️

- Add `BlockInfo` model and `get_info` function that returns it to the
`Block` class, including costs
- Move `BlockCost` and `BlockCostType` to `block.py` to prevent circular
imports

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Endpoints using the new type work correctly

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-06 04:21:48 +00:00
Zamil Majdy
46e0f6cc45 feat(platform): Add recommended run schedule for agent execution (#10827)
## Summary

<img width="1000" alt="Screenshot 2025-09-02 at 9 46 49 PM"
src="https://github.com/user-attachments/assets/d78100c7-7974-4d37-a788-757764d8b6b7"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 24 PM"
src="https://github.com/user-attachments/assets/cd092963-8e26-4198-b65a-4416b2307a50"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 22 30 PM"
src="https://github.com/user-attachments/assets/e16b3bdb-c48c-4dec-9281-b2a35b3e21d0"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 38 PM"
src="https://github.com/user-attachments/assets/11d74a39-f4b4-4fce-8d30-0e6a925f3a9b"
/>

• Added recommended schedule cron expression as an optional input
throughout the platform
• Implemented complete data flow from builder → store submission → agent
library → run page
• Fixed UI layout issues including button text overflow and ensured
proper component reusability

## Changes

### Backend
- Added `recommended_schedule_cron` field to `AgentGraph` schema and
database migration
- Updated API models (`LibraryAgent`, `MyAgent`,
`StoreSubmissionRequest`) to include the new field
- Enhanced store submission approval flow to persist recommended
schedule to database

### Frontend
- Added recommended schedule input to builder page (SaveControl
component) with overflow-safe styling
- Updated store submission modal (PublishAgentModal) with schedule
configuration
- Enhanced agent run page with schedule tip display and pre-filled
schedule dialog
- Refactored `CronSchedulerDialog` with discriminated union types for
better reusability
- Fixed layout issues including button text truncation and popover width
constraints
- Implemented robust cron expression parsing with 100% reversibility
between UI and cron format

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-06 03:57:03 +02:00
Nicholas Tindle
c03af5c196 feat(frontend): allow sentry disable in dev (#10858)

### Changes 🏗️

Vercel is logging things it shouldn't.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verified in Vercel
2025-09-05 20:29:39 +00:00
Nicholas Tindle
00cbfb8f80 feat(frontend): re-enable sentry in dev (#10857)

### Changes 🏗️
- We want sentry to actually work so we can do testing

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We're just re-enabling; it works in prod
2025-09-05 19:38:10 +00:00
Ubbe
3beafae955 feat(frontend): new agent library run page (#10835)
## Changes 🏗️

This is the new **Agent Library Run** page. Sorry in advance for the
massive PR 🙏🏽. I got carried away and it has been tricky to split it
(_maybe I abused the agent too much_ 🤔)

<img width="800" height="1085" alt="Screenshot 2025-09-04 at 13 58 33"
src="https://github.com/user-attachments/assets/b709edb9-d2b5-48ad-a04d-dddf10c89af3"
/>

<img width="800" height="338" alt="Screenshot 2025-09-04 at 13 54 51"
src="https://github.com/user-attachments/assets/efa28be2-d2dd-477f-af13-33ddd1d639dd"
/>

<img width="800" height="598" alt="Screenshot 2025-09-04 at 13 54 18"
src="https://github.com/user-attachments/assets/806ab620-3492-4c5b-b4e2-f17b89756dd8"
/>

- Schedules are now on the sidebar tabbed along with runs
- The whole UI has been updated to match the new designs and design
system
- There is no more "run draft" view as the modal is in charge of new
runs now 💪🏽
- The page is responsive and mobile friendly 📱 




https://github.com/user-attachments/assets/0e483062-0e50-4fa6-aaad-a1f6766df931

### Safety

I understand this is a lot of changes. However, it is all behind a
feature flag, `new-agent-runs`; when OFF, it will display the old
library agent view. The old library agent view can still be accessed
under `/library/legacy/{id}` for reference 👍🏽

### Testing

I haven't added any tests for now... 💆🏽 I want to get this enabled on
dev so we can start running our agents there through the new page and
modal and start catching edge cases.

Tests will come later in the form of E2E for the happy paths, and
probably I will introduce [Vitest](https://vitest.dev/) + [Testing
Library](https://testing-library.com/) for the finer details...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the above

### For configuration changes:

None, the feature flag is already configured 🙏🏽

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-05 06:39:54 +00:00
seer-by-sentry[bot]
9cd186a2f3 fix(backend): Improve GitHub PR URL validation and API URL generation (#10317)
### Changes 🏗️

Fixes
[AUTOGPT-SERVER-4EN](https://sentry.io/organizations/significant-gravitas/issues/6731949478/).
The issue: an issue URL was passed to the PR file reader, the regex
failed, and the resulting issue API call returned an object that was
iterated over as keys, causing an AttributeError.

- Refactor `prepare_pr_api_url` to improve validation of GitHub PR URLs.
- Update regex to specifically target github.com URLs.
- Raise ValueError with a descriptive message for invalid URLs.
- Correctly construct the API URL using the extracted repository path.
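
A simplified sketch of the validation approach described (illustrative only; note the revert higher up in this log, which found that the shipped version omitted the hostname when constructing URLs):

```python
import re

# Match only github.com PR URLs and capture "owner/repo" and the PR number
GITHUB_PR_URL = re.compile(r"^https://github\.com/([\w.-]+/[\w.-]+)/pull/(\d+)")


def prepare_pr_api_url(pr_url: str, path: str) -> str:
    match = GITHUB_PR_URL.match(pr_url)
    if not match:
        raise ValueError(f"Not a valid GitHub pull request URL: {pr_url}")
    repo_path, pr_number = match.groups()
    return f"https://api.github.com/repos/{repo_path}/pulls/{pr_number}/{path}"
```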

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 265077

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6731949478/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test plan:
- [x] Provide an invalid GitHub PR URL and verify that a ValueError is
raised with a descriptive message.
- [x] Provide a valid GitHub PR URL and verify that the API URL is
correctly constructed.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
2025-09-05 06:35:45 +00:00
Nicholas Tindle
dcf26bd3d4 ci: update Claude code action (#10851)
Claude Code now uses a prompt instead of a custom system prompt
### Changes 🏗️

Swaps to `prompt` from the custom system prompt
### Checklist 📋

#### For code changes:
N/A
2025-09-05 06:33:09 +00:00
Bently
b97f097c9d feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images
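
A hedged sketch of what calling this model through the Replicate Python client could look like (the input field names are assumptions based on the feature list above, not verified against the model's schema):

```python
import replicate

output = replicate.run(
    "google/gemini-2.5-flash-image",
    input={
        # Assumed keys; check the model schema before relying on them
        "prompt": "Blend these photos into a single sunset scene",
        "image_input": [
            "https://example.com/photo1.png",
            "https://example.com/photo2.png",
        ],
        "output_format": "png",
    },
)
print(output)  # URL of the generated image
```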

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-05 06:31:12 +01:00
Reinier van der Leer
ce24975a9d fix(backend): Unbreak get_webhook query (#10850)
- Resolves #10849

### Changes 🏗️

- Use `AGENT_PRESET_INCLUDE` in `INTEGRATION_WEBHOOK_INCLUDE` so the
`AgentPreset.from_db(..)` doesn't break

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Webhook ingress works
2025-09-04 19:00:05 +00:00
Bently
75c90e49ce feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-04 18:51:32 +00:00
Zamil Majdy
2e38f132e7 feat(backend/store): implement sub-agent approval support (#10842)
## Summary
Implement comprehensive sub-agent approval flow following the business
requirements from the flow diagram.

<img width="1956" height="1448" alt="image"
src="https://github.com/user-attachments/assets/8de35e5b-9d3e-4dc2-bff0-47b49dbebc83"
/>


### Key Features
- **Auto-approve sub-agents** when main agent is approved
- **Handle all scenarios**: new listings, existing versions, missing
versions
- **Transaction safety** with atomic operations via database
transactions
- **Parallel processing** using asyncio.gather for performance
optimization
- **Hidden from store** with isAvailable=false for all sub-agents

### Implementation Details
- **Replaced** `_get_missing_sub_store_listing` with comprehensive
`_handle_sub_agent_approvals`
- **Added** `_approve_sub_agent` function with early returns for clean,
readable code flow
- **Used** `transaction()` context manager to ensure data consistency
across operations
- **Process sub-agents in parallel** while maintaining transaction
integrity

### Business Logic Flow
1. **Check if sub-agent is already listed** in store
2. **If not listed**: create new store listing with `isAvailable=false`
3. **If listed but not approved**: approve the correct version  
4. **If correct version not listed**: create store listing version and
approve it
5. **If already approved**: no action needed (early return)

All sub-agents remain **hidden from public store** while being
internally approved for system use.
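
A condensed sketch of the flow above (every name here is hypothetical; the real implementation lives in `backend/server/v2/store/db.py`):

```python
import asyncio


async def handle_sub_agent_approvals(sub_agents, db):
    # One transaction so partial approvals roll back together
    async with db.transaction():
        await asyncio.gather(*(approve_sub_agent(a, db) for a in sub_agents))


async def approve_sub_agent(agent, db):
    listing = await db.get_store_listing(agent.graph_id)
    if listing is None:
        # 2. Not listed: create a hidden, pre-approved listing
        await db.create_store_listing(agent, is_available=False, approved=True)
        return
    version = listing.versions.get(agent.version)
    if version is None:
        # 4. Listed, but this version is missing: add and approve it
        await db.create_listing_version(listing, agent, approved=True)
        return
    if version.approved:
        return  # 5. Already approved: nothing to do
    await db.approve_listing_version(version)  # 3. Approve the correct version
```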

## Files Changed
- `backend/server/v2/store/db.py` - Core implementation of sub-agent
approval logic

## Test Plan
- [ ] Verify main agent approval triggers sub-agent approvals
- [ ] Test all sub-agent scenarios: new, existing unapproved, existing
approved
- [ ] Confirm sub-agents remain hidden (`isAvailable=false`)
- [ ] Validate transaction rollback on failures
- [ ] Check parallel processing works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 17:59:37 +00:00
Nicholas Tindle
5be6987d58 ci: Add model argument to Claude workflow (#10843) 2025-09-04 10:22:53 -05:00
Nicholas Tindle
b59592be9b feat(ci): Claude workflow for Dependabot PR analysis (#10841)
Created workflow to analyze Dependabot PRs with Claude, including
detailed dependency analysis and changelog review.


### Changes 🏗️
Adds a workflow for Claude to analyze Dependabot PRs

### Checklist 📋

#### For code changes:
N/A

#### For configuration changes:
N/A

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-09-04 14:48:07 +00:00
dependabot[bot]
7ea17df9ed chore(frontend/deps): Bump recharts from 2.15.3 to 3.1.2 in /autogpt_platform/frontend (#10807)
Bumps [recharts](https://github.com/recharts/recharts) from 2.15.3 to
3.1.2.

Release notes (sourced from [recharts's releases](https://github.com/recharts/recharts/releases)):

**v3.1.2**

Fix:
- `Label`/Polar Charts: `Label` viewbox should now be present in polar charts; addresses recharts/recharts#6030 (@PavelVanecek in recharts/recharts#6180)
- Add LRU cache for string size measurements (#3955) (@shreedharbhat98 in recharts/recharts#6176)

Full changelog: https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2

**v3.1.1**

Fix:
- `General`: Don't apply duplicate IDs in the DOM (@PavelVanecek in recharts/recharts#6111)
- `Stacked Area/Bar`: give all graphical items their own unique identifier and use that to select stacked data; fixes issue where stacked charts could not be created from the graphical item `data` prop (recharts/recharts#6073) (@PavelVanecek)
- `Stacked Area/Bar`: exclude stacked axis domain when not relevant for axis; fixes issue where numeric stacked charts would not render correctly (@rinkstiekema in recharts/recharts#6162)
- `Area Chart`: ranged area chart - show active dot on both points instead of just the top one; fixes #6080 (@sroy8091 in recharts/recharts#6116)
- `Polar Charts/Label`: fix `Label` in polar charts (@PavelVanecek in recharts/recharts#6126)
- `Scatter/ErrorBar`: choose implicit Scatter ErrorBar direction based on chart layout (to be the same as 2.x) (@PavelVanecek in recharts/recharts#6159)
- `X/YAxis/Reference Components`: allow axis values and reference items to render when there is no data but there is a domain/explicit ticks set (@ethphan in recharts/recharts#6161)
- `X/YAxis`: pass axis padding info to custom tick components (@shreedharbhat98 in recharts/recharts#6163)

Chore / Testing:
- good progress on the journey to enable `strictNullChecks`
- addition of Playwright visual regression tests to CI
- split `Animate` into `JavascriptAnimate` and `CSSTransitionAnimate` (@PavelVanecek in recharts/recharts#6175)

New contributors: @sroy8091 made their first contribution in recharts/recharts#6116; @ethphan made their first contribution in recharts/recharts#6161

Full changelog: https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1

**v3.1.0**

Bug fixes (old and new) and a few new hooks post 3.0 launch!

Feat: more hooks!

... (truncated)

Commits:
- `079545c`: 3.1.2
- `c6c26b5`: Label viewbox in polar charts (#6180)
- `6c4acb7`: Add CompositeAnimationManager to allow testing for multiple concurrent animat...
- `e56913d`: feat: implement LRU cache for string size measurements (#3955) (#6176)
- `cf64890`: 3.1.1
- `c4ec635`: Split Animate into JavascriptAnimate and CSSTransitionAnimate (#6175)
- `e243a12`: Add animation tests for Rectangle (#6174)
- `a1e07aa`: docs: convert class components to functional components (#6172)
- `1f8d4cb`: fix: Exclude stacked axis domain when not relevant for axis (#6162)
- `fe28f2e`: chore(deps-dev): bump the storybook group with 8 updates (#6166)
- Additional commits viewable in the [compare view](https://github.com/recharts/recharts/compare/v2.15.3...v3.1.2)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:22:30 +00:00
Reinier van der Leer
22e692bdda fix(frontend): Update in-view run with real-time updates (#10839)
- Resolves #10838

### Changes 🏗️

- Update `selectedRun` with received graph execution update if
applicable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Agent outputs appear in real-time
2025-09-04 13:43:44 +00:00
Nicholas Tindle
901e9eba5d feat(frontend): Enhanced output rendering system for agent runs (#10819)
## Summary
Introduces a modular, extensible output renderer system supporting
multiple content types (text, code, images, videos, JSON, markdown) for
agent run outputs. The system includes smart clipboard operations,
concatenated downloads, and rich markdown rendering with LaTeX math and
video embedding support.

## Changes 🏗️

### Core Output Rendering System
- **Added extensible renderer architecture**
(`output-renderers/types.ts`)
  - Plugin-based system with priority ordering
  - Registry pattern for automatic renderer selection (see the sketch after this list)
  - Support for custom metadata and MIME types
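
As a rough illustration of what a priority-ordered registry can look like (names and signatures here are assumptions, not the actual `output-renderers/types.ts` contents):

```typescript
import type { ReactNode } from "react";

// Hypothetical renderer plugin shape; illustrative only
interface OutputRenderer {
  name: string;
  priority: number; // higher-priority renderers are tried first
  canRender(value: unknown, mimeType?: string): boolean;
  render(value: unknown): ReactNode;
}

const registry: OutputRenderer[] = [];

function registerRenderer(renderer: OutputRenderer): void {
  registry.push(renderer);
  registry.sort((a, b) => b.priority - a.priority); // keep highest priority first
}

// Automatic selection: the first renderer that accepts the value wins
function selectRenderer(value: unknown, mimeType?: string): OutputRenderer | undefined {
  return registry.find((r) => r.canRender(value, mimeType));
}
```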

### Output Renderers
- **TextRenderer**: Plain text with proper formatting and line breaks
- **CodeRenderer**: Syntax-highlighted code blocks with language
detection
- **JSONRenderer**: Collapsible, formatted JSON with syntax highlighting
- **ImageRenderer**: Image display with support for URLs, data URIs, and
file uploads
- **VideoRenderer**: Embedded video player for YouTube, Vimeo, and
direct video files
- **MarkdownRenderer**: Rich markdown with:
  - GitHub Flavored Markdown (tables, task lists, strikethrough)
- LaTeX math rendering via KaTeX (inline `$...$` and display `$$...$$`)
  - Syntax highlighting via highlight.js
  - Video embedding (YouTube/Vimeo URLs auto-convert to embeds)
  - Clickable heading anchors
  - Dark mode support

### User Interface Components
- **OutputItem**: Individual output display with renderer selection
- **OutputActions**: Hover-based action buttons for:
  - Copy to clipboard with smart MIME type detection
- Download with intelligent concatenation (text files merge, binaries
separate)
  - Share functionality (placeholder for future implementation)
- **AgentRunOutputView**: Main output view component with feature flag
integration

### Clipboard & Download Features
- Smart clipboard operations using the native ClipboardItem API (sketched below)
- MIME type detection and browser capability checking
- Fallback strategies for unsupported content types
- Concatenated downloads for text-based outputs
- Individual downloads for binary content
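
A minimal sketch of the copy path with capability checking and a plain-text fallback (the function name and exact strategy are assumptions):

```typescript
// Sketch only: try the rich ClipboardItem path, fall back to plain text
async function copyOutput(content: string, mimeType: string): Promise<void> {
  const canUseRichClipboard =
    typeof ClipboardItem !== "undefined" &&
    typeof navigator.clipboard?.write === "function";

  if (canUseRichClipboard) {
    try {
      const blob = new Blob([content], { type: mimeType });
      await navigator.clipboard.write([new ClipboardItem({ [mimeType]: blob })]);
      return;
    } catch {
      // Browser rejected this MIME type; fall through to plain text
    }
  }
  await navigator.clipboard.writeText(content);
}
```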

### Feature Flag Integration
- Added `ENABLE_ENHANCED_OUTPUT_HANDLING` flag to LaunchDarkly
- Backwards compatible with existing output display
- Graceful fallback for disabled feature flag

### Styling & UX
- Max width constraints (950px card, 660px content)
- Hover-based action buttons for clean interface
- Dark mode support across all renderers
- Responsive design for various content types
- Loading states and error handling

## Test Plan 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test Scenarios:
- [x] **Basic Output Rendering**
  - [x] Execute agent with text output - verify proper formatting
  - [x] Execute agent with JSON output - verify collapsible tree view
  - [x] Execute agent with code output - verify syntax highlighting
  
- [x] **Rich Content**
  - [x] Test markdown rendering with headers, lists, tables
  - [x] Test LaTeX math expressions (inline and display)
  - [x] Test code blocks within markdown
  - [x] Test task lists and strikethrough
  
- [x] **Media Handling**
  - [x] Upload and display PNG/JPEG images
  - [x] Test video URL embedding (YouTube/Vimeo)
  - [x] Test direct video file playback
  
- [x] **Clipboard Operations**
  - [x] Copy plain text output
  - [x] Copy formatted code
  - [x] Copy JSON data
  - [x] Copy markdown content
  - [x] Verify fallback for unsupported MIME types
  
- [x] **Download Functionality**
  - [x] Download single text output
  - [x] Download multiple text outputs (verify concatenation)
  - [x] Download mixed content (verify separate files)
  - [x] Download images and binary content
  
- [x] **Feature Flag**
  - [x] Enable flag - verify enhanced rendering
  - [x] Disable flag - verify fallback to original view
  - [x] Check backwards compatibility
 
  
- [x] **Edge Cases**
  - [x] Large JSON objects (performance)
  - [x] Very long text outputs
  - [x] Mixed content types in single run
  - [x] Malformed markdown
  - [x] Invalid video URLs

## Dependencies Added
- `react-markdown` (9.0.3) - Already present
- `remark-gfm` (4.0.1) - GitHub Flavored Markdown
- `remark-math` (6.0.0) - LaTeX math support
- `rehype-katex` (7.0.1) - Math rendering
- `katex` (0.16.22) - Math typesetting
- `rehype-highlight` (7.0.2) - Syntax highlighting
- `highlight.js` (11.11.1) - Highlighting library
- `rehype-slug` (6.0.0) - Heading anchors
- `rehype-autolink-headings` (7.1.0) - Clickable headings

## Notes
- Mermaid diagram support was attempted but removed due to compatibility
issues
- Share functionality is stubbed out for future implementation
- PNG file upload rendering issue has logging in place for debugging
- All components follow existing UI patterns and use Tailwind CSS

## Screenshots
<img width="1656" height="1250" alt="image"
src="https://github.com/user-attachments/assets/af7542fe-db89-4521-aaf5-19e33d48a409"
/>
## Related Issues
- Implements SECRT-1209

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-04 13:37:26 +00:00
dependabot[bot]
bbd6709bd6 chore(backend/deps): Bump cryptography from 43.0.3 to 45.0.7 in /autogpt_platform/backend (#10810)
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.3
to 45.0.7.

Changelog (sourced from [cryptography's changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)):

**45.0.7 - 2025-09-01**
- Added a function to support an upcoming `pyOpenSSL` release.

**45.0.6 - 2025-08-05**
- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.2.

**45.0.5 - 2025-07-02**
- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.1.

**45.0.4 - 2025-06-09**
- Fixed decrypting PKCS#8 files encrypted with SHA1-RC4. (This is not considered secure, and is supported only for backwards compatibility.)

**45.0.3 - 2025-05-25**
- Fixed decrypting PKCS#8 files encrypted with long salts (this impacts keys encrypted by Bouncy Castle).
- Fixed decrypting PKCS#8 files encrypted with DES-CBC-MD5. While wildly insecure, this remains prevalent.

**45.0.2 - 2025-05-17**
- Fixed using `mypy` with `cryptography` on older versions of Python.

**45.0.1 - 2025-05-17**
- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.0.

... (truncated)

Commits:
- `f52a3e1`: prep for a 45.0.7 release (#13378)
- `66198c2`: Bump for release (#13249)
- `3e53a23`: Bump for 45.0.5 release (#13135)
- `678c0c5`: prepare for 45.0.4 release (#13058)
- `5038495`: backports for 45.0.3 release (#12979)
- `f81c075`: Backport mypy fixes for release (#12930)
- `8ea28e0`: bump for 45.0.1 (#12922)
- `6784097`: bump for 45 release (#12886)
- `2d9c1c9`: bump MSRV to 1.74 (#12919)
- `6c18874`: Bump BoringSSL, OpenSSL, AWS-LC in CI (#12918)
- Additional commits viewable in the [compare view](https://github.com/pyca/cryptography/compare/43.0.3...45.0.7)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:35:07 +00:00
dependabot[bot]
3ec1721d6d chore(libs/deps-dev): Bump ruff from 0.12.9 to 0.12.11 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10804)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.12.9 to 0.12.11

Changelog (sourced from [ruff's releases](https://github.com/astral-sh/ruff/releases) and [ruff's changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)):

**0.12.11**

Preview features:
- [`airflow`] Extend `AIR311` and `AIR312` rules (#20082)
- [`airflow`] Replace wrong path `airflow.io.storage` with `airflow.io.store` (`AIR311`) (#20081)
- [`flake8-async`] Implement `blocking-http-call-httpx-in-async-function` (`ASYNC212`) (#20091)
- [`flake8-logging-format`] Add auto-fix for f-string logging calls (`G004`) (#19303)
- [`flake8-use-pathlib`] Add autofix for `PTH211` (#20009)
- [`flake8-use-pathlib`] Make `PTH100` fix unsafe because it can change behavior (#20100)

Bug fixes:
- [`pyflakes`, `pylint`] Fix false positives caused by `__class__` cell handling (`F841`, `PLE0117`) (#20048)
- [`pyflakes`] Fix `allowed-unused-imports` matching for top-level modules (`F401`) (#20115)
- [`ruff`] Fix false positive for t-strings in `default-factory-kwarg` (`RUF026`) (#20032)
- [`ruff`] Preserve relative whitespace in multi-line expressions (`RUF033`) (#19647)

Rule changes:
- [`ruff`] Handle empty t-strings in `unnecessary-empty-iterable-within-deque-call` (`RUF037`) (#20045)

Documentation:
- Fix incorrect `D413` links in docstrings convention FAQ (#20089)
- [`flake8-use-pathlib`] Update links to the table showing the correspondence between `os` and `pathlib` (#20103)

Contributors: @AlexWaygood, @Avasam, @BurntSushi, @Gankra, @Glyphack, @JelleZijlstra, @Lee-W, @MatthewMckee4, @MichaReiser, @PrettyWood, @Renkai, @TaKO8Ki, @amyreese, @carljm, @chirizxc, @danparizher, @dhruvmanila, @dylwil3, @github-actions, @hamirmahal

... (truncated)

**0.12.10**

Preview features:
- [`flake8-simplify`] Implement fix for `maxsplit` without separator (`SIM905`) (#19851)
- [`flake8-use-pathlib`] Add fixes for `PTH102` and `PTH103` (#19514)

Bug fixes:
- [`isort`] Handle multiple continuation lines after module docstring (`I002`) (#19818)
- [`pyupgrade`] Avoid reporting `__future__` features as unnecessary when they are used (`UP010`) (#19769)
- [`pyupgrade`] Handle nested `Optional`s (`UP045`) (#19770)

Rule changes:
- [`pycodestyle`] Make `E731` fix unsafe instead of display-only for class assignments (#19700)
- [`pyflakes`] Add secondary annotation showing previous definition (`F811`) (#19900)

Documentation:
- Fix description of global config file discovery strategy (#19188)
- Update outdated links to https://typing.python.org/en/latest/source/stubs.html (#19992)
- [`flake8-annotations`] Remove unused import in example (`ANN401`) (#20000)

Commits:
- `c2bc15b`: Bump 0.12.11 (#20136)
- `e586f6d`: [ty] Benchmarks for problematic implicit instance attributes cases (#20133)
- `76a6b7e`: [`pyflakes`] Fix `allowed-unused-imports` matching for top-level modules (`F4...
- `1ce6571`: Move GitLab output rendering to `ruff_db` (#20117)
- `d9aaacd`: [ty] Evaluate reachability of non-definitely-bound to Ambiguous (#19579)
- `18eaa65`: [ty] Introduce a representation for the top/bottom materialization of an inva...
- `af259fa`: [`flake8-async`] Implement `blocking-http-call-httpx` (`ASYNC212`) (#20091)
- `d75ef38`: [ty] print diagnostics with fully qualified name to disambiguate some cases (...
- `89ca493`: [`ruff`] Preserve relative whitespace in multi-line expressions (`RUF033`) (#...
- `4b80f5f`: [ty] Optimize TDD atom ordering (#20098)
- Additional commits viewable in the [compare view](https://github.com/astral-sh/ruff/compare/0.12.9...0.12.11)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:26:59 +00:00
Swifty
8c8a2ab0c2 feat(blocks): Add Bannerbear API block for text overlay on images (#10768)
## Summary
- Implemented a new Bannerbear API block that enables adding text
overlays to images using template designs
- Block supports customizable text styling (color, font, size, weight,
alignment)
- Always uses synchronous API mode for immediate image generation
results


[agent_ead942d9-58a2-4be6-bdb3-99010c489466.json](https://github.com/user-attachments/files/22027352/agent_ead942d9-58a2-4be6-bdb3-99010c489466.json)

<img width="140" height="572" alt="Screenshot 2025-08-28 at 16 28 35"
src="https://github.com/user-attachments/assets/096b532b-31dc-4ca6-bd68-c00b7594426c"
/>


## Features
- **Text overlay capabilities**: Add multiple text layers to images
using Bannerbear templates
- **Customizable styling**: Support for color, font family, font size,
font weight, and text alignment
- **Image support**: Optional ability to add images to templates
- **Smart field handling**: Only sends non-empty optional parameters to
the API
- **Webhook & metadata**: Advanced options for webhook notifications and
custom metadata

## Implementation Details
- Created provider configuration with API key authentication
- Implemented `BannerbearTextOverlayBlock` with proper input/output
schemas
- Extracted API calls to private method `_make_api_request()` for test
mocking support (the underlying request is sketched below)
- Follows SDK guide patterns and integrates with AutoGPT platform
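
For reference, the synchronous request the block wraps boils down to something like this (endpoint per Bannerbear's public API; the payload fields shown are illustrative, not the block's exact schema):

```typescript
// Sketch of a synchronous Bannerbear image-generation call
async function generateImage(apiKey: string, templateId: string) {
  const res = await fetch("https://sync.api.bannerbear.com/v2/images", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      template: templateId,
      // One text layer; color/font fields follow the customizable styling above
      modifications: [{ name: "title", text: "Hello world", color: "#FFFFFF" }],
    }),
  });
  if (!res.ok) throw new Error(`Bannerbear request failed: ${res.status}`);
  return res.json(); // synchronous mode returns the finished image payload
}
```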

## Use Case
This block will be used in the Ad generator agent for creating dynamic
marketing materials and social media graphics with text overlays.

## Test plan
- [x] Block imports successfully
- [x] Block instantiates with unique ID
- [x] Code passes linting and formatting checks
- [x] Manual testing with actual Bannerbear API key
- [x] Integration testing with Ad generator agent
2025-09-04 13:15:00 +00:00
Krzysztof Czerwinski
4041e1f39c feat(backend/infra): Cleanup platform and db docker compose files (#10830)
Supabase `db/docker/docker-compose.yml` overrides env vars set in
`autogpt_platform/.env` file. This PR fixes that and simplifies the
compose files further.

### Changes 🏗️

`autogpt_platform/docker-compose.platform.yml`:
- Move hardcoded `DATABASE_URL` and `DIRECT_URL` to `x-backend-env` on
top as it repeats for most services.
- Remove `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from
`rabbitmq` service and use env files instead

`autogpt_platform/db/docker/docker-compose.yml`:
- Remove hardcoded env vars from `x-supabase-env` - these are already
defined in `.env`
- Remove env vars from services that are already defined in `.env` files

*Changes to db compose file only affect self-hosted Supabase*

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Platform, db works when self-hosting
2025-09-04 12:01:49 +00:00
Nicholas Tindle
7cbdc1ad1a security(frontend): Upgrade next from 15.4.6 to 15.4.7 (#10820)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory, meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project;
you will need to run `yarn` to update the contents of the
`.yarn/cache/` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:---:|:---|
| High | Server-side Request Forgery (SSRF): [SNYK-JS-NEXT-12299318](https://snyk.io/vuln/SNYK-JS-NEXT-12299318) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-04 09:18:12 +00:00
Reinier van der Leer
2a19aa0ed3 fix(frontend/library): Show total runs count above runs list (#10832)
- Resolves #10831

### Changes 🏗️

- Show number of total runs instead of currently loaded runs
- Show loading spinner instead of zero while loading

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Counter shows number of total runs, even if it exceeds number of
currently loaded items
2025-09-04 06:43:19 +00:00
Nicholas Tindle
6d39dfe382 feat(frontend): Add admin button to user profile dropdown (#10774)

### Need 💡

This PR addresses Linear issue
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)
by adding an "Admin" button to the user account dropdown menu. This
button is only visible to users with an "admin" role and provides direct
navigation to the admin marketplace management page, making existing
admin functionalities accessible from the new UI.

### Changes 🏗️

- **Added Admin Icon**: Integrated `IconSliders` into the `IconType`
enum and `getAccountMenuOptionIcon` function.
- **Dynamic Menu Generation**: Introduced
`getAccountMenuItems(userRole?: string)` to dynamically construct the
account menu. This function conditionally adds an "Admin" menu item
(linking to `/admin/marketplace`) if the `userRole` is "admin"; a condensed sketch follows this list.
- **Navbar Integration**: Updated `NavbarView.tsx` to utilize the
`useSupabase` hook to retrieve the current user's role and then render
the account menu using the new dynamic `getAccountMenuItems` function
for both desktop and mobile views.
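
A condensed sketch of that logic (the item shape and non-admin entries are illustrative):

```typescript
interface AccountMenuItem {
  icon: string; // stands in for the IconType enum value used in the real code
  text: string;
  href: string;
}

function getAccountMenuItems(userRole?: string): AccountMenuItem[] {
  const items: AccountMenuItem[] = [
    { icon: "edit", text: "Edit profile", href: "/profile" }, // illustrative entry
  ];
  if (userRole === "admin") {
    // The Admin entry is only ever constructed for admin users,
    // so it can never render for other roles
    items.push({ icon: "sliders", text: "Admin", href: "/admin/marketplace" });
  }
  return items;
}
```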

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Log in as an admin user and verify the "Admin" button appears in
the account dropdown.
- [x] Click the "Admin" button and confirm navigation to
`/admin/marketplace`.
- [x] Log in as a non-admin user and verify the "Admin" button does not
appear in the account dropdown.
- [x] Verify all other existing menu items (e.g., "Edit profile", "Log
out") function correctly for both admin and non-admin users.
  - [x] Test the above scenarios on both desktop and mobile views.

---
Linear Issue:
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-03 09:26:24 +00:00
Ubbe
57ecc10535 fix(frontend): typed inputs in new run modal (#10799)
## Changes 🏗️

###  Make the **Credentials Inputs** show up on the new Run Agent Modal
<img width="450" height="784" alt="Screenshot 2025-09-02 at 00 54 19"
src="https://github.com/user-attachments/assets/26ad8242-a1bc-45f6-9149-a3d207683679"
/>


### Fixes on other modals...

<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 40"
src="https://github.com/user-attachments/assets/fa2f9ed9-207b-4599-9e60-3e37c4be6ea9"
/>
 
<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 44"
src="https://github.com/user-attachments/assets/92e062a7-f161-423e-b6c9-f998fbdef102"
/>

<img width="450" height="634" alt="Screenshot 2025-09-02 at 00 47 06"
src="https://github.com/user-attachments/assets/93009809-0df0-44c5-b2d2-a9aa0f501312"
/>


- always use buttons/inputs in small size (due to the tight space)
- use components from the design system
- always use pretty scrollbars
- Configure
[`tailwind-scrollbar`](https://github.com/adoxography/tailwind-scrollbar)
for pretty scrollbars
- prevent content in dialog to overflow when scrollable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent run modal flag enabled
  - [x] Check the above 

#### For configuration changes:

None
2025-09-03 06:58:07 +00:00
Reinier van der Leer
4928ce3f90 feat(library): Create presets from runs (#10823)
- Resolves #9307

### Changes 🏗️

- feat(library): Create presets from runs
  - Prevent creating preset from run with unknown credentials
- Fix running presets with credentials
  - Add `credential_inputs` parameter to `execute_preset` endpoint

API:
- Return `GraphExecutionMeta` from `*/execute` endpoints (request shape sketched below)
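
For illustration, a client-side call could look roughly like this; only `credential_inputs` and the `GraphExecutionMeta` return type come from this PR, while the route and other field names are assumptions:

```typescript
// Hypothetical request shape for the execute_preset endpoint
async function executePreset(
  presetId: string,
  inputs: Record<string, unknown>,
  credentialInputs: Record<string, unknown>,
): Promise<unknown /* GraphExecutionMeta */> {
  const res = await fetch(`/api/library/presets/${presetId}/execute`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inputs, credential_inputs: credentialInputs }),
  });
  if (!res.ok) throw new Error(`execute_preset failed: ${res.status}`);
  return res.json();
}
```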

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent that *does not* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
- Go to `/library/agents/[id]` for an agent that *does* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Error toast appears with descriptive message
- Initiate a new run; once finished, click "Create preset from run";
fill out the dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
2025-09-03 01:26:12 +00:00
Reinier van der Leer
e16e69ca55 feat(library, executor): Make "Run Again" work with credentials (#10821)
- Resolves [OPEN-2549: Make "Run again" work with credentials in
`AgentRunDetailsView`](https://linear.app/autogpt/issue/OPEN-2549/make-run-again-work-with-credentials-in-agentrundetailsview)
- Resolves #10237

### Changes 🏗️

- feat(frontend/library): Make "Run Again" button work for runs with
credentials
- feat(backend/executor): Store passed-in credentials on
`GraphExecution`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` for an agent with credentials inputs
  - Run the agent manually
    - [x] -> runs successfully
- [x] -> "Run again" shows among the action buttons on the newly created
run
  - Click "Run again"
    - [x] -> runs successfully
2025-09-02 18:34:56 +00:00
Nicholas Tindle
4bcc73f784 ci: Update GitHub Actions workflow for Claude integration (#10822) 2025-09-02 17:06:49 +01:00
Nicholas Tindle
c8240a4d6b ci: Modify Claude workflow permissions and action version (#10818) 2025-09-02 08:43:54 -07:00
Ubbe
f669db4a10 fix(frontend): prevent using mock feature flags (#10792)
## Changes 🏗️

Make sure `NEXT_PUBLIC_PW_TEST` is set only when running Playwright.
This forces the app to use "mock" feature flags, so the tests run stably
and predictably regardless of changes on LaunchDarkly.
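
In effect, flag resolution hinges on a guard along these lines (the helper name is illustrative):

```typescript
// Sketch: mock feature flags are used only when Playwright sets this variable
export function shouldUseMockFlags(): boolean {
  return process.env.NEXT_PUBLIC_PW_TEST === "true";
}
```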

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x]  should not have `PW_TEST=true` ...

### For configuration changes:

None
2025-09-02 14:18:18 +00:00
Reinier van der Leer
0e755a5c85 feat(platform/library): Support UX for manual-setup triggers (#10309)
- Resolves #10234

### Preview

#### Manual setup triggers
![preview of the setup screen for manual-setup
triggers](https://github.com/user-attachments/assets/295d2968-ad11-4291-b360-2eb2acb03397)
![preview of the view for an active manual-setup
trigger](https://github.com/user-attachments/assets/d0ae2246-2305-48f5-aea8-8adb37336401)

#### Auto-setup triggers
![preview of the view for an active auto-setup
trigger](https://github.com/user-attachments/assets/63856311-fc99-450c-ae1f-86951e40dc26)

### Changes 🏗️

- Add "Trigger status" section to `AgentRunDraftView`
- Add `AgentPreset.webhook`, so we can show webhook URL in library
  - Add `AGENT_PRESET_INCLUDE` to `backend.data.includes`
- Add `BaseGraph.trigger_setup_info` (computed field)
- Rename `LibraryAgentTriggerInfo` to `GraphTriggerInfo`; move to
`backend.data.graph`

Refactor:
- Move contents of `@/components/agents/` to
`@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/`
- Fix small type difference between legacy & generated
`LibraryAgent.image_url`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setting up GitHub trigger works
  - [x] Setting up manual trigger works
  - [x] Enabling/disabling manual trigger through Library works
2025-09-02 10:23:32 +00:00
Reinier van der Leer
dfdc71f97f feat(backend/external-api): Make API key auth work in Swagger UI (#10783)
![Swagger UI API key auth
dialog](https://github.com/user-attachments/assets/02026802-51f9-410d-bdb8-53840d5eb17b)

- Resolves #10782

### Changes 🏗️

- Use `Security(..)` for security dependencies
- Minor tweaks to auth mechanism (similar to #10720)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API key auth feature appears in Swagger UI
  - [ ] API key auth *works* in Swagger UI (@ntindle wanna test this?)
2025-09-02 09:24:14 +00:00
Krzysztof Czerwinski
def008408c feat(frontend): Preserve openapi.json spec file on generation failure (#10791)
The `openapi.json` file is cleared when the script fails to retrieve the API
spec from the server. This shouldn't happen, and it breaks building Docker
containers.

### Changes 🏗️

Use a temp file during generation to prevent the actual file from being
cleared on failure.
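
The pattern is write-to-temp-then-rename; a minimal sketch (paths and names are illustrative):

```typescript
import { writeFileSync, renameSync } from "node:fs";

// The real openapi.json is only replaced once generation has fully succeeded
function saveSpec(specJson: string, target = "openapi.json"): void {
  const tmpPath = `${target}.tmp`;
  writeFileSync(tmpPath, specJson); // a failure before this line leaves `target` untouched
  renameSync(tmpPath, target); // swap in the new spec only when it exists in full
}
```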

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Spec file doesn't get cleared on failure
  - [x] Spec file is correctly generated
  - [x] Works when frontend is run in docker container
2025-09-02 01:37:54 +00:00
Swifty
916d0adabb feat(frontend): Add graph search functionality to builder (#10776)
## Summary
- Added search functionality to find nodes in the graph by block type,
node ID, and input/output names
- Search icon added to both new and old control panels
- Implemented node highlighting on hover and navigation on click


https://github.com/user-attachments/assets/8cc69186-5582-446d-b2cd-601de992144f



## Changes
- Created `GraphSearchMenu` component for the new control panel
- Created `GraphSearchControl` component for the old control panel  
- Added `GraphSearchContent` component with search UI similar to
BlockMenu
- Implemented `useGraphSearch` hook with fuzzy search logic (simplified sketch below)
- Added node highlighting without viewport movement on hover
- Added node navigation with centering and highlighting on selection
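
A simplified sketch of the matching (plain substring matching here, whereas the actual hook is fuzzy):

```typescript
interface SearchableNode {
  id: string;
  blockType: string;
  inputNames: string[];
  outputNames: string[];
}

// Stand-in for useGraphSearch's filtering logic
function searchNodes(nodes: SearchableNode[], query: string): SearchableNode[] {
  const q = query.toLowerCase();
  if (!q) return [];
  return nodes.filter(
    (node) =>
      node.blockType.toLowerCase().includes(q) ||
      node.id.toLowerCase().includes(q) ||
      [...node.inputNames, ...node.outputNames].some((name) =>
        name.toLowerCase().includes(q),
      ),
  );
}
```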

## Features
- Search by block type name, node ID, or input/output field names
- Real-time filtering with keyboard navigation support
- Visual feedback with node highlighting on hover
- Click to navigate and center on selected node
- Consistent styling with BlockMenu including category colors
- Works in both old and new control panels

## Test plan
- [x] Test search functionality in both old and new control panels
- [x] Verify search by block type name works
- [x] Verify search by node ID works  
- [x] Verify search by input/output names works
- [x] Test keyboard navigation (arrow keys and enter)
- [x] Verify node highlighting on hover
- [x] Verify node navigation on click
- [x] Check popover alignment with control panel top
2025-09-01 18:42:13 +00:00
Abhimanyu Yadav
417ee7f0e1 feat(frontend): redesign-block-menu-part-2 (#10793)
In this PR, I have added:

- a search input
- conditional rendering of the search page and the default page
- a sidebar for the default page (with the correct data)

### Screenshot

<img width="1512" height="982" alt="Screenshot 2025-09-01 at 12 28
34 PM"
src="https://github.com/user-attachments/assets/891ab99f-dde5-47b8-a980-a700845f10c2"
/>

#### Checklist:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-09-01 10:04:29 +00:00
Abhimanyu Yadav
ae4c9897b4 refactor(frontend): Revamp marketplace agent page data fetching and structure (#10756)
- Updated the agent page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainAgentPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

> It’s important to note that I haven’t changed any UI in this, as it’s
out of scope for this PR.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have manually tested both `Add to Library` and `Download`
functions, and they are working correctly.
  - [x] All fetching functions are working perfectly.
  - [x] All end-to-end tests are also working correctly.
2025-09-01 05:40:01 +00:00
Ubbe
7544028b94 fix(frontend): use new dialog in schedule agent (#10786)
## Changes 🏗️

Should fix the issue where sometimes the schedule modal wouldn't appear
when clicking on the CTA.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Set up schedules multiple times; they look good in the modal
  - [x] Run the agent from monitor, and confirm it executes correctly

#### For configuration changes:

None
2025-08-30 04:47:12 +00:00
Ubbe
ad49946890 feat(frontend): New Run Agent Modal (2/2) (#10769)
## Changes 🏗️

<img width="400" height="821" alt="Screenshot 2025-08-28 at 23 57 41"
src="https://github.com/user-attachments/assets/f5f7c0a6-0b87-4c1f-b644-3ee2ddd1db95"
/>

<img width="400" height="822" alt="Screenshot 2025-08-28 at 23 57 47"
src="https://github.com/user-attachments/assets/120dbb60-d9e1-4a4a-a593-971badb4a97a"
/>

This is the final piece of work on the new **Run Agent Modal**... It is
all behind a feature flag, so I'm relatively comfortable it is safe. The
idea is to test with the team once it lands in dev, trying different
combinations of agent inputs / credentials and schedules...

I have moved and tidied a lot of the original logic around running agents.
Most importantly, I have made all the dynamic inputs adhere to the
design system.

### AI changes summary  

- Allow to run schedules in the main modal body
- Integrate and tidy old logic around dynamic run agent inputs 
- Integrate and tidy old logic around credentials inputs
- Refactor: `<TypeBasedInputs />` to use Design System components
(`atoms/Input`, `atoms/Select`, `molecules/MultiToggle`, and native
date/time picker via `<Input />` using the browser's date picker )
- Added support for `type="date"` and `type="datetime-local"` to `<Input
/>` ( _for the above_ )
- On the `<Select />` component:
  - added `size` prop (`small` | `medium`).
- added rich items: `icon`, `disabled`, `separator`, `onSelect`, and
`renderItem` prop.
- stories updated/added for size variants, icons/separators, and custom
rendering.
- Added and documented to the design system:
  - `molecules/TimePicker` + story.
- `atoms/FileInput`: added `accept` and `maxFileSize` props; story
documents constraints.
- `atoms/Progress` stories (Basic, CustomMax, Sizes, Live) with
fixed-width container.
  - `atoms/Switch` stories (Basic, Disabled, WithLabel).
  - `molecules/Dialog` story: Modal-over-Modal example.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open Storybook and verify new/updated stories render correctly.
  - [x] In app, validate modals open/close correctly using DS `Dialog`.
- [x] Validate DS Select rich items (icon, separator, disabled, action)
behave as expected.
  - [x] Run lints and ensure no errors.
  - [x] Manually test File upload constraints (type/size) and progress.

### For configuration changes:
None
2025-08-29 11:38:36 +00:00
Reinier van der Leer
b333d56492 fix(frontend): Fix checking of Date and NaN input values (#10781)
Date values were being rejected as "empty" by the run input form.

![screenshot demonstrating the
issue](https://github.com/user-attachments/assets/cf89414d-9b27-4da5-8a4a-64d9439c7084)

### Changes 🏗️

- Specifically handle `Date` type values in `isEmpty`
- Specifically handle `NaN` values in `isEmpty`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Date values are no longer rejected as "empty"
2025-08-29 11:03:15 +00:00
Abhimanyu Yadav
b23cb14e49 test(frontend): fix library flaky e2e test (#10764)
A test in one of my pr is failing something like… 

<img width="1044" height="452" alt="Screenshot 2025-08-28 at 9 39 07 AM"
src="https://github.com/user-attachments/assets/9c8b8996-50a2-44c6-8a2c-c3904f07ced5"
/>

That’s why I fixed it.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All E2E tests are now working correctly.
2025-08-29 10:19:01 +00:00
Swifty
e71a44521a feat(frontend): Add editable node titles with hover pencil icon (#10779)
## Summary
- Adds ability to edit custom node titles by clicking a pencil icon that
appears on hover
- Custom titles are saved in node metadata and persist across saves
- Original node type is shown in tooltip when hovering over custom
titles



https://github.com/user-attachments/assets/a0a41ac9-1ffb-44c8-9e1c-f4c42e032b49



## Changes
- **CustomNode.tsx**: 
  - Added inline title editing with pencil icon on hover
  - Implemented state management for title editing mode
  - Added tooltip to show original node type for custom titles
  - Prevents custom names from being copied when duplicating nodes

- **useAgentGraph.tsx**:
- Updated graph save/load logic to preserve metadata including custom
titles
  - Ensures metadata persistence through all node operations

## Technical Details
- Uses existing `metadata` JSON field in AgentNode model (no database
changes needed)
- Stores custom title in `metadata.customized_name`
- Backward compatible - nodes without custom titles display normally
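
For illustration, the fallback behind this is essentially the following
(a Python sketch of the frontend logic; the `block_name` field is an
assumed name, not code from `CustomNode.tsx`):

```python
def display_title(node: dict) -> str:
    """Show the custom title if set, else the original block name."""
    metadata = node.get("metadata") or {}
    return metadata.get("customized_name") or node["block_name"]


# Nodes without a custom title display normally (backward compatible)
assert display_title({"block_name": "HTTP Request"}) == "HTTP Request"
assert display_title(
    {"block_name": "HTTP Request", "metadata": {"customized_name": "Fetch user"}}
) == "Fetch user"
```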

## Test Plan
- [x] Hover over node title shows pencil icon
- [x] Click pencil icon to edit title
- [x] Press Enter or blur to save, Escape to cancel
- [x] Custom title persists after saving graph
- [x] Tooltip shows original node type when hovering over custom title
- [x] Copying node doesn't copy custom name
- [x] Backward compatible with existing graphs
2025-08-29 10:02:35 +00:00
Swifty
15aa091f65 feat(frontend): add drag and drop functionality from blocks menu to flow canvas (#10777)
Closes #1055
2025-08-29 10:55:43 +02:00
Swifty
3e4ca19036 docs: Add comprehensive Block SDK guide (#10767)
## Summary
- Added comprehensive Block SDK guide documenting the new SDK pattern
for creating blocks
- Integrated the guide into the documentation structure
- Updated existing documentation to reference the new guide

## Changes
- Created `docs/content/platform/block-sdk-guide.md` with detailed
instructions (see the sketch below) for:
  - Provider configuration using `ProviderBuilder`
  - Block schema definition and implementation
  - Authentication methods (API keys, OAuth, webhooks)
  - Testing and validation
  - File organization and best practices

- Updated documentation structure:
  - Added guide to `mkdocs.yml` navigation
  - Added cross-references in `new_blocks.md`
  - Added links in `blocks/blocks.md` overview
  - Updated `CLAUDE.md` with reference to the new guide
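
For orientation, the block pattern the guide builds on looks roughly
like this (a minimal sketch following the platform's documented block
conventions; exact import paths and the `ProviderBuilder` specifics are
covered in the guide itself, so treat the details below as assumptions):

```python
from backend.data.block import Block, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class GreetingBlock(Block):
    """Toy block: turns a name into a greeting."""

    class Input(BlockSchema):
        name: str = SchemaField(description="Who to greet")

    class Output(BlockSchema):
        greeting: str = SchemaField(description="The resulting greeting")

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Generates a greeting",
            input_schema=GreetingBlock.Input,
            output_schema=GreetingBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "greeting", f"Hello, {input_data.name}!"
```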

## Test plan
- [ ] Documentation builds correctly with mkdocs
- [ ] All internal links resolve properly
- [ ] Guide examples are syntactically correct
- [ ] Navigation structure is logical and accessible
2025-08-28 15:54:13 +00:00
dependabot[bot]
a595da02f7 chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 6 updates (#10710)
Bumps the development-dependencies group with 6 updates in the
/autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [faker](https://github.com/joke2k/faker) | `37.4.2` | `37.5.3` |
| [poethepoet](https://github.com/nat-n/poethepoet) | `0.36.0` |
`0.37.0` |
| [pre-commit](https://github.com/pre-commit/pre-commit) | `4.2.0` |
`4.3.0` |
| [pyright](https://github.com/RobertCraigie/pyright-python) | `1.1.403`
| `1.1.404` |
| [requests](https://github.com/psf/requests) | `2.32.4` | `2.32.5` |
| [ruff](https://github.com/astral-sh/ruff) | `0.12.4` | `0.12.9` |


Updates `faker` from 37.4.2 to 37.5.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v37.5.3</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.3/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.2</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.2/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.1</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.1/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.4.3</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.4.3/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.2...v37.5.3">v37.5.3
- 2025-07-30</a></h3>
<ul>
<li>Allow <code>Decimal</code> type for <code>min_value</code> and
<code>max_value</code> in <code>pydecimal</code>. Thanks <a
href="https://github.com/sshishov"><code>@​sshishov</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.1...v37.5.2">v37.5.2
- 2025-07-30</a></h3>
<ul>
<li>Fix Turkish Republic National Number (TCKN) provider. Thanks <a
href="https://github.com/fleizean"><code>@​fleizean</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.0...v37.5.1">v37.5.1
- 2025-07-30</a></h3>
<ul>
<li>Fix unnatural Korean company names in <code>ko_KR</code> locale.
Thanks <a
href="https://github.com/r-4bb1t"><code>@​r-4bb1t</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.4.3...v37.5.0">v37.5.0
- 2025-07-30</a></h3>
<ul>
<li>Add Spanish lorem provider for <code>es_ES</code>,
<code>es_AR</code> and <code>es_MX</code>. Thanks <a
href="https://github.com/Pandede"><code>@​Pandede</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.4.2...v37.4.3">v37.4.3
- 2025-07-30</a></h3>
<ul>
<li>Fix male names in <code>sv_SE</code> locale. Thanks <a
href="https://github.com/peterk"><code>@​peterk</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c7db7f583d"><code>c7db7f5</code></a>
Bump version: 37.5.2 → 37.5.3</li>
<li><a
href="f4fbe8f933"><code>f4fbe8f</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="2a55697c46"><code>2a55697</code></a>
format code</li>
<li><a
href="614e3255e0"><code>614e325</code></a>
Placate mypy</li>
<li><a
href="f8e5d868f2"><code>f8e5d86</code></a>
fix(pydecimal): allow <code>Decimal</code> type for
<code>min_value</code> and <code>max_value</code> in `pyde...</li>
<li><a
href="4cf26710f7"><code>4cf2671</code></a>
Bump version: 37.5.1 → 37.5.2</li>
<li><a
href="fecc0373fd"><code>fecc037</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="3e94c67740"><code>3e94c67</code></a>
Fix Turkish Republic National Number (TCKN) provider (<a
href="https://redirect.github.com/joke2k/faker/issues/2232">#2232</a>)</li>
<li><a
href="867b08e984"><code>867b08e</code></a>
more samples</li>
<li><a
href="5acc936b6d"><code>5acc936</code></a>
update stubs</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.4.2...v37.5.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `poethepoet` from 0.36.0 to 0.37.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nat-n/poethepoet/releases">poethepoet's
releases</a>.</em></p>
<blockquote>
<h2>0.37.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Support configuring task level verbosity by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/304">nat-n/poethepoet#304</a></li>
<li>Direct most non-task output to stderr by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/304">nat-n/poethepoet#304</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0">https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9c582c8d25"><code>9c582c8</code></a>
Bump version to 0.37.0</li>
<li><a
href="6eb522f791"><code>6eb522f</code></a>
feat: Support task level verbosity config and use stderr for most
non-task ou...</li>
<li>See full diff in <a
href="https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pre-commit` from 4.2.0 to 4.3.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/releases">pre-commit's
releases</a>.</em></p>
<blockquote>
<h2>pre-commit v4.3.0</h2>
<h3>Features</h3>
<ul>
<li><code>language: docker</code> / <code>language: docker_image</code>:
detect rootless docker.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3446">#3446</a>
PR by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/1243">#1243</a>
issue by <a
href="https://github.com/dkolepp"><code>@​dkolepp</code></a>.</li>
</ul>
</li>
<li><code>language: julia</code>: avoid <code>startup.jl</code> when
executing hooks.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
PR by <a
href="https://github.com/ericphanson"><code>@​ericphanson</code></a>.</li>
</ul>
</li>
<li><code>language: dart</code>: support latest dart versions which
require a higher sdk
lower bound.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
PR by <a
href="https://github.com/bc-lee"><code>@​bc-lee</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md">pre-commit's
changelog</a>.</em></p>
<blockquote>
<h1>4.3.0 - 2025-08-09</h1>
<h3>Features</h3>
<ul>
<li><code>language: docker</code> / <code>language: docker_image</code>:
detect rootless docker.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3446">#3446</a>
PR by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/1243">#1243</a>
issue by <a
href="https://github.com/dkolepp"><code>@​dkolepp</code></a>.</li>
</ul>
</li>
<li><code>language: julia</code>: avoid <code>startup.jl</code> when
executing hooks.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
PR by <a
href="https://github.com/ericphanson"><code>@​ericphanson</code></a>.</li>
</ul>
</li>
<li><code>language: dart</code>: support latest dart versions which
require a higher sdk
lower bound.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
PR by <a
href="https://github.com/bc-lee"><code>@​bc-lee</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b74a22d96c"><code>b74a22d</code></a>
v4.3.0</li>
<li><a
href="cc899de192"><code>cc899de</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
from bc-lee/dart-fix</li>
<li><a
href="2a0bcea757"><code>2a0bcea</code></a>
Downgrade Dart SDK version installed in the CI</li>
<li><a
href="f1cc7a445f"><code>f1cc7a4</code></a>
Make Dart pre-commit hook compatible with the latest Dart SDKs</li>
<li><a
href="72a3b71f0e"><code>72a3b71</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3504">#3504</a>
from pre-commit/pre-commit-ci-update-config</li>
<li><a
href="c8925a457a"><code>c8925a4</code></a>
[pre-commit.ci] pre-commit autoupdate</li>
<li><a
href="a5fe6c500c"><code>a5fe6c5</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
from ericphanson/eph/jl-startup</li>
<li><a
href="6f1f433a9c"><code>6f1f433</code></a>
Julia language: skip startup.jl file</li>
<li><a
href="c6817210b1"><code>c681721</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3499">#3499</a>
from pre-commit/pre-commit-ci-update-config</li>
<li><a
href="4fd4537bc6"><code>4fd4537</code></a>
[pre-commit.ci] pre-commit autoupdate</li>
<li>Additional commits viewable in <a
href="https://github.com/pre-commit/pre-commit/compare/v4.2.0...v4.3.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.403 to 1.1.404
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d393df1703"><code>d393df1</code></a>
Pyright NPM Package update to 1.1.404 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/352">#352</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.403...v1.1.404">compare
view</a></li>
</ul>
</details>
<br />

Updates `requests` from 2.32.4 to 2.32.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/releases">requests's
releases</a>.</em></p>
<blockquote>
<h2>v2.32.5</h2>
<h2>2.32.5 (2025-08-18)</h2>
<p><strong>Bugfixes</strong></p>
<ul>
<li>The SSLContext caching feature originally introduced in 2.32.0 has
created
a new class of issues in Requests that have had negative impact across a
number
of use cases. The Requests team has decided to revert this feature as
long term
maintenance of it is proving to be unsustainable in its current
iteration.</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for Python 3.14.</li>
<li>Dropped support for Python 3.8 following its end of support.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's
changelog</a>.</em></p>
<blockquote>
<h2>2.32.5 (2025-08-18)</h2>
<p><strong>Bugfixes</strong></p>
<ul>
<li>The SSLContext caching feature originally introduced in 2.32.0 has
created
a new class of issues in Requests that have had negative impact across a
number
of use cases. The Requests team has decided to revert this feature as
long term
maintenance of it is proving to be unsustainable in its current
iteration.</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for Python 3.14.</li>
<li>Dropped support for Python 3.8 following its end of support.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b25c87d7cb"><code>b25c87d</code></a>
v2.32.5</li>
<li><a
href="131e506079"><code>131e506</code></a>
Merge pull request <a
href="https://redirect.github.com/psf/requests/issues/7010">#7010</a>
from psf/dependabot/github_actions/actions/checkout-...</li>
<li><a
href="b336cb2bc6"><code>b336cb2</code></a>
Bump actions/checkout from 4.2.0 to 5.0.0</li>
<li><a
href="46e939b552"><code>46e939b</code></a>
Update publish workflow to use <code>artifact-id</code> instead of
<code>name</code></li>
<li><a
href="4b9c546aa3"><code>4b9c546</code></a>
Merge pull request <a
href="https://redirect.github.com/psf/requests/issues/6999">#6999</a>
from psf/dependabot/github_actions/step-security/har...</li>
<li><a
href="7618dbef01"><code>7618dbe</code></a>
Bump step-security/harden-runner from 2.12.0 to 2.13.0</li>
<li><a
href="2edca11103"><code>2edca11</code></a>
Add support for Python 3.14 and drop support for Python 3.8 (<a
href="https://redirect.github.com/psf/requests/issues/6993">#6993</a>)</li>
<li><a
href="fec96cd597"><code>fec96cd</code></a>
Update Makefile rules (<a
href="https://redirect.github.com/psf/requests/issues/6996">#6996</a>)</li>
<li><a
href="d58d8aa2f4"><code>d58d8aa</code></a>
docs: clarify timeout parameter uses seconds in Session.request (<a
href="https://redirect.github.com/psf/requests/issues/6994">#6994</a>)</li>
<li><a
href="91a3eabd3d"><code>91a3eab</code></a>
Bump github/codeql-action from 3.28.5 to 3.29.0</li>
<li>Additional commits viewable in <a
href="https://github.com/psf/requests/compare/v2.32.4...v2.32.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.12.4 to 0.12.9
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.12.9</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add check for
<code>airflow.secrets.cache.SecretCache</code> (<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17707">#17707</a>)</li>
<li>[<code>ruff</code>] Offer a safe fix for multi-digit zeros
(<code>RUF064</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19847">#19847</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19755">#19755</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix false positive for
<code>C420</code> with attribute, subscript, or slice assignment targets
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19513">#19513</a>)</li>
<li>[<code>flake8-simplify</code>] Fix handling of U+001C..U+001F
whitespace (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19849">#19849</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pylint</code>] Use lowercase hex characters to match the
formatter (<code>PLE2513</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19808">#19808</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix <code>lint.future-annotations</code> link (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19876">#19876</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>
<p>Build <code>riscv64</code> binaries for release (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19819">#19819</a>)</p>
</li>
<li>
<p>Add rule code to error description in GitLab output (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19896">#19896</a>)</p>
</li>
<li>
<p>Improve rendering of the <code>full</code> output format (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19415">#19415</a>)</p>
<p>Below is an example diff for <a
href="https://docs.astral.sh/ruff/rules/unused-import/"><code>F401</code></a>:</p>
<pre lang="diff"><code>-unused.py:8:19: F401 [*] `pathlib` imported but
unused
+F401 [*] `pathlib` imported but unused
+  --&gt; unused.py:8:19
    |
  7 | # Unused, _not_ marked as required (due to the alias).
  8 | import pathlib as non_alias
-   |                   ^^^^^^^^^ F401
+   |                   ^^^^^^^^^
  9 |
 10 | # Unused, marked as required.
    |
-   = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
</code></pre>
<p>For now, the primary difference is the movement of the filename, line
number, and column information to a second line in the header. This new
representation will allow us to make further additions to Ruff's
diagnostics, such as adding sub-diagnostics and multiple annotations to
the same snippet.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.12.9</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add check for
<code>airflow.secrets.cache.SecretCache</code> (<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17707">#17707</a>)</li>
<li>[<code>ruff</code>] Offer a safe fix for multi-digit zeros
(<code>RUF064</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19847">#19847</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19755">#19755</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix false positive for
<code>C420</code> with attribute, subscript, or slice assignment targets
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19513">#19513</a>)</li>
<li>[<code>flake8-simplify</code>] Fix handling of U+001C..U+001F
whitespace (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19849">#19849</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pylint</code>] Use lowercase hex characters to match the
formatter (<code>PLE2513</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19808">#19808</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix <code>lint.future-annotations</code> link (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19876">#19876</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>
<p>Build <code>riscv64</code> binaries for release (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19819">#19819</a>)</p>
</li>
<li>
<p>Add rule code to error description in GitLab output (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19896">#19896</a>)</p>
</li>
<li>
<p>Improve rendering of the <code>full</code> output format (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19415">#19415</a>)</p>
<p>Below is an example diff for <a
href="https://docs.astral.sh/ruff/rules/unused-import/"><code>F401</code></a>:</p>
<pre lang="diff"><code>-unused.py:8:19: F401 [*] `pathlib` imported but
unused
+F401 [*] `pathlib` imported but unused
+  --&gt; unused.py:8:19
    |
  7 | # Unused, _not_ marked as required (due to the alias).
  8 | import pathlib as non_alias
-   |                   ^^^^^^^^^ F401
+   |                   ^^^^^^^^^
  9 |
 10 | # Unused, marked as required.
    |
-   = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
</code></pre>
<p>For now, the primary difference is the movement of the filename, line
number, and column information to a second line in the header. This new
representation will allow us to make further additions to Ruff's
diagnostics, such as adding sub-diagnostics and multiple annotations to
the same snippet.</p>
</li>
</ul>
<h2>0.12.8</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ef422460de"><code>ef42246</code></a>
Bump 0.12.9 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19917">#19917</a>)</li>
<li><a
href="dc2e8ab377"><code>dc2e8ab</code></a>
[ty] support <code>kw_only=True</code> for <code>dataclass()</code> and
<code>field()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19677">#19677</a>)</li>
<li><a
href="9aaa82d037"><code>9aaa82d</code></a>
Feature/build riscv64 bin (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19819">#19819</a>)</li>
<li><a
href="3288ac2dfb"><code>3288ac2</code></a>
[ty] Add caching to <code>CodeGeneratorKind::matches()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19912">#19912</a>)</li>
<li><a
href="1167ed61cf"><code>1167ed6</code></a>
[ty] Rename <code>functionArgumentNames</code> to
<code>callArgumentNames</code> inlay hint setting...</li>
<li><a
href="2ee47d87b6"><code>2ee47d8</code></a>
[ty] Default <code>ty.inlayHints.*</code> server settings to true (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19910">#19910</a>)</li>
<li><a
href="d324cedfc2"><code>d324ced</code></a>
[ty] Remove py-fuzzer skips for seeds that are no longer slow (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19906">#19906</a>)</li>
<li><a
href="5a570c8e6d"><code>5a570c8</code></a>
[ty] fix deferred name loading in PEP695 generic classes/functions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19888">#19888</a>)</li>
<li><a
href="baadb5a78d"><code>baadb5a</code></a>
[ty] Add some additional type safety to <code>CycleDetector</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19903">#19903</a>)</li>
<li><a
href="df0648aae0"><code>df0648a</code></a>
[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> ...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.4...0.12.9">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-28 15:22:40 +00:00
Reinier van der Leer
12cdd45551 refactor(backend): Improve auth setup & OpenAPI generation (#10720)
Our current auth setup (`autogpt_libs.auth` + its usage) is quite
inconsistent and doesn't do all of its jobs properly. The 401 responses
you get when unauthenticated are not included in the OpenAPI spec,
causing these to be unaccounted for in the generated frontend API
client. Usage of the FastAPI dependencies supplied by
`autogpt_libs.auth.depends` aren't consistently used the same way,
making maintenance on these hard to oversee. API tests use many
different ways to get around the auth requirement, making this also hard
to maintain and oversee.
This pull request aims to fix all of this and give us a consistent,
clean, and self-documenting API auth implementation.

- Resolves #10715

### Changes 🏗️

- Homogenize use of `autogpt_libs.auth` security dependencies throughout
the backend
- Fix OpenAPI schema generation for 401 responses
  - Handle possible 401 responses in frontend
- Tighten validation and add warnings for weak settings in
`autogpt_libs.auth.config`
- Increase test coverage for `autogpt_libs.auth` to 100%
- Standardize auth setup for API tests
- Rename `APIKeyValidator` to `APIKeyAuthenticator` and move to its own
module in `backend.server`
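
The general pattern looks like this (a generic FastAPI sketch, not the
actual `autogpt_libs.auth` code): the security dependency raises a 401,
and the route declares that response explicitly so it lands in the
generated OpenAPI spec.

```python
from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer(auto_error=False)


def get_user_id(
    creds: HTTPAuthorizationCredentials | None = Security(bearer),
) -> str:
    if creds is None:  # real token validation would happen here
        raise HTTPException(status_code=401, detail="Not authenticated")
    return "user-123"  # placeholder: decoded from the token in reality


@app.get(
    "/me",
    # Declaring the 401 puts it in the OpenAPI schema, so the generated
    # frontend client accounts for unauthenticated failures on this route.
    responses={401: {"description": "Not authenticated"}},
)
def me(user_id: str = Depends(get_user_id)) -> dict:
    return {"user_id": user_id}
```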

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests for `autogpt_libs.auth` pass
  - [x] All tests for `backend.server` pass
  - [x] @ntindle does a security audit for these changes
- [x] OpenAPI spec for authenticated routes is generated with the
appropriate `401` response

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-28 14:46:50 +00:00
Reinier van der Leer
df3c81a7a6 fix(backend): Fix 4 (deprecation) warnings on startup (#10759)
Fixes these warnings on startup:
```
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:373: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
  warnings.warn(message, UserWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:323: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:298: PydanticDeprecatedSince20: `json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "created_at" must be set on outermost annotation to take effect.
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "updated_at" must be set on outermost annotation to take effect.
  warnings.warn(
```

- Resolves #10758

### Changes 🏗️

- Fix field annotations in `backend/blocks/exa/websets.py`
- Replace deprecated JSON encoder specification in
`backend/blocks/wordpress/_api.py` with a field serializer
- Move deprecated `schema_extra` example specification in
`backend/server/integrations/models.py` to `Field(examples=...)`

The two remaining warnings that appear on start-up aren't trivial to
fix.
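
For reference, the `json_encoders` → `field_serializer` migration looks
like this (a minimal generic sketch, not the actual `_api.py` code):

```python
from datetime import datetime

from pydantic import BaseModel, field_serializer


class Post(BaseModel):
    created_at: datetime

    # Replaces the deprecated `json_encoders = {datetime: ...}` class config
    @field_serializer("created_at")
    def serialize_created_at(self, value: datetime) -> str:
        return value.isoformat()


print(Post(created_at=datetime(2025, 8, 28)).model_dump_json())
# {"created_at":"2025-08-28T00:00:00"}
```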

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Changes are trivial and do not require further testing
2025-08-28 11:34:58 +00:00
Ubbe
0f477e2392 feat(frontend): new run agent modal (1/2) (#10696)
## Changes 🏗️

<img width="600" height="624" alt="Screenshot 2025-08-25 at 23 22 24"
src="https://github.com/user-attachments/assets/a66b0a02-cb7a-47f3-8759-e955fb76f865"
/>

<img width="600" height="748" alt="Screenshot 2025-08-25 at 23 22 40"
src="https://github.com/user-attachments/assets/0357bd0b-9875-41a4-8752-d7dbc7a82ff6"
/>

The new **Agent Run Modal**, to be used when running agents. This is PR
1/2 (_as I learned, there is so much involved in running agents_ 🔮). The
first part sets up "the easy things":
- the run view
- the schedule run view
- the switch between them
- the agent details

On the next PR, I will add support for the current agent run inputs (
[and all their
types...](https://github.com/Significant-Gravitas/AutoGPT/blob/dev/autogpt_platform/frontend/src/components/type-based-input.tsx)
😆 ) + webhook triggers...

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] with the flag ON (it is now OFF in dev but ON locally)
  - [x] clicking `New Run` on the new library page shows the new modal
  - [x] Details are shown on the modal header
  - [x] Agent details are shown
  - [x] You can schedule runs 
   
### For configuration changes:

None

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-27 10:38:12 +00:00
Swifty
82618aede0 Merge remote-tracking branch 'origin/dev' 2025-08-27 11:39:16 +02:00
Swifty
b713093276 Merge master into dev to tidy up merge history (#10753) 2025-08-27 11:14:19 +02:00
Bently
3718b948ea feat(blocks): Add Ideogram V3 model (#10752)
Adds support for Ideogram V3 model while maintaining backward
compatibility with existing models (V1,
V1_TURBO, V2, V2_TURBO). Updates default model to V3 and implements
smart API routing to handle
Ideogram's new V3 endpoint requirements.

Changes Made

- Added V3 model support: added V_3 to IdeogramModelName enum and set as
default
- Dual API endpoint handling:
  - V3 models route to the new /v1/ideogram-v3/generate endpoint with an
updated payload format
  - Legacy models (V1, V2, Turbo variants) continue using the /generate
endpoint
- Model-specific feature filtering:
  - V1 models: basic parameters only (no style_type or color_palette
support)
  - V2/V2_TURBO: full legacy feature support including style_type and
color_palette
  - V3: new endpoint with aspect ratio mapping and an updated parameter
structure
- Aspect ratio compatibility: added mapping between internal enum values
and V3's expected format (ASPECT_1_1 → 1x1)
- Updated pricing: V3 model costs 18 credits (vs 16 for other models)
- Updated default usage: store image generation now uses V3 by default

Technical Details

Ideogram updated their API with a separate V3 endpoint that has
different requirements:
- Different URL path (/v1/ideogram-v3/generate)
- Different aspect ratio format (e.g., 1x1 instead of ASPECT_1_1)
- Model-specific feature support (V1 models don't support style_type,
etc.)

The implementation intelligently routes requests to the appropriate
endpoint based on the selected model
while maintaining a single unified interface.
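
A minimal sketch of that routing (illustrative only: the URLs,
aspect-ratio table, and payload shapes are assumptions based on the
description above, not the block's actual code):

```python
V3_ENDPOINT = "https://api.ideogram.ai/v1/ideogram-v3/generate"
LEGACY_ENDPOINT = "https://api.ideogram.ai/generate"

# V3 expects e.g. "1x1" where the legacy API used "ASPECT_1_1"
V3_ASPECT_RATIOS = {"ASPECT_1_1": "1x1", "ASPECT_16_9": "16x9", "ASPECT_9_16": "9x16"}


def build_request(model: str, prompt: str, aspect_ratio: str) -> tuple[str, dict]:
    """Pick the endpoint and payload format matching the selected model."""
    if model == "V_3":
        return V3_ENDPOINT, {
            "prompt": prompt,
            "aspect_ratio": V3_ASPECT_RATIOS[aspect_ratio],
        }
    payload = {"prompt": prompt, "aspect_ratio": aspect_ratio, "model": model}
    if model in ("V_2", "V_2_TURBO"):
        # Only V2-era models support the style_type / color_palette extras
        payload["style_type"] = "AUTO"
    return LEGACY_ENDPOINT, payload
```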

I tested all the models and they are working here
<img width="1804" height="887" alt="image"
src="https://github.com/user-attachments/assets/9f2e44ca-50a4-487f-987c-3230dd72fb5e"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the Ideogram model block and watch as they all work!
2025-08-27 08:33:47 +00:00
Swifty
44d739386b feat(platform/blocks): Added stagehand integration (#10751)
Added basic stagehand integration:

<img width="667" height="609" alt="Screenshot 2025-08-27 at 09 20 18"
src="https://github.com/user-attachments/assets/11ab2941-0913-4346-a1d4-45980711e0f9"
/>


[stagehand_v35.json](https://github.com/user-attachments/files/22002924/stagehand_v35.json)

### Changes 🏗️

- Act Block
- Extract Block
- Observe Block

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have added a sample agent
- [x] I have created an agent that uses these blocks and ensured it runs
2025-08-27 08:33:41 +00:00
Abhimanyu Yadav
533d2d0277 refactor(frontend): Revamp creator page data fetching and structure (#10737)
### Changes 🏗️
- Updated the creator page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainCreatorPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All marketplace E2E tests are working.
- [x] I’ve tested all the links and checked if everything renders
perfectly on the marketplace page.
2025-08-27 04:04:57 +00:00
Abhimanyu Yadav
c6821484c7 fix(frontend): add min-width to agent run draft view component (#10731)
- Resolves
https://github.com/Significant-Gravitas/AutoGPT/issues/10618

When we have a dropdown with a large description, the actions button is
pushed out of the dialog box. To fix this, I've added a temporary
solution; in the future, we need to change the entire layout.


### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-08-27 04:04:16 +00:00
Reinier van der Leer
fecbd3042d fix(frontend/library): Fix new runs not appearing in list (#10750)
- Fixes #10749

### Changes 🏗️

- Fix implementation of `useAgentRunsInfinite.upsertAgentRun(..)`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] New runs appear in runs list
2025-08-26 17:44:10 +00:00
Zamil Majdy
c0172c93aa fix(backend/executor): prevent infinite requeueing of malformed messages (#10746)
### Changes 🏗️

This PR fixes an infinite loop issue in the execution manager where
malformed or unparseable messages would be continuously requeued,
causing high CPU usage and preventing the system from processing
legitimate messages.

**Key changes:**
- Modified `_ack_message()` function to accept explicit `requeue`
parameter
- Set `requeue=False` for malformed/unparseable messages that cannot be
fixed by retrying
- Set `requeue=False` for duplicate execution attempts (graph already
running)
- Kept `requeue=True` for legitimate failures that may succeed on retry
(e.g., temporary resource constraints, network issues)

**Technical details:**
The previous implementation always set `requeue=True` when rejecting
messages with `basic_nack()`. This caused problematic messages to be
immediately re-delivered to the consumer, creating an infinite loop for:
1. Messages with invalid JSON that cannot be parsed
2. Messages for executions that are already running (duplicates)

These scenarios will never succeed regardless of how many times they're
retried, so they should be rejected without requeueing to prevent
resource exhaustion.
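
A minimal sketch of the resulting acknowledgement logic (pika-style
`basic_ack`/`basic_nack`; the helper shape follows the description
above, while `run_execution` and the exception type are stand-ins):

```python
import json


def _ack_message(channel, delivery_tag: int, reject: bool, requeue: bool) -> None:
    """Ack or reject a message, only requeueing when a retry could succeed."""
    if reject:
        channel.basic_nack(delivery_tag, requeue=requeue)
    else:
        channel.basic_ack(delivery_tag)


def on_message(channel, method, properties, body: bytes) -> None:
    try:
        request = json.loads(body)
    except json.JSONDecodeError:
        # Malformed payloads can never succeed; requeueing would make the
        # broker redeliver them immediately, creating an infinite loop.
        _ack_message(channel, method.delivery_tag, reject=True, requeue=False)
        return
    try:
        run_execution(request)
    except RuntimeError:
        # Transient failures (pool full, shutting down) may succeed later.
        _ack_message(channel, method.delivery_tag, reject=True, requeue=True)
        return
    _ack_message(channel, method.delivery_tag, reject=False, requeue=False)


def run_execution(request: dict) -> None:  # hypothetical stand-in
    ...
```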

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified malformed messages are rejected without requeue
- [x] Confirmed duplicate execution messages are rejected without
requeue
- [x] Ensured legitimate failures (shutdown, pool full) still requeue
properly
- [x] Tested that normal message processing continues to work correctly
2025-08-26 18:34:58 +07:00
Krzysztof Czerwinski
8a68e03eb1 feat(backend): Blocks Menu redesign backend (#10128)
Backend for the Blocks Menu Redesign.

### Changes 🏗️

- Add optional `agent_name` to the `AgentExecutorBlock` - displayed as
the block name in the Builder
- Include `output_schema` in the `LibraryAgent` model
- Make `v2.store.db.py:get_store_agents` accept multiple creators filter
- Add `api/builder` router with endpoints (and accompanying logic in
`v2/builder/db` and models in `v2/builder/models`)
  - `/suggestions`: elements for the suggestions tab
  - `/categories`: categories with a number of blocks per each
  - `/blocks`: blocks based on category, type or provider
  - `/providers`: integration providers with their block counts
  - `/search`: search blocks (including integrations), marketplace
agents and user library agents
  - `/counts`: element counts for each category in the Blocks Menu.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Modified function `get_store_agents` works in existing code paths
  - [x] Agent executor block works
  - [x] New endpoints work
  - [x] Existing Builder menu is unaffected

---------

Co-authored-by: Abhimanyu Yadav <abhimanyu1992002@gmail.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-26 02:23:10 +00:00
Reinier van der Leer
da585a34e1 fix(frontend): Propagate API auth errors to original requestor (#10716)
- Resolves #10713

### Changes 🏗️

- Remove early exit in API proxy that suppresses auth errors
- Remove unused `proxy-action.ts`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Publish Agent dialog works when logged out
  - [x] Publish Agent dialog works when logged in

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-25 19:02:41 +00:00
Copilot
bc43d05cac feat(backend): Ensure database manager clients only include methods needed for their contexts (#10717)
The database manager had both sync and async clients that contained
overlapping methods, including some that weren't actually used in their
respective contexts. This violated the principle that each client should
only expose the methods it needs.

## Problem

The `DatabaseManagerClient` (sync) included
`get_user_execution_summary_data`, but this method was only ever used in
async contexts like the notifications system. This created unnecessary
coupling and violated the design goal of having focused,
context-specific clients.

## Solution

After comprehensive analysis of actual method usage across the codebase:

- **Removed** `get_user_execution_summary_data` from
`DatabaseManagerClient` since it's only used in async contexts
(notifications)
- **Verified** all remaining methods on both clients are actively used
in their respective contexts:
- Sync client (11 methods): Used in monitoring and main execution thread
- Async client (26 methods): Used in node execution, blocks, and
notifications
- **Maintained** the base `DatabaseManager` class with the union of all
methods needed by either client

## Impact

Each client now contains exactly the methods it needs for its specific
usage patterns:

- `DatabaseManagerClient` handles synchronous operations like monitoring
and credit management
- `DatabaseManagerAsyncClient` handles asynchronous operations like node
execution, persistence, and notifications

The change is minimal and surgical - only removing one unused method
while preserving all actually-used functionality.

Fixes #10658.


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Bently <Github@bentlybro.com>
2025-08-25 18:56:49 +00:00
Mitansh Jadhav
469b1fccbb fix(blocks): handle invalid or empty response from MusicGen model (#10533)
Handle invalid or empty response from MusicGen model


Fixes: #9145
> ⚠️ Note: This PR does not directly fix issue #9145 (failed run marked
as success), but improves the validation of the URL to reduce the
chances of invalid states entering the system. This is a related
improvement, but not the root cause fix.


### Description
During execution of the meta/musicgen model via Replicate API, the
application failed
with an error indicating the model returned an empty or invalid
response.
Although some API calls succeeded, this error showed the logic was not
checking the
structure and content of the result properly before processing it.

Problem context:
- API: Replicate
- Model: meta/musicgen:671ac645
- Status: failed after 3 attempts
- Error message: "Unexpected error: Model returned empty or invalid
response"

Cause:
- The original logic did not validate the result structure.
- It assumed any non-null output was valid, including strings like "No
output received".
- This led to invalid/malformed results being passed to the frontend.

### Changes 🏗️

- Added `AIMusicGeneratorBlock` to support music generation using Meta’s
MusicGen models via Replicate API.
- Supports configurable inputs like prompt, model version, duration,
temperature, top_k/p, and normalization.
- Uses robust retry logic for reliability.
- Output returns audio URL; errors return user-friendly message.

Before:
```python
if result and result != "No output received":
    yield "result", result
    return
```

After:
```python
if result and isinstance(result, str) and result.startswith("http"):
    yield "result", result
    return
```

### Checklist 📋

#### For code changes:
- [x] Clearly listed changes in the PR description
- [x] Added test plan and mock outputs
- [x] Tested with various prompts and confirmed working output

### Test Plan

- [x] Ran locally with valid Replicate API key
- [x] Generated audio with different prompts
- [x] Simulated failure to verify retry and error message

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-25 16:35:12 +00:00
Bently
1aa7e10cbd feat(AM): fix moderation id message (#10733)
This fixes the moderation message so it properly shows the moderation
ID.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Trigger moderation and confirm it shows the moderation ID in the
error message
2025-08-25 16:08:26 +00:00
Nicholas Tindle
890bb3b8b4 feat(backend): implement low balance and insufficient funds notifications (#10656)
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: majdyz <zamil@agpt.co>
2025-08-25 11:17:40 -05:00
Nicholas Tindle
2bb8e91040 feat(backend): Add user timezone support to backend (#10707)
Co-authored-by: Swifty <craigswift13@gmail.com>
Resolves issue #10692, where the scheduled time and the actual run time
didn't match.
2025-08-25 11:00:07 -05:00
Nicholas Tindle
76090f0ba2 feat(backend,frontend): Send applicant email on review response (#10718)
### Changes 🏗️

This PR implements email notifications for agent creators when their
agent submissions are approved or rejected by an admin in the
marketplace.

Specifically, the changes include:
- Added `AGENT_APPROVED` and `AGENT_REJECTED` notification types to
`schema.prisma`.
- Created `AgentApprovalData` and `AgentRejectionData` Pydantic models
for notification data.
- Configured the notification system to use immediate queues and new
Jinja2 templates for these types.
- Designed two new email templates: `agent_approved.html.jinja2` and
`agent_rejected.html.jinja2`, with dynamic content for agent details,
reviewer feedback, and relevant action links.
- Modified the `review_store_submission` function to:
    - Include `User` and `Reviewer` data in the database query.
- Construct and queue the appropriate email notification based on the
approval/rejection status.
- Ensure email sending failures do not block the agent review process.
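
The rendering step, schematically (generic Jinja2 usage; the template
name is from this PR, while the context fields are assumed):

```python
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"), autoescape=True)
template = env.get_template("agent_approved.html.jinja2")
html = template.render(
    agent_name="My Agent",           # assumed context fields
    reviewer_comments="Nice work!",
    store_url="https://platform.agpt.co/store",
)
```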

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Approve an agent via the admin dashboard.
- [x] Verify the agent creator receives an "Agent Approved" email with
correct details and a link to the store.
  - [x] Reject an agent via the admin dashboard (providing a reason).
- [x] Verify the agent creator receives an "Agent Rejected" email with
correct details, the rejection reason, and a link to resubmit.
- [x] Verify that if email sending fails (e.g., misconfigured SMTP), the
agent approval/rejection process still completes successfully without
error.

<img width="664" height="975" alt="image"
src="https://github.com/user-attachments/assets/d397f2dc-56eb-45ab-877e-b17f1fc234d1"
/>
<img width="664" height="975" alt="image"
src="https://github.com/user-attachments/assets/25597752-f68c-46fe-8888-6c32f5dada01"
/>


---
Linear Issue: [SECRT-1168](https://linear.app/autogpt/issue/SECRT-1168)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-08-25 14:24:16 +00:00
Bently
5b12e02c4e feat(AutoMod): add `moderation id` to moderation message (#10728)
AutoModManager now captures and propagates the content_id from the
moderation API for both input and output moderation. AutoModResponse and
ModerationError are updated to include content_id, allowing better
traceability of moderation actions and error reporting. With this, the
error message will now show ``Failed due to content moderation
(Moderation ID: uuid-here)``.

This is useful when a user is having an issue with AutoMod falsely
flagging their runs: we can use the moderation ID to look at AutoMod and
see what's going on.


### Changes 🏗️

Just some updates to receive the content_id from AutoMod and then show
it in the "Failed due to content moderation" message

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run autogpt with AM and trigger it and it will show the content_id
in the error message
2025-08-25 12:47:11 +00:00
Nicholas Tindle
e0520f5e0a fix(frontend): Validate and sanitize cron expressions for scheduler API (#10719)

### Need for these changes 💥

This PR resolves Linear issue `SECRT-1290`, addressing a critical bug
where the scheduler API fails with a "Wrong number of fields" error when
empty or invalid cron expressions are submitted from the frontend. This
was causing production errors and a poor user experience. The root cause
was an off-by-one error.
### Changes 🏗️


Fix the off-by-one error, and add additional logging / error messaging
when someone submits an invalid cron expression.
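
A generic illustration of the kind of field-count check involved (the
actual fix lives in the frontend cron utility; this Python sketch
assumes a standard 5-field crontab):

```python
EXPECTED_FIELDS = 5  # minute hour day-of-month month day-of-week


def validate_cron(expression: str) -> str:
    """Reject empty or malformed cron strings before they reach the API."""
    fields = expression.split()
    if len(fields) != EXPECTED_FIELDS:
        raise ValueError(
            f"Wrong number of fields: expected {EXPECTED_FIELDS}, "
            f"got {len(fields)} in {expression!r}"
        )
    return expression


validate_cron("0 9 * * 1-5")  # ok: weekdays at 09:00
# validate_cron("") would raise ValueError instead of a server-side error
```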


https://github.com/user-attachments/assets/775881a9-707b-4c4f-b23a-bd7118a358ee



### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Attempt to schedule an agent with an empty cron expression from
the UI and confirm a frontend toast error.
- [x] Attempt to schedule an agent with an incomplete yearly cron (no
months selected) from the UI and confirm a frontend toast error and UI
warning.
- [x] Attempt to schedule an agent with an incomplete monthly cron (no
days selected) from the UI and confirm a frontend toast error and UI
warning.
- [x] Attempt to schedule an agent with an incomplete weekly cron (no
days selected) from the UI and confirm a frontend toast error and UI
warning.
- [x] Verify that valid cron expressions can still be scheduled
successfully.
  - [x] Run backend unit tests for scheduler cron validation.
  - [x] Run frontend unit tests for cron expression utility.


---
Linear Issue: [SECRT-1290](https://linear.app/autogpt/issue/SECRT-1290)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-08-25 11:16:26 +00:00
Swifty
a9530b7304 fix(frontend): Remove old experience alert (#10730)
At the bottom of the library page is an alert directing people to the
old monitoring page. This PR removes this link, as the old monitoring
page no longer works.

### Changes 🏗️

Remove Alert directing users to the old monitoring page. 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check alert no longer shows on the library page
2025-08-25 10:56:01 +00:00
Ubbe
8a9c165faf feat(frontend): new run modal components (#10729)
## Changes 🏗️

Add components needed for the new **Agent Run Modal** (_splitting PRs
this way so the modal PR will be smaller_ 💆🏽).
[Design
reference](https://www.figma.com/design/14jjs3hH3Hmkq4hGqxZWco/agent-runs-unification?node-id=188-15511&t=6fVja182TuoluMwc-1).

### `<Breadcrumbs />`

<img width="248" height="72" alt="Screenshot 2025-08-25 at 17 52 36"
src="https://github.com/user-attachments/assets/6191aa03-bb6b-47fe-af8c-20dbdb1b9d06"
/>

Before, the project used the Shadcn Breadcrumbs directly; it now uses
new ones styled following the AutoGPT Design System.
### `<MultiToggle />` 

<img width="350" height="148" alt="Screenshot 2025-08-25 at 17 52 07"
src="https://github.com/user-attachments/assets/e1bbb735-62e5-4c73-929a-52ec0109f274"
/>

### `<Collapsible />`

<img width="350" height="135" alt="Screenshot 2025-08-25 at 17 52 50"
src="https://github.com/user-attachments/assets/e4ee4026-8bd5-4d08-8875-3ecb573bd6bb"
/>

### `<ShowMoreText />`

<img width="500" height="60" alt="Screenshot 2025-08-25 at 17 52 17"
src="https://github.com/user-attachments/assets/2e85a192-b7ab-4f5f-b35d-5ed9d3ef6132"
/>

This is very similar to `<Collapsible />`; the difference is that it is
designed to work specifically with text. The `more/less` trigger is
displayed next to the text in a simple way. `<Collapsible />` might be
used with other elements, for example FAQs or hiding/expanding forms.

### `<LLMItem />`

<img width="300" height="78" alt="Screenshot 2025-08-25 at 17 52 26"
src="https://github.com/user-attachments/assets/7b904e15-75d3-4f11-9863-2d0db072e884"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook
  - [x] Stories look good and make sense 

#### For configuration changes:

None
2025-08-25 10:29:55 +00:00
Nicholas Tindle
476bfc6c84 feat(backend): add store meta blocks (#10633)
<!-- Clearly explain the need for these changes: -->
This PR implements blocks that enable users to interact with the AutoGPT
store and library programmatically. This addresses the need for agents
to be able to add other agents from the store to their library and
manage agent collections automatically, as requested in Linear issue
OPEN-2602. These are locked behind LaunchDarkly for now.


https://github.com/user-attachments/assets/b8518961-abbf-4e9d-a31e-2f3d13fa6b0d


### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Added new store operations blocks**
(`backend/blocks/system/store_operations.py`):
- `GetStoreAgentDetailsBlock`: Retrieves detailed information about an
agent from the store
- `SearchStoreAgentsBlock`: Searches for agents in the store with
various filters

  
- **Added new library operations blocks**
(`backend/blocks/system/library_operations.py`):
  - `ListLibraryAgentsBlock`: Lists all agents in the user's library
- `AddToLibraryFromStoreBlock`: Adds an agent from the store to the
user's library

- **Updated block exports** in `backend/blocks/system/__init__.py` to
include new blocks

- **Added comprehensive tests** for store operations in
`backend/blocks/test/test_store_operations.py`

- **Enhanced executor database utilities** in
`backend/executor/database.py` with new helper methods for agent
management

- **Updated frontend marketplace page** to properly handle the new store
operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Created unit tests for all new store operation blocks
- [x] Tested GetStoreAgentDetailsBlock retrieves correct agent
information
- [x] Tested SearchStoreAgentsBlock filters and returns agents correctly
- [x] Tested AddToLibraryFromStoreBlock successfully adds agents to
library
  - [x] Tested error handling for non-existent agents and invalid inputs
  - [x] Verified all blocks integrate properly with the database manager
  - [x] Confirmed blocks appear in the block registry and are accessible

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-08-25 07:56:23 +00:00
Bently
61f17e5b97 feat(scheduler): Remove Schedule "Every Minute" Option (#10706)
This simply removes the "every minute" option from the schedule tasks
UI. It's still possible to set an every-minute schedule via the custom
option.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Check the Schedule Tasks UI to see there is no more "Every Minute"
option
- [x] Check you can still schedule an agent to run every minute using
the custom option
2025-08-24 21:15:11 +00:00
Nicholas Tindle
a54bed6d68 fix(frontend): filter credentials to ones that are supported (#10725)
<!-- Clearly explain the need for these changes: -->
We're showing invalid credential types in credential selections across the app
### Changes 🏗️
We go from (incorrect)
<img width="2551" height="1202" alt="image"
src="https://github.com/user-attachments/assets/e566ed6c-b6c9-4047-80fd-0f2c8cef0bf9"
/>
to this with the fix
<img width="2551" height="1202" alt="image"
src="https://github.com/user-attachments/assets/c720a3d4-9c03-48c5-82a3-d30752bce13c"
/>

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] test the broken agent and upload images proving it's no longer
broken
2025-08-24 21:13:05 +00:00
Nicholas Tindle
5502256bea feat(backend): DiscordGetCurrentUserBlock to fetch authenticated user details via OAuth2 (#10723)
<!-- Clearly explain the need for these changes: -->

We want a way to get the user's ID from Discord without them having to
enable developer mode, so this adds one: OAuth login.

<img width="2551" height="1202" alt="image"
src="https://github.com/user-attachments/assets/71be07a9-fd37-4ea7-91a1-ced8972fda29"
/>


### Changes 🏗️
- Created DiscordOAuthHandler for managing OAuth2 flow, including login
URL generation, token exchange, and revocation.
- Implemented support for PKCE in the OAuth2 flow (see the sketch
below).
- Enhanced error handling for user info retrieval and token management.
- Add discord block for getting the logged in user
- Add new client secret field to .env.default

<!-- Concisely describe all of the changes made in this pull request:
-->
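
For context on the PKCE item above: PKCE pairs a random code verifier
with a SHA-256 challenge. A minimal, illustrative sketch of generating
such a pair (not the actual `DiscordOAuthHandler` code):

```python
import base64
import hashlib
import secrets

def generate_pkce_pair() -> tuple[str, str]:
    # RFC 7636: the verifier is a high-entropy URL-safe string
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 hash of the verifier
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# The challenge goes into the login URL; the verifier is kept and sent
# during the token exchange so the server can verify the pair.
```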

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] add the blocks and test they all work


#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-08-24 20:53:48 +00:00
Nicholas Tindle
bd97727763 feat(platform): add ability to reject approved agents in admin dashboard (#10671)
<!-- Clearly explain the need for these changes: -->
We need the ability to reject or remove already approved agents from the
marketplace via the Admin Dashboard. Previously, once an agent was
approved, there was no easy way to remove it from the marketplace
without direct database intervention.

This addresses several use cases:
- Removing agents that require credentials (short-term solution
discussed with Reinier)
- Handling broken agents mistakenly approved
- Managing outdated or problematic agents
- Quick response to issues without engineering support

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- **Backend**: Modified `review_store_submission` function in
`/backend/server/v2/store/db.py` to handle rejecting already approved
agents
  - Added logic to detect when rejecting an approved agent
  - Updates StoreListing to remove agent from marketplace when rejected
  - Handles multiple approved versions correctly (see the sketch below)
- **Frontend**: Updated admin marketplace UI components
- `expandable-row.tsx`: Show action buttons for both PENDING and
APPROVED agents
  - `approve-reject-buttons.tsx`: 
- Show only "Revoke" button for approved agents (hide Approve button)
    - Update button text from "Reject" to "Revoke" for approved agents
    - Update dialog titles and descriptions appropriately
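
The revoke path described above behaves roughly like this sketch
(illustrative only; simplified stand-ins for the actual Prisma models
and `db.py` logic):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    id: str
    status: str  # "PENDING" | "APPROVED" | "REJECTED"

@dataclass
class Listing:
    versions: list[Version]
    active_version_id: Optional[str] = None
    has_approved_version: bool = False

def revoke(listing: Listing, version_id: str) -> None:
    """Reject an already-approved version and update the listing."""
    target = next(v for v in listing.versions if v.id == version_id)
    target.status = "REJECTED"
    approved = [v for v in listing.versions if v.status == "APPROVED"]
    if approved:
        # Other approved versions exist: switch the listing to the latest one
        listing.active_version_id = approved[-1].id
        listing.has_approved_version = True
    else:
        # That was the only approved version: remove it from the marketplace
        listing.active_version_id = None
        listing.has_approved_version = False
```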

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Navigate to Admin Dashboard > Marketplace Management
- [x] Find a PENDING agent and verify both Approve and Reject buttons
appear
  - [x] Find an APPROVED agent and verify only "Revoke" button appears
- [x] Click Revoke on an approved agent and verify dialog shows "Revoke
Approved Agent" title
- [x] Submit revocation with comments and verify agent status changes to
REJECTED
  - [x] Verify the agent is removed from the public marketplace
- [x] Test with an agent that has multiple approved versions - verify it
switches to another approved version
- [x] Test with an agent that has only one approved version - verify
hasApprovedVersion is set to false

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this uses existing admin
authentication and database schema.

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-23 21:30:18 +00:00
Reinier van der Leer
aa256f21cd feat(platform/library): Infinite scroll in Agent Runs list (#10709)
- Resolves #10645

### Changes 🏗️

- Implement infinite scroll in the Agent Runs list (on
`/library/agents/[id]`)
- Add horizontal scroll support to `ScrollArea` and `InfiniteScroll`
components
- Fix `InfiniteScroll` triggering twice
- Fix date handling in React Query results
  - Add response mutator to parse dates coming out of API
  - Make legacy `GraphExecutionMeta` compatible with generated type

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Open `/library/agents/[id]`
    - [x] Agent runs list loads
  - Scroll agent runs list to the end
    - [x] More runs are loaded and appear in the list
2025-08-22 15:35:09 +00:00
Swifty
2848e62f8a feat(backend): Add BaaS integration blocks (#10350)
### Changes 🏗️

This PR adds Meeting BaaS (Bot-as-a-Service) integration to the AutoGPT
platform, enabling automated meeting recording and transcription
capabilities.

<img width="1157" height="633" alt="Screenshot 2025-08-22 at 15 06 15"
src="https://github.com/user-attachments/assets/53b88bc8-5580-4287-b6ed-3ae249aed69f"
/>

[BAAS
Test_v12.json](https://github.com/user-attachments/files/21938290/BAAS.Test_v12.json)

**New Features:**
- **Meeting Recording Bot Management:**
  - Deploy bots to join and record meetings automatically
  - Support for multiple meeting platforms via meeting URL
  - Scheduled bot deployment with Unix timestamp support
  - Custom bot avatars and entry messages
  - Webhook support for real-time event notifications
  
- **Meeting Data Operations:**
  - Retrieve MP4 recordings and transcripts from completed meetings
  - Delete recording data for privacy/storage management
  - Force bots to leave ongoing meetings
  
**Technical Implementation:**
- Added 4 new files under `backend/blocks/baas/`:
  - `__init__.py`: Package initialization
  - `_api.py`: Meeting BaaS API client with comprehensive endpoints
  - `_config.py`: Provider configuration using SDK pattern
- `bots.py`: 4 bot management blocks (Join, Leave, Fetch Data, Delete
Recording)

**Key Capabilities:**
- Join meetings with customizable bot names and avatars
- Automatic transcription with configurable speech-to-text providers
- Time-limited MP4 download URLs for recordings
- Reserved bot slots for joining 4 minutes before meetings
- Automatic leave timeouts configuration
- Custom metadata support for tracking

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and executed an agent with Meeting BaaS bot deployment
blocks
  - [x] Tested bot joining meetings with various configurations
  - [x] Verified recording retrieval and transcript functionality
  - [x] Tested bot removal from ongoing meetings
  - [x] Confirmed data deletion operations work correctly
  - [x] Verified error handling for invalid API keys and bot IDs
  - [x] Tested webhook URL configuration

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-22 15:03:35 +00:00
Bently
fa5ff9ca3c feat(docs): Update Ollama setup docs with new methods (#10693)
### Changes 🏗️

Expanded the Ollama setup instructions to include desktop app, Docker,
and legacy CLI methods. Added new screenshots for network exposure and
model selection. Clarified steps for starting the AutoGPT platform,
configuring models, and troubleshooting. Included instructions for
adding custom models and improved overall documentation structure.


#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Look at the new ollama doc to make sure it makes sense and is easy
to follow
2025-08-21 21:32:53 +00:00
Swifty
7c908c10b8 feat(blocks): Add DataForSEO keyword research blocks (#10711)
<!-- Clearly explain the need for these changes: -->
This PR adds two new DataForSEO blocks for keyword research
functionality, enabling users to get keyword suggestions and related
keywords using the DataForSEO Labs API.



https://github.com/user-attachments/assets/55b3f64b-20b2-4c6d-b307-01327d476fe2

[DataForSeo
Poc_v3.json](https://github.com/user-attachments/files/21916605/DataForSeo.Poc_v3.json)


### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Added `DataForSeoKeywordSuggestionsBlock` for getting keyword
suggestions from DataForSEO Labs
- Added `DataForSeoRelatedKeywordsBlock` for getting related keywords
from DataForSEO Labs
- Implemented proper Pydantic models (`KeywordSuggestion` and
`RelatedKeyword`) for type-safe outputs
- Added mockable private methods (`_fetch_keyword_suggestions` and
`_fetch_related_keywords`) for better testability (see the sketch below)
- Included comprehensive test mocks to allow testing without actual API
credentials
- Both blocks support optional SERP info and clickstream data
- Added DataForSEO provider configuration using the SDK's
ProviderBuilder pattern
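
The mockable-method pattern mentioned above, as a sketch (class and
method bodies are illustrative, not the real block implementation):

```python
from typing import Any
from unittest.mock import patch

class KeywordSuggestionsSketch:
    """Illustrative stand-in for a block with a mockable fetch method."""

    def _fetch_keyword_suggestions(self, keyword: str) -> list[dict[str, Any]]:
        raise NotImplementedError("would call the DataForSEO Labs API")

    def run(self, keyword: str) -> list[dict[str, Any]]:
        # All network I/O goes through the private method, so tests can patch it
        return self._fetch_keyword_suggestions(keyword)

# In tests, patch the private method so no API credentials are needed:
with patch.object(
    KeywordSuggestionsSketch,
    "_fetch_keyword_suggestions",
    return_value=[{"keyword": "ai agents", "search_volume": 1000}],
):
    assert KeywordSuggestionsSketch().run("ai")[0]["keyword"] == "ai agents"
```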

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run block tests for DataForSeoKeywordSuggestionsBlock
  - [x] Run block tests for DataForSeoRelatedKeywordsBlock
  - [x] Verify mocks work correctly without API credentials
  - [x] Confirm proper Pydantic model serialization
  - [x] Run poetry format and fix any linting issues
2025-08-21 21:17:34 +00:00
Copilot
68cd1cb398 feat(platform): Enhance GitHub Copilot agent setup with CI-aligned environment and comprehensive platform documentation integration (#10708)
This PR transforms GitHub Copilot from a general coding assistant into a
platform-specific expert by providing comprehensive onboarding and a
fully configured development environment that mirrors the AutoGPT
platform's CI/CD pipeline.

## Key Features

### CI-Aligned Development Environment
- **Production-Grade Setup**: The `copilot-setup-steps.yml` workflow now
mirrors the platform's CI workflows, providing Copilot with the same
environment used for testing and deployment
- **Essential Services**: Automatically starts Docker services
(PostgreSQL, Redis, RabbitMQ, ClamAV) required for platform development
- **Database Readiness**: Includes Prisma migrations and client
generation to ensure the database is immediately usable
- **Environment Configuration**: Copies default environment files and
validates service health, eliminating manual setup steps

### Comprehensive Platform Knowledge
- **Unified Documentation**: The `copilot-instructions.md` file
integrates knowledge from multiple sources (installer.md,
advanced_setup.md, CLAUDE.md, AGENTS.md) into a single comprehensive
guide
- **Development Patterns**: Includes advanced patterns for block
development, API creation, frontend work, and security implementation
- **Architecture Insights**: Provides deep understanding of the agent
block system, database schema, middleware, and CI/CD workflows
- **Troubleshooting Guide**: Comprehensive error handling and validation
steps based on production experience

### Enhanced Development Workflow
```bash
# Before: Trial-and-error dependency installation
# After: Fully configured environment ready immediately

# Copilot now understands:
poetry run serve                    # Backend server (port 8000)
pnpm dev                           # Frontend server (port 3000)
docker compose up -d               # Essential services
poetry run pytest backend/blocks/test/test_block.py -xvs  # Block validation
```

## Benefits

This configuration provides Copilot with:
- **Immediate Productivity**: No time wasted on environment setup or
dependency discovery
- **Platform Expertise**: Deep understanding of AutoGPT's architecture,
security patterns, and development workflows
- **CI Consistency**: Local development environment matches production
testing environment
- **Comprehensive Context**: Access to all platform documentation and
development patterns in a single source

The setup eliminates the slow trial-and-error process typical of
LLM-based development tools and ensures Copilot works efficiently from
the first interaction.

## References

- [GitHub Copilot Agent Environment
Setup](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/customize-the-agent-environment#preinstalling-tools-or-dependencies-in-copilots-environment)
- [AutoGPT Platform CI Workflows](.github/workflows/)
- [Platform Development Guide](autogpt_platform/CLAUDE.md)


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-08-21 21:00:19 +00:00
Reinier van der Leer
f4538d6f5a build(backend): Change base image to debian:13-slim w/ Python 3.13 (#10654)
- Resolves #10653

The objective is to move to a base image with fewer active
vulnerabilities. Hence the choice for `debian:13-slim` (0 high, 1
medium, 21 low severity), a huge improvement compared to our current
base image `python:3.11.10-slim-bookworm` (4 high, 11 medium, 15 low
severity).

### Changes 🏗️

- Change backend base image to `debian:13-slim`
  - Use Python 3.13
- Fix now-deprecated use of class property in `AppProcess` and
`BaseAppService`
- Expand backend CI matrix to run with Python 3.11 through 3.13
- Update Python version constraint in `pyproject.toml` to include Python
3.13

Also, unrelated:
- Update `autogpt-platform-backend` package version to `v0.6.22`, the
latest release

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI passes
  - [x] No new errors in deployment logs
  - [x] Everything seems to work normally in deployment
2025-08-20 19:34:33 +00:00
dependabot[bot]
4589b15450 chore(libs/deps-dev): Bump ruff from 0.12.3 to 0.12.4 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10421)
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.12.3&new-version=0.12.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-08-20 14:01:32 +00:00
Abhimanyu Yadav
ccc4d0dc6c fix(frontend): clear agent name and description on agent file removal (#10703)
With this PR, when a user removes the agent file, the agent name and
description will also be removed if they haven’t been changed. However,
if the user has modified either of them, they will remain.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Everything works perfectly on local, and the tests are also
passing.
2025-08-20 13:41:37 +00:00
Swifty
c4483fa6c7 Merge branch 'dev' 2025-08-20 15:22:08 +02:00
Swifty
c2af8c1a6a Reapply "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 260dd526c9.
2025-08-20 15:13:46 +02:00
Abhimanyu Yadav
2610c4579f feat(platform/dashboard): Enable editing for agent submissions (#10545)
- resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10511

In this PR, I’ve added backend endpoints and a frontend UI for edit
functionality on the Agent Dashboard. Now users can update their store
submission if its status is `PENDING` or `APPROVED`, but not if it is
`REJECTED` or `DRAFT`. When users make changes to a pending-status
submission, the changes are made to the same version. However, when
users make changes to an approved-status submission, a new store
listing version is created.

Backend works something like this: 

<img width="866" height="832" alt="Screenshot 2025-08-15 at 9 39 02 AM"
src="https://github.com/user-attachments/assets/209c60ac-8350-43c1-ba4c-7378d95ecba7"
/>

### Changes
- I’ve updated the `StoreSubmission` view to include `video_url` and
`categories`.
- I’ve added a new frontend UI for editing submissions.
- I’ve created an endpoint for editing submissions.
- I’ve added more end-to-end tests to ensure the edit submission
functionality works as expected.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have checked manually, everything is working perfectly.
  - [x] All e2e tests are also passing.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: neo <neo.dowithless@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-08-20 02:49:29 +00:00
Abhimanyu Yadav
0c09b0c459 chore(api): remove launch darkly feature flags from api key endpoints (#10694)
Some API key endpoints have the Launch Darkly feature flag enabled,
while others don’t. To ensure consistency and remove the API key flag
from the Launch Darkly dashboard, I’m also removing it from the
remaining endpoints.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything is working fine locally
2025-08-19 17:10:43 +00:00
Abhimanyu Yadav
1105e6c0d2 tests(frontend): e2e tests for api key page (#10683)
I’ve added three tests for the API keys page:

- The test checks if the user is redirected to the login page when
they’re not authenticated.
- The test verifies that a new API key is created successfully.
- The test ensures that an existing API key can be revoked.

<img width="470" height="143" alt="Screenshot 2025-08-19 at 10 56 19 AM"
src="https://github.com/user-attachments/assets/d27bf736-61ec-435b-a6c4-820e4f3a5e2f"
/>

I’ve also removed the feature flag from the `delete_api_key` endpoint,
so we can use it on CI and in the local environment.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] tests are working perfectly locally.

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2025-08-19 16:04:15 +00:00
Nicholas Tindle
483c399812 Revert "feat(platform): add ability to reject approved agents in admin dashboard"
This reverts commit 92515b3683.
2025-08-19 00:49:30 -05:00
Nicholas Tindle
260dd526c9 Revert "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 105d5dc7e9, reversing
changes made to 92515b3683.
2025-08-19 00:49:10 -05:00
Nicholas Tindle
75a159db01 Revert "style(frontend): format code and improve dialog titles in approve-reject buttons"
This reverts commit 62032e6584.
2025-08-19 00:48:22 -05:00
Nicholas Tindle
62032e6584 style(frontend): format code and improve dialog titles in approve-reject buttons 2025-08-18 23:32:43 -05:00
Nicholas Tindle
105d5dc7e9 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard 2025-08-18 22:59:27 -05:00
Nicholas Tindle
92515b3683 feat(platform): add ability to reject approved agents in admin dashboard
- Modified backend review_store_submission to handle rejecting approved agents
- Added logic to update StoreListing when rejecting approved agents
- Updated UI to show "Revoke" button for approved agents
- Only shows approve button for pending agents
- Updates dialog text appropriately for revoking vs rejecting

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-18 10:35:25 -05:00
787 changed files with 46948 additions and 8063 deletions

.github/copilot-instructions.md (new file)

@@ -0,0 +1,244 @@
# GitHub Copilot Instructions for AutoGPT
This file provides comprehensive onboarding information for GitHub Copilot coding agent to work efficiently with the AutoGPT repository.
## Repository Overview
**AutoGPT** is a powerful platform for creating, deploying, and managing continuous AI agents that automate complex workflows. This is a large monorepo (~150MB) containing multiple components:
- **AutoGPT Platform** (`autogpt_platform/`) - Main focus: Modern AI agent platform (Polyform Shield License)
- **Classic AutoGPT** (`classic/`) - Legacy agent system (MIT License)
- **Documentation** (`docs/`) - MkDocs-based documentation site
- **Infrastructure** - Docker configurations, CI/CD, and development tools
**Primary Languages & Frameworks:**
- **Backend**: Python 3.10-3.13, FastAPI, Prisma ORM, PostgreSQL, RabbitMQ
- **Frontend**: TypeScript, Next.js 15, React, Tailwind CSS, Radix UI
- **Development**: Docker, Poetry, pnpm, Playwright, Storybook
## Build and Validation Instructions
### Essential Setup Commands
**Always run these commands in the correct directory and in this order:**
1. **Initial Setup** (required once):
```bash
# Clone and enter repository
git clone <repo> && cd AutoGPT
# Start all services (database, redis, rabbitmq, clamav)
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
2. **Backend Setup** (always run before backend development):
```bash
cd autogpt_platform/backend
poetry install # Install dependencies
poetry run prisma migrate dev # Run database migrations
poetry run prisma generate # Generate Prisma client
```
3. **Frontend Setup** (always run before frontend development):
```bash
cd autogpt_platform/frontend
pnpm install # Install dependencies
```
### Runtime Requirements
**Critical:** Always ensure Docker services are running before starting development:
```bash
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
**Python Version:** Use Python 3.11 (required; managed by Poetry via pyproject.toml)
**Node.js Version:** Use Node.js 21+ with pnpm package manager
### Development Commands
**Backend Development:**
```bash
cd autogpt_platform/backend
poetry run serve # Start development server (port 8000)
poetry run test # Run all tests (requires ~5 minutes)
poetry run pytest path/to/test.py # Run specific test
poetry run format # Format code (Black + isort) - always run first
poetry run lint # Lint code (ruff) - run after format
```
**Frontend Development:**
```bash
cd autogpt_platform/frontend
pnpm dev # Start development server (port 3000) - use for active development
pnpm build # Build for production (only needed for E2E tests or deployment)
pnpm test # Run Playwright E2E tests (requires build first)
pnpm test-ui # Run tests with UI
pnpm format # Format and lint code
pnpm storybook # Start component development server
```
### Testing Strategy
**Backend Tests:**
- **Block Tests**: `poetry run pytest backend/blocks/test/test_block.py -xvs` (validates all blocks)
- **Specific Block**: `poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[BlockName]' -xvs`
- **Snapshot Tests**: Use `--snapshot-update` when output changes, always review with `git diff`
**Frontend Tests:**
- **E2E Tests**: Always run `pnpm dev` before `pnpm test` (Playwright requires running instance)
- **Component Tests**: Use Storybook for isolated component development
### Critical Validation Steps
**Before committing changes:**
1. Run `poetry run format` (backend) and `pnpm format` (frontend)
2. Ensure all tests pass in modified areas
3. Verify Docker services are still running
4. Check that database migrations apply cleanly
**Common Issues & Workarounds:**
- **Prisma issues**: Run `poetry run prisma generate` after schema changes
- **Permission errors**: Ensure Docker has proper permissions
- **Port conflicts**: Check the `docker-compose.yml` file for the current list of exposed ports. You can list all mapped ports with `docker compose ps`.
- **Test timeouts**: Backend tests can take 5+ minutes, use `-x` flag to stop on first failure
## Project Layout & Architecture
### Core Architecture
**AutoGPT Platform** (`autogpt_platform/`):
- `backend/` - FastAPI server with async support
- `backend/backend/` - Core API logic
- `backend/blocks/` - Agent execution blocks
- `backend/data/` - Database models and schemas
- `schema.prisma` - Database schema definition
- `frontend/` - Next.js application
- `src/app/` - App Router pages and layouts
- `src/components/` - Reusable React components
- `src/lib/` - Utilities and configurations
- `autogpt_libs/` - Shared Python utilities
- `docker-compose.yml` - Development stack orchestration
**Key Configuration Files:**
- `pyproject.toml` - Python dependencies and tooling
- `package.json` - Node.js dependencies and scripts
- `schema.prisma` - Database schema and migrations
- `next.config.mjs` - Next.js configuration
- `tailwind.config.ts` - Styling configuration
### Security & Middleware
**Cache Protection**: Backend includes middleware preventing sensitive data caching in browsers/proxies
**Authentication**: JWT-based with Supabase integration
**User ID Validation**: All data access requires user ID checks - verify this for any `data/*.py` changes
### Development Workflow
**GitHub Actions**: Multiple CI/CD workflows in `.github/workflows/`
- `platform-backend-ci.yml` - Backend testing and validation
- `platform-frontend-ci.yml` - Frontend testing and validation
- `platform-fullstack-ci.yml` - End-to-end integration tests
**Pre-commit Hooks**: Run linting and formatting checks
**Conventional Commits**: Use format `type(scope): description` (e.g., `feat(backend): add API`)
### Key Source Files
**Backend Entry Points:**
- `backend/backend/server/server.py` - FastAPI application setup
- `backend/backend/data/` - Database models and user management
- `backend/blocks/` - Agent execution blocks and logic
**Frontend Entry Points:**
- `frontend/src/app/layout.tsx` - Root application layout
- `frontend/src/app/page.tsx` - Home page
- `frontend/src/lib/supabase/` - Authentication and database client
**Protected Routes**: Update `frontend/lib/supabase/middleware.ts` when adding protected routes
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include:
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
## Environment Configuration
### Configuration Files Priority Order
1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides)
2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides)
3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides)
4. Docker Compose `environment:` sections override file-based config
5. Shell environment variables have highest precedence
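As an illustration of this precedence, a sketch using `python-dotenv` (an assumption for illustration; `load_dotenv` skips keys that are already set unless told to override, so loading the higher-precedence file first yields shell env > `.env` > `.env.default`):
```python
import os
from dotenv import load_dotenv  # python-dotenv

# Highest-precedence file first: load_dotenv() never overwrites variables
# that are already set, so shell env > .env > .env.default.
load_dotenv("backend/.env")
load_dotenv("backend/.env.default")

print(os.environ.get("DATABASE_URL"))
```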
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Copy `.env.default` files to `.env` for local development customization
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
4. Generate block UUID using `uuid.uuid4()`
5. Register in block registry
6. Write tests alongside block implementation
7. Consider how inputs/outputs connect with other blocks in graph editor
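A self-contained toy sketch of steps 2-4 (illustrative; the platform's real `Block` base class and schema types differ):
```python
import uuid
from abc import ABC, abstractmethod
from typing import Any, AsyncGenerator

class ToyBlock(ABC):
    """Simplified stand-in for the platform's Block base class."""

    def __init__(self, block_id: str, input_schema: dict, output_schema: dict):
        self.id = block_id
        self.input_schema = input_schema
        self.output_schema = output_schema

    @abstractmethod
    def run(self, input_data: dict) -> AsyncGenerator[tuple[str, Any], None]: ...

class WordCountBlock(ToyBlock):
    def __init__(self):
        super().__init__(
            block_id=str(uuid.uuid4()),  # step 4: a generated UUID
            input_schema={"text": "string"},
            output_schema={"count": "integer"},
        )

    async def run(self, input_data: dict):
        try:
            yield "count", len(input_data["text"].split())
        except KeyError as e:
            yield "error", f"missing input: {e}"  # step 3: error handling
```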
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
4. For `data/*.py` changes, validate user ID checks
5. Run `poetry run test` to verify changes
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
### Security Guidelines
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`):
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)
- Prevents sensitive data caching in browsers/proxies
- Add new cacheable endpoints to `CACHEABLE_PATHS`
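A minimal sketch of the allow-list approach (illustrative; the real middleware in `security.py` is more involved):
```python
from fastapi import FastAPI, Request

app = FastAPI()
CACHEABLE_PATHS = ("/static/", "/health")  # illustrative allow list

@app.middleware("http")
async def no_store_by_default(request: Request, call_next):
    response = await call_next(request)
    # Default-deny: only allow-listed paths may be cached
    if not request.url.path.startswith(CACHEABLE_PATHS):
        response.headers["Cache-Control"] = "no-store, no-cache, must-revalidate, private"
    return response
```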
### CI/CD Alignment
The repository has comprehensive CI workflows that test:
- **Backend**: Python 3.11-3.13, services (Redis/RabbitMQ/ClamAV), Prisma migrations, Poetry lock validation
- **Frontend**: Node.js 21, pnpm, Playwright with Docker Compose stack, API schema validation
- **Integration**: Full-stack type checking and E2E testing
Match these patterns when developing locally - the copilot setup environment mirrors these CI configurations.
## Collaboration with Other AI Assistants
This repository is actively developed with assistance from Claude (via CLAUDE.md files). When working on this codebase:
- Check for existing CLAUDE.md files that provide additional context
- Follow established patterns and conventions already in the codebase
- Maintain consistency with existing code style and architecture
- Consider that changes may be reviewed and extended by both human developers and AI assistants
## Trust These Instructions
These instructions are comprehensive and tested. Only perform additional searches if:
1. Information here is incomplete for your specific task
2. You encounter errors not covered by the workarounds
3. You need to understand implementation details not covered above
For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root.


@@ -0,0 +1,97 @@
name: Auto Fix CI Failures

on:
  workflow_run:
    workflows: ["CI"]
    types:
      - completed

permissions:
  contents: write
  pull-requests: write
  actions: read
  issues: write
  id-token: write # Required for OIDC token exchange

jobs:
  auto-fix:
    if: |
      github.event.workflow_run.conclusion == 'failure' &&
      github.event.workflow_run.pull_requests[0] &&
      !startsWith(github.event.workflow_run.head_branch, 'claude-auto-fix-ci-')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_branch }}
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup git identity
        run: |
          git config --global user.email "claude[bot]@users.noreply.github.com"
          git config --global user.name "claude[bot]"

      - name: Create fix branch
        id: branch
        run: |
          BRANCH_NAME="claude-auto-fix-ci-${{ github.event.workflow_run.head_branch }}-${{ github.run_id }}"
          git checkout -b "$BRANCH_NAME"
          echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT

      - name: Get CI failure details
        id: failure_details
        uses: actions/github-script@v7
        with:
          script: |
            const run = await github.rest.actions.getWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: ${{ github.event.workflow_run.id }}
            });
            const jobs = await github.rest.actions.listJobsForWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: ${{ github.event.workflow_run.id }}
            });
            const failedJobs = jobs.data.jobs.filter(job => job.conclusion === 'failure');
            let errorLogs = [];
            for (const job of failedJobs) {
              const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
                owner: context.repo.owner,
                repo: context.repo.repo,
                job_id: job.id
              });
              errorLogs.push({
                jobName: job.name,
                logs: logs.data
              });
            }
            return {
              runUrl: run.data.html_url,
              failedJobs: failedJobs.map(j => j.name),
              errorLogs: errorLogs
            };

      - name: Fix CI failures with Claude
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          prompt: |
            /fix-ci
            Failed CI Run: ${{ fromJSON(steps.failure_details.outputs.result).runUrl }}
            Failed Jobs: ${{ join(fromJSON(steps.failure_details.outputs.result).failedJobs, ', ') }}
            PR Number: ${{ github.event.workflow_run.pull_requests[0].number }}
            Branch Name: ${{ steps.branch.outputs.branch_name }}
            Base Branch: ${{ github.event.workflow_run.head_branch }}
            Repository: ${{ github.repository }}
            Error logs:
            ${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

.github/workflows/claude-dependabot.yml (new file)

@@ -0,0 +1,379 @@
# Claude Dependabot PR Review Workflow
#
# This workflow automatically runs Claude analysis on Dependabot PRs to:
# - Identify dependency changes and their versions
# - Look up changelogs for updated packages
# - Assess breaking changes and security impacts
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured

name: Claude Dependabot PR Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  dependabot-review:
    # Only run on Dependabot PRs
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    timeout-minutes: 30
    permissions:
      contents: write
      pull-requests: read
      issues: read
      id-token: write
      actions: read # Required for CI access
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      # Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11" # Use standard version matching CI

      - name: Set up Python dependency cache
        uses: actions/cache@v4
        with:
          path: ~/.cache/pypoetry
          key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

      - name: Install Poetry
        run: |
          # Extract Poetry version from backend/poetry.lock (matches CI)
          cd autogpt_platform/backend
          HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
          echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
          # Install Poetry
          curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
          # Add Poetry to PATH
          echo "$HOME/.local/bin" >> $GITHUB_PATH

      - name: Check poetry.lock
        working-directory: autogpt_platform/backend
        run: |
          poetry lock
          if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
            echo "Warning: poetry.lock not up to date, but continuing for setup"
            git checkout poetry.lock # Reset for clean setup
          fi

      - name: Install Python dependencies
        working-directory: autogpt_platform/backend
        run: poetry install

      - name: Generate Prisma Client
        working-directory: autogpt_platform/backend
        run: poetry run prisma generate

      # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "21"

      - name: Enable corepack
        run: corepack enable

      - name: Set pnpm store directory
        run: |
          pnpm config set store-dir ~/.pnpm-store
          echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV

      - name: Cache frontend dependencies
        uses: actions/cache@v4
        with:
          path: ~/.pnpm-store
          key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
            ${{ runner.os }}-pnpm-

      - name: Install JavaScript dependencies
        working-directory: autogpt_platform/frontend
        run: pnpm install --frozen-lockfile

      # Install Playwright browsers for frontend testing
      # NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
      # - name: Install Playwright browsers
      #   working-directory: autogpt_platform/frontend
      #   run: pnpm playwright install --with-deps chromium

      # Docker setup for development environment
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Copy default environment files
        working-directory: autogpt_platform
        run: |
          # Copy default environment files for development
          cp .env.default .env
          cp backend/.env.default backend/.env
          cp frontend/.env.default frontend/.env

      # Phase 1: Cache and load Docker images for faster setup
      - name: Set up Docker image cache
        id: docker-cache
        uses: actions/cache@v4
        with:
          path: ~/docker-cache
          # Use a versioned key for cache invalidation when image list changes
          key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
          restore-keys: |
            docker-images-v2-${{ runner.os }}-
            docker-images-v1-${{ runner.os }}-

      - name: Load or pull Docker images
        working-directory: autogpt_platform
        run: |
          mkdir -p ~/docker-cache
          # Define image list for easy maintenance
          IMAGES=(
            "redis:latest"
            "rabbitmq:management"
            "clamav/clamav-debian:latest"
            "busybox:latest"
            "kong:2.8.1"
            "supabase/gotrue:v2.170.0"
            "supabase/postgres:15.8.1.049"
            "supabase/postgres-meta:v0.86.1"
            "supabase/studio:20250224-d10db0f"
          )
          # Check if any cached tar files exist (more reliable than cache-hit)
          if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
            echo "Docker cache found, loading images in parallel..."
            for image in "${IMAGES[@]}"; do
              # Convert image name to filename (replace : and / with -)
              filename=$(echo "$image" | tr ':/' '--')
              if [ -f ~/docker-cache/${filename}.tar ]; then
                echo "Loading $image..."
                docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
              fi
            done
            wait
            echo "All cached images loaded"
          else
            echo "No Docker cache found, pulling images in parallel..."
            # Pull all images in parallel
            for image in "${IMAGES[@]}"; do
              docker pull "$image" &
            done
            wait
            # Only save cache on main branches (not PRs) to avoid cache pollution
            if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
              echo "Saving Docker images to cache in parallel..."
              for image in "${IMAGES[@]}"; do
                # Convert image name to filename (replace : and / with -)
                filename=$(echo "$image" | tr ':/' '--')
                echo "Saving $image..."
                docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
              done
              wait
              echo "Docker image cache saved"
            else
              echo "Skipping cache save for PR/feature branch"
            fi
          fi
          echo "Docker images ready for use"

      # Phase 2: Build migrate service with GitHub Actions cache
      - name: Build migrate Docker image with cache
        working-directory: autogpt_platform
        run: |
          # Build the migrate image with buildx for GHA caching
          docker buildx build \
            --cache-from type=gha \
            --cache-to type=gha,mode=max \
            --target migrate \
            --tag autogpt_platform-migrate:latest \
            --load \
            -f backend/Dockerfile \
            ..

      # Start services using pre-built images
      - name: Start Docker services for development
        working-directory: autogpt_platform
        run: |
          # Start essential services (migrate image already built with correct tag)
          docker compose --profile local up deps --no-build --detach
          echo "Waiting for services to be ready..."
          # Wait for database to be ready
          echo "Checking database readiness..."
          timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
            echo " Waiting for database..."
            sleep 2
          done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
          # Check migrate service status
          echo "Checking migration status..."
          docker compose ps migrate || echo " Migrate service not visible in ps output"
          # Wait for migrate service to complete
          echo "Waiting for migrations to complete..."
          timeout 30 bash -c '
            ATTEMPTS=0
            while [ $ATTEMPTS -lt 15 ]; do
              ATTEMPTS=$((ATTEMPTS + 1))
              # Check using docker directly (more reliable than docker compose ps)
              CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
              if [ -z "$CONTAINER_STATUS" ]; then
                echo " Attempt $ATTEMPTS: Migrate container not found yet..."
              elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
                echo "✅ Migrations completed successfully"
                docker compose logs migrate --tail=5 2>/dev/null || true
                exit 0
              elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
                EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
                echo "❌ Migrations failed with exit code: $EXIT_CODE"
                echo "Migration logs:"
                docker compose logs migrate --tail=20 2>/dev/null || true
                exit 1
              elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
                echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
              else
                echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
              fi
              sleep 2
            done
            echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
            echo "Final container check:"
            docker ps -a --filter "label=com.docker.compose.service=migrate" || true
            echo "Migration logs (if available):"
            docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
          ' || echo "⚠️ Migration check completed with warnings, continuing..."
          # Brief wait for other services to stabilize
          echo "Waiting 5 seconds for other services to stabilize..."
          sleep 5

      # Verify installations and provide environment info
      - name: Verify setup and show environment info
        run: |
          echo "=== Python Setup ==="
          python --version
          poetry --version
          echo "=== Node.js Setup ==="
          node --version
          pnpm --version
          echo "=== Additional Tools ==="
          docker --version
          docker compose version
          gh --version || true
          echo "=== Services Status ==="
          cd autogpt_platform
          docker compose ps || true
          echo "=== Backend Dependencies ==="
          cd backend
          poetry show | head -10 || true
          echo "=== Frontend Dependencies ==="
          cd ../frontend
          pnpm list --depth=0 | head -10 || true
          echo "=== Environment Files ==="
          ls -la ../.env* || true
          ls -la .env* || true
          ls -la ../backend/.env* || true
          echo "✅ AutoGPT Platform development environment setup complete!"
          echo "🚀 Ready for development with Docker services running"
          echo "📝 Backend server: poetry run serve (port 8000)"
          echo "🌐 Frontend server: pnpm dev (port 3000)"

      - name: Run Claude Dependabot Analysis
        id: claude_review
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
          prompt: |
            You are Claude, an AI assistant specialized in reviewing Dependabot dependency update PRs.

            Your primary tasks are:
            1. **Analyze the dependency changes** in this Dependabot PR
            2. **Look up changelogs** for all updated dependencies to understand what changed
            3. **Identify breaking changes** and assess potential impact on the AutoGPT codebase
            4. **Provide actionable recommendations** for the development team

            ## Analysis Process:

            1. **Identify Changed Dependencies**:
               - Use git diff to see what dependencies were updated
               - Parse package.json, poetry.lock, requirements files, etc.
               - List all package versions: old → new

            2. **Changelog Research**:
               - For each updated dependency, look up its changelog/release notes
               - Use WebFetch to access GitHub releases, NPM package pages, PyPI project pages. The pr should also have some details
               - Focus on versions between the old and new versions
               - Identify: breaking changes, deprecations, security fixes, new features

            3. **Breaking Change Assessment**:
               - Categorize changes: BREAKING, MAJOR, MINOR, PATCH, SECURITY
               - Assess impact on AutoGPT's usage patterns
               - Check if AutoGPT uses affected APIs/features
               - Look for migration guides or upgrade instructions

            4. **Codebase Impact Analysis**:
               - Search the AutoGPT codebase for usage of changed APIs
               - Identify files that might be affected by breaking changes
               - Check test files for deprecated usage patterns
               - Look for configuration changes needed

            ## Output Format:

            Provide a comprehensive review comment with:

            ### 🔍 Dependency Analysis Summary
            - List of updated packages with version changes
            - Overall risk assessment (LOW/MEDIUM/HIGH)

            ### 📋 Detailed Changelog Review
            For each updated dependency:
            - **Package**: name (old_version → new_version)
            - **Changes**: Summary of key changes
            - **Breaking Changes**: List any breaking changes
            - **Security Fixes**: Note security improvements
            - **Migration Notes**: Any upgrade steps needed

            ### ⚠️ Impact Assessment
            - **Breaking Changes Found**: Yes/No with details
            - **Affected Files**: List AutoGPT files that may need updates
            - **Test Impact**: Any tests that may need updating
            - **Configuration Changes**: Required config updates

            ### 🛠️ Recommendations
            - **Action Required**: What the team should do
            - **Testing Focus**: Areas to test thoroughly
            - **Follow-up Tasks**: Any additional work needed
            - **Merge Recommendation**: APPROVE/REVIEW_NEEDED/HOLD

            ### 📚 Useful Links
            - Links to relevant changelogs, migration guides, documentation

            Be thorough but concise. Focus on actionable insights that help the development team make informed decisions about the dependency updates.


@@ -30,18 +30,296 @@ jobs:
         github.event.issue.author_association == 'COLLABORATOR'
       )
     runs-on: ubuntu-latest
+    timeout-minutes: 45
     permissions:
-      contents: read
+      contents: write
       pull-requests: read
       issues: read
       id-token: write
+      actions: read # Required for CI access
     steps:
-      - name: Checkout repository
+      - name: Checkout code
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 1
+      # Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.11" # Use standard version matching CI
+      - name: Set up Python dependency cache
+        uses: actions/cache@v4
+        with:
+          path: ~/.cache/pypoetry
+          key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
+      - name: Install Poetry
+        run: |
+          # Extract Poetry version from backend/poetry.lock (matches CI)
+          cd autogpt_platform/backend
+          HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
+          echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
+          # Install Poetry
+          curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
+          # Add Poetry to PATH
+          echo "$HOME/.local/bin" >> $GITHUB_PATH
+      - name: Check poetry.lock
+        working-directory: autogpt_platform/backend
+        run: |
+          poetry lock
+          if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
+            echo "Warning: poetry.lock not up to date, but continuing for setup"
+            git checkout poetry.lock # Reset for clean setup
+          fi
+      - name: Install Python dependencies
+        working-directory: autogpt_platform/backend
+        run: poetry install
+      - name: Generate Prisma Client
+        working-directory: autogpt_platform/backend
+        run: poetry run prisma generate
+      # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
+      - name: Set up Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: "21"
+      - name: Enable corepack
+        run: corepack enable
+      - name: Set pnpm store directory
+        run: |
+          pnpm config set store-dir ~/.pnpm-store
+          echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
+      - name: Cache frontend dependencies
+        uses: actions/cache@v4
+        with:
+          path: ~/.pnpm-store
+          key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
+          restore-keys: |
+            ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
+            ${{ runner.os }}-pnpm-
+      - name: Install JavaScript dependencies
+        working-directory: autogpt_platform/frontend
+        run: pnpm install --frozen-lockfile
+      # Install Playwright browsers for frontend testing
+      # NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
+      # - name: Install Playwright browsers
+      #   working-directory: autogpt_platform/frontend
+      #   run: pnpm playwright install --with-deps chromium
+      # Docker setup for development environment
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      - name: Copy default environment files
+        working-directory: autogpt_platform
+        run: |
+          # Copy default environment files for development
+          cp .env.default .env
+          cp backend/.env.default backend/.env
+          cp frontend/.env.default frontend/.env
+      # Phase 1: Cache and load Docker images for faster setup
+      - name: Set up Docker image cache
+        id: docker-cache
+        uses: actions/cache@v4
+        with:
+          path: ~/docker-cache
+          # Use a versioned key for cache invalidation when image list changes
+          key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
+          restore-keys: |
+            docker-images-v2-${{ runner.os }}-
+            docker-images-v1-${{ runner.os }}-
+      - name: Load or pull Docker images
+        working-directory: autogpt_platform
+        run: |
+          mkdir -p ~/docker-cache
+          # Define image list for easy maintenance
+          IMAGES=(
+            "redis:latest"
+            "rabbitmq:management"
+            "clamav/clamav-debian:latest"
+            "busybox:latest"
+            "kong:2.8.1"
+            "supabase/gotrue:v2.170.0"
+            "supabase/postgres:15.8.1.049"
+            "supabase/postgres-meta:v0.86.1"
+            "supabase/studio:20250224-d10db0f"
+          )
+          # Check if any cached tar files exist (more reliable than cache-hit)
+          if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
+            echo "Docker cache found, loading images in parallel..."
+            for image in "${IMAGES[@]}"; do
+              # Convert image name to filename (replace : and / with -)
+              filename=$(echo "$image" | tr ':/' '--')
+              if [ -f ~/docker-cache/${filename}.tar ]; then
+                echo "Loading $image..."
+                docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
+              fi
+            done
+            wait
+            echo "All cached images loaded"
+          else
+            echo "No Docker cache found, pulling images in parallel..."
+            # Pull all images in parallel
+            for image in "${IMAGES[@]}"; do
+              docker pull "$image" &
+            done
+            wait
+            # Only save cache on main branches (not PRs) to avoid cache pollution
+            if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
+              echo "Saving Docker images to cache in parallel..."
+              for image in "${IMAGES[@]}"; do
+                # Convert image name to filename (replace : and / with -)
+                filename=$(echo "$image" | tr ':/' '--')
+                echo "Saving $image..."
+                docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
+              done
+              wait
+              echo "Docker image cache saved"
+            else
+              echo "Skipping cache save for PR/feature branch"
+            fi
+          fi
+          echo "Docker images ready for use"
+      # Phase 2: Build migrate service with GitHub Actions cache
+      - name: Build migrate Docker image with cache
+        working-directory: autogpt_platform
+        run: |
+          # Build the migrate image with buildx for GHA caching
+          docker buildx build \
+            --cache-from type=gha \
+            --cache-to type=gha,mode=max \
+            --target migrate \
+            --tag autogpt_platform-migrate:latest \
+            --load \
+            -f backend/Dockerfile \
+            ..
+      # Start services using pre-built images
+      - name: Start Docker services for development
+        working-directory: autogpt_platform
+        run: |
+          # Start essential services (migrate image already built with correct tag)
+          docker compose --profile local up deps --no-build --detach
+          echo "Waiting for services to be ready..."
+          # Wait for database to be ready
+          echo "Checking database readiness..."
+          timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
+            echo " Waiting for database..."
+            sleep 2
+          done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
+          # Check migrate service status
+          echo "Checking migration status..."
+          docker compose ps migrate || echo " Migrate service not visible in ps output"
+          # Wait for migrate service to complete
+          echo "Waiting for migrations to complete..."
+          timeout 30 bash -c '
+            ATTEMPTS=0
+            while [ $ATTEMPTS -lt 15 ]; do
+              ATTEMPTS=$((ATTEMPTS + 1))
+              # Check using docker directly (more reliable than docker compose ps)
+              CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
+              if [ -z "$CONTAINER_STATUS" ]; then
+                echo " Attempt $ATTEMPTS: Migrate container not found yet..."
+              elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
+                echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus
additional_permissions: |
actions: read

View File
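The image-caching logic above maps each image reference to a tar filename with `tr ':/' '--'`. A minimal Python sketch of that mapping (illustrative only; the workflow itself does this in bash):

```python
# Sketch of the workflow's `tr ':/' '--'` mapping: each ':' and '/' is
# replaced with '-' so an image reference becomes a safe tar filename.
def cache_filename(image: str) -> str:
    return image.translate(str.maketrans(":/", "--")) + ".tar"

assert cache_filename("redis:latest") == "redis-latest.tar"
assert cache_filename("supabase/postgres:15.8.1.049") == "supabase-postgres-15.8.1.049.tar"
```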

@@ -0,0 +1,302 @@
name: "Copilot Setup Steps"
# Automatically run the setup steps when they are changed to allow for easy validation, and
# allow manual testing through the repository's "Actions" tab
on:
workflow_dispatch:
push:
paths:
- .github/workflows/copilot-setup-steps.yml
pull_request:
paths:
- .github/workflows/copilot-setup-steps.yml
jobs:
# The job MUST be called `copilot-setup-steps` or it will not be picked up by Copilot.
copilot-setup-steps:
runs-on: ubuntu-latest
timeout-minutes: 45
# Set the permissions to the lowest permissions possible needed for your steps.
# Copilot will be given its own token for its operations.
permissions:
# If you want to clone the repository as part of your setup steps, for example to install dependencies, you'll need the `contents: read` permission. If you don't clone the repository in your setup steps, Copilot will do this for you automatically after the steps complete.
contents: read
# You can define any steps you want, and they will run before the agent starts.
# If you do not check out your code, Copilot will do this for you.
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"

View File
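The setup above pins Poetry to the version recorded in `backend/poetry.lock` via the `get_package_version_from_lockfile.py` helper, which isn't shown in this diff. A hedged sketch of one way such a script could work for the `poetry` case, assuming it parses the `@generated by Poetry X.Y.Z` header comment Poetry writes into the lockfile (the real script also takes a package-name argument):

```python
# Hypothetical reimplementation; the actual helper script is not shown here.
import re
import sys

def poetry_version(lockfile: str = "poetry.lock") -> str | None:
    """Extract X.Y.Z from '# ... @generated by Poetry X.Y.Z ...'."""
    with open(lockfile, encoding="utf-8") as f:
        for line in f:
            match = re.search(r"generated by Poetry (\d+(?:\.\d+)+)", line)
            if match:
                return match.group(1)
    return None

if __name__ == "__main__":
    version = poetry_version()
    print(version or "")
    sys.exit(0 if version else 1)
```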

@@ -5,6 +5,13 @@ on:
branches: [ dev ]
paths:
- 'autogpt_platform/**'
workflow_dispatch:
inputs:
git_ref:
description: 'Git ref (branch/tag) of AutoGPT to deploy'
required: true
default: 'master'
type: string
permissions:
contents: 'read'
@@ -19,6 +26,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.inputs.git_ref || github.ref_name }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -48,4 +57,4 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_dev
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: '{"ref": "${{ github.event.inputs.git_ref || github.ref }}", "repository": "${{ github.repository }}"}'

View File

@@ -3,6 +3,7 @@ name: AutoGPT Platform - Deploy Prod Environment
on:
release:
types: [published]
workflow_dispatch:
permissions:
contents: 'read'
@@ -17,6 +18,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.ref_name || 'master' }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -36,7 +39,7 @@ jobs:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
DIRECT_URL: ${{ secrets.BACKEND_DATABASE_URL }}
trigger:
needs: migrate
runs-on: ubuntu-latest
@@ -47,4 +50,5 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_prod
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: |
{"ref": "${{ github.ref_name || 'master' }}", "repository": "${{ github.repository }}"}

View File

@@ -32,14 +32,12 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.11"]
python-version: ["3.11", "3.12", "3.13"]
runs-on: ubuntu-latest
services:
redis:
image: bitnami/redis:6.2
env:
REDIS_PASSWORD: testpassword
image: redis:latest
ports:
- 6379:6379
rabbitmq:
@@ -201,10 +199,9 @@ jobs:
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
SUPABASE_JWT_SECRET: ${{ steps.supabase.outputs.JWT_SECRET }}
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!
env:

View File

@@ -160,7 +160,7 @@ jobs:
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml up -d
NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache

View File

@@ -61,24 +61,27 @@ poetry run pytest path/to/test.py --snapshot-update
```bash
# Install dependencies
cd frontend && npm install
cd frontend && pnpm i
# Start development server
npm run dev
pnpm dev
# Run E2E tests
npm run test
pnpm test
# Run Storybook for component development
npm run storybook
pnpm storybook
# Build production
npm run build
pnpm build
# Type checking
npm run types
pnpm types
```
We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components.
## Architecture Overview
### Backend Architecture
@@ -149,14 +152,23 @@ Key models (defined in `/backend/schema.prisma`):
**Adding a new block:**
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
- Testing and validation
- File organization
Note: when making many new blocks analyze the interfaces for each of these blcoks and picture if they would go well together in a graph based editor or would they struggle to connect productively?
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
4. Define input/output schemas using `BlockSchema`
5. Implement async `run` method
6. Generate unique block ID using `uuid.uuid4()`
7. Test with `poetry run pytest backend/blocks/test/test_block.py`
Note: when making many new blocks analyze the interfaces for each of these blocks and picture if they would go well together in a graph based editor or would they struggle to connect productively?
ex: do the inputs and outputs tie well together?
**Modifying the API:**

View File
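For orientation, a hedged skeleton of a minimal block following the quick steps above (import paths and base-class details are assumed; provider configuration via `ProviderBuilder` is omitted, and the Block SDK Guide remains the authoritative reference):

```python
# Assumed import path for the block SDK primitives.
from backend.data.block import Block, BlockOutput, BlockSchema

class EchoBlock(Block):
    """Toy block that passes its input text straight through."""

    class Input(BlockSchema):
        text: str

    class Output(BlockSchema):
        text: str

    def __init__(self):
        super().__init__(
            # Generate once with uuid.uuid4() and hard-code the result.
            id="00000000-0000-4000-8000-000000000000",
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Blocks emit (output_name, value) pairs.
        yield "text", input_data.text
```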

@@ -1,35 +0,0 @@
import hashlib
import secrets
from typing import NamedTuple
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
raw: str
prefix: str
postfix: str
hash: str
class APIKeyManager:
PREFIX: str = "agpt_"
PREFIX_LENGTH: int = 8
POSTFIX_LENGTH: int = 8
def generate_api_key(self) -> APIKeyContainer:
"""Generate a new API key with all its parts."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
return APIKeyContainer(
raw=raw_key,
prefix=raw_key[: self.PREFIX_LENGTH],
postfix=raw_key[-self.POSTFIX_LENGTH :],
hash=hashlib.sha256(raw_key.encode()).hexdigest(),
)
def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)

View File

@@ -0,0 +1,78 @@
import hashlib
import secrets
from typing import NamedTuple
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
key: str
head: str
tail: str
hash: str
salt: str
class APIKeySmith:
PREFIX: str = "agpt_"
HEAD_LENGTH: int = 8
TAIL_LENGTH: int = 8
def generate_key(self) -> APIKeyContainer:
"""Generate a new API key with secure hashing."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
hash, salt = self.hash_key(raw_key)
return APIKeyContainer(
key=raw_key,
head=raw_key[: self.HEAD_LENGTH],
tail=raw_key[-self.TAIL_LENGTH :],
hash=hash,
salt=salt,
)
def verify_key(
self, provided_key: str, known_hash: str, known_salt: str | None = None
) -> bool:
"""
Verify an API key against a known hash (+ salt).
Supports verifying both legacy SHA256 and secure Scrypt hashes.
"""
if not provided_key.startswith(self.PREFIX):
return False
# Handle legacy SHA256 hashes (migration support)
if known_salt is None:
legacy_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(legacy_hash, known_hash)
try:
salt_bytes = bytes.fromhex(known_salt)
provided_hash = self._hash_key_with_salt(provided_key, salt_bytes)
return secrets.compare_digest(provided_hash, known_hash)
except (ValueError, TypeError):
return False
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()
def _generate_salt(self) -> bytes:
"""Generate a random salt for hashing."""
return secrets.token_bytes(32)
def _hash_key_with_salt(self, raw_key: str, salt: bytes) -> str:
"""Hash API key using Scrypt with salt."""
kdf = Scrypt(
length=32,
salt=salt,
n=2**14, # CPU/memory cost parameter
r=8, # Block size parameter
p=1, # Parallelization parameter
)
key_hash = kdf.derive(raw_key.encode())
return key_hash.hex()

View File
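A short usage sketch for `APIKeySmith`, derived from the code above (the legacy key value is illustrative): generate and verify a new key, verify a legacy SHA256 hash, then upgrade it to Scrypt:

```python
import hashlib

from autogpt_libs.api_key.keysmith import APIKeySmith

keysmith = APIKeySmith()

# New key: show `issued.key` to the user once; persist head/tail for
# display plus hash and salt for verification.
issued = keysmith.generate_key()
assert keysmith.verify_key(issued.key, issued.hash, issued.salt)

# Legacy path: a stored SHA256 hash with no salt (salt=None).
raw = f"{keysmith.PREFIX}some-legacy-key"
legacy_hash = hashlib.sha256(raw.encode()).hexdigest()
assert keysmith.verify_key(raw, legacy_hash)

# Opportunistic upgrade: re-hash with Scrypt and store hash + salt.
new_hash, new_salt = keysmith.hash_key(raw)
assert keysmith.verify_key(raw, new_hash, new_salt)
```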

@@ -0,0 +1,79 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
def test_generate_api_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
assert key.key.startswith(keysmith.PREFIX)
assert key.head == key.key[: keysmith.HEAD_LENGTH]
assert key.tail == key.key[-keysmith.TAIL_LENGTH :]
assert len(key.hash) == 64 # 32 bytes hex encoded
assert len(key.salt) == 64 # 32 bytes hex encoded
def test_verify_new_secure_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test correct key validates
assert keysmith.verify_key(key.key, key.hash, key.salt) is True
# Test wrong key fails
wrong_key = f"{keysmith.PREFIX}wrongkey123"
assert keysmith.verify_key(wrong_key, key.hash, key.salt) is False
def test_verify_legacy_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}legacykey123"
legacy_hash = hashlib.sha256(legacy_key.encode()).hexdigest()
# Test legacy key validates without salt
assert keysmith.verify_key(legacy_key, legacy_hash) is True
# Test wrong legacy key fails
wrong_key = f"{keysmith.PREFIX}wronglegacy"
assert keysmith.verify_key(wrong_key, legacy_hash) is False
def test_rehash_existing_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}migratekey123"
# Migrate the legacy key
new_hash, new_salt = keysmith.hash_key(legacy_key)
# Verify migrated key works
assert keysmith.verify_key(legacy_key, new_hash, new_salt) is True
# Verify different key fails with migrated hash
wrong_key = f"{keysmith.PREFIX}wrongkey"
assert keysmith.verify_key(wrong_key, new_hash, new_salt) is False
def test_invalid_key_prefix():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test key without proper prefix fails
invalid_key = "invalid_prefix_key"
assert keysmith.verify_key(invalid_key, key.hash, key.salt) is False
def test_secure_hash_requires_salt():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Secure hash without salt should fail
assert keysmith.verify_key(key.key, key.hash) is False
def test_invalid_salt_format():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Invalid salt format should fail gracefully
assert keysmith.verify_key(key.key, key.hash, "invalid_hex") is False

View File

@@ -1,13 +1,13 @@
from .depends import requires_admin_user, requires_user
from .jwt_utils import parse_jwt_token
from .middleware import APIKeyValidator, auth_middleware
from .config import verify_settings
from .dependencies import get_user_id, requires_admin_user, requires_user
from .helpers import add_auth_responses_to_openapi
from .models import User
__all__ = [
"parse_jwt_token",
"requires_user",
"verify_settings",
"get_user_id",
"requires_admin_user",
"APIKeyValidator",
"auth_middleware",
"requires_user",
"add_auth_responses_to_openapi",
"User",
]

View File

@@ -1,11 +1,90 @@
import logging
import os
from jwt.algorithms import get_default_algorithms, has_crypto
logger = logging.getLogger(__name__)
class AuthConfigError(ValueError):
"""Raised when authentication configuration is invalid."""
pass
ALGO_RECOMMENDATION = (
"We highly recommend using an asymmetric algorithm such as ES256, "
"because when leaked, a shared secret would allow anyone to "
"forge valid tokens and impersonate users. "
"More info: https://supabase.com/docs/guides/auth/signing-keys#choosing-the-right-signing-algorithm" # noqa
)
class Settings:
def __init__(self):
self.JWT_SECRET_KEY: str = os.getenv("SUPABASE_JWT_SECRET", "")
self.ENABLE_AUTH: bool = os.getenv("ENABLE_AUTH", "false").lower() == "true"
self.JWT_ALGORITHM: str = "HS256"
self.JWT_VERIFY_KEY: str = os.getenv(
"JWT_VERIFY_KEY", os.getenv("SUPABASE_JWT_SECRET", "")
).strip()
self.JWT_ALGORITHM: str = os.getenv("JWT_SIGN_ALGORITHM", "HS256").strip()
self.validate()
def validate(self):
if not self.JWT_VERIFY_KEY:
raise AuthConfigError(
"JWT_VERIFY_KEY must be set. "
"An empty JWT secret would allow anyone to forge valid tokens."
)
if len(self.JWT_VERIFY_KEY) < 32:
logger.warning(
"⚠️ JWT_VERIFY_KEY appears weak (less than 32 characters). "
"Consider using a longer, cryptographically secure secret."
)
supported_algorithms = get_default_algorithms().keys()
if not has_crypto:
logger.warning(
"⚠️ Asymmetric JWT verification is not available "
"because the 'cryptography' package is not installed. "
+ ALGO_RECOMMENDATION
)
if (
self.JWT_ALGORITHM not in supported_algorithms
or self.JWT_ALGORITHM == "none"
):
raise AuthConfigError(
f"Invalid JWT_SIGN_ALGORITHM: '{self.JWT_ALGORITHM}'. "
"Supported algorithms are listed on "
"https://pyjwt.readthedocs.io/en/stable/algorithms.html"
)
if self.JWT_ALGORITHM.startswith("HS"):
logger.warning(
f"⚠️ JWT_SIGN_ALGORITHM is set to '{self.JWT_ALGORITHM}', "
"a symmetric shared-key signature algorithm. " + ALGO_RECOMMENDATION
)
settings = Settings()
_settings: Settings = None # type: ignore
def get_settings() -> Settings:
global _settings
if not _settings:
_settings = Settings()
return _settings
def verify_settings() -> None:
global _settings
if not _settings:
_settings = Settings() # calls validation indirectly
return
_settings.validate()

View File
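A minimal sketch of how the lazy accessors behave (environment values are assumed examples): the first call constructs and validates `Settings`; later calls return the cached instance:

```python
import os

# Assumed example config: a 32+ char key avoids the weak-key warning;
# ES256 avoids the symmetric-algorithm warning (requires the
# 'cryptography' package to be installed).
os.environ["JWT_VERIFY_KEY"] = "x" * 32
os.environ["JWT_SIGN_ALGORITHM"] = "ES256"

from autogpt_libs.auth.config import get_settings, verify_settings

verify_settings()           # first use constructs Settings, which validates
settings = get_settings()   # later calls return the cached singleton
assert settings is get_settings()
assert settings.JWT_ALGORITHM == "ES256"
```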

@@ -0,0 +1,306 @@
"""
Comprehensive tests for auth configuration to ensure 100% line and branch coverage.
These tests verify critical security checks preventing JWT token forgery.
"""
import logging
import os
import pytest
from pytest_mock import MockerFixture
from autogpt_libs.auth.config import AuthConfigError, Settings
def test_environment_variable_precedence(mocker: MockerFixture):
"""Test that environment variables take precedence over defaults."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_environment_variable_backwards_compatible(mocker: MockerFixture):
"""Test that SUPABASE_JWT_SECRET is read if JWT_VERIFY_KEY is not set."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"SUPABASE_JWT_SECRET": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_auth_config_error_inheritance():
"""Test that AuthConfigError is properly defined as an Exception."""
assert issubclass(AuthConfigError, Exception)
error = AuthConfigError("test message")
assert str(error) == "test message"
def test_settings_static_after_creation(mocker: MockerFixture):
"""Test that settings maintain their values after creation."""
secret = "immutable-secret-key-with-proper-length-12345"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
original_secret = settings.JWT_VERIFY_KEY
# Changing environment after creation shouldn't affect settings
os.environ["JWT_VERIFY_KEY"] = "different-secret"
assert settings.JWT_VERIFY_KEY == original_secret
def test_settings_load_with_valid_secret(mocker: MockerFixture):
"""Test auth enabled with a valid JWT secret."""
valid_secret = "a" * 32 # 32 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": valid_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == valid_secret
def test_settings_load_with_strong_secret(mocker: MockerFixture):
"""Test auth enabled with a cryptographically strong secret."""
strong_secret = "super-secret-jwt-token-with-at-least-32-characters-long"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": strong_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == strong_secret
assert len(settings.JWT_VERIFY_KEY) >= 32
def test_secret_empty_raises_error(mocker: MockerFixture):
"""Test that auth enabled with empty secret raises AuthConfigError."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": ""}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
def test_secret_missing_raises_error(mocker: MockerFixture):
"""Test that auth enabled without secret env var raises AuthConfigError."""
mocker.patch.dict(os.environ, {}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
@pytest.mark.parametrize("secret", [" ", " ", "\t", "\n", " \t\n "])
def test_secret_only_whitespace_raises_error(mocker: MockerFixture, secret: str):
"""Test that auth enabled with whitespace-only secret raises error."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
with pytest.raises(ValueError):
Settings()
def test_secret_weak_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that weak JWT secret triggers warning log."""
weak_secret = "short" # Less than 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": weak_secret}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert settings.JWT_VERIFY_KEY == weak_secret
assert "key appears weak" in caplog.text.lower()
assert "less than 32 characters" in caplog.text
def test_secret_31_char_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 31-character secret triggers warning (boundary test)."""
secret_31 = "a" * 31 # Exactly 31 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_31}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 31
assert "key appears weak" in caplog.text.lower()
def test_secret_32_char_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 32-character secret does not trigger warning (boundary test)."""
secret_32 = "a" * 32 # Exactly 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_32}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 32
assert "JWT secret appears weak" not in caplog.text
def test_secret_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT secret whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": f" {secret} "}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_secret_with_special_characters(mocker: MockerFixture):
"""Test JWT secret with special characters."""
special_secret = "!@#$%^&*()_+-=[]{}|;:,.<>?`~" + "a" * 10 # 40 chars total
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": special_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == special_secret
def test_secret_with_unicode(mocker: MockerFixture):
"""Test JWT secret with unicode characters."""
unicode_secret = "秘密🔐キー" + "a" * 25 # Ensure >32 bytes
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": unicode_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == unicode_secret
def test_secret_very_long(mocker: MockerFixture):
"""Test JWT secret with excessive length."""
long_secret = "a" * 1000 # 1000 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": long_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == long_secret
assert len(settings.JWT_VERIFY_KEY) == 1000
def test_secret_with_newline(mocker: MockerFixture):
"""Test JWT secret containing newlines."""
multiline_secret = "secret\nwith\nnewlines" + "a" * 20
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": multiline_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == multiline_secret
def test_secret_base64_encoded(mocker: MockerFixture):
"""Test JWT secret that looks like base64."""
base64_secret = "dGhpc19pc19hX3NlY3JldF9rZXlfd2l0aF9wcm9wZXJfbGVuZ3Ro"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": base64_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == base64_secret
def test_secret_numeric_only(mocker: MockerFixture):
"""Test JWT secret with only numbers."""
numeric_secret = "1234567890" * 4 # 40 character numeric secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": numeric_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == numeric_secret
def test_algorithm_default_hs256(mocker: MockerFixture):
"""Test that JWT algorithm defaults to HS256."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": "a" * 32}, clear=True)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_algorithm_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT algorithm whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": " HS256 "},
clear=True,
)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_no_crypto_warning(mocker: MockerFixture, caplog: pytest.LogCaptureFixture):
"""Test warning when crypto package is not available."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
# Mock has_crypto to return False
mocker.patch("autogpt_libs.auth.config.has_crypto", False)
with caplog.at_level(logging.WARNING):
Settings()
assert "Asymmetric JWT verification is not available" in caplog.text
assert "cryptography" in caplog.text
def test_algorithm_invalid_raises_error(mocker: MockerFixture):
"""Test that invalid JWT algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "INVALID_ALG"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
assert "INVALID_ALG" in str(exc_info.value)
def test_algorithm_none_raises_error(mocker: MockerFixture):
"""Test that 'none' algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "none"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
@pytest.mark.parametrize("algorithm", ["HS256", "HS384", "HS512"])
def test_algorithm_symmetric_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test warning for symmetric algorithms (HS256, HS384, HS512)."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert algorithm in caplog.text
assert "symmetric shared-key signature algorithm" in caplog.text
assert settings.JWT_ALGORITHM == algorithm
@pytest.mark.parametrize(
"algorithm",
["ES256", "ES384", "ES512", "RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
)
def test_algorithm_asymmetric_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test that asymmetric algorithms do not trigger warning."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
# Should not contain the symmetric algorithm warning
assert "symmetric shared-key signature algorithm" not in caplog.text
assert settings.JWT_ALGORITHM == algorithm

View File

@@ -0,0 +1,45 @@
"""
FastAPI dependency functions for JWT-based authentication and authorization.
These are the high-level dependency functions used in route definitions.
"""
import fastapi
from .jwt_utils import get_jwt_payload, verify_user
from .models import User
def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid authenticated user.
Raises:
HTTPException: 401 for authentication failures
"""
return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid admin user.
Raises:
HTTPException: 401 for authentication failures, 403 for insufficient permissions
"""
return verify_user(jwt_payload, admin_only=True)
def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.
Raises:
HTTPException: 401 for authentication failures or missing user ID
"""
user_id = jwt_payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id

View File
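For context, a minimal sketch of wiring these dependencies into routes (the app and paths are illustrative; the dependencies are re-exported from `autogpt_libs.auth` per the `__init__` diff above):

```python
import fastapi

from autogpt_libs.auth import get_user_id, requires_admin_user, requires_user
from autogpt_libs.auth.models import User

app = fastapi.FastAPI()

@app.get("/me")
def read_me(user: User = fastapi.Security(requires_user)):
    # 401 if the bearer token is missing or invalid.
    return {"user_id": user.user_id, "role": user.role}

@app.get("/admin/metrics")
def admin_metrics(admin: User = fastapi.Security(requires_admin_user)):
    # 403 if the token's role is not "admin".
    return {"ok": True}

@app.get("/whoami")
def whoami(user_id: str = fastapi.Security(get_user_id)):
    return {"user_id": user_id}
```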

@@ -0,0 +1,335 @@
"""
Comprehensive integration tests for authentication dependencies.
Tests the full authentication flow from HTTP requests to user validation.
"""
import os
import pytest
from fastapi import FastAPI, HTTPException, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
from autogpt_libs.auth.dependencies import (
get_user_id,
requires_admin_user,
requires_user,
)
from autogpt_libs.auth.models import User
class TestAuthDependencies:
"""Test suite for authentication dependency functions."""
@pytest.fixture
def app(self):
"""Create a test FastAPI application."""
app = FastAPI()
@app.get("/user")
def get_user_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/admin")
def get_admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/user-id")
def get_user_id_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
return app
@pytest.fixture
def client(self, app):
"""Create a test client."""
return TestClient(app)
def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
# Mock get_jwt_payload to return our test payload
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert isinstance(user, User)
assert user.user_id == "user-123"
assert user.role == "user"
def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert user.user_id == "admin-456"
assert user.role == "admin"
def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
"sub": "admin-789",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_admin_user(jwt_payload)
assert user.user_id == "admin-789"
assert user.role == "admin"
def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_admin_user(jwt_payload)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
with pytest.raises(KeyError):
requires_admin_user(jwt_payload)
def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = get_user_id(jwt_payload)
assert user_id == "user-id-xyz"
def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
class TestAuthDependenciesIntegration:
"""Integration tests for auth dependencies with FastAPI."""
acceptable_jwt_secret = "test-secret-with-proper-length-123456"
@pytest.fixture
def create_token(self, mocker: MockerFixture):
"""Helper to create JWT tokens."""
import jwt
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": self.acceptable_jwt_secret},
clear=True,
)
def _create_token(payload, secret=self.acceptable_jwt_secret):
return jwt.encode(payload, secret, algorithm="HS256")
return _create_token
def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Should fail without auth header
response = client.get("/test")
assert response.status_code == 401
def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
client = TestClient(app)
token = create_token(
{"sub": "test-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get("/test", headers={"Authorization": f"Bearer {token}"})
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@app.get("/admin")
def admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Regular user token
user_token = create_token(
{"sub": "regular-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {user_token}"}
)
assert response.status_code == 403
# Admin token
admin_token = create_token(
{"sub": "admin-user", "role": "admin", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {admin_token}"}
)
assert response.status_code == 200
assert response.json()["user_id"] == "admin-user"
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
"sub": "user-123",
"role": "admin",
"email": "test@example.com",
"app_metadata": {"provider": "email", "providers": ["email"]},
"user_metadata": {
"full_name": "Test User",
"avatar_url": "https://example.com/avatar.jpg",
},
"aud": "authenticated",
"iat": 1234567890,
"exp": 9999999999,
}
user = requires_user(complex_payload)
assert user.user_id == "user-123"
assert user.email == "test@example.com"
admin = requires_admin_user(complex_payload)
assert admin.role == "admin"
def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
"sub": "user-😀-123",
"role": "user",
"email": "测试@example.com",
"name": "日本語",
}
user = requires_user(unicode_payload)
assert "😀" in user.user_id
assert user.email == "测试@example.com"
def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
"sub": "user-123",
"role": "user",
"email": None,
"phone": None,
"metadata": None,
}
user = requires_user(null_payload)
assert user.user_id == "user-123"
assert user.email is None
def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
payload2 = {"sub": "user-2", "role": "admin"}
# Simulate concurrent processing
user1 = requires_user(payload1)
user2 = requires_admin_user(payload2)
assert user1.user_id == "user-1"
assert user2.user_id == "user-2"
assert user1.role == "user"
assert user2.role == "admin"
@pytest.mark.parametrize(
"payload,expected_error,admin_only",
[
(None, "Authorization header is missing", False),
({}, "User ID not found", False),
({"sub": ""}, "User ID not found", False),
({"role": "user"}, "User ID not found", False),
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
"""Test that errors propagate correctly through dependencies."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
with pytest.raises(HTTPException) as exc_info:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
# Valid case
user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
assert user.user_id == "user"

View File

@@ -1,46 +0,0 @@
import fastapi
from .config import settings
from .middleware import auth_middleware
from .models import DEFAULT_USER_ID, User
def requires_user(payload: dict = fastapi.Depends(auth_middleware)) -> User:
return verify_user(payload, admin_only=False)
def requires_admin_user(
payload: dict = fastapi.Depends(auth_middleware),
) -> User:
return verify_user(payload, admin_only=True)
def verify_user(payload: dict | None, admin_only: bool) -> User:
if not payload:
if settings.ENABLE_AUTH:
raise fastapi.HTTPException(
status_code=401, detail="Authorization header is missing"
)
# This handles the case when authentication is disabled
payload = {"sub": DEFAULT_USER_ID, "role": "admin"}
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
if admin_only and payload["role"] != "admin":
raise fastapi.HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(payload)
def get_user_id(payload: dict = fastapi.Depends(auth_middleware)) -> str:
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id

View File

@@ -1,68 +0,0 @@
import pytest
from .depends import requires_admin_user, requires_user, verify_user
def test_verify_user_no_payload():
user = verify_user(None, admin_only=False)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_no_user_id():
with pytest.raises(Exception):
verify_user({"role": "admin"}, admin_only=False)
def test_verify_user_not_admin():
with pytest.raises(Exception):
verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=True,
)
def test_verify_user_with_admin_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"},
admin_only=True,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_with_user_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=False,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user():
user = requires_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user_no_user_id():
with pytest.raises(Exception):
requires_user({"role": "user"})
def test_requires_admin_user():
user = requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_requires_admin_user_not_admin():
with pytest.raises(Exception):
requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)

View File

@@ -0,0 +1,68 @@
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from .jwt_utils import bearer_jwt_auth
def add_auth_responses_to_openapi(app: FastAPI) -> None:
"""
Set up custom OpenAPI schema generation that adds 401 responses
to all authenticated endpoints.
This is needed when using HTTPBearer with auto_error=False to get proper
401 responses instead of 403, but FastAPI only automatically adds security
responses when auto_error=True.
"""
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
version=app.version,
description=app.description,
routes=app.routes,
)
# Add 401 response to all endpoints that have security requirements
for path, methods in openapi_schema["paths"].items():
for method, details in methods.items():
security_schemas = [
schema
for auth_option in details.get("security", [])
for schema in auth_option.keys()
]
if bearer_jwt_auth.scheme_name not in security_schemas:
continue
if "responses" not in details:
details["responses"] = {}
details["responses"]["401"] = {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
}
# Ensure #/components/responses exists
if "components" not in openapi_schema:
openapi_schema["components"] = {}
if "responses" not in openapi_schema["components"]:
openapi_schema["components"]["responses"] = {}
# Define 401 response
openapi_schema["components"]["responses"]["HTTP401NotAuthenticatedError"] = {
"description": "Authentication required",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"detail": {"type": "string"}},
}
}
},
}
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi

View File
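A short sketch of the intended wiring (app title illustrative): call `add_auth_responses_to_openapi` once after registering routes so the generated schema advertises 401 responses on secured endpoints:

```python
from fastapi import FastAPI

from autogpt_libs.auth import add_auth_responses_to_openapi

app = FastAPI(title="AutoGPT Platform API")
# ... register routers/endpoints here ...

add_auth_responses_to_openapi(app)  # installs the caching custom_openapi

schema = app.openapi()              # built once, then cached on the app
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
```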

@@ -0,0 +1,435 @@
"""
Comprehensive tests for auth helpers module to achieve 100% coverage.
Tests OpenAPI schema generation and authentication response handling.
"""
from unittest import mock
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from autogpt_libs.auth.helpers import add_auth_responses_to_openapi
from autogpt_libs.auth.jwt_utils import bearer_jwt_auth
def test_add_auth_responses_to_openapi_basic():
"""Test adding 401 responses to OpenAPI schema."""
app = FastAPI(title="Test App", version="1.0.0")
# Add some test endpoints with authentication
from fastapi import Depends
from autogpt_libs.auth.dependencies import requires_user
@app.get("/protected", dependencies=[Depends(requires_user)])
def protected_endpoint():
return {"message": "Protected"}
@app.get("/public")
def public_endpoint():
return {"message": "Public"}
# Apply the OpenAPI customization
add_auth_responses_to_openapi(app)
# Get the OpenAPI schema
schema = app.openapi()
# Verify basic schema properties
assert schema["info"]["title"] == "Test App"
assert schema["info"]["version"] == "1.0.0"
# Verify 401 response component is added
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify 401 response structure
error_response = schema["components"]["responses"]["HTTP401NotAuthenticatedError"]
assert error_response["description"] == "Authentication required"
assert "application/json" in error_response["content"]
assert "schema" in error_response["content"]["application/json"]
# Verify schema properties
response_schema = error_response["content"]["application/json"]["schema"]
assert response_schema["type"] == "object"
assert "detail" in response_schema["properties"]
assert response_schema["properties"]["detail"]["type"] == "string"
def test_add_auth_responses_to_openapi_with_security():
"""Test that 401 responses are added only to secured endpoints."""
app = FastAPI()
# Mock endpoint with security
from fastapi import Security
from autogpt_libs.auth.dependencies import get_user_id
@app.get("/secured")
def secured_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
@app.post("/also-secured")
def another_secured(user_id: str = Security(get_user_id)):
return {"status": "ok"}
@app.get("/unsecured")
def unsecured_endpoint():
return {"public": True}
# Apply OpenAPI customization
add_auth_responses_to_openapi(app)
# Get schema
schema = app.openapi()
# Check that secured endpoints have 401 responses
if "/secured" in schema["paths"]:
if "get" in schema["paths"]["/secured"]:
secured_get = schema["paths"]["/secured"]["get"]
if "responses" in secured_get:
assert "401" in secured_get["responses"]
assert (
secured_get["responses"]["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
if "/also-secured" in schema["paths"]:
if "post" in schema["paths"]["/also-secured"]:
secured_post = schema["paths"]["/also-secured"]["post"]
if "responses" in secured_post:
assert "401" in secured_post["responses"]
# Check that unsecured endpoint does not have 401 response
if "/unsecured" in schema["paths"]:
if "get" in schema["paths"]["/unsecured"]:
unsecured_get = schema["paths"]["/unsecured"]["get"]
if "responses" in unsecured_get:
assert "401" not in unsecured_get.get("responses", {})
def test_add_auth_responses_to_openapi_cached_schema():
"""Test that OpenAPI schema is cached after first generation."""
app = FastAPI()
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema twice
schema1 = app.openapi()
schema2 = app.openapi()
# Should return the same cached object
assert schema1 is schema2
def test_add_auth_responses_to_openapi_existing_responses():
"""Test handling endpoints that already have responses defined."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get(
"/with-responses",
responses={
200: {"description": "Success"},
404: {"description": "Not found"},
},
)
def endpoint_with_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Check that existing responses are preserved and 401 is added
if "/with-responses" in schema["paths"]:
if "get" in schema["paths"]["/with-responses"]:
responses = schema["paths"]["/with-responses"]["get"].get("responses", {})
# Original responses should be preserved
if "200" in responses:
assert responses["200"]["description"] == "Success"
if "404" in responses:
assert responses["404"]["description"] == "Not found"
# 401 should be added
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_no_security_endpoints():
"""Test with app that has no secured endpoints."""
app = FastAPI()
@app.get("/public1")
def public1():
return {"message": "public1"}
@app.post("/public2")
def public2():
return {"message": "public2"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Component should still be added for consistency
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# But no endpoints should have 401 responses
for path in schema["paths"].values():
for method in path.values():
if isinstance(method, dict) and "responses" in method:
assert "401" not in method["responses"]
def test_add_auth_responses_to_openapi_multiple_security_schemes():
"""Test endpoints with multiple security requirements."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.dependencies import requires_admin_user, requires_user
from autogpt_libs.auth.models import User
@app.get("/multi-auth")
def multi_auth(
user: User = Security(requires_user),
admin: User = Security(requires_admin_user),
):
return {"status": "super secure"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should have 401 response
if "/multi-auth" in schema["paths"]:
if "get" in schema["paths"]["/multi-auth"]:
responses = schema["paths"]["/multi-auth"]["get"].get("responses", {})
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_empty_components():
"""Test when OpenAPI schema has no components section initially."""
app = FastAPI()
# Mock get_openapi to return schema without components
original_get_openapi = get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove components if it exists
if "components" in schema:
del schema["components"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Components should be created
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_add_auth_responses_to_openapi_all_http_methods():
"""Test that all HTTP methods are handled correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/resource")
def get_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "GET"}
@app.post("/resource")
def post_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "POST"}
@app.put("/resource")
def put_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PUT"}
@app.patch("/resource")
def patch_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PATCH"}
@app.delete("/resource")
def delete_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "DELETE"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# All methods should have 401 response
if "/resource" in schema["paths"]:
for method in ["get", "post", "put", "patch", "delete"]:
if method in schema["paths"]["/resource"]:
method_spec = schema["paths"]["/resource"][method]
if "responses" in method_spec:
assert "401" in method_spec["responses"]
def test_bearer_jwt_auth_scheme_config():
"""Test that bearer_jwt_auth is configured correctly."""
assert bearer_jwt_auth.scheme_name == "HTTPBearerJWT"
assert bearer_jwt_auth.auto_error is False
def test_add_auth_responses_with_no_routes():
"""Test OpenAPI generation with app that has no routes."""
app = FastAPI(title="Empty App")
# Apply customization to empty app
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should still have basic structure
assert schema["info"]["title"] == "Empty App"
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_custom_openapi_function_replacement():
"""Test that the custom openapi function properly replaces the default."""
app = FastAPI()
# Store original function
original_openapi = app.openapi
# Apply customization
add_auth_responses_to_openapi(app)
# Function should be replaced
assert app.openapi != original_openapi
assert callable(app.openapi)
def test_endpoint_without_responses_section():
"""Test endpoint that has security but no responses section initially."""
app = FastAPI()
from fastapi import Security
from fastapi.openapi.utils import get_openapi as original_get_openapi
from autogpt_libs.auth.jwt_utils import get_jwt_payload
# Create endpoint
@app.get("/no-responses")
def endpoint_without_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Mock get_openapi to remove responses from the endpoint
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove responses from our endpoint to trigger line 40
if "/no-responses" in schema.get("paths", {}):
if "get" in schema["paths"]["/no-responses"]:
# Delete responses to force the code to create it
if "responses" in schema["paths"]["/no-responses"]["get"]:
del schema["paths"]["/no-responses"]["get"]["responses"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema and verify 401 was added
schema = app.openapi()
# The endpoint should now have 401 response
if "/no-responses" in schema["paths"]:
if "get" in schema["paths"]["/no-responses"]:
responses = schema["paths"]["/no-responses"]["get"].get("responses", {})
assert "401" in responses
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_components_with_existing_responses():
"""Test when components already has a responses section."""
app = FastAPI()
# Mock get_openapi to return schema with existing components/responses
from fastapi.openapi.utils import get_openapi as original_get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Add existing components/responses
if "components" not in schema:
schema["components"] = {}
schema["components"]["responses"] = {
"ExistingResponse": {"description": "An existing response"}
}
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Both responses should exist
assert "ExistingResponse" in schema["components"]["responses"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify our 401 response structure
error_response = schema["components"]["responses"][
"HTTP401NotAuthenticatedError"
]
assert error_response["description"] == "Authentication required"
def test_openapi_schema_persistence():
"""Test that modifications to OpenAPI schema persist correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/test")
def test_endpoint(jwt: dict = Security(get_jwt_payload)):
return {"test": True}
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema multiple times
schema1 = app.openapi()
# Modify the cached schema (shouldn't affect future calls)
schema1["info"]["title"] = "Modified Title"
# Clear cache and get again
app.openapi_schema = None
schema2 = app.openapi()
# Should regenerate with original title
assert schema2["info"]["title"] == app.title
assert schema2["info"]["title"] != "Modified Title"

View File

@@ -1,11 +1,48 @@
from typing import Any, Dict
import logging
from typing import Any
import jwt
from fastapi import HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from .config import settings
from .config import get_settings
from .models import User
logger = logging.getLogger(__name__)
# Bearer token authentication scheme
bearer_jwt_auth = HTTPBearer(
bearerFormat="jwt", scheme_name="HTTPBearerJWT", auto_error=False
)
def parse_jwt_token(token: str) -> Dict[str, Any]:
def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
"""
Extract and validate JWT payload from HTTP Authorization header.
This is the core authentication function that handles:
- Reading the `Authorization` header to obtain the JWT token
- Verifying the JWT token's signature
- Decoding the JWT token's payload
:param credentials: HTTP Authorization credentials from bearer token
:return: JWT payload dictionary
:raises HTTPException: 401 if authentication fails
"""
if not credentials:
raise HTTPException(status_code=401, detail="Authorization header is missing")
try:
payload = parse_jwt_token(credentials.credentials)
logger.debug("Token decoded successfully")
return payload
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
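A minimal usage sketch, assuming a FastAPI app: the dependency verifies the bearer token before the route body runs (the /me route is illustrative, not part of the library):
from typing import Any
from fastapi import FastAPI, Security
app = FastAPI()
@app.get("/me")
def read_me(jwt_payload: dict[str, Any] = Security(get_jwt_payload)) -> dict[str, Any]:
    # By the time we get here the signature and audience are verified;
    # a missing or invalid token already raised HTTPException(401) in the dependency.
    return {"user_id": jwt_payload.get("sub")}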
def parse_jwt_token(token: str) -> dict[str, Any]:
"""
Parse and validate a JWT token.
@@ -13,10 +50,11 @@ def parse_jwt_token(token: str) -> Dict[str, Any]:
:return: The decoded payload
:raises ValueError: If the token is invalid or expired
"""
settings = get_settings()
try:
payload = jwt.decode(
token,
settings.JWT_SECRET_KEY,
settings.JWT_VERIFY_KEY,
algorithms=[settings.JWT_ALGORITHM],
audience="authenticated",
)
@@ -25,3 +63,18 @@ def parse_jwt_token(token: str) -> Dict[str, Any]:
raise ValueError("Token has expired")
except jwt.InvalidTokenError as e:
raise ValueError(f"Invalid token: {str(e)}")
def verify_user(jwt_payload: dict | None, admin_only: bool) -> User:
if jwt_payload is None:
raise HTTPException(status_code=401, detail="Authorization header is missing")
user_id = jwt_payload.get("sub")
if not user_id:
raise HTTPException(status_code=401, detail="User ID not found in token")
if admin_only and jwt_payload["role"] != "admin":
raise HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(jwt_payload)
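For illustration, the requires_user/requires_admin_user dependencies exercised by the tests above can be built on verify_user roughly as sketched here; the real implementations live in autogpt_libs.auth.dependencies and may differ:
from typing import Any
from fastapi import Security
def requires_user(jwt_payload: dict[str, Any] = Security(get_jwt_payload)) -> User:
    # Any authenticated user passes; 401 on missing/invalid payload.
    return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict[str, Any] = Security(get_jwt_payload)) -> User:
    # Additionally enforces role == "admin"; 403 otherwise.
    return verify_user(jwt_payload, admin_only=True)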

View File

@@ -0,0 +1,308 @@
"""
Comprehensive tests for JWT token parsing and validation.
Ensures 100% line and branch coverage for JWT security functions.
"""
import os
from datetime import datetime, timedelta, timezone
import jwt
import pytest
from fastapi import HTTPException
from fastapi.security import HTTPAuthorizationCredentials
from pytest_mock import MockerFixture
from autogpt_libs.auth import config, jwt_utils
from autogpt_libs.auth.config import Settings
from autogpt_libs.auth.models import User
MOCK_JWT_SECRET = "test-secret-key-with-at-least-32-characters"
TEST_USER_PAYLOAD = {
"sub": "test-user-id",
"role": "user",
"aud": "authenticated",
"email": "test@example.com",
}
TEST_ADMIN_PAYLOAD = {
"sub": "admin-user-id",
"role": "admin",
"aud": "authenticated",
"email": "admin@example.com",
}
@pytest.fixture(autouse=True)
def mock_config(mocker: MockerFixture):
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": MOCK_JWT_SECRET}, clear=True)
mocker.patch.object(config, "_settings", Settings())
yield
def create_token(payload, secret=None, algorithm="HS256"):
"""Helper to create JWT tokens."""
if secret is None:
secret = MOCK_JWT_SECRET
return jwt.encode(payload, secret, algorithm=algorithm)
def test_parse_jwt_token_valid():
"""Test parsing a valid JWT token."""
token = create_token(TEST_USER_PAYLOAD)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
assert result["aud"] == "authenticated"
def test_parse_jwt_token_expired():
"""Test parsing an expired JWT token."""
expired_payload = {
**TEST_USER_PAYLOAD,
"exp": datetime.now(timezone.utc) - timedelta(hours=1),
}
token = create_token(expired_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Token has expired" in str(exc_info.value)
def test_parse_jwt_token_invalid_signature():
"""Test parsing a token with invalid signature."""
# Create token with different secret
token = create_token(TEST_USER_PAYLOAD, secret="wrong-secret")
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_malformed():
"""Test parsing a malformed token."""
malformed_tokens = [
"not.a.token",
"invalid",
"",
# Header only
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9",
# No signature
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0",
]
for token in malformed_tokens:
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_wrong_audience():
"""Test parsing a token with wrong audience."""
wrong_aud_payload = {**TEST_USER_PAYLOAD, "aud": "wrong-audience"}
token = create_token(wrong_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_missing_audience():
"""Test parsing a token without audience claim."""
no_aud_payload = {k: v for k, v in TEST_USER_PAYLOAD.items() if k != "aud"}
token = create_token(no_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_get_jwt_payload_with_valid_token():
"""Test extracting JWT payload with valid bearer token."""
token = create_token(TEST_USER_PAYLOAD)
credentials = HTTPAuthorizationCredentials(scheme="Bearer", credentials=token)
result = jwt_utils.get_jwt_payload(credentials)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
def test_get_jwt_payload_no_credentials():
"""Test JWT payload when no credentials provided."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(None)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_get_jwt_payload_invalid_token():
"""Test JWT payload extraction with invalid token."""
credentials = HTTPAuthorizationCredentials(
scheme="Bearer", credentials="invalid.token.here"
)
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(credentials)
assert exc_info.value.status_code == 401
assert "Invalid token" in exc_info.value.detail
def test_verify_user_with_valid_user():
"""Test verifying a valid user."""
user = jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=False)
assert isinstance(user, User)
assert user.user_id == "test-user-id"
assert user.role == "user"
assert user.email == "test@example.com"
def test_verify_user_with_admin():
"""Test verifying an admin user."""
user = jwt_utils.verify_user(TEST_ADMIN_PAYLOAD, admin_only=True)
assert isinstance(user, User)
assert user.user_id == "admin-user-id"
assert user.role == "admin"
def test_verify_user_admin_only_with_regular_user():
"""Test verifying regular user when admin is required."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=True)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_verify_user_no_payload():
"""Test verifying user with no payload."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(None, admin_only=False)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_verify_user_missing_sub():
"""Test verifying user with payload missing 'sub' field."""
invalid_payload = {"role": "user", "email": "test@example.com"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_empty_sub():
"""Test verifying user with empty 'sub' field."""
invalid_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_none_sub():
"""Test verifying user with None 'sub' field."""
invalid_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_missing_role_admin_check():
"""Test verifying admin when role field is missing."""
no_role_payload = {"sub": "user-id"}
with pytest.raises(KeyError):
# This will raise KeyError when checking payload["role"]
jwt_utils.verify_user(no_role_payload, admin_only=True)
# ======================== EDGE CASES ======================== #
def test_jwt_with_additional_claims():
"""Test JWT token with additional custom claims."""
extra_claims_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"custom_claim": "custom_value",
"permissions": ["read", "write"],
"metadata": {"key": "value"},
}
token = create_token(extra_claims_payload)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "user-id"
assert result["custom_claim"] == "custom_value"
assert result["permissions"] == ["read", "write"]
def test_jwt_with_numeric_sub():
"""Test JWT token with numeric user ID."""
payload = {
"sub": 12345, # Numeric ID
"role": "user",
"aud": "authenticated",
}
# The numeric ID is passed through unchanged (no string coercion)
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == 12345
def test_jwt_with_very_long_sub():
"""Test JWT token with very long user ID."""
long_id = "a" * 1000
payload = {
"sub": long_id,
"role": "user",
"aud": "authenticated",
}
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == long_id
def test_jwt_with_special_characters_in_claims():
"""Test JWT token with special characters in claims."""
payload = {
"sub": "user@example.com/special-chars!@#$%",
"role": "admin",
"aud": "authenticated",
"email": "test+special@example.com",
}
user = jwt_utils.verify_user(payload, admin_only=True)
assert "special-chars!@#$%" in user.user_id
def test_jwt_with_future_iat():
"""Test JWT token with issued-at time in future."""
future_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"iat": datetime.now(timezone.utc) + timedelta(hours=1),
}
token = create_token(future_payload)
# PyJWT validates iat claim and should reject future tokens
with pytest.raises(ValueError, match="not yet valid"):
jwt_utils.parse_jwt_token(token)
def test_jwt_with_different_algorithms():
"""Test that only HS256 algorithm is accepted."""
payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
}
# Try different algorithms
algorithms = ["HS384", "HS512", "none"]
for algo in algorithms:
if algo == "none":
# Special case for 'none' algorithm (security vulnerability if accepted)
token = create_token(payload, "", algorithm="none")
else:
token = create_token(payload, algorithm=algo)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)

View File

@@ -1,140 +0,0 @@
import inspect
import logging
import secrets
from typing import Any, Callable, Optional
from fastapi import HTTPException, Request, Security
from fastapi.security import APIKeyHeader, HTTPBearer
from starlette.status import HTTP_401_UNAUTHORIZED
from .config import settings
from .jwt_utils import parse_jwt_token
security = HTTPBearer()
logger = logging.getLogger(__name__)
async def auth_middleware(request: Request):
if not settings.ENABLE_AUTH:
# If authentication is disabled, allow the request to proceed
logger.warning("Auth disabled")
return {}
security = HTTPBearer()
credentials = await security(request)
if not credentials:
raise HTTPException(status_code=401, detail="Authorization header is missing")
try:
payload = parse_jwt_token(credentials.credentials)
request.state.user = payload
logger.debug("Token decoded successfully")
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
return payload
class APIKeyValidator:
"""
Configurable API key validator that supports custom validation functions
for FastAPI applications.
This class provides a flexible way to implement API key authentication with optional
custom validation logic. It can be used for simple token matching
or more complex validation scenarios like database lookups.
Examples:
Simple token validation:
```python
validator = APIKeyValidator(
header_name="X-API-Key",
expected_token="your-secret-token"
)
@app.get("/protected", dependencies=[Depends(validator.get_dependency())])
def protected_endpoint():
return {"message": "Access granted"}
```
Custom validation with database lookup:
```python
async def validate_with_db(api_key: str):
api_key_obj = await db.get_api_key(api_key)
return api_key_obj if api_key_obj and api_key_obj.is_active else None
validator = APIKeyValidator(
header_name="X-API-Key",
validate_fn=validate_with_db
)
```
Args:
header_name (str): The name of the header containing the API key
expected_token (Optional[str]): The expected API key value for simple token matching
validate_fn (Optional[Callable]): Custom validation function that takes an API key
string and returns a boolean or object. Can be async.
error_status (int): HTTP status code to use for validation errors
error_message (str): Error message to return when validation fails
"""
def __init__(
self,
header_name: str,
expected_token: Optional[str] = None,
validate_fn: Optional[Callable[[str], bool]] = None,
error_status: int = HTTP_401_UNAUTHORIZED,
error_message: str = "Invalid API key",
):
# Create the APIKeyHeader as a class property
self.security_scheme = APIKeyHeader(name=header_name)
self.expected_token = expected_token
self.custom_validate_fn = validate_fn
self.error_status = error_status
self.error_message = error_message
async def default_validator(self, api_key: str) -> bool:
if not self.expected_token:
raise ValueError(
"Expected Token Required to be set when uisng API Key Validator default validation"
)
return secrets.compare_digest(api_key, self.expected_token)
async def __call__(
self, request: Request, api_key: str = Security(APIKeyHeader)
) -> Any:
if api_key is None:
raise HTTPException(status_code=self.error_status, detail="Missing API key")
# Use custom validation if provided, otherwise use default equality check
validator = self.custom_validate_fn or self.default_validator
result = (
await validator(api_key)
if inspect.iscoroutinefunction(validator)
else validator(api_key)
)
if not result:
raise HTTPException(
status_code=self.error_status, detail=self.error_message
)
# Store validation result in request state if it's not just a boolean
if result is not True:
request.state.api_key = result
return result
def get_dependency(self):
"""
Returns a callable dependency that FastAPI will recognize as a security scheme
"""
async def validate_api_key(
request: Request, api_key: str = Security(self.security_scheme)
) -> Any:
return await self(request, api_key)
# This helps FastAPI recognize it as a security dependency
validate_api_key.__name__ = f"validate_{self.security_scheme.model.name}"
return validate_api_key

View File

@@ -1,3 +1,5 @@
from typing import Optional
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
@@ -13,8 +15,8 @@ class RateLimitSettings(BaseSettings):
default="6379", description="Redis port", validation_alias="REDIS_PORT"
)
redis_password: str = Field(
default="password",
redis_password: Optional[str] = Field(
default=None,
description="Redis password",
validation_alias="REDIS_PASSWORD",
)

View File

@@ -11,7 +11,7 @@ class RateLimiter:
self,
redis_host: str = RATE_LIMIT_SETTINGS.redis_host,
redis_port: str = RATE_LIMIT_SETTINGS.redis_port,
redis_password: str = RATE_LIMIT_SETTINGS.redis_password,
redis_password: str | None = RATE_LIMIT_SETTINGS.redis_password,
requests_per_minute: int = RATE_LIMIT_SETTINGS.requests_per_minute,
):
self.redis = Redis(

View File

@@ -1,90 +1,68 @@
import asyncio
import inspect
import logging
import threading
import time
from functools import wraps
from typing import (
Awaitable,
Any,
Callable,
ParamSpec,
Protocol,
Tuple,
TypeVar,
cast,
overload,
runtime_checkable,
)
P = ParamSpec("P")
R = TypeVar("R")
R_co = TypeVar("R_co", covariant=True)
logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
def _make_hashable_key(
args: tuple[Any, ...], kwargs: dict[str, Any]
) -> tuple[Any, ...]:
"""
Convert args and kwargs into a hashable cache key.
Handles unhashable types like dict, list, and set by converting them to
sorted, tagged tuple representations (with str() as a last-resort fallback).
"""
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def make_hashable(obj: Any) -> Any:
"""Recursively convert an object to a hashable representation."""
if isinstance(obj, dict):
# Sort dict items to ensure consistent ordering
return (
"__dict__",
tuple(sorted((k, make_hashable(v)) for k, v in obj.items())),
)
elif isinstance(obj, (list, tuple)):
return ("__list__", tuple(make_hashable(item) for item in obj))
elif isinstance(obj, set):
return ("__set__", tuple(sorted(make_hashable(item) for item in obj)))
elif hasattr(obj, "__dict__"):
# Handle objects with __dict__ attribute
return ("__obj__", obj.__class__.__name__, make_hashable(obj.__dict__))
else:
# For basic hashable types (str, int, bool, None, etc.)
try:
hash(obj)
return obj
except TypeError:
# Fallback: convert to string representation
return ("__str__", str(obj))
def thread_cached(
func: Callable[P, R] | Callable[P, Awaitable[R]],
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = await cast(Callable[P, Awaitable[R]], func)(
*args, **kwargs
)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
if clear := getattr(func, "clear_cache", None):
clear()
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
hashable_args = tuple(make_hashable(arg) for arg in args)
hashable_kwargs = tuple(sorted((k, make_hashable(v)) for k, v in kwargs.items()))
return (hashable_args, hashable_kwargs)
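To make the key construction concrete, a small illustrative check (values are arbitrary): identical dict contents yield identical, hashable keys regardless of insertion order.
k1 = _make_hashable_key(({"b": 2, "a": 1},), {"flag": True})
k2 = _make_hashable_key(({"a": 1, "b": 2},), {"flag": True})
assert k1 == k2
assert hash(k1) == hash(k2)  # tuples of hashables, safe to use as dict keys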
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
"""Protocol for async functions with cache management methods."""
class CachedFunction(Protocol[P, R_co]):
"""Protocol for cached functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
@@ -94,101 +72,169 @@ class AsyncCachedFunction(Protocol[P, R_co]):
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
def cache_delete(self, *args: P.args, **kwargs: P.kwargs) -> bool:
"""Delete a specific cache entry by its arguments. Returns True if entry existed."""
return False
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
"""Call the cached function."""
return None # type: ignore
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
def cached(
*,
maxsize: int = 128,
ttl_seconds: int | None = None,
) -> Callable[[Callable], CachedFunction]:
"""
TTL (Time To Live) cache decorator for async functions.
Thundering-herd-safe cache decorator for both sync and async functions.
Similar to functools.lru_cache but works with async functions and includes optional TTL.
Uses double-checked locking to prevent multiple threads/coroutines from
executing the expensive operation simultaneously during cache misses.
Args:
func: The function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
ttl_seconds: Time to live in seconds. If None, entries never expire (like lru_cache)
ttl_seconds: Time to live in seconds. If None, entries never expire
Returns:
Decorator function
Decorated function or decorator
Example:
# With TTL
@async_ttl_cache(maxsize=1000, ttl_seconds=300)
async def api_call(param: str) -> dict:
@cached() # Default: maxsize=128, no TTL
def expensive_sync_operation(param: str) -> dict:
return {"result": param}
# Without TTL (permanent cache like lru_cache)
@async_ttl_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
@cached() # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
@cached(maxsize=1000, ttl_seconds=300) # Custom maxsize and TTL
def another_operation(param: str) -> dict:
return {"result": param}
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
def decorator(target_func):
# Cache storage and locks
cache_storage = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
if inspect.iscoroutinefunction(target_func):
# Async function with asyncio.Lock
cache_lock = asyncio.Lock()
# Check if we have a valid cached entry
if key in cache_storage:
if ttl_seconds is None:
# No TTL - return cached result directly
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
else:
# With TTL - check expiration
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
@wraps(target_func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
async with cache_lock:
# Double-check: another coroutine might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
# Expired entry
del cache_storage[key]
logger.debug(
f"Cache entry expired for {async_func.__name__}"
)
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss or expired - fetch fresh data
logger.debug(
f"Cache miss for {async_func.__name__} with key: {str(key)[:50]}"
)
result = await async_func(*args, **kwargs)
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = await target_func(*args, **kwargs)
# Store in cache
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Simple cleanup when cache gets too large
if len(cache_storage) > maxsize:
# Remove oldest entries (simple FIFO cleanup)
cutoff = maxsize // 2
oldest_keys = list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
logger.debug(
f"Cache cleanup: removed {len(oldest_keys)} entries for {async_func.__name__}"
)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
return result
# Add cache management methods (similar to functools.lru_cache)
wrapper = async_wrapper
else:
# Sync function with threading.Lock
cache_lock = threading.Lock()
@wraps(target_func)
def sync_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
with cache_lock:
# Double-check: another thread might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = target_func(*args, **kwargs)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
wrapper = sync_wrapper
# Add cache management methods
def cache_clear() -> None:
cache_storage.clear()
@@ -199,68 +245,84 @@ def async_ttl_cache(
"ttl_seconds": ttl_seconds,
}
# Attach methods to wrapper
def cache_delete(*args, **kwargs) -> bool:
"""Delete a specific cache entry. Returns True if entry existed."""
key = _make_hashable_key(args, kwargs)
if key in cache_storage:
del cache_storage[key]
return True
return False
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
setattr(wrapper, "cache_delete", cache_delete)
return cast(AsyncCachedFunction[P, R], wrapper)
return cast(CachedFunction, wrapper)
return decorator
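A compact usage sketch of the management API attached to the wrapper (the lookup function and the maxsize/ttl values are hypothetical):
@cached(maxsize=256, ttl_seconds=30)
def lookup(user_id: str) -> dict:
    return {"user": user_id}
lookup("alice")               # miss: executes the function and stores the result
lookup("alice")               # hit: served from cache, no second execution
info = lookup.cache_info()    # e.g. {"size": 1, "maxsize": 256, "ttl_seconds": 30}
lookup.cache_delete("alice")  # evict just this entry -> True
lookup.cache_clear()          # drop all entries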
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
def thread_cached(func):
"""
Process-level cache decorator for async functions (no TTL).
Thread-local cache decorator for both sync and async functions.
Similar to functools.lru_cache but works with async functions.
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Each thread gets its own cache, which is useful for request-scoped caching
in web applications where you want to cache within a single request but
not across requests.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
func: The function to cache
Returns:
Decorated function or decorator
Decorated function with thread-local caching
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
@thread_cached
def expensive_operation(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
@thread_cached # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
@wraps(func)
async def async_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = await func(*args, **kwargs)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)
@wraps(func)
def sync_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
"""Clear thread-local cache for a function."""
if clear := getattr(func, "clear_cache", None):
clear()
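A hedged sketch of the thread-local behaviour described above (function and thread usage are illustrative): a value cached in one thread is recomputed in another, and clear_thread_cache drops only the calling thread's entries.
import threading
@thread_cached
def tenant_settings(tenant: str) -> dict:
    return {"tenant": tenant}  # imagine an expensive per-request lookup
tenant_settings("acme")  # computed in the main thread
tenant_settings("acme")  # served from the main thread's cache
worker = threading.Thread(target=tenant_settings, args=("acme",))
worker.start()
worker.join()  # recomputed: the worker thread starts with an empty cache
clear_thread_cache(tenant_settings)  # clears the current (main) thread's cache only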

View File

@@ -16,12 +16,7 @@ from unittest.mock import Mock
import pytest
from autogpt_libs.utils.cache import (
async_cache,
async_ttl_cache,
clear_thread_cache,
thread_cached,
)
from autogpt_libs.utils.cache import cached, clear_thread_cache, thread_cached
class TestThreadCached:
@@ -330,102 +325,202 @@ class TestThreadCached:
assert mock.call_count == 2
class TestAsyncTTLCache:
"""Tests for the @async_ttl_cache decorator."""
class TestCache:
"""Tests for the unified @cache decorator (works for both sync and async)."""
@pytest.mark.asyncio
async def test_basic_caching(self):
"""Test basic caching functionality."""
def test_basic_sync_caching(self):
"""Test basic sync caching functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def cached_function(x: int, y: int = 0) -> int:
@cached()
def expensive_sync_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
return x + y
# First call
result1 = expensive_sync_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = expensive_sync_function(1, 2)
assert result2 == 3
assert call_count == 1
# Different args - should call function again
result3 = expensive_sync_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_basic_async_caching(self):
"""Test basic async caching functionality."""
call_count = 0
@cached()
async def expensive_async_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
result1 = await expensive_async_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
result2 = await expensive_async_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
assert call_count == 1
# Different args - should call function again
result3 = await cached_function(2, 3)
result3 = await expensive_async_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_ttl_expiration(self):
"""Test that cache entries expire after TTL."""
def test_sync_thundering_herd_protection(self):
"""Test that concurrent sync calls don't cause thundering herd."""
call_count = 0
results = []
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def short_lived_cache(x: int) -> int:
@cached()
def slow_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 2
time.sleep(0.1) # Simulate expensive operation
return x * x
def worker():
result = slow_function(5)
results.append(result)
# Launch multiple concurrent threads
with ThreadPoolExecutor(max_workers=5) as executor:
futures = [executor.submit(worker) for _ in range(5)]
for future in futures:
future.result()
# All results should be the same
assert all(result == 25 for result in results)
# Only one thread should have executed the expensive operation
assert call_count == 1
@pytest.mark.asyncio
async def test_async_thundering_herd_protection(self):
"""Test that concurrent async calls don't cause thundering herd."""
call_count = 0
@cached()
async def slow_async_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.1) # Simulate expensive operation
return x * x
# Launch concurrent coroutines
tasks = [slow_async_function(7) for _ in range(5)]
results = await asyncio.gather(*tasks)
# All results should be the same
assert all(result == 49 for result in results)
# Only one coroutine should have executed the expensive operation
assert call_count == 1
def test_ttl_functionality(self):
"""Test TTL functionality with sync function."""
call_count = 0
@cached(maxsize=10, ttl_seconds=1) # Short TTL
def ttl_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 3
# First call
result1 = await short_lived_cache(5)
assert result1 == 10
result1 = ttl_function(3)
assert result1 == 9
assert call_count == 1
# Second call immediately - should use cache
result2 = await short_lived_cache(5)
assert result2 == 10
result2 = ttl_function(3)
assert result2 == 9
assert call_count == 1
# Wait for TTL to expire
time.sleep(1.1)
# Third call after expiration - should call function again
result3 = ttl_function(3)
assert result3 == 9
assert call_count == 2
@pytest.mark.asyncio
async def test_async_ttl_functionality(self):
"""Test TTL functionality with async function."""
call_count = 0
@cached(maxsize=10, ttl_seconds=1) # Short TTL
async def async_ttl_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x * 4
# First call
result1 = await async_ttl_function(3)
assert result1 == 12
assert call_count == 1
# Second call immediately - should use cache
result2 = await async_ttl_function(3)
assert result2 == 12
assert call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Third call after expiration - should call function again
result3 = await short_lived_cache(5)
assert result3 == 10
result3 = await async_ttl_function(3)
assert result3 == 12
assert call_count == 2
@pytest.mark.asyncio
async def test_cache_info(self):
def test_cache_info(self):
"""Test cache info functionality."""
@async_ttl_cache(maxsize=5, ttl_seconds=300)
async def info_test_function(x: int) -> int:
@cached(maxsize=10, ttl_seconds=60)
def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] == 300
assert info["maxsize"] == 10
assert info["ttl_seconds"] == 60
# Add an entry
await info_test_function(1)
info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
@pytest.mark.asyncio
async def test_cache_clear(self):
def test_cache_clear(self):
"""Test cache clearing functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def clearable_function(x: int) -> int:
@cached()
def clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 4
# First call
result1 = await clearable_function(2)
result1 = clearable_function(2)
assert result1 == 8
assert call_count == 1
# Second call - should use cache
result2 = await clearable_function(2)
result2 = clearable_function(2)
assert result2 == 8
assert call_count == 1
@@ -433,273 +528,149 @@ class TestAsyncTTLCache:
clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await clearable_function(2)
result3 = clearable_function(2)
assert result3 == 8
assert call_count == 2
@pytest.mark.asyncio
async def test_maxsize_cleanup(self):
"""Test that cache cleans up when maxsize is exceeded."""
async def test_async_cache_clear(self):
"""Test cache clearing functionality with async function."""
call_count = 0
@async_ttl_cache(maxsize=3, ttl_seconds=60)
async def size_limited_function(x: int) -> int:
@cached()
async def async_clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
await asyncio.sleep(0.01)
return x * 5
# Fill cache to maxsize
await size_limited_function(1) # call_count: 1
await size_limited_function(2) # call_count: 2
await size_limited_function(3) # call_count: 3
info = size_limited_function.cache_info()
assert info["size"] == 3
# Add one more entry - should trigger cleanup
await size_limited_function(4) # call_count: 4
# Cache size should be reduced (cleanup removes oldest entries)
info = size_limited_function.cache_info()
assert info["size"] is not None and info["size"] <= 3 # Should be cleaned up
@pytest.mark.asyncio
async def test_argument_variations(self):
"""Test caching with different argument patterns."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def arg_test_function(a: int, b: str = "default", *, c: int = 100) -> str:
nonlocal call_count
call_count += 1
return f"{a}-{b}-{c}"
# Different ways to call with same logical arguments
result1 = await arg_test_function(1, "test", c=200)
assert call_count == 1
# Same arguments, same order - should use cache
result2 = await arg_test_function(1, "test", c=200)
assert call_count == 1
assert result1 == result2
# Different arguments - should call function
result3 = await arg_test_function(2, "test", c=200)
assert call_count == 2
assert result1 != result3
@pytest.mark.asyncio
async def test_exception_handling(self):
"""Test that exceptions are not cached."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def exception_function(x: int) -> int:
nonlocal call_count
call_count += 1
if x < 0:
raise ValueError("Negative value not allowed")
return x * 2
# Successful call - should be cached
result1 = await exception_function(5)
# First call
result1 = await async_clearable_function(2)
assert result1 == 10
assert call_count == 1
# Same successful call - should use cache
result2 = await exception_function(5)
# Second call - should use cache
result2 = await async_clearable_function(2)
assert result2 == 10
assert call_count == 1
# Exception call - should not be cached
with pytest.raises(ValueError):
await exception_function(-1)
# Clear cache
async_clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await async_clearable_function(2)
assert result3 == 10
assert call_count == 2
# Same exception call - should call again (not cached)
with pytest.raises(ValueError):
await exception_function(-1)
@pytest.mark.asyncio
async def test_async_function_returns_results_not_coroutines(self):
"""Test that cached async functions return actual results, not coroutines."""
call_count = 0
@cached()
async def async_result_function(x: int) -> str:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return f"result_{x}"
# First call
result1 = await async_result_function(1)
assert result1 == "result_1"
assert isinstance(result1, str) # Should be string, not coroutine
assert call_count == 1
# Second call - should return cached result (string), not coroutine
result2 = await async_result_function(1)
assert result2 == "result_1"
assert isinstance(result2, str) # Should be string, not coroutine
assert call_count == 1 # Function should not be called again
# Verify results are identical
assert result1 is result2 # Should be same cached object
def test_cache_delete(self):
"""Test selective cache deletion functionality."""
call_count = 0
@cached()
def deletable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 6
# First call for x=1
result1 = deletable_function(1)
assert result1 == 6
assert call_count == 1
# First call for x=2
result2 = deletable_function(2)
assert result2 == 12
assert call_count == 2
# Second calls - should use cache
assert deletable_function(1) == 6
assert deletable_function(2) == 12
assert call_count == 2
# Delete specific entry for x=1
was_deleted = deletable_function.cache_delete(1)
assert was_deleted is True
# Call with x=1 should execute function again
result3 = deletable_function(1)
assert result3 == 6
assert call_count == 3
@pytest.mark.asyncio
async def test_concurrent_calls(self):
"""Test caching behavior with concurrent calls."""
call_count = 0
# Call with x=2 should still use cache
assert deletable_function(2) == 12
assert call_count == 3
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def concurrent_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.05) # Simulate work
return x * x
# Launch concurrent calls with same arguments
tasks = [concurrent_function(3) for _ in range(5)]
results = await asyncio.gather(*tasks)
# All results should be the same
assert all(result == 9 for result in results)
# Note: Due to race conditions, call_count might be up to 5 for concurrent calls
# This tests that the cache doesn't break under concurrent access
assert 1 <= call_count <= 5
class TestAsyncCache:
"""Tests for the @async_cache decorator (no TTL)."""
# Try to delete non-existent entry
was_deleted = deletable_function.cache_delete(99)
assert was_deleted is False
@pytest.mark.asyncio
async def test_basic_caching_no_ttl(self):
"""Test basic caching functionality without TTL."""
async def test_async_cache_delete(self):
"""Test selective cache deletion functionality with async function."""
call_count = 0
@async_cache(maxsize=10)
async def cached_function(x: int, y: int = 0) -> int:
@cached()
async def async_deletable_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
await asyncio.sleep(0.01)
return x * 7
# First call
result1 = await cached_function(1, 2)
assert result1 == 3
# First call for x=1
result1 = await async_deletable_function(1)
assert result1 == 7
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
# Third call after some time - should still use cache (no TTL)
await asyncio.sleep(0.05)
result3 = await cached_function(1, 2)
assert result3 == 3
assert call_count == 1 # Still no additional call
# Different args - should call function again
result4 = await cached_function(2, 3)
assert result4 == 5
# First call for x=2
result2 = await async_deletable_function(2)
assert result2 == 14
assert call_count == 2
@pytest.mark.asyncio
async def test_no_ttl_vs_ttl_behavior(self):
"""Test the difference between TTL and no-TTL caching."""
ttl_call_count = 0
no_ttl_call_count = 0
# Second calls - should use cache
assert await async_deletable_function(1) == 7
assert await async_deletable_function(2) == 14
assert call_count == 2
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_call_count
ttl_call_count += 1
return x * 2
# Delete specific entry for x=1
was_deleted = async_deletable_function.cache_delete(1)
assert was_deleted is True
@async_cache(maxsize=10) # No TTL
async def no_ttl_function(x: int) -> int:
nonlocal no_ttl_call_count
no_ttl_call_count += 1
return x * 2
# Call with x=1 should execute function again
result3 = await async_deletable_function(1)
assert result3 == 7
assert call_count == 3
# First calls
await ttl_function(5)
await no_ttl_function(5)
assert ttl_call_count == 1
assert no_ttl_call_count == 1
# Call with x=2 should still use cache
assert await async_deletable_function(2) == 14
assert call_count == 3
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Second calls after TTL expiry
await ttl_function(5) # Should call function again (TTL expired)
await no_ttl_function(5) # Should use cache (no TTL)
assert ttl_call_count == 2 # TTL function called again
assert no_ttl_call_count == 1 # No-TTL function still cached
@pytest.mark.asyncio
async def test_async_cache_info(self):
"""Test cache info for no-TTL cache."""
@async_cache(maxsize=5)
async def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] is None # No TTL
# Add an entry
await info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
class TestTTLOptional:
"""Tests for optional TTL functionality."""
@pytest.mark.asyncio
async def test_ttl_none_behavior(self):
"""Test that ttl_seconds=None works like no TTL."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=None)
async def no_ttl_via_none(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
# First call
result1 = await no_ttl_via_none(3)
assert result1 == 9
assert call_count == 1
# Wait (would expire if there was TTL)
await asyncio.sleep(0.1)
# Second call - should still use cache
result2 = await no_ttl_via_none(3)
assert result2 == 9
assert call_count == 1 # No additional call
# Check cache info
info = no_ttl_via_none.cache_info()
assert info["ttl_seconds"] is None
@pytest.mark.asyncio
async def test_cache_options_comparison(self):
"""Test different cache options work as expected."""
ttl_calls = 0
no_ttl_calls = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # With TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_calls
ttl_calls += 1
return x * 10
@async_cache(maxsize=10) # Process-level cache (no TTL)
async def process_function(x: int) -> int:
nonlocal no_ttl_calls
no_ttl_calls += 1
return x * 10
# Both should cache initially
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Immediate second calls - both should use cache
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# After TTL expiry
await ttl_function(3) # Should call function again
await process_function(3) # Should still use cache
assert ttl_calls == 2 # TTL cache expired, called again
assert no_ttl_calls == 1 # Process cache never expires
# Try to delete non-existent entry
was_deleted = async_deletable_function.cache_delete(99)
assert was_deleted is False

View File

@@ -54,7 +54,7 @@ version = "1.2.0"
description = "Backport of asyncio.Runner, a context manager that controls event loop life cycle."
optional = false
python-versions = "<3.11,>=3.8"
groups = ["main"]
groups = ["dev"]
markers = "python_version < \"3.11\""
files = [
{file = "backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5"},
@@ -85,6 +85,87 @@ files = [
{file = "certifi-2025.7.14.tar.gz", hash = "sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995"},
]
[[package]]
name = "cffi"
version = "1.17.1"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
]
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "3.4.2"
@@ -208,12 +289,176 @@ version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
[[package]]
name = "coverage"
version = "7.10.5"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "coverage-7.10.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c6a5c3414bfc7451b879141ce772c546985163cf553f08e0f135f0699a911801"},
{file = "coverage-7.10.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bc8e4d99ce82f1710cc3c125adc30fd1487d3cf6c2cd4994d78d68a47b16989a"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:02252dc1216e512a9311f596b3169fad54abcb13827a8d76d5630c798a50a754"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:73269df37883e02d460bee0cc16be90509faea1e3bd105d77360b512d5bb9c33"},
{file = "coverage-7.10.5-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f8a81b0614642f91c9effd53eec284f965577591f51f547a1cbeb32035b4c2f"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6a29f8e0adb7f8c2b95fa2d4566a1d6e6722e0a637634c6563cb1ab844427dd9"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:fcf6ab569436b4a647d4e91accba12509ad9f2554bc93d3aee23cc596e7f99c3"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:90dc3d6fb222b194a5de60af8d190bedeeddcbc7add317e4a3cd333ee6b7c879"},
{file = "coverage-7.10.5-cp310-cp310-win32.whl", hash = "sha256:414a568cd545f9dc75f0686a0049393de8098414b58ea071e03395505b73d7a8"},
{file = "coverage-7.10.5-cp310-cp310-win_amd64.whl", hash = "sha256:e551f9d03347196271935fd3c0c165f0e8c049220280c1120de0084d65e9c7ff"},
{file = "coverage-7.10.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c177e6ffe2ebc7c410785307758ee21258aa8e8092b44d09a2da767834f075f2"},
{file = "coverage-7.10.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:14d6071c51ad0f703d6440827eaa46386169b5fdced42631d5a5ac419616046f"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:61f78c7c3bc272a410c5ae3fde7792b4ffb4acc03d35a7df73ca8978826bb7ab"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f39071caa126f69d63f99b324fb08c7b1da2ec28cbb1fe7b5b1799926492f65c"},
{file = "coverage-7.10.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343a023193f04d46edc46b2616cdbee68c94dd10208ecd3adc56fcc54ef2baa1"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:585ffe93ae5894d1ebdee69fc0b0d4b7c75d8007983692fb300ac98eed146f78"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:b0ef4e66f006ed181df29b59921bd8fc7ed7cd6a9289295cd8b2824b49b570df"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eb7b0bbf7cc1d0453b843eca7b5fa017874735bef9bfdfa4121373d2cc885ed6"},
{file = "coverage-7.10.5-cp311-cp311-win32.whl", hash = "sha256:1d043a8a06987cc0c98516e57c4d3fc2c1591364831e9deb59c9e1b4937e8caf"},
{file = "coverage-7.10.5-cp311-cp311-win_amd64.whl", hash = "sha256:fefafcca09c3ac56372ef64a40f5fe17c5592fab906e0fdffd09543f3012ba50"},
{file = "coverage-7.10.5-cp311-cp311-win_arm64.whl", hash = "sha256:7e78b767da8b5fc5b2faa69bb001edafcd6f3995b42a331c53ef9572c55ceb82"},
{file = "coverage-7.10.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c2d05c7e73c60a4cecc7d9b60dbfd603b4ebc0adafaef371445b47d0f805c8a9"},
{file = "coverage-7.10.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:32ddaa3b2c509778ed5373b177eb2bf5662405493baeff52278a0b4f9415188b"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dd382410039fe062097aa0292ab6335a3f1e7af7bba2ef8d27dcda484918f20c"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7fa22800f3908df31cea6fb230f20ac49e343515d968cc3a42b30d5c3ebf9b5a"},
{file = "coverage-7.10.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f366a57ac81f5e12797136552f5b7502fa053c861a009b91b80ed51f2ce651c6"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5f1dc8f1980a272ad4a6c84cba7981792344dad33bf5869361576b7aef42733a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:2285c04ee8676f7938b02b4936d9b9b672064daab3187c20f73a55f3d70e6b4a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c2492e4dd9daab63f5f56286f8a04c51323d237631eb98505d87e4c4ff19ec34"},
{file = "coverage-7.10.5-cp312-cp312-win32.whl", hash = "sha256:38a9109c4ee8135d5df5505384fc2f20287a47ccbe0b3f04c53c9a1989c2bbaf"},
{file = "coverage-7.10.5-cp312-cp312-win_amd64.whl", hash = "sha256:6b87f1ad60b30bc3c43c66afa7db6b22a3109902e28c5094957626a0143a001f"},
{file = "coverage-7.10.5-cp312-cp312-win_arm64.whl", hash = "sha256:672a6c1da5aea6c629819a0e1461e89d244f78d7b60c424ecf4f1f2556c041d8"},
{file = "coverage-7.10.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ef3b83594d933020f54cf65ea1f4405d1f4e41a009c46df629dd964fcb6e907c"},
{file = "coverage-7.10.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2b96bfdf7c0ea9faebce088a3ecb2382819da4fbc05c7b80040dbc428df6af44"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:63df1fdaffa42d914d5c4d293e838937638bf75c794cf20bee12978fc8c4e3bc"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8002dc6a049aac0e81ecec97abfb08c01ef0c1fbf962d0c98da3950ace89b869"},
{file = "coverage-7.10.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:63d4bb2966d6f5f705a6b0c6784c8969c468dbc4bcf9d9ded8bff1c7e092451f"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1f672efc0731a6846b157389b6e6d5d5e9e59d1d1a23a5c66a99fd58339914d5"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3f39cef43d08049e8afc1fde4a5da8510fc6be843f8dea350ee46e2a26b2f54c"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:2968647e3ed5a6c019a419264386b013979ff1fb67dd11f5c9886c43d6a31fc2"},
{file = "coverage-7.10.5-cp313-cp313-win32.whl", hash = "sha256:0d511dda38595b2b6934c2b730a1fd57a3635c6aa2a04cb74714cdfdd53846f4"},
{file = "coverage-7.10.5-cp313-cp313-win_amd64.whl", hash = "sha256:9a86281794a393513cf117177fd39c796b3f8e3759bb2764259a2abba5cce54b"},
{file = "coverage-7.10.5-cp313-cp313-win_arm64.whl", hash = "sha256:cebd8e906eb98bb09c10d1feed16096700b1198d482267f8bf0474e63a7b8d84"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0520dff502da5e09d0d20781df74d8189ab334a1e40d5bafe2efaa4158e2d9e7"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d9cd64aca68f503ed3f1f18c7c9174cbb797baba02ca8ab5112f9d1c0328cd4b"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0913dd1613a33b13c4f84aa6e3f4198c1a21ee28ccb4f674985c1f22109f0aae"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:1b7181c0feeb06ed8a02da02792f42f829a7b29990fef52eff257fef0885d760"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36d42b7396b605f774d4372dd9c49bed71cbabce4ae1ccd074d155709dd8f235"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b4fdc777e05c4940b297bf47bf7eedd56a39a61dc23ba798e4b830d585486ca5"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:42144e8e346de44a6f1dbd0a56575dd8ab8dfa7e9007da02ea5b1c30ab33a7db"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:66c644cbd7aed8fe266d5917e2c9f65458a51cfe5eeff9c05f15b335f697066e"},
{file = "coverage-7.10.5-cp313-cp313t-win32.whl", hash = "sha256:2d1b73023854068c44b0c554578a4e1ef1b050ed07cf8b431549e624a29a66ee"},
{file = "coverage-7.10.5-cp313-cp313t-win_amd64.whl", hash = "sha256:54a1532c8a642d8cc0bd5a9a51f5a9dcc440294fd06e9dda55e743c5ec1a8f14"},
{file = "coverage-7.10.5-cp313-cp313t-win_arm64.whl", hash = "sha256:74d5b63fe3f5f5d372253a4ef92492c11a4305f3550631beaa432fc9df16fcff"},
{file = "coverage-7.10.5-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:68c5e0bc5f44f68053369fa0d94459c84548a77660a5f2561c5e5f1e3bed7031"},
{file = "coverage-7.10.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cf33134ffae93865e32e1e37df043bef15a5e857d8caebc0099d225c579b0fa3"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ad8fa9d5193bafcf668231294241302b5e683a0518bf1e33a9a0dfb142ec3031"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:146fa1531973d38ab4b689bc764592fe6c2f913e7e80a39e7eeafd11f0ef6db2"},
{file = "coverage-7.10.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6013a37b8a4854c478d3219ee8bc2392dea51602dd0803a12d6f6182a0061762"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:eb90fe20db9c3d930fa2ad7a308207ab5b86bf6a76f54ab6a40be4012d88fcae"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:384b34482272e960c438703cafe63316dfbea124ac62006a455c8410bf2a2262"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:467dc74bd0a1a7de2bedf8deaf6811f43602cb532bd34d81ffd6038d6d8abe99"},
{file = "coverage-7.10.5-cp314-cp314-win32.whl", hash = "sha256:556d23d4e6393ca898b2e63a5bca91e9ac2d5fb13299ec286cd69a09a7187fde"},
{file = "coverage-7.10.5-cp314-cp314-win_amd64.whl", hash = "sha256:f4446a9547681533c8fa3e3c6cf62121eeee616e6a92bd9201c6edd91beffe13"},
{file = "coverage-7.10.5-cp314-cp314-win_arm64.whl", hash = "sha256:5e78bd9cf65da4c303bf663de0d73bf69f81e878bf72a94e9af67137c69b9fe9"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5661bf987d91ec756a47c7e5df4fbcb949f39e32f9334ccd3f43233bbb65e508"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a46473129244db42a720439a26984f8c6f834762fc4573616c1f37f13994b357"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1f64b8d3415d60f24b058b58d859e9512624bdfa57a2d1f8aff93c1ec45c429b"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:44d43de99a9d90b20e0163f9770542357f58860a26e24dc1d924643bd6aa7cb4"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a931a87e5ddb6b6404e65443b742cb1c14959622777f2a4efd81fba84f5d91ba"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f9559b906a100029274448f4c8b8b0a127daa4dade5661dfd821b8c188058842"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b08801e25e3b4526ef9ced1aa29344131a8f5213c60c03c18fe4c6170ffa2874"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ed9749bb8eda35f8b636fb7632f1c62f735a236a5d4edadd8bbcc5ea0542e732"},
{file = "coverage-7.10.5-cp314-cp314t-win32.whl", hash = "sha256:609b60d123fc2cc63ccee6d17e4676699075db72d14ac3c107cc4976d516f2df"},
{file = "coverage-7.10.5-cp314-cp314t-win_amd64.whl", hash = "sha256:0666cf3d2c1626b5a3463fd5b05f5e21f99e6aec40a3192eee4d07a15970b07f"},
{file = "coverage-7.10.5-cp314-cp314t-win_arm64.whl", hash = "sha256:bc85eb2d35e760120540afddd3044a5bf69118a91a296a8b3940dfc4fdcfe1e2"},
{file = "coverage-7.10.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:62835c1b00c4a4ace24c1a88561a5a59b612fbb83a525d1c70ff5720c97c0610"},
{file = "coverage-7.10.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5255b3bbcc1d32a4069d6403820ac8e6dbcc1d68cb28a60a1ebf17e47028e898"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3876385722e335d6e991c430302c24251ef9c2a9701b2b390f5473199b1b8ebf"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8048ce4b149c93447a55d279078c8ae98b08a6951a3c4d2d7e87f4efc7bfe100"},
{file = "coverage-7.10.5-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4028e7558e268dd8bcf4d9484aad393cafa654c24b4885f6f9474bf53183a82a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:03f47dc870eec0367fcdd603ca6a01517d2504e83dc18dbfafae37faec66129a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:2d488d7d42b6ded7ea0704884f89dcabd2619505457de8fc9a6011c62106f6e5"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b3dcf2ead47fa8be14224ee817dfc1df98043af568fe120a22f81c0eb3c34ad2"},
{file = "coverage-7.10.5-cp39-cp39-win32.whl", hash = "sha256:02650a11324b80057b8c9c29487020073d5e98a498f1857f37e3f9b6ea1b2426"},
{file = "coverage-7.10.5-cp39-cp39-win_amd64.whl", hash = "sha256:b45264dd450a10f9e03237b41a9a24e85cbb1e278e5a32adb1a303f58f0017f3"},
{file = "coverage-7.10.5-py3-none-any.whl", hash = "sha256:0be24d35e4db1d23d0db5c0f6a74a962e2ec83c426b5cac09f4234aadef38e4a"},
{file = "coverage-7.10.5.tar.gz", hash = "sha256:f2e57716a78bc3ae80b2207be0709a3b2b63b9f2dcf9740ee6ac03588a2015b6"},
]
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
[[package]]
name = "cryptography"
version = "45.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.7"
groups = ["main"]
files = [
{file = "cryptography-45.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:048e7ad9e08cf4c0ab07ff7f36cc3115924e22e2266e034450a890d9e312dd74"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:44647c5d796f5fc042bbc6d61307d04bf29bccb74d188f18051b635f20a9c75f"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e40b80ecf35ec265c452eea0ba94c9587ca763e739b8e559c128d23bff7ebbbf"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:00e8724bdad672d75e6f069b27970883179bd472cd24a63f6e620ca7e41cc0c5"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7a3085d1b319d35296176af31c90338eeb2ddac8104661df79f80e1d9787b8b2"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1b7fa6a1c1188c7ee32e47590d16a5a0646270921f8020efc9a511648e1b2e08"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:275ba5cc0d9e320cd70f8e7b96d9e59903c815ca579ab96c1e37278d231fc402"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4028f29a9f38a2025abedb2e409973709c660d44319c61762202206ed577c42"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ee411a1b977f40bd075392c80c10b58025ee5c6b47a822a33c1198598a7a5f05"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e2a21a8eda2d86bb604934b6b37691585bd095c1f788530c1fcefc53a82b3453"},
{file = "cryptography-45.0.6-cp311-abi3-win32.whl", hash = "sha256:d063341378d7ee9c91f9d23b431a3502fc8bfacd54ef0a27baa72a0843b29159"},
{file = "cryptography-45.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:833dc32dfc1e39b7376a87b9a6a4288a10aae234631268486558920029b086ec"},
{file = "cryptography-45.0.6-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:3436128a60a5e5490603ab2adbabc8763613f638513ffa7d311c900a8349a2a0"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0d9ef57b6768d9fa58e92f4947cea96ade1233c0e236db22ba44748ffedca394"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea3c42f2016a5bbf71825537c2ad753f2870191134933196bee408aac397b3d9"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:20ae4906a13716139d6d762ceb3e0e7e110f7955f3bc3876e3a07f5daadec5f3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2dac5ec199038b8e131365e2324c03d20e97fe214af051d20c49db129844e8b3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:18f878a34b90d688982e43f4b700408b478102dd58b3e39de21b5ebf6509c301"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5bd6020c80c5b2b2242d6c48487d7b85700f5e0038e67b29d706f98440d66eb5"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:eccddbd986e43014263eda489abbddfbc287af5cddfd690477993dbb31e31016"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:550ae02148206beb722cfe4ef0933f9352bab26b087af00e48fdfb9ade35c5b3"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5b64e668fc3528e77efa51ca70fadcd6610e8ab231e3e06ae2bab3b31c2b8ed9"},
{file = "cryptography-45.0.6-cp37-abi3-win32.whl", hash = "sha256:780c40fb751c7d2b0c6786ceee6b6f871e86e8718a8ff4bc35073ac353c7cd02"},
{file = "cryptography-45.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:20d15aed3ee522faac1a39fbfdfee25d17b1284bafd808e1640a74846d7c4d1b"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:705bb7c7ecc3d79a50f236adda12ca331c8e7ecfbea51edd931ce5a7a7c4f012"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:826b46dae41a1155a0c0e66fafba43d0ede1dc16570b95e40c4d83bfcf0a451d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:cc4d66f5dc4dc37b89cfef1bd5044387f7a1f6f0abb490815628501909332d5d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:f68f833a9d445cc49f01097d95c83a850795921b3f7cc6488731e69bde3288da"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3b5bf5267e98661b9b888a9250d05b063220dfa917a8203744454573c7eb79db"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:2384f2ab18d9be88a6e4f8972923405e2dbb8d3e16c6b43f15ca491d7831bd18"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:fc022c1fa5acff6def2fc6d7819bbbd31ccddfe67d075331a65d9cfb28a20983"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3de77e4df42ac8d4e4d6cdb342d989803ad37707cf8f3fbf7b088c9cbdd46427"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:599c8d7df950aa68baa7e98f7b73f4f414c9f02d0e8104a30c0182a07732638b"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:31a2b9a10530a1cb04ffd6aa1cd4d3be9ed49f7d77a4dafe198f3b382f41545c"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:e5b3dda1b00fb41da3af4c5ef3f922a200e33ee5ba0f0bc9ecf0b0c173958385"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:629127cfdcdc6806dfe234734d7cb8ac54edaf572148274fa377a7d3405b0043"},
{file = "cryptography-45.0.6.tar.gz", hash = "sha256:5c966c732cf6e4a276ce83b6e4c729edda2df6929083a952cc7da973c539c719"},
]
[package.dependencies]
cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==45.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "deprecation"
version = "2.1.0"
@@ -235,7 +480,7 @@ version = "1.3.0"
description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
groups = ["main"]
groups = ["main", "dev"]
markers = "python_version < \"3.11\""
files = [
{file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"},
@@ -710,7 +955,7 @@ version = "2.1.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"},
{file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"},
@@ -757,6 +1002,18 @@ dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "nodeenv"
version = "1.9.1"
description = "Node.js virtual environment builder"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["dev"]
files = [
{file = "nodeenv-1.9.1-py2.py3-none-any.whl", hash = "sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9"},
{file = "nodeenv-1.9.1.tar.gz", hash = "sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f"},
]
[[package]]
name = "opentelemetry-api"
version = "1.35.0"
@@ -779,7 +1036,7 @@ version = "25.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
{file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
@@ -791,7 +1048,7 @@ version = "1.6.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"},
{file = "pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3"},
@@ -883,6 +1140,19 @@ files = [
[package.dependencies]
pyasn1 = ">=0.6.1,<0.7.0"
[[package]]
name = "pycparser"
version = "2.22"
description = "C parser in Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
{file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
]
[[package]]
name = "pydantic"
version = "2.11.7"
@@ -1047,7 +1317,7 @@ version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
@@ -1068,6 +1338,9 @@ files = [
{file = "pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953"},
]
[package.dependencies]
cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}
[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
@@ -1086,13 +1359,34 @@ files = [
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pyright"
version = "1.1.404"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
]
[package.dependencies]
nodeenv = ">=1.6.0"
typing-extensions = ">=4.1"
[package.extras]
all = ["nodejs-wheel-binaries", "twine (>=3.4.1)"]
dev = ["twine (>=3.4.1)"]
nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.4.1"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7"},
{file = "pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"},
@@ -1116,7 +1410,7 @@ version = "1.1.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest_asyncio-1.1.0-py3-none-any.whl", hash = "sha256:5fe2d69607b0bd75c656d1211f969cadba035030156745ee09e7d71740e58ecf"},
{file = "pytest_asyncio-1.1.0.tar.gz", hash = "sha256:796aa822981e01b68c12e4827b8697108f7205020f24b5793b3c41555dab68ea"},
@@ -1130,13 +1424,33 @@ pytest = ">=8.2,<9"
docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"]
testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
[[package]]
name = "pytest-cov"
version = "6.2.1"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pytest_cov-6.2.1-py3-none-any.whl", hash = "sha256:f5bc4c23f42f1cdd23c70b1dab1bbaef4fc505ba950d53e0081d0730dd7e86d5"},
{file = "pytest_cov-6.2.1.tar.gz", hash = "sha256:25cc6cc0a5358204b8108ecedc51a9b57b34cc6b8c967cc2c01a4e00d8a67da2"},
]
[package.dependencies]
coverage = {version = ">=7.5", extras = ["toml"]}
pluggy = ">=1.2"
pytest = ">=6.2.5"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
[[package]]
name = "pytest-mock"
version = "3.14.1"
description = "Thin-wrapper around the mock package for easier use with pytest"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest_mock-3.14.1-py3-none-any.whl", hash = "sha256:178aefcd11307d874b4cd3100344e7e2d888d9791a6a1d9bfe90fbc1b74fd1d0"},
{file = "pytest_mock-3.14.1.tar.gz", hash = "sha256:159e9edac4c451ce77a5cdb9fc5d1100708d2dd4ba3c3df572f14097351af80e"},
@@ -1253,30 +1567,31 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.12.3"
version = "0.12.11"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2"},
{file = "ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041"},
{file = "ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f"},
{file = "ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d"},
{file = "ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7"},
{file = "ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1"},
{file = "ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77"},
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
]
[[package]]
@@ -1410,7 +1725,7 @@ version = "2.2.1"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
markers = "python_version < \"3.11\""
files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
@@ -1453,7 +1768,7 @@ version = "4.14.1"
description = "Backported and Experimental Type Hints for Python 3.9+"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
@@ -1614,4 +1929,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "f67db13e6f68b1d67a55eee908c1c560bfa44da8509f98f842889a7570a9830f"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"

View File

@@ -9,21 +9,25 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
launchdarkly-server-sdk = "^9.12.0"
pydantic = "^2.11.7"
pydantic-settings = "^2.10.1"
pyjwt = "^2.10.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pyjwt = { version = "^2.10.1", extras = ["crypto"] }
redis = "^6.2.0"
supabase = "^2.16.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.12.3"
pyright = "^1.1.404"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
[build-system]
requires = ["poetry-core"]

View File

@@ -16,13 +16,12 @@ DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
ENABLE_AUTH=true
## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
# REDIS_PASSWORD=
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
@@ -31,7 +30,7 @@ RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
JWT_VERIFY_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
@@ -67,6 +66,11 @@ NVIDIA_API_KEY=
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# Notion OAuth App server credentials - https://developers.notion.com/docs/authorization
# Configure a public integration
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials; enable the Gmail API and set scopes
# https://console.cloud.google.com/apis/credentials/consent?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):
@@ -106,6 +110,15 @@ TODOIST_CLIENT_SECRET=
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
# Discord OAuth App credentials
# 1. Go to https://discord.com/developers/applications
# 2. Create a new application
# 3. Go to OAuth2 section and add redirect URI: http://localhost:3000/auth/integrations/oauth_callback
# 4. Copy Client ID and Client Secret below
DISCORD_CLIENT_ID=
DISCORD_CLIENT_SECRET=
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
@@ -166,4 +179,4 @@ SMARTLEAD_API_KEY=
ZEROBOUNCE_API_KEY=
# Other Services
AUTOMOD_API_KEY=

View File

@@ -9,4 +9,12 @@ secrets/*
!secrets/.gitkeep
*.ignore.*
*.ign.*
# Load test results and reports
load-tests/*_RESULTS.md
load-tests/*_REPORT.md
load-tests/results/
load-tests/*.json
load-tests/*.log
load-tests/node_modules/*

View File

@@ -1,31 +1,43 @@
FROM python:3.11.10-slim-bookworm AS builder
FROM debian:13-slim AS builder
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
# Update package list and install build dependencies in a single layer
# Install Node.js repository key and setup
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y curl ca-certificates gnupg \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
# Update package list and install Python, Node.js, and build dependencies
RUN apt-get update \
&& apt-get install -y \
python3.13 \
python3.13-dev \
python3.13-venv \
python3-pip \
build-essential \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client
postgresql-client \
nodejs \
&& rm -rf /var/lib/apt/lists/*
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_CREATE=false
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV PATH=/opt/poetry/bin:$PATH
# Upgrade pip and setuptools to fix security vulnerabilities
RUN pip3 install --upgrade pip setuptools
RUN pip3 install poetry
RUN pip3 install poetry --break-system-packages
# Copy and install dependencies
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
@@ -37,27 +49,35 @@ RUN poetry install --no-ansi --no-root
COPY autogpt_platform/backend/schema.prisma ./
RUN poetry run prisma generate
FROM python:3.11.10-slim-bookworm AS server_dependencies
FROM debian:13-slim AS server_dependencies
WORKDIR /app
ENV POETRY_HOME=/opt/poetry \
POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_CREATE=false
POETRY_VIRTUALENVS_CREATE=true \
POETRY_VIRTUALENVS_IN_PROJECT=true \
DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/poetry/bin:$PATH
# Upgrade pip and setuptools to fix security vulnerabilities
RUN pip3 install --upgrade pip setuptools
# Install Python without upgrading system-managed packages
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3.11 /usr/local/lib/python3.11
COPY --from=builder /usr/local/bin /usr/local/bin
# Copy Prisma binaries
COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
# Copy Node.js installation for Prisma
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm
COPY --from=builder /usr/bin/npx /usr/bin/npx
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
ENV PATH="/app/.venv/bin:$PATH"
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
RUN mkdir -p /app/autogpt_platform/autogpt_libs
RUN mkdir -p /app/autogpt_platform/backend

View File

@@ -132,17 +132,58 @@ def test_endpoint_success(snapshot: Snapshot):
### Testing with Authentication
For the main API routes that use JWT authentication, auth is provided by the `autogpt_libs.auth` module. If a test actually uses the `user_id`, the recommended approach is to mock the `get_jwt_payload` function, which underpins all of the higher-level auth dependencies used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).
If a test doesn't need the `user_id` specifically, no mocking is necessary: auth is disabled during tests anyway (see `conftest.py`).
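A minimal inline version of this override, for tests that don't use the global fixtures described below (the endpoint path is illustrative; `app` and `client` are set up as in the next example):
```python
from autogpt_libs.auth.jwt_utils import get_jwt_payload

def test_endpoint_with_user_id():
    # All higher-level auth dependencies resolve through get_jwt_payload,
    # so a single override authenticates the request end-to-end
    app.dependency_overrides[get_jwt_payload] = lambda: {"sub": "test-user-id"}
    try:
        response = client.get("/api/my-resource")  # illustrative endpoint
        assert response.status_code == 200
    finally:
        app.dependency_overrides.clear()
```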
#### Using Global Auth Fixtures
Two global auth fixtures are provided by `backend/server/conftest.py`:
- `mock_jwt_user` - Regular user with `test_user_id` ("test-user-id")
- `mock_jwt_admin` - Admin user with `admin_user_id` ("admin-user-id")
These provide the easiest way to set up authentication mocking in test modules:
```python
def override_auth_middleware():
    return {"sub": "test-user-id"}

import fastapi
import fastapi.testclient
import pytest

from backend.server.v2.myroute import router

def override_get_user_id():
    return "test-user-id"

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)

app.dependency_overrides[auth_middleware] = override_auth_middleware
app.dependency_overrides[get_user_id] = override_get_user_id

@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
    """Setup auth overrides for all tests in this module"""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_user['get_jwt_payload']
    yield
    app.dependency_overrides.clear()
```
For admin-only endpoints, use `mock_jwt_admin` instead:
```python
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_admin):
    """Setup auth overrides for admin tests"""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_admin['get_jwt_payload']
    yield
    app.dependency_overrides.clear()
```
The IDs are also available separately as fixtures:
- `test_user_id`
- `admin_user_id`
- `target_user_id` (for admin <-> user operations)
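With the `setup_app_auth` fixture above in place, these ID fixtures let tests assert on ownership without hard-coding IDs. A minimal sketch (the endpoint path and response shape are illustrative):
```python
def test_get_own_profile(test_user_id):
    response = client.get("/api/profile")  # illustrative endpoint
    assert response.status_code == 200
    # The authenticated user's ID comes from the global test_user_id fixture
    assert response.json()["user_id"] == test_user_id
```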
### Mocking External Services
```python
@@ -153,10 +194,10 @@ def test_external_api_call(mocker, snapshot):
"backend.services.external_api.call",
return_value=mock_response
)
response = client.post("/api/process")
assert response.status_code == 200
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response.json(), indent=2, sort_keys=True),
@@ -187,6 +228,17 @@ def test_external_api_call(mocker, snapshot):
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly
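A minimal sketch of testing an async function directly, without going through the HTTP layer (the helper under test is illustrative):
```python
import asyncio

import pytest

async def fetch_data(item_id: str) -> dict:
    # Stand-in for an async service function under test (illustrative)
    await asyncio.sleep(0)
    return {"id": item_id}

@pytest.mark.asyncio
async def test_fetch_data_directly():
    result = await fetch_data("some-id")
    assert result == {"id": "some-id"}
```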
### 5. Fixtures
#### Global Fixtures (conftest.py)
Authentication fixtures are available globally from `conftest.py`:
- `mock_jwt_user` - Standard user authentication
- `mock_jwt_admin` - Admin user authentication
- `configured_snapshot` - Pre-configured snapshot fixture
#### Custom Fixtures
Create reusable fixtures for common test data:
```python
@@ -202,9 +254,18 @@ def test_create_user(sample_user, snapshot):
# ... test implementation
```
#### Test Isolation
All tests must use fixtures that ensure proper isolation:
- Authentication overrides are automatically cleaned up after each test
- Database connections are properly managed with cleanup
- Mock objects are reset between tests
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
@@ -216,16 +277,19 @@ Snapshot tests work in CI by:
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: Fix the code causing the difference
### Async Test Issues
- Ensure async functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions
- FastAPI TestClient handles async automatically
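For the `AsyncMock` point, a self-contained sketch (the mocked call is illustrative):
```python
from unittest.mock import AsyncMock

import pytest

@pytest.mark.asyncio
async def test_with_async_mock():
    # AsyncMock returns a coroutine when called, so the code under test can await it
    fake_call = AsyncMock(return_value={"status": "ok"})
    result = await fake_call("payload")
    assert result == {"status": "ok"}
    fake_call.assert_awaited_once_with("payload")
```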
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
@@ -234,4 +298,4 @@ Snapshot tests work in CI by:
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!

View File

@@ -1,4 +1,3 @@
import functools
import importlib
import logging
import os
@@ -6,6 +5,8 @@ import re
from pathlib import Path
from typing import TYPE_CHECKING, TypeVar
from autogpt_libs.utils.cache import cached
logger = logging.getLogger(__name__)
@@ -15,7 +16,7 @@ if TYPE_CHECKING:
T = TypeVar("T")
@functools.cache
@cached()
def load_all_blocks() -> dict[str, type["Block"]]:
    from backend.data.block import Block
    from backend.util.settings import Config

View File

@@ -1,8 +1,6 @@
import logging
from typing import Any, Optional
from pydantic import JsonValue
from backend.data.block import (
    Block,
    BlockCategory,
@@ -12,7 +10,7 @@ from backend.data.block import (
    BlockType,
    get_block,
)
from backend.data.execution import ExecutionStatus
from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -25,12 +23,15 @@ class AgentExecutorBlock(Block):
user_id: str = SchemaField(description="User ID")
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
agent_name: Optional[str] = SchemaField(
default=None, description="Name to display in the Builder UI"
)
inputs: BlockInput = SchemaField(description="Input data for the graph")
input_schema: dict = SchemaField(description="Input schema for the graph")
output_schema: dict = SchemaField(description="Output schema for the graph")
-nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = SchemaField(
+nodes_input_masks: Optional[NodesInputMasks] = SchemaField(
default=None, hidden=True
)

View File

@@ -0,0 +1,154 @@
from enum import Enum
from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType
class GeminiImageModel(str, Enum):
NANO_BANANA = "google/nano-banana"
class OutputFormat(str, Enum):
JPG = "jpg"
PNG = "png"
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
class AIImageCustomizerBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
description="Replicate API key with permissions for Google Gemini image models",
)
prompt: str = SchemaField(
description="A text description of the image you want to generate",
title="Prompt",
)
model: GeminiImageModel = SchemaField(
description="The AI model to use for image generation and editing",
default=GeminiImageModel.NANO_BANANA,
title="Model",
)
images: list[MediaFileType] = SchemaField(
description="Optional list of input images to reference or modify",
default=[],
title="Input Images",
)
output_format: OutputFormat = SchemaField(
description="Format of the output image",
default=OutputFormat.PNG,
title="Output Format",
)
class Output(BlockSchema):
image_url: MediaFileType = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
id="d76bbe4c-930e-4894-8469-b66775511f71",
description=(
"Generate and edit custom images using Google's Nano-Banana model from Gemini 2.5. "
"Provide a prompt and optional reference images to create or modify images."
),
categories={BlockCategory.AI, BlockCategory.MULTIMEDIA},
input_schema=AIImageCustomizerBlock.Input,
output_schema=AIImageCustomizerBlock.Output,
test_input={
"prompt": "Make the scene more vibrant and colorful",
"model": GeminiImageModel.NANO_BANANA,
"images": [],
"output_format": OutputFormat.JPG,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("image_url", "https://replicate.delivery/generated-image.jpg"),
],
test_mock={
"run_model": lambda *args, **kwargs: MediaFileType(
"https://replicate.delivery/generated-image.jpg"
),
},
test_credentials=TEST_CREDENTIALS,
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
try:
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.value,
prompt=input_data.prompt,
images=input_data.images,
output_format=input_data.output_format.value,
)
yield "image_url", result
except Exception as e:
yield "error", str(e)
async def run_model(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
images: list[MediaFileType],
output_format: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params: dict = {
"prompt": prompt,
"output_format": output_format,
}
# Add images to input if provided (API expects "image_input" parameter)
if images:
input_params["image_input"] = [str(img) for img in images]
output: FileOutput | str = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
if isinstance(output, FileOutput):
return MediaFileType(output.url)
if isinstance(output, str):
return MediaFileType(output)
raise ValueError("No output received from the model")
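For reference, `run_model` can also be exercised directly outside the block harness; a minimal sketch with a placeholder token:
```python
import asyncio

from pydantic import SecretStr


async def demo():
    block = AIImageCustomizerBlock()
    url = await block.run_model(
        api_key=SecretStr("r8_placeholder"),  # placeholder Replicate token
        model_name=GeminiImageModel.NANO_BANANA.value,
        prompt="Make the scene more vibrant and colorful",
        images=[],
        output_format=OutputFormat.PNG.value,
    )
    print(url)


asyncio.run(demo())
```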

View File

@@ -166,7 +166,7 @@ class AIMusicGeneratorBlock(Block):
output_format=input_data.output_format,
normalization_strategy=input_data.normalization_strategy,
)
-if result and result != "No output received":
+if result and isinstance(result, str) and result.startswith("http"):
yield "result", result
return
else:

View File

@@ -661,6 +661,167 @@ async def update_field(
#################################################################
async def get_table_schema(
credentials: Credentials,
base_id: str,
table_id_or_name: str,
) -> dict:
"""
Get the schema for a specific table, including all field definitions.
Args:
credentials: Airtable API credentials
base_id: The base ID
table_id_or_name: The table ID or name
Returns:
Dict containing table schema with fields information
"""
# First get all tables to find the right one
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
tables = data.get("tables", [])
# Find the matching table
for table in tables:
if table.get("id") == table_id_or_name or table.get("name") == table_id_or_name:
return table
raise ValueError(f"Table '{table_id_or_name}' not found in base '{base_id}'")
def get_empty_value_for_field(field_type: str) -> Any:
"""
Return the appropriate empty value for a given Airtable field type.
Args:
field_type: The Airtable field type
Returns:
The appropriate empty value for that field type
"""
# Fields that should be false when empty
if field_type == "checkbox":
return False
# Fields that should be empty arrays
if field_type in [
"multipleSelects",
"multipleRecordLinks",
"multipleAttachments",
"multipleLookupValues",
"multipleCollaborators",
]:
return []
# Fields that should be 0 when empty (numeric types)
if field_type in [
"number",
"percent",
"currency",
"rating",
"duration",
"count",
"autoNumber",
]:
return 0
# Fields that should be empty strings
if field_type in [
"singleLineText",
"multilineText",
"email",
"url",
"phoneNumber",
"richText",
"barcode",
]:
return ""
# Everything else gets null (dates, single selects, formulas, etc.)
return None
async def normalize_records(
records: list[dict],
table_schema: dict,
include_field_metadata: bool = False,
) -> dict:
"""
Normalize Airtable records to include all fields with proper empty values.
Args:
records: List of record objects from Airtable API
table_schema: Table schema containing field definitions
include_field_metadata: Whether to include field metadata in response
Returns:
Dict with normalized records and optionally field metadata
"""
fields = table_schema.get("fields", [])
# Normalize each record
normalized_records = []
for record in records:
normalized = {
"id": record.get("id"),
"createdTime": record.get("createdTime"),
"fields": {},
}
# Add existing fields
existing_fields = record.get("fields", {})
# Add all fields from schema, using empty values for missing ones
for field in fields:
field_name = field["name"]
field_type = field["type"]
if field_name in existing_fields:
# Field exists, use its value
normalized["fields"][field_name] = existing_fields[field_name]
else:
# Field is missing, add appropriate empty value
normalized["fields"][field_name] = get_empty_value_for_field(field_type)
normalized_records.append(normalized)
# Build result dictionary
if include_field_metadata:
field_metadata = {}
for field in fields:
metadata = {"type": field["type"], "id": field["id"]}
# Add type-specific metadata
options = field.get("options", {})
if field["type"] == "currency" and "symbol" in options:
metadata["symbol"] = options["symbol"]
metadata["precision"] = options.get("precision", 2)
elif field["type"] == "duration" and "durationFormat" in options:
metadata["format"] = options["durationFormat"]
elif field["type"] == "percent" and "precision" in options:
metadata["precision"] = options["precision"]
elif (
field["type"] in ["singleSelect", "multipleSelects"]
and "choices" in options
):
metadata["choices"] = [choice["name"] for choice in options["choices"]]
elif field["type"] == "rating" and "max" in options:
metadata["max"] = options["max"]
metadata["icon"] = options.get("icon", "star")
metadata["color"] = options.get("color", "yellowBright")
field_metadata[field["name"]] = metadata
return {"records": normalized_records, "field_metadata": field_metadata}
else:
return {"records": normalized_records}
async def list_records(
credentials: Credentials,
base_id: str,
@@ -1249,3 +1410,26 @@ async def list_bases(
)
return response.json()
async def get_base_tables(
credentials: Credentials,
base_id: str,
) -> list[dict]:
"""
Get all tables for a specific base.
Args:
credentials: Airtable API credentials
base_id: The ID of the base
Returns:
list[dict]: List of table objects with their schemas
"""
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
return data.get("tables", [])
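A minimal sketch of how these helpers compose. Identifiers are placeholders, and `list_records` is assumed to take the table as its third argument, as in the blocks that call it:
```python
async def fetch_normalized(credentials, base_id="appXXXXXXXXXXXX"):
    # Placeholders throughout; error handling omitted for brevity.
    schema = await get_table_schema(credentials, base_id, "Tasks")
    data = await list_records(credentials, base_id, "Tasks")
    result = await normalize_records(data.get("records", []), schema)
    return result["records"]  # every schema field present, with typed empty values
```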

View File

@@ -14,13 +14,13 @@ from backend.sdk import (
SchemaField,
)
-from ._api import create_base, list_bases
+from ._api import create_base, get_base_tables, list_bases
from ._config import airtable
class AirtableCreateBaseBlock(Block):
"""
-Creates a new base in an Airtable workspace.
+Creates a new base in an Airtable workspace, or returns existing base if one with the same name exists.
"""
class Input(BlockSchema):
@@ -31,6 +31,10 @@ class AirtableCreateBaseBlock(Block):
description="The workspace ID where the base will be created"
)
name: str = SchemaField(description="The name of the new base")
find_existing: bool = SchemaField(
description="If true, return existing base with same name instead of creating duplicate",
default=True,
)
tables: list[dict] = SchemaField(
description="At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties",
default=[
@@ -50,14 +54,18 @@ class AirtableCreateBaseBlock(Block):
)
class Output(BlockSchema):
base_id: str = SchemaField(description="The ID of the created base")
base_id: str = SchemaField(description="The ID of the created or found base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
was_created: bool = SchemaField(
description="True if a new base was created, False if existing was found",
default=True,
)
def __init__(self):
super().__init__(
id="f59b88a8-54ce-4676-a508-fd614b4e8dce",
description="Create a new base in Airtable",
description="Create or find a base in Airtable",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
@@ -66,6 +74,31 @@ class AirtableCreateBaseBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# If find_existing is true, check if a base with this name already exists
if input_data.find_existing:
# List all bases to check for existing one with same name
# Note: Airtable API doesn't have a direct search, so we need to list and filter
existing_bases = await list_bases(credentials)
for base in existing_bases.get("bases", []):
if base.get("name") == input_data.name:
# Base already exists, return it
base_id = base.get("id")
yield "base_id", base_id
yield "was_created", False
# Get the tables for this base
try:
tables = await get_base_tables(credentials, base_id)
yield "tables", tables
for table in tables:
yield "table", table
except Exception:
# If we can't get tables, return empty list
yield "tables", []
return
# No existing base found or find_existing is false, create new one
data = await create_base(
credentials,
input_data.workspace_id,
@@ -74,6 +107,7 @@ class AirtableCreateBaseBlock(Block):
)
yield "base_id", data.get("id", None)
yield "was_created", True
yield "tables", data.get("tables", [])
for table in data.get("tables", []):
yield "table", table

View File

@@ -2,7 +2,7 @@
Airtable record operation blocks.
"""
-from typing import Optional
+from typing import Optional, cast
from backend.sdk import (
APIKeyCredentials,
@@ -18,7 +18,9 @@ from ._api import (
create_record,
delete_multiple_records,
get_record,
get_table_schema,
list_records,
normalize_records,
update_multiple_records,
)
from ._config import airtable
@@ -54,12 +56,24 @@ class AirtableListRecordsBlock(Block):
return_fields: list[str] = SchemaField(
description="Specific fields to return (comma-separated)", default=[]
)
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
)
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -73,6 +87,7 @@ class AirtableListRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
data = await list_records(
credentials,
input_data.base_id,
@@ -88,8 +103,33 @@ class AirtableListRecordsBlock(Block):
fields=input_data.return_fields if input_data.return_fields else None,
)
yield "records", data.get("records", [])
yield "offset", data.get("offset", None)
records = data.get("records", [])
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
records,
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
yield "records", normalized_data["records"]
yield "offset", data.get("offset", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "records", records
yield "offset", data.get("offset", None)
class AirtableGetRecordBlock(Block):
@@ -104,11 +144,23 @@ class AirtableGetRecordBlock(Block):
base_id: str = SchemaField(description="The Airtable base ID")
table_id_or_name: str = SchemaField(description="Table ID or name")
record_id: str = SchemaField(description="The record ID to retrieve")
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -122,6 +174,7 @@ class AirtableGetRecordBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
record = await get_record(
credentials,
input_data.base_id,
@@ -129,9 +182,34 @@ class AirtableGetRecordBlock(Block):
input_data.record_id,
)
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the single record (wrap in list and unwrap result)
normalized_data = await normalize_records(
[record],
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
normalized_record = normalized_data["records"][0]
yield "id", normalized_record.get("id", None)
yield "fields", normalized_record.get("fields", None)
yield "created_time", normalized_record.get("createdTime", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
class AirtableCreateRecordsBlock(Block):
@@ -148,6 +226,10 @@ class AirtableCreateRecordsBlock(Block):
records: list[dict] = SchemaField(
description="Array of records to create (each with 'fields' object)"
)
skip_normalization: bool = SchemaField(
description="Skip output normalization to get raw Airtable response (faster but may have missing fields)",
default=False,
)
typecast: bool = SchemaField(
description="Automatically convert string values to appropriate types",
default=False,
@@ -173,7 +255,7 @@ class AirtableCreateRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# The create_record API expects records in a specific format
data = await create_record(
credentials,
input_data.base_id,
@@ -182,8 +264,22 @@ class AirtableCreateRecordsBlock(Block):
typecast=input_data.typecast if input_data.typecast else None,
return_fields_by_field_id=input_data.return_fields_by_field_id,
)
result_records = cast(list[dict], data.get("records", []))
yield "records", data.get("records", [])
# Normalize output unless explicitly disabled
if not input_data.skip_normalization and result_records:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
result_records, table_schema, include_field_metadata=False
)
result_records = normalized_data["records"]
yield "records", result_records
details = data.get("details", None)
if details:
yield "details", details

View File

@@ -0,0 +1,205 @@
"""
Meeting BaaS API client module.
All API calls centralized for consistency and maintainability.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import Requests
class MeetingBaasAPI:
"""Client for Meeting BaaS API endpoints."""
BASE_URL = "https://api.meetingbaas.com"
def __init__(self, api_key: str):
"""Initialize API client with authentication key."""
self.api_key = api_key
self.headers = {"x-meeting-baas-api-key": api_key}
self.requests = Requests()
# Bot Management Endpoints
async def join_meeting(
self,
bot_name: str,
meeting_url: str,
reserved: bool = False,
bot_image: Optional[str] = None,
entry_message: Optional[str] = None,
start_time: Optional[int] = None,
speech_to_text: Optional[Dict[str, Any]] = None,
webhook_url: Optional[str] = None,
automatic_leave: Optional[Dict[str, Any]] = None,
extra: Optional[Dict[str, Any]] = None,
recording_mode: str = "speaker_view",
streaming: Optional[Dict[str, Any]] = None,
deduplication_key: Optional[str] = None,
zoom_sdk_id: Optional[str] = None,
zoom_sdk_pwd: Optional[str] = None,
) -> Dict[str, Any]:
"""
Deploy a bot to join and record a meeting.
POST /bots
"""
body = {
"bot_name": bot_name,
"meeting_url": meeting_url,
"reserved": reserved,
"recording_mode": recording_mode,
}
# Add optional fields if provided
if bot_image is not None:
body["bot_image"] = bot_image
if entry_message is not None:
body["entry_message"] = entry_message
if start_time is not None:
body["start_time"] = start_time
if speech_to_text is not None:
body["speech_to_text"] = speech_to_text
if webhook_url is not None:
body["webhook_url"] = webhook_url
if automatic_leave is not None:
body["automatic_leave"] = automatic_leave
if extra is not None:
body["extra"] = extra
if streaming is not None:
body["streaming"] = streaming
if deduplication_key is not None:
body["deduplication_key"] = deduplication_key
if zoom_sdk_id is not None:
body["zoom_sdk_id"] = zoom_sdk_id
if zoom_sdk_pwd is not None:
body["zoom_sdk_pwd"] = zoom_sdk_pwd
response = await self.requests.post(
f"{self.BASE_URL}/bots",
headers=self.headers,
json=body,
)
return response.json()
async def leave_meeting(self, bot_id: str) -> bool:
"""
Remove a bot from an ongoing meeting.
DELETE /bots/{uuid}
"""
response = await self.requests.delete(
f"{self.BASE_URL}/bots/{bot_id}",
headers=self.headers,
)
return response.status in [200, 204]
async def retranscribe(
self,
bot_uuid: str,
speech_to_text: Optional[Dict[str, Any]] = None,
webhook_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Re-run transcription on a bot's audio.
POST /bots/retranscribe
"""
body: Dict[str, Any] = {"bot_uuid": bot_uuid}
if speech_to_text is not None:
body["speech_to_text"] = speech_to_text
if webhook_url is not None:
body["webhook_url"] = webhook_url
response = await self.requests.post(
f"{self.BASE_URL}/bots/retranscribe",
headers=self.headers,
json=body,
)
if response.status == 202:
return {"accepted": True}
return response.json()
# Data Retrieval Endpoints
async def get_meeting_data(
self, bot_id: str, include_transcripts: bool = True
) -> Dict[str, Any]:
"""
Retrieve meeting data including recording and transcripts.
GET /bots/meeting_data
"""
params = {
"bot_id": bot_id,
"include_transcripts": str(include_transcripts).lower(),
}
response = await self.requests.get(
f"{self.BASE_URL}/bots/meeting_data",
headers=self.headers,
params=params,
)
return response.json()
async def get_screenshots(self, bot_id: str) -> List[Dict[str, Any]]:
"""
Retrieve screenshots captured during a meeting.
GET /bots/{uuid}/screenshots
"""
response = await self.requests.get(
f"{self.BASE_URL}/bots/{bot_id}/screenshots",
headers=self.headers,
)
result = response.json()
# Ensure we return a list
if isinstance(result, list):
return result
return []
async def delete_data(self, bot_id: str) -> bool:
"""
Delete a bot's recorded data.
POST /bots/{uuid}/delete_data
"""
response = await self.requests.post(
f"{self.BASE_URL}/bots/{bot_id}/delete_data",
headers=self.headers,
)
return response.status == 200
async def list_bots_with_metadata(
self,
limit: Optional[int] = None,
offset: Optional[int] = None,
sort_by: Optional[str] = None,
sort_order: Optional[str] = None,
filter_by: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
List bots with metadata including IDs, names, and meeting details.
GET /bots/bots_with_metadata
"""
params = {}
if limit is not None:
params["limit"] = limit
if offset is not None:
params["offset"] = offset
if sort_by is not None:
params["sort_by"] = sort_by
if sort_order is not None:
params["sort_order"] = sort_order
if filter_by is not None:
params.update(filter_by)
response = await self.requests.get(
f"{self.BASE_URL}/bots/bots_with_metadata",
headers=self.headers,
params=params,
)
return response.json()
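A minimal usage sketch. The API key and meeting URL are placeholders; the `bot_id` key in the join response matches what the blocks below read:
```python
import asyncio


async def demo():
    api = MeetingBaasAPI("mbk_placeholder")
    bot = await api.join_meeting(
        bot_name="Notetaker",
        meeting_url="https://meet.google.com/abc-defg-hij",
    )
    # Once the meeting has ended, pull the recording and transcript:
    data = await api.get_meeting_data(bot_id=bot.get("bot_id", ""))
    print(data.get("mp4"))


asyncio.run(demo())
```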

View File

@@ -0,0 +1,13 @@
"""
Shared configuration for all Meeting BaaS blocks using the SDK pattern.
"""
from backend.sdk import BlockCostType, ProviderBuilder
# Configure the Meeting BaaS provider with API key authentication
baas = (
ProviderBuilder("baas")
.with_api_key("MEETING_BAAS_API_KEY", "Meeting BaaS API Key")
.with_base_cost(5, BlockCostType.RUN) # Higher cost for meeting recording service
.build()
)

View File

@@ -0,0 +1,217 @@
"""
Meeting BaaS bot (recording) blocks.
"""
from typing import Optional
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
)
from ._api import MeetingBaasAPI
from ._config import baas
class BaasBotJoinMeetingBlock(Block):
"""
Deploy a bot immediately or at a scheduled start_time to join and record a meeting.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
meeting_url: str = SchemaField(
description="The URL of the meeting the bot should join"
)
bot_name: str = SchemaField(
description="Display name for the bot in the meeting"
)
bot_image: str = SchemaField(
description="URL to an image for the bot's avatar (16:9 ratio recommended)",
default="",
)
entry_message: str = SchemaField(
description="Chat message the bot will post upon entry", default=""
)
reserved: bool = SchemaField(
description="Use a reserved bot slot (joins 4 min before meeting)",
default=False,
)
start_time: Optional[int] = SchemaField(
description="Unix timestamp (ms) when bot should join", default=None
)
webhook_url: str | None = SchemaField(
description="URL to receive webhook events for this bot", default=None
)
timeouts: dict = SchemaField(
description="Automatic leave timeouts configuration", default={}
)
extra: dict = SchemaField(
description="Custom metadata to attach to the bot", default={}
)
class Output(BlockSchema):
bot_id: str = SchemaField(description="UUID of the deployed bot")
join_response: dict = SchemaField(
description="Full response from join operation"
)
def __init__(self):
super().__init__(
id="377d1a6a-a99b-46cf-9af3-1d1b12758e04",
description="Deploy a bot to join and record a meeting",
categories={BlockCategory.COMMUNICATION},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Call API with all parameters
data = await api.join_meeting(
bot_name=input_data.bot_name,
meeting_url=input_data.meeting_url,
reserved=input_data.reserved,
bot_image=input_data.bot_image if input_data.bot_image else None,
entry_message=(
input_data.entry_message if input_data.entry_message else None
),
start_time=input_data.start_time,
speech_to_text={"provider": "Default"},
webhook_url=input_data.webhook_url if input_data.webhook_url else None,
automatic_leave=input_data.timeouts if input_data.timeouts else None,
extra=input_data.extra if input_data.extra else None,
)
yield "bot_id", data.get("bot_id", "")
yield "join_response", data
class BaasBotLeaveMeetingBlock(Block):
"""
Force the bot to exit the call.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot to remove from meeting")
class Output(BlockSchema):
left: bool = SchemaField(description="Whether the bot successfully left")
def __init__(self):
super().__init__(
id="bf77d128-8b25-4280-b5c7-2d553ba7e482",
description="Remove a bot from an ongoing meeting",
categories={BlockCategory.COMMUNICATION},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Leave meeting
left = await api.leave_meeting(input_data.bot_id)
yield "left", left
class BaasBotFetchMeetingDataBlock(Block):
"""
Pull MP4 URL, transcript & metadata for a completed meeting.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to fetch")
include_transcripts: bool = SchemaField(
description="Include transcript data in response", default=True
)
class Output(BlockSchema):
mp4_url: str = SchemaField(
description="URL to download the meeting recording (time-limited)"
)
transcript: list = SchemaField(description="Meeting transcript data")
metadata: dict = SchemaField(description="Meeting metadata and bot information")
def __init__(self):
super().__init__(
id="ea7c1309-303c-4da1-893f-89c0e9d64e78",
description="Retrieve recorded meeting data",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Fetch meeting data
data = await api.get_meeting_data(
bot_id=input_data.bot_id,
include_transcripts=input_data.include_transcripts,
)
yield "mp4_url", data.get("mp4", "")
yield "transcript", data.get("bot_data", {}).get("transcripts", [])
yield "metadata", data.get("bot_data", {}).get("bot", {})
class BaasBotDeleteRecordingBlock(Block):
"""
Purge MP4 + transcript data for privacy or storage management.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to delete")
class Output(BlockSchema):
deleted: bool = SchemaField(
description="Whether the data was successfully deleted"
)
def __init__(self):
super().__init__(
id="bf8d1aa6-42d8-4944-b6bd-6bac554c0d3b",
description="Permanently delete a meeting's recorded data",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Delete recording data
deleted = await api.delete_data(input_data.bot_id)
yield "deleted", deleted

View File

@@ -0,0 +1,3 @@
from .text_overlay import BannerbearTextOverlayBlock
__all__ = ["BannerbearTextOverlayBlock"]

View File

@@ -0,0 +1,8 @@
from backend.sdk import BlockCostType, ProviderBuilder
bannerbear = (
ProviderBuilder("bannerbear")
.with_api_key("BANNERBEAR_API_KEY", "Bannerbear API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,239 @@
import uuid
from typing import TYPE_CHECKING, Any, Dict, List
if TYPE_CHECKING:
pass
from pydantic import SecretStr
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import bannerbear
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="bannerbear",
api_key=SecretStr("mock-bannerbear-api-key"),
title="Mock Bannerbear API Key",
)
class TextModification(BlockSchema):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
text: str = SchemaField(description="The text content to add to this layer")
color: str = SchemaField(
description="Hex color code for the text (e.g., '#FF0000')",
default="",
advanced=True,
)
font_family: str = SchemaField(
description="Font family to use for the text",
default="",
advanced=True,
)
font_size: int = SchemaField(
description="Font size in pixels",
default=0,
advanced=True,
)
font_weight: str = SchemaField(
description="Font weight (e.g., 'bold', 'normal')",
default="",
advanced=True,
)
text_align: str = SchemaField(
description="Text alignment (left, center, right)",
default="",
advanced=True,
)
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
template_id: str = SchemaField(
description="The unique ID of your Bannerbear template"
)
project_id: str = SchemaField(
description="Optional: Project ID (required when using Master API Key)",
default="",
advanced=True,
)
text_modifications: List[TextModification] = SchemaField(
description="List of text layers to modify in the template"
)
image_url: str = SchemaField(
description="Optional: URL of an image to use in the template",
default="",
advanced=True,
)
image_layer_name: str = SchemaField(
description="Optional: Name of the image layer in the template",
default="photo",
advanced=True,
)
webhook_url: str = SchemaField(
description="Optional: URL to receive webhook notification when image is ready",
default="",
advanced=True,
)
metadata: str = SchemaField(
description="Optional: Custom metadata to attach to the image",
default="",
advanced=True,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
image_url: str = SchemaField(
description="URL of the generated image (if synchronous) or placeholder"
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="c7d3a5c2-05fc-450e-8dce-3b0e04626009",
description="Add text overlay to images using Bannerbear templates. Perfect for creating social media graphics, marketing materials, and dynamic image content.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"template_id": "jJWBKNELpQPvbX5R93Gk",
"text_modifications": [
{
"name": "headline",
"text": "Amazing Product Launch!",
"color": "#FF0000",
},
{
"name": "subtitle",
"text": "50% OFF Today Only",
},
],
"credentials": {
"provider": "bannerbear",
"id": str(uuid.uuid4()),
"type": "api_key",
},
},
test_output=[
("success", True),
("image_url", "https://cdn.bannerbear.com/test-image.jpg"),
("uid", "test-uid-123"),
("status", "completed"),
],
test_mock={
"_make_api_request": lambda *args, **kwargs: {
"uid": "test-uid-123",
"status": "completed",
"image_url": "https://cdn.bannerbear.com/test-image.jpg",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def _make_api_request(self, payload: dict, api_key: str) -> dict:
"""Make the actual API request to Bannerbear. This is separated for easy mocking in tests."""
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
response = await Requests().post(
"https://sync.api.bannerbear.com/v2/images",
headers=headers,
json=payload,
)
if response.status in [200, 201, 202]:
return response.json()
else:
error_msg = f"API request failed with status {response.status}"
if response.text:
try:
error_data = response.json()
error_msg = (
f"{error_msg}: {error_data.get('message', response.text)}"
)
except Exception:
error_msg = f"{error_msg}: {response.text}"
raise Exception(error_msg)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Build the modifications array
modifications = []
# Add text modifications
for text_mod in input_data.text_modifications:
mod_data: Dict[str, Any] = {
"name": text_mod.name,
"text": text_mod.text,
}
# Add optional text styling parameters only if they have values
if text_mod.color and text_mod.color.strip():
mod_data["color"] = text_mod.color
if text_mod.font_family and text_mod.font_family.strip():
mod_data["font_family"] = text_mod.font_family
if text_mod.font_size and text_mod.font_size > 0:
mod_data["font_size"] = text_mod.font_size
if text_mod.font_weight and text_mod.font_weight.strip():
mod_data["font_weight"] = text_mod.font_weight
if text_mod.text_align and text_mod.text_align.strip():
mod_data["text_align"] = text_mod.text_align
modifications.append(mod_data)
# Add image modification if provided and not empty
if input_data.image_url and input_data.image_url.strip():
modifications.append(
{
"name": input_data.image_layer_name,
"image_url": input_data.image_url,
}
)
# Build the request payload - only include non-empty optional fields
payload = {
"template": input_data.template_id,
"modifications": modifications,
}
# Add project_id if provided (required for Master API keys)
if input_data.project_id and input_data.project_id.strip():
payload["project_id"] = input_data.project_id
if input_data.webhook_url and input_data.webhook_url.strip():
payload["webhook_url"] = input_data.webhook_url
if input_data.metadata and input_data.metadata.strip():
payload["metadata"] = input_data.metadata
# Make the API request using the private method
data = await self._make_api_request(
payload, credentials.api_key.get_secret_value()
)
# Synchronous request - image should be ready
yield "success", True
yield "image_url", data.get("image_url", "")
yield "uid", data.get("uid", "")
yield "status", data.get("status", "completed")

View File

@@ -0,0 +1,182 @@
"""
DataForSEO API client with async support using the SDK patterns.
"""
import base64
from typing import Any, Dict, List, Optional
from backend.sdk import Requests, UserPasswordCredentials
class DataForSeoClient:
"""Client for the DataForSEO API using async requests."""
API_URL = "https://api.dataforseo.com"
def __init__(self, credentials: UserPasswordCredentials):
self.credentials = credentials
self.requests = Requests(
trusted_origins=["https://api.dataforseo.com"],
raise_for_status=False,
)
def _get_headers(self) -> Dict[str, str]:
"""Generate the authorization header using Basic Auth."""
username = self.credentials.username.get_secret_value()
password = self.credentials.password.get_secret_value()
credentials_str = f"{username}:{password}"
encoded = base64.b64encode(credentials_str.encode("ascii")).decode("ascii")
return {
"Authorization": f"Basic {encoded}",
"Content-Type": "application/json",
}
async def keyword_suggestions(
self,
keyword: str,
location_code: Optional[int] = None,
language_code: Optional[str] = None,
include_seed_keyword: bool = True,
include_serp_info: bool = False,
include_clickstream_data: bool = False,
limit: int = 100,
) -> List[Dict[str, Any]]:
"""
Get keyword suggestions from DataForSEO Labs.
Args:
keyword: Seed keyword
location_code: Location code for targeting
language_code: Language code (e.g., "en")
include_seed_keyword: Include seed keyword in results
include_serp_info: Include SERP data
include_clickstream_data: Include clickstream metrics
limit: Maximum number of results (up to 3000)
Returns:
API response with keyword suggestions
"""
endpoint = f"{self.API_URL}/v3/dataforseo_labs/google/keyword_suggestions/live"
# Build payload only with non-None values to avoid sending null fields
task_data: dict[str, Any] = {
"keyword": keyword,
}
if location_code is not None:
task_data["location_code"] = location_code
if language_code is not None:
task_data["language_code"] = language_code
if include_seed_keyword is not None:
task_data["include_seed_keyword"] = include_seed_keyword
if include_serp_info is not None:
task_data["include_serp_info"] = include_serp_info
if include_clickstream_data is not None:
task_data["include_clickstream_data"] = include_clickstream_data
if limit is not None:
task_data["limit"] = limit
payload = [task_data]
response = await self.requests.post(
endpoint,
headers=self._get_headers(),
json=payload,
)
data = response.json()
# Check for API errors
if response.status != 200:
error_message = data.get("status_message", "Unknown error")
raise Exception(
f"DataForSEO API error ({response.status}): {error_message}"
)
# Extract the results from the response
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")
raise Exception(f"DataForSEO task error: {error_msg}")
return []
async def related_keywords(
self,
keyword: str,
location_code: Optional[int] = None,
language_code: Optional[str] = None,
include_seed_keyword: bool = True,
include_serp_info: bool = False,
include_clickstream_data: bool = False,
limit: int = 100,
depth: Optional[int] = None,
) -> List[Dict[str, Any]]:
"""
Get related keywords from DataForSEO Labs.
Args:
keyword: Seed keyword
location_code: Location code for targeting
language_code: Language code (e.g., "en")
include_seed_keyword: Include seed keyword in results
include_serp_info: Include SERP data
include_clickstream_data: Include clickstream metrics
limit: Maximum number of results (up to 3000)
depth: Keyword search depth (0-4), controls number of returned keywords
Returns:
API response with related keywords
"""
endpoint = f"{self.API_URL}/v3/dataforseo_labs/google/related_keywords/live"
# Build payload only with non-None values to avoid sending null fields
task_data: dict[str, Any] = {
"keyword": keyword,
}
if location_code is not None:
task_data["location_code"] = location_code
if language_code is not None:
task_data["language_code"] = language_code
if include_seed_keyword is not None:
task_data["include_seed_keyword"] = include_seed_keyword
if include_serp_info is not None:
task_data["include_serp_info"] = include_serp_info
if include_clickstream_data is not None:
task_data["include_clickstream_data"] = include_clickstream_data
if limit is not None:
task_data["limit"] = limit
if depth is not None:
task_data["depth"] = depth
payload = [task_data]
response = await self.requests.post(
endpoint,
headers=self._get_headers(),
json=payload,
)
data = response.json()
# Check for API errors
if response.status != 200:
error_message = data.get("status_message", "Unknown error")
raise Exception(
f"DataForSEO API error ({response.status}): {error_message}"
)
# Extract the results from the response
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")
raise Exception(f"DataForSEO task error: {error_msg}")
return []
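A minimal usage sketch; the credential field names beyond `username`/`password` are assumptions about `UserPasswordCredentials`:
```python
import asyncio

from pydantic import SecretStr

from backend.sdk import UserPasswordCredentials


async def demo():
    creds = UserPasswordCredentials(
        provider="dataforseo",        # assumed field
        username=SecretStr("login"),
        password=SecretStr("pass"),
        title="DataForSEO",           # assumed field
    )
    client = DataForSeoClient(creds)
    results = await client.keyword_suggestions("digital marketing", limit=10)
    print(results[0]["items"][:3] if results else "no results")


asyncio.run(demo())
```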

View File

@@ -0,0 +1,17 @@
"""
Configuration for all DataForSEO blocks using the new SDK pattern.
"""
from backend.sdk import BlockCostType, ProviderBuilder
# Build the DataForSEO provider with username/password authentication
dataforseo = (
ProviderBuilder("dataforseo")
.with_user_password(
username_env_var="DATAFORSEO_USERNAME",
password_env_var="DATAFORSEO_PASSWORD",
title="DataForSEO Credentials",
)
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,273 @@
"""
DataForSEO Google Keyword Suggestions block.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
)
from ._api import DataForSeoClient
from ._config import dataforseo
class KeywordSuggestion(BlockSchema):
"""Schema for a keyword suggestion result."""
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="data from SERP for each keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
class DataForSeoKeywordSuggestionsBlock(Block):
"""Block for getting keyword suggestions from DataForSEO Labs."""
class Input(BlockSchema):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
keyword: str = SchemaField(description="Seed keyword to get suggestions for")
location_code: Optional[int] = SchemaField(
description="Location code for targeting (e.g., 2840 for USA)",
default=2840, # USA
)
language_code: Optional[str] = SchemaField(
description="Language code (e.g., 'en' for English)",
default="en",
)
include_seed_keyword: bool = SchemaField(
description="Include the seed keyword in results",
default=True,
)
include_serp_info: bool = SchemaField(
description="Include SERP information",
default=False,
)
include_clickstream_data: bool = SchemaField(
description="Include clickstream metrics",
default=False,
)
limit: int = SchemaField(
description="Maximum number of results (up to 3000)",
default=100,
ge=1,
le=3000,
)
class Output(BlockSchema):
suggestions: List[KeywordSuggestion] = SchemaField(
description="List of keyword suggestions with metrics"
)
suggestion: KeywordSuggestion = SchemaField(
description="A single keyword suggestion with metrics"
)
total_count: int = SchemaField(
description="Total number of suggestions returned"
)
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
def __init__(self):
super().__init__(
id="73c3e7c4-2b3f-4e9f-9e3e-8f7a5c3e2d45",
description="Get keyword suggestions from DataForSEO Labs Google API",
categories={BlockCategory.SEARCH, BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"credentials": dataforseo.get_test_credentials().model_dump(),
"keyword": "digital marketing",
"location_code": 2840,
"language_code": "en",
"limit": 1,
},
test_credentials=dataforseo.get_test_credentials(),
test_output=[
(
"suggestion",
lambda x: hasattr(x, "keyword")
and x.keyword == "digital marketing strategy",
),
("suggestions", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),
("seed_keyword", "digital marketing"),
],
test_mock={
"_fetch_keyword_suggestions": lambda *args, **kwargs: [
{
"items": [
{
"keyword": "digital marketing strategy",
"keyword_info": {
"search_volume": 10000,
"competition": 0.5,
"cpc": 2.5,
},
"keyword_properties": {
"keyword_difficulty": 50,
},
}
]
}
]
},
)
async def _fetch_keyword_suggestions(
self,
client: DataForSeoClient,
input_data: Input,
) -> Any:
"""Private method to fetch keyword suggestions - can be mocked for testing."""
return await client.keyword_suggestions(
keyword=input_data.keyword,
location_code=input_data.location_code,
language_code=input_data.language_code,
include_seed_keyword=input_data.include_seed_keyword,
include_serp_info=input_data.include_serp_info,
include_clickstream_data=input_data.include_clickstream_data,
limit=input_data.limit,
)
async def run(
self,
input_data: Input,
*,
credentials: UserPasswordCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the keyword suggestions query."""
client = DataForSeoClient(credentials)
results = await self._fetch_keyword_suggestions(client, input_data)
# Process and format the results
suggestions = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Create the KeywordSuggestion object
suggestion = KeywordSuggestion(
keyword=item.get("keyword", ""),
search_volume=item.get("keyword_info", {}).get("search_volume"),
competition=item.get("keyword_info", {}).get("competition"),
cpc=item.get("keyword_info", {}).get("cpc"),
keyword_difficulty=item.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
item.get("serp_info") if input_data.include_serp_info else None
),
clickstream_data=(
item.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "suggestion", suggestion
suggestions.append(suggestion)
yield "suggestions", suggestions
yield "total_count", len(suggestions)
yield "seed_keyword", input_data.keyword
class KeywordSuggestionExtractorBlock(Block):
"""Extracts individual fields from a KeywordSuggestion object."""
class Input(BlockSchema):
suggestion: KeywordSuggestion = SchemaField(
description="The keyword suggestion object to extract fields from"
)
class Output(BlockSchema):
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="data from SERP for each keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
def __init__(self):
super().__init__(
id="4193cb94-677c-48b0-9eec-6ac72fffd0f2",
description="Extract individual fields from a KeywordSuggestion object",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"suggestion": KeywordSuggestion(
keyword="test keyword",
search_volume=1000,
competition=0.5,
cpc=2.5,
keyword_difficulty=60,
).model_dump()
},
test_output=[
("keyword", "test keyword"),
("search_volume", 1000),
("competition", 0.5),
("cpc", 2.5),
("keyword_difficulty", 60),
("serp_info", None),
("clickstream_data", None),
],
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
"""Extract fields from the KeywordSuggestion object."""
suggestion = input_data.suggestion
yield "keyword", suggestion.keyword
yield "search_volume", suggestion.search_volume
yield "competition", suggestion.competition
yield "cpc", suggestion.cpc
yield "keyword_difficulty", suggestion.keyword_difficulty
yield "serp_info", suggestion.serp_info
yield "clickstream_data", suggestion.clickstream_data

View File

@@ -0,0 +1,290 @@
"""
DataForSEO Google Related Keywords block.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
)
from ._api import DataForSeoClient
from ._config import dataforseo
class RelatedKeyword(BlockSchema):
"""Schema for a related keyword result."""
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="SERP data for the keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
class DataForSeoRelatedKeywordsBlock(Block):
"""Block for getting related keywords from DataForSEO Labs."""
class Input(BlockSchema):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
keyword: str = SchemaField(
description="Seed keyword to find related keywords for"
)
location_code: Optional[int] = SchemaField(
description="Location code for targeting (e.g., 2840 for USA)",
default=2840, # USA
)
language_code: Optional[str] = SchemaField(
description="Language code (e.g., 'en' for English)",
default="en",
)
include_seed_keyword: bool = SchemaField(
description="Include the seed keyword in results",
default=True,
)
include_serp_info: bool = SchemaField(
description="Include SERP information",
default=False,
)
include_clickstream_data: bool = SchemaField(
description="Include clickstream metrics",
default=False,
)
limit: int = SchemaField(
description="Maximum number of results (up to 3000)",
default=100,
ge=1,
le=3000,
)
depth: int = SchemaField(
description="Keyword search depth (0-4). Controls the number of returned keywords: 0=1 keyword, 1=~8 keywords, 2=~72 keywords, 3=~584 keywords, 4=~4680 keywords",
default=1,
ge=0,
le=4,
)
class Output(BlockSchema):
related_keywords: List[RelatedKeyword] = SchemaField(
description="List of related keywords with metrics"
)
related_keyword: RelatedKeyword = SchemaField(
description="A related keyword with metrics"
)
total_count: int = SchemaField(
description="Total number of related keywords returned"
)
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
def __init__(self):
super().__init__(
id="8f2e4d6a-1b3c-4a5e-9d7f-2c8e6a4b3f1d",
description="Get related keywords from DataForSEO Labs Google API",
categories={BlockCategory.SEARCH, BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"credentials": dataforseo.get_test_credentials().model_dump(),
"keyword": "content marketing",
"location_code": 2840,
"language_code": "en",
"limit": 1,
},
test_credentials=dataforseo.get_test_credentials(),
test_output=[
(
"related_keyword",
lambda x: hasattr(x, "keyword") and x.keyword == "content strategy",
),
("related_keywords", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),
("seed_keyword", "content marketing"),
],
test_mock={
"_fetch_related_keywords": lambda *args, **kwargs: [
{
"items": [
{
"keyword_data": {
"keyword": "content strategy",
"keyword_info": {
"search_volume": 8000,
"competition": 0.4,
"cpc": 3.0,
},
"keyword_properties": {
"keyword_difficulty": 45,
},
}
}
]
}
]
},
)
async def _fetch_related_keywords(
self,
client: DataForSeoClient,
input_data: Input,
) -> Any:
"""Private method to fetch related keywords - can be mocked for testing."""
return await client.related_keywords(
keyword=input_data.keyword,
location_code=input_data.location_code,
language_code=input_data.language_code,
include_seed_keyword=input_data.include_seed_keyword,
include_serp_info=input_data.include_serp_info,
include_clickstream_data=input_data.include_clickstream_data,
limit=input_data.limit,
depth=input_data.depth,
)
async def run(
self,
input_data: Input,
*,
credentials: UserPasswordCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the related keywords query."""
client = DataForSeoClient(credentials)
results = await self._fetch_related_keywords(client, input_data)
# Process and format the results
related_keywords = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Extract keyword_data from the item
keyword_data = item.get("keyword_data", {})
# Create the RelatedKeyword object
keyword = RelatedKeyword(
keyword=keyword_data.get("keyword", ""),
search_volume=keyword_data.get("keyword_info", {}).get(
"search_volume"
),
competition=keyword_data.get("keyword_info", {}).get("competition"),
cpc=keyword_data.get("keyword_info", {}).get("cpc"),
keyword_difficulty=keyword_data.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
keyword_data.get("serp_info")
if input_data.include_serp_info
else None
),
clickstream_data=(
keyword_data.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "related_keyword", keyword
related_keywords.append(keyword)
yield "related_keywords", related_keywords
yield "total_count", len(related_keywords)
yield "seed_keyword", input_data.keyword
class RelatedKeywordExtractorBlock(Block):
"""Extracts individual fields from a RelatedKeyword object."""
class Input(BlockSchema):
related_keyword: RelatedKeyword = SchemaField(
description="The related keyword object to extract fields from"
)
class Output(BlockSchema):
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="SERP data for the keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
def __init__(self):
super().__init__(
id="98342061-09d2-4952-bf77-0761fc8cc9a8",
description="Extract individual fields from a RelatedKeyword object",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"related_keyword": RelatedKeyword(
keyword="test related keyword",
search_volume=800,
competition=0.4,
cpc=3.0,
keyword_difficulty=55,
).model_dump()
},
test_output=[
("keyword", "test related keyword"),
("search_volume", 800),
("competition", 0.4),
("cpc", 3.0),
("keyword_difficulty", 55),
("serp_info", None),
("clickstream_data", None),
],
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
"""Extract fields from the RelatedKeyword object."""
related_keyword = input_data.related_keyword
yield "keyword", related_keyword.keyword
yield "search_volume", related_keyword.search_volume
yield "competition", related_keyword.competition
yield "cpc", related_keyword.cpc
yield "keyword_difficulty", related_keyword.keyword_difficulty
yield "serp_info", related_keyword.serp_info
yield "clickstream_data", related_keyword.clickstream_data

View File

@@ -0,0 +1,117 @@
"""
Discord API helper functions for making authenticated requests.
"""
import logging
from typing import Optional
from pydantic import BaseModel
from backend.data.model import OAuth2Credentials
from backend.util.request import Requests
logger = logging.getLogger(__name__)
class DiscordAPIException(Exception):
"""Exception raised for Discord API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class DiscordOAuthUser(BaseModel):
"""Model for Discord OAuth user response."""
user_id: str
username: str
avatar_url: str
banner: Optional[str] = None
accent_color: Optional[int] = None
def get_api(credentials: OAuth2Credentials) -> Requests:
"""
Create a Requests instance configured for Discord API calls with OAuth2 credentials.
Args:
credentials: The OAuth2 credentials containing the access token.
Returns:
A configured Requests instance for Discord API calls.
"""
return Requests(
trusted_origins=[],
extra_headers={
"Authorization": f"Bearer {credentials.access_token.get_secret_value()}",
"Content-Type": "application/json",
},
raise_for_status=False,
)
async def get_current_user(credentials: OAuth2Credentials) -> DiscordOAuthUser:
"""
Fetch the current user's information using Discord OAuth2 API.
Reference: https://discord.com/developers/docs/resources/user#get-current-user
Args:
credentials: The OAuth2 credentials.
Returns:
A model containing user data with avatar URL.
Raises:
DiscordAPIException: If the API request fails.
"""
api = get_api(credentials)
response = await api.get("https://discord.com/api/oauth2/@me")
if not response.ok:
error_text = response.text()
raise DiscordAPIException(
f"Failed to fetch user info: {response.status} - {error_text}",
response.status,
)
data = response.json()
logger.info(f"Discord OAuth2 API Response: {data}")
# The /api/oauth2/@me endpoint returns a user object nested in the response
user_info = data.get("user", {})
logger.info(f"User info extracted: {user_info}")
# Build avatar URL
user_id = user_info.get("id")
avatar_hash = user_info.get("avatar")
if avatar_hash:
# Custom avatar
avatar_ext = "gif" if avatar_hash.startswith("a_") else "png"
avatar_url = (
f"https://cdn.discordapp.com/avatars/{user_id}/{avatar_hash}.{avatar_ext}"
)
else:
# Default avatar based on discriminator or user ID
discriminator = user_info.get("discriminator", "0")
if discriminator == "0":
# New username system - use user ID for default avatar
default_avatar_index = (int(user_id) >> 22) % 6
else:
# Legacy discriminator system
default_avatar_index = int(discriminator) % 5
avatar_url = (
f"https://cdn.discordapp.com/embed/avatars/{default_avatar_index}.png"
)
result = DiscordOAuthUser(
user_id=user_id,
username=user_info.get("username", ""),
avatar_url=avatar_url,
banner=user_info.get("banner"),
accent_color=user_info.get("accent_color"),
)
logger.info(f"Returning user data: {result.model_dump()}")
return result
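The avatar logic above follows Discord's CDN convention: custom avatars live under /avatars/{user_id}/{hash} (animated when the hash starts with "a_"), while default avatars are indexed by (user_id >> 22) % 6 under the new username system and discriminator % 5 for legacy accounts. A standalone restatement for reference (not part of this module; the snowflake is arbitrary):

def default_avatar_index(user_id: str, discriminator: str = "0") -> int:
    """Index of Discord's default avatar for users without a custom one."""
    if discriminator == "0":
        # New username system: derive the index from the snowflake's timestamp bits
        return (int(user_id) >> 22) % 6
    # Legacy system: five default avatars keyed by discriminator
    return int(discriminator) % 5

assert default_avatar_index("80351110224678912", "1337") == 2  # 1337 % 5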

View File

@@ -0,0 +1,74 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
OAuth2Credentials,
)
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
secrets = Secrets()
DISCORD_OAUTH_IS_CONFIGURED = bool(
secrets.discord_client_id and secrets.discord_client_secret
)
# Bot token credentials (existing)
DiscordBotCredentials = APIKeyCredentials
DiscordBotCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.DISCORD], Literal["api_key"]
]
# OAuth2 credentials (new)
DiscordOAuthCredentials = OAuth2Credentials
DiscordOAuthCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.DISCORD], Literal["oauth2"]
]
def DiscordBotCredentialsField() -> DiscordBotCredentialsInput:
"""Creates a Discord bot token credentials field."""
return CredentialsField(description="Discord bot token")
def DiscordOAuthCredentialsField(scopes: list[str]) -> DiscordOAuthCredentialsInput:
"""Creates a Discord OAuth2 credentials field."""
return CredentialsField(
description="Discord OAuth2 credentials",
required_scopes=set(scopes) | {"identify"}, # Basic user info scope
)
# Test credentials for bot tokens
TEST_BOT_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="discord",
api_key=SecretStr("test_api_key"),
title="Mock Discord API key",
expires_at=None,
)
TEST_BOT_CREDENTIALS_INPUT = {
"provider": TEST_BOT_CREDENTIALS.provider,
"id": TEST_BOT_CREDENTIALS.id,
"type": TEST_BOT_CREDENTIALS.type,
"title": TEST_BOT_CREDENTIALS.type,
}
# Test credentials for OAuth2
TEST_OAUTH_CREDENTIALS = OAuth2Credentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="discord",
access_token=SecretStr("test_access_token"),
title="Mock Discord OAuth",
scopes=["identify"],
username="testuser",
)
TEST_OAUTH_CREDENTIALS_INPUT = {
"provider": TEST_OAUTH_CREDENTIALS.provider,
"id": TEST_OAUTH_CREDENTIALS.id,
"type": TEST_OAUTH_CREDENTIALS.type,
"title": TEST_OAUTH_CREDENTIALS.type,
}
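Note that DiscordOAuthCredentialsField always folds the baseline identify scope into whatever a block requests; a one-line restatement of that behavior:

scopes = ["guilds"]
required_scopes = set(scopes) | {"identify"}  # mirrors DiscordOAuthCredentialsField
assert required_scopes == {"guilds", "identify"}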

View File

@@ -2,45 +2,29 @@ import base64
import io
import mimetypes
from pathlib import Path
from typing import Any, Literal
from typing import Any
import aiohttp
import discord
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util.file import store_media_file
from backend.util.type import MediaFileType
DiscordCredentials = CredentialsMetaInput[
Literal[ProviderName.DISCORD], Literal["api_key"]
]
def DiscordCredentialsField() -> DiscordCredentials:
return CredentialsField(description="Discord bot token")
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="discord",
api_key=SecretStr("test_api_key"),
title="Mock Discord API key",
expires_at=None,
from ._auth import (
TEST_BOT_CREDENTIALS,
TEST_BOT_CREDENTIALS_INPUT,
DiscordBotCredentialsField,
DiscordBotCredentialsInput,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.type,
}
# Keep backward compatibility alias
DiscordCredentials = DiscordBotCredentialsInput
DiscordCredentialsField = DiscordBotCredentialsField
TEST_CREDENTIALS = TEST_BOT_CREDENTIALS
TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT
class ReadDiscordMessagesBlock(Block):

View File

@@ -0,0 +1,99 @@
"""
Discord OAuth-based blocks.
"""
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import DiscordOAuthUser, get_current_user
from ._auth import (
DISCORD_OAUTH_IS_CONFIGURED,
TEST_OAUTH_CREDENTIALS,
TEST_OAUTH_CREDENTIALS_INPUT,
DiscordOAuthCredentialsField,
DiscordOAuthCredentialsInput,
)
class DiscordGetCurrentUserBlock(Block):
"""
Gets information about the currently authenticated Discord user using OAuth2.
This block requires Discord OAuth2 credentials (not bot tokens).
"""
class Input(BlockSchema):
credentials: DiscordOAuthCredentialsInput = DiscordOAuthCredentialsField(
["identify"]
)
class Output(BlockSchema):
user_id: str = SchemaField(description="The authenticated user's Discord ID")
username: str = SchemaField(description="The user's username")
avatar_url: str = SchemaField(description="URL to the user's avatar image")
banner_url: str = SchemaField(
description="URL to the user's banner image (if set)", default=""
)
accent_color: int = SchemaField(
description="The user's accent color as an integer", default=0
)
def __init__(self):
super().__init__(
id="8c7e39b8-4e9d-4f3a-b4e1-2a8c9d5f6e3b",
input_schema=DiscordGetCurrentUserBlock.Input,
output_schema=DiscordGetCurrentUserBlock.Output,
description="Gets information about the currently authenticated Discord user using OAuth2 credentials.",
categories={BlockCategory.SOCIAL},
disabled=not DISCORD_OAUTH_IS_CONFIGURED,
test_input={
"credentials": TEST_OAUTH_CREDENTIALS_INPUT,
},
test_credentials=TEST_OAUTH_CREDENTIALS,
test_output=[
("user_id", "123456789012345678"),
("username", "testuser"),
(
"avatar_url",
"https://cdn.discordapp.com/avatars/123456789012345678/avatar.png",
),
("banner_url", ""),
("accent_color", 0),
],
test_mock={
"get_user": lambda _: DiscordOAuthUser(
user_id="123456789012345678",
username="testuser",
avatar_url="https://cdn.discordapp.com/avatars/123456789012345678/avatar.png",
banner=None,
accent_color=0,
)
},
)
@staticmethod
async def get_user(credentials: OAuth2Credentials) -> DiscordOAuthUser:
user_info = await get_current_user(credentials)
return user_info
async def run(
self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
) -> BlockOutput:
try:
result = await self.get_user(credentials)
# Yield each output field
yield "user_id", result.user_id
yield "username", result.username
yield "avatar_url", result.avatar_url
# Handle banner URL if banner hash exists
if result.banner:
banner_url = f"https://cdn.discordapp.com/banners/{result.user_id}/{result.banner}.png"
yield "banner_url", banner_url
else:
yield "banner_url", ""
yield "accent_color", result.accent_color or 0
except Exception as e:
raise ValueError(f"Failed to get Discord user info: {e}") from e

View File

@@ -93,11 +93,11 @@ class Webset(BaseModel):
"""
Set of key-value pairs you want to associate with this object.
"""
created_at: Annotated[datetime, Field(alias="createdAt")] | None = None
created_at: Annotated[datetime | None, Field(alias="createdAt")] = None
"""
The date and time the webset was created
"""
updated_at: Annotated[datetime, Field(alias="updatedAt")] | None = None
updated_at: Annotated[datetime | None, Field(alias="updatedAt")] = None
"""
The date and time the webset was last updated
"""

View File

@@ -1094,6 +1094,117 @@ class GmailGetThreadBlock(GmailBase):
return thread
async def _build_reply_message(
service, input_data, graph_exec_id: str, user_id: str
) -> tuple[str, str]:
"""
Builds a reply MIME message for Gmail threads.
Returns:
tuple: (base64-encoded raw message, threadId)
"""
# Get parent message for reply context
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
)
# Build headers dictionary, preserving all values for duplicate headers
headers = {}
for h in parent.get("payload", {}).get("headers", []):
name = h["name"].lower()
value = h["value"]
if name in headers:
# For duplicate headers, keep the first occurrence (most relevant for reply context)
continue
headers[name] = value
# Determine recipients if not specified
if not (input_data.to or input_data.cc or input_data.bcc):
if input_data.replyAll:
recipients = [parseaddr(headers.get("from", ""))[1]]
recipients += [addr for _, addr in getaddresses([headers.get("to", "")])]
recipients += [addr for _, addr in getaddresses([headers.get("cc", "")])]
# Use dict.fromkeys() for O(n) deduplication while preserving order
input_data.to = list(dict.fromkeys(filter(None, recipients)))
else:
# Check Reply-To header first, fall back to From header
reply_to = headers.get("reply-to", "")
from_addr = headers.get("from", "")
sender = parseaddr(reply_to if reply_to else from_addr)[1]
input_data.to = [sender] if sender else []
# Set subject with Re: prefix if not already present
if input_data.subject:
subject = input_data.subject
else:
parent_subject = headers.get("subject", "").strip()
# Only add "Re:" if not already present (case-insensitive check)
if parent_subject.lower().startswith("re:"):
subject = parent_subject
else:
subject = f"Re: {parent_subject}" if parent_subject else "Re:"
# Build references header for proper threading
references = headers.get("references", "").split()
if headers.get("message-id"):
references.append(headers["message-id"])
# Create MIME message
msg = MIMEMultipart()
if input_data.to:
msg["To"] = ", ".join(input_data.to)
if input_data.cc:
msg["Cc"] = ", ".join(input_data.cc)
if input_data.bcc:
msg["Bcc"] = ", ".join(input_data.bcc)
msg["Subject"] = subject
if headers.get("message-id"):
msg["In-Reply-To"] = headers["message-id"]
if references:
msg["References"] = " ".join(references)
# Use the helper function for consistent content type handling
msg.attach(_make_mime_text(input_data.body, input_data.content_type))
# Handle attachments
for attach in input_data.attachments:
local_path = await store_media_file(
user_id=user_id,
graph_exec_id=graph_exec_id,
file=attach,
return_content=False,
)
abs_path = get_exec_file_path(graph_exec_id, local_path)
part = MIMEBase("application", "octet-stream")
with open(abs_path, "rb") as f:
part.set_payload(f.read())
encoders.encode_base64(part)
part.add_header(
"Content-Disposition", f"attachment; filename={Path(abs_path).name}"
)
msg.attach(part)
# Encode message
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
return raw, input_data.threadId
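_build_reply_message keeps Gmail threading intact via the standard RFC 5322 rule: In-Reply-To names the parent's Message-ID, and References is the parent's References chain with the parent's Message-ID appended. A minimal restatement of just that rule:

def threading_headers(parent_references: str, parent_message_id: str) -> dict[str, str]:
    references = parent_references.split()
    if parent_message_id:
        references.append(parent_message_id)
    headers: dict[str, str] = {}
    if parent_message_id:
        headers["In-Reply-To"] = parent_message_id
    if references:
        headers["References"] = " ".join(references)
    return headers

assert threading_headers("<a@x> <b@x>", "<c@x>") == {
    "In-Reply-To": "<c@x>",
    "References": "<a@x> <b@x> <c@x>",
}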
class GmailReplyBlock(GmailBase):
"""
Replies to Gmail threads with intelligent content type detection.
@@ -1230,93 +1341,146 @@ class GmailReplyBlock(GmailBase):
async def _reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
headers = {
h["name"].lower(): h["value"]
for h in parent.get("payload", {}).get("headers", [])
}
if not (input_data.to or input_data.cc or input_data.bcc):
if input_data.replyAll:
recipients = [parseaddr(headers.get("from", ""))[1]]
recipients += [
addr for _, addr in getaddresses([headers.get("to", "")])
]
recipients += [
addr for _, addr in getaddresses([headers.get("cc", "")])
]
dedup: list[str] = []
for r in recipients:
if r and r not in dedup:
dedup.append(r)
input_data.to = dedup
else:
sender = parseaddr(headers.get("reply-to", headers.get("from", "")))[1]
input_data.to = [sender] if sender else []
subject = input_data.subject or (f"Re: {headers.get('subject', '')}".strip())
references = headers.get("references", "").split()
if headers.get("message-id"):
references.append(headers["message-id"])
msg = MIMEMultipart()
if input_data.to:
msg["To"] = ", ".join(input_data.to)
if input_data.cc:
msg["Cc"] = ", ".join(input_data.cc)
if input_data.bcc:
msg["Bcc"] = ", ".join(input_data.bcc)
msg["Subject"] = subject
if headers.get("message-id"):
msg["In-Reply-To"] = headers["message-id"]
if references:
msg["References"] = " ".join(references)
# Use the new helper function for consistent content type handling
msg.attach(_make_mime_text(input_data.body, input_data.content_type))
for attach in input_data.attachments:
local_path = await store_media_file(
user_id=user_id,
graph_exec_id=graph_exec_id,
file=attach,
return_content=False,
)
abs_path = get_exec_file_path(graph_exec_id, local_path)
part = MIMEBase("application", "octet-stream")
with open(abs_path, "rb") as f:
part.set_payload(f.read())
encoders.encode_base64(part)
part.add_header(
"Content-Disposition", f"attachment; filename={Path(abs_path).name}"
)
msg.attach(part)
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
# Send the message
return await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"threadId": input_data.threadId, "raw": raw})
.send(userId="me", body={"threadId": thread_id, "raw": raw})
.execute()
)
class GmailDraftReplyBlock(GmailBase):
"""
Creates draft replies to Gmail threads with intelligent content type detection.
Features:
- Automatic HTML detection: Draft replies containing HTML tags are formatted as text/html
- No hard-wrap for plain text: Plain text draft replies preserve natural line flow
- Manual content type override: Use content_type parameter to force specific format
- Reply-all functionality: Option to reply to all original recipients
- Thread preservation: Maintains proper email threading with headers
- Full Unicode/emoji support with UTF-8 encoding
"""
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.modify",
"https://www.googleapis.com/auth/gmail.readonly",
]
)
threadId: str = SchemaField(description="Thread ID to reply in")
parentMessageId: str = SchemaField(
description="ID of the message being replied to"
)
to: list[str] = SchemaField(description="To recipients", default_factory=list)
cc: list[str] = SchemaField(description="CC recipients", default_factory=list)
bcc: list[str] = SchemaField(description="BCC recipients", default_factory=list)
replyAll: bool = SchemaField(
description="Reply to all original recipients", default=False
)
subject: str = SchemaField(description="Email subject", default="")
body: str = SchemaField(description="Email body (plain text or HTML)")
content_type: Optional[Literal["auto", "plain", "html"]] = SchemaField(
description="Content type: 'auto' (detects HTML; the default behavior when unset), 'plain', or 'html'",
default=None,
advanced=True,
)
attachments: list[MediaFileType] = SchemaField(
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
draftId: str = SchemaField(description="Created draft ID")
messageId: str = SchemaField(description="Draft message ID")
threadId: str = SchemaField(description="Thread ID")
status: str = SchemaField(description="Draft creation status")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
id="d7a9f3e2-8b4c-4d6f-9e1a-3c5b7f8d2a6e",
description="Create draft replies to Gmail threads with automatic HTML detection and proper text formatting. Plain text draft replies maintain natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and formatted correctly.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailDraftReplyBlock.Input,
output_schema=GmailDraftReplyBlock.Output,
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
test_input={
"threadId": "t1",
"parentMessageId": "m1",
"body": "Thanks for your message. I'll review and get back to you.",
"replyAll": False,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("draftId", "draft1"),
("messageId", "m2"),
("threadId", "t1"),
("status", "draft_created"),
],
test_mock={
"_create_draft_reply": lambda *args, **kwargs: {
"id": "draft1",
"message": {"id": "m2", "threadId": "t1"},
}
},
)
async def run(
self,
input_data: Input,
*,
credentials: GoogleCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
service = self._build_service(credentials, **kwargs)
draft = await self._create_draft_reply(
service,
input_data,
graph_exec_id,
user_id,
)
yield "draftId", draft["id"]
yield "messageId", draft["message"]["id"]
yield "threadId", draft["message"].get("threadId", input_data.threadId)
yield "status", "draft_created"
async def _create_draft_reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
# Create draft with proper thread association
draft = await asyncio.to_thread(
lambda: service.users()
.drafts()
.create(
userId="me",
body={
"message": {
"threadId": thread_id,
"raw": raw,
}
},
)
.execute()
)
return draft
class GmailGetProfileBlock(GmailBase):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(

View File

@@ -30,6 +30,7 @@ TEST_CREDENTIALS_INPUT = {
class IdeogramModelName(str, Enum):
V3 = "V_3"
V2 = "V_2"
V1 = "V_1"
V1_TURBO = "V_1_TURBO"
@@ -95,8 +96,8 @@ class IdeogramModelBlock(Block):
title="Prompt",
)
ideogram_model_name: IdeogramModelName = SchemaField(
description="The name of the Image Generation Model, e.g., V_2",
default=IdeogramModelName.V2,
description="The name of the Image Generation Model, e.g., V_3",
default=IdeogramModelName.V3,
title="Image Generation Model",
advanced=False,
)
@@ -236,6 +237,111 @@ class IdeogramModelBlock(Block):
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
# Use V3 endpoint for V3 model, legacy endpoint for others
if model_name == "V_3":
return await self._run_model_v3(
api_key,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
else:
return await self._run_model_legacy(
api_key,
model_name,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
async def _run_model_v3(
self,
api_key: SecretStr,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/v1/ideogram-v3/generate"
headers = {
"Api-Key": api_key.get_secret_value(),
"Content-Type": "application/json",
}
# Map legacy aspect ratio values to V3 format
aspect_ratio_map = {
"ASPECT_10_16": "10x16",
"ASPECT_16_10": "16x10",
"ASPECT_9_16": "9x16",
"ASPECT_16_9": "16x9",
"ASPECT_3_2": "3x2",
"ASPECT_2_3": "2x3",
"ASPECT_4_3": "4x3",
"ASPECT_3_4": "3x4",
"ASPECT_1_1": "1x1",
"ASPECT_1_3": "1x3",
"ASPECT_3_1": "3x1",
# Additional V3 supported ratios
"ASPECT_1_2": "1x2",
"ASPECT_2_1": "2x1",
"ASPECT_4_5": "4x5",
"ASPECT_5_4": "5x4",
}
v3_aspect_ratio = aspect_ratio_map.get(
aspect_ratio, "1x1"
) # Default to 1x1 if not found
# Use JSON for V3 endpoint (simpler than multipart/form-data)
data: Dict[str, Any] = {
"prompt": prompt,
"aspect_ratio": v3_aspect_ratio,
"magic_prompt": magic_prompt_option,
"style_type": style_type,
}
if seed is not None:
data["seed"] = seed
if negative_prompt:
data["negative_prompt"] = negative_prompt
# Note: V3 endpoint may have different color palette support
# For now, we'll omit color palettes for V3 to avoid errors
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with V3 endpoint: {str(e)}")
async def _run_model_legacy(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/generate"
headers = {
@@ -249,28 +355,33 @@ class IdeogramModelBlock(Block):
"model": model_name,
"aspect_ratio": aspect_ratio,
"magic_prompt_option": magic_prompt_option,
"style_type": style_type,
}
}
# Only add style_type for V2, V2_TURBO, and V3 models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO", "V_3"]:
data["image_request"]["style_type"] = style_type
if seed is not None:
data["image_request"]["seed"] = seed
if negative_prompt:
data["image_request"]["negative_prompt"] = negative_prompt
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
# Only add color palette for V2 and V2_TURBO models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO"]:
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image: {str(e)}")
raise Exception(f"Failed to fetch image with legacy endpoint: {str(e)}")
async def upscale_image(self, api_key: SecretStr, image_url: str):
url = "https://api.ideogram.ai/upscale"

View File

@@ -10,7 +10,6 @@ from backend.util.settings import Config
from backend.util.text import TextFormatter
from backend.util.type import LongTextType, MediaFileType, ShortTextType
formatter = TextFormatter()
config = Config()
@@ -132,6 +131,11 @@ class AgentOutputBlock(Block):
default="",
advanced=True,
)
escape_html: bool = SchemaField(
default=False,
advanced=True,
description="Whether to escape special characters in the inserted values to be HTML-safe. Enable for HTML output, disable for plain text.",
)
advanced: bool = SchemaField(
description="Whether to treat the output as advanced.",
default=False,
@@ -193,6 +197,7 @@ class AgentOutputBlock(Block):
"""
if input_data.format:
try:
formatter = TextFormatter(autoescape=input_data.escape_html)
yield "output", formatter.format_string(
input_data.format, {input_data.name: input_data.value}
)
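The new escape_html input simply switches the formatter's autoescaping. An illustrative sketch, assuming TextFormatter wraps a Jinja-style engine and that templates reference values as {{ name }} (both are assumptions; only the constructor flag and format_string appear in this diff):

from backend.util.text import TextFormatter

values = {"summary": "Fish & Chips <b>deal</b>"}
plain = TextFormatter(autoescape=False).format_string("Out: {{ summary }}", values)
html = TextFormatter(autoescape=True).format_string("Out: {{ summary }}", values)
# plain -> Out: Fish & Chips <b>deal</b>                  (verbatim)
# html  -> Out: Fish &amp; Chips &lt;b&gt;deal&lt;/b&gt;  (HTML-escaped)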

View File

@@ -1,5 +1,9 @@
# This file contains a lot of prompt block strings that would trigger "line too long"
# flake8: noqa: E501
import ast
import logging
import re
import secrets
from abc import ABC
from enum import Enum, EnumMeta
from json import JSONDecodeError
@@ -27,7 +31,7 @@ from backend.util.prompt import compress_prompt, estimate_token_count
from backend.util.text import TextFormatter
logger = TruncatedLogger(logging.getLogger(__name__), "[LLM-Block]")
fmt = TextFormatter()
fmt = TextFormatter(autoescape=False)
LLMProviderName = Literal[
ProviderName.AIML_API,
@@ -204,13 +208,13 @@ MODEL_METADATA = {
"anthropic", 200000, 32000
), # claude-opus-4-1-20250805
LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 32000
), # claude-4-opus-20250514
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 64000
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 8192
"anthropic", 200000, 64000
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata(
"anthropic", 200000, 8192
@@ -382,7 +386,9 @@ def extract_openai_tool_calls(response) -> list[ToolContentBlock] | None:
return None
def get_parallel_tool_calls_param(llm_model: LlmModel, parallel_tool_calls):
def get_parallel_tool_calls_param(
llm_model: LlmModel, parallel_tool_calls: bool | None
):
"""Get the appropriate parallel_tool_calls parameter for OpenAI-compatible APIs."""
if llm_model.startswith("o") or parallel_tool_calls is None:
return openai.NOT_GIVEN
@@ -393,8 +399,8 @@ async def llm_call(
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
json_format: bool,
max_tokens: int | None,
force_json_output: bool = False,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
parallel_tool_calls=None,
@@ -407,7 +413,7 @@ async def llm_call(
credentials: The API key credentials to use.
llm_model: The LLM model to use.
prompt: The prompt to send to the LLM.
json_format: Whether the response should be in JSON format.
force_json_output: Whether the response should be in JSON format.
max_tokens: The maximum number of tokens to generate in the chat completion.
tools: The tools to use in the chat completion.
ollama_host: The host for ollama to use.
@@ -446,7 +452,7 @@ async def llm_call(
llm_model, parallel_tool_calls
)
if json_format:
if force_json_output:
response_format = {"type": "json_object"}
response = await oai_client.chat.completions.create(
@@ -559,7 +565,7 @@ async def llm_call(
raise ValueError("Groq does not support tools.")
client = AsyncGroq(api_key=credentials.api_key.get_secret_value())
response_format = {"type": "json_object"} if json_format else None
response_format = {"type": "json_object"} if force_json_output else None
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
@@ -717,7 +723,7 @@ async def llm_call(
)
response_format = None
if json_format:
if force_json_output:
response_format = {"type": "json_object"}
parallel_tool_calls_param = get_parallel_tool_calls_param(
@@ -780,6 +786,17 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
description="The language model to use for answering the prompt.",
advanced=False,
)
force_json_output: bool = SchemaField(
title="Restrict LLM to pure JSON output",
default=False,
description=(
"Whether to force the LLM to produce a JSON-only response. "
"This can increase the block's reliability, "
"but may also reduce the quality of the response "
"because it prohibits the LLM from reasoning "
"before providing its JSON response."
),
)
credentials: AICredentials = AICredentialsField()
sys_prompt: str = SchemaField(
title="System Prompt",
@@ -848,17 +865,18 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
"llm_call": lambda *args, **kwargs: LLMResponse(
raw_response="",
prompt=[""],
response=json.dumps(
{
"key1": "key1Value",
"key2": "key2Value",
}
response=(
'<json_output id="test123456">{\n'
' "key1": "key1Value",\n'
' "key2": "key2Value"\n'
"}</json_output>"
),
tool_calls=None,
prompt_tokens=0,
completion_tokens=0,
reasoning=None,
)
),
"get_collision_proof_output_tag_id": lambda *args: "test123456",
},
)
@@ -867,9 +885,9 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
json_format: bool,
compress_prompt_to_fit: bool,
max_tokens: int | None,
force_json_output: bool = False,
compress_prompt_to_fit: bool = True,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
) -> LLMResponse:
@@ -882,8 +900,8 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
json_format=json_format,
max_tokens=max_tokens,
force_json_output=force_json_output,
tools=tools,
ollama_host=ollama_host,
compress_prompt_to_fit=compress_prompt_to_fit,
@@ -895,10 +913,6 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(f"Calling LLM with input data: {input_data}")
prompt = [json.to_dict(p) for p in input_data.conversation_history]
def trim_prompt(s: str) -> str:
lines = s.strip().split("\n")
return "\n".join([line.strip().lstrip("|") for line in lines])
values = input_data.prompt_values
if values:
input_data.prompt = fmt.format_string(input_data.prompt, values)
@@ -907,27 +921,15 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
if input_data.sys_prompt:
prompt.append({"role": "system", "content": input_data.sys_prompt})
# Use a one-time unique tag to prevent collisions with user/LLM content
output_tag_id = self.get_collision_proof_output_tag_id()
output_tag_start = f'<json_output id="{output_tag_id}">'
if input_data.expected_format:
expected_format = [
f'"{k}": "{v}"' for k, v in input_data.expected_format.items()
]
if input_data.list_result:
format_prompt = (
f'"results": [\n {{\n {", ".join(expected_format)}\n }}\n]'
)
else:
format_prompt = "\n ".join(expected_format)
sys_prompt = trim_prompt(
f"""
|Reply strictly only in the following JSON format:
|{{
| {format_prompt}
|}}
|
|Ensure the response is valid JSON. Do not include any additional text outside of the JSON.
|If you cannot provide all the keys, provide an empty string for the values you cannot answer.
"""
sys_prompt = self.response_format_instructions(
input_data.expected_format,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append({"role": "system", "content": sys_prompt})
@@ -945,18 +947,21 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
except JSONDecodeError as e:
return f"JSON decode error: {e}"
logger.debug(f"LLM request: {prompt}")
retry_prompt = ""
error_feedback_message = ""
llm_model = input_data.model
for retry_count in range(input_data.retry):
logger.debug(f"LLM request: {prompt}")
try:
llm_response = await self.llm_call(
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
compress_prompt_to_fit=input_data.compress_prompt_to_fit,
json_format=bool(input_data.expected_format),
force_json_output=(
input_data.force_json_output
and bool(input_data.expected_format)
),
ollama_host=input_data.ollama_host,
max_tokens=input_data.max_tokens,
)
@@ -970,16 +975,55 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(f"LLM attempt-{retry_count} response: {response_text}")
if input_data.expected_format:
try:
response_obj = self.get_json_from_response(
response_text,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
except (ValueError, JSONDecodeError) as parse_error:
censored_response = re.sub(r"[A-Za-z0-9]", "*", response_text)
response_snippet = (
f"{censored_response[:50]}...{censored_response[-30:]}"
)
logger.warning(
f"Error getting JSON from LLM response: {parse_error}\n\n"
f"Response start+end: `{response_snippet}`"
)
prompt.append({"role": "assistant", "content": response_text})
response_obj = json.loads(response_text)
error_feedback_message = self.invalid_response_feedback(
parse_error,
was_parseable=False,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append(
{"role": "user", "content": error_feedback_message}
)
continue
# Handle object response for `force_json_output`+`list_result`
if input_data.list_result and isinstance(response_obj, dict):
if "results" in response_obj:
response_obj = response_obj.get("results", [])
elif len(response_obj) == 1:
response_obj = list(response_obj.values())
if "results" in response_obj and isinstance(
response_obj["results"], list
):
response_obj = response_obj["results"]
else:
error_feedback_message = (
"Expected an array of objects in the 'results' key, "
f"but got: {response_obj}"
)
prompt.append(
{"role": "assistant", "content": response_text}
)
prompt.append(
{"role": "user", "content": error_feedback_message}
)
continue
response_error = "\n".join(
validation_errors = "\n".join(
[
validation_error
for response_item in (
@@ -991,7 +1035,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
]
)
if not response_error:
if not validation_errors:
self.merge_stats(
NodeExecutionStats(
llm_call_count=retry_count + 1,
@@ -1001,6 +1045,16 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
yield "response", response_obj
yield "prompt", self.prompt
return
prompt.append({"role": "assistant", "content": response_text})
error_feedback_message = self.invalid_response_feedback(
validation_errors,
was_parseable=True,
list_mode=input_data.list_result,
pure_json_mode=input_data.force_json_output,
output_tag_start=output_tag_start,
)
prompt.append({"role": "user", "content": error_feedback_message})
else:
self.merge_stats(
NodeExecutionStats(
@@ -1011,21 +1065,6 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
yield "response", {"response": response_text}
yield "prompt", self.prompt
return
retry_prompt = trim_prompt(
f"""
|This is your previous error response:
|--
|{response_text}
|--
|
|And this is the error:
|--
|{response_error}
|--
"""
)
prompt.append({"role": "user", "content": retry_prompt})
except Exception as e:
logger.exception(f"Error calling LLM: {e}")
if (
@@ -1038,9 +1077,133 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
logger.debug(
f"Reducing max_tokens to {input_data.max_tokens} for next attempt"
)
retry_prompt = f"Error calling LLM: {e}"
# Don't add retry prompt for token limit errors,
# just retry with lower maximum output tokens
raise RuntimeError(retry_prompt)
error_feedback_message = f"Error calling LLM: {e}"
raise RuntimeError(error_feedback_message)
def response_format_instructions(
self,
expected_object_format: dict[str, str],
*,
list_mode: bool,
pure_json_mode: bool,
output_tag_start: str,
) -> str:
expected_output_format = json.dumps(expected_object_format, indent=2)
output_type = "object" if not list_mode else "array"
outer_output_type = "object" if pure_json_mode else output_type
if output_type == "array":
indented_obj_format = expected_output_format.replace("\n", "\n ")
expected_output_format = f"[\n {indented_obj_format},\n ...\n]"
if pure_json_mode:
indented_list_format = expected_output_format.replace("\n", "\n ")
expected_output_format = (
"{\n"
' "reasoning": "... (optional)",\n' # for better performance
f' "results": {indented_list_format}\n'
"}"
)
# Preserve indentation in prompt
expected_output_format = expected_output_format.replace("\n", "\n|")
# Prepare prompt
if not pure_json_mode:
expected_output_format = (
f"{output_tag_start}\n{expected_output_format}\n</json_output>"
)
instructions = f"""
|In your response you MUST include a valid JSON {outer_output_type} strictly following this format:
|{expected_output_format}
|
|If you cannot provide all the keys, you MUST provide an empty string for the values you cannot answer.
""".strip()
if not pure_json_mode:
instructions += f"""
|
|You MUST enclose your final JSON answer in {output_tag_start}...</json_output> tags, even if the user specifies a different tag.
|There MUST be exactly ONE {output_tag_start}...</json_output> block in your response, which MUST ONLY contain the JSON {outer_output_type} and nothing else. Other text outside this block is allowed.
""".strip()
return trim_prompt(instructions)
def invalid_response_feedback(
self,
error,
*,
was_parseable: bool,
list_mode: bool,
pure_json_mode: bool,
output_tag_start: str,
) -> str:
outer_output_type = "object" if not list_mode or pure_json_mode else "array"
if was_parseable:
complaint = f"Your previous response did not match the expected {outer_output_type} format."
else:
complaint = f"Your previous response did not contain a parseable JSON {outer_output_type}."
indented_parse_error = str(error).replace("\n", "\n|")
instruction = (
f"Please provide a {output_tag_start}...</json_output> block containing a"
if not pure_json_mode
else "Please provide a"
) + f" valid JSON {outer_output_type} that matches the expected format."
return trim_prompt(
f"""
|{complaint}
|
|{indented_parse_error}
|
|{instruction}
"""
)
def get_json_from_response(
self, response_text: str, *, pure_json_mode: bool, output_tag_start: str
) -> dict[str, Any] | list[dict[str, Any]]:
if pure_json_mode:
# Handle pure JSON responses
try:
return json.loads(response_text)
except JSONDecodeError as first_parse_error:
# If that didn't work, try finding the { and } to deal with possible ```json fences etc.
json_start = response_text.find("{")
json_end = response_text.rfind("}")
try:
return json.loads(response_text[json_start : json_end + 1])
except JSONDecodeError:
# Raise the original error, as it's more likely to be relevant
raise first_parse_error from None
if output_tag_start not in response_text:
raise ValueError(
"Response does not contain the expected "
f"{output_tag_start}...</json_output> block."
)
json_output = (
response_text.split(output_tag_start, 1)[1]
.rsplit("</json_output>", 1)[0]
.strip()
)
return json.loads(json_output)
def get_collision_proof_output_tag_id(self) -> str:
return secrets.token_hex(8)
def trim_prompt(s: str) -> str:
"""Removes indentation up to and including `|` from a multi-line prompt."""
lines = s.strip().split("\n")
return "\n".join([line.strip().lstrip("|") for line in lines])
class AITextGeneratorBlock(AIBlockBase):

View File

@@ -0,0 +1,536 @@
"""
Notion API helper functions and client for making authenticated requests.
"""
from typing import Any, Dict, List, Optional
from backend.data.model import OAuth2Credentials
from backend.util.request import Requests
NOTION_VERSION = "2022-06-28"
class NotionAPIException(Exception):
"""Exception raised for Notion API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class NotionClient:
"""Client for interacting with the Notion API."""
def __init__(self, credentials: OAuth2Credentials):
self.credentials = credentials
self.headers = {
"Authorization": credentials.auth_header(),
"Notion-Version": NOTION_VERSION,
"Content-Type": "application/json",
}
self.requests = Requests()
async def get_page(self, page_id: str) -> dict:
"""
Fetch a page by ID.
Args:
page_id: The ID of the page to fetch.
Returns:
The page object from Notion API.
"""
url = f"https://api.notion.com/v1/pages/{page_id}"
response = await self.requests.get(url, headers=self.headers)
if not response.ok:
raise NotionAPIException(
f"Failed to fetch page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def get_blocks(self, block_id: str, recursive: bool = True) -> List[dict]:
"""
Fetch all blocks from a page or block.
Args:
block_id: The ID of the page or block to fetch children from.
recursive: Whether to fetch nested blocks recursively.
Returns:
List of block objects.
"""
blocks = []
cursor = None
while True:
url = f"https://api.notion.com/v1/blocks/{block_id}/children"
params = {"page_size": 100}
if cursor:
params["start_cursor"] = cursor
response = await self.requests.get(url, headers=self.headers, params=params)
if not response.ok:
raise NotionAPIException(
f"Failed to fetch blocks: {response.status} - {response.text()}",
response.status,
)
data = response.json()
current_blocks = data.get("results", [])
# If recursive, fetch children for blocks that have them
if recursive:
for block in current_blocks:
if block.get("has_children"):
block["children"] = await self.get_blocks(
block["id"], recursive=True
)
blocks.extend(current_blocks)
if not data.get("has_more"):
break
cursor = data.get("next_cursor")
return blocks
async def query_database(
self,
database_id: str,
filter_obj: Optional[dict] = None,
sorts: Optional[List[dict]] = None,
page_size: int = 100,
) -> dict:
"""
Query a database with optional filters and sorts.
Args:
database_id: The ID of the database to query.
filter_obj: Optional filter object for the query.
sorts: Optional list of sort objects.
page_size: Number of results per page.
Returns:
Query results including pages and pagination info.
"""
url = f"https://api.notion.com/v1/databases/{database_id}/query"
payload: Dict[str, Any] = {"page_size": page_size}
if filter_obj:
payload["filter"] = filter_obj
if sorts:
payload["sorts"] = sorts
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Failed to query database: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def create_page(
self,
parent: dict,
properties: dict,
children: Optional[List[dict]] = None,
icon: Optional[dict] = None,
cover: Optional[dict] = None,
) -> dict:
"""
Create a new page.
Args:
parent: Parent object (page_id or database_id).
properties: Page properties.
children: Optional list of block children.
icon: Optional icon object.
cover: Optional cover object.
Returns:
The created page object.
"""
url = "https://api.notion.com/v1/pages"
payload: Dict[str, Any] = {"parent": parent, "properties": properties}
if children:
payload["children"] = children
if icon:
payload["icon"] = icon
if cover:
payload["cover"] = cover
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Failed to create page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def update_page(self, page_id: str, properties: dict) -> dict:
"""
Update a page's properties.
Args:
page_id: The ID of the page to update.
properties: Properties to update.
Returns:
The updated page object.
"""
url = f"https://api.notion.com/v1/pages/{page_id}"
response = await self.requests.patch(
url, headers=self.headers, json={"properties": properties}
)
if not response.ok:
raise NotionAPIException(
f"Failed to update page: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def append_blocks(self, block_id: str, children: List[dict]) -> dict:
"""
Append blocks to a page or block.
Args:
block_id: The ID of the page or block to append to.
children: List of block objects to append.
Returns:
Response with the created blocks.
"""
url = f"https://api.notion.com/v1/blocks/{block_id}/children"
response = await self.requests.patch(
url, headers=self.headers, json={"children": children}
)
if not response.ok:
raise NotionAPIException(
f"Failed to append blocks: {response.status} - {response.text()}",
response.status,
)
return response.json()
async def search(
self,
query: str = "",
filter_obj: Optional[dict] = None,
sort: Optional[dict] = None,
page_size: int = 100,
) -> dict:
"""
Search for pages and databases.
Args:
query: Search query text.
filter_obj: Optional filter object.
sort: Optional sort object.
page_size: Number of results per page.
Returns:
Search results.
"""
url = "https://api.notion.com/v1/search"
payload: Dict[str, Any] = {"page_size": page_size}
if query:
payload["query"] = query
if filter_obj:
payload["filter"] = filter_obj
if sort:
payload["sort"] = sort
response = await self.requests.post(url, headers=self.headers, json=payload)
if not response.ok:
raise NotionAPIException(
f"Search failed: {response.status} - {response.text()}", response.status
)
return response.json()
# Conversion helper functions
def parse_rich_text(rich_text_array: List[dict]) -> str:
"""
Extract plain text from a Notion rich text array.
Args:
rich_text_array: Array of rich text objects from Notion.
Returns:
Plain text string.
"""
if not rich_text_array:
return ""
text_parts = []
for text_obj in rich_text_array:
if "plain_text" in text_obj:
text_parts.append(text_obj["plain_text"])
return "".join(text_parts)
def rich_text_to_markdown(rich_text_array: List[dict]) -> str:
"""
Convert Notion rich text array to markdown with formatting.
Args:
rich_text_array: Array of rich text objects from Notion.
Returns:
Markdown formatted string.
"""
if not rich_text_array:
return ""
markdown_parts = []
for text_obj in rich_text_array:
text = text_obj.get("plain_text", "")
annotations = text_obj.get("annotations", {})
# Apply formatting based on annotations
if annotations.get("code"):
text = f"`{text}`"
else:
if annotations.get("bold"):
text = f"**{text}**"
if annotations.get("italic"):
text = f"*{text}*"
if annotations.get("strikethrough"):
text = f"~~{text}~~"
if annotations.get("underline"):
text = f"<u>{text}</u>"
# Handle links
if text_obj.get("href"):
text = f"[{text}]({text_obj['href']})"
markdown_parts.append(text)
return "".join(markdown_parts)
def block_to_markdown(block: dict, indent_level: int = 0) -> str:
"""
Convert a single Notion block to markdown.
Args:
block: Block object from Notion API.
indent_level: Current indentation level for nested blocks.
Returns:
Markdown string representation of the block.
"""
block_type = block.get("type")
indent = " " * indent_level
markdown_lines = []
# Handle different block types
if block_type == "paragraph":
text = rich_text_to_markdown(block["paragraph"].get("rich_text", []))
if text:
markdown_lines.append(f"{indent}{text}")
elif block_type == "heading_1":
text = parse_rich_text(block["heading_1"].get("rich_text", []))
markdown_lines.append(f"{indent}# {text}")
elif block_type == "heading_2":
text = parse_rich_text(block["heading_2"].get("rich_text", []))
markdown_lines.append(f"{indent}## {text}")
elif block_type == "heading_3":
text = parse_rich_text(block["heading_3"].get("rich_text", []))
markdown_lines.append(f"{indent}### {text}")
elif block_type == "bulleted_list_item":
text = rich_text_to_markdown(block["bulleted_list_item"].get("rich_text", []))
markdown_lines.append(f"{indent}- {text}")
elif block_type == "numbered_list_item":
text = rich_text_to_markdown(block["numbered_list_item"].get("rich_text", []))
# Note: This is simplified - proper numbering would need context
markdown_lines.append(f"{indent}1. {text}")
elif block_type == "to_do":
text = rich_text_to_markdown(block["to_do"].get("rich_text", []))
checked = "x" if block["to_do"].get("checked") else " "
markdown_lines.append(f"{indent}- [{checked}] {text}")
elif block_type == "toggle":
text = rich_text_to_markdown(block["toggle"].get("rich_text", []))
markdown_lines.append(f"{indent}<details>")
markdown_lines.append(f"{indent}<summary>{text}</summary>")
markdown_lines.append(f"{indent}")
# Process children if they exist
if block.get("children"):
for child in block["children"]:
child_markdown = block_to_markdown(child, indent_level + 1)
if child_markdown:
markdown_lines.append(child_markdown)
markdown_lines.append(f"{indent}</details>")
elif block_type == "code":
code = parse_rich_text(block["code"].get("rich_text", []))
language = block["code"].get("language", "")
markdown_lines.append(f"{indent}```{language}")
markdown_lines.append(f"{indent}{code}")
markdown_lines.append(f"{indent}```")
elif block_type == "quote":
text = rich_text_to_markdown(block["quote"].get("rich_text", []))
markdown_lines.append(f"{indent}> {text}")
elif block_type == "divider":
markdown_lines.append(f"{indent}---")
elif block_type == "image":
image = block["image"]
url = image.get("external", {}).get("url") or image.get("file", {}).get(
"url", ""
)
caption = parse_rich_text(image.get("caption", []))
alt_text = caption if caption else "Image"
markdown_lines.append(f"{indent}![{alt_text}]({url})")
if caption:
markdown_lines.append(f"{indent}*{caption}*")
elif block_type == "video":
video = block["video"]
url = video.get("external", {}).get("url") or video.get("file", {}).get(
"url", ""
)
caption = parse_rich_text(video.get("caption", []))
markdown_lines.append(f"{indent}[Video]({url})")
if caption:
markdown_lines.append(f"{indent}*{caption}*")
elif block_type == "file":
file = block["file"]
url = file.get("external", {}).get("url") or file.get("file", {}).get("url", "")
caption = parse_rich_text(file.get("caption", []))
name = caption if caption else "File"
markdown_lines.append(f"{indent}[{name}]({url})")
elif block_type == "bookmark":
url = block["bookmark"].get("url", "")
caption = parse_rich_text(block["bookmark"].get("caption", []))
markdown_lines.append(f"{indent}[{caption if caption else url}]({url})")
elif block_type == "equation":
expression = block["equation"].get("expression", "")
markdown_lines.append(f"{indent}$${expression}$$")
elif block_type == "callout":
text = rich_text_to_markdown(block["callout"].get("rich_text", []))
icon = block["callout"].get("icon", {})
if icon.get("emoji"):
markdown_lines.append(f"{indent}> {icon['emoji']} {text}")
else:
markdown_lines.append(f"{indent}> {text}")
elif block_type == "child_page":
title = block["child_page"].get("title", "Untitled")
markdown_lines.append(f"{indent}📄 [{title}](notion://page/{block['id']})")
elif block_type == "child_database":
title = block["child_database"].get("title", "Untitled Database")
markdown_lines.append(f"{indent}🗂️ [{title}](notion://database/{block['id']})")
elif block_type == "table":
# Tables are complex - for now just indicate there's a table
markdown_lines.append(
f"{indent}[Table with {block['table'].get('table_width', 0)} columns]"
)
elif block_type == "column_list":
# Process columns
if block.get("children"):
markdown_lines.append(f"{indent}<div style='display: flex'>")
for column in block["children"]:
markdown_lines.append(f"{indent}<div style='flex: 1'>")
if column.get("children"):
for child in column["children"]:
child_markdown = block_to_markdown(child, indent_level + 1)
if child_markdown:
markdown_lines.append(child_markdown)
markdown_lines.append(f"{indent}</div>")
markdown_lines.append(f"{indent}</div>")
# Handle children for blocks that haven't been processed yet
elif block.get("children") and block_type not in ["toggle", "column_list"]:
for child in block["children"]:
child_markdown = block_to_markdown(child, indent_level)
if child_markdown:
markdown_lines.append(child_markdown)
return "\n".join(markdown_lines) if markdown_lines else ""
def blocks_to_markdown(blocks: List[dict]) -> str:
"""
Convert a list of Notion blocks to a markdown document.
Args:
blocks: List of block objects from Notion API.
Returns:
Complete markdown document as a string.
"""
markdown_parts = []
for i, block in enumerate(blocks):
markdown = block_to_markdown(block)
if markdown:
markdown_parts.append(markdown)
# Add spacing between top-level blocks (except lists)
if i < len(blocks) - 1:
next_type = blocks[i + 1].get("type", "")
current_type = block.get("type", "")
# Don't add extra spacing between list items
list_types = {"bulleted_list_item", "numbered_list_item", "to_do"}
if not (current_type in list_types and next_type in list_types):
markdown_parts.append("")
return "\n".join(markdown_parts)
def extract_page_title(page: dict) -> str:
"""
Extract the title from a Notion page object.
Args:
page: Page object from Notion API.
Returns:
Page title as a string.
"""
properties = page.get("properties", {})
# Find the title property (it has type "title")
for prop_name, prop_value in properties.items():
if prop_value.get("type") == "title":
return parse_rich_text(prop_value.get("title", []))
return "Untitled"

View File

@@ -0,0 +1,42 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput, OAuth2Credentials
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
secrets = Secrets()
NOTION_OAUTH_IS_CONFIGURED = bool(
secrets.notion_client_id and secrets.notion_client_secret
)
NotionCredentials = OAuth2Credentials
NotionCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.NOTION], Literal["oauth2"]
]
def NotionCredentialsField() -> NotionCredentialsInput:
"""Creates a Notion OAuth2 credentials field."""
return CredentialsField(
description="Connect your Notion account. Ensure the pages/databases are shared with the integration."
)
# Test credentials for Notion OAuth2
TEST_CREDENTIALS = OAuth2Credentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="notion",
access_token=SecretStr("test_access_token"),
title="Mock Notion OAuth",
scopes=["read_content", "insert_content", "update_content"],
username="testuser",
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}

View File

@@ -0,0 +1,360 @@
from __future__ import annotations
from typing import Any, Dict, List, Optional
from pydantic import model_validator
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionCreatePageBlock(Block):
"""Create a new page in Notion with content."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
parent_page_id: Optional[str] = SchemaField(
description="Parent page ID to create the page under. Either this OR parent_database_id is required.",
default=None,
)
parent_database_id: Optional[str] = SchemaField(
description="Parent database ID to create the page in. Either this OR parent_page_id is required.",
default=None,
)
title: str = SchemaField(
description="Title of the new page",
)
content: Optional[str] = SchemaField(
description="Content for the page. Can be plain text or markdown - will be converted to Notion blocks.",
default=None,
)
properties: Optional[Dict[str, Any]] = SchemaField(
description="Additional properties for database pages (e.g., {'Status': 'In Progress', 'Priority': 'High'})",
default=None,
)
icon_emoji: Optional[str] = SchemaField(
description="Emoji to use as the page icon (e.g., '📄', '🚀')", default=None
)
@model_validator(mode="after")
def validate_parent(self):
"""Ensure either parent_page_id or parent_database_id is provided."""
if not self.parent_page_id and not self.parent_database_id:
raise ValueError(
"Either parent_page_id or parent_database_id must be provided"
)
if self.parent_page_id and self.parent_database_id:
raise ValueError(
"Only one of parent_page_id or parent_database_id should be provided, not both"
)
return self
class Output(BlockSchema):
page_id: str = SchemaField(description="ID of the created page.")
page_url: str = SchemaField(description="URL of the created page.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="c15febe0-66ce-4c6f-aebd-5ab351653804",
description="Create a new page in Notion. Requires EITHER a parent_page_id OR parent_database_id. Supports markdown content.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionCreatePageBlock.Input,
output_schema=NotionCreatePageBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"parent_page_id": "00000000-0000-0000-0000-000000000000",
"title": "Test Page",
"content": "This is test content.",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("page_id", "12345678-1234-1234-1234-123456789012"),
(
"page_url",
"https://notion.so/Test-Page-12345678123412341234123456789012",
),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"create_page": lambda *args, **kwargs: (
"12345678-1234-1234-1234-123456789012",
"https://notion.so/Test-Page-12345678123412341234123456789012",
)
},
)
@staticmethod
def _markdown_to_blocks(content: str) -> List[dict]:
"""Convert markdown content to Notion block objects."""
if not content:
return []
blocks = []
lines = content.split("\n")
i = 0
while i < len(lines):
line = lines[i]
# Skip empty lines
if not line.strip():
i += 1
continue
# Headings
if line.startswith("### "):
blocks.append(
{
"type": "heading_3",
"heading_3": {
"rich_text": [
{"type": "text", "text": {"content": line[4:].strip()}}
]
},
}
)
elif line.startswith("## "):
blocks.append(
{
"type": "heading_2",
"heading_2": {
"rich_text": [
{"type": "text", "text": {"content": line[3:].strip()}}
]
},
}
)
elif line.startswith("# "):
blocks.append(
{
"type": "heading_1",
"heading_1": {
"rich_text": [
{"type": "text", "text": {"content": line[2:].strip()}}
]
},
}
)
# Bullet points
elif line.strip().startswith("- "):
blocks.append(
{
"type": "bulleted_list_item",
"bulleted_list_item": {
"rich_text": [
{
"type": "text",
"text": {"content": line.strip()[2:].strip()},
}
]
},
}
)
# Numbered list
elif line.strip() and line.strip()[0].isdigit() and ". " in line:
content_start = line.find(". ") + 2
blocks.append(
{
"type": "numbered_list_item",
"numbered_list_item": {
"rich_text": [
{
"type": "text",
"text": {"content": line[content_start:].strip()},
}
]
},
}
)
# Code block
elif line.strip().startswith("```"):
code_lines = []
language = line[3:].strip() or "plain text"
i += 1
while i < len(lines) and not lines[i].strip().startswith("```"):
code_lines.append(lines[i])
i += 1
blocks.append(
{
"type": "code",
"code": {
"rich_text": [
{
"type": "text",
"text": {"content": "\n".join(code_lines)},
}
],
"language": language,
},
}
)
# Quote
elif line.strip().startswith("> "):
blocks.append(
{
"type": "quote",
"quote": {
"rich_text": [
{
"type": "text",
"text": {"content": line.strip()[2:].strip()},
}
]
},
}
)
# Horizontal rule
elif line.strip() in ["---", "***", "___"]:
blocks.append({"type": "divider", "divider": {}})
# Regular paragraph
else:
# Bold/italic markers are passed through as plain text for now;
# a full implementation would parse them into rich text annotations.
text_content = line.strip()
rich_text = [{"type": "text", "text": {"content": text_content}}]
blocks.append(
{"type": "paragraph", "paragraph": {"rich_text": rich_text}}
)
i += 1
return blocks
@staticmethod
def _build_properties(
title: str, additional_properties: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""Build properties object for page creation."""
properties: Dict[str, Any] = {
"title": {"title": [{"type": "text", "text": {"content": title}}]}
}
if additional_properties:
for key, value in additional_properties.items():
if key.lower() == "title":
continue # Skip title as we already have it
# Try to intelligently map property types
if isinstance(value, bool):
properties[key] = {"checkbox": value}
elif isinstance(value, (int, float)):
properties[key] = {"number": value}
elif isinstance(value, list):
# Assume multi-select
properties[key] = {
"multi_select": [{"name": str(item)} for item in value]
}
elif isinstance(value, str):
# Could be select, rich_text, or other types
# For simplicity, try common patterns
if key.lower() in ["status", "priority", "type", "category"]:
properties[key] = {"select": {"name": value}}
elif key.lower() in ["url", "link"]:
properties[key] = {"url": value}
elif key.lower() in ["email"]:
properties[key] = {"email": value}
else:
properties[key] = {
"rich_text": [{"type": "text", "text": {"content": value}}]
}
return properties
@staticmethod
async def create_page(
credentials: OAuth2Credentials,
title: str,
parent_page_id: Optional[str] = None,
parent_database_id: Optional[str] = None,
content: Optional[str] = None,
properties: Optional[Dict[str, Any]] = None,
icon_emoji: Optional[str] = None,
) -> tuple[str, str]:
"""
Create a new Notion page.
Returns:
Tuple of (page_id, page_url)
"""
if not parent_page_id and not parent_database_id:
raise ValueError(
"Either parent_page_id or parent_database_id must be provided"
)
if parent_page_id and parent_database_id:
raise ValueError(
"Only one of parent_page_id or parent_database_id should be provided, not both"
)
client = NotionClient(credentials)
# Build parent object
if parent_page_id:
parent = {"type": "page_id", "page_id": parent_page_id}
else:
parent = {"type": "database_id", "database_id": parent_database_id}
# Build properties
page_properties = NotionCreatePageBlock._build_properties(title, properties)
# Convert content to blocks if provided
children = None
if content:
children = NotionCreatePageBlock._markdown_to_blocks(content)
# Build icon if provided
icon = None
if icon_emoji:
icon = {"type": "emoji", "emoji": icon_emoji}
# Create the page
result = await client.create_page(
parent=parent, properties=page_properties, children=children, icon=icon
)
page_id = result.get("id", "")
page_url = result.get("url", "")
if not page_id or not page_url:
raise ValueError("Failed to get page ID or URL from Notion response")
return page_id, page_url
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
page_id, page_url = await self.create_page(
credentials,
input_data.title,
input_data.parent_page_id,
input_data.parent_database_id,
input_data.content,
input_data.properties,
input_data.icon_emoji,
)
yield "page_id", page_id
yield "page_url", page_url
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File
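
For reference, a minimal sketch (editor's addition, not part of the diff) of what _markdown_to_blocks produces for a small markdown snippet, following the conversion branches above:

sample = "# Title\n- item one\n\nSome paragraph."
blocks = NotionCreatePageBlock._markdown_to_blocks(sample)
assert blocks == [
    {
        "type": "heading_1",
        "heading_1": {"rich_text": [{"type": "text", "text": {"content": "Title"}}]},
    },
    {
        "type": "bulleted_list_item",
        "bulleted_list_item": {
            "rich_text": [{"type": "text", "text": {"content": "item one"}}]
        },
    },
    {
        "type": "paragraph",
        "paragraph": {
            "rich_text": [{"type": "text", "text": {"content": "Some paragraph."}}]
        },
    },
]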

@@ -0,0 +1,285 @@
from __future__ import annotations
from typing import Any, Dict, List, Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, parse_rich_text
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadDatabaseBlock(Block):
"""Query a Notion database and retrieve entries with their properties."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
database_id: str = SchemaField(
description="Notion database ID. Must be accessible by the connected integration.",
)
filter_property: Optional[str] = SchemaField(
description="Property name to filter by (e.g., 'Status', 'Priority')",
default=None,
)
filter_value: Optional[str] = SchemaField(
description="Value to filter for in the specified property", default=None
)
sort_property: Optional[str] = SchemaField(
description="Property name to sort by", default=None
)
sort_direction: Optional[str] = SchemaField(
description="Sort direction: 'ascending' or 'descending'",
default="ascending",
)
limit: int = SchemaField(
description="Maximum number of entries to retrieve",
default=100,
ge=1,
le=100,
)
class Output(BlockSchema):
entries: List[Dict[str, Any]] = SchemaField(
description="List of database entries with their properties."
)
entry: Dict[str, Any] = SchemaField(
description="Individual database entry (yields one per entry found)."
)
entry_ids: List[str] = SchemaField(
description="List of entry IDs for batch operations."
)
entry_id: str = SchemaField(
description="Individual entry ID (yields one per entry found)."
)
count: int = SchemaField(description="Number of entries retrieved.")
database_title: str = SchemaField(description="Title of the database.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="fcd53135-88c9-4ba3-be50-cc6936286e6c",
description="Query a Notion database with optional filtering and sorting, returning structured entries.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadDatabaseBlock.Input,
output_schema=NotionReadDatabaseBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"database_id": "00000000-0000-0000-0000-000000000000",
"limit": 10,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"entries",
[{"Name": "Test Entry", "Status": "Active", "_id": "test-123"}],
),
("entry_ids", ["test-123"]),
(
"entry",
{"Name": "Test Entry", "Status": "Active", "_id": "test-123"},
),
("entry_id", "test-123"),
("count", 1),
("database_title", "Test Database"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"query_database": lambda *args, **kwargs: (
[{"Name": "Test Entry", "Status": "Active", "_id": "test-123"}],
1,
"Test Database",
)
},
)
@staticmethod
def _parse_property_value(prop: dict) -> Any:
"""Parse a Notion property value into a simple Python type."""
prop_type = prop.get("type")
if prop_type == "title":
return parse_rich_text(prop.get("title", []))
elif prop_type == "rich_text":
return parse_rich_text(prop.get("rich_text", []))
elif prop_type == "number":
return prop.get("number")
elif prop_type == "select":
select = prop.get("select")
return select.get("name") if select else None
elif prop_type == "multi_select":
return [item.get("name") for item in prop.get("multi_select", [])]
elif prop_type == "date":
date = prop.get("date")
if date:
return date.get("start")
return None
elif prop_type == "checkbox":
return prop.get("checkbox", False)
elif prop_type == "url":
return prop.get("url")
elif prop_type == "email":
return prop.get("email")
elif prop_type == "phone_number":
return prop.get("phone_number")
elif prop_type == "people":
return [
person.get("name", person.get("id"))
for person in prop.get("people", [])
]
elif prop_type == "files":
files = prop.get("files", [])
return [
f.get(
"name",
f.get("external", {}).get("url", f.get("file", {}).get("url")),
)
for f in files
]
elif prop_type == "relation":
return [rel.get("id") for rel in prop.get("relation", [])]
elif prop_type == "formula":
formula = prop.get("formula", {})
return formula.get(formula.get("type"))
elif prop_type == "rollup":
rollup = prop.get("rollup", {})
return rollup.get(rollup.get("type"))
elif prop_type == "created_time":
return prop.get("created_time")
elif prop_type == "created_by":
return prop.get("created_by", {}).get(
"name", prop.get("created_by", {}).get("id")
)
elif prop_type == "last_edited_time":
return prop.get("last_edited_time")
elif prop_type == "last_edited_by":
return prop.get("last_edited_by", {}).get(
"name", prop.get("last_edited_by", {}).get("id")
)
else:
# Return the raw value for unknown types
return prop
@staticmethod
def _build_filter(property_name: str, value: str) -> dict:
"""Build a simple filter object for a property."""
# This is a simplified filter - in reality, you'd need to know the property type
# For now, we'll try common filter types
return {
"or": [
{"property": property_name, "rich_text": {"contains": value}},
{"property": property_name, "title": {"contains": value}},
{"property": property_name, "select": {"equals": value}},
{"property": property_name, "multi_select": {"contains": value}},
]
}
@staticmethod
async def query_database(
credentials: OAuth2Credentials,
database_id: str,
filter_property: Optional[str] = None,
filter_value: Optional[str] = None,
sort_property: Optional[str] = None,
sort_direction: str = "ascending",
limit: int = 100,
) -> tuple[List[Dict[str, Any]], int, str]:
"""
Query a Notion database and parse the results.
Returns:
Tuple of (entries_list, count, database_title)
"""
client = NotionClient(credentials)
# Build filter if specified
filter_obj = None
if filter_property and filter_value:
filter_obj = NotionReadDatabaseBlock._build_filter(
filter_property, filter_value
)
# Build sorts if specified
sorts = None
if sort_property:
sorts = [{"property": sort_property, "direction": sort_direction}]
# Query the database
result = await client.query_database(
database_id, filter_obj=filter_obj, sorts=sorts, page_size=limit
)
# Parse the entries
entries = []
for page in result.get("results", []):
entry = {}
properties = page.get("properties", {})
for prop_name, prop_value in properties.items():
entry[prop_name] = NotionReadDatabaseBlock._parse_property_value(
prop_value
)
# Add metadata
entry["_id"] = page.get("id")
entry["_url"] = page.get("url")
entry["_created_time"] = page.get("created_time")
entry["_last_edited_time"] = page.get("last_edited_time")
entries.append(entry)
# Get database title (we need to make a separate call for this)
try:
database_url = f"https://api.notion.com/v1/databases/{database_id}"
db_response = await client.requests.get(
database_url, headers=client.headers
)
if db_response.ok:
db_data = db_response.json()
db_title = parse_rich_text(db_data.get("title", []))
else:
db_title = "Unknown Database"
except Exception:
db_title = "Unknown Database"
return entries, len(entries), db_title
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
entries, count, db_title = await self.query_database(
credentials,
input_data.database_id,
input_data.filter_property,
input_data.filter_value,
input_data.sort_property,
input_data.sort_direction or "ascending",
input_data.limit,
)
# Yield the complete list for batch operations
yield "entries", entries
# Extract and yield IDs as a list for batch operations
entry_ids = [entry["_id"] for entry in entries if "_id" in entry]
yield "entry_ids", entry_ids
# Yield each individual entry and its ID for single connections
for entry in entries:
yield "entry", entry
if "_id" in entry:
yield "entry_id", entry["_id"]
yield "count", count
yield "database_title", db_title
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File
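
For reference, a sketch (editor's addition, not part of the diff) of how _parse_property_value flattens raw Notion property payloads into plain Python values, using the branches above:

raw = {
    "Status": {"type": "select", "select": {"name": "Active"}},
    "Tags": {"type": "multi_select", "multi_select": [{"name": "a"}, {"name": "b"}]},
    "Done": {"type": "checkbox", "checkbox": True},
    "Due": {"type": "date", "date": {"start": "2025-10-01"}},
}
parsed = {
    name: NotionReadDatabaseBlock._parse_property_value(value)
    for name, value in raw.items()
}
assert parsed == {
    "Status": "Active",
    "Tags": ["a", "b"],
    "Done": True,
    "Due": "2025-10-01",
}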

@@ -0,0 +1,64 @@
from __future__ import annotations
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadPageBlock(Block):
"""Read a Notion page by ID and return its raw JSON."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
page_id: str = SchemaField(
description="Notion page ID. Must be accessible by the connected integration. You can get this from the page URL notion.so/A-Page-586edd711467478da59fe3ce29a1ffab would be 586edd711467478da59fe35e29a1ffab",
)
class Output(BlockSchema):
page: dict = SchemaField(description="Raw Notion page JSON.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="5246cc1d-34b7-452b-8fc5-3fb25fd8f542",
description="Read a Notion page by its ID and return its raw JSON.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadPageBlock.Input,
output_schema=NotionReadPageBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"page_id": "00000000-0000-0000-0000-000000000000",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[("page", dict)],
test_credentials=TEST_CREDENTIALS,
test_mock={
"get_page": lambda *args, **kwargs: {"object": "page", "id": "mocked"}
},
)
@staticmethod
async def get_page(credentials: OAuth2Credentials, page_id: str) -> dict:
client = NotionClient(credentials)
return await client.get_page(page_id)
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
page = await self.get_page(credentials, input_data.page_id)
yield "page", page
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File
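
Since the page_id has to be extracted from the page URL by hand, here is a sketch (editor's addition; the helper name is hypothetical, not part of the block) of automating that:

import re

def page_id_from_url(url: str) -> str:
    # Hypothetical helper: grab the trailing 32 hex characters of a Notion
    # page URL, e.g. notion.so/A-Page-586edd711467478da59fe35e29a1ffab.
    match = re.search(r"([0-9a-f]{32})/?$", url)
    if match is None:
        raise ValueError("URL does not end in a 32-character page ID")
    raw = match.group(1)
    # Notion also accepts the dashed UUID form of the same ID.
    return f"{raw[:8]}-{raw[8:12]}-{raw[12:16]}-{raw[16:20]}-{raw[20:]}"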

@@ -0,0 +1,109 @@
from __future__ import annotations
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, blocks_to_markdown, extract_page_title
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionReadPageMarkdownBlock(Block):
"""Read a Notion page and convert it to clean Markdown format."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
page_id: str = SchemaField(
description="Notion page ID. Must be accessible by the connected integration. You can get this from the page URL notion.so/A-Page-586edd711467478da59fe35e29a1ffab would be 586edd711467478da59fe35e29a1ffab",
)
include_title: bool = SchemaField(
description="Whether to include the page title as a header in the markdown",
default=True,
)
class Output(BlockSchema):
markdown: str = SchemaField(description="Page content in Markdown format.")
title: str = SchemaField(description="Page title.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="d1312c4d-fae2-4e70-893d-f4d07cce1d4e",
description="Read a Notion page and convert it to Markdown format with proper formatting for headings, lists, links, and rich text.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=NotionReadPageMarkdownBlock.Input,
output_schema=NotionReadPageMarkdownBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"page_id": "00000000-0000-0000-0000-000000000000",
"include_title": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("markdown", "# Test Page\n\nThis is test content."),
("title", "Test Page"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"get_page_markdown": lambda *args, **kwargs: (
"# Test Page\n\nThis is test content.",
"Test Page",
)
},
)
@staticmethod
async def get_page_markdown(
credentials: OAuth2Credentials, page_id: str, include_title: bool = True
) -> tuple[str, str]:
"""
Get a Notion page and convert it to markdown.
Args:
credentials: OAuth2 credentials for Notion.
page_id: The ID of the page to fetch.
include_title: Whether to include the page title in the markdown.
Returns:
Tuple of (markdown_content, title)
"""
client = NotionClient(credentials)
# Get page metadata
page = await client.get_page(page_id)
title = extract_page_title(page)
# Get all blocks from the page
blocks = await client.get_blocks(page_id, recursive=True)
# Convert blocks to markdown
content_markdown = blocks_to_markdown(blocks)
# Combine title and content if requested
if include_title and title:
full_markdown = f"# {title}\n\n{content_markdown}"
else:
full_markdown = content_markdown
return full_markdown, title
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
markdown, title = await self.get_page_markdown(
credentials, input_data.page_id, input_data.include_title
)
yield "markdown", markdown
yield "title", title
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File

@@ -0,0 +1,225 @@
from __future__ import annotations
from typing import List, Optional
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import NotionClient, extract_page_title, parse_rich_text
from ._auth import (
NOTION_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
NotionCredentialsField,
NotionCredentialsInput,
)
class NotionSearchResult(BaseModel):
"""Typed model for Notion search results."""
id: str
type: str # 'page' or 'database'
title: str
url: str
created_time: Optional[str] = None
last_edited_time: Optional[str] = None
parent_type: Optional[str] = None # 'page', 'database', or 'workspace'
parent_id: Optional[str] = None
icon: Optional[str] = None # emoji icon if present
is_inline: Optional[bool] = None # for databases only
class NotionSearchBlock(Block):
"""Search across your Notion workspace for pages and databases."""
class Input(BlockSchema):
credentials: NotionCredentialsInput = NotionCredentialsField()
query: str = SchemaField(
description="Search query text. Leave empty to get all accessible pages/databases.",
default="",
)
filter_type: Optional[str] = SchemaField(
description="Filter results by type: 'page' or 'database'. Leave empty for both.",
default=None,
)
limit: int = SchemaField(
description="Maximum number of results to return", default=20, ge=1, le=100
)
class Output(BlockSchema):
results: List[NotionSearchResult] = SchemaField(
description="List of search results with title, type, URL, and metadata."
)
result: NotionSearchResult = SchemaField(
description="Individual search result (yields one per result found)."
)
result_ids: List[str] = SchemaField(
description="List of IDs from search results for batch operations."
)
count: int = SchemaField(description="Number of results found.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="313515dd-9848-46ea-9cd6-3c627c892c56",
description="Search your Notion workspace for pages and databases by text query.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.SEARCH},
input_schema=NotionSearchBlock.Input,
output_schema=NotionSearchBlock.Output,
disabled=not NOTION_OAUTH_IS_CONFIGURED,
test_input={
"query": "project",
"limit": 5,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"results",
[
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
)
],
),
("result_ids", ["123"]),
(
"result",
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
),
),
("count", 1),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"search_workspace": lambda *args, **kwargs: (
[
NotionSearchResult(
id="123",
type="page",
title="Project Plan",
url="https://notion.so/Project-Plan-123",
)
],
1,
)
},
)
@staticmethod
async def search_workspace(
credentials: OAuth2Credentials,
query: str = "",
filter_type: Optional[str] = None,
limit: int = 20,
) -> tuple[List[NotionSearchResult], int]:
"""
Search the Notion workspace.
Returns:
Tuple of (results_list, count)
"""
client = NotionClient(credentials)
# Build filter if type is specified
filter_obj = None
if filter_type:
filter_obj = {"property": "object", "value": filter_type}
# Execute search
response = await client.search(
query=query, filter_obj=filter_obj, page_size=limit
)
# Parse results
results = []
for item in response.get("results", []):
result_data = {
"id": item.get("id", ""),
"type": item.get("object", ""),
"url": item.get("url", ""),
"created_time": item.get("created_time"),
"last_edited_time": item.get("last_edited_time"),
"title": "", # Will be set below
}
# Extract title based on type
if item.get("object") == "page":
# For pages, get the title from properties
result_data["title"] = extract_page_title(item)
# Add parent info
parent = item.get("parent", {})
if parent.get("type") == "page_id":
result_data["parent_type"] = "page"
result_data["parent_id"] = parent.get("page_id")
elif parent.get("type") == "database_id":
result_data["parent_type"] = "database"
result_data["parent_id"] = parent.get("database_id")
elif parent.get("type") == "workspace":
result_data["parent_type"] = "workspace"
# Add icon if present
icon = item.get("icon")
if icon and icon.get("type") == "emoji":
result_data["icon"] = icon.get("emoji")
elif item.get("object") == "database":
# For databases, get title from the title array
result_data["title"] = parse_rich_text(item.get("title", []))
# Add database-specific metadata
result_data["is_inline"] = item.get("is_inline", False)
# Add parent info
parent = item.get("parent", {})
if parent.get("type") == "page_id":
result_data["parent_type"] = "page"
result_data["parent_id"] = parent.get("page_id")
elif parent.get("type") == "workspace":
result_data["parent_type"] = "workspace"
# Add icon if present
icon = item.get("icon")
if icon and icon.get("type") == "emoji":
result_data["icon"] = icon.get("emoji")
results.append(NotionSearchResult(**result_data))
return results, len(results)
async def run(
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
try:
results, count = await self.search_workspace(
credentials, input_data.query, input_data.filter_type, input_data.limit
)
# Yield the complete list for batch operations
yield "results", results
# Extract and yield IDs as a list for batch operations
result_ids = [r.id for r in results]
yield "result_ids", result_ids
# Yield each individual result for single connections
for result in results:
yield "result", result
yield "count", count
except Exception as e:
yield "error", str(e) if str(e) else "Unknown error"

View File
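
A sketch (editor's addition) of how a consumer sees the block's dual batch/individual outputs, assuming search_workspace is mocked as in test_mock above:

# Inside an async test:
block = NotionSearchBlock()
input_data = NotionSearchBlock.Input(
    query="project", credentials=TEST_CREDENTIALS_INPUT
)
outputs: dict[str, list] = {}
async for name, value in block.run(input_data, credentials=TEST_CREDENTIALS):
    outputs.setdefault(name, []).append(value)
assert len(outputs["results"]) == 1                   # the batch list, yielded once
assert len(outputs["result"]) == outputs["count"][0]  # one yield per result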

@@ -523,7 +523,6 @@ class SmartDecisionMakerBlock(Block):
credentials=credentials,
llm_model=input_data.model,
prompt=prompt,
json_format=False,
max_tokens=input_data.max_tokens,
tools=tool_functions,
ollama_host=input_data.ollama_host,

View File

@@ -0,0 +1,8 @@
from backend.sdk import BlockCostType, ProviderBuilder
stagehand = (
ProviderBuilder("stagehand")
.with_api_key("STAGEHAND_API_KEY", "Stagehand API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,393 @@
import logging
import signal
import threading
from contextlib import contextmanager
from enum import Enum
# Monkey patch Stagehand to prevent signal handling in worker threads
import stagehand.main
from stagehand import Stagehand
from backend.blocks.llm import (
MODEL_METADATA,
AICredentials,
AICredentialsField,
LlmModel,
ModelMetadata,
)
from backend.blocks.stagehand._config import stagehand as stagehand_provider
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
)
# Store the original method
original_register_signal_handlers = stagehand.main.Stagehand._register_signal_handlers
def safe_register_signal_handlers(self):
"""Only register signal handlers in the main thread"""
if threading.current_thread() is threading.main_thread():
original_register_signal_handlers(self)
else:
# Skip signal handling in worker threads
pass
# Replace the method
stagehand.main.Stagehand._register_signal_handlers = safe_register_signal_handlers
@contextmanager
def disable_signal_handling():
"""Context manager to temporarily disable signal handling"""
if threading.current_thread() is not threading.main_thread():
# In worker threads, temporarily replace signal.signal with a no-op
original_signal = signal.signal
def noop_signal(*args, **kwargs):
pass
signal.signal = noop_signal
try:
yield
finally:
signal.signal = original_signal
else:
# In main thread, don't modify anything
yield
logger = logging.getLogger(__name__)
class StagehandRecommendedLlmModel(str, Enum):
"""
This is a subset of LlmModel from autogpt_platform/backend/backend/blocks/llm.py.
It contains only the models recommended by Stagehand.
"""
# OpenAI
GPT41 = "gpt-4.1-2025-04-14"
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
@property
def provider_name(self) -> str:
"""
Returns the provider name for the model in the required format for Stagehand:
provider/model_name
"""
model_metadata = MODEL_METADATA[LlmModel(self.value)]
model_name = self.value
if len(model_name.split("/")) == 1 and not self.value.startswith(
model_metadata.provider
):
assert (
model_metadata.provider != "open_router"
), "Logic failed and open_router provider attempted to be prepended to model name! in stagehand/_config.py"
model_name = f"{model_metadata.provider}/{model_name}"
logger.error(f"Model name: {model_name}")
return model_name
@property
def provider(self) -> str:
return MODEL_METADATA[LlmModel(self.value)].provider
@property
def metadata(self) -> ModelMetadata:
return MODEL_METADATA[LlmModel(self.value)]
@property
def context_window(self) -> int:
return MODEL_METADATA[LlmModel(self.value)].context_window
@property
def max_output_tokens(self) -> int | None:
return MODEL_METADATA[LlmModel(self.value)].max_output_tokens
class StagehandObserveBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
selector: str = SchemaField(description="XPath selector to locate element.")
description: str = SchemaField(description="Human-readable description")
method: str | None = SchemaField(description="Suggested action method")
arguments: list[str] | None = SchemaField(
description="Additional action parameters"
)
def __init__(self):
super().__init__(
id="d3863944-0eaf-45c4-a0c9-63e0fe1ee8b9",
description="Find suggested actions for your workflows",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandObserveBlock.Input,
output_schema=StagehandObserveBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"OBSERVE: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"OBSERVE: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
observe_results = await page.observe(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
for result in observe_results:
yield "selector", result.selector
yield "description", result.description
yield "method", result.method
yield "arguments", result.arguments
class StagehandActBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
action: list[str] = SchemaField(
description="Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step.",
)
variables: dict[str, str] = SchemaField(
description="Variables to use in the action. Variables contains data you want the action to use.",
default_factory=dict,
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
timeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM ready. Extended timeout for slow-loading forms",
default=60000,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the action was completed successfully"
)
message: str = SchemaField(description="Details about the actions execution.")
action: str = SchemaField(description="Action performed")
def __init__(self):
super().__init__(
id="86eba68b-9549-4c0b-a0db-47d85a56cc27",
description="Interact with a web page by performing actions on a web page. Use it to build self-healing and deterministic automations that adapt to website chang.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandActBlock.Input,
output_schema=StagehandActBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"ACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"ACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
for action in input_data.action:
action_results = await page.act(
action,
variables=input_data.variables,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
timeoutMs=input_data.timeoutMs,
)
yield "success", action_results.success
yield "message", action_results.message
yield "action", action_results.action
class StagehandExtractBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
extraction: str = SchemaField(description="Extracted data from the page.")
def __init__(self):
super().__init__(
id="fd3c0b18-2ba6-46ae-9339-fcb40537ad98",
description="Extract structured data from a webpage.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandExtractBlock.Input,
output_schema=StagehandExtractBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"EXTRACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"EXTRACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
extraction = await page.extract(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
yield "extraction", str(extraction.model_dump()["extraction"])

View File
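
A worked example (editor's addition) of what provider_name yields, assuming MODEL_METADATA maps these models to the providers "anthropic" and "openai":

model = StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET
assert model.provider_name == "anthropic/claude-3-7-sonnet-20250219"
assert StagehandRecommendedLlmModel.GPT41.provider_name == "openai/gpt-4.1-2025-04-14"
# Names already containing "/" or already provider-prefixed pass through
# unchanged; open_router-backed models are rejected by the assert in the property.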

@@ -0,0 +1 @@

View File

@@ -0,0 +1,283 @@
import logging
from typing import Any
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
# Duplicate pydantic models for store data so we don't accidentally change the data shape in the blocks when editing the backend
class LibraryAgent(BaseModel):
"""Model representing an agent in the user's library."""
library_agent_id: str = ""
agent_id: str = ""
agent_version: int = 0
agent_name: str = ""
description: str = ""
creator: str = ""
is_archived: bool = False
categories: list[str] = []
class AddToLibraryFromStoreBlock(Block):
"""
Block that adds an agent from the store to the user's library.
This enables users to easily import agents from the marketplace into their personal collection.
"""
class Input(BlockSchema):
store_listing_version_id: str = SchemaField(
description="The ID of the store listing version to add to library"
)
agent_name: str | None = SchemaField(
description="Optional custom name for the agent in your library",
default=None,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the agent was successfully added to library"
)
library_agent_id: str = SchemaField(
description="The ID of the library agent entry"
)
agent_id: str = SchemaField(description="The ID of the agent graph")
agent_version: int = SchemaField(
description="The version number of the agent graph"
)
agent_name: str = SchemaField(description="The name of the agent")
message: str = SchemaField(description="Success or error message")
def __init__(self):
super().__init__(
id="2602a7b1-3f4d-4e5f-9c8b-1a2b3c4d5e6f",
description="Add an agent from the store to your personal library",
categories={BlockCategory.BASIC},
input_schema=AddToLibraryFromStoreBlock.Input,
output_schema=AddToLibraryFromStoreBlock.Output,
test_input={
"store_listing_version_id": "test-listing-id",
"agent_name": "My Custom Agent",
},
test_output=[
("success", True),
("library_agent_id", "test-library-id"),
("agent_id", "test-agent-id"),
("agent_version", 1),
("agent_name", "Test Agent"),
("message", "Agent successfully added to library"),
],
test_mock={
"_add_to_library": lambda *_, **__: LibraryAgent(
library_agent_id="test-library-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Agent",
)
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
**kwargs,
) -> BlockOutput:
library_agent = await self._add_to_library(
user_id=user_id,
store_listing_version_id=input_data.store_listing_version_id,
custom_name=input_data.agent_name,
)
yield "success", True
yield "library_agent_id", library_agent.library_agent_id
yield "agent_id", library_agent.agent_id
yield "agent_version", library_agent.agent_version
yield "agent_name", library_agent.agent_name
yield "message", "Agent successfully added to library"
async def _add_to_library(
self,
user_id: str,
store_listing_version_id: str,
custom_name: str | None = None,
) -> LibraryAgent:
"""
Add a store agent to the user's library using the existing library database function.
"""
library_agent = (
await get_database_manager_async_client().add_store_agent_to_library(
store_listing_version_id=store_listing_version_id, user_id=user_id
)
)
# If a custom name is provided, it is only reflected in the returned model;
# the library entry itself is not renamed here.
agent_name = custom_name if custom_name else library_agent.name
return LibraryAgent(
library_agent_id=library_agent.id,
agent_id=library_agent.graph_id,
agent_version=library_agent.graph_version,
agent_name=agent_name,
)
class ListLibraryAgentsBlock(Block):
"""
Block that lists all agents in the user's library.
"""
class Input(BlockSchema):
search_query: str | None = SchemaField(
description="Optional search query to filter agents", default=None
)
limit: int = SchemaField(
description="Maximum number of agents to return", default=50, ge=1, le=100
)
page: int = SchemaField(
description="Page number for pagination", default=1, ge=1
)
class Output(BlockSchema):
agents: list[LibraryAgent] = SchemaField(
description="List of agents in the library",
default_factory=list,
)
agent: LibraryAgent = SchemaField(
description="Individual library agent (yielded for each agent)"
)
total_count: int = SchemaField(
description="Total number of agents in library", default=0
)
page: int = SchemaField(description="Current page number", default=1)
total_pages: int = SchemaField(description="Total number of pages", default=1)
def __init__(self):
super().__init__(
id="082602d3-a74d-4600-9e9c-15b3af7eae98",
description="List all agents in your personal library",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=ListLibraryAgentsBlock.Input,
output_schema=ListLibraryAgentsBlock.Output,
test_input={
"search_query": None,
"limit": 10,
"page": 1,
},
test_output=[
(
"agents",
[
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
),
],
),
("total_count", 1),
("page", 1),
("total_pages", 1),
(
"agent",
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
),
),
],
test_mock={
"_list_library_agents": lambda *_, **__: {
"agents": [
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
)
],
"total": 1,
"page": 1,
"total_pages": 1,
}
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
**kwargs,
) -> BlockOutput:
result = await self._list_library_agents(
user_id=user_id,
search_query=input_data.search_query,
limit=input_data.limit,
page=input_data.page,
)
agents = result["agents"]
yield "agents", agents
yield "total_count", result["total"]
yield "page", result["page"]
yield "total_pages", result["total_pages"]
# Yield each agent individually for better graph connectivity
for agent in agents:
yield "agent", agent
async def _list_library_agents(
self,
user_id: str,
search_query: str | None = None,
limit: int = 50,
page: int = 1,
) -> dict[str, Any]:
"""
List agents in the user's library using the database client.
"""
result = await get_database_manager_async_client().list_library_agents(
user_id=user_id,
search_term=search_query,
page=page,
page_size=limit,
)
agents = [
LibraryAgent(
library_agent_id=agent.id,
agent_id=agent.graph_id,
agent_version=agent.graph_version,
agent_name=agent.name,
description=getattr(agent, "description", ""),
creator=getattr(agent, "creator", ""),
is_archived=getattr(agent, "is_archived", False),
categories=getattr(agent, "categories", []),
)
for agent in result.agents
]
return {
"agents": agents,
"total": result.pagination.total_items,
"page": result.pagination.current_page,
"total_pages": result.pagination.total_pages,
}

View File
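
A sketch (editor's addition; the function name is hypothetical) of paging through the whole library using the helper's return shape:

async def iter_all_library_agents(
    block: ListLibraryAgentsBlock, user_id: str, page_size: int = 50
):
    # Walks every page of {"agents": [...], "total": ..., "page": ..., "total_pages": ...}.
    page = 1
    while True:
        result = await block._list_library_agents(
            user_id=user_id, limit=page_size, page=page
        )
        for agent in result["agents"]:
            yield agent
        if page >= result["total_pages"]:
            break
        page += 1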

@@ -0,0 +1,311 @@
import logging
from typing import Literal
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
# Duplicate pydantic models for store data so we don't accidentally change the data shape in the blocks when editing the backend
class StoreAgent(BaseModel):
"""Model representing a store agent."""
slug: str = ""
name: str = ""
description: str = ""
creator: str = ""
rating: float = 0.0
runs: int = 0
categories: list[str] = []
class StoreAgentDict(BaseModel):
"""Dictionary representation of a store agent."""
slug: str
name: str
description: str
creator: str
rating: float
runs: int
class SearchAgentsResponse(BaseModel):
"""Response from searching store agents."""
agents: list[StoreAgentDict]
total_count: int
class StoreAgentDetails(BaseModel):
"""Detailed information about a store agent."""
found: bool
store_listing_version_id: str = ""
agent_name: str = ""
description: str = ""
creator: str = ""
categories: list[str] = []
runs: int = 0
rating: float = 0.0
class GetStoreAgentDetailsBlock(Block):
"""
Block that retrieves detailed information about an agent from the store.
"""
class Input(BlockSchema):
creator: str = SchemaField(description="The username of the agent creator")
slug: str = SchemaField(description="The name of the agent")
class Output(BlockSchema):
found: bool = SchemaField(
description="Whether the agent was found in the store"
)
store_listing_version_id: str = SchemaField(
description="The store listing version ID"
)
agent_name: str = SchemaField(description="Name of the agent")
description: str = SchemaField(description="Description of the agent")
creator: str = SchemaField(description="Creator of the agent")
categories: list[str] = SchemaField(
description="Categories the agent belongs to", default_factory=list
)
runs: int = SchemaField(
description="Number of times the agent has been run", default=0
)
rating: float = SchemaField(
description="Average rating of the agent", default=0.0
)
def __init__(self):
super().__init__(
id="b604f0ec-6e0d-40a7-bf55-9fd09997cced",
description="Get detailed information about an agent from the store",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=GetStoreAgentDetailsBlock.Input,
output_schema=GetStoreAgentDetailsBlock.Output,
test_input={"creator": "test-creator", "slug": "test-agent-slug"},
test_output=[
("found", True),
("store_listing_version_id", "test-listing-id"),
("agent_name", "Test Agent"),
("description", "A test agent"),
("creator", "Test Creator"),
("categories", ["productivity", "automation"]),
("runs", 100),
("rating", 4.5),
],
test_mock={
"_get_agent_details": lambda *_, **__: StoreAgentDetails(
found=True,
store_listing_version_id="test-listing-id",
agent_name="Test Agent",
description="A test agent",
creator="Test Creator",
categories=["productivity", "automation"],
runs=100,
rating=4.5,
)
},
static_output=True,
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
details = await self._get_agent_details(
creator=input_data.creator, slug=input_data.slug
)
yield "found", details.found
yield "store_listing_version_id", details.store_listing_version_id
yield "agent_name", details.agent_name
yield "description", details.description
yield "creator", details.creator
yield "categories", details.categories
yield "runs", details.runs
yield "rating", details.rating
async def _get_agent_details(self, creator: str, slug: str) -> StoreAgentDetails:
"""
Retrieve detailed information about a store agent.
"""
# Look up the listing by creator username and agent slug
agent_details = (
await get_database_manager_async_client().get_store_agent_details(
username=creator, agent_name=slug
)
)
return StoreAgentDetails(
found=True,
store_listing_version_id=agent_details.store_listing_version_id,
agent_name=agent_details.agent_name,
description=agent_details.description,
creator=agent_details.creator,
categories=(
agent_details.categories if hasattr(agent_details, "categories") else []
),
runs=agent_details.runs,
rating=agent_details.rating,
)
class SearchStoreAgentsBlock(Block):
"""
Block that searches for agents in the store based on various criteria.
"""
class Input(BlockSchema):
query: str | None = SchemaField(
description="Search query to find agents", default=None
)
category: str | None = SchemaField(
description="Filter by category", default=None
)
sort_by: Literal["rating", "runs", "name", "recent"] = SchemaField(
description="How to sort the results", default="rating"
)
limit: int = SchemaField(
description="Maximum number of results to return", default=10, ge=1, le=100
)
class Output(BlockSchema):
agents: list[StoreAgent] = SchemaField(
description="List of agents matching the search criteria",
default_factory=list,
)
agent: StoreAgent = SchemaField(description="Basic information of the agent")
total_count: int = SchemaField(
description="Total number of agents found", default=0
)
def __init__(self):
super().__init__(
id="39524701-026c-4328-87cc-1b88c8e2cb4c",
description="Search for agents in the store",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=SearchStoreAgentsBlock.Input,
output_schema=SearchStoreAgentsBlock.Output,
test_input={
"query": "productivity",
"category": None,
"sort_by": "rating",
"limit": 10,
},
test_output=[
(
"agents",
[
{
"slug": "test-agent",
"name": "Test Agent",
"description": "A test agent",
"creator": "Test Creator",
"rating": 4.5,
"runs": 100,
}
],
),
("total_count", 1),
(
"agent",
{
"slug": "test-agent",
"name": "Test Agent",
"description": "A test agent",
"creator": "Test Creator",
"rating": 4.5,
"runs": 100,
},
),
],
test_mock={
"_search_agents": lambda *_, **__: SearchAgentsResponse(
agents=[
StoreAgentDict(
slug="test-agent",
name="Test Agent",
description="A test agent",
creator="Test Creator",
rating=4.5,
runs=100,
)
],
total_count=1,
)
},
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
result = await self._search_agents(
query=input_data.query,
category=input_data.category,
sort_by=input_data.sort_by,
limit=input_data.limit,
)
agents = result.agents
total_count = result.total_count
# Convert to dict for output
agents_as_dicts = [agent.model_dump() for agent in agents]
yield "agents", agents_as_dicts
yield "total_count", total_count
for agent_dict in agents_as_dicts:
yield "agent", agent_dict
async def _search_agents(
self,
query: str | None = None,
category: str | None = None,
sort_by: str = "rating",
limit: int = 10,
) -> SearchAgentsResponse:
"""
Search for agents in the store using the existing store database function.
"""
# Map our sort_by to the store's sorted_by parameter
sorted_by_map = {
"rating": "most_popular",
"runs": "most_runs",
"name": "alphabetical",
"recent": "recently_updated",
}
result = await get_database_manager_async_client().get_store_agents(
featured=False,
creators=None,
sorted_by=sorted_by_map.get(sort_by, "most_popular"),
search_query=query,
category=category,
page=1,
page_size=limit,
)
agents = [
StoreAgentDict(
slug=agent.slug,
name=agent.agent_name,
description=agent.description,
creator=agent.creator,
rating=agent.rating,
runs=agent.runs,
)
for agent in result.agents
]
return SearchAgentsResponse(agents=agents, total_count=len(agents))

View File
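
Note that SearchStoreAgentsBlock yields plain dicts rather than models; a sketch (editor's addition) of the model_dump round trip the run method performs:

agent = StoreAgentDict(
    slug="test-agent",
    name="Test Agent",
    description="A test agent",
    creator="Test Creator",
    rating=4.5,
    runs=100,
)
assert agent.model_dump() == {
    "slug": "test-agent",
    "name": "Test Agent",
    "description": "A test agent",
    "creator": "Test Creator",
    "rating": 4.5,
    "runs": 100,
}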

@@ -30,7 +30,6 @@ class TestLLMStatsTracking:
credentials=llm.TEST_CREDENTIALS,
llm_model=llm.LlmModel.GPT4O,
prompt=[{"role": "user", "content": "Hello"}],
json_format=False,
max_tokens=100,
)
@@ -42,6 +41,8 @@ class TestLLMStatsTracking:
@pytest.mark.asyncio
async def test_ai_structured_response_block_tracks_stats(self):
"""Test that AIStructuredResponseGeneratorBlock correctly tracks stats."""
from unittest.mock import patch
import backend.blocks.llm as llm
block = llm.AIStructuredResponseGeneratorBlock()
@@ -51,7 +52,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"key1": "value1", "key2": "value2"}',
response='<json_output id="test123456">{"key1": "value1", "key2": "value2"}</json_output>',
tool_calls=None,
prompt_tokens=15,
completion_tokens=25,
@@ -69,10 +70,12 @@ class TestLLMStatsTracking:
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Check stats
assert block.execution_stats.input_token_count == 15
@@ -143,7 +146,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"wrong": "format"}',
response='<json_output id="test123456">{"wrong": "format"}</json_output>',
tool_calls=None,
prompt_tokens=10,
completion_tokens=15,
@@ -154,7 +157,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"key1": "value1", "key2": "value2"}',
response='<json_output id="test123456">{"key1": "value1", "key2": "value2"}</json_output>',
tool_calls=None,
prompt_tokens=20,
completion_tokens=25,
@@ -173,10 +176,12 @@ class TestLLMStatsTracking:
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Check stats - should accumulate both calls
# For 2 attempts: attempt 1 (failed) + attempt 2 (success) = 2 total
@@ -269,7 +274,8 @@ class TestLLMStatsTracking:
mock_response.choices = [
MagicMock(
message=MagicMock(
content='{"summary": "Test chunk summary"}', tool_calls=None
content='<json_output id="test123456">{"summary": "Test chunk summary"}</json_output>',
tool_calls=None,
)
)
]
@@ -277,7 +283,7 @@ class TestLLMStatsTracking:
mock_response.choices = [
MagicMock(
message=MagicMock(
content='{"final_summary": "Test final summary"}',
content='<json_output id="test123456">{"final_summary": "Test final summary"}</json_output>',
tool_calls=None,
)
)
@@ -298,11 +304,13 @@ class TestLLMStatsTracking:
max_tokens=1000, # Large enough to avoid chunking
)
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
print(f"Actual calls made: {call_count}")
print(f"Block stats: {block.execution_stats}")
@@ -457,7 +465,7 @@ class TestLLMStatsTracking:
return llm.LLMResponse(
raw_response="",
prompt=[],
response='{"result": "test"}',
response='<json_output id="test123456">{"result": "test"}</json_output>',
tool_calls=None,
prompt_tokens=10,
completion_tokens=20,
@@ -476,10 +484,12 @@ class TestLLMStatsTracking:
# Run the block
outputs = {}
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Mock secrets.token_hex to return consistent ID
with patch("secrets.token_hex", return_value="test123456"):
async for output_name, output_data in block.run(
input_data, credentials=llm.TEST_CREDENTIALS
):
outputs[output_name] = output_data
# Block finished - now grab and assert stats
assert block.execution_stats is not None

View File
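
These test diffs track the tag-based JSON extraction introduced for the structured response block. A minimal sketch of the idea (editor's addition; the extractor below is hypothetical, the real parsing lives in backend/blocks/llm.py):

import json
import re
import secrets

def extract_tagged_json(response: str, tag_id: str) -> dict:
    # Hypothetical extractor: pull JSON out of <json_output id="..."> tags so
    # reasoning text around the JSON no longer breaks parsing.
    match = re.search(
        rf'<json_output id="{re.escape(tag_id)}">(.*?)</json_output>',
        response,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("no tagged JSON found in response")
    return json.loads(match.group(1))

tag_id = secrets.token_hex(5)  # the tests pin this to "test123456" via patch()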

@@ -35,20 +35,19 @@ async def execute_graph(
logger.info("Input data: %s", input_data)
# --- Test adding new executions --- #
response = await agent_server.test_execute_graph(
graph_exec = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
graph_exec_id = response.graph_exec_id
logger.info("Created execution with ID: %s", graph_exec_id)
logger.info("Created execution with ID: %s", graph_exec.id)
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec_id, 30)
result = await wait_execution(test_user.id, graph_exec.id, 30)
logger.info("Execution completed with %d results", len(result))
return graph_exec_id
return graph_exec.id
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -0,0 +1,155 @@
from unittest.mock import MagicMock
import pytest
from backend.blocks.system.library_operations import (
AddToLibraryFromStoreBlock,
LibraryAgent,
)
from backend.blocks.system.store_operations import (
GetStoreAgentDetailsBlock,
SearchAgentsResponse,
SearchStoreAgentsBlock,
StoreAgentDetails,
StoreAgentDict,
)
@pytest.mark.asyncio
async def test_add_to_library_from_store_block_success(mocker):
"""Test successful addition of agent from store to library."""
block = AddToLibraryFromStoreBlock()
# Mock the library agent response
mock_library_agent = MagicMock()
mock_library_agent.id = "lib-agent-123"
mock_library_agent.graph_id = "graph-456"
mock_library_agent.graph_version = 1
mock_library_agent.name = "Test Agent"
mocker.patch.object(
block,
"_add_to_library",
return_value=LibraryAgent(
library_agent_id="lib-agent-123",
agent_id="graph-456",
agent_version=1,
agent_name="Test Agent",
),
)
input_data = block.Input(
store_listing_version_id="store-listing-v1", agent_name="Custom Agent Name"
)
outputs = {}
async for name, value in block.run(input_data, user_id="test-user"):
outputs[name] = value
assert outputs["success"] is True
assert outputs["library_agent_id"] == "lib-agent-123"
assert outputs["agent_id"] == "graph-456"
assert outputs["agent_version"] == 1
assert outputs["agent_name"] == "Test Agent"
assert outputs["message"] == "Agent successfully added to library"
@pytest.mark.asyncio
async def test_get_store_agent_details_block_success(mocker):
"""Test successful retrieval of store agent details."""
block = GetStoreAgentDetailsBlock()
mocker.patch.object(
block,
"_get_agent_details",
return_value=StoreAgentDetails(
found=True,
store_listing_version_id="version-123",
agent_name="Test Agent",
description="A test agent for testing",
creator="Test Creator",
categories=["productivity", "automation"],
runs=100,
rating=4.5,
),
)
input_data = block.Input(creator="Test Creator", slug="test-slug")
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert outputs["found"] is True
assert outputs["store_listing_version_id"] == "version-123"
assert outputs["agent_name"] == "Test Agent"
assert outputs["description"] == "A test agent for testing"
assert outputs["creator"] == "Test Creator"
assert outputs["categories"] == ["productivity", "automation"]
assert outputs["runs"] == 100
assert outputs["rating"] == 4.5
@pytest.mark.asyncio
async def test_search_store_agents_block(mocker):
"""Test searching for store agents."""
block = SearchStoreAgentsBlock()
mocker.patch.object(
block,
"_search_agents",
return_value=SearchAgentsResponse(
agents=[
StoreAgentDict(
slug="creator1/agent1",
name="Agent One",
description="First test agent",
creator="Creator 1",
rating=4.8,
runs=500,
),
StoreAgentDict(
slug="creator2/agent2",
name="Agent Two",
description="Second test agent",
creator="Creator 2",
rating=4.2,
runs=200,
),
],
total_count=2,
),
)
input_data = block.Input(
query="test", category="productivity", sort_by="rating", limit=10
)
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert len(outputs["agents"]) == 2
assert outputs["total_count"] == 2
assert outputs["agents"][0]["name"] == "Agent One"
assert outputs["agents"][0]["rating"] == 4.8
@pytest.mark.asyncio
async def test_search_store_agents_block_empty_results(mocker):
"""Test searching with no results."""
block = SearchStoreAgentsBlock()
mocker.patch.object(
block,
"_search_agents",
return_value=SearchAgentsResponse(agents=[], total_count=0),
)
input_data = block.Input(query="nonexistent", limit=10)
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert outputs["agents"] == []
assert outputs["total_count"] == 0
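
The collect-into-a-dict loop repeated throughout these tests could be factored into a small helper; a sketch (the test files inline it instead):

from typing import Any

async def collect_outputs(block, input_data, **kwargs) -> dict[str, Any]:
    outputs: dict[str, Any] = {}
    async for name, value in block.run(input_data, **kwargs):
        outputs[name] = value  # last value per output pin wins, as in the tests
    return outputs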

View File

@@ -172,6 +172,11 @@ class FillTextTemplateBlock(Block):
format: str = SchemaField(
description="Template to format the text using `values`. Use Jinja2 syntax."
)
escape_html: bool = SchemaField(
default=False,
advanced=True,
description="Whether to escape special characters in the inserted values to be HTML-safe. Enable for HTML output, disable for plain text.",
)
class Output(BlockSchema):
output: str = SchemaField(description="Formatted text")
@@ -205,6 +210,7 @@ class FillTextTemplateBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
formatter = text.TextFormatter(autoescape=input_data.escape_html)
yield "output", formatter.format_string(input_data.format, input_data.values)

View File

@@ -1,4 +1,5 @@
import asyncio
import logging
import time
from datetime import datetime, timedelta
from typing import Any, Literal, Union
@@ -7,6 +8,7 @@ from zoneinfo import ZoneInfo
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.execution import UserContext
from backend.data.model import SchemaField
# Shared timezone literal type for all time/date blocks
@@ -51,16 +53,80 @@ TimezoneLiteral = Literal[
"Etc/GMT+12", # UTC-12:00
]
logger = logging.getLogger(__name__)
def _get_timezone(
format_type: Any, # Any format type with timezone and use_user_timezone attributes
user_timezone: str | None,
) -> ZoneInfo:
"""
Determine which timezone to use based on format settings and user context.
Args:
format_type: The format configuration containing timezone settings
user_timezone: The user's timezone from context
Returns:
ZoneInfo object for the determined timezone
"""
if format_type.use_user_timezone and user_timezone:
tz = ZoneInfo(user_timezone)
logger.debug(f"Using user timezone: {user_timezone}")
else:
tz = ZoneInfo(format_type.timezone)
logger.debug(f"Using specified timezone: {format_type.timezone}")
return tz
def _format_datetime_iso8601(dt: datetime, include_microseconds: bool = False) -> str:
"""
Format a datetime object to ISO8601 string.
Args:
dt: The datetime object to format
include_microseconds: Whether to include microseconds in the output
Returns:
ISO8601 formatted string
"""
if include_microseconds:
return dt.isoformat()
else:
return dt.isoformat(timespec="seconds")
# BACKWARDS COMPATIBILITY NOTE:
# The timezone field is kept at the format level (not block level) for backwards compatibility.
# Existing graphs have the timezone saved within format_type; moving it would break them.
#
# The use_user_timezone flag was added to allow using the user's profile timezone.
# Default is False to maintain backwards compatibility - existing graphs will continue
# using their specified timezone.
#
# KNOWN ISSUE: If a user switches between format types (strftime <-> iso8601),
# the timezone setting doesn't carry over. This is a UX issue but fixing it would
# require either:
# 1. Moving timezone to block level (breaking change, needs migration)
# 2. Complex state management to sync timezone across format types
#
# Future migration path: When we do a major version bump, consider moving timezone
# to the block Input level for better UX.
class TimeStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%H:%M:%S"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class TimeISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
include_microseconds: bool = False
@@ -115,25 +181,27 @@ class GetCurrentTimeBlock(Block):
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(
self, input_data: Input, *, user_context: UserContext, **kwargs
) -> BlockOutput:
# Extract timezone from user_context (always present)
effective_timezone = user_context.timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
dt = datetime.now(tz=tz)
if isinstance(input_data.format_type, TimeISO8601Format):
# ISO 8601 format for time only (extract time portion from full ISO datetime)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Get the full ISO format and extract just the time portion with timezone
if input_data.format_type.include_microseconds:
full_iso = dt.isoformat()
else:
full_iso = dt.isoformat(timespec="seconds")
full_iso = _format_datetime_iso8601(
dt, input_data.format_type.include_microseconds
)
# Extract time portion (everything after 'T')
current_time = full_iso.split("T")[1] if "T" in full_iso else full_iso
current_time = f"T{current_time}" # Add T prefix for ISO 8601 time format
else: # TimeStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_time = dt.strftime(input_data.format_type.format)
yield "time", current_time
@@ -141,11 +209,15 @@ class DateStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class DateISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class GetCurrentDateBlock(Block):
@@ -217,20 +289,23 @@ class GetCurrentDateBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
try:
offset = int(input_data.offset)
except ValueError:
offset = 0
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
if isinstance(input_data.format_type, DateISO8601Format):
# ISO 8601 format for date only (YYYY-MM-DD)
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
# ISO 8601 date format is YYYY-MM-DD
date_str = current_date.date().isoformat()
else: # DateStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
date_str = current_date.strftime(input_data.format_type.format)
yield "date", date_str
@@ -240,11 +315,15 @@ class StrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d %H:%M:%S"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class ISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
include_microseconds: bool = False
@@ -316,20 +395,22 @@ class GetCurrentDateAndTimeBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
dt = datetime.now(tz=tz)
if isinstance(input_data.format_type, ISO8601Format):
# ISO 8601 format with specified timezone (also RFC3339-compliant)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Format with or without microseconds
if input_data.format_type.include_microseconds:
current_date_time = dt.isoformat()
else:
current_date_time = dt.isoformat(timespec="seconds")
current_date_time = _format_datetime_iso8601(
dt, input_data.format_type.include_microseconds
)
else: # StrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_date_time = dt.strftime(input_data.format_type.format)
yield "date_time", current_date_time

View File

@@ -4,6 +4,8 @@ from logging import getLogger
from typing import Any, Dict, List, Union
from urllib.parse import urlencode
from pydantic import field_serializer
from backend.sdk import BaseModel, Credentials, Requests
logger = getLogger(__name__)
@@ -382,8 +384,9 @@ class CreatePostRequest(BaseModel):
# Advanced
metadata: List[Dict[str, Any]] | None = None
class Config:
json_encoders = {datetime: lambda v: v.isoformat()}
@field_serializer("date")
def serialize_date(self, value: datetime | None) -> str | None:
return value.isoformat() if value else None
class PostAuthor(BaseModel):
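
This replaces Pydantic v1's deprecated `Config.json_encoders` hook with a v2 `@field_serializer`. A self-contained sketch of the pattern:

from datetime import datetime
from pydantic import BaseModel, field_serializer

class Post(BaseModel):
    title: str
    date: datetime | None = None

    @field_serializer("date")
    def serialize_date(self, value: datetime | None) -> str | None:
        return value.isoformat() if value else None

post = Post(title="Hello", date=datetime(2025, 9, 26, 7, 0))
assert post.model_dump()["date"] == "2025-09-26T07:00:00"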

View File

@@ -6,8 +6,6 @@ from dotenv import load_dotenv
from backend.util.logging import configure_logging
os.environ["ENABLE_AUTH"] = "false"
load_dotenv()
# NOTE: You can run tests with --log-cli-level=INFO to see the logs

View File

@@ -1,5 +1,8 @@
from backend.server.v2.library.model import LibraryAgentPreset
from .graph import NodeModel
from .integrations import Webhook # noqa: F401
# Resolve Webhook <- NodeModel forward reference
# Resolve Webhook forward references
NodeModel.model_rebuild()
LibraryAgentPreset.model_rebuild()
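
A minimal illustration of why the `model_rebuild()` calls are needed: models whose annotations reference classes by string (forward references) can only be finalized once those classes exist in the namespace.

from pydantic import BaseModel

class NodeModel(BaseModel):
    webhook: "Webhook | None" = None  # forward reference; Webhook not defined yet

class Webhook(BaseModel):
    url: str

NodeModel.model_rebuild()  # resolves the "Webhook" string into the real class
node = NodeModel(webhook=Webhook(url="https://example.com"))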

View File

@@ -1,57 +1,31 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import List, Optional
from typing import Optional
from autogpt_libs.api_key.key_manager import APIKeyManager
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.errors import PrismaError
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import (
APIKeyCreateInput,
APIKeyUpdateInput,
APIKeyWhereInput,
APIKeyWhereUniqueInput,
)
from pydantic import BaseModel
from prisma.types import APIKeyWhereUniqueInput
from pydantic import BaseModel, Field
from backend.data.db import BaseDbModel
from backend.util.exceptions import NotAuthorizedError, NotFoundError
logger = logging.getLogger(__name__)
keysmith = APIKeySmith()
# Some basic exceptions
class APIKeyError(Exception):
"""Base exception for API key operations"""
pass
class APIKeyNotFoundError(APIKeyError):
"""Raised when an API key is not found"""
pass
class APIKeyPermissionError(APIKeyError):
"""Raised when there are permission issues with API key operations"""
pass
class APIKeyValidationError(APIKeyError):
"""Raised when API key validation fails"""
pass
class APIKey(BaseDbModel):
class APIKeyInfo(BaseModel):
id: str
name: str
prefix: str
key: str
status: APIKeyStatus = APIKeyStatus.ACTIVE
permissions: List[APIKeyPermission]
postfix: str
head: str = Field(
description=f"The first {APIKeySmith.HEAD_LENGTH} characters of the key"
)
tail: str = Field(
description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key"
)
status: APIKeyStatus
permissions: list[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
@@ -60,266 +34,211 @@ class APIKey(BaseDbModel):
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKey(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
key=api_key.key,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKey from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfo(
id=api_key.id,
name=api_key.name,
head=api_key.head,
tail=api_key.tail,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
class APIKeyWithoutHash(BaseModel):
id: str
name: str
prefix: str
postfix: str
status: APIKeyStatus
permissions: List[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime]
revoked_at: Optional[datetime]
description: Optional[str]
user_id: str
class APIKeyInfoWithHash(APIKeyInfo):
hash: str
salt: str | None = None # None for legacy keys
def match(self, plaintext_key: str) -> bool:
"""Returns whether the given key matches this API key object."""
return keysmith.verify_key(plaintext_key, self.hash, self.salt)
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKeyWithoutHash(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKeyWithoutHash from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfoWithHash(
**APIKeyInfo.from_db(api_key).model_dump(),
hash=api_key.hash,
salt=api_key.salt,
)
def without_hash(self) -> APIKeyInfo:
return APIKeyInfo(**self.model_dump(exclude={"hash", "salt"}))
async def generate_api_key(
async def create_api_key(
name: str,
user_id: str,
permissions: List[APIKeyPermission],
permissions: list[APIKeyPermission],
description: Optional[str] = None,
) -> tuple[APIKeyWithoutHash, str]:
) -> tuple[APIKeyInfo, str]:
"""
Generate a new API key and store it in the database.
Returns the API key object (without hash) and the plain text key.
"""
try:
api_manager = APIKeyManager()
key = api_manager.generate_api_key()
generated_key = keysmith.generate_key()
api_key = await PrismaAPIKey.prisma().create(
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
prefix=key.prefix,
postfix=key.postfix,
key=key.hash,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
)
api_key_without_hash = APIKeyWithoutHash.from_db(api_key)
return api_key_without_hash, key.raw
except PrismaError as e:
logger.error(f"Database error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
return APIKeyInfo.from_db(saved_key_obj), generated_key.key
async def validate_api_key(plain_text_key: str) -> Optional[APIKey]:
async def get_active_api_keys_by_head(head: str) -> list[APIKeyInfoWithHash]:
results = await PrismaAPIKey.prisma().find_many(
where={"head": head, "status": APIKeyStatus.ACTIVE}
)
return [APIKeyInfoWithHash.from_db(key) for key in results]
async def validate_api_key(plaintext_key: str) -> Optional[APIKeyInfo]:
"""
Validate an API key and return the API key object if valid.
Validate an API key and return the API key object if valid and active.
"""
try:
if not plain_text_key.startswith(APIKeyManager.PREFIX):
if not plaintext_key.startswith(APIKeySmith.PREFIX):
logger.warning("Invalid API key format")
return None
prefix = plain_text_key[: APIKeyManager.PREFIX_LENGTH]
api_manager = APIKeyManager()
head = plaintext_key[: APIKeySmith.HEAD_LENGTH]
potential_matches = await get_active_api_keys_by_head(head)
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(prefix=prefix, status=(APIKeyStatus.ACTIVE))
matched_api_key = next(
(pm for pm in potential_matches if pm.match(plaintext_key)),
None,
)
if not api_key:
logger.warning(f"No active API key found with prefix {prefix}")
if not matched_api_key:
# API key not found or invalid
return None
is_valid = api_manager.verify_api_key(plain_text_key, api_key.key)
if not is_valid:
logger.warning("API key verification failed")
return None
return APIKey.from_db(api_key)
except Exception as e:
logger.error(f"Error validating API key: {str(e)}")
raise APIKeyValidationError(f"Failed to validate API key: {str(e)}")
async def revoke_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to revoke this API key."
# Migrate legacy keys to secure format on successful validation
if matched_api_key.salt is None:
matched_api_key = await _migrate_key_to_secure_hash(
plaintext_key, matched_api_key
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(
status=APIKeyStatus.REVOKED, revokedAt=datetime.now(timezone.utc)
),
)
return matched_api_key.without_hash()
except Exception as e:
logger.error(f"Error while validating API key: {e}")
raise RuntimeError("Failed to validate API key") from e
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
async def _migrate_key_to_secure_hash(
plaintext_key: str, key_obj: APIKeyInfoWithHash
) -> APIKeyInfoWithHash:
"""Replace the SHA256 hash of a legacy API key with a salted Scrypt hash."""
try:
new_hash, new_salt = keysmith.hash_key(plaintext_key)
await PrismaAPIKey.prisma().update(
where={"id": key_obj.id}, data={"hash": new_hash, "salt": new_salt}
)
logger.info(f"Migrated legacy API key #{key_obj.id} to secure format")
# Update the in-memory API key object with the new values before returning it
key_obj.hash = new_hash
key_obj.salt = new_salt
except Exception as e:
logger.error(f"Failed to migrate legacy API key #{key_obj.id}: {e}")
return key_obj
async def revoke_api_key(key_id: str, user_id: str) -> APIKeyInfo:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to revoke this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={
"status": APIKeyStatus.REVOKED,
"revokedAt": datetime.now(timezone.utc),
},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to revoke.")
return APIKeyInfo.from_db(updated_api_key)
async def list_user_api_keys(user_id: str) -> list[APIKeyInfo]:
api_keys = await PrismaAPIKey.prisma().find_many(
where={"userId": user_id}, order={"createdAt": "desc"}
)
return [APIKeyInfo.from_db(key) for key in api_keys]
async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo:
selector: APIKeyWhereUniqueInput = {"id": key_id}
api_key = await PrismaAPIKey.prisma().find_unique(where=selector)
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to suspend this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where=selector, data={"status": APIKeyStatus.SUSPENDED}
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to suspend.")
return APIKeyInfo.from_db(updated_api_key)
def has_permission(api_key: APIKeyInfo, required_permission: APIKeyPermission) -> bool:
return required_permission in api_key.permissions
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]:
api_key = await PrismaAPIKey.prisma().find_first(
where={"id": key_id, "userId": user_id}
)
if not api_key:
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
async def list_user_api_keys(user_id: str) -> List[APIKeyWithoutHash]:
try:
where_clause: APIKeyWhereInput = {"userId": user_id}
api_keys = await PrismaAPIKey.prisma().find_many(
where=where_clause, order={"createdAt": "desc"}
)
return [APIKeyWithoutHash.from_db(key) for key in api_keys]
except PrismaError as e:
logger.error(f"Database error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
async def suspend_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to suspend this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(status=APIKeyStatus.SUSPENDED),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
def has_permission(api_key: APIKey, required_permission: APIKeyPermission) -> bool:
try:
return required_permission in api_key.permissions
except Exception as e:
logger.error(f"Error checking API key permissions: {str(e)}")
return False
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(id=key_id, userId=user_id)
)
if not api_key:
return None
return APIKeyWithoutHash.from_db(api_key)
except PrismaError as e:
logger.error(f"Database error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
return APIKeyInfo.from_db(api_key)
async def update_api_key_permissions(
key_id: str, user_id: str, permissions: List[APIKeyPermission]
) -> Optional[APIKeyWithoutHash]:
key_id: str, user_id: str, permissions: list[APIKeyPermission]
) -> APIKeyInfo:
"""
Update the permissions of an API key.
"""
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if api_key is None:
raise APIKeyNotFoundError("No such API key found.")
if api_key is None:
raise NotFoundError("No such API key found.")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to update this API key."
)
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to update this API key.")
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(permissions=permissions),
)
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={"permissions": permissions},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to update.")
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
return APIKeyInfo.from_db(updated_api_key)
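
A standalone sketch of the scheme the rewritten module uses (`APIKeySmith`'s internals are an assumption here; `hashlib.scrypt` stands in for its hashing): only a short `head` (for lookup) and `tail` (for display) of the key are stored in plaintext, plus a salted hash, never the key itself.

import hashlib
import os
import secrets

HEAD_LENGTH, TAIL_LENGTH = 8, 4

def generate_key() -> tuple[str, str, str, bytes, bytes]:
    key = "agpt-" + secrets.token_urlsafe(32)
    salt = os.urandom(16)
    digest = hashlib.scrypt(key.encode(), salt=salt, n=2**14, r=8, p=1)
    return key, key[:HEAD_LENGTH], key[-TAIL_LENGTH:], digest, salt

def verify_key(candidate: str, digest: bytes, salt: bytes) -> bool:
    check = hashlib.scrypt(candidate.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(check, digest)  # constant-time comparison

key, head, tail, digest, salt = generate_key()
assert verify_key(key, digest, salt)
assert not verify_key("agpt-wrong", digest, salt)

Validation then needs only an indexed equality lookup on `head` followed by hash verification of the candidates, which is what `get_active_api_keys_by_head` plus `APIKeyInfoWithHash.match` implement above.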

View File

@@ -1,4 +1,3 @@
import functools
import inspect
import logging
import os
@@ -8,6 +7,7 @@ from enum import Enum
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Generic,
Optional,
@@ -20,6 +20,7 @@ from typing import (
import jsonref
import jsonschema
from autogpt_libs.utils.cache import cached
from prisma.models import AgentBlock
from prisma.types import AgentBlockCreateInput
from pydantic import BaseModel
@@ -44,9 +45,10 @@ if TYPE_CHECKING:
app_config = Config()
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
BlockOutput = AsyncGen[BlockData, None] # Output: 1 output pin produces n data.
BlockOutputEntry = tuple[str, Any] # Output data should be a tuple of (name, value).
BlockOutput = AsyncGen[BlockOutputEntry, None] # Output: 1 output pin produces n data.
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]
CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict.
@@ -89,6 +91,45 @@ class BlockCategory(Enum):
return {"category": self.name, "description": self.value}
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
class BlockInfo(BaseModel):
id: str
name: str
inputSchema: dict[str, Any]
outputSchema: dict[str, Any]
costs: list[BlockCost]
description: str
categories: list[dict[str, str]]
contributors: list[dict[str, Any]]
staticOutput: bool
uiType: str
class BlockSchema(BaseModel):
cached_jsonschema: ClassVar[dict[str, Any]]
@@ -306,7 +347,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_schema: Type[BlockSchemaInputType] = EmptySchema,
output_schema: Type[BlockSchemaOutputType] = EmptySchema,
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockData | list[BlockData] | None = None,
test_output: BlockTestOutput | list[BlockTestOutput] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials | dict[str, Credentials]] = None,
disabled: bool = False,
@@ -452,6 +493,24 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
"uiType": self.block_type.value,
}
def get_info(self) -> BlockInfo:
from backend.data.credit import get_block_cost
return BlockInfo(
id=self.id,
name=self.name,
inputSchema=self.input_schema.jsonschema(),
outputSchema=self.output_schema.jsonschema(),
costs=get_block_cost(self),
description=self.description,
categories=[category.dict() for category in self.categories],
contributors=[
contributor.model_dump() for contributor in self.contributors
],
staticOutput=self.static_output,
uiType=self.block_type.value,
)
async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
raise ValueError(
@@ -663,7 +722,7 @@ def get_block(block_id: str) -> Block[BlockSchema, BlockSchema] | None:
return cls() if cls else None
@functools.cache
@cached()
def get_webhook_block_ids() -> Sequence[str]:
return [
id
@@ -672,7 +731,7 @@ def get_webhook_block_ids() -> Sequence[str]:
]
@functools.cache
@cached()
def get_io_block_ids() -> Sequence[str]:
return [
id
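
A usage illustration of the `BlockCost` model now defined in `block.py`: `cost_type` and `cost_filter` are optional, so the common "N credits per run" case stays terse.

flat = BlockCost(5)                                   # 5 credits per run
metered = BlockCost(1, cost_type=BlockCostType.BYTE)  # 1 credit per byte
conditional = BlockCost(18, cost_filter={"ideogram_model_name": "V_3"})  # applies only when the filter matches the inputs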

View File

@@ -29,8 +29,7 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.data.block import Block
from backend.data.cost import BlockCost, BlockCostType
from backend.data.block import Block, BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,
@@ -307,7 +306,18 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
"type": ideogram_credentials.type,
}
},
)
),
BlockCost(
cost_amount=18,
cost_filter={
"ideogram_model_name": "V_3",
"credentials": {
"id": ideogram_credentials.id,
"provider": ideogram_credentials.provider,
"type": ideogram_credentials.type,
},
},
),
],
AIShortformVideoCreatorBlock: [
BlockCost(

View File

@@ -1,32 +0,0 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from backend.data.block import BlockInput
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)

View File

@@ -2,7 +2,7 @@ import logging
from abc import ABC, abstractmethod
from collections import defaultdict
from datetime import datetime, timezone
from typing import Any, cast
from typing import TYPE_CHECKING, Any, cast
import stripe
from prisma import Json
@@ -23,7 +23,6 @@ from pydantic import BaseModel
from backend.data import db
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.data.model import (
AutoTopUpConfig,
RefundRequest,
@@ -34,13 +33,16 @@ from backend.data.model import (
from backend.data.notifications import NotificationEventModel, RefundRequestData
from backend.data.user import get_user_by_id, get_user_email_by_id
from backend.notifications.notifications import queue_notification_async
from backend.server.model import Pagination
from backend.server.v2.admin.model import UserHistoryResponse
from backend.util.exceptions import InsufficientBalanceError
from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.data.block import Block, BlockCost
settings = Settings()
stripe.api_key = settings.secrets.stripe_api_key
logger = logging.getLogger(__name__)
@@ -997,10 +999,14 @@ def get_user_credit_model() -> UserCreditBase:
return UserCredit()
def get_block_costs() -> dict[str, list[BlockCost]]:
def get_block_costs() -> dict[str, list["BlockCost"]]:
return {block().id: costs for block, costs in BLOCK_COSTS.items()}
def get_block_cost(block: "Block") -> list["BlockCost"]:
return BLOCK_COSTS.get(block.__class__, [])
async def get_stripe_customer_id(user_id: str) -> str:
user = await get_user_by_id(user_id)
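
The `TYPE_CHECKING` import above breaks the new import cycle: `block.py` now defines `BlockCost` but calls `get_block_cost` from this module. A generic illustration of the pattern (`module_a` and the local `BLOCK_COSTS` dict are hypothetical stand-ins):

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by static type checkers, never at runtime
    from module_a import Block, BlockCost

BLOCK_COSTS: dict[type, list["BlockCost"]] = {}

def get_block_cost(block: "Block") -> list["BlockCost"]:
    # Quoted annotations defer name resolution, so no runtime import is needed
    return BLOCK_COSTS.get(block.__class__, [])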

View File

@@ -7,7 +7,7 @@ from prisma.models import CreditTransaction
from backend.blocks.llm import AITextGeneratorBlock
from backend.data.block import get_block
from backend.data.credit import BetaUserCredit, UsageTransactionMetadata
from backend.data.execution import NodeExecutionEntry
from backend.data.execution import NodeExecutionEntry, UserContext
from backend.data.user import DEFAULT_USER_ID
from backend.executor.utils import block_usage_cost
from backend.integrations.credentials_store import openai_credentials
@@ -75,6 +75,7 @@ async def test_block_credit_usage(server: SpinTestServer):
"type": openai_credentials.type,
},
},
user_context=UserContext(timezone="UTC"),
),
)
assert spending_amount_1 > 0
@@ -88,6 +89,7 @@ async def test_block_credit_usage(server: SpinTestServer):
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"},
user_context=UserContext(timezone="UTC"),
),
)
assert spending_amount_2 == 0

View File

@@ -83,7 +83,7 @@ async def disconnect():
# Transaction timeout constant (in milliseconds)
TRANSACTION_TIMEOUT = 15000 # 15 seconds - Increased from 5s to prevent timeout errors
TRANSACTION_TIMEOUT = 30000 # 30 seconds - Increased from 15s to prevent timeout errors during graph creation under load
@asynccontextmanager

View File

@@ -11,11 +11,14 @@ from typing import (
Generator,
Generic,
Literal,
Mapping,
Optional,
TypeVar,
cast,
overload,
)
from prisma import Json
from prisma.enums import AgentExecutionStatus
from prisma.models import (
AgentGraphExecution,
@@ -24,7 +27,6 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -39,6 +41,7 @@ from pydantic.fields import Field
from backend.server.v2.store.exceptions import DatabaseError
from backend.util import type as type_utils
from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Config
from backend.util.truncate import truncate
@@ -59,7 +62,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
graph_execution_include,
)
from .model import GraphExecutionStats, NodeExecutionStats
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
T = TypeVar("T")
@@ -86,16 +89,49 @@ class BlockErrorStats(BaseModel):
ExecutionStatus = AgentExecutionStatus
NodeInputMask = Mapping[str, JsonValue]
NodesInputMasks = Mapping[str, NodeInputMask]
# Maps destination status -> allowed source statuses
VALID_STATUS_TRANSITIONS = {
ExecutionStatus.QUEUED: [
ExecutionStatus.INCOMPLETE,
],
ExecutionStatus.RUNNING: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.TERMINATED, # For resuming halted execution
],
ExecutionStatus.COMPLETED: [
ExecutionStatus.RUNNING,
],
ExecutionStatus.FAILED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
ExecutionStatus.TERMINATED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
}
class GraphExecutionMeta(BaseDbModel):
id: str # type: ignore # Override base class to make this required
user_id: str
graph_id: str
graph_version: int
preset_id: Optional[str] = None
inputs: Optional[BlockInput] # no default -> required in the OpenAPI spec
credential_inputs: Optional[dict[str, CredentialsMetaInput]]
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
status: ExecutionStatus
started_at: datetime
ended_at: datetime
is_shared: bool = False
share_token: Optional[str] = None
class Stats(BaseModel):
model_config = ConfigDict(
@@ -177,6 +213,18 @@ class GraphExecutionMeta(BaseDbModel):
user_id=_graph_exec.userId,
graph_id=_graph_exec.agentGraphId,
graph_version=_graph_exec.agentGraphVersion,
inputs=cast(BlockInput | None, _graph_exec.inputs),
credential_inputs=(
{
name: CredentialsMetaInput.model_validate(cmi)
for name, cmi in cast(dict, _graph_exec.credentialInputs).items()
}
if _graph_exec.credentialInputs
else None
),
nodes_input_masks=cast(
dict[str, BlockInput] | None, _graph_exec.nodesInputMasks
),
preset_id=_graph_exec.agentPresetId,
status=ExecutionStatus(_graph_exec.executionStatus),
started_at=start_time,
@@ -200,11 +248,13 @@ class GraphExecutionMeta(BaseDbModel):
if stats
else None
),
is_shared=_graph_exec.isShared,
share_token=_graph_exec.shareToken,
)
class GraphExecution(GraphExecutionMeta):
inputs: BlockInput
inputs: BlockInput # type: ignore - incompatible override is intentional
outputs: CompletedBlockOutput
@staticmethod
@@ -224,15 +274,18 @@ class GraphExecution(GraphExecutionMeta):
)
inputs = {
**{
# inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
},
**(
graph_exec.inputs
or {
# fallback: extract inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
}
),
**{
# input from webhook-triggered block
"payload": exec.input_data["payload"]
@@ -250,14 +303,13 @@ class GraphExecution(GraphExecutionMeta):
if (
block := get_block(exec.block_id)
) and block.block_type == BlockType.OUTPUT:
outputs[exec.input_data["name"]].append(
exec.input_data.get("value", None)
)
outputs[exec.input_data["name"]].append(exec.input_data.get("value"))
return GraphExecution(
**{
field_name: getattr(graph_exec, field_name)
for field_name in GraphExecutionMeta.model_fields
if field_name != "inputs"
},
inputs=inputs,
outputs=outputs,
@@ -290,13 +342,18 @@ class GraphExecutionWithNodes(GraphExecution):
node_executions=node_executions,
)
def to_graph_execution_entry(self):
def to_graph_execution_entry(
self,
user_context: "UserContext",
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
):
return GraphExecutionEntry(
user_id=self.user_id,
graph_id=self.graph_id,
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks={}, # FIXME: store credentials on AgentGraphExecution
nodes_input_masks=compiled_nodes_input_masks,
user_context=user_context,
)
@@ -332,7 +389,7 @@ class NodeExecutionResult(BaseModel):
else:
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, type[Any])
input_data[data.name] = type_utils.convert(data.data, JsonValue)
output_data: CompletedBlockOutput = defaultdict(list)
@@ -341,7 +398,7 @@ class NodeExecutionResult(BaseModel):
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
output_data[data.name].append(type_utils.convert(data.data, JsonValue))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
@@ -368,7 +425,9 @@ class NodeExecutionResult(BaseModel):
end_time=_node_exec.endedTime,
)
def to_node_execution_entry(self) -> "NodeExecutionEntry":
def to_node_execution_entry(
self, user_context: "UserContext"
) -> "NodeExecutionEntry":
return NodeExecutionEntry(
user_id=self.user_id,
graph_exec_id=self.graph_exec_id,
@@ -377,6 +436,7 @@ class NodeExecutionResult(BaseModel):
node_id=self.node_id,
block_id=self.block_id,
inputs=self.input_data,
user_context=user_context,
)
@@ -384,13 +444,13 @@ class NodeExecutionResult(BaseModel):
async def get_graph_executions(
graph_exec_id: str | None = None,
graph_id: str | None = None,
user_id: str | None = None,
statuses: list[ExecutionStatus] | None = None,
created_time_gte: datetime | None = None,
created_time_lte: datetime | None = None,
limit: int | None = None,
graph_exec_id: Optional[str] = None,
graph_id: Optional[str] = None,
user_id: Optional[str] = None,
statuses: Optional[list[ExecutionStatus]] = None,
created_time_gte: Optional[datetime] = None,
created_time_lte: Optional[datetime] = None,
limit: Optional[int] = None,
) -> list[GraphExecutionMeta]:
"""⚠️ **Optional `user_id` check**: MUST USE check in user-facing endpoints."""
where_filter: AgentGraphExecutionWhereInput = {
@@ -418,6 +478,60 @@ async def get_graph_executions(
return [GraphExecutionMeta.from_db(execution) for execution in executions]
class GraphExecutionsPaginated(BaseModel):
"""Response schema for paginated graph executions."""
executions: list[GraphExecutionMeta]
pagination: Pagination
async def get_graph_executions_paginated(
user_id: str,
graph_id: Optional[str] = None,
page: int = 1,
page_size: int = 25,
statuses: Optional[list[ExecutionStatus]] = None,
created_time_gte: Optional[datetime] = None,
created_time_lte: Optional[datetime] = None,
) -> GraphExecutionsPaginated:
"""Get paginated graph executions for a specific graph."""
where_filter: AgentGraphExecutionWhereInput = {
"isDeleted": False,
"userId": user_id,
}
if graph_id:
where_filter["agentGraphId"] = graph_id
if created_time_gte or created_time_lte:
where_filter["createdAt"] = {
"gte": created_time_gte or datetime.min.replace(tzinfo=timezone.utc),
"lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc),
}
if statuses:
where_filter["OR"] = [{"executionStatus": status} for status in statuses]
total_count = await AgentGraphExecution.prisma().count(where=where_filter)
total_pages = (total_count + page_size - 1) // page_size
offset = (page - 1) * page_size
executions = await AgentGraphExecution.prisma().find_many(
where=where_filter,
order={"createdAt": "desc"},
take=page_size,
skip=offset,
)
return GraphExecutionsPaginated(
executions=[GraphExecutionMeta.from_db(execution) for execution in executions],
pagination=Pagination(
total_items=total_count,
total_pages=total_pages,
current_page=page,
page_size=page_size,
),
)
async def get_graph_execution_meta(
user_id: str, execution_id: str
) -> GraphExecutionMeta | None:
@@ -479,9 +593,12 @@ async def get_graph_execution(
async def create_graph_execution(
graph_id: str,
graph_version: int,
starting_nodes_input: list[tuple[str, BlockInput]],
starting_nodes_input: list[tuple[str, BlockInput]], # list[(node_id, BlockInput)]
inputs: Mapping[str, JsonValue],
user_id: str,
preset_id: str | None = None,
preset_id: Optional[str] = None,
credential_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> GraphExecutionWithNodes:
"""
Create a new AgentGraphExecution record.
@@ -489,11 +606,18 @@ async def create_graph_execution(
Returns the id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data=AgentGraphExecutionCreateInput(
agentGraphId=graph_id,
agentGraphVersion=graph_version,
executionStatus=ExecutionStatus.QUEUED,
NodeExecutions={
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.INCOMPLETE,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
@@ -509,9 +633,9 @@ async def create_graph_execution(
for node_id, node_input in starting_nodes_input
]
},
userId=user_id,
agentPresetId=preset_id,
),
"userId": user_id,
"agentPresetId": preset_id,
},
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -522,7 +646,7 @@ async def upsert_execution_input(
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: Any,
input_data: JsonValue,
node_exec_id: str | None = None,
) -> tuple[str, BlockInput]:
"""
@@ -571,7 +695,7 @@ async def upsert_execution_input(
)
return existing_execution.id, {
**{
input_data.name: type_utils.convert(input_data.data, type[Any])
input_data.name: type_utils.convert(input_data.data, JsonValue)
for input_data in existing_execution.Input or []
},
input_name: input_data,
@@ -632,6 +756,11 @@ async def update_graph_execution_stats(
status: ExecutionStatus | None = None,
stats: GraphExecutionStats | None = None,
) -> GraphExecution | None:
if not status and not stats:
raise ValueError(
f"Must provide either status or stats to update for execution {graph_exec_id}"
)
update_data: AgentGraphExecutionUpdateManyMutationInput = {}
if stats:
@@ -643,20 +772,25 @@ async def update_graph_execution_stats(
if status:
update_data["executionStatus"] = status
updated_count = await AgentGraphExecution.prisma().update_many(
where={
"id": graph_exec_id,
"OR": [
{"executionStatus": ExecutionStatus.RUNNING},
{"executionStatus": ExecutionStatus.QUEUED},
# Terminated graph can be resumed.
{"executionStatus": ExecutionStatus.TERMINATED},
],
},
where_clause: AgentGraphExecutionWhereInput = {"id": graph_exec_id}
if status:
if allowed_from := VALID_STATUS_TRANSITIONS.get(status, []):
# Add OR clause to check if current status is one of the allowed source statuses
where_clause["AND"] = [
{"id": graph_exec_id},
{"OR": [{"executionStatus": s} for s in allowed_from]},
]
else:
raise ValueError(
f"Status {status} cannot be set via update for execution {graph_exec_id}. "
f"This status can only be set at creation or is not a valid target status."
)
await AgentGraphExecution.prisma().update_many(
where=where_clause,
data=update_data,
)
if updated_count == 0:
return None
graph_exec = await AgentGraphExecution.prisma().find_unique_or_raise(
where={"id": graph_exec_id},
@@ -664,6 +798,7 @@ async def update_graph_execution_stats(
[*get_io_block_ids(), *get_webhook_block_ids()]
),
)
return GraphExecution.from_db(graph_exec)
@@ -817,12 +952,19 @@ async def get_latest_node_execution(
# ----------------- Execution Infrastructure ----------------- #
class UserContext(BaseModel):
"""Generic user context for graph execution containing user-specific settings."""
timezone: str
class GraphExecutionEntry(BaseModel):
user_id: str
graph_exec_id: str
graph_id: str
graph_version: int
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None
nodes_input_masks: Optional[NodesInputMasks] = None
user_context: UserContext
class NodeExecutionEntry(BaseModel):
@@ -833,6 +975,7 @@ class NodeExecutionEntry(BaseModel):
node_id: str
block_id: str
inputs: BlockInput
user_context: UserContext
class ExecutionQueue(Generic[T]):
@@ -882,6 +1025,18 @@ class NodeExecutionEvent(NodeExecutionResult):
)
class SharedExecutionResponse(BaseModel):
"""Public-safe response for shared executions"""
id: str
graph_name: str
graph_description: Optional[str]
status: ExecutionStatus
created_at: datetime
outputs: CompletedBlockOutput # Only the final outputs, no intermediate data
# Deliberately exclude: user_id, inputs, credentials, node details
ExecutionEvent = Annotated[
GraphExecutionEvent | NodeExecutionEvent, Field(discriminator="event_type")
]
@@ -1059,3 +1214,98 @@ async def get_block_error_stats(
)
for row in result
]
async def update_graph_execution_share_status(
execution_id: str,
user_id: str,
is_shared: bool,
share_token: str | None,
shared_at: datetime | None,
) -> None:
"""Update the sharing status of a graph execution."""
await AgentGraphExecution.prisma().update(
where={"id": execution_id},
data={
"isShared": is_shared,
"shareToken": share_token,
"sharedAt": shared_at,
},
)
async def get_graph_execution_by_share_token(
share_token: str,
) -> SharedExecutionResponse | None:
"""Get a shared execution with limited public-safe data."""
execution = await AgentGraphExecution.prisma().find_first(
where={
"shareToken": share_token,
"isShared": True,
"isDeleted": False,
},
include={
"AgentGraph": True,
"NodeExecutions": {
"include": {
"Output": True,
"Node": {
"include": {
"AgentBlock": True,
}
},
},
},
},
)
if not execution:
return None
# Extract outputs from OUTPUT blocks only (consistent with GraphExecution.from_db)
outputs: CompletedBlockOutput = defaultdict(list)
if execution.NodeExecutions:
for node_exec in execution.NodeExecutions:
if node_exec.Node and node_exec.Node.agentBlockId:
# Get the block definition to check its type
block = get_block(node_exec.Node.agentBlockId)
if block and block.block_type == BlockType.OUTPUT:
# For OUTPUT blocks, the data is stored in executionData or Input
# The executionData contains the structured input with 'name' and 'value' fields
if hasattr(node_exec, "executionData") and node_exec.executionData:
exec_data = type_utils.convert(
node_exec.executionData, dict[str, Any]
)
if "name" in exec_data:
name = exec_data["name"]
value = exec_data.get("value")
outputs[name].append(value)
elif node_exec.Input:
# Build input_data from Input relation
input_data = {}
for data in node_exec.Input:
if data.name and data.data is not None:
input_data[data.name] = type_utils.convert(
data.data, JsonValue
)
if "name" in input_data:
name = input_data["name"]
value = input_data.get("value")
outputs[name].append(value)
return SharedExecutionResponse(
id=execution.id,
graph_name=(
execution.AgentGraph.name
if (execution.AgentGraph and execution.AgentGraph.name)
else "Untitled Agent"
),
graph_description=(
execution.AgentGraph.description if execution.AgentGraph else None
),
status=ExecutionStatus(execution.executionStatus),
created_at=execution.createdAt,
outputs=outputs,
)
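
A compact sketch of the guard `update_graph_execution_stats` now applies via `VALID_STATUS_TRANSITIONS`: a status may only be written if the row's current status is one of its allowed sources, and the check is pushed into the `WHERE` clause so it happens atomically rather than as a read-modify-write.

def can_transition(current: ExecutionStatus, new: ExecutionStatus) -> bool:
    return current in VALID_STATUS_TRANSITIONS.get(new, [])

assert can_transition(ExecutionStatus.RUNNING, ExecutionStatus.COMPLETED)
assert can_transition(ExecutionStatus.TERMINATED, ExecutionStatus.RUNNING)  # resuming a halted run
assert not can_transition(ExecutionStatus.COMPLETED, ExecutionStatus.RUNNING)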

View File

@@ -1,6 +1,7 @@
import logging
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, Literal, Optional, cast
from prisma.enums import SubmissionStatus
@@ -12,7 +13,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import Field, JsonValue, create_model
from pydantic import BaseModel, Field, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -28,12 +29,14 @@ from backend.data.model import (
from backend.integrations.providers import ProviderName
from backend.util import type as type_utils
from backend.util.json import SafeJson
from backend.util.models import Pagination
from .block import Block, BlockInput, BlockSchema, BlockType, get_block, get_blocks
from .db import BaseDbModel, query_raw_with_schema, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
if TYPE_CHECKING:
from .execution import NodesInputMasks
from .integrations import Webhook
logger = logging.getLogger(__name__)
@@ -159,6 +162,8 @@ class BaseGraph(BaseDbModel):
is_active: bool = True
name: str
description: str
instructions: str | None = None
recommended_schedule_cron: str | None = None
nodes: list[Node] = []
links: list[Link] = []
forked_from_id: str | None = None
@@ -205,6 +210,35 @@ class BaseGraph(BaseDbModel):
None,
)
@computed_field
@property
def trigger_setup_info(self) -> "GraphTriggerInfo | None":
if not (
self.webhook_input_node
and (trigger_block := self.webhook_input_node.block).webhook_config
):
return None
return GraphTriggerInfo(
provider=trigger_block.webhook_config.provider,
config_schema={
**(json_schema := trigger_block.input_schema.jsonschema()),
"properties": {
pn: sub_schema
for pn, sub_schema in json_schema["properties"].items()
if not is_credentials_field_name(pn)
},
"required": [
pn
for pn in json_schema.get("required", [])
if not is_credentials_field_name(pn)
],
},
credentials_input_name=next(
iter(trigger_block.input_schema.get_credentials_fields()), None
),
)
@staticmethod
def _generate_schema(
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
@@ -238,6 +272,14 @@ class BaseGraph(BaseDbModel):
}
class GraphTriggerInfo(BaseModel):
provider: ProviderName
config_schema: dict[str, Any] = Field(
description="Input schema for the trigger block"
)
credentials_input_name: Optional[str]
class Graph(BaseGraph):
sub_graphs: list[BaseGraph] = [] # Flattened sub-graphs
@@ -342,6 +384,8 @@ class GraphModel(Graph):
user_id: str
nodes: list[NodeModel] = [] # type: ignore
created_at: datetime
@property
def starting_nodes(self) -> list[NodeModel]:
outbound_nodes = {link.sink_id for link in self.links}
@@ -354,6 +398,10 @@ class GraphModel(Graph):
if node.id not in outbound_nodes or node.id in input_nodes
]
@property
def webhook_input_node(self) -> NodeModel | None: # type: ignore
return cast(NodeModel, super().webhook_input_node)
def meta(self) -> "GraphMeta":
"""
Returns a GraphMeta object with metadata about the graph.
@@ -414,7 +462,7 @@ class GraphModel(Graph):
def validate_graph(
self,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
):
"""
Validate graph structure and raise `ValueError` on issues.
@@ -428,7 +476,7 @@ class GraphModel(Graph):
def _validate_graph(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> None:
errors = GraphModel._validate_graph_get_errors(
graph, for_run, nodes_input_masks
@@ -442,7 +490,7 @@ class GraphModel(Graph):
def validate_graph_get_errors(
self,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -464,7 +512,7 @@ class GraphModel(Graph):
def _validate_graph_get_errors(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -655,9 +703,12 @@ class GraphModel(Graph):
version=graph.version,
forked_from_id=graph.forkedFromId,
forked_from_version=graph.forkedFromVersion,
created_at=graph.createdAt,
is_active=graph.isActive,
name=graph.name or "",
description=graph.description or "",
instructions=graph.instructions,
recommended_schedule_cron=graph.recommendedScheduleCron,
nodes=[NodeModel.from_db(node, for_export) for node in graph.Nodes or []],
links=list(
{
@@ -696,6 +747,13 @@ class GraphMeta(Graph):
return GraphMeta(**graph.model_dump())
class GraphsPaginated(BaseModel):
"""Response schema for paginated graphs."""
graphs: list[GraphMeta]
pagination: Pagination
# --------------------- CRUD functions --------------------- #
@@ -724,31 +782,42 @@ async def set_node_webhook(node_id: str, webhook_id: str | None) -> NodeModel:
return NodeModel.from_db(node)
async def list_graphs(
async def list_graphs_paginated(
user_id: str,
page: int = 1,
page_size: int = 25,
filter_by: Literal["active"] | None = "active",
) -> list[GraphMeta]:
) -> GraphsPaginated:
"""
Retrieves graph metadata objects.
Default behaviour is to get all currently active graphs.
Retrieves paginated graph metadata objects.
Args:
user_id: The ID of the user that owns the graphs.
page: Page number (1-based).
page_size: Number of graphs per page.
filter_by: Optional filter; "active" (the default) selects only active graphs.
user_id: The ID of the user that owns the graph.
Returns:
list[GraphMeta]: A list of objects representing the retrieved graphs.
GraphsPaginated: Paginated list of graph metadata.
"""
where_clause: AgentGraphWhereInput = {"userId": user_id}
if filter_by == "active":
where_clause["isActive"] = True
# Get total count
total_count = await AgentGraph.prisma().count(where=where_clause)
total_pages = (total_count + page_size - 1) // page_size
# Get paginated results
offset = (page - 1) * page_size
graphs = await AgentGraph.prisma().find_many(
where=where_clause,
distinct=["id"],
order={"version": "desc"},
include=AGENT_GRAPH_INCLUDE,
skip=offset,
take=page_size,
)
graph_models: list[GraphMeta] = []
@@ -762,7 +831,15 @@ async def list_graphs(
logger.error(f"Error processing graph {graph.id}: {e}")
continue
return graph_models
return GraphsPaginated(
graphs=graph_models,
pagination=Pagination(
total_items=total_count,
total_pages=total_pages,
current_page=page,
page_size=page_size,
),
)
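For reference, the pagination arithmetic used in `list_graphs_paginated` above, isolated as a small runnable sketch:

```python
# Illustrative sketch: the page math from list_graphs_paginated.
def page_math(total_count: int, page: int, page_size: int) -> tuple[int, int]:
    # Integer ceiling division: 101 items at 25/page -> 5 pages.
    total_pages = (total_count + page_size - 1) // page_size
    # Rows to skip for a 1-based page number.
    offset = (page - 1) * page_size
    return total_pages, offset

assert page_math(101, 3, 25) == (5, 50)
```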
async def get_graph_metadata(graph_id: str, version: int | None = None) -> Graph | None:
@@ -1045,6 +1122,7 @@ async def __create_graph(tx, graph: Graph, user_id: str):
version=graph.version,
name=graph.name,
description=graph.description,
recommendedScheduleCron=graph.recommended_schedule_cron,
isActive=graph.is_active,
userId=user_id,
forkedFromId=graph.forked_from_id,
@@ -1103,6 +1181,7 @@ def make_graph_model(creatable_graph: Graph, user_id: str) -> GraphModel:
return GraphModel(
**creatable_graph.model_dump(exclude={"nodes"}),
user_id=user_id,
created_at=datetime.now(tz=timezone.utc),
nodes=[
NodeModel(
**creatable_node.model_dump(),

View File

@@ -2,7 +2,6 @@ import json
from typing import Any
from uuid import UUID
import autogpt_libs.auth.models
import fastapi.exceptions
import pytest
from pytest_snapshot.plugin import Snapshot
@@ -317,12 +316,7 @@ async def test_access_store_listing_graph(server: SpinTestServer):
is_approved=True,
comments="Test comments",
),
autogpt_libs.auth.models.User(
user_id=admin_user.id,
role="admin",
email=admin_user.email,
phone_number="1234567890",
),
user_id=admin_user.id,
)
# Now we check the graph can be accessed by a user that does not own the graph

View File

@@ -59,9 +59,15 @@ def graph_execution_include(
}
AGENT_PRESET_INCLUDE: prisma.types.AgentPresetInclude = {
"InputPresets": True,
"Webhook": True,
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE},
"AgentPresets": {"include": {"InputPresets": True}},
"AgentPresets": {"include": AGENT_PRESET_INCLUDE},
}

View File

@@ -96,6 +96,12 @@ class User(BaseModel):
default=True, description="Notify on monthly summary"
)
# User timezone for scheduling and time display
timezone: str = Field(
default="not-set",
description="User timezone (IANA timezone identifier or 'not-set')",
)
@classmethod
def from_db(cls, prisma_user: "PrismaUser") -> "User":
"""Convert a database User object to application User model."""
@@ -149,6 +155,7 @@ class User(BaseModel):
notify_on_daily_summary=prisma_user.notifyOnDailySummary or True,
notify_on_weekly_summary=prisma_user.notifyOnWeeklySummary or True,
notify_on_monthly_summary=prisma_user.notifyOnMonthlySummary or True,
timezone=prisma_user.timezone or "not-set",
)
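A side note on the new `timezone` field: the diff stores the raw string with a `"not-set"` sentinel. A sketch of how such a value could be checked against real IANA identifiers with the standard library (this validation is an illustration, not part of the diff):

```python
# Illustrative sketch: validating an IANA timezone identifier.
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_timezone(tz: str) -> bool:
    if tz == "not-set":
        return True  # sentinel used above for users without a chosen timezone
    try:
        ZoneInfo(tz)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

assert is_valid_timezone("America/Chicago")
assert not is_valid_timezone("Not/A_Zone")
```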

View File

@@ -54,19 +54,6 @@ class AgentRunData(BaseNotificationData):
class ZeroBalanceData(BaseNotificationData):
last_transaction: float
last_transaction_time: datetime
top_up_link: str
@field_validator("last_transaction_time")
@classmethod
def validate_timezone(cls, value: datetime):
if value.tzinfo is None:
raise ValueError("datetime must have timezone information")
return value
class LowBalanceData(BaseNotificationData):
agent_name: str = Field(..., description="Name of the agent")
current_balance: float = Field(
..., description="Current balance in credits (100 = $1)"
@@ -75,6 +62,13 @@ class LowBalanceData(BaseNotificationData):
shortfall: float = Field(..., description="Amount of credits needed to continue")
class LowBalanceData(BaseNotificationData):
current_balance: float = Field(
..., description="Current balance in credits (100 = $1)"
)
billing_page_link: str = Field(..., description="Link to billing page")
class BlockExecutionFailedData(BaseNotificationData):
block_name: str
block_id: str
@@ -181,6 +175,42 @@ class RefundRequestData(BaseNotificationData):
balance: int
class AgentApprovalData(BaseNotificationData):
agent_name: str
agent_id: str
agent_version: int
reviewer_name: str
reviewer_email: str
comments: str
reviewed_at: datetime
store_url: str
@field_validator("reviewed_at")
@classmethod
def validate_timezone(cls, value: datetime):
if value.tzinfo is None:
raise ValueError("datetime must have timezone information")
return value
class AgentRejectionData(BaseNotificationData):
agent_name: str
agent_id: str
agent_version: int
reviewer_name: str
reviewer_email: str
comments: str
reviewed_at: datetime
resubmit_url: str
@field_validator("reviewed_at")
@classmethod
def validate_timezone(cls, value: datetime):
if value.tzinfo is None:
raise ValueError("datetime must have timezone information")
return value
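The repeated `validate_timezone` validators hinge on Python's naive/aware datetime distinction; a minimal illustration of what they accept and reject:

```python
# Illustrative sketch: naive vs. timezone-aware datetimes.
from datetime import datetime, timezone

naive = datetime.now()              # tzinfo is None -> validator raises
aware = datetime.now(timezone.utc)  # tzinfo set -> validator passes
assert naive.tzinfo is None
assert aware.tzinfo is not None
```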
NotificationData = Annotated[
Union[
AgentRunData,
@@ -240,6 +270,8 @@ def get_notif_data_type(
NotificationType.MONTHLY_SUMMARY: MonthlySummaryData,
NotificationType.REFUND_REQUEST: RefundRequestData,
NotificationType.REFUND_PROCESSED: RefundRequestData,
NotificationType.AGENT_APPROVED: AgentApprovalData,
NotificationType.AGENT_REJECTED: AgentRejectionData,
}[notification_type]
@@ -274,7 +306,7 @@ class NotificationTypeOverride:
# These are batched by the notification service
NotificationType.AGENT_RUN: QueueType.BATCH,
# These are batched by the notification service, but with a backoff strategy
NotificationType.ZERO_BALANCE: QueueType.BACKOFF,
NotificationType.ZERO_BALANCE: QueueType.IMMEDIATE,
NotificationType.LOW_BALANCE: QueueType.IMMEDIATE,
NotificationType.BLOCK_EXECUTION_FAILED: QueueType.BACKOFF,
NotificationType.CONTINUOUS_AGENT_ERROR: QueueType.BACKOFF,
@@ -283,6 +315,8 @@ class NotificationTypeOverride:
NotificationType.MONTHLY_SUMMARY: QueueType.SUMMARY,
NotificationType.REFUND_REQUEST: QueueType.ADMIN,
NotificationType.REFUND_PROCESSED: QueueType.ADMIN,
NotificationType.AGENT_APPROVED: QueueType.IMMEDIATE,
NotificationType.AGENT_REJECTED: QueueType.IMMEDIATE,
}
return BATCHING_RULES.get(self.notification_type, QueueType.IMMEDIATE)
@@ -300,6 +334,8 @@ class NotificationTypeOverride:
NotificationType.MONTHLY_SUMMARY: "monthly_summary.html",
NotificationType.REFUND_REQUEST: "refund_request.html",
NotificationType.REFUND_PROCESSED: "refund_processed.html",
NotificationType.AGENT_APPROVED: "agent_approved.html",
NotificationType.AGENT_REJECTED: "agent_rejected.html",
}[self.notification_type]
@property
@@ -315,6 +351,8 @@ class NotificationTypeOverride:
NotificationType.MONTHLY_SUMMARY: "We did a lot this month!",
NotificationType.REFUND_REQUEST: "[ACTION REQUIRED] You got a ${{data.amount / 100}} refund request from {{data.user_name}}",
NotificationType.REFUND_PROCESSED: "Refund for ${{data.amount / 100}} to {{data.user_name}} has been processed",
NotificationType.AGENT_APPROVED: "🎉 Your agent '{{data.agent_name}}' has been approved!",
NotificationType.AGENT_REJECTED: "Your agent '{{data.agent_name}}' needs some updates",
}[self.notification_type]
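The `{{...}}` subject templates above suggest a Jinja2-style engine; assuming that (the actual rendering code is not shown in this diff), a sketch of how one subject line would render:

```python
# Illustrative sketch, assuming Jinja2-style rendering of the subjects above.
from jinja2 import Template

subject = Template(
    "🎉 Your agent '{{data.agent_name}}' has been approved!"
).render(data={"agent_name": "Test Agent"})
assert subject == "🎉 Your agent 'Test Agent' has been approved!"
```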

View File

@@ -0,0 +1,151 @@
"""Tests for notification data models."""
from datetime import datetime, timezone
import pytest
from pydantic import ValidationError
from backend.data.notifications import AgentApprovalData, AgentRejectionData
class TestAgentApprovalData:
"""Test cases for AgentApprovalData model."""
def test_valid_agent_approval_data(self):
"""Test creating valid AgentApprovalData."""
data = AgentApprovalData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="John Doe",
reviewer_email="john@example.com",
comments="Great agent, approved!",
reviewed_at=datetime.now(timezone.utc),
store_url="https://app.autogpt.com/store/test-agent-123",
)
assert data.agent_name == "Test Agent"
assert data.agent_id == "test-agent-123"
assert data.agent_version == 1
assert data.reviewer_name == "John Doe"
assert data.reviewer_email == "john@example.com"
assert data.comments == "Great agent, approved!"
assert data.store_url == "https://app.autogpt.com/store/test-agent-123"
assert data.reviewed_at.tzinfo is not None
def test_agent_approval_data_without_timezone_raises_error(self):
"""Test that AgentApprovalData raises error without timezone."""
with pytest.raises(
ValidationError, match="datetime must have timezone information"
):
AgentApprovalData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="John Doe",
reviewer_email="john@example.com",
comments="Great agent, approved!",
reviewed_at=datetime.now(), # No timezone
store_url="https://app.autogpt.com/store/test-agent-123",
)
def test_agent_approval_data_with_empty_comments(self):
"""Test AgentApprovalData with empty comments."""
data = AgentApprovalData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="John Doe",
reviewer_email="john@example.com",
comments="", # Empty comments
reviewed_at=datetime.now(timezone.utc),
store_url="https://app.autogpt.com/store/test-agent-123",
)
assert data.comments == ""
class TestAgentRejectionData:
"""Test cases for AgentRejectionData model."""
def test_valid_agent_rejection_data(self):
"""Test creating valid AgentRejectionData."""
data = AgentRejectionData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="Jane Doe",
reviewer_email="jane@example.com",
comments="Please fix the security issues before resubmitting.",
reviewed_at=datetime.now(timezone.utc),
resubmit_url="https://app.autogpt.com/build/test-agent-123",
)
assert data.agent_name == "Test Agent"
assert data.agent_id == "test-agent-123"
assert data.agent_version == 1
assert data.reviewer_name == "Jane Doe"
assert data.reviewer_email == "jane@example.com"
assert data.comments == "Please fix the security issues before resubmitting."
assert data.resubmit_url == "https://app.autogpt.com/build/test-agent-123"
assert data.reviewed_at.tzinfo is not None
def test_agent_rejection_data_without_timezone_raises_error(self):
"""Test that AgentRejectionData raises error without timezone."""
with pytest.raises(
ValidationError, match="datetime must have timezone information"
):
AgentRejectionData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="Jane Doe",
reviewer_email="jane@example.com",
comments="Please fix the security issues.",
reviewed_at=datetime.now(), # No timezone
resubmit_url="https://app.autogpt.com/build/test-agent-123",
)
def test_agent_rejection_data_with_long_comments(self):
"""Test AgentRejectionData with long comments."""
long_comment = "A" * 1000 # Very long comment
data = AgentRejectionData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="Jane Doe",
reviewer_email="jane@example.com",
comments=long_comment,
reviewed_at=datetime.now(timezone.utc),
resubmit_url="https://app.autogpt.com/build/test-agent-123",
)
assert data.comments == long_comment
def test_model_serialization(self):
"""Test that models can be serialized and deserialized."""
original_data = AgentRejectionData(
agent_name="Test Agent",
agent_id="test-agent-123",
agent_version=1,
reviewer_name="Jane Doe",
reviewer_email="jane@example.com",
comments="Please fix the issues.",
reviewed_at=datetime.now(timezone.utc),
resubmit_url="https://app.autogpt.com/build/test-agent-123",
)
# Serialize to dict
data_dict = original_data.model_dump()
# Deserialize back
restored_data = AgentRejectionData.model_validate(data_dict)
assert restored_data.agent_name == original_data.agent_name
assert restored_data.agent_id == original_data.agent_id
assert restored_data.agent_version == original_data.agent_version
assert restored_data.reviewer_name == original_data.reviewer_name
assert restored_data.reviewer_email == original_data.reviewer_email
assert restored_data.comments == original_data.comments
assert restored_data.reviewed_at == original_data.reviewed_at
assert restored_data.resubmit_url == original_data.resubmit_url

View File

@@ -1,8 +1,7 @@
import logging
import os
from functools import cache
from autogpt_libs.utils.cache import thread_cached
from autogpt_libs.utils.cache import cached, thread_cached
from dotenv import load_dotenv
from redis import Redis
from redis.asyncio import Redis as AsyncRedis
@@ -13,7 +12,7 @@ load_dotenv()
HOST = os.getenv("REDIS_HOST", "localhost")
PORT = int(os.getenv("REDIS_PORT", "6379"))
PASSWORD = os.getenv("REDIS_PASSWORD", "password")
PASSWORD = os.getenv("REDIS_PASSWORD", None)
logger = logging.getLogger(__name__)
@@ -35,7 +34,7 @@ def disconnect():
get_redis().close()
@cache
@cached()
def get_redis() -> Redis:
return connect()
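The switch from `functools.cache` to `cached()` above points at autogpt_libs' own memoizer; its implementation is not shown in this diff, but a zero-argument form can be approximated with the standard library (an assumption, not the real API):

```python
# Illustrative sketch, assuming cached() acts as a call-argument memoizer.
import functools

def cached():
    def decorator(fn):
        return functools.cache(fn)  # memoize by call arguments
    return decorator

@cached()
def expensive() -> int:
    print("computed once")
    return 42

assert expensive() == 42  # computes and prints
assert expensive() == 42  # served from cache, no print
```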

Some files were not shown because too many files have changed in this diff.