Compare commits

...

531 Commits

Author SHA1 Message Date
Zamil Majdy
5aaf07fbaf feat(backend): implement unified content embeddings with userId support
- Replace StoreListingEmbedding with UnifiedContentEmbedding table
- Add ContentType enum (STORE_AGENT, BLOCK, INTEGRATION, DOCUMENTATION, LIBRARY_AGENT)
- Support user-specific content with optional userId field for access control
- Maintain backward compatibility with wrapper functions for existing store APIs
- Update hybrid search to use unified embedding table with proper ContentType filtering
- Add comprehensive tests for new embedding service functionality
- Use proper Prisma ContentType enum instead of strings for type safety

The unified architecture enables future expansion to semantic search for blocks,
documentation, and library agents while maintaining existing store functionality.
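A rough sketch of the lookup this enables, in Prisma-Python style (model and field names follow this commit message; the query shape is an illustrative assumption, not the actual service code):

```python
from enum import Enum
from typing import Optional

class ContentType(str, Enum):
    STORE_AGENT = "STORE_AGENT"
    BLOCK = "BLOCK"
    INTEGRATION = "INTEGRATION"
    DOCUMENTATION = "DOCUMENTATION"
    LIBRARY_AGENT = "LIBRARY_AGENT"

async def find_embeddings(db, content_type: ContentType, user_id: Optional[str] = None):
    # Public rows have userId = None; user-specific rows are visible only
    # to their owner: the access-control rule the optional userId enables.
    return await db.unifiedcontentembedding.find_many(
        where={
            "contentType": content_type.value,
            "OR": [{"userId": None}, {"userId": user_id}],
        }
    )
```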
2026-01-09 14:15:09 -06:00
Swifty
0d2996e501 Merge branch 'dev' into hackathon-copilot-search 2026-01-09 16:31:59 +01:00
Swifty
843c487500 feat(backend): add prisma types stub generator for pyright compatibility (#11736)
Prisma's generated `types.py` file is 57,000+ lines with complex
recursive TypedDict definitions that exhaust Pyright's type inference
budget. This causes random type errors and makes the type checker
unreliable.
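To illustrate the approach, a minimal stub generator could walk the AST and alias every TypedDict class to a plain dict while keeping simple top-level aliases (names and heuristics here are assumptions; the real `gen_prisma_types_stub.py` preserves more detail):

```python
import ast
from pathlib import Path

def collapse_typed_dicts(source: str) -> str:
    """Emit a .pyi body with TypedDicts collapsed to dict[str, Any]."""
    tree = ast.parse(source)
    lines = ["from typing import Any, Literal, TypeVar", ""]
    for node in tree.body:
        if isinstance(node, ast.ClassDef) and any(
            isinstance(base, ast.Name) and base.id == "TypedDict"
            for base in node.bases
        ):
            # The recursive TypedDicts are what exhaust Pyright's
            # inference budget, so the stub flattens them to plain dicts.
            lines.append(f"{node.name} = dict[str, Any]")
        elif isinstance(node, ast.Assign):
            # Keep simple top-level assignments (e.g. Literal aliases).
            lines.append(ast.unparse(node))
    return "\n".join(lines) + "\n"

Path("types.pyi").write_text(collapse_typed_dicts(Path("types.py").read_text()))
```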

### Changes 🏗️

- Add `gen_prisma_types_stub.py` script that generates a lightweight
`.pyi` stub file
- The stub preserves safe types (Literal, TypeVar) while collapsing
complex TypedDicts to `dict[str, Any]`
- Integrate stub generation into all workflows that run `prisma
generate`:
  - `platform-backend-ci.yml`
  - `claude.yml`
  - `claude-dependabot.yml`
  - `copilot-setup-steps.yml`
  - `docker-compose.platform.yml`
  - `Dockerfile`
  - `Makefile` (migrate & reset-db targets)
  - `linter.py` (lint & format commands)
- Add `gen-prisma-stub` poetry script entry
- Fix two pre-existing type errors that were previously masked:
- `store/db.py`: Replace private type
`_StoreListingVersion_version_OrderByInput` with dict literal
  - `airtable/_webhook.py`: Add cast for `Serializable` type

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run format` - passes with 0 errors (down from 57+)
  - [x] Run `poetry run lint` - passes with 0 errors
  - [x] Run `poetry run gen-prisma-stub` - generates stub successfully
- [x] Verify stub file is created at correct location with proper
content

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

## Summary by CodeRabbit

* **Chores**
* Added a lightweight Prisma type-stub generator and integrated it into
build, lint, CI/CD, and container workflows.
* Build, migration, formatting, and lint steps now generate these stubs
to improve type-checking performance and reduce overhead during builds
and deployments.
  * Exposed a project command to run stub generation manually.

2026-01-09 16:31:10 +01:00
Nicholas Tindle
47a3a5ef41 feat(backend,frontend): optional credentials flag for blocks at agent level (#11716)
This feature allows agent makers to mark credential fields as optional.
When credentials are not configured for an optional block, the block
will be skipped during execution rather than causing a validation error.

**Use case:** An agent with multiple notification channels (Discord,
Twilio, Slack) where the user only needs to configure one - unconfigured
channels are simply skipped.

### Changes 🏗️

#### Backend

**Data Model Changes:**
- `backend/data/graph.py`: Added `credentials_optional` property to
`Node` model that reads from node metadata
- `backend/data/execution.py`: Added `nodes_to_skip` field to
`GraphExecutionEntry` model to track nodes that should be skipped

**Validation Changes:**
- `backend/executor/utils.py`:
- Updated `_validate_node_input_credentials()` to return a tuple of
`(credential_errors, nodes_to_skip)`
- Nodes with `credentials_optional=True` and missing credentials are
added to `nodes_to_skip` instead of raising validation errors
- Updated `validate_graph_with_credentials()` to propagate
`nodes_to_skip` set
- Updated `validate_and_construct_node_execution_input()` to return
`nodes_to_skip`
- Updated `add_graph_execution()` to pass `nodes_to_skip` to execution
entry

**Execution Changes:**
- `backend/executor/manager.py`:
  - Added skip logic in `_on_graph_execution()` dispatch loop
- When a node is in `nodes_to_skip`, it is marked as `COMPLETED` without
execution
  - No outputs are produced, so downstream nodes won't trigger
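A hedged sketch of the collection step described above (simplified; the real `_validate_node_input_credentials` works against node input schemas and credential metadata):

```python
def validate_node_credentials(nodes, configured) -> tuple[list[str], set[str]]:
    errors: list[str] = []
    nodes_to_skip: set[str] = set()
    for node in nodes:
        # `credential_fields` is a stand-in for however the node's
        # required credential inputs are enumerated.
        missing = [f for f in node.credential_fields if f not in configured]
        if not missing:
            continue
        if node.credentials_optional:
            nodes_to_skip.add(node.id)  # skip instead of failing validation
        else:
            errors.append(f"Node {node.id} is missing credentials: {missing}")
    return errors, nodes_to_skip
```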

#### Frontend

**Node Store:**
- `frontend/src/app/(platform)/build/stores/nodeStore.ts`:
- Added `credentials_optional` to node metadata serialization in
`convertCustomNodeToBackendNode()`
- Added `getCredentialsOptional()` and `setCredentialsOptional()` helper
methods

**Credential Field Component:**
-
`frontend/src/components/renderers/input-renderer/fields/CredentialField/CredentialField.tsx`:
  - Added "Optional - skip block if not configured" switch toggle
  - Switch controls the `credentials_optional` metadata flag
  - Placeholder text updates based on optional state

**Credential Field Hook:**
-
`frontend/src/components/renderers/input-renderer/fields/CredentialField/useCredentialField.ts`:
  - Added `disableAutoSelect` parameter
- When credentials are optional, auto-selection of credentials is
disabled

**Feature Flags:**
- `frontend/src/services/feature-flags/use-get-flag.ts`: Minor refactor
(condition ordering)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Build an agent using the Smart Decision Maker and downstream blocks to test this

---

> [!NOTE]
> Introduces optional credentials across graph execution and UI,
allowing nodes to be skipped (no outputs, no downstream triggers) when
their credentials are not configured.
> 
> - Backend
> - Adds `Node.credentials_optional` (from node `metadata`) and computes
required credential fields in `Graph.credentials_input_schema` based on
usage.
> - Validates credentials with `_validate_node_input_credentials` →
returns `(errors, nodes_to_skip)`; plumbs `nodes_to_skip` through
`validate_graph_with_credentials`,
`_construct_starting_node_execution_input`,
`validate_and_construct_node_execution_input`, and `add_graph_execution`
into `GraphExecutionEntry`.
> - Executor: dispatch loop skips nodes in `nodes_to_skip` (marks
`COMPLETED`); `execute_node`/`on_node_execution` accept `nodes_to_skip`;
`SmartDecisionMakerBlock.run` filters tool functions whose
`_sink_node_id` is in `nodes_to_skip` and errors only if all tools are
filtered.
> - Models: `GraphExecutionEntry` gains `nodes_to_skip` field. Tests and
snapshots updated accordingly.
> 
> - Frontend
> - Builder: credential field uses `custom/credential_field` with an
"Optional – skip block if not configured" toggle; `nodeStore` persists
`credentials_optional` and history; UI hides optional toggle in run
dialogs.
> - Run dialogs: compute required credentials from
`credentials_input_schema.required`; allow selecting "None"; avoid
auto-select for optional; filter out incomplete creds before execute.
>   - Minor schema/UI wiring updates (`uiSchema`, form context flags).

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-09 14:11:35 +00:00
Ubbe
ec00aa951a fix(frontend): agent favorites layout (#11733)
## Changes 🏗️

<img width="800" height="744" alt="Screenshot 2026-01-09 at 16 07 08"
src="https://github.com/user-attachments/assets/034c97e2-18f3-441c-a13d-71f668ad672f"
/>

- Remove feature flag for agent favourites ( _keep it always visible_ )
- Fix the layout on the card so the ❤️ icon appears next to the `...`
menu
- Remove icons on toasts

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and check the above


## Summary by CodeRabbit

* **New Features**
* Favorites now respond to the current search term and are available to
all users (no feature-flag).

* **UI/UX Improvements**
* Redesigned Favorites section with simplified header, inline agent
counts, updated spacing/dividers, and removal of skeleton placeholders.
  * Favorite button repositioned and visually simplified on agent cards.
* Toast visuals simplified by removing per-type icons and adjusting
close-button positioning.

2026-01-09 18:52:07 +07:00
Zamil Majdy
9e37a66bca feat(backend): fix hybrid search implementation and add comprehensive tests
- Fix configuration to use settings.py instead of getenv for OpenAI API key
- Improve performance by using asyncio.gather for concurrent embedding generation (~10x faster)
- Move all local imports to top-level for better test mocking
- Add graceful degradation when hybrid search fails (fallback to basic text search)
- Create comprehensive test suite with 18 test cases covering all scenarios
- Fix pytest plugin conflicts by disabling syrupy to avoid --snapshot-update collision
- Resolve database variable binding issues with proper initialization
- Ensure all 27 store/embeddings tests pass consistently

Fixes:
- Store listings now use standardized hybrid search (embeddings + BM25)
- Performance improved from sequential to concurrent embedding processing
- Database migrations and table dependencies properly handled
- Test coverage complete for embedding functionality

Next: Extend hybrid search standardization to builder blocks and docs (currently 33% complete)
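The concurrency change is essentially this pattern, with `embed_one` standing in for the actual OpenAI embedding call:

```python
import asyncio

async def embed_all(texts: list[str], embed_one) -> list[list[float]]:
    # Sequential awaits cost the sum of all request latencies; gather
    # issues the requests concurrently, so total time is roughly the
    # slowest single call (the ~10x speedup noted above).
    return await asyncio.gather(*(embed_one(text) for text in texts))
```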
2026-01-08 14:25:40 -06:00
Zamil Majdy
429a074848 Merge branch 'dev' of github.com:Significant-Gravitas/AutoGPT into hackathon-copilot-search 2026-01-08 13:22:20 -06:00
Zamil Majdy
36fb1ea004 fix(platform): store submission validation and marketplace improvements (#11706)
## Summary

Major improvements to AutoGPT Platform store submission deletion,
creator detection, and marketplace functionality. This PR addresses
critical issues with submission management and significantly improves
performance.

### 🔧 **Store Submission Deletion Issues Fixed**

**Problems Solved**:
-  **Wrong deletion granularity**: Deleting entire `StoreListing` (all
versions) when users expected to delete individual submissions
-  **"Graph not found" errors**: Cascade deletion removing AgentGraphs
that were still referenced
-  **Multiple submissions deleted**: When removing one submission, all
submissions for that agent were removed
-  **Deletion of approved content**: Users could accidentally remove
live store content

**Solutions Implemented**:
-  **Granular deletion**: Now deletes individual `StoreListingVersion`
records instead of entire listings
-  **Protected approved content**: Prevents deletion of approved
submissions to keep store content safe
-  **Automatic cleanup**: Empty listings are automatically removed when
last version is deleted
-  **Simplified logic**: Reduced deletion function from 85 lines to 32
lines for better maintainability
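In rough pseudocode, the new deletion flow looks like this (Prisma-style calls and field names are illustrative assumptions, not the actual `store/db.py` implementation):

```python
async def delete_submission(db, version_id: str) -> None:
    version = await db.storelistingversion.find_unique(where={"id": version_id})
    if version is None:
        raise ValueError("Submission not found")
    if version.submissionStatus == "APPROVED":
        # Approved versions are live store content and stay protected.
        raise ValueError("Approved submissions cannot be deleted")
    await db.storelistingversion.delete(where={"id": version_id})
    # Automatic cleanup: drop the listing once its last version is gone.
    remaining = await db.storelistingversion.count(
        where={"storeListingId": version.storeListingId}
    )
    if remaining == 0:
        await db.storelisting.delete(where={"id": version.storeListingId})
```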

### 🔧 **Creator Detection Performance Issues Fixed**

**Problems Solved**:
-  **Inefficient API calls**: Fetching ALL user submissions just to
check if they own one specific agent
-  **Complex logic**: Convoluted creator detection requiring multiple
database queries
-  **Performance impact**: Especially bad for non-creators who would
never need this data

**Solutions Implemented**:
-  **Added `owner_user_id` field**: Direct ownership reference in
`LibraryAgent` model
-  **Simple ownership check**: `owner_user_id === user.id` instead of
complex submission fetching
-  **90%+ performance improvement**: Massive reduction in unnecessary
API calls for non-creators
-  **Optimized data fetching**: Only fetch submissions when user is
creator AND has marketplace listing

### 🔧 **Original Store Submission Validation Issues (BUILDER-59F)**
Fixes "Agent not found for this user. User ID: ..., Agent ID: , Version:
0" errors:

- **Backend validation**: Added Pydantic validation for `agent_id`
(min_length=1) and `agent_version` (>0)
- **Frontend validation**: Pre-submission validation with user-friendly
error messages
- **Agent selection flow**: Fixed `agentId` not being set from
`selectedAgentId`
- **State management**: Prevented state reset conflicts clearing
selected agent

### 🔧 **Marketplace Display Improvements**
Enhanced version history and changelog display:

- Updated title from "Changelog" to "Version history"
- Added "Last updated X ago" with proper relative time formatting  
- Display version numbers as "Version X.0" format
- Replaced all hardcoded values with dynamic API data
- Improved text sizes and layout structure

### 📁 **Files Changed**

**Backend Changes**:
- `backend/api/features/store/db.py` - Simplified deletion logic, added
approval protection
- `backend/api/features/store/model.py` - Added `listing_id` field,
Pydantic validation
- `backend/api/features/library/model.py` - Added `owner_user_id` field
for efficient creator detection
- All test files - Updated with new required fields

**Frontend Changes**:
- `useMarketplaceUpdate.ts` - Optimized creator detection logic 
- `MainDashboardPage.tsx` - Added `listing_id` mapping for proper type
safety
- `useAgentTableRow.ts` - Updated deletion logic to use
`store_listing_version_id`
- `usePublishAgentModal.ts` - Fixed state reset conflicts
- Marketplace components - Enhanced version history display

###  **Benefits**

**Performance**:
- 🚀 **90%+ reduction** in unnecessary API calls for creator detection
- 🚀 **Instant ownership checks** (no database queries needed)
- 🚀 **Optimized submissions fetching** (only when needed)

**User Experience**: 
-  **Granular submission control** (delete individual versions, not
entire listings)
-  **Protected approved content** (prevents accidental store content
removal)
-  **Better error prevention** (no more "Graph not found" errors)
-  **Clear validation messages** (user-friendly error feedback)

**Code Quality**:
-  **Simplified deletion logic** (85 lines → 32 lines)
-  **Better type safety** (proper `listing_id` field usage)  
-  **Cleaner creator detection** (explicit ownership vs inferred)
-  **Automatic cleanup** (empty listings removed automatically)

### 🧪 **Testing**
- [x] Backend validation rejects empty agent_id and zero agent_version
- [x] Frontend TypeScript compilation passes
- [x] Store submission works from both creator dashboard and "become a
creator" flows
- [x] Granular submission deletion works correctly
- [x] Approved submissions are protected from deletion
- [x] Creator detection is fast and accurate
- [x] Marketplace displays version history correctly

**Breaking Changes**: None - All changes are additive and backwards
compatible.

Fixes critical submission deletion issues, improves performance
significantly, and enhances user experience across the platform.

## Summary by CodeRabbit

* **New Features**
  * Agent ownership is now tracked and exposed across the platform.
* Store submissions and versions now include a required listing_id to
preserve listing linkage.

* **Bug Fixes**
* Prevent deletion of APPROVED submissions; remove empty listings after
deletions.
* Edits restricted to PENDING submissions with clearer invalid-operation
messages.

* **Improvements**
* Stronger publish validation and UX guards; deduplicated images and
modal open/reset refinements.
* Version history shows relative "Last updated" times and version
badges.

* **Tests**
* E2E tests updated to target pending-submission flows for edit/delete.


---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-08 19:11:38 +00:00
Abhimanyu Yadav
a81ac150da fix(frontend): add word wrapping to CodeRenderer and improve output actions visibility (#11724)
## Changes 🏗️
- Updated the `CodeRenderer` component to add `whitespace-pre-wrap` and
`break-words` CSS classes to the `<code>` element
- This enables proper wrapping of long code lines while preserving
whitespace formatting

Before


![image.png](https://app.graphite.com/user-attachments/assets/aca769cc-0f6f-4e25-8cdd-c491fcbf21bb.png)

After

![Screenshot 2026-01-08 at
3.02.53 PM.png](https://app.graphite.com/user-attachments/assets/99e23efa-be2a-441b-b0d6-50fa2a08cdb0.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified code with long lines wraps correctly
  - [x] Confirmed whitespace and indentation are preserved
  - [x] Tested code display in various viewport sizes

## Summary by CodeRabbit

* **Bug Fixes**
* Code blocks now preserve whitespace and wrap long lines for improved
readability.
* Output action controls are hidden when there is only a single output
item, reducing unnecessary UI elements.

2026-01-08 11:13:47 +00:00
Abhimanyu Yadav
49ee087496 feat(frontend): add new integration images for Webshare and WordPress (#11725)
### Changes 🏗️

Added two new integration icons to the frontend:
- `webshare_proxy.png` - Icon for WebShare Proxy integration
- `wordpress.png` - Icon for WordPress integration

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified both icons display correctly in the integrations section
  - [x] Confirmed icons render properly at different screen sizes
  - [x] Checked that the icons maintain quality when scaled

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
2026-01-08 11:13:34 +00:00
Ubbe
fc25e008b3 feat(frontend): update library agent cards to use DS (#11720)
## Changes 🏗️

<img width="700" height="838" alt="Screenshot 2026-01-07 at 16 11 04"
src="https://github.com/user-attachments/assets/0b38d2e1-d4a8-4036-862c-b35c82c496c2"
/>

- Update the agent library cards to new designs
- Update page to use Design System components
- Allow to edit/delete/duplicate agents on the library list page
- Add missing actions on library agent detail page

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above


## Summary by CodeRabbit

* **New Features**
* Marketplace info shown on agent cards and improved favoriting with
optimistic UI and feedback.
  * Delete agent and delete schedule flows with confirmation dialogs.

* **Refactor**
* New composable form system, modernized upload dialog, streamlined
search bar, and multiple library components converted to named exports
with layout tweaks.
  * New agent card menu and favorite button UI.

* **Chores**
  * Removed notification UI and dropped a drag-drop dependency.

* **Tests**
  * Increased timeouts and stabilized upload/pagination flows.

2026-01-08 18:28:27 +07:00
Ubbe
b0855e8cf2 feat(frontend): context menu right click new builder (#11703)
## Changes 🏗️

<img width="250" height="504" alt="Screenshot 2026-01-06 at 17 53 26"
src="https://github.com/user-attachments/assets/52013448-f49c-46b6-b86a-39f98270cbc3"
/>

<img width="300" height="544" alt="Screenshot 2026-01-06 at 17 53 29"
src="https://github.com/user-attachments/assets/e6334034-68e4-4346-9092-3774ab3e8445"
/>

On the **New Builder**:
- right-clicking a node now shows the context menu
- the same menu is used for right-click and for the `...` button

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



## Summary by CodeRabbit

* **New Features**
* Added a custom right-click context menu for nodes with Copy, Open
agent (when available), and Delete actions; browser default menu is
suppressed while preserving zoom/drag/wiring.
* Introduced reusable SecondaryMenu primitives for context and dropdown
menus.

* **Documentation**
* Added Storybook examples demonstrating the context menu and dropdown
menu usage.

* **Style**
* Updated menu styling and icons with improved consistency and dark-mode
support.

2026-01-08 17:35:49 +07:00
Abhimanyu Yadav
5e2146dd76 feat(frontend): add CustomSchemaField wrapper for dynamic form field routing
(#11722)

### Changes 🏗️

This PR introduces automatic UI schema generation for custom form
fields, eliminating manual field mapping.

#### 1. **generateUiSchemaForCustomFields Utility**
(`generate-ui-schema.ts`) - New File
   - Auto-generates `ui:field` settings for custom fields
   - Detects custom fields using `findCustomFieldId()` matcher
   - Handles nested objects and array items recursively
   - Merges with existing UI schema without overwriting

#### 2. **FormRenderer Integration** (`FormRenderer.tsx`)
   - Imports and uses `generateUiSchemaForCustomFields`
   - Creates merged UI schema with `useMemo`
   - Passes merged schema to Form component
   - Enables automatic custom field detection

#### 3. **Preprocessor Cleanup** (`input-schema-pre-processor.ts`)
   - Removed manual `$id` assignment for custom fields
   - Removed unused `findCustomFieldId` import
   - Simplified to focus only on type validation

### Why these changes?

- Custom fields now auto-detect without manual `ui:field` configuration
- Uses standard RJSF approach (UI schema) for field routing
- Centralized custom field detection logic improves maintainability

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify custom fields render correctly when present in schema
- [x] Verify standard fields continue to render with default SchemaField
- [x] Verify multiple instances of same custom field type have unique
IDs
  - [x] Test form submission with custom fields


## Summary by CodeRabbit

* **Bug Fixes**
* Improved custom field rendering in forms by optimizing the UI schema
generation process.

2026-01-08 08:47:52 +00:00
Abhimanyu Yadav
103a62c9da feat(frontend/builder): add filters to blocks menu (#11654)
### Changes 🏗️

This PR adds filtering functionality to the new blocks menu, allowing
users to filter search results by category and creator.

**New Components:**
- `BlockMenuFilters`: Main filter component displaying active filters
and filter chips
- `FilterSheet`: Slide-out panel for selecting filters with categories
and creators
- `BlockMenuSearchContent`: Refactored search results display component

**Features Added:**
- Filter by categories: Blocks, Integrations, Marketplace agents, My
agents
- Filter by creator: Shows all available creators from search results
- Category counts: Display number of results per category
- Interactive filter chips with animations (using framer-motion)
- Hover states showing result counts on filter chips
- "All filters" sheet with apply/clear functionality

**State Management:**
- Extended `blockMenuStore` with filter state management
- Added `filters`, `creators`, `creators_list`, and `categoryCounts` to
store
- Integrated filters with search API (`filter` and `by_creator`
parameters)

**Refactoring:**
- Moved search logic from `BlockMenuSearch` to `BlockMenuSearchContent`
- Renamed `useBlockMenuSearch` to `useBlockMenuSearchContent`
- Moved helper functions to `BlockMenuSearchContent` directory

**API Changes:**
- Updated `custom-mutator.ts` to properly handle query parameter
encoding


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for blocks and verify filter chips appear
- [x] Click "All filters" and verify filter sheet opens with categories
- [x] Select/deselect category filters and verify results update
accordingly
  - [x] Filter by creator and verify only blocks from that creator show
  - [x] Clear all filters and verify reset to default state
  - [x] Verify filter counts display correctly
  - [x] Test filter chip hover animations
2026-01-08 08:02:21 +00:00
Bentlybro
fc8434fb30 Merge branch 'master' into dev 2026-01-07 12:02:15 +00:00
Swifty
7f1245dc42 adding hybrid based searching 2026-01-07 12:45:55 +01:00
Ubbe
3ae08cd48e feat(frontend): use Google Drive Picker on new builder (#11702)
## Changes 🏗️

<img width="600" height="960" alt="Screenshot 2026-01-06 at 17 40 23"
src="https://github.com/user-attachments/assets/61085ec5-a367-45c7-acaa-e3fc0f0af647"
/>

- When using Google Blocks on the new builder, the Google Drive Picker is shown 🏁

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
  - [x] Run app locally and test the above


## Summary by CodeRabbit

* **New Features**
* Added a Google Drive picker field and widget for forms with an
always-visible remove button and improved single/multi selection
handling.

* **Bug Fixes**
* Better validation and normalization of selected files and consolidated
error messaging.
* Adjusted layout spacing around the picker and selected files for
clearer display.

2026-01-07 17:07:09 +07:00
Swifty
4db13837b9 Revert "extracted frontend changes out of the hackathon/copilot branch"
This reverts commit df87867625.
2026-01-07 09:27:25 +01:00
Swifty
df87867625 extracted frontend changes out of the hackathon/copilot branch 2026-01-07 09:25:10 +01:00
Abhimanyu Yadav
e503126170 feat(frontend): upgrade RJSF to v6 and implement new FormRenderer system
(#11677)

Fixes #11686

### Changes 🏗️

This PR upgrades the React JSON Schema Form (RJSF) library from v5 to v6
and introduces a complete rewrite of the form rendering system with
improved architecture and new features.

#### Core Library Updates
- Upgraded `@rjsf/core` from 5.24.13 to 6.1.2
- Upgraded `@rjsf/utils` from 5.24.13 to 6.1.2
- Added `@radix-ui/react-slider` 1.3.6 for new slider components

#### New Form Renderer Architecture
- **Base Templates**: Created modular base templates for arrays,
objects, and standard fields
- **AnyOf Support**: Implemented `AnyOfField` component with type
selector for union types
- **Array Fields**: New `ArrayFieldTemplate`, `ArrayFieldItemTemplate`,
and `ArraySchemaField` with context provider
- **Object Fields**: Enhanced `ObjectFieldTemplate` with better support
for additional properties via `WrapIfAdditionalTemplate`
- **Field Templates**: New `TitleField`, `DescriptionField`, and
`FieldTemplate` with improved styling
- **Custom Widgets**: Implemented TextWidget, SelectWidget,
CheckboxWidget, FileWidget, DateWidget, TimeWidget, and DateTimeWidget
- **Button Components**: Custom AddButton, RemoveButton, and CopyButton
components

#### Node Handle System Refactor
- Split `NodeHandle` into `InputNodeHandle` and `OutputNodeHandle` for
better separation of concerns
- Refactored handle ID generation logic in `helpers.ts` with new
`generateHandleIdFromTitleId` function
- Improved handle connection detection using edge store
- Added support for nested output handles (objects within outputs)

#### Edge Store Improvements
- Added `removeEdgesByHandlePrefix` method for bulk edge removal
- Improved `isInputConnected` with handle ID cleanup
- Optimized `updateEdgeBeads` to only update when changes occur
- Better edge management with `applyEdgeChanges`

#### Node Store Enhancements
- Added `syncHardcodedValuesWithHandleIds` method to maintain
consistency between form data and handle connections
- Better handling of additional properties in objects
- Improved path parsing with `parseHandleIdToPath` and
`ensurePathExists`

#### Draft Recovery Improvements
- Added diff calculation with `calculateDraftDiff` to show what changed
- New `formatDiffSummary` to display changes in a readable format (e.g.,
"+2/-1 blocks, +3 connections")
- Better visual feedback for draft changes

#### UI/UX Enhancements
- Fixed node container width to 350px for consistency
- Improved field error display with inline error messages
- Better spacing and styling throughout forms
- Enhanced tooltip support for field descriptions
- Improved array item controls with better button placement
- Context-aware field sizing (small/large)

#### Output Handler Updates
- Recursive rendering of nested output properties
- Better type display with color coding
- Improved handle connections for complex output schemas

#### Migration & Cleanup
- Updated `RunInputDialog` to use new FormRenderer
- Updated `FormCreator` to use new FormRenderer
- Moved OAuth callback types to separate file
- Updated import paths from `input-renderer` to `InputRenderer`
- Removed unused console.log statements
- Added `type="button"` to buttons to prevent form submission

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test form rendering with various field types (text, number,
boolean, arrays, objects)
  - [x] Test anyOf field type selector functionality
  - [x] Test array item addition/removal
  - [x] Test nested object fields with additional properties
  - [x] Test input/output node handle connections
  - [x] Test draft recovery with diff display
  - [x] Verify backward compatibility with existing agents
  - [x] Test field validation and error display
  - [x] Verify handle ID generation for complex schemas


## Summary by CodeRabbit

* **New Features**
* Improved form field rendering with enhanced support for optional
types, arrays, and nested objects.
* Enhanced draft recovery display showing detailed difference tracking
(added, removed, modified items).
  * Better OAuth popup callback handling with structured message types.

* **Bug Fixes**
  * Improved node handle ID normalization and synchronization.
  * Enhanced edge management for complex field changes.
  * Fixed styling consistency across form components.

* **Dependencies**
  * Updated React JSON Schema Form library to version 6.1.2.
  * Added Radix UI slider component support.

2026-01-07 05:06:34 +00:00
Zamil Majdy
7ee28197a3 docs(gitbook): sync documentation updates with dev branch (#11709)
## Summary

Sync GitBook documentation changes from the gitbook branch to dev. This
PR contains comprehensive documentation updates including new assets,
content restructuring, and infrastructure improvements.

## Changes 🏗️

### Documentation Updates
- **New GitBook Assets**: Added 9 new documentation images and
screenshots
  - Platform overview images (AGPT_Platform.png, Banner_image.png)
- Feature illustrations (Contribute.png, Integrations.png, hosted.jpg,
no-code.jpg, api-reference.jpg)
  - Screenshots and examples for better user guidance
- **Content Updates**: Enhanced README.md and SUMMARY.md with improved
structure and navigation
- **Visual Documentation**: Added comprehensive visual guides for
platform features

### Infrastructure 
- **Cloudflare Worker**: Added redirect handler for docs.agpt.co →
agpt.co/docs migration
  - Complete URL mapping for 71+ redirect patterns
  - Handles platform blocks restructuring and edge cases
  - Ready for deployment to Cloudflare Workers

### Merge Conflict Resolution
- **Clean merge from dev**: Successfully merged dev's major backend
restructuring (server/ → api/)
- **File resurrection fix**: Removed files that were accidentally
resurrected during merge conflict resolution
  - Cleaned up BuilderActionButton.tsx (deleted in dev)
  - Cleaned up old PreviewBanner.tsx location (moved in dev)
  - Synced pnpm-lock.yaml and layout.tsx with dev's current state

## Technical Details

This PR represents a careful synchronization that:
1. **Preserves all GitBook documentation work** while staying current
with dev
2. **Maintains clean diff**: Only documentation-related changes remain
after merge cleanup
3. **Resolves merge conflicts**: Handled major backend API restructuring
without breaking docs
4. **Infrastructure ready**: Cloudflare Worker ready for docs migration
deployment

## Files Changed
- `docs/`: GitBook documentation assets and content
- `autogpt_platform/cloudflare_worker.js`: Docs infrastructure for URL
redirects

## Validation
-  All TypeScript compilation errors resolved
-  Pre-commit hooks passing (Prettier, TypeCheck)
-  Only documentation changes remain in diff vs dev
-  Cloudflare Worker tested with comprehensive URL mapping
-  No non-documentation code changes after cleanup

## Deployment Notes
The Cloudflare Worker can be deployed via:
```bash
# Cloudflare Dashboard → Workers → Create → Paste code → Add route docs.agpt.co/*
```

This completes the GitBook synchronization and prepares for docs site
migration.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: bobby.gaffin <bobby.gaffin@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-07 02:11:11 +00:00
Nicholas Tindle
818de26d24 fix(platform/blocks): XMLParserBlock list object error (#11517)

### Need for these changes 💡

The `XMLParserBlock` was susceptible to crashing with an
`AttributeError: 'List' object has no attribute 'add_text'` when
processing malformed XML inputs, such as documents with multiple root
elements or stray text outside the root. This PR introduces robust
validation to prevent these crashes and provide clear, actionable error
messages to users.

### Changes 🏗️


- Added a `_validate_tokens` static method to `XMLParserBlock` to
perform pre-parsing validation on the token stream. This method ensures
the XML input has a single root element and no text content outside of
it.
- Modified the `XMLParserBlock.run` method to call `_validate_tokens`
immediately after tokenization and before passing the tokens to
`gravitasml.Parser`.
- Introduced a new test case, `test_rejects_text_outside_root`, in
`test_blocks_dos_vulnerability.py` to verify that the `XMLParserBlock`
correctly raises a `ValueError` when encountering XML with text outside
the root element.
- Imported `Token` for type hinting in `xml_parser.py`.
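The validation idea, sketched over a simplified stream of `(kind, value)` token pairs (the real `_validate_tokens` operates on gravitasml `Token` objects and is stricter):

```python
def validate_tokens(tokens) -> None:
    depth = 0
    roots = 0
    for kind, value in tokens:
        if kind == "start":
            if depth == 0:
                roots += 1
                if roots > 1:
                    raise ValueError("XML must have exactly one root element")
            depth += 1
        elif kind == "end":
            depth -= 1
            if depth < 0:
                raise ValueError(f"Unbalanced closing tag: </{value}>")
        elif kind == "text" and depth == 0 and value.strip():
            # This is the case that used to crash the parser downstream.
            raise ValueError("Text outside the root element is not allowed")
    if depth != 0:
        raise ValueError("Unbalanced tags: document ended inside an element")
```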

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confirm that the `test_rejects_text_outside_root` test passes,
asserting that `ValueError` is raised for invalid XML.
  - [x] Confirm that other relevant XML parsing tests continue to pass.


---
Linear Issue:
[OPEN-2835](https://linear.app/autogpt/issue/OPEN-2835/blockunknownerror-raised-by-xmlparserblock-with-message-list-object)



---

> [!NOTE]
> Strengthens XML parsing robustness and error clarity.
> 
> - Adds `_validate_tokens` in `XMLParserBlock` to ensure a single root
element, balanced tags, and no text outside the root before parsing
> - Updates `run` to `list(tokenize(...))` and validate tokens prior to
`Parser.parse()`; maintains 10MB input size guard
> - Introduces `test_rejects_text_outside_root` asserting a readable
`ValueError` for trailing text
> - Bumps `gravitasml` to `0.1.4` in `pyproject.toml` and lockfile


## Summary by CodeRabbit

* **Bug Fixes**
* Improved XML parsing validation with stricter enforcement of
single-root elements and prevention of trailing text, providing clearer
error messages for invalid XML input.

* **Tests**
* Added test coverage for XML parser validation of invalid root text
scenarios.

* **Chores**
  * Updated GravitasML dependency to latest compatible version.


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-06 20:02:53 +00:00
Nicholas Tindle
cb08def96c feat(blocks): Add Google Docs integration blocks (#11608)
Introduces a new module with blocks for Google Docs operations,
including reading, creating, appending, inserting, formatting,
exporting, sharing, and managing public access for Google Docs. Updates
dependencies in pyproject.toml and poetry.lock to support these
features.



https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b




### Changes 🏗️
Adds lots of basic docs tools + a dependency to use them with markdown

Block | Description | Key Features
-- | -- | --
Read & Create | |
GoogleDocsReadBlock | Read content from a Google Doc | Returns text content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID, locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat segments or detailed hierarchy; shows start/end indexes
Plain Text Operations | |
GoogleDocsAppendPlainTextBlock | Append plain text to end | No formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position | Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text | Case-sensitive option; no formatting on replacement
Markdown Operations | (ideal for LLM/AI output) |
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown | Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with Markdown | Text-based search; great for templates
Structural Operations | |
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0 = end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold, italic, underline, font size/color, etc.
Export & Sharing | |
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT, HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter, writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private, anyone with link (view/comment/edit)
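For a sense of what these blocks do under the hood, inserting plain text reduces to a single Docs API `batchUpdate` request (credentials, block plumbing, and error handling omitted; a sketch, not the block's actual code):

```python
from googleapiclient.discovery import build

def insert_text(creds, document_id: str, text: str, index: int) -> None:
    docs = build("docs", "v1", credentials=creds)
    docs.documents().batchUpdate(
        documentId=document_id,
        body={
            "requests": [
                # insertText is the kind of primitive a block like
                # GoogleDocsInsertPlainTextBlock would wrap.
                {"insertText": {"location": {"index": index}, "text": text}}
            ]
        },
    ).execute()
```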



### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works



## Summary by CodeRabbit

* **Chores**
  * Updated backend dependencies.


---

> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
> 
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-05 18:36:56 +00:00
Krzysztof Czerwinski
ac2daee5f8 feat(backend): Add GPT-5.2 and update default models (#11652)
### Changes 🏗️

- Add OpenAI `GPT-5.2` with metadata&cost
- Add const `DEFAULT_LLM_MODEL` (set to GPT-5.2) and use it instead of
hardcoded model across llm blocks and tests
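A minimal sketch of the pattern (the enum member and value for the new model are assumptions):

```python
from enum import Enum

class LlmModel(str, Enum):  # simplified stand-in for the real model enum
    GPT_5_1 = "gpt-5.1"
    GPT_5_2 = "gpt-5.2"  # newly added model

# One shared default replaces hardcoded model names across blocks and tests.
DEFAULT_LLM_MODEL = LlmModel.GPT_5_2
```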

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] GPT-5.2 is set as default and works on llm blocks
2026-01-05 16:13:35 +00:00
lif
266e0d79d4 fix(blocks): add YouTube Shorts URL support (#11659)
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.

## Changes
- Modified `_extract_video_id` method in `youtube.py` to handle Shorts
URL format
- Added test cases for YouTube Shorts URL extraction
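An illustrative regex-based extraction covering the Shorts format alongside the existing URL forms (the block's actual `_extract_video_id` may differ in detail):

```python
import re

_PATTERNS = [
    r"youtube\.com/watch\?v=([\w-]{11})",
    r"youtu\.be/([\w-]{11})",
    r"youtube\.com/shorts/([\w-]{11})",  # new: Shorts URLs
]

def extract_video_id(url: str) -> str | None:
    for pattern in _PATTERNS:
        if match := re.search(pattern, url):
            return match.group(1)
    return None

assert extract_video_id("https://youtube.com/shorts/dQw4w9WgXcQ") == "dQw4w9WgXcQ"
```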

## Related Issue
Fixes #11500

## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-05 16:11:45 +00:00
lif
01f443190e fix(frontend): allow empty values in number inputs and fix AnyOfField toggle (#11661)

## Summary

This PR fixes two related issues with number/integer inputs in the
frontend:

1. **HTMLType typo fix**: INTEGER input type was incorrectly mapped to
`htmlType: 'account'` (which is not a valid HTML input type) instead of
`htmlType: 'number'`.

2. **AnyOfField toggle fix**: When a user cleared a number input field,
the input would disappear because `useAnyOfField` checked for both
`null` AND `undefined` in `isEnabled`. This PR changes it to only check
for explicit `null` (set by toggle off), allowing `undefined` (empty
input) to keep the field visible.

### Root cause analysis

When a user cleared a number input:
1. `handleChange` returned `undefined` (because `v === "" ? undefined :
Number(v)`)
2. In `useAnyOfField`, `isEnabled = formData !== null && formData !==
undefined` became `false`
3. The input field disappeared

### Fix

Changed `useAnyOfField.tsx` line 67:
```diff
- const isEnabled = formData !== null && formData !== undefined;
+ const isEnabled = formData !== null;
```

This way:
- When toggle is OFF → `formData` is `null` → `isEnabled` is `false`
(input hidden) ✓
- When toggle is ON but input is cleared → `formData` is `undefined` →
`isEnabled` is `true` (input visible) ✓

## Test plan

- [x] Verified INTEGER inputs now render correctly with `type="number"`
- [x] Verified clearing a number input keeps the field visible
- [x] Verified toggling the nullable switch still works correctly

Fixes #11594

🤖 AI-assisted development disclaimer: This PR was developed with
assistance from Claude Code.

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2026-01-05 16:10:47 +00:00
Ubbe
bdba0033de refactor(frontend): move NodeInput files (#11695)
## Changes 🏗️

Move the `<NodeInput />` component next to the old builder code where it
is used.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run app locally and click around, E2E is fine
2026-01-05 10:29:12 +00:00
Abhimanyu Yadav
b87c64ce38 feat(frontend): Add delete key bindings to ReactFlow editor
(#11693)

Issues fixed by this PR
- https://github.com/Significant-Gravitas/AutoGPT/issues/11688
- https://github.com/Significant-Gravitas/AutoGPT/issues/11687

### **Changes 🏗️**

Added keyboard delete functionality to the ReactFlow editor by enabling
the `deleteKeyCode` prop with both "Backspace" and "Delete" keys. This
allows users to delete selected nodes and edges using standard keyboard
shortcuts, improving the editing experience.

**Modified:**

- `Flow.tsx`: Added `deleteKeyCode={["Backspace", "Delete"]}` prop to
the ReactFlow component to enable deletion of selected elements via
keyboard

### **Checklist 📋**

#### **For code changes:**

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select a node in the flow editor and press Delete key to confirm
it deletes
- [x] Select a node in the flow editor and press Backspace key to
confirm it deletes
    - [x] Verify deletion works for multiple selected elements
2026-01-05 10:28:57 +00:00
Ubbe
003affca43 refactor(frontend): fix new builder buttons (#11696)
## Changes 🏗️

<img width="800" height="964" alt="Screenshot 2026-01-05 at 15 26 21"
src="https://github.com/user-attachments/assets/f8c7fc47-894a-4db2-b2f1-62b4d70e8453"
/>

- Adjust the new builder to use the Design System components
- Re-structure imports to match formatting rules
- Small improvement on `use-get-flag`
- Move file which is the main hook

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check the new buttons look good
2026-01-05 09:09:47 +00:00
Abhimanyu Yadav
290d0d9a9b feat(frontend): add auto-save Draft Recovery feature with IndexedDB persistence
(#11658)

## Summary
Implements an auto-save draft recovery system that persists unsaved flow
builder state across browser sessions, tab closures, and refreshes. When
users return to a flow with unsaved changes, they can choose to restore
or discard the draft via an intuitive recovery popup.



https://github.com/user-attachments/assets/0f77173b-7834-48d2-b7aa-73c6cd2eaff6



## Changes 🏗️

### Core Features
- **Draft Recovery Popup** (`DraftRecoveryPopup.tsx`)
  - Displays amber-themed notification with unsaved changes metadata
  - Shows node count, edge count, and relative time since last save
  - Provides restore and discard actions with tooltips
  - Auto-dismisses on click outside or ESC key

- **Auto-Save System** (`useDraftManager.ts`)
  - Automatically saves draft state every 15 seconds
  - Saves on browser tab close/refresh via `beforeunload`
  - Tracks nodes, edges, graph schemas, node counter, and flow version
  - Smart dirty checking - only saves when actual changes detected
  - Cleans up expired drafts (24-hour TTL)

- **IndexedDB Persistence** (`db.ts`, `draft-service.ts`)
  - Uses Dexie library for reliable client-side storage
- Handles both existing flows (by flowID) and new flows (via temp
session IDs)
- Compares draft state with current state to determine if recovery
needed
  - Automatically clears drafts after successful save

### Integration Changes
- **Flow Editor** (`Flow.tsx`)
  - Integrated `DraftRecoveryPopup` component
  - Passes `isInitialLoadComplete` state for proper timing

- **useFlow Hook** (`useFlow.ts`)
  - Added `isInitialLoadComplete` state to track when flow is ready
  - Ensures draft check happens after initial graph load
  - Resets state on flow/version changes

- **useCopyPaste Hook** (`useCopyPaste.ts`)
  - Refactored to manage keyboard event listeners internally
  - Simplified integration by removing external event handler setup

- **useSaveGraph Hook** (`useSaveGraph.ts`)
  - Clears draft after successful save (both create and update)
  - Removes temp flow ID from session storage on first save

### Dependencies
- Added `dexie@4.2.1` - Modern IndexedDB wrapper for reliable
client-side storage

## Technical Details

**Auto-Save Flow:**
1. User makes changes to nodes/edges
2. Change triggers 15-second debounced save
3. Draft saved to IndexedDB with timestamp
4. On save, current state compared with last saved state
5. Only saves if meaningful changes detected

**Recovery Flow:**
1. User loads flow/refreshes page
2. After initial load completes, check for existing draft
3. Compare draft with current state
4. If different and non-empty, show recovery popup
5. User chooses to restore or discard
6. Draft cleared after either action

**Session Management:**
- Existing flows: Use actual flowID for draft key

### Test Plan 🧪

- [x] Create a new flow with 3+ blocks and connections, wait 15+
seconds, then refresh the page - verify recovery popup appears with
correct counts and restoring works
- [x] Create a flow with blocks, refresh, then click "Discard" button on
recovery popup - verify popup disappears and draft is deleted
- [x] Add blocks to a flow, save successfully - verify draft is cleared
from IndexedDB (check DevTools > Application > IndexedDB)
- [x] Make changes to an existing flow, refresh page - verify recovery
popup shows and restoring preserves all changes correctly
- [x] Verify empty flows (0 nodes) don't trigger recovery popup or save
drafts
2025-12-31 14:49:53 +00:00
Abhimanyu Yadav
fba61c72ed feat(frontend): fix duplicate publish button and improve BuilderActionButton styling
(#11669)

Fixes duplicate "Publish to Marketplace" buttons in the builder by
adding a `showTrigger` prop to control modal trigger visibility.

<img width="296" height="99" alt="Screenshot 2025-12-23 at 8 18 58 AM"
src="https://github.com/user-attachments/assets/d5dbfba8-e854-4c0c-a6b7-da47133ec815"
/>


### Changes 🏗️

**BuilderActionButton.tsx**

- Removed borders on hover and active states for a cleaner visual
appearance
- Added `hover:border-none` and `active:border-none` to maintain
consistent styling during interactions

**PublishToMarketplace.tsx**

- Pass `showTrigger={false}` to `PublishAgentModal` to hide the default
trigger button
- This prevents duplicate buttons when a custom trigger is already
rendered

**PublishAgentModal.tsx**

- Added `showTrigger` prop (defaults to `true`) to conditionally render
the modal trigger
- Allows parent components to control whether the built-in trigger
button should be displayed
- Maintains backward compatibility with existing usage

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify only one "Publish to Marketplace" button appears in the
builder
- [x] Confirm button hover/active states display correctly without
border artifacts
- [x] Verify modal can still be triggered programmatically without the
trigger button
2025-12-31 09:46:12 +00:00
Nicholas Tindle
79d45a15d0 feat(platform): Deduplicate insufficient funds Discord + email notifications (#11672)
Add Redis-based deduplication for insufficient funds notifications (both
Discord alerts and user emails) when users run out of credits. This
prevents spamming users and the PRODUCT Discord channel with repeated
alerts for the same user+agent combination.
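The core of the mechanism is an atomic `SET NX` with a TTL; a minimal sketch assuming `redis.asyncio` (key naming follows the prefix described below; details are illustrative):

```python
INSUFFICIENT_FUNDS_NOTIFIED_PREFIX = "insufficient_funds_notified:"
TTL_SECONDS = 30 * 24 * 60 * 60  # 30-day fallback cleanup

async def should_notify(redis, user_id: str, graph_id: str) -> bool:
    key = f"{INSUFFICIENT_FUNDS_NOTIFIED_PREFIX}{user_id}:{graph_id}"
    try:
        # SET ... NX succeeds only for the first writer, so concurrent
        # executions agree on exactly one alert per user+agent.
        return bool(await redis.set(key, "1", nx=True, ex=TTL_SECONDS))
    except Exception:
        # Graceful degradation: if Redis is unavailable, prefer a
        # duplicate alert over a missed one.
        return True
```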

### Changes 🏗️

- **Redis-based deduplication** (`backend/executor/manager.py`):
- Add `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` constant for Redis key prefix
- Add `INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS` (30 days) as fallback
cleanup
- Implement deduplication in `_handle_insufficient_funds_notif` using
Redis `SET NX`
- Skip both email (`ZERO_BALANCE`) and Discord notifications for
duplicate alerts per user+agent
- Add `clear_insufficient_funds_notifications(user_id)` function to
remove all notification flags for a user

- **Clear flags on credit top-up** (`backend/data/credit.py`):
- Call `clear_insufficient_funds_notifications` in `_top_up_credits`
after successful auto-charge
- Call `clear_insufficient_funds_notifications` in `fulfill_checkout`
after successful manual top-up
- This allows users to receive notifications again if they run out of
funds in the future

- **Comprehensive test coverage**
(`backend/executor/manager_insufficient_funds_test.py`):
  - Test first-time notification sends both email and Discord alert
  - Test duplicate notifications are skipped for same user+agent
  - Test different agents for same user get separate alerts
  - Test clearing notifications removes all keys for a user
  - Test handling when no notification keys exist
- Test notifications still sent when Redis fails (graceful degradation)
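
A sketch of the dedup gate referenced above; the real implementation is Python in `backend/executor/manager.py`, so this TypeScript version (node-redis v4 API; `shouldNotify` and the key shape are hypothetical) only illustrates the `SET NX` + TTL technique:

```typescript
import { createClient } from "redis";

const PREFIX = "insufficient_funds_notified:"; // assumed key shape
const TTL_SECONDS = 30 * 24 * 60 * 60; // 30-day fallback cleanup

type Redis = ReturnType<typeof createClient>;

// Returns true if this user+agent pair has not been alerted yet.
export async function shouldNotify(redis: Redis, userId: string, graphId: string) {
  try {
    // SET NX succeeds ("OK") only when the key did not exist: first alert wins.
    const result = await redis.set(`${PREFIX}${userId}:${graphId}`, "1", {
      NX: true,
      EX: TTL_SECONDS,
    });
    return result === "OK";
  } catch {
    return true; // graceful degradation: if Redis fails, still send the alert
  }
}
```

`SET NX` makes the check-and-set atomic, so two concurrent executions cannot both conclude they are the first to alert.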

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First insufficient funds alert sends both email and Discord
notification
  - [x] Duplicate alerts for same user+agent are skipped
  - [x] Different agents for same user each get their own notification
  - [x] Topping up credits clears notification flags
  - [x] Redis failure gracefully falls back to sending notifications
  - [x] 30-day TTL provides automatic cleanup as fallback
  - [x] Manually test this works with scheduled agents
 

---

> [!NOTE]
> Introduces Redis-backed deduplication for insufficient-funds alerts
and resets flags on successful credit additions.
> 
> - **Dedup insufficient-funds alerts** in `executor/manager.py` using
Redis `SET NX` with `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` and 30‑day TTL;
skips duplicate ZERO_BALANCE email + Discord alerts per
`user_id`+`graph_id`, with graceful fallback if Redis fails.
> - **Reset notification flags on credit increases** by adding
`clear_insufficient_funds_notifications(user_id)` and invoking it when
enabling/adding positive `GRANT`/`TOP_UP` transactions in
`data/credit.py`.
> - **Tests** (`executor/manager_insufficient_funds_test.py`):
first-time vs duplicate behavior, per-agent separation, clearing keys
(including no-key and Redis-error cases), and clearing on
`_add_transaction`/`_enable_transaction`.

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 18:10:30 +00:00
Ubbe
66f0d97ca2 fix(frontend): hide better chat link if not enabled (#11648)
## Changes 🏗️

- Make `<Navbar />` a client component so its rendering is more
predictable
- Remove the `useMemo()` for the chat link to prevent the flash...
- Make sure chat is added to the navbar links only after checking the
flag is enabled
- Improve logout with `useTransition`
- Simplify feature flags setup
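
The inline filter can be sketched roughly like this (types and names are illustrative, not the actual component code):

```typescript
// The Chat entry is only kept once the flag check has resolved to true,
// so it never flashes into view before the flag is known.
type NavLink = { name: string; href: string };

export function visibleLinks(loggedInLinks: NavLink[], chatEnabled: boolean): NavLink[] {
  return loggedInLinks.filter((link) => link.name !== "Chat" || chatEnabled);
}
```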

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

---

> [!NOTE]
> Ensures the `Chat` nav item is hidden when the feature flag is off
across desktop and mobile nav.
> 
> - Inline-filters `loggedInLinks` to skip `Chat` when `Flag.CHAT` is
false for both `NavbarLink` rendering and `MobileNavBar` menu items
> - Removes `useMemo`/`linksWithChat` helper; maps directly over
`loggedInLinks` and filters nulls in mobile, keeping icon mapping intact
> - Cleans up unused `useMemo` import
2025-12-30 13:21:53 +00:00
Ubbe
5894a8fcdf fix(frontend): use DS Dialog on old builder (#11643)
## Changes 🏗️

Use the Design System `<Dialog />` on the old builder, which supports
long content scrolling (the current one does not, causing issues in
graphs with many run inputs)...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above



## Summary by CodeRabbit

* **New Features**
* Added Enhanced Rendering toggle for improved output handling and
display (controlled via feature flag)

* **Improvements**
  * Refined dialog layouts and user interactions
* Enhanced copy-to-clipboard functionality with toast notifications upon
copying

2025-12-30 20:22:57 +07:00
Ubbe
dff8efa35d fix(frontend): favico colour override issue (#11681)
## Changes 🏗️

Sometimes, on Dev, when navigating between pages, the favicon colour
would revert from Green 🟢 (Dev) to Purple 🟣 (Default). That's because the
`/marketplace` page had custom code overriding it that I hadn't noticed
earlier...

I also made it use the Next.js metadata API, so it handles the favicon
correctly across navigations.
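
For illustration, declaring the favicon through the Metadata API in the root layout looks roughly like this; the per-environment icon path is an assumption, not the actual code:

```typescript
import type { Metadata } from "next";

// Next.js owns the <head> across navigations, so the icon can no longer be
// clobbered by page-level code.
export const metadata: Metadata = {
  icons: {
    icon: process.env.NODE_ENV === "development" ? "/favicon-dev.ico" : "/favicon.ico",
  },
};
```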

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above
2025-12-30 20:22:32 +07:00
seer-by-sentry[bot]
e26822998f fix: Handle missing or null 'items' key in DataForSEO Related Keywords block (#10989)
### Changes 🏗️

- Modified the DataForSEO Related Keywords block to handle cases where
the 'items' key is missing or has a null value in the API response.
- Ensures that the code gracefully handles these scenarios by defaulting
to an empty list, preventing potential errors. Fixes
[AUTOGPT-SERVER-66D](https://sentry.io/organizations/significant-gravitas/issues/6902944636/).
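
The actual fix is the Python expression `first_result.get("items") or []`; an equivalent defensive-parsing sketch in TypeScript:

```typescript
type LabsResult = { items?: unknown[] | null };

function extractItems(firstResult: unknown): unknown[] {
  // Non-dict results, and missing or null `items`, all collapse to [].
  if (typeof firstResult !== "object" || firstResult === null) return [];
  return (firstResult as LabsResult).items ?? [];
}
```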

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] The DataForSEO API now returns an empty list when there are no
results, preventing the code from attempting to iterate on a null value.

---

> [!NOTE]
> Strengthens parsing of DataForSEO Labs response to avoid errors when
`items` is missing or null.
> 
> - In `backend/blocks/dataforseo/related_keywords.py` `run()`, sets
`items = first_result.get("items") or []` when `first_result` is a
`dict`, otherwise `[]`, ensuring safe iteration
> - Prevents exceptions and yields empty results when no items are
returned

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-26 16:17:24 +00:00
Zamil Majdy
88731b1f76 feat(platform): marketplace update notifications with enhanced publishing workflow (#11630)
## Summary
This PR implements a comprehensive marketplace update notification
system that allows users to discover and update to newer agent versions,
along with enhanced publishing workflows and UI improvements.

<img width="1500" height="533" alt="image"
src="https://github.com/user-attachments/assets/ee331838-d712-4718-b231-1f9ec21bcd8e"
/>

<img width="600" height="610" alt="image"
src="https://github.com/user-attachments/assets/b881a7b8-91a5-460d-a159-f64765b339f1"
/>

<img width="1500" height="416" alt="image"
src="https://github.com/user-attachments/assets/a2d61904-2673-4e44-bcc5-c47d36af7a38"
/>

<img width="1500" height="1015" alt="image"
src="https://github.com/user-attachments/assets/2dd978c7-20cc-4230-977e-9c62157b9f23"
/>


## Core Features

### 🔔 Marketplace Update Notifications
- **Update detection**: Automatically detects when marketplace has newer
agent versions than user's local copy
- **Creator notifications**: Shows banners for creators with unpublished
changes ready to publish
- **Non-creator support**: Enables regular users to discover and update
to newer marketplace versions
- **Version comparison**: Intelligent logic comparing `graph_version` vs
marketplace listing versions
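
A rough sketch of that comparison; `agentGraphVersions` mirrors the field added to `StoreAgent` in this PR, while the function shape is illustrative:

```typescript
// An update exists when the marketplace listing carries a graph version
// newer than the user's local copy.
function hasMarketplaceUpdate(
  localGraphVersion: number,
  agentGraphVersions: number[],
): boolean {
  if (agentGraphVersions.length === 0) return false;
  return Math.max(...agentGraphVersions) > localGraphVersion;
}
```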

### 📋 Enhanced Publishing Workflow  
- **Builder integration**: Added "Publish to Marketplace" button
directly in the builder actions
- **Unified banner system**: Consistent `MarketplaceBanners` component
across library and marketplace pages
- **Streamlined UX**: Fixed layout issues, improved button placement and
styling
- **Modal improvements**: Fixed thumbnail loading race conditions and
infinite loop bugs

### 📚 Version History & Changelog
- **Inline version history**: Added version changelog directly to
marketplace agent pages
- **Version comparison**: Clear display of available versions with
current version highlighting
- **Update mechanism**: Direct updates using `graph_version` parameter
for accuracy

## Technical Implementation

### Backend Changes
- **Database schema**: Added `agentGraphVersions` and `agentGraphId`
fields to `StoreAgent` model
- **API enhancement**: Updated store endpoints to expose graph version
data for version comparison
- **Data migration**: Fixed agent version field naming from `version` to
`agentGraphVersions`
- **Model updates**: Enhanced `LibraryAgentUpdateRequest` with
`graph_version` field

### Frontend Architecture
- **`useMarketplaceUpdate` hook**: Centralized marketplace update
detection and creator identification
- **`MarketplaceBanners` component**: Unified banner system with proper
vertical layout and styling
- **`AgentVersionChangelog` component**: Version history display for
marketplace pages
- **`PublishToMarketplace` component**: Builder integration with modal
workflow

### Key Bug Fixes
- **Thumbnail loading**: Fixed race condition where images wouldn't load
on first modal open
- **Infinite loops**: Used refs to prevent circular dependencies in
`useThumbnailImages` hook
- **Layout issues**: Fixed banner placement, removed duplicate
breadcrumbs, corrected vertical layout
- **Field naming**: Fixed `agent_version` vs `version` field
inconsistencies across APIs

## Files Changed

### Backend
- `autogpt_platform/backend/backend/server/v2/store/` - Enhanced store
API with graph version data
- `autogpt_platform/backend/backend/server/v2/library/` - Updated
library API models
- `autogpt_platform/backend/migrations/` - Database migrations for
version fields
- `autogpt_platform/backend/schema.prisma` - Schema updates for graph
versions

### Frontend
- `src/app/(platform)/components/MarketplaceBanners/` - New unified
banner component
- `src/app/(platform)/library/agents/[id]/components/` - Enhanced
library views with banners
- `src/app/(platform)/build/components/BuilderActions/` - Added
marketplace publish button
- `src/app/(platform)/marketplace/components/AgentInfo/` - Added inline
version history
- `src/components/contextual/PublishAgentModal/` - Fixed thumbnail
loading and modal workflow

## User Experience Impact
- **Better discovery**: Users automatically notified of newer agent
versions
- **Streamlined publishing**: Direct publish access from builder
interface
- **Reduced friction**: Fixed UI bugs, improved loading states,
consistent design
- **Enhanced transparency**: Inline version history on marketplace pages
- **Creator workflow**: Better notifications for creators with
unpublished changes

## Testing
- [x] Update banners appear correctly when marketplace has newer versions
- [x] Creator banners show for users with unpublished changes
- [x] Version comparison logic works with graph_version vs marketplace versions
- [x] Publish button in builder opens modal correctly with pre-populated data
- [x] Thumbnail images load properly on first modal open without infinite loops
- [x] Database migrations completed successfully with version field fixes
- [x] All existing tests updated and passing with new schema changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-22 11:13:06 +00:00
Abhimanyu Yadav
c3e407ef09 feat(frontend): add hover state to edge delete button in FlowEditor (#11601)
<!-- Clearly explain the need for these changes: -->

The delete button on flow editor edges is always visible, which creates
visual clutter. This change makes the button only appear on hover,
improving the UI while keeping it accessible.

### Changes 🏗️

- Added hover state management using `useState` to track when the edge
delete button is hovered
- Applied opacity transition to the delete button (fades in on hover,
fades out when not hovered)
- Added `onMouseEnter` and `onMouseLeave` handlers to the button to
control hover state
- Used `cn` utility for conditional className management
- Button remains interactive even when `opacity-0` (still clickable for
better UX)
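
A minimal sketch of the hover-gated button (names are illustrative; the real code uses the `cn()` classname utility rather than a template string):

```tsx
import { useState } from "react";

export function EdgeDeleteButton({ onDelete }: { onDelete: () => void }) {
  const [hovered, setHovered] = useState(false);
  return (
    <button
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
      onClick={onDelete}
      // opacity-0 keeps the button mounted and clickable while invisible
      className={`transition-opacity ${hovered ? "opacity-100" : "opacity-0"}`}
    >
      ×
    </button>
  );
}
```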

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Hover over an edge in the flow editor and verify the delete button
fades in smoothly
- [x] Move mouse away from edge and verify the delete button fades out
smoothly
- [x] Click the delete button while hovered to verify it still removes
the edge connection
- [x] Test with multiple edges to ensure hover state is independent per
edge
2025-12-22 01:30:58 +00:00
Reinier van der Leer
08a60dcb9b refactor(frontend): Clean up React Query-related code (#11604)
- #11603

### Changes 🏗️

Frontend:
- Make `okData` infer the response data type instead of casting
- Generalize infinite query utilities from `SidebarRunsList/helpers.ts`
  - Move to `@/app/api/helpers` and use wherever possible
- Simplify/replace boilerplate checks and conditions with `okData` in
many places
- Add `useUserTimezone` hook to replace all the boilerplate timezone
queries
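
A sketch of what an inference-based `okData` (mentioned above) could look like; the response shape here is an assumption, not the project's actual API types:

```typescript
type ApiResponse<T> = { status: number; data: T };

// Only 2xx responses carry usable data; the generic keeps the payload type
// inferred at the call site instead of requiring a cast.
function okData<T>(response: ApiResponse<T> | undefined): T | undefined {
  return response && response.status >= 200 && response.status < 300
    ? response.data
    : undefined;
}
```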

Backend:
- Fix response type annotation of `GET
/api/store/graph/{store_listing_version_id}` endpoint
- Fix documentation and error behavior of `GET
/api/review/execution/{graph_exec_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI passes
  - [x] Clicking around the app manually -> no obvious issues
  - [x] Test Onboarding step 5 (run)
  - [x] Library runs list loads normally
2025-12-20 22:46:24 +01:00
Reinier van der Leer
de78d062a9 refactor(backend/api): Clean up API file structure (#11629)
We'll soon be needing a more feature-complete external API. To make way
for this, I'm moving some files around so:
- We can more easily create new versions of our external API
- The file structure of our internal API is more homogeneous

These changes are quite opinionated, but IMO in any case they're better
than the chaotic structure we have now.

### Changes 🏗️

- Move `backend/server` -> `backend/api`
- Move `backend/server/routers` + `backend/server/v2` ->
`backend/api/features`
  - Change absolute sibling imports to relative imports
- Move `backend/server/v2/AutoMod` -> `backend/executor/automod`
- Combine `backend/server/routers/analytics_*test.py` ->
`backend/api/features/analytics_test.py`
- Sort OpenAPI spec file

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI tests
  - [x] Clicking around in the app -> no obvious breakage
2025-12-20 20:33:10 +00:00
Zamil Majdy
217e3718d7 feat(platform): implement HITL UI redesign with improved review flow (#11529)
## Summary

• Redesigned Human-in-the-Loop review interface with yellow warning
scheme
• Implemented separate approved_data/rejected_data output pins for
human_in_the_loop block
• Added real-time execution status tracking to legacy flow for review
detection
• Fixed button loading states and improved UI consistency across flows
• Standardized Tailwind CSS usage removing custom values

<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/4ca6dd98-f3c4-41c0-a06b-92b3bca22490"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/0afae211-09f0-465e-b477-c3949f13c876"
/>
<img width="1500" alt="image"
src="https://github.com/user-attachments/assets/05d9d1ed-cd40-4c73-92b8-0dab21713ca9"
/>



## Changes Made

### Backend Changes
- Modified `human_in_the_loop.py` block to output separate
`approved_data` and `rejected_data` pins instead of a single
`reviewed_data` pin with a status field
- Updated block output schema to support better data flow in graph
builder
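
The block itself is Python; this sketch only shows the routing idea behind the split pins, with everything beyond the `approved_data`/`rejected_data` names being hypothetical:

```typescript
// Reviewed data flows to exactly one of the two outputs, so downstream
// nodes can connect to the approval path or the rejection path directly.
type ReviewOutput =
  | { pin: "approved_data"; data: unknown }
  | { pin: "rejected_data"; data: unknown };

function routeReview(approved: boolean, data: unknown): ReviewOutput {
  return approved
    ? { pin: "approved_data", data }
    : { pin: "rejected_data", data };
}
```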

### Frontend UI Changes
- Redesigned PendingReviewsList with yellow warning color scheme
(replacing orange)
- Fixed button loading states to show spinner only on clicked button 
- Improved FloatingReviewsPanel layout removing redundant headers
- Added real-time status tracking to legacy flow using useFlowRealtime
hook
- Fixed AgentActivityDropdown text overflow and layout issues
- Enhanced Safe Mode toggle positioning and toast timing
- Standardized all custom Tailwind values to use standard classes

### Design System Updates
- Added yellow design tokens (25, 150, 600) for warning states
- Unified REVIEW status handling across all components
- Improved component composition patterns

## Test Plan
- [x] Verify HITL blocks create separate output pins for
approved/rejected data
- [x] Test review flow works in both new and legacy flow builders
- [x] Confirm button loading states work correctly (only clicked button
shows spinner)
- [x] Validate AgentActivityDropdown properly displays review status
- [x] Check Safe Mode toggle positioning matches old flow
- [x] Ensure real-time status updates work in legacy flow
- [x] Verify yellow warning colors are consistent throughout

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-12-20 15:52:51 +00:00
Reinier van der Leer
3dbc03e488 feat(platform): OAuth API & Single Sign-On (#11617)
We want to provide Single Sign-On for multiple AutoGPT apps that use the
Platform as their backend.

### Changes 🏗️

Backend:
- DB + logic + API for OAuth flow (w/ tests)
  - DB schema additions for OAuth apps, codes, and tokens
  - Token creation/validation/management logic
- OAuth flow endpoints (app info, authorize, token exchange, introspect,
revoke; a generic token exchange is sketched after this list)
  - E2E OAuth API integration tests
- Other OAuth-related endpoints (upload app logo, list owned apps,
external `/me` endpoint)
    - App logo asset management
  - Adjust external API middleware to support auth with access token
  - Expired token clean-up job
    - Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional)
- `poetry run oauth-tool`: dev tool to test the OAuth flows and register
new OAuth apps
- `poetry run export-api-schema`: dev tool to quickly export the OpenAPI
schema (much quicker than spinning up the backend)
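
For orientation, the token-exchange step of a standard OAuth 2.0 authorization-code flow looks roughly like this; the URL and helper name are spec-style placeholders, not the platform's actual endpoints:

```typescript
async function exchangeCode(
  code: string,
  clientId: string,
  clientSecret: string,
  redirectUri: string,
) {
  // Swap the one-time authorization code for an access token.
  const res = await fetch("https://platform.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: clientSecret,
      redirect_uri: redirectUri,
    }),
  });
  if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
  return res.json(); // typically { access_token, token_type, expires_in, ... }
}
```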

Frontend:
- Frontend UI for app authorization (`/auth/authorize`)
  - Re-redirect after login/signup
- Frontend flow to batch-auth integrations on request of the client app
(`/auth/integrations/setup-wizard`)
  - Debug `CredentialInputs` component
- Add `/profile/oauth-apps` management page
- Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the
error isn't our fault
- Add `showTitle` flag to `CredentialsInput` to hide built-in title for
layout reasons

DX:
- Add [API
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md)
and [OAuth
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verify test coverage of OAuth API tests
  - Test `/auth/authorize` using `poetry run oauth-tool test-server`
    - [x] Works
    - [x] Looks okay
- Test `/auth/integrations/setup-wizard` using `poetry run oauth-tool
test-server`
    - [x] Works
    - [x] Looks okay
  - Test `/profile/oauth-apps` page
    - [x] All owned OAuth apps show up
    - [x] Enabling/disabling apps works
- [ ] ~~Uploading logos works~~ can only test this once deployed to dev

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-12-19 21:05:16 +01:00
Zamil Majdy
b76b5a37c5 fix(backend): Convert generic exceptions to appropriate typed exceptions (#11641)
## Summary
- Fix TimeoutError in AIShortformVideoCreatorBlock → BlockExecutionError
- Fix generic exceptions in SearchTheWebBlock → BlockExecutionError with
proper HTTP error handling
- Fix FirecrawlError 504 timeouts → BlockExecutionError with
service-specific messages
- Fix ReplicateBlock validation errors → BlockInputError for 422 status,
BlockExecutionError for others
- Add comprehensive HTTP error handling with
HTTPClientError/HTTPServerError classes
- Implement filename sanitization for "File name too long" errors
- Add proper User-Agent handling for Wikipedia API compliance
- Fix type conversion for string subclasses like ShortTextType
- Add support for moderation errors with proper context propagation
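
The real classes are Python (`BlockInputError`, `BlockExecutionError`); a TypeScript sketch of the categorization rule for the Replicate case:

```typescript
class BlockInputError extends Error {}
class BlockExecutionError extends Error {}

function toTypedReplicateError(status: number, message: string): Error {
  // Per this PR: 422 validation responses are the caller's fault (input
  // error); every other failure is treated as an execution error.
  return status === 422
    ? new BlockInputError(message)
    : new BlockExecutionError(message);
}
```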

## Test plan
- [x] All modified blocks now properly categorize errors instead of
raising BlockUnknownError
- [x] Type conversion tests pass for ShortTextType and other string
subclasses
- [x] Formatting and linting pass
- [x] Exception constructors include required block_name and block_id
parameters

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 13:19:58 +01:00
Bently
eed07b173a fix(frontend/builder): automatically frame agent when opening in builder (#11640)
## Summary
- Fixed auto-frame timing in new builder - now calls `fitView` after
nodes are rendered instead of on mount
- Replaced manual viewport calculation in legacy builder with React
Flow's `fitView` for consistency
- Both builders now properly center and frame all blocks when opening an
agent
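
A sketch of the timing fix, assuming React Flow's `useReactFlow` and `useNodesInitialized` hooks; the wrapper component itself is hypothetical:

```tsx
import { useEffect } from "react";
import { useNodesInitialized, useReactFlow } from "@xyflow/react";

export function AutoFrame() {
  const { fitView } = useReactFlow();
  const nodesInitialized = useNodesInitialized();

  useEffect(() => {
    // Frame only after nodes are measured and rendered, not on mount,
    // so every block ends up visible and centered.
    if (nodesInitialized) fitView({ padding: 0.2 });
  }, [nodesInitialized, fitView]);

  return null;
}
```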

## Test plan
- [x] Open an existing agent with multiple blocks in the new builder -
verify all blocks are visible and centered
- [x] Open an existing agent in the legacy builder - verify all blocks
are visible and centered
  - [x] Verify the manual "Frame" button still works correctly
2025-12-18 18:07:40 +00:00
Ubbe
c5e8b0b08f fix(frontend): modal hidden overflow (#11642)
## Changes 🏗️

### Before

<img width="300" height="528" alt="Screenshot 2025-12-18 at 19 07 37"
src="https://github.com/user-attachments/assets/6bedf02d-e1fd-4f87-956e-96fcd9996392"
/>


### After

<img width="300" height="576" alt="Screenshot 2025-12-18 at 19 02 12"
src="https://github.com/user-attachments/assets/f3175b94-447c-41b0-83cf-4334c02d1378"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check crop
2025-12-18 19:51:24 +01:00
Ubbe
cd3e35df9e fix(frontend): small library/mobile improvements (#11626)
## Changes 🏗️

Adds the following improvements:

### Prevent credential row overflowing on mobile 📱 

**Before**

<img width="300" height="469" alt="Screenshot 2025-12-15 at 16 42 05"
src="https://github.com/user-attachments/assets/0d27394c-cec9-45a4-be82-804827343212"
/>

**After**

<img width="300" height="446" alt="Screenshot 2025-12-15 at 16 44 22"
src="https://github.com/user-attachments/assets/0f19e220-500d-4488-955e-612d38704727"
/>

_Just hide the ****** on mobile..._

### Make touch targets bigger on 📱 on the mobile menu

**Before**

<img width="300" height="607" alt="Screenshot 2025-12-15 at 16 58 28"
src="https://github.com/user-attachments/assets/762b7d4e-5269-41a4-88d2-ea745c50324e"
/>

Touch targets were quite small on mobile, especially for people with big
fingers...

**After**

<img width="300" height="589" alt="Screenshot 2025-12-15 at 16 54 02"
src="https://github.com/user-attachments/assets/beede0c4-5439-47a9-8bec-143b44306c6b"
/>

### New `<OverflowText />` component

<img width="600" height="551" alt="Screenshot 2025-12-15 at 16 48 20"
src="https://github.com/user-attachments/assets/a969223f-dd6a-497a-857e-18483aea28d7"
/>

A component that renders text like `<Text />`, but automatically
truncates with `...` and shows the full content in a tooltip when it
detects there isn't room for the full text. Pretty useful for the type
of dashboard we are building, where titles or user-generated content
can be quite long, making the UI look whack.
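
A sketch of the overflow-detection idea (names hypothetical; a native `title` attribute stands in for the real tooltip component):

```tsx
import { useLayoutEffect, useRef, useState } from "react";

export function OverflowText({ children }: { children: string }) {
  const ref = useRef<HTMLSpanElement>(null);
  const [overflowing, setOverflowing] = useState(false);

  useLayoutEffect(() => {
    // scrollWidth exceeds clientWidth exactly when the text is clipped.
    const el = ref.current;
    if (el) setOverflowing(el.scrollWidth > el.clientWidth);
  }, [children]);

  return (
    <span
      ref={ref}
      className="block truncate"
      title={overflowing ? children : undefined}
    >
      {children}
    </span>
  );
}
```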

### Google Drive Picker

Only allow files to be removed when the picker is not in read-only mode.


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checkout branch locally
  - [x] Test the above
2025-12-18 19:33:30 +01:00
Ubbe
4a7bc006a8 hotfix(frontend): chat should be disabled by default (#11639)
### Changes 🏗️

Chat should be disabled by default; otherwise it flashes, and if
LaunchDarkly fails, leaving it enabled is dangerous.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally with LaunchDarkly disabled and test the above
2025-12-18 19:04:13 +01:00
dependabot[bot]
4c474417bc chore(frontend/deps-dev): bump import-in-the-middle from 1.14.2 to 2.0.0 in /autogpt_platform/frontend (#11357)
Bumps
[import-in-the-middle](https://github.com/nodejs/import-in-the-middle)
from 1.14.2 to 2.0.0.
Release highlights (condensed from import-in-the-middle's release notes):

- **v2.0.0** (2025-10-14) ⚠ breaking: all modules running in the loader thread were converted to ESM ([#210](https://redirect.github.com/nodejs/import-in-the-middle/issues/210)). The major bump is out of an abundance of caution; `import-in-the-middle/hook.mjs` and the exported `Hook` API are unchanged, but direct references to internal CJS files like `hook.js` no longer work.
- **v1.15.0** (2025-10-09): compatibility with specifier imports ([#211](https://redirect.github.com/nodejs/import-in-the-middle/issues/211)).
- **v1.14.4** (2025-09-25): revert "use `createRequire` to load `hook.js`" ([#208](https://redirect.github.com/nodejs/import-in-the-middle/issues/208)).
- **v1.14.3** (2025-09-24): use `createRequire` to load `hook.js` ([#205](https://redirect.github.com/nodejs/import-in-the-middle/issues/205)).

Note: v2.0.0 was pushed to npm by GitHub Actions, a new releaser for import-in-the-middle.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=import-in-the-middle&package-manager=npm_and_yarn&previous-version=1.14.2&new-version=2.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:58:27 +01:00
dependabot[bot]
99e2261254 chore(frontend/deps-dev): bump eslint-config-next from 15.5.2 to 15.5.6 in /autogpt_platform/frontend (#11355)
Bumps
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
from 15.5.2 to 15.5.6.
Release highlights (condensed from Next.js release notes; all three releases backport bug fixes and do **not** include pending canary changes):

- **v15.5.6**: Turbopack: don't define `process.cwd()` in node_modules (#83452).
- **v15.5.5**: split code-frame into a separate compiled package (#84238); deprecation warning for runtime config (#84650); `unstable_cache` performs blocking revalidation during ISR (#84716); `experimental.middlewareClientMaxBodySize` body-cloning limit (#84722); fix missing `next/link` types with `typedRoutes` (#84779).
- **v15.5.4**: fixes for `onRequestError` with otel enabled (#83343), devtools and error-overlay issues (#83571, #83721, #83981), and several Turbopack resolution and metadata issues (#83176, #82911, #83357, #82939, #84077).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=eslint-config-next&package-manager=npm_and_yarn&previous-version=15.5.2&new-version=15.5.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:55:57 +01:00
dependabot[bot]
cab498fa8c chore(deps): Bump actions/stale from 9 to 10 (#10871)
Bumps [actions/stale](https://github.com/actions/stale) from 9 to 10.
Release highlights (condensed from actions/stale's release notes and changelog):

- **v10.0.0** ⚠ breaking: runtime upgraded to Node 24 (actions/stale#1279); runners must be v2.327.1 or later. Also introduces a `sort-by` option (actions/stale#1254) plus dependency upgrades (undici, `@actions/cache`, form-data).
- **v9.1.0**: documentation updates, a workflow for publishing releases to the immutable action package, and routine dependency bumps.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/stale&package-manager=github_actions&previous-version=9&new-version=10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

Dependabot commands and options:

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)



> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Update the stale-issues workflow to use `actions/stale@v10` instead of
`v9`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
747d4ea73a. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-18 17:34:04 +00:00
Bently
22078671df feat(frontend): increase file upload size limit to 256MB (#11634)
- Updated Next.js configuration to set body size limits for server
actions and API routes.
- Enhanced error handling in the API client to provide user-friendly
messages for file size errors.
- Added user-friendly error messages for 413 Payload Too Large responses
in API error parsing.

These changes ensure that file uploads are consistent with backend
limits and improve user experience during uploads.
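For reference, a minimal sketch of the kind of Next.js configuration this describes (`bodySizeLimit` under `experimental.serverActions` is a real Next.js setting; the exact value and file layout here are assumed to mirror the PR):

```ts
// next.config.ts — minimal sketch (values assumed): raise the Server
// Actions body size limit so large uploads are not rejected by the
// framework before they reach the backend.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    serverActions: {
      bodySizeLimit: "256mb",
    },
  },
};

export default nextConfig;
```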

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Upload a file bigger than 10MB and it works
- [x] Upload a file bigger than 256MB and you see an official error
stating the max file size is 256MB
2025-12-18 17:29:20 +00:00
dependabot[bot]
0082a72657 chore(deps): Bump actions/labeler from 5 to 6 (#10868)
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.
Release notes (sourced from [actions/labeler's releases](https://github.com/actions/labeler/releases)):

## v6.0.0

### What's Changed

- Add workflow file for publishing releases to immutable action package by [@jcambass](https://github.com/jcambass) in [actions/labeler#802](https://redirect.github.com/actions/labeler/pull/802)

### Breaking Changes

- Upgrade Node.js version to 24 in action and dependencies by [@salmanmkc](https://github.com/salmanmkc) in [actions/labeler#891](https://redirect.github.com/actions/labeler/pull/891). Make sure your runner is on version v2.327.1 or later to ensure compatibility with this release. [Release Notes](https://github.com/actions/runner/releases/tag/v2.327.1)

### Dependency Upgrades

- Upgrade eslint-config-prettier from 9.0.0 to 9.1.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#711](https://redirect.github.com/actions/labeler/pull/711)
- Upgrade eslint from 8.52.0 to 8.55.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#720](https://redirect.github.com/actions/labeler/pull/720)
- Upgrade `@types/jest` from 29.5.6 to 29.5.11 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#719](https://redirect.github.com/actions/labeler/pull/719)
- Upgrade `@types/js-yaml` from 4.0.8 to 4.0.9 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#718](https://redirect.github.com/actions/labeler/pull/718)
- Upgrade `@typescript-eslint/parser` from 6.9.0 to 6.14.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#717](https://redirect.github.com/actions/labeler/pull/717)
- Upgrade prettier from 3.0.3 to 3.1.1 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#726](https://redirect.github.com/actions/labeler/pull/726)
- Upgrade eslint from 8.55.0 to 8.56.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#725](https://redirect.github.com/actions/labeler/pull/725)
- Upgrade `@typescript-eslint/parser` from 6.14.0 to 6.19.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#745](https://redirect.github.com/actions/labeler/pull/745)
- Upgrade eslint-plugin-jest from 27.4.3 to 27.6.3 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#744](https://redirect.github.com/actions/labeler/pull/744)
- Upgrade `@typescript-eslint/eslint-plugin` from 6.9.0 to 6.20.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#750](https://redirect.github.com/actions/labeler/pull/750)
- Upgrade prettier from 3.1.1 to 3.2.5 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#752](https://redirect.github.com/actions/labeler/pull/752)
- Upgrade undici from 5.26.5 to 5.28.3 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#757](https://redirect.github.com/actions/labeler/pull/757)
- Upgrade braces from 3.0.2 to 3.0.3 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#789](https://redirect.github.com/actions/labeler/pull/789)
- Upgrade minimatch from 9.0.3 to 10.0.1 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#805](https://redirect.github.com/actions/labeler/pull/805)
- Upgrade `@actions/core` from 1.10.1 to 1.11.1 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#811](https://redirect.github.com/actions/labeler/pull/811)
- Upgrade typescript from 5.4.3 to 5.7.2 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#819](https://redirect.github.com/actions/labeler/pull/819)
- Upgrade `@typescript-eslint/parser` from 7.3.1 to 8.17.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#824](https://redirect.github.com/actions/labeler/pull/824)
- Upgrade prettier from 3.2.5 to 3.4.2 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#825](https://redirect.github.com/actions/labeler/pull/825)
- Upgrade `@types/jest` from 29.5.12 to 29.5.14 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#827](https://redirect.github.com/actions/labeler/pull/827)
- Upgrade eslint-plugin-jest from 27.9.0 to 28.9.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#832](https://redirect.github.com/actions/labeler/pull/832)
- Upgrade ts-jest from 29.1.2 to 29.2.5 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#831](https://redirect.github.com/actions/labeler/pull/831)
- Upgrade `@vercel/ncc` from 0.38.1 to 0.38.3 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#830](https://redirect.github.com/actions/labeler/pull/830)
- Upgrade typescript from 5.7.2 to 5.7.3 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#835](https://redirect.github.com/actions/labeler/pull/835)
- Upgrade eslint-plugin-jest from 28.9.0 to 28.11.0 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#839](https://redirect.github.com/actions/labeler/pull/839)
- Upgrade undici from 5.28.4 to 5.28.5 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#842](https://redirect.github.com/actions/labeler/pull/842)
- Upgrade `@octokit/request-error` from 5.0.1 to 5.1.1 by [@dependabot](https://github.com/dependabot)[bot] in [actions/labeler#846](https://redirect.github.com/actions/labeler/pull/846)

### Documentation changes

- Add note regarding `pull_request_target` to README.md by [@silverwind](https://github.com/silverwind) in [actions/labeler#669](https://redirect.github.com/actions/labeler/pull/669)
- Update readme with additional examples and important note about `pull_request_target` event by [@IvanZosimov](https://github.com/IvanZosimov) in [actions/labeler#721](https://redirect.github.com/actions/labeler/pull/721)
- Document update - permission section by [@harithavattikuti](https://github.com/harithavattikuti) in [actions/labeler#840](https://redirect.github.com/actions/labeler/pull/840)
- Improvement in documentation for pull_request_target event usage in README by [@suyashgaonkar](https://github.com/suyashgaonkar) in [actions/labeler#871](https://redirect.github.com/actions/labeler/pull/871)
- Fix broken links in documentation by [@suyashgaonkar](https://github.com/suyashgaonkar) in [actions/labeler#822](https://redirect.github.com/actions/labeler/pull/822)

### New Contributors

- [@silverwind](https://github.com/silverwind) made their first contribution in [actions/labeler#669](https://redirect.github.com/actions/labeler/pull/669)
- [@Jcambass](https://github.com/Jcambass) made their first contribution in [actions/labeler#802](https://redirect.github.com/actions/labeler/pull/802)
- [@suyashgaonkar](https://github.com/suyashgaonkar) made their first contribution in [actions/labeler#822](https://redirect.github.com/actions/labeler/pull/822)
- [@HarithaVattikuti](https://github.com/HarithaVattikuti) made their first contribution in [actions/labeler#840](https://redirect.github.com/actions/labeler/pull/840)
- [@salmanmkc](https://github.com/salmanmkc) made their first contribution in [actions/labeler#891](https://redirect.github.com/actions/labeler/pull/891)

... (truncated)

Commits:
- [`634933e`](634933edcd) publish-action upgrade to 0.4.0 from 0.2.2 ([#901](https://redirect.github.com/actions/labeler/issues/901))
- [`f1a63e8`](f1a63e87db) Update Node.js version to 24 in action and dependencies ([#891](https://redirect.github.com/actions/labeler/issues/891))
- [`b0a1180`](b0a1180683) Bump `@octokit/request-error` from 5.0.1 to 5.1.1 ([#846](https://redirect.github.com/actions/labeler/issues/846))
- [`110d441`](110d44140c) Update README.md ([#871](https://redirect.github.com/actions/labeler/issues/871))
- [`bee50fe`](bee50fefe1) Bump undici from 5.28.4 to 5.28.5 ([#842](https://redirect.github.com/actions/labeler/issues/842))
- [`6463cdb`](6463cdb00e) Bump eslint-plugin-jest from 28.9.0 to 28.11.0 ([#839](https://redirect.github.com/actions/labeler/issues/839))
- [`c209686`](c209686724) Bump typescript from 5.7.2 to 5.7.3 ([#835](https://redirect.github.com/actions/labeler/issues/835))
- [`5184940`](5184940b54) Bump `@vercel/ncc` from 0.38.1 to 0.38.3 ([#830](https://redirect.github.com/actions/labeler/issues/830))
- [`3629d55`](3629d5568b) Document update - permission section ([#840](https://redirect.github.com/actions/labeler/issues/840))
- [`d24f7f3`](d24f7f3731) Bump ts-jest from 29.1.2 to 29.2.5 ([#831](https://redirect.github.com/actions/labeler/issues/831))
- Additional commits viewable in [compare view](https://github.com/actions/labeler/compare/v5...v6)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/labeler&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---


> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 17:16:22 +00:00
Ubbe
9a1d940677 fix(frontend): onboarding run card (#11636)
## Changes 🏗️

### Before

<img width="300" height="730" alt="Screenshot 2025-12-18 at 17 16 57"
src="https://github.com/user-attachments/assets/f57efc83-aa55-4371-96af-e294cab9b03a"
/>

- extra label
- overflow

### After

<img width="300" height="642" alt="Screenshot 2025-12-18 at 17 41 53"
src="https://github.com/user-attachments/assets/50721293-c67c-438a-9c9d-ff7ffae11d14"
/>

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally
  - [x] Test the above
2025-12-18 18:08:19 +01:00
Abhimanyu Yadav
e640d36265 feat(frontend): Add special handling for AGENT block type in form and output handlers (#11595)
<!-- Clearly explain the need for these changes: -->

Agent blocks require different handling compared to standard blocks,
particularly for:
- Handle ID generation (using direct keys instead of generated IDs)
- Form data storage structure (nested under `inputs` key)
- Field ID parsing (filtering out schema path prefixes)

This PR implements special handling for `BlockUIType.AGENT` throughout
the form rendering and output handling components to ensure agents work
correctly in the flow editor.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **CustomNode.tsx**: Pass `uiType` prop to `OutputHandler` component
- **FormCreator.tsx**: 
- Store agent form data in `hardcodedValues.inputs` instead of directly
in `hardcodedValues`
- Extract initial values from `hardcodedValues.inputs` for agent blocks
- **OutputHandler.tsx**: 
  - Accept `uiType` prop
- Use direct key as handle ID for agents instead of
`generateHandleId(key)`
- **useMarketplaceAgentsContent.ts**: 
- Fetch full agent details using `getV2GetLibraryAgent` before adding to
builder
- Ensures agent schemas are properly populated (fixes issue where
marketplace endpoint returns empty schemas)
- **AnyOfField.tsx**: Generate handle IDs for agents by filtering out
"root" and "properties" from the schema path
- **FieldTemplate.tsx**: Apply the same handle ID generation logic for
agent fields (see the sketch after this list)
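A minimal sketch of the handle-ID special-casing described above (shapes and helper signatures are assumptions; `generateHandleId` stands in for the existing helper):

```ts
// Minimal sketch (assumed shapes): agent blocks use the raw input key as
// the handle ID, while other blocks go through generateHandleId(). For
// agent fields derived from a schema path, the "root" and "properties"
// segments are filtered out first.
declare function generateHandleId(key: string): string; // existing helper (assumed signature)

export function getHandleId(
  uiType: string, // e.g. BlockUIType.AGENT
  schemaPath: string[], // e.g. ["root", "properties", "query"]
): string {
  const key = schemaPath
    .filter((segment) => segment !== "root" && segment !== "properties")
    .join("_");
  // Agents: the direct key is the handle ID; other blocks: generated ID.
  return uiType === "AGENT" ? key : generateHandleId(key);
}
```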

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add an agent block from marketplace and verify it renders
correctly
- [x] Connect inputs/outputs to/from an agent block and verify
connections work
- [x] Fill in form fields for an agent block and verify data persists
correctly
  - [x] Verify agent blocks work in both new and existing flows
- [x] Test that non-agent blocks still work as before (regression test)
2025-12-18 03:42:55 +00:00
Zamil Majdy
cc9179178f feat(block): Human in The Loop Block restructure (#11627)
## Summary

This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.

## Changes

### Backend Refactoring

#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match new schema

#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases

#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity

## Why These Changes?

- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs

## Testing

- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios

## Related

This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
2025-12-16 12:14:14 +00:00
Ubbe
e8d37ab116 feat(frontend): add nice scrollable tabs on Selected Run view (#11596)
## Changes 🏗️


https://github.com/user-attachments/assets/7e49ed5b-c818-4aa3-b5d6-4fa86fada7ee

When the content of Summary + Outputs + Inputs is long enough, it will
show in this new `<ScrollableTabs />` component, which auto-scrolls the
content as you click on a tab.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Check the new page with scrollable tabs
2025-12-15 16:57:36 +07:00
Ubbe
7f7ef6a271 feat(frontend): improve agent inputs read-only (#11621)
## Changes 🏗️

The main goal of this PR is to improve how we display inputs used for a
given task.

Agent inputs can be of many types (text, long text, date, select, file,
etc.). Until now, we have tried to display them as text, which has not
always worked. Given we already have `<RunAgentInputs />`, which uses
form elements to display the inputs ( _prefilled with data_ ), most of
the time it will look better and less buggy than text.

### Before

<img width="800" height="614" alt="Screenshot 2025-12-14 at 17 45 44"
src="https://github.com/user-attachments/assets/3d851adf-9638-46c1-adfa-b5e68dc78bb0"
/>

### After

<img width="800" height="708" alt="Screenshot 2025-12-14 at 17 45 21"
src="https://github.com/user-attachments/assets/367f32b4-2c30-4368-8d63-4cad06e32437"
/>

### Other improvements

- 🗑️  Removed `<EditInputsModal />`
- it is not used, given the API does not support editing inputs for a
schedule yet
- Made `<InformationTooltip />` icon size customisable    

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
- [x] Check the new view tasks use the form elements instead of text to
display inputs
2025-12-15 00:11:27 +07:00
Ubbe
aefac541d9 fix(frontend): force light mode for now (#11619)
## Changes 🏗️

We have the setup for light/dark mode support (Tailwind + `next-themes`),
but not yet the contributor capacity to make the app dark-mode ready.
First, we need to make it look good in light mode 😆

This disables `dark:` mode classes in the code, to prevent the app from
looking broken when the user views it in a browser with a dark-mode
preference:

### Before these changes

<img width="800" height="739" alt="Screenshot 2025-12-14 at 17 09 25"
src="https://github.com/user-attachments/assets/76333e03-930a-40b6-b91e-47ee01bf2c00"
/>

### After

<img width="800" height="722" alt="Screenshot 2025-12-14 at 16 55 46"
src="https://github.com/user-attachments/assets/34d85359-c68f-474c-8c66-2bebf28f923e"
/>

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app on a browser with dark mode preference
  - [x] It still looks in light mode without broken styles
2025-12-15 00:10:36 +07:00
Krzysztof Czerwinski
ff5c8f324b Merge branch 'master' into dev 2025-12-12 22:26:39 +09:00
Krzysztof Czerwinski
f121a22544 hotfix: update next (#11612)
Update next to `15.4.10`
2025-12-12 13:42:36 +01:00
Zamil Majdy
71157bddd7 feat(backend): add agent mode support to SmartDecisionMakerBlock with autonomous tool execution loops (#11547)
## Summary

<img width="2072" height="1836" alt="image"
src="https://github.com/user-attachments/assets/9d231a77-6309-46b9-bc11-befb5d8e9fcc"
/>

**🚀 Major Feature: Agent Mode Support**

Adds autonomous agent mode to SmartDecisionMakerBlock, enabling it to
execute tools directly in loops until tasks are completed, rather than
just yielding tool calls for external execution.

##  **Key New Features**

### 🤖 **Agent Mode with Tool Execution Loops**
- **New `agent_mode_max_iterations` parameter** controls execution
behavior:
  - `0` = Traditional mode (single LLM call, yield tool calls)
  - `1+` = Agent mode with iteration limit
  - `-1` = Infinite agent mode (loop until finished)

### 🔄 **Autonomous Tool Execution**  
- **Direct tool execution** instead of yielding for external handling
- **Multi-iteration loops** with conversation state management
- **Automatic completion detection** when LLM stops making tool calls
- **Iteration limit handling** with graceful completion messages

### 🏗️ **Proper Database Operations**
- **Replace manual execution ID generation** with proper
`upsert_execution_input`/`upsert_execution_output`
- **Real NodeExecutionEntry objects** from database results
- **Proper execution status management**: QUEUED → RUNNING →
COMPLETED/FAILED

### 🔧 **Enhanced Type Safety**
- **Pydantic models** replace TypedDict: `ToolInfo` and
`ExecutionParams`
- **Runtime validation** with better error messages
- **Improved developer experience** with IDE support

## 🔧 **Technical Implementation**

### Agent Mode Flow:
```python
# Agent mode enabled with iterations
if input_data.agent_mode_max_iterations != 0:
    async for result in self._execute_tools_agent_mode(...):
        yield result  # "conversations", "finished"
    return

# Traditional mode (existing behavior)  
# Single LLM call + yield tool calls for external execution
```

### Tool Execution with Database Operations:
```python
# Before: Manual execution IDs
tool_exec_id = f"{node_exec_id}_tool_{sink_node_id}_{len(input_data)}"

# After: Proper database operations
node_exec_result, final_input_data = await db_client.upsert_execution_input(
    node_id=sink_node_id,
    graph_exec_id=execution_params.graph_exec_id,
    input_name=input_name, 
    input_data=input_value,
)
```

### Type Safety with Pydantic:
```python
# Before: Dict access prone to errors
execution_params["user_id"]  

# After: Validated model access
execution_params.user_id  # Runtime validation + IDE support
```

## 🧪 **Comprehensive Test Coverage**

- **Agent mode execution tests** with multi-iteration scenarios
- **Database operation verification** 
- **Type safety validation**
- **Backward compatibility** for traditional mode
- **Enhanced dynamic fields tests**

## 📊 **Usage Examples**

### Traditional Mode (Existing Behavior):
```python
SmartDecisionMakerBlock.Input(
    prompt="Search for keywords",
    agent_mode_max_iterations=0  # Default
)
# → Yields tool calls for external execution
```

### Agent Mode (New Feature):
```python  
SmartDecisionMakerBlock.Input(
    prompt="Complete this task using available tools",
    agent_mode_max_iterations=5  # Max 5 iterations
)
# → Executes tools directly until task completion or iteration limit
```

### Infinite Agent Mode:
```python
SmartDecisionMakerBlock.Input(
    prompt="Analyze and process this data thoroughly", 
    agent_mode_max_iterations=-1  # No limit, run until finished
)
# → Executes tools autonomously until LLM indicates completion
```

##  **Backward Compatibility**

- **Zero breaking changes** to existing functionality
- **Traditional mode remains default** (`agent_mode_max_iterations=0`)
- **All existing tests pass**
- **Same API for tool definitions and execution**

This transforms the SmartDecisionMakerBlock from a simple tool call
generator into a powerful autonomous agent capable of complex multi-step
task execution! 🎯

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 09:58:06 +00:00
Bentlybro
152e747ea6 Merge branch 'master' into dev 2025-12-11 14:40:01 +00:00
Reinier van der Leer
4d4741d558 fix(frontend/library): Transition from empty tasks view on task init (#11600)
- Resolves #11599

### Changes 🏗️

- Manually update item counts when initiating a task from `EmptyTasks`
view
- Other improvements made while debugging

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `NewAgentLibraryView` transitions to full layout when a first task
is created
- [x] `NewAgentLibraryView` transitions to full layout when a first
trigger is set up
2025-12-11 11:13:53 +00:00
Krzysztof Czerwinski
bd37fe946d feat(platform): Builder search history (#11457)
Preserve user searches in the new builder and cache search results for
better efficiency.
Searches are saved, so the user can see their previous searches.

### Changes 🏗️

- Add `BuilderSearch` column & migration to save user searches (with all
filters)
- Builder `db.py` now caches all search results using `@cached` and
returns paginated results, so subsequent pages are returned much quicker
- Score and sort results
- Update models & routes
- Update frontend so it works properly with the modified endpoints
- Frontend: store `searchId` and use it for subsequent searches, so we
don't save partial searches (e.g. "b", "bl", ..., "block"); the search id
is reset when the user clears the search field (see the sketch after this
list)
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)
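A minimal sketch of the search-session behaviour described above (all names here are assumptions, not the actual implementation):

```ts
// Minimal sketch (assumed API): reuse one searchId while the user types,
// so partial queries update the same saved search; clearing the field
// ends the session and the next query starts a fresh saved search.
declare function runSearch(query: string, searchId: string): Promise<void>; // assumed backend call

let searchId: string | null = null;

export function onQueryChange(query: string): void {
  if (query.trim() === "") {
    searchId = null; // cleared: the next search gets a new id
    return;
  }
  searchId ??= crypto.randomUUID(); // reuse the session's id while typing
  void runSearch(query, searchId);
}
```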

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search works and is cached
  - [x] Search sorts results
  - [x] Searches are preserved properly

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-10 17:32:17 +00:00
Nicholas Tindle
7ff282c908 fix(frontend): Disable single dollar sign LaTeX mode in markdown rend… (#11598)
Single dollar signs ($10, $variable) are commonly used in content and
were being incorrectly interpreted as inline LaTeX math delimiters. This
change disables that behavior while keeping double dollar sign ($$...$$)
math blocks working.
## Changes 🏗️

- Configure the `remarkMath` plugin with `singleDollarTextMath: false` in
`MarkdownRenderer.tsx` (a minimal sketch follows this list)
- Double dollar sign display math (`$$...$$`) continues to work as
expected
- Single dollar signs are no longer interpreted as inline math delimiters
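A minimal sketch of the configuration (assuming a typical `react-markdown` + `remark-math` + `rehype-katex` setup; the actual component may differ):

```tsx
// Minimal sketch (assumed setup): pass options to remark-math so only
// $$...$$ is treated as math; single $ stays plain text. KaTeX CSS is
// assumed to be loaded elsewhere.
import ReactMarkdown from "react-markdown";
import remarkMath from "remark-math";
import rehypeKatex from "rehype-katex";

export function MarkdownRenderer({ content }: { content: string }) {
  return (
    <ReactMarkdown
      remarkPlugins={[[remarkMath, { singleDollarTextMath: false }]]}
      rehypePlugins={[rehypeKatex]}
    >
      {content}
    </ReactMarkdown>
  );
}
```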
## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify content with dollar amounts (e.g., "$100") renders as
plain text
  - [x] Verify double dollar sign math blocks (`$$x^2$$`) still render as
LaTeX
  - [x] Verify other markdown features (code blocks, tables, links) still
work correctly

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-10 17:10:52 +00:00
Reinier van der Leer
117bb05438 fix(frontend/library): Fix trigger UX flows in v3 library (#11589)
- Resolves #11586
- Follow-up to #11580

### Changes 🏗️

- Fix logic to include manual triggers as a possibility
- Fix input render logic to use trigger setup schema if applicable
- Fix rendering payload input for externally triggered runs
- Amend `RunAgentModal` to load preset inputs+credentials if selected
- Amend `SelectedTemplateView` to use modified input for run (if
applicable)
- Hide non-applicable buttons in `SelectedRunView` for externally
triggered runs
- Implement auto-navigation to `SelectedTriggerView` on trigger setup

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can set up manual triggers
    - [x] Navigates to trigger view after setup
  - [x] Can set up automatic triggers
  - [x] Can create templates from runs
  - [x] Can run templates
  - [x] Can run templates with modified input
2025-12-10 15:52:02 +00:00
Nicholas Tindle
979d7c3b74 feat(blocks): Add 4 new GitHub webhook trigger blocks (#11588)
I want to be able to automate some actions on social media or our Discord
server in response to events on GitHub.


<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test Stars
  - [x] Test Discussions
  - [x] Test Issues
  - [x] Test Release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-09 21:25:43 +00:00
Nicholas Tindle
95200b67f8 feat(blocks): add many new spreadsheet blocks (#11574)
<!-- Clearly explain the need for these changes: -->
We have lots we want to do with Google Sheets, and we don't want a lack
of blocks to be a limiter, so I pre-built a lot of blocks!

### Changes 🏗️
Adds 24 new blocks for Google Sheets (tested and working):

| #  | Block                              | Description                            |
|----|------------------------------------|----------------------------------------|
| 1  | GoogleSheetsFilterRowsBlock        | Filter rows based on column conditions |
| 2  | GoogleSheetsLookupRowBlock         | VLOOKUP-style row lookup               |
| 3  | GoogleSheetsDeleteRowsBlock        | Delete rows from a sheet               |
| 4  | GoogleSheetsGetColumnBlock         | Get data from a specific column        |
| 5  | GoogleSheetsSortBlock              | Sort sheet data                        |
| 6  | GoogleSheetsGetUniqueValuesBlock   | Get unique values from a column        |
| 7  | GoogleSheetsInsertRowBlock         | Insert rows into a sheet               |
| 8  | GoogleSheetsAddColumnBlock         | Add a new column                       |
| 9  | GoogleSheetsGetRowCountBlock       | Get the number of rows                 |
| 10 | GoogleSheetsRemoveDuplicatesBlock  | Remove duplicate rows                  |
| 11 | GoogleSheetsUpdateRowBlock         | Update an existing row                 |
| 12 | GoogleSheetsGetRowBlock            | Get a specific row by index            |
| 13 | GoogleSheetsDeleteColumnBlock      | Delete a column                        |
| 14 | GoogleSheetsCreateNamedRangeBlock  | Create a named range                   |
| 15 | GoogleSheetsListNamedRangesBlock   | List all named ranges                  |
| 16 | GoogleSheetsAddDropdownBlock       | Add dropdown validation to cells       |
| 17 | GoogleSheetsCopyToSpreadsheetBlock | Copy sheet to another spreadsheet      |
| 18 | GoogleSheetsProtectRangeBlock      | Protect a range from editing           |
| 19 | GoogleSheetsExportCsvBlock         | Export sheet as CSV                    |
| 20 | GoogleSheetsImportCsvBlock         | Import CSV data                        |
| 21 | GoogleSheetsAddNoteBlock           | Add notes to cells                     |
| 22 | GoogleSheetsGetNotesBlock          | Get notes from cells                   |
| 23 | GoogleSheetsShareSpreadsheetBlock  | Share spreadsheet with users           |
| 24 | GoogleSheetsSetPublicAccessBlock   | Set public access permissions          |


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested using the attached agent 
[super test for
spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
> 
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-12-09 17:28:22 +00:00
Abhimanyu Yadav
f8afc6044e fix(frontend): prevent file upload buttons from triggering form submission (#11576)
<!-- Clearly explain the need for these changes: -->

In the File Widget, the upload button was incorrectly behaving like a
submit button. When users clicked it, the rjsf library immediately
triggered form validation and displayed validation errors, even though
the user was only trying to upload a file.

This happened because HTML buttons inside a form default to
`type="submit"`, which triggers form submission on click. By explicitly
setting `type="button"` on all file-related buttons, we prevent them
from submitting the form while still allowing them to trigger the file
input dialog.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `type="button"` attribute to the clear button in the compact
variant
- Added `type="button"` attribute to the upload button in the compact
variant
- Added `type="button"` attribute to the "Browse File" button in the
default variant

This ensures that clicking any of these buttons only triggers the
intended file selection/upload action without causing unwanted form
validation or submission.
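A minimal sketch of the fix (markup and handler names here are assumptions):

```tsx
// Minimal sketch (assumed markup): inside a form, a button defaults to
// type="submit"; type="button" keeps these clicks from submitting the
// enclosing rjsf form.
export function FileWidgetButtons(props: {
  onBrowse: () => void;
  onClear: () => void;
}) {
  return (
    <div>
      <button type="button" onClick={props.onBrowse}>
        Browse File
      </button>
      <button type="button" onClick={props.onClear}>
        Clear
      </button>
    </div>
  );
}
```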

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested clicking the upload button in a form with File Widget -
form should not submit or show validation errors
- [x] Tested clicking the clear button - should clear the file without
triggering form validation
- [x] Tested clicking the "Browse File" button - should open file dialog
without triggering form validation
- [x] Verified file upload functionality still works correctly after
selecting a file
2025-12-09 15:54:17 +00:00
Abhimanyu Yadav
7edf01777e fix(frontend): sync flowVersion to URL when loading graph from Library (#11585)
<!-- Clearly explain the need for these changes: -->

When opening a graph from the Library, the `flowVersion` query parameter
was not being set in the URL. This caused issues when the graph data
didn't contain an internal `graphVersion`, resulting in the builder
failing and graphs breaking when running.

The `useGetV1GetSpecificGraph` hook relies on the `flowVersion` query
parameter to fetch the correct graph version. Without it being properly
set in the URL, the graph loading logic fails when the version
information is missing from the graph data itself.

### Changes 🏗️

- Added `setQueryStates` to the `useQueryStates` hook return value in
`useFlow.ts`
- Added logic to sync `flowVersion` to the URL query parameters when a
graph is loaded
- When `graph.version` is available, it now updates the `flowVersion`
query parameter in the URL (defaults to `1` if version is undefined)

This ensures the URL stays in sync with the loaded graph's version,
preventing builder failures and execution issues.
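A minimal sketch of the sync (assuming nuqs-style query-state hooks like the ones the builder uses; the graph shape is simplified):

```tsx
import { useEffect } from "react";
import { parseAsInteger, useQueryStates } from "nuqs";

// Minimal sketch (assumed shapes): once a graph is loaded, mirror its
// version into the ?flowVersion= query param, defaulting to 1 when the
// loaded graph carries no version.
export function useSyncFlowVersion(graph?: { version?: number }) {
  const [, setQueryStates] = useQueryStates({ flowVersion: parseAsInteger });

  useEffect(() => {
    if (!graph) return;
    void setQueryStates({ flowVersion: graph.version ?? 1 });
  }, [graph, setQueryStates]);
}
```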

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a graph from the Library that doesn't have a version in the
URL
- [x] Verify that `flowVersion` is automatically added to the URL query
parameters
  - [x] Verify that the graph loads correctly and can be executed
- [x] Verify that graphs with existing `flowVersion` in URL continue to
work correctly
- [x] Verify that graphs opened from Library with version information
sync correctly
2025-12-09 15:26:45 +00:00
Ubbe
c9681f5d44 fix(frontend): library page adjustments (#11587)
## Changes 🏗️

### Adjust layout and styles on mobile 📱 

<img width="448" height="843" alt="Screenshot 2025-12-09 at 22 53 14"
src="https://github.com/user-attachments/assets/159bdf4f-e6b2-42f5-8fdf-25f8a62c62d1"
/>

### Make the sidebar cards have contextual actions

<img width="486" height="243" alt="Screenshot 2025-12-09 at 22 53 27"
src="https://github.com/user-attachments/assets/2f530168-3217-47c4-b08d-feccbb9e9152"
/>

Depending on the card type, different type of actions are shown...

### Make buttons in "About agent" card do something

<img width="344" height="346" alt="Screenshot 2025-12-09 at 22 54 01"
src="https://github.com/user-attachments/assets/47181f80-1f68-4ef1-aecc-bbadc7cc9c44"
/>

### Other

- Hide `Schedule` button for agents with trigger run type
- Adjust secondary button background colour...
- Make drawer content scrollable on mobile 

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-09 23:17:44 +07:00
Abhimanyu Yadav
1305325813 fix(frontend): preserve button shape in credential select when content is long (#11577)
<!-- Clearly explain the need for these changes: -->

When the content inside the credential select dropdown becomes too long,
the adjacent link buttons lose their rounded shape and appear squarish.
This happens when the text stretches the container or affects the layout
of the buttons.

The issue occurs because the button's width can shrink below its
intended size when the flex container is stretched by long credential
names. By adding an explicit minimum width constraint with `!min-w-8`,
we ensure the button maintains its proper dimensions and rounded
appearance regardless of the select dropdown's content length.
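A minimal sketch of the layout issue (markup assumed):

```tsx
// Minimal sketch (assumed markup): in a flex row, long select content can
// squeeze the sibling icon button below 2rem; !min-w-8 pins its width so
// the rounded shape survives.
export function CredentialSelectRow() {
  return (
    <div className="flex items-center gap-2">
      <select className="min-w-0 flex-1 truncate">
        <option>a-very-long-provider/a-very-long-username@a-very-long-host</option>
      </select>
      <button type="button" className="h-8 w-8 !min-w-8 rounded-full">
        ↗
      </button>
    </div>
  );
}
```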

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `!min-w-8` to the external link button's className in
`SelectCredential` component to enforce a minimum width of 2rem (8 *
0.25rem)
- This ensures the button maintains its rounded shape even when the
adjacent select dropdown contains long credential names

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested credential select with short credential names - button
should maintain rounded shape
- [x] Tested credential select with very long credential names (e.g.,
long provider names, usernames, and hosts) - button should maintain
rounded shape
2025-12-09 15:26:22 +00:00
Abhimanyu Yadav
4f349281bd feat(frontend): switch copied graph storage from local storage to clipboard (#11578)
### Changes 🏗️

This PR migrates the copy/paste functionality for graph nodes and edges
from local storage to the Clipboard API. This change addresses storage
limitations and enables cross-tab copying.


https://github.com/user-attachments/assets/6ef55713-ca5b-4562-bb54-4c12db241d30


**Key changes:**
- Replaced `localStorage` with `navigator.clipboard` API for copy/paste
operations
- Added `CLIPBOARD_PREFIX` constant (`"autogpt-flow-data:"`) to identify
our clipboard data and prevent conflicts with other clipboard content
(sketched after the benefits list below)
- Added toast notifications to provide user feedback when copying nodes
- Added error handling for clipboard read/write operations with console
error logging
- Removed dependency on `@/services/storage/local-storage` for copied
flow data
- Updated `useCopyPaste` hook to use async clipboard operations with
proper promise handling

**Benefits:**
- Removes local storage size limitations (5-10MB)
- Enables copying nodes between browser tabs/windows
- Provides better user feedback through toast notifications
- More standard approach using the native browser Clipboard API
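A minimal sketch of the prefix-tagged clipboard round-trip (`CLIPBOARD_PREFIX` is from the description above; the payload shape and function names are assumptions):

```ts
// Minimal sketch: tag copied flow data with a prefix so paste can ignore
// unrelated clipboard content (plain text, data from other apps, etc.).
const CLIPBOARD_PREFIX = "autogpt-flow-data:";

type CopiedFlow = { nodes: unknown[]; edges: unknown[] };

export async function copyFlow(data: CopiedFlow): Promise<void> {
  await navigator.clipboard.writeText(CLIPBOARD_PREFIX + JSON.stringify(data));
}

export async function pasteFlow(): Promise<CopiedFlow | null> {
  const text = await navigator.clipboard.readText();
  if (!text.startsWith(CLIPBOARD_PREFIX)) return null; // not our data
  try {
    return JSON.parse(text.slice(CLIPBOARD_PREFIX.length)) as CopiedFlow;
  } catch {
    return null; // malformed payload: ignore rather than crash the editor
  }
}
```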

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Select one or more nodes in the flow editor
  - [x] Press `Ctrl+C` (or `Cmd+C` on Mac) to copy nodes
- [x] Verify toast notification appears showing "Copied successfully"
with node count
  - [x] Press `Ctrl+V` (or `Cmd+V` on Mac) to paste nodes
  - [x] Verify nodes are pasted at viewport center with new unique IDs
  - [x] Verify edges between copied nodes are also pasted correctly
- [x] Test copying nodes in one browser tab and pasting in another tab
(should work)
- [x] Test copying non-flow data (e.g., text) and verify paste doesn't
interfere with flow editor
2025-12-09 15:26:12 +00:00
Ubbe
6c43b34dee feat(frontend): add templates/triggers to new Library page view (#11580)
## Changes 🏗️

Add Templates to the new Agent Library page:

<img width="800" height="889" alt="Screenshot 2025-12-09 at 14 10 01"
src="https://github.com/user-attachments/assets/85f0d478-f3f9-4ccf-81df-b9a7f4ae8849"
/>

- You can create a template from a run ( new action button )
- Templates are listed and can be selected on the sidebar
- When viewing a template, you can edit it, create a task or delete it

Add Triggers to the new Agent Library page:

<img width="800" height="836" alt="Screenshot 2025-12-09 at 14 10 43"
src="https://github.com/user-attachments/assets/c722f807-c72f-4a4d-8778-e36bea203f6e"
/>

- When an agent contains a trigger block, on the modal it will create a
trigger
- When there are triggers, they are listed on the sidebar
- A trigger can be viewed and edited

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new page locally
2025-12-09 18:30:13 +07:00
Zamil Majdy
c1e21d07e6 feat(platform): add execution accuracy alert system (#11562)
## Summary

<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>

- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection  
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models

## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds (see the sketch after this list)
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries

## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 19:28:57 +00:00
Ubbe
aaa8dcc5a8 feat(frontend): design updates run agent modal (#11570)
## Changes 🏗️

<img width="500" height="927" alt="Screenshot 2025-12-08 at 16 30 04"
src="https://github.com/user-attachments/assets/97170302-50ae-4750-99e1-275861ee9e08"
/>

<img width="500" height="897" alt="Screenshot 2025-12-08 at 16 30 10"
src="https://github.com/user-attachments/assets/0a7ac828-ebea-40de-a19e-fbf77b0c2eb7"
/>

Update the new **Run Agent Modal** as a new view. From now on, sections
(inputs, credentials, etc.) are enclosed. Refactor all the existing code
to be more readable and [adhere to code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md).

### Other improvements

- Refactor `<CredentialInputs />`:
  - to adhere to code conventions
  - to show all existing credentials for a given provider
  - to allow deletion of a given credential
  - to allow to select/switch credentials for a given task
- Run Graph Errors:
- when the run fails, show the errors nicely on the toast and link to
the builder for further debugging
- Design System:
  - secondary button colour has been updated 
  - label sizes for form fields have been updated

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above updates running agents from the new library page
2025-12-08 21:47:32 +07:00
Ubbe
a46976decd fix(frontend): allow to close toasts (#11550)
## Changes 🏗️

<img width="795" height="275" alt="Screenshot 2025-12-04 at 21 05 55"
src="https://github.com/user-attachments/assets/e7572635-3976-4822-89ea-3e5e26196ab8"
/>

Allow closing toasts 🍞

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Toasts have a close button top-right

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-08 17:29:14 +07:00
Nicholas Tindle
c4eb7edb65 feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-05 09:44:38 -06:00
Nicholas Tindle
3f690ea7b8 fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn
[zero-installs](https://yarnpkg.com/features/zero-installs) feature
introduced in Yarn v2, note that this PR does not update the
`.yarn/cache/` directory, meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project;
you will need to run `yarn` to update the contents of the
`.yarn/cache/` directory.
If you are not using zero-installs, you can ignore this; your flow
should be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| | Issue |
|:---:|:---|
| ![critical severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/c.png 'critical severity') | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc=fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-05 09:44:12 -06:00
Swifty
8be3c88711 feat(backend): add default store agents for seeding test databases (#11552)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.

### Changes 🏗️

- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution

**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper  
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
  - [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this only adds test data and a
loading script.
2025-12-05 16:08:37 +01:00
Zamil Majdy
e4d0dbc283 feat(platform): add Agent Output Demo field to marketplace submission form (#11538)
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication

## Changes Made

### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
  1. YouTube/Overview video (if exists)
  2. First image (hero)
  3. Agent Output Demo (if exists) 
  4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`
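
A rough illustration of the shared validator mentioned above: a
URL-parsing check along these lines would cover watch, short-link,
embed, and shorts URLs. The actual signature and accepted formats in
`src/lib/utils.ts` are not shown in this changelog, so treat this as a
hedged sketch.

```ts
// Sketch only: accept common YouTube URL shapes; the real helper may differ.
export function validateYouTubeUrl(url: string): boolean {
  try {
    const { hostname, pathname, searchParams } = new URL(url);
    const host = hostname.replace(/^www\./, "");
    if (host === "youtu.be") return pathname.length > 1; // short links carry the ID in the path
    if (host === "youtube.com" || host === "m.youtube.com") {
      return (
        searchParams.has("v") || // watch?v=<id>
        pathname.startsWith("/embed/") ||
        pathname.startsWith("/shorts/")
      );
    }
    return false;
  } catch {
    return false; // not a parseable URL at all
  }
}
```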

### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field

## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully 
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 11:40:12 +00:00
Swifty
8e476c3f8d fix(backend): pass credential type from SDK registry to integrations API (#11544)
### Changes 🏗️

This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.

**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types`
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)

**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified providers with SDK-defined auth types return correct
flags
  - [x] Verified legacy providers still work with fallback behavior
- [x] Tested the `/integrations/providers` endpoint returns expected
data

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required for this PR.
2025-12-05 12:42:49 +01:00
Zamil Majdy
2f63defb53 fix(backend): Mark ValueError as known block errors (#11537)
### Changes 🏗️

Mark ValueError as known block errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-12-05 11:12:18 +00:00
Sukhtumur Narantuya
2934e9ea69 fix(backend): replace print() statements with proper logging (#11499)
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test it still works


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
> 
> - **Backend**:
>   - **Airtable (`backend/blocks/airtable/_api.py`)**:
>     - Replace `print(params)` with `logger.debug` in `create_base`.
>   - **Reddit (`backend/blocks/reddit.py`)**:
>     - Add `logging` import and `logger` initialization.
>     - Replace login `print` with `logger.info` in `get_praw`.
>   - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
>     - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-12-05 07:49:20 +00:00
Krzysztof Czerwinski
c880db439d feat(platform): Backend completion of Onboarding tasks (#11375)
Make onboarding task completion backend-authoritative, which prevents
cheating (previously users could mark all tasks as completed instantly
and get rewards) and makes task completion more reliable. Task
completion is moved to the backend, with the exception of introductory
onboarding tasks and visit-page tasks.

### Changes 🏗️

- Move run-counter incrementing to the backend, and count
webhook-triggered and scheduled executions as well
- Use the user's timezone when calculating the run streak
- Move frontend task completion from the onboarding-state update to a
separate endpoint, guarded so that only frontend tasks can be completed
- Graph creation, graph execution, and adding a marketplace agent to the
library now accept a `source`, so the appropriate tasks can be completed
- Replace `client.ts` API calls with orval-generated ones and remove
functions from `client.ts` that are no longer used
- Add a `resolveResponse` helper function that unwraps an orval-generated
call result to its 2xx response (see the sketch below)
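
A hedged sketch of what such an unwrapping helper might look like;
orval-generated calls typically resolve to an object carrying `status`
and `data`, but the exact types and non-2xx handling in the codebase are
not shown here.

```ts
// Illustrative shape of an orval call result; real generated types differ.
interface OrvalResult<T> {
  status: number;
  data: T;
}

// Unwrap a call result to its data when the status is 2xx, else throw.
function resolveResponse<T>(result: OrvalResult<T>): T {
  if (result.status >= 200 && result.status < 300) return result.data;
  throw new Error(`Request failed with status ${result.status}`);
}
```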

Small changes & bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be finished
  - [x] All tasks can be finished and work properly
  - [x] Confetti works properly
2025-12-05 02:32:28 +00:00
Nicholas Tindle
486099140d fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn
[zero-installs](https://yarnpkg.com/features/zero-installs) feature
introduced in Yarn v2, note that this PR does not update the
`.yarn/cache/` directory, meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project;
you will need to run `yarn` to update the contents of the
`.yarn/cache/` directory.
If you are not using zero-installs, you can ignore this; your flow
should be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| | Issue |
|:---:|:---|
| ![critical severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/c.png 'critical severity') | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc=fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-04 20:09:49 +00:00
Ubbe
6d8906ced7 fix(frontend): summary card layout (#11556)
## Changes 🏗️

- Make it less verbose and adapt the summary card to the new design

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see summary displaying well
2025-12-04 23:33:57 +07:00
Abhimanyu Yadav
bf32a76f49 fix(frontend): prevent NodeHandle error when array schema items is undefined (#11549)
- Added conditional rendering in `NodeArrayInput` component to only
render `NodeHandle` when `schema.items` is defined
- This prevents the error when array schemas don't have an `items`
property defined
- The input field rendering logic already had proper null checks, so
only the handle rendering needed to be fixed

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the legacy builder and create a new agent
  - [x] Add a Smart Decision block to the agent
- [x] Verify that the `conversation_history` field renders without
errors
- [x] Verify that array fields with defined `items` still render
correctly (e.g., test with other blocks that use arrays)
- [x] Verify that connecting/disconnecting handles to the
conversation_history field works correctly
  - [x] Verify that the field can accept input values when not connected
2025-12-04 15:13:31 +00:00
Ubbe
f7a8e372dd feat(frontend): implement new actions sidebar + summary (#11545)
## Changes 🏗️

<img width="800" height="767" alt="Screenshot 2025-12-04 at 17 40 10"
src="https://github.com/user-attachments/assets/37036246-bcdb-46eb-832c-f91fddfd9014"
/>

<img width="800" height="492" alt="Screenshot 2025-12-04 at 17 40 16"
src="https://github.com/user-attachments/assets/ba547e54-016a-403c-9ab6-99465d01af6b"
/>

On the new Agent Library page:

- Implement the new actions sidebar (the main change)
- Refactor the layout/components to accommodate that
- Implement the missing "Summary" functionality 
- Update icon buttons in Design system with new designs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and test it
2025-12-04 22:46:42 +07:00
Abhimanyu Yadav
3ccc712463 feat(frontend): add host-scoped credentials support to CredentialField (#11546)
### Changes 🏗️

This PR adds support for `host_scoped` credential type in the new
builder's `CredentialField` component. This enables blocks that require
sensitive headers for custom API endpoints to configure host-scoped
credentials directly from the credential field.
<img width="745" height="843" alt="Screenshot 2025-12-04 at 4 31 09 PM"
src="https://github.com/user-attachments/assets/d076b797-64c4-4c31-9c88-47a064814055"
/>

<img width="418" height="180" alt="Screenshot 2025-12-04 at 4 36 02 PM"
src="https://github.com/user-attachments/assets/b4fa6d8d-d8f4-41ff-ab11-7c708017f8fd"
/>

**Key changes:**

- **Added `HostScopedCredentialsModal` component**
(`models/HostScopedCredentialsModal/`)
- Modal dialog for creating host-scoped credentials with host pattern,
optional title, and dynamic header pairs (key-value)
- Auto-populates host from discriminator value (URL field) when
available
  - Supports adding/removing multiple header pairs with validation

- **Enhanced credential filtering logic** (`helpers.ts`)
- Updated `filterCredentialsByProvider` to accept `schema` and
`discriminatorValue` parameters
  - Added intelligent filtering for:
    - Credential types supported by the block
    - OAuth credentials with sufficient scopes
    - Host-scoped credentials matched by host from discriminator value
  - Extracted `getDiscriminatorValue` helper function for reusability

- **Updated `CredentialField` component**
  - Added `supportsHostScoped` check in `useCredentialField` hook
- Conditionally renders `HostScopedCredentialsModal` when
`supportsHostScoped && discriminatorValue` is true
  - Exports `discriminatorValue` for use in child components

- **Updated `useCredentialField` hook**
- Calculates `discriminatorValue` using new `getDiscriminatorValue`
helper
- Passes `schema` and `discriminatorValue` to enhanced
`filterCredentialsByProvider` function
- Returns `supportsHostScoped` and `discriminatorValue` for component
consumption

**Technical details:**
- Host extraction uses `getHostFromUrl` utility to parse host from
discriminator value (URL)
- Header pairs are managed as state with add/remove functionality
- Form validation uses `react-hook-form` with `zod` schema
- Credential creation integrates with existing API endpoints and query
invalidation
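
As a sketch of the host-matching rule described above (the types and
helper here are illustrative stand-ins for the real `getHostFromUrl`
utility and credential models):

```ts
interface HostScopedCredential {
  type: "host_scoped";
  host: string;
  title?: string;
}

// Parse the host out of a discriminator value (a URL field on the block).
function getHostFromUrl(url: string): string | null {
  try {
    return new URL(url).host;
  } catch {
    return null; // discriminator value was not a parseable URL
  }
}

// Only offer host-scoped credentials whose host matches the block's URL.
function matchesDiscriminator(
  cred: HostScopedCredential,
  discriminatorValue: string,
): boolean {
  const host = getHostFromUrl(discriminatorValue);
  return host !== null && cred.host === host;
}
```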

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify `HostScopedCredentialsModal` appears when block supports
`host_scoped` credentials and discriminator value is present
  - [x] Test host auto-population from discriminator value (URL field)
  - [x] Test manual host entry when discriminator value is not available
  - [x] Test adding/removing multiple header pairs
- [x] Test form validation (host required, empty header pairs filtered
out)
  - [x] Test credential creation and successful toast notification
  - [x] Verify credentials list refreshes after creation
- [x] Test host-scoped credential filtering matches credentials by host
from URL
- [x] Verify existing credential types (api_key, oauth2, user_password)
still work correctly
  - [x] Test OAuth scope filtering still works as expected
- [x] Verify modal only shows when `supportsHostScoped &&
discriminatorValue` conditions are met
2025-12-04 15:13:23 +00:00
Abhimanyu Yadav
2b9816cfa5 fix(frontend): ensure node selection state is set before copying in context menu (#11535)
<!-- Clearly explain the need for these changes: -->

Fixes an issue where copying a node via the context menu dialog fails on
the first attempt in a new session. The problem occurs because the node
selection state update and the copy operation happen in quick
succession, causing a race condition where `copySelectedNodes()` reads
the store before the selection state is properly updated.


### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Start a new browser session (or clear storage)
  - [x] Open the flow editor
- [x] Right-click on a node and select "Copy Node" from the context menu
  - [x] Verify the node is successfully copied on the first attempt
2025-12-04 15:13:13 +00:00
Abhimanyu Yadav
4e87f668e3 feat(frontend): add file input widget with variants and base64 support (#11533)
<!-- Clearly explain the need for these changes: -->

This PR enhances the FileInput component to support multiple variants
and modes, and integrates it into the form renderer as a file widget.
The changes enable a more flexible file input experience with both
server upload and local base64 conversion capabilities.

### Compact one
<img width="354" height="91" alt="Screenshot 2025-12-03 at 8 05 51 PM"
src="https://github.com/user-attachments/assets/1295a34f-4d9f-4a65-89f2-4b6516f176ff"
/>
<img width="386" height="96" alt="Screenshot 2025-12-03 at 8 06 11 PM"
src="https://github.com/user-attachments/assets/3c10e350-8ddc-43ff-bf0a-68b23f8db394"
/>

## Default one
<img width="671" height="165" alt="Screenshot 2025-12-03 at 8 05 08 PM"
src="https://github.com/user-attachments/assets/bd17c5f1-8cdb-4818-9850-a9e61252d8c1"
/>
<img width="656" height="141" alt="Screenshot 2025-12-03 at 8 05 21 PM"
src="https://github.com/user-attachments/assets/1edf260f-6245-44b9-bd3e-dd962f497413"
/>

### Changes 🏗️

#### FileInput Component Enhancements
- **Added variant support**: Introduced `default` and `compact` variants
- `default`: Full-featured UI with drag & drop, progress bar, and
storage note
- `compact`: Minimal inline design suitable for tight spaces like node
inputs
- **Added mode support**: Introduced `upload` and `base64` modes
- `upload`: Uploads files to server (requires `onUploadFile` and
`uploadProgress`)
  - `base64`: Converts files to base64 locally without server upload
(see the sketch after this section)
- **Improved type safety**: Refactored props using discriminated unions
(`UploadModeProps | Base64ModeProps`) to ensure type-safe usage
- **Enhanced file handling**: Added `getFileLabelFromValue` helper to
extract file labels from base64 data URIs or file paths
- **Better UX**: Added `showStorageNote` prop to control visibility of
storage disclaimer
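
Base64 mode presumably relies on the standard `FileReader` data-URI
conversion, along these lines (a sketch, not the component's actual
helper):

```ts
// Convert a picked file to a data URI like "data:image/png;base64,..."
// without any server round trip.
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string); // data URI string
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}
```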

#### FileWidget Integration
- **Replaced legacy Input**: Migrated from legacy `Input` component to
new `FileInput` component
- **Smart variant selection**: Automatically selects `default` or
`compact` variant based on form context size
- **Base64 mode**: Uses base64 mode for form inputs, eliminating need
for server uploads in builder context
- **Improved accessibility**: Better disabled/readonly state handling
with visual feedback

#### Form Renderer Updates
- **Disabled validation**: Added `noValidate={true}` and
`liveValidate={false}` to prevent premature form validation

#### Storybook Updates
- **Expanded stories**: Added comprehensive stories for all variant/mode
combinations:
  - `Default`: Default variant with upload mode
  - `Compact`: Compact variant with base64 mode
  - `CompactWithUpload`: Compact variant with upload mode
  - `DefaultWithBase64`: Default variant with base64 mode
- **Improved documentation**: Updated component descriptions to clearly
explain variants and modes

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test FileInput component in Storybook with all variant/mode
combinations
  - [x] Test file upload flow in default variant with upload mode
  - [x] Test base64 conversion in compact variant with base64 mode
  - [x] Test file widget in form renderer (node inputs)
  - [x] Test file type validation (accept prop)
  - [x] Test file size validation (maxFileSize prop)
  - [x] Test error handling for invalid files
  - [x] Test disabled and readonly states
  - [x] Test file clearing/removal functionality
  - [x] Verify compact variant renders correctly in tight spaces
  - [x] Verify default variant shows storage note only in upload mode
  - [x] Test drag & drop functionality in default variant
2025-12-04 15:13:01 +00:00
Abhimanyu Yadav
729400dbe1 feat(frontend): display graph validation errors inline on node fields (#11524)
When running a graph in the new builder, validation errors were only
displayed in toast notifications, making it difficult for users to
identify which specific fields had errors. Users needed to see
validation errors directly next to the problematic fields within each
node for better UX and faster debugging.

<img width="1319" height="953" alt="Screenshot 2025-12-03 at 12 48
15 PM"
src="https://github.com/user-attachments/assets/d444bc71-9bee-4fa7-8b7f-33339bd0cb24"
/>

### Changes 🏗️

- **Error handling in graph execution** (`useRunGraph.ts`):
- Added detection for graph validation errors using
`ApiError.isGraphValidationError()`
  - Parse and store node-level errors from backend validation response
  - Clear all node errors on successful graph execution
- Enhanced toast messages to guide users to fix validation errors on
highlighted nodes

- **Node store error management** (`nodeStore.ts`):
  - Added `errors` field to node data structure
  - Implemented `updateNodeErrors()` to set errors for a specific node
- Implemented `clearNodeErrors()` to remove errors from a specific node
  - Implemented `getNodeErrors()` to retrieve errors for a specific node
- Implemented `setNodeErrorsForBackendId()` to set errors by backend ID
(supports matching by `metadata.backend_id` or node `id`)
- Implemented `clearAllNodeErrors()` to clear all node errors across the
graph

- **Visual error indication** (`CustomNode.tsx`, `NodeContainer.tsx`):
- Added error detection logic to identify both configuration errors and
output errors
- Applied error styling to nodes with validation errors (using `FAILED`
status styling)
- Nodes with errors now display with red border/ring to visually
indicate issues

- **Field-level error display** (`FieldTemplate.tsx`):
  - Fetch node errors from store for the current node
- Match field IDs with error keys (handles both underscore and dot
notation)
  - Display field-specific error messages below each field in red text
- Added helper function `getFieldErrorKey()` to normalize field IDs for
error matching

- **Utility helpers** (`helpers.ts`):
- Created `getFieldErrorKey()` function to extract field key from field
ID (removes `root_` prefix)
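
A rough sketch of the normalization described above, assuming field IDs
carry a `root_` prefix while backend error keys may use underscore or
dot notation; the real helpers may handle more cases.

```ts
// Strip the form renderer's "root_" prefix from a field ID.
function getFieldErrorKey(fieldId: string): string {
  return fieldId.replace(/^root_/, "");
}

// Compare a field against backend error keys, tolerating underscore vs
// dot notation on either side.
function fieldHasError(fieldId: string, errorKeys: string[]): boolean {
  const key = getFieldErrorKey(fieldId).replace(/_/g, ".");
  return errorKeys.some((k) => k.replace(/_/g, ".") === key);
}
```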

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a graph with multiple nodes and intentionally leave
required fields empty
- [x] Run the graph and verify that validation errors appear in toast
notification
- [x] Verify that nodes with errors are highlighted with red border/ring
styling
- [x] Verify that field-specific error messages appear below each
problematic field in red text
- [x] Verify that error messages handle both underscore and dot notation
in field keys
- [x] Fix validation errors and run graph again - verify errors are
cleared
  - [x] Verify that successful graph execution clears all node errors
- [x] Test with nodes that have `backend_id` in metadata vs nodes
without
  - [x] Verify that nodes without errors don't show error styling
- [x] Test with nested fields and array fields to ensure error matching
works correctly
2025-12-04 15:12:51 +00:00
Abhimanyu Yadav
f6608e99c8 feat(frontend): add expandable text input modal for better editing experience (#11510)
Text inputs in the form builder can be difficult to edit when dealing
with longer content. Users need a way to expand text inputs into a
larger, more comfortable editing interface, especially for multi-line
text, passwords, and longer string values.



https://github.com/user-attachments/assets/443bf4eb-c77c-4bf6-b34c-77091e005c6d


### Changes 🏗️

- **Added `InputExpanderModal` component**: A new modal component that
provides a larger textarea (300px min-height) for editing text inputs
with the following features:
- Copy-to-clipboard functionality with visual feedback (checkmark icon)
  - Toast notification on successful copy
  - Auto-focus on open for better UX
  - Proper state management to reset values when modal opens/closes
  
- **Enhanced `TextInputWidget`**:
- Added expand button (ArrowsOutIcon) with tooltip for text, password,
and textarea input types
  - Button appears inline next to the input field
  - Integrated the new `InputExpanderModal` component
  - Improved layout with flexbox to accommodate the expand button
- Added padding-right to input when expand button is visible to prevent
text overlap

- **Refactored file structure**:
  - Moved `TextInputWidget.tsx` into `TextInputWidget/` directory
  - Updated import path in `widgets/index.ts`

- **UX improvements**:
- Expand button only shows for applicable input types (text, password,
textarea)
  - Number and integer inputs don't show expand button (not needed)
- Modal preserves schema title, description, and placeholder for context
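
The copy affordance presumably uses the async Clipboard API; a minimal
sketch, with the icon and toast handling left to the caller:

```ts
// Copy the expanded text and report success so the caller can swap the
// icon to a checkmark and fire a toast. Requires a secure context.
async function copyExpandedText(
  text: string,
  onCopied: () => void,
): Promise<void> {
  await navigator.clipboard.writeText(text);
  onCopied();
}
```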

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test expand button appears for text input fields
  - [x] Test expand button appears for password input fields  
  - [x] Test expand button appears for textarea fields
  - [x] Test expand button does NOT appear for number/integer inputs
  - [x] Test clicking expand button opens modal with current value
  - [x] Test editing text in modal and saving updates the input field
  - [x] Test cancel button closes modal without saving changes
- [x] Test copy-to-clipboard button copies text and shows success state
  - [x] Test toast notification appears on successful copy
  - [x] Test modal resets to original value when reopened
  - [x] Test modal auto-focuses textarea on open
  - [x] Test expand button tooltip displays correctly
  - [x] Test input field layout with expand button (no text overlap)
2025-12-04 15:12:32 +00:00
Abhimanyu Yadav
78c2245269 feat(frontend): add automatic collision resolution for flow editor nodes (#11506)
When users drag and drop nodes in the new flow editor, nodes can overlap
with each other, making the graph difficult to read and interact with.
This PR adds an automatic collision resolution algorithm that runs when
a node is dropped, ensuring nodes are automatically separated to prevent
overlaps and maintain a clean, readable graph layout.

### Changes 🏗️

- **Added collision resolution algorithm** (`resolve-collision.ts`):
- Implements an iterative collision detection and resolution system
using Flatbush for efficient spatial indexing
- Automatically resolves overlaps by moving nodes apart along the axis
with the smallest overlap (see the sketch after this list)
- Configurable options: `maxIterations`, `overlapThreshold`, and
`margin`
- Uses actual node dimensions (`width`, `height`, or `measured` values)
when available

- **Integrated collision resolution into Flow component**:
- Added `onNodeDragStop` callback that triggers collision resolution
after a node is dropped
- Configured with `maxIterations: Infinity`, `overlapThreshold: 0.5`,
and `margin: 15px`

- **Enhanced node dimension handling**:
- Updated `nodeStore.ts` to prioritize actual node dimensions
(`node.width`, `node.measured.width`) over hardcoded defaults when
calculating positions
  - Ensures collision detection uses accurate node sizes

- **Added dependency**:
- Added `flatbush@4.5.0` for efficient spatial indexing and collision
detection
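
To make the resolution step concrete, here is a simplified sketch of the
smallest-overlap rule described above. It is illustrative only: a plain
O(n²) pair scan stands in for the Flatbush spatial index, and a node is
reduced to a bare bounding box.

```ts
interface Box {
  x: number;
  y: number;
  width: number;
  height: number;
}

function resolveCollisions(
  nodes: Box[],
  { maxIterations = 50, margin = 15 } = {},
): void {
  for (let iter = 0; iter < maxIterations; iter++) {
    let moved = false;
    for (let i = 0; i < nodes.length; i++) {
      for (let j = i + 1; j < nodes.length; j++) {
        const a = nodes[i];
        const b = nodes[j];
        // Overlap on each axis, inflated by the desired margin.
        const overlapX =
          Math.min(a.x + a.width, b.x + b.width) - Math.max(a.x, b.x) + margin;
        const overlapY =
          Math.min(a.y + a.height, b.y + b.height) - Math.max(a.y, b.y) + margin;
        if (overlapX <= 0 || overlapY <= 0) continue; // no collision
        moved = true;
        // Push the pair apart along the axis with the smaller overlap,
        // half of the distance each.
        if (overlapX < overlapY) {
          const dir = a.x < b.x ? 1 : -1;
          a.x -= (dir * overlapX) / 2;
          b.x += (dir * overlapX) / 2;
        } else {
          const dir = a.y < b.y ? 1 : -1;
          a.y -= (dir * overlapY) / 2;
          b.y += (dir * overlapY) / 2;
        }
      }
    }
    if (!moved) break; // layout is overlap-free
  }
}
```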

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node and drop it on top of another node - verify nodes
automatically separate
- [x] Drag multiple nodes to create overlapping clusters - verify all
overlaps are resolved
- [x] Drag nodes with different sizes (NOTE blocks vs regular blocks) -
verify collision detection uses correct dimensions
- [x] Drag nodes near the edge of the canvas - verify nodes don't get
pushed off-screen
- [x] Test with a graph containing many nodes (20+) - verify performance
is acceptable
  - [x] Verify nodes maintain their positions when no collisions occur
- [x] Test with nodes that have custom measured dimensions - verify
accurate collision detection
2025-12-04 14:46:43 +00:00
Nicholas Tindle
113df689dc feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-04 14:40:30 +00:00
Ubbe
f1c6c94636 ci(frontend): fix concurrency groups (#11551)
## Changes 🏗️

Fix concurrency grouping on Front-end workflows.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see once merged
2025-12-04 21:44:06 +07:00
Swifty
6588110bf2 fix(frontend): forward X-API-Key header through proxy (#11530)
The Next.js API proxy was stripping the X-API-Key header when forwarding
requests to the backend, causing API key authentication to fail in
environments where requests go through the proxy (e.g., dev
environment).

### Changes 🏗️

- Updated `createRequestHeaders()` in
`frontend/src/lib/autogpt-server-api/helpers.ts` to forward the
`X-API-Key` header from the original request to the backend
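
The essence of the fix can be sketched as follows; the real
`createRequestHeaders()` takes different parameters, so this only
illustrates the pattern (a proxy that rebuilds headers must copy
`X-API-Key` through explicitly):

```ts
// Hypothetical sketch: rebuild outgoing headers for the backend,
// forwarding the API key that the proxy previously dropped.
function buildProxyHeaders(incoming: Headers, authToken?: string): Headers {
  const headers = new Headers({ "Content-Type": "application/json" });
  if (authToken) headers.set("Authorization", `Bearer ${authToken}`);
  const apiKey = incoming.get("X-API-Key");
  if (apiKey) headers.set("X-API-Key", apiKey);
  return headers;
}
```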

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify API key authentication works when requests go through the
Next.js proxy
- [x] Verify existing authentication (Authorization header) still works
  - [x] Verify admin impersonation header forwarding still works

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 13:39:17 +01:00
Ubbe
bfbd4eee53 ci(frontend): concurrency optimizations (#11525)
## Changes 🏗️

Added the same concurrency optimisation to the Front-end and Fullstack
CI workflows. It will:

- Cancel in-progress runs when a new workflow starts for the same
branch/PR
- Reduce CI costs by avoiding redundant runs
- Ensure only the latest workflow runs
- Both workflows now use the same concurrency strategy to optimise CI
billing.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see as we push commits...
2025-12-03 18:05:22 +07:00
Swifty
7b93600973 fix duplicate Prometheus metrics 2025-12-03 11:04:38 +01:00
Abhimanyu Yadav
02d9ff8db2 fix(frontend): improve error message extraction in agent execution error handler (#11527)
When agent execution fails, the error toast was only showing
`error.message`, which often lacks detail. The API returns more specific
error messages in `error.response.detail.message`, but these weren't
being displayed to users, making debugging harder.

<img width="1018" height="796" alt="Screenshot 2025-12-03 at 2 14 17 PM"
src="https://github.com/user-attachments/assets/6a93aed9-9e18-450a-9995-8760d7d5ca35"
/>

### Changes 🏗️

- Updated error message extraction in `useAgentRunModal` to check
`error.response.detail.message` first, then fall back to
`error.message`, then to the default message
- This ensures users see the most specific error message available from
the API response
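
A minimal sketch of that fallback chain, assuming the response shape
named above (`error.response.detail.message`); the real types in
`useAgentRunModal` may differ:

```ts
type MaybeApiError = {
  message?: string;
  response?: { detail?: { message?: string } };
};

// Prefer the API's detailed message, then the generic Error message,
// then a default.
function getExecutionErrorMessage(
  error: MaybeApiError,
  fallback = "Failed to execute agent",
): string {
  return error.response?.detail?.message ?? error.message ?? fallback;
}
```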

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Triggered an agent execution error (e.g., invalid inputs) and
verified the toast shows the detailed error message from
`error.response.detail.message`
- [x] Verified fallback to `error.message` when
`error.response.detail.message` is not available
  - [x] Verified fallback to default message when neither is available
2025-12-03 09:19:29 +00:00
Ubbe
b4a69c49a1 feat(frontend): use websockets on new library page + fixes (#11526)
## Changes 🏗️

<img width="800" height="614" alt="Screenshot 2025-12-03 at 14 52 46"
src="https://github.com/user-attachments/assets/c7012d7a-96d4-4268-a53b-27f2f7322a39"
/>

- Create a new `useExecutionEvents()` hook that can be used to subscribe
to agent executions
- now both the **Activity Dropdown** and the new **Library Agent Page**
use it
  - so subscribing to executions is centralised in a single place
- Apply a couple of design fixes
- Fix not being able to select the new templates tab 
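
For illustration, a centralised subscription hook along these lines
would give both consumers one code path; the hook name comes from the
description above, but the endpoint URL, event type, and transport
details here are placeholders.

```ts
import { useEffect } from "react";

// Hypothetical sketch: subscribe to execution events over a websocket
// and invoke a callback per event. The real hook's signature may differ.
export function useExecutionEvents(onEvent: (event: unknown) => void): void {
  useEffect(() => {
    const ws = new WebSocket("wss://example.invalid/executions"); // placeholder
    ws.onmessage = (msg) => onEvent(JSON.parse(msg.data));
    return () => ws.close(); // unsubscribe on unmount
  }, [onEvent]);
}
```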

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and verify the above
2025-12-03 16:24:58 +07:00
Ubbe
e62a8fb572 feat(frontend): design updates on new library page... (#11522)
## Changes 🏗️

### Design updates

Design updates on the new Library page. New empty views with
illustration and overall changes on the sidebar and selected run
sections...

<img width="800" height="871" alt="Screenshot 2025-12-03 at 14 03 45"
src="https://github.com/user-attachments/assets/6970af99-dda8-4cd8-a9b5-97fe5ee2032e"
/>

<img width="800" height="844" alt="Screenshot 2025-12-03 at 14 03 52"
src="https://github.com/user-attachments/assets/92e6af79-db9f-4098-961f-9ae3b3300ffa"
/>

<img width="800" height="816" alt="Screenshot 2025-12-03 at 14 03 57"
src="https://github.com/user-attachments/assets/7e23e612-b446-4c53-8ea2-f0e2b1574ec3"
/>

<img width="800" height="862" alt="Screenshot 2025-12-03 at 14 04 07"
src="https://github.com/user-attachments/assets/e3398232-e74b-4a06-8702-30a96862dc00"
/>

### Architecture

- Make selected tabs/items synced with the URL via `?activeTab=` and
`?activeItem=`, so it is easy and predictable to change their state...
- Some minor updates on the design system I missed on previous PRs ... 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and check the new page ( still wip )
2025-12-03 14:33:18 +07:00
Reinier van der Leer
233ff40a24 fix(frontend/marketplace): Fix rendering creator links without schema (#11516)
- [OPEN-2871: TypeError: URL constructor: www.agpt.co is not a valid
URL.](https://linear.app/autogpt/issue/OPEN-2871/typeerror-url-constructor-wwwagptco-is-not-a-valid-url)
- [Sentry Issue BUILDER-56D: TypeError: URL constructor: www.agpt.co is
not a valid
URL.](https://significant-gravitas.sentry.io/issues/7081476631/)

### Changes 🏗️

- Amend URL handling in `CreatorLinks` to correctly handle URLs with
implicit schema

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, CI is sufficient
2025-12-03 06:57:55 +00:00
seer-by-sentry[bot]
fa567991b3 fix(backend): Handle HTTP errors in HTTP block by returning response objects (#11515)
### Changes 🏗️

- Modify the HTTP block to handle HTTP errors (4xx, 5xx) by returning
response objects instead of raising exceptions.
- This allows proper handling of client_error and server_error outputs.

Fixes
[AUTOGPT-SERVER-6VP](https://sentry.io/organizations/significant-gravitas/issues/7023985892/).
The issue: HTTP errors are raised as exceptions by `Requests`'
default behavior, bypassing the block's intended error-output handling
and resulting in `BlockUnknownError`.

This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 4902617

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/7023985892/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested with a service that will return 4XX and 5XX errors to make
sure the correct paths are followed



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> HTTP block now returns 4xx/5xx responses instead of raising, and
Requests gains retry_max_attempts with last-result handling.
> 
> - **Backend**
>   - **HTTP block (`backend/blocks/http.py`)**:
> - Use `Requests(raise_for_status=False, retry_max_attempts=1)` so
4xx/5xx return response objects and route to
`client_error`/`server_error` outputs.
>   - **HTTP client util (`backend/util/request.py`)**:
> - Add `retry_max_attempts` option with `stop_after_attempt` and
`_return_last_result` to return the final response when retries stop.
> - Build `tenacity` retry config dynamically in `Requests.request()`;
validate `retry_max_attempts >= 1` when provided.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
fccae61c26. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: nicholas.tindle <nicholas.tindle@agpt.co>
2025-12-02 19:00:43 +00:00
Swifty
2cb6fd581c feat(platform): Integration management from external api (#11472)
Allow the external API to manage credentials

### Changes 🏗️

- Add the ability for the external API to manage credentials

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested it works

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces external API endpoints to manage integrations (OAuth
initiation/completion and credential CRUD), adds external OAuth state
fields, and new API key permissions/config.
> 
> - **External API – Integrations**:
> - Add router `backend/server/external/routes/integrations.py` with
endpoints to:
> - `GET /v1/integrations/providers` list providers (incl. default
scopes)
> - `POST /v1/integrations/{provider}/oauth/initiate` and `POST
/oauth/complete` for external OAuth (custom callback, state)
> - `GET /v1/integrations/credentials` and `GET /{provider}/credentials`
to list credentials
> - `POST /{provider}/credentials` to create `api_key`, `user_password`,
`host_scoped` creds; `DELETE /{provider}/credentials/{cred_id}` to
delete
>   - Wire router in `backend/server/external/api.py`.
> - **Auth/Permissions**:
> - Add `APIKeyPermission` values: `MANAGE_INTEGRATIONS`,
`READ_INTEGRATIONS`, `DELETE_INTEGRATIONS` (schema + migration +
OpenAPI).
> - **Data model / Store**:
> - Extend `OAuthState` with external-flow fields: `callback_url`,
`state_metadata`, `api_key_id`, `is_external`.
> - Update `IntegrationCredentialsStore.store_state_token(...)` to
accept/store external OAuth metadata.
> - **OAuth providers**:
> - Set GitHub handler `DEFAULT_SCOPES = ["repo"]` in
`integrations/oauth/github.py`.
> - **Config**:
> - Add `config.external_oauth_callback_origins` in
`backend/util/settings.py` to validate allowed OAuth callback origins.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
249bba9e59. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-02 17:42:53 +01:00
Nicholas Tindle
55af799083 fix(blocks): clamp Twitter search start_time to 10 seconds before now (#11461)
## Summary
- Clamp `start_time` to at least 10 seconds before request time (Twitter
API requirement)
- Update input description to document this automatic adjustment
- Fix `serialize_list` to handle `None` data gracefully (exposed by the
fix)

## Background
Twitter API returns `400 Bad Request` when `start_time` is less than 10
seconds before the request time. Users providing current/future times
would hit this error.

**Sentry Issue:**
[BUILDER-3PG](https://significant-gravitas.sentry.io/issues/6919685270/)

## Affected Blocks
- `TwitterSearchRecentTweetsBlock`
- `TwitterGetUserMentionsBlock`
- `TwitterGetHomeTimelineBlock`
- `TwitterGetUserTweetsBlock`

## Test plan
- [x] Tested `TwitterSearchRecentTweetsBlock` with current time as
`start_time`
- [x] Verified clamping works and API call succeeds
- [x] Verified "No tweets found" is returned correctly when search
window has no results

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-02 15:45:31 +00:00
Zamil Majdy
6590fcb76f fix(backend): fix broken update_agent_version_in_library and reduce the method code duplication (#11514)
## Summary
Fix broken `update_agent_version_in_library` functionality by eagerly
loading `AgentGraph` when loading the library, and consolidate the
duplicate code that updates the agent version in the library and
configures HITL safe-mode settings.

## Problem
`update_agent_version_in_library` currently fails with this error:
```
  File "/Users/abhi/Documents/AutoGPT/autogpt_platform/backend/backend/server/v2/library/model.py", line 110, in from_db
    raise ValueError("Associated Agent record is required.")
ValueError: Associated Agent record is required.
```

The logic was also duplicated across two router endpoints with
identical implementations, creating a maintenance burden and potential
for inconsistencies.

## Changes Made

### Created Helper Method
- Add `_update_library_agent_version_and_settings()` helper function  
- Fixes broken `update_agent_version_in_library` by centralizing the
logic
- Uses proper error handling and settings merging with `model_copy()`

### Replaced Duplicate Code  
- **In `update_graph` function** (v1.py:863) - replaced 13 lines with
single helper call
- **In `set_graph_active_version` function** (v1.py:920) - replaced 13
lines with single helper call

### Benefits
- **Fixes broken functionality**: Centralizes
`update_agent_version_in_library` logic
- **DRY Principle**: Eliminates code duplication across two router
endpoints
- **Maintainability**: Single place to modify the library agent update
logic
- **Consistency**: Ensures both endpoints use identical logic for HITL
safe mode configuration
- **Readability**: Cleaner, more focused endpoint implementations

## Technical Details
The helper method fixes broken `update_agent_version_in_library` by
handling:
1. Updating agent version in library via
`update_agent_version_in_library()`
2. Conditionally setting `human_in_the_loop_safe_mode: true` if graph
has HITL blocks and setting is not already configured
3. Proper settings merging to preserve existing configuration

## Testing
- [x] Code compiles and passes type checking
- [x] Pre-commit hooks pass (linting, formatting, type checking)
- [x] Both affected endpoints maintain same functionality with cleaner
implementation

Fixes broken duplicate code identified in v1.py router endpoints for
`update_agent_version_in_library`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:31:36 +00:00
Abhimanyu Yadav
f987937046 fix(frontend): prune empty values from node inputs and fix object editor connection detection (#11507)
This PR addresses two issues:
1. **Empty values being sent to backend**: Node inputs were including
empty strings, null, and undefined values when converting to backend
format, causing unnecessary data to be sent.
2. **Incorrect connection detection in ObjectEditor**: The component was
checking if the parent field was connected instead of checking
individual key-value pairs, causing incorrect UI state for dynamic
properties.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **nodeStore.ts**: Added `pruneEmptyValues` utility to remove
empty/null/undefined values from `hardcodedValues` before converting
nodes to backend format. This ensures only meaningful input values are
sent to the backend API.
- **ObjectEditorWidget.tsx**: 
  - Removed unused `fieldKey` prop from component interface
- Fixed connection detection logic to check individual key-value handle
IDs instead of the parent field key, ensuring correct connection state
for each dynamic property in object editors

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a node with object input fields and verify empty values are
not sent to backend
  - [x] Add multiple key-value pairs to an object editor widget
- [x] Connect individual key-value pairs via handles and verify
connection state is correctly displayed
- [x] Disconnect a key-value pair and verify the input field becomes
editable
- [x] Save and reload an agent with object inputs to verify empty values
are pruned correctly
- [x] Verify that non-empty values are still correctly preserved and
sent to backend
2025-12-02 12:47:12 +00:00
Abhimanyu Yadav
b8d67fc2a9 fix(frontend): Update isGraphRunning state when manually executing graphs (#11489)
This PR fixes an issue where the `isGraphRunning` state wasn't being
updated when users manually executed graphs. This caused the UI to not
show proper feedback that a graph was running, such as the running
background animation. The fix adds a `setIsGraphRunning` method to the
graph store and ensures it's called both when starting execution (for
immediate UI feedback) and when errors occur (to reset the state).

### Changes 🏗️

- **Added `setIsGraphRunning` method to graphStore**: Provides a way to
manually control the graph running state
- **Set running state on manual execution**: Updates `isGraphRunning` to
`true` immediately when users click the Run button, providing instant UI
feedback
- **Reset state on execution errors**: Ensures `isGraphRunning` is set
back to `false` if the graph execution fails, preventing the UI from
showing incorrect running state
- **Improved UI responsiveness**: Users now see immediate visual
feedback (like the running background) when they start a graph execution
- **Cleaned up unused import**: Removed unnecessary `flowID` from
useQueryStates in Flow component
- I’ve also changed the colour of the running background to a lighter
shade of purple.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Click Run button and verify the running background appears
immediately
  - [x] Test with graphs that require input values via the input dialog
  - [x] Verify running state is cleared when execution completes
- [x] Test error scenarios and confirm running state is reset on failure
- [x] Confirm all UI elements that depend on `isGraphRunning` update
correctly
- [x] Test multiple consecutive runs to ensure state management is
consistent
2025-12-02 12:42:47 +00:00
Zamil Majdy
e4102bf0fb fix(backend): resolve HITL execution context validation and re-enable tests (#11509)
## Summary
Fix critical validation errors in GraphExecutionEntry and
NodeExecutionEntry models that were preventing HITL block execution, and
re-enable HITL test suite.

## Root Cause
After introducing ExecutionContext as a mandatory field, existing
messages in the execution queue lacked this field, causing validation
failures:
```
[GraphExecutor] [ExecutionManager] Could not parse run message: 1 validation error for GraphExecutionEntry
execution_context
  Field required [type=missing, input_value={'user_id': '26db15cb-a29...
```

## Changes Made

### 🔧 Execution Model Fixes (`backend/data/execution.py`)
- **Add backward compatibility**: `execution_context: ExecutionContext =
Field(default_factory=ExecutionContext)`
- **Prevent shared mutable defaults**: Use `default_factory` instead of
direct instantiation to avoid mutation issues
- **Ignore unknown fields**: Add `model_config = {"extra": "ignore"}`
for future compatibility
- **Applied to both models**: GraphExecutionEntry and NodeExecutionEntry

### 🧪 Re-enable HITL Test Suite
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`human_review_test.py`
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`review_routes_test.py`
- **Restore test coverage**: HITL functionality now properly tested in
CI

## Default ExecutionContext Behavior
When execution_context is missing from old messages, it provides safe
defaults (sketched after this list):
- `safe_mode: bool = True` (HITL blocks require approval by default)
- `user_timezone: str = "UTC"` (safe timezone default)  
- `root_execution_id: Optional[str] = None`
- `parent_execution_id: Optional[str] = None`
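Putting the pieces together, the backward-compatible models look roughly
like this (fields abridged; a sketch, not the exact source):

```python
from typing import Optional

from pydantic import BaseModel, Field


class ExecutionContext(BaseModel):
    safe_mode: bool = True  # HITL blocks require approval by default
    user_timezone: str = "UTC"
    root_execution_id: Optional[str] = None
    parent_execution_id: Optional[str] = None


class GraphExecutionEntry(BaseModel):
    # Ignore unknown fields so future message-format changes don't break parsing.
    model_config = {"extra": "ignore"}

    user_id: str
    # default_factory gives every entry its own ExecutionContext instance,
    # so old queue messages missing the field still validate safely.
    execution_context: ExecutionContext = Field(default_factory=ExecutionContext)
```

With this in place, parsing an old queue message without the field
succeeds instead of raising the "Field required" error shown above.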

## Impact
- **Fixes deployment validation errors** in dev environment
- **Maintains backward compatibility** with existing queue messages
- **Restores proper HITL test coverage** in CI
- **Ensures isolation**: Each execution gets its own ExecutionContext
instance
- **Future-proofs**: Protects against message format changes

## Testing
- [x] HITL test suite re-enabled and should pass in CI
- [x] Existing executions continue to work with sensible defaults
- [x] New executions receive proper ExecutionContext from caller
- [x] Verified `default_factory` prevents shared mutable instances

## Files Changed
- `backend/data/execution.py` - Add backward-compatible ExecutionContext
defaults
- `backend/data/human_review_test.py` - Re-enable test suite
- `backend/server/v2/executions/review/review_routes_test.py` -
Re-enable test suite

Resolves the "Field required" validation error preventing HITL block
execution in dev environment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 11:02:21 +00:00
Zamil Majdy
7b951c977e feat(platform): implement graph-level Safe Mode toggle for HITL blocks (#11455)
## Summary

This PR implements a graph-level Safe Mode toggle system for
Human-in-the-Loop (HITL) blocks. When Safe Mode is ON (default), HITL
blocks require manual review before proceeding. When OFF, they execute
automatically.

## 🔧 Backend Changes

- **Database**: Added `metadata` JSON column to `AgentGraph` table with
migration
- **API**: Updated `execute_graph` endpoint to accept `safe_mode`
parameter
- **Execution**: Enhanced execution context to use graph metadata as
default with API override capability
- **Auto-detection**: Automatically populate `has_human_in_the_loop` for
graphs containing HITL blocks
- **Block Detection**: HITL block ID:
`8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d`

## 🎨 Frontend Changes

- **Component**: New `FloatingSafeModeToggle` with dual variants:
  - **White variant**: For library pages, integrates with action buttons
  - **Black variant**: For builders, floating positioned  
- **Integration**: Added toggles to both new/legacy builders and library
pages
- **API Integration**: Direct graph metadata updates via
`usePutV1UpdateGraphVersion`
- **Query Management**: React Query cache invalidation for consistent UI
updates
- **Conditional Display**: Toggle only appears when graph contains HITL
blocks

## 🛠 Technical Implementation

- **Safe Mode ON** (default): HITL blocks require manual review before
proceeding
- **Safe Mode OFF**: HITL blocks execute automatically without
intervention
- **Priority**: The backend API `safe_mode` parameter takes precedence
over graph metadata (see the sketch after this list)
- **Detection**: Auto-populates `has_human_in_the_loop` metadata field
- **Positioning**: Proper z-index and responsive positioning for
floating elements
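A hedged sketch of the precedence rule (the helper and metadata key are
illustrative, not the actual implementation):

```python
from typing import Optional


def resolve_safe_mode(api_safe_mode: Optional[bool], graph_metadata: dict) -> bool:
    # The explicit API parameter wins; otherwise fall back to graph
    # metadata; default to safe mode ON when neither is set.
    if api_safe_mode is not None:
        return api_safe_mode
    return bool(graph_metadata.get("safe_mode", True))
```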

## 🚧 Known Issues (Work in Progress)

### High Priority
- [ ] **Toggle state persistence**: Always shows "ON" regardless of
actual state - query invalidation issue
- [ ] **LibraryAgent metadata**: Missing metadata field causing
TypeScript errors
- [ ] **Tooltip z-index**: Still covered by some UI elements despite
high z-index

### Medium Priority  
- [ ] **HITL detection**: Logic needs improvement for reliable block
detection
- [ ] **Error handling**: Removing HITL blocks from graph causes save
errors
- [ ] **TypeScript**: Fix type mismatches between GraphModel and
LibraryAgent

### Low Priority
- [ ] **Frontend API**: Add `safe_mode` parameter to execution calls
once OpenAPI is regenerated
- [ ] **Performance**: Consider debouncing rapid toggle clicks

## 🧪 Test Plan

- [ ] Verify toggle appears only when graph has HITL blocks
- [ ] Test toggle persistence across page refreshes  
- [ ] Confirm API calls update graph metadata correctly
- [ ] Validate execution behavior respects safe mode setting
- [ ] Check styling consistency across builder and library contexts

## 🔗 Related

- Addresses requirements for graph-level HITL configuration
- Builds on existing FloatingReviewsPanel infrastructure
- Integrates with existing graph metadata system

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-12-02 09:55:55 +00:00
Nicholas Tindle
3b2a67a711 ci(claude): add free disk space step to prevent runner out of space (#11503)
## Summary
- Add `jlumbroso/free-disk-space@v1.3.1` action to claude.yml workflow
- Frees up disk space on Ubuntu runner before heavy Docker and
dependency setup
- Skips `large-packages` (slow) and `docker-images` (needed for
workflow)

## Test plan
- [x] Verify claude.yml workflow runs successfully without disk space
errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 20:06:56 +00:00
Bently
a58ac2150f feat(backend/blocks): add Discord create thread block (#11131)
Adds a new Discord block that allows users to create threads in Discord
channels. This addresses issue OPEN-2666 which requested the ability to
create Discord threads from workflows.

## Solution

Implemented `CreateDiscordThreadBlock` in
`autogpt_platform/backend/backend/blocks/discord/bot_blocks.py` with the
following features:

- Create public or private threads in Discord channels via bot token
- Support for both channel ID and channel name lookup (with optional
server name)
- Configurable thread type (public/private toggle)
- Configurable auto-archive duration (60, 1440, 4320, or 10080 minutes)
- Optional initial message to send in the newly created thread
- Outputs: status, thread_id, and thread_name for workflow chaining

The block follows the existing Discord block patterns and includes
proper error handling for permissions, channel not found, and login
failures.
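For illustration, a minimal sketch of the core thread-creation call using
discord.py (the actual block adds channel-name lookup, richer error
handling, and platform credential wiring):

```python
import discord


async def create_thread(
    bot_token: str,
    channel_id: int,
    name: str,
    private: bool = False,
    auto_archive_minutes: int = 1440,  # 60, 1440, 4320, or 10080
    initial_message: str | None = None,
) -> tuple[int, str]:
    client = discord.Client(intents=discord.Intents.default())
    await client.login(bot_token)
    try:
        channel = await client.fetch_channel(channel_id)
        thread = await channel.create_thread(
            name=name,
            type=(
                discord.ChannelType.private_thread
                if private
                else discord.ChannelType.public_thread
            ),
            auto_archive_duration=auto_archive_minutes,
        )
        if initial_message:
            await thread.send(initial_message)
        return thread.id, thread.name
    finally:
        await client.close()
```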

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Verify block appears in the workflow builder UI
- [x] Test creating a public thread with valid bot token and channel ID
- [x] Test creating a private thread with valid bot token and channel
name
  - [x] Test with invalid channel ID/name to verify error handling
  - [x] Test with bot lacking thread creation permissions
  - [x] Verify thread_id output can be chained to subsequent blocks
- [x] Test auto-archive duration options (60, 1440, 4320, 10080 minutes)
  - [x] Test sending initial message in newly created thread

Video to show the blocks working!



https://github.com/user-attachments/assets/f248f315-05b3-47e2-bd6b-4c64d53c35fc
2025-12-01 20:01:27 +00:00
Swifty
9f37342bc6 feat(platform): Simplify the chat tool system to use only 2 tools (#11464)
Simplifying the chat tool system to use only 2 tools

### Changes 🏗️

- remove old tools
- expand run_agent tool to include all stages

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested adding credentials work
  - [x] tested running an agent works
  - [x] tested scheduling an agent works
2025-12-01 20:56:16 +01:00
Swifty
7dc3b201b7 feat(platform): Explain None Message in BlockError Messages (#11490)
Sometimes block errors are raised with the message set to None; we now
handle this case

### Changes 🏗️

- handle the case of a None message (sketched below)
- Add tests
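A minimal sketch of the behaviour (the real classes live in
`backend/util/exceptions.py` and may carry extra context):

```python
class BlockExecutionError(Exception):
    def __init__(self, message: str | None = None):
        # Default a None message to something descriptive instead of "None".
        super().__init__(message or "Output error was None")


class BlockUnknownError(Exception):
    def __init__(self, message: str | None = None):
        super().__init__(message or "Unknown error occurred")
```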

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] write unit tests

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensure `BlockExecutionError` and `BlockUnknownError` provide default
messages when given None/empty input, with new unit tests covering
formatting and inheritance.
> 
> - **Backend**:
>   - `backend/util/exceptions.py`:
> - `BlockExecutionError`: default `None` message to `"Output error was
None"`.
> - `BlockUnknownError`: default empty/`None` message to `"Unknown error
occurred"`.
> - **Tests**:
> - `backend/util/exceptions_test.py`: add tests for message formatting,
`None`/empty handling, and exception inheritance.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9b31ae47. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:55:02 +01:00
Swifty
7d53c0de27 fix(backend): Fix Youtube blocking our cloud ips (#11456)
YouTube can block cloud IPs, causing the YouTube transcribe blocks to
stop working. This PR adds a Webshare proxy to get around this issue

### Changes 🏗️

- add a Webshare proxy to the YouTube transcribe block (sketched below)
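Roughly how `youtube-transcript-api` gets pointed at the proxy (a sketch;
in the block, the username and password come from the platform's
credentials store):

```python
from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.proxies import WebshareProxyConfig


def get_transcript(video_id: str, username: str, password: str):
    # Route transcript requests through Webshare so YouTube sees a
    # residential IP instead of a blocked cloud IP.
    api = YouTubeTranscriptApi(
        proxy_config=WebshareProxyConfig(
            proxy_username=username,
            proxy_password=password,
        )
    )
    return api.fetch(video_id)
```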

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have tested this works locally using the proxy

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
> 
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
>   - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d060898488. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:54:52 +01:00
Nicholas Tindle
0728f3bd49 fix(backend): Remove Google Sheets API scopes from block inputs (#11484)
Removes explicit Google Sheets API scopes from the credentials fields in
all Google Sheets-related blocks; we aren't approved to use those scopes,
and removing them simplifies block configuration.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- removes the scopes we aren't approved to use
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Bently tested it on his fresh account and it worked!
2025-12-01 18:07:31 +00:00
Lluis Agusti
eb7e919450 Revert "chore: experiment"
This reverts commit 686412e7df.
2025-12-01 21:20:51 +07:00
Lluis Agusti
686412e7df chore: experiment 2025-12-01 21:19:21 +07:00
Ubbe
b4e95dba14 feat(frontend): update empty task view designs (#11498)
## Changes 🏗️

Update the new library agent page's empty view to look like:

<img width="900" height="1060" alt="Screenshot 2025-12-01 at 14 12 10"
src="https://github.com/user-attachments/assets/e6a22a4f-35f4-434e-bbb1-593390034b9a"
/>

Now we display an **About this agent** card on the left when the agent
is published on the marketplace. I expanded the:
```
/api/library/agents/{id}
```
endpoint to also return the following:
```js
{
  // ...
  created_at: "timestamp",
  marketplace_listing: {
    creator: { name: "string", "slug": string, id: "string"  },
    name: "string",
    slug: "string",
    id: "string"
  }
}
```
To be able to display this extra information on the card and link to the
creator and marketplace pages.

Also:
- design system updates regarding navbar and colors

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see the new page for an agent with no runs
2025-12-01 20:28:44 +07:00
Swifty
00148f4e3d feat(platform): add external api routes for store search and tool usage (#11463)
We want to allow external tools to explore the marketplace and use the
chat agent tools


### Changes 🏗️

- add store api routes
- add tool api routes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested all endpoints work

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-01 12:04:03 +00:00
Ubbe
6db18b8445 feat(frontend): design system tokens update (#11501)
## Changes 🏗️

Update tokens of the design system with new values 🖌️ 🎨 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Storybook build passes, no type errors
2025-12-01 18:14:10 +07:00
Abhimanyu Yadav
627a33468b feat(frontend): Default node output accordion to expanded state (#11483)
This PR improves the user experience by defaulting the node output
accordion to an expanded state. Previously, users had to manually expand
the accordion to view execution results, which added an unnecessary
click to the workflow. With this change, output data is immediately
visible when available, allowing users to quickly see the results of
their node executions. The ability to collapse the accordion is
preserved for users who prefer a more compact interface.

### Changes 🏗️

- **Changed default state of node output accordion**: The node output
section now defaults to expanded (`isExpanded = true`) instead of
collapsed
- **Improved user experience**: Users can now immediately see node
execution results without needing to manually expand the accordion
- **Maintains collapse functionality**: Users can still manually
collapse the accordion if they prefer a more compact view

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a node and verify the output accordion is expanded by
default
  - [x] Verify output data is immediately visible after node execution
  - [x] Test that the accordion can still be manually collapsed
- [x] Confirm the accordion state resets to expanded when switching
between nodes
- [x] Test with different types of output data (simple values, objects,
arrays)
2025-12-01 05:57:08 +00:00
Abhimanyu Yadav
ea4b55f967 feat(frontend): Add minimum movement threshold for node position history tracking (#11481)
This PR implements a minimum movement threshold of 50 pixels for node
position changes before they are logged to the history system. This
prevents the undo/redo history from being cluttered with minor,
unintentional movements that occur when users interact with block inputs
or accidentally nudge nodes.

### Changes 🏗️

- **Added movement threshold for history tracking**: Implemented a 50px
minimum movement requirement before logging node position changes to
history
- **Prevents history spam**: Stops small, unintentional movements (like
clicking on inputs inside blocks) from cluttering the undo/redo history
- **Tracks drag start positions**: Maintains initial positions when
dragging begins to accurately calculate total movement distance
- **Improved history management**: Only significant node movements are
now recorded, matching the behavior of the old builder
- **Memory efficient**: Cleans up tracked positions after drag
operations complete

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node less than 50px and verify no history entry is created
  - [x] Drag a node more than 50px and verify history entry is created
- [x] Click on inputs inside blocks and verify no history entries are
created
  - [x] Test undo/redo functionality works correctly with the threshold
  - [x] Verify adding/removing nodes still creates history entries
  - [x] Test multiple nodes being dragged simultaneously
2025-12-01 05:43:50 +00:00
Abhimanyu Yadav
321ab8a48a feat(frontend): Add trigger agent banner for webhook-based flows (#11480)
This PR addresses the need for better user awareness when building
trigger-based agents. When a user adds webhook/trigger nodes to their
flow, a prominent banner now appears at the bottom of the builder
informing them they're creating a "Trigger Agent" and providing a direct
link to monitor its activity in the Agent Library.

### Changes 🏗️

- **Added TriggerAgentBanner component**: New banner that displays at
the bottom of the builder when the flow contains webhook/trigger nodes
- **Implemented webhook node detection**: Added `hasWebhookNodes` method
to nodeStore that checks if any nodes are of type WEBHOOK or
WEBHOOK_MANUAL
- **Conditional banner display**: Builder now shows TriggerAgentBanner
instead of BuilderActions when webhook nodes are present
- **Dynamic library link**: Banner includes a link to the specific agent
in the library (if found) or defaults to the general library page
- **Integrated with existing flow context**: Uses the current flowID to
fetch the corresponding library agent for proper linking

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a webhook node to a flow and verify the banner appears
  - [x] Remove all webhook nodes and verify the banner disappears
  - [x] Test with both WEBHOOK and WEBHOOK_MANUAL node types
- [x] Click the library link and verify it navigates to the correct
agent page
  - [x] Test library link fallback when agent is not found in library
- [x] Verify banner styling and positioning at bottom center of builder
2025-12-01 05:43:04 +00:00
Abhimanyu Yadav
cd6a8cbd47 fix(frontend): Include edges when copying multiple nodes with Shift+Select (#11478)
This PR fixes an issue where edges were not being copied when selecting
multiple nodes with Shift+Select.

### Changes 🏗️

- **Fixed edge copying logic**: Removed the requirement for edges to be
explicitly selected - now automatically includes all edges between
selected nodes when copying
- **Migrated to custom stores**: Refactored copy-paste functionality to
use `useNodeStore` and `useEdgeStore` instead of ReactFlow's built-in
state management
- **Improved type safety**: Replaced generic `Node` and `Edge` types
with `CustomNode` and `CustomEdge` for better type checking
- **Enhanced paste behavior**: 
- Deselects existing nodes before pasting to ensure only pasted nodes
are selected
- Uses store methods for adding nodes and edges, ensuring proper state
management
- **Simplified node data handling**: Removed manual clearing of
backend_id, status, and nodeExecutionResult - now handled by the store's
addNode method
- **Added debugging support**: Added console logging for copied data to
aid in troubleshooting

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select multiple nodes using Shift+Select and copy with Ctrl/Cmd+C
  - [x] Verify edges between selected nodes are included in the copy
  - [x] Paste with Ctrl/Cmd+V and confirm both nodes and edges appear
  - [x] Verify pasted elements maintain correct connections
  - [x] Confirm only pasted nodes are selected after paste
  - [x] Test that pasted nodes have unique IDs
  - [x] Verify pasted nodes appear centered in the current viewport
2025-12-01 05:41:25 +00:00
Abhimanyu Yadav
7fabbb25c4 fix(frontend): Preserve customized_name only when explicitly set in node metadata (#11477)
This PR fixes an issue where undefined `customized_name` values were
being included in node metadata, which could cause issues with data
persistence and agent exports. The fix ensures that the
`customized_name` property is only included when it has been explicitly
set by the user.

### Changes 🏗️

- **Fixed metadata serialization**: Modified `getNodeAsGraphNode` to
only include `customized_name` in the metadata when it has been
explicitly set (not undefined)
- **Improved data structure**: Uses object spread syntax to
conditionally include the `customized_name` property, preventing
undefined values from being stored
- **Cleaned up code**: Removed unnecessary TODO comment and improved
code readability

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new agent and verify metadata doesn't include undefined
customized_name
- [x] Customize a block name and verify it's properly saved in metadata
- [x] Export an agent and verify the JSON doesn't contain undefined
customized_name fields
- [x] Import an agent with customized names and verify they are
preserved
- [x] Save and reload an agent to ensure metadata persistence works
correctly
2025-12-01 05:40:52 +00:00
Abhimanyu Yadav
35eb563241 feat(platform): enhance BlockMenuSearch with agent addition (#11474)
This PR enables users to add agents directly to the builder from search
results and marketplace views. Previously, users had to navigate to
different sections to add agents - now they can do it with a single
click from wherever they find the agent. The change includes proper
loading states, error handling, and success notifications to provide a
smooth user experience.

### Changes 🏗️

- **Added direct agent-to-builder functionality**: Users can now add
agents directly to the builder from search results and marketplace views
- **Created reusable hook `useAddAgentToBuilder`**: Centralized logic
for adding both library and marketplace agents to the builder
- **Enhanced search results interaction**: Added click handlers and
loading states to agent cards in search results
- **Improved marketplace agent addition**: Marketplace agents are now
added to both library and builder with proper feedback
- **Added loading states**: Visual feedback when agents are being added
(loading spinners on cards)
- **Improved error handling**: Added toast notifications for success and
failure cases with descriptive error messages
- **Added Sentry error tracking**: Captures exceptions for better
debugging in production

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents and add them to builder from search results
- [x] Add marketplace agents which should appear in both library and
builder
  - [x] Verify loading states appear during agent addition
  - [x] Test error scenarios (network failure, invalid agent)
- [x] Confirm toast notifications appear for both success and error
cases
  - [x] Verify builder viewport centers on newly added agent
2025-12-01 05:26:38 +00:00
Ubbe
a37b527744 refactor(frontend): agent runs view folder structure (#11475)
## Changes 🏗️

Re-arrange the folder structure of the new Library page sub-components
so they are grouped by location:

### Before

<img width="238" height="506" alt="Screenshot 2025-11-27 at 23 45 27"
src="https://github.com/user-attachments/assets/429fda6e-bf74-4d80-9306-028365789ca1"
/>

All components were on a single level, which works fine for simpler
pages without many sub-components, but on a page with this much
functionality it ends up messy.

### After 

<img width="226" height="517" alt="Screenshot 2025-11-27 at 23 45 46"
src="https://github.com/user-attachments/assets/99c098ea-ff11-4779-bad8-7d524bf91605"
/>


### Imports order

I edited some files, and the linter/formatter automatically sorted
import order as per the lint plugin.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new library agent page locally and click around
2025-11-28 19:00:41 +07:00
Zamil Majdy
3d08c22dd5 feat(platform): add Human In The Loop block with review workflow (#11380)
## Summary
This PR implements a comprehensive Human In The Loop (HITL) block that
allows agents to pause execution and wait for human
approval/modification of data before continuing.



https://github.com/user-attachments/assets/c027d731-17d3-494c-85ca-97c3bf33329c


## Key Features
- Added WAITING_FOR_REVIEW status to the AgentExecutionStatus enum
(sketched after this list)
- Created PendingHumanReview database table for storing review requests
- Implemented HumanInTheLoopBlock that extracts input data and creates
review entries
- Added API endpoints at /api/executions/review for fetching and
reviewing pending data
- Updated execution manager to properly handle waiting status and resume
after approval
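The status addition amounts to one new enum member; a sketch (the other
members here are assumed, not copied from the source):

```python
from enum import Enum


class AgentExecutionStatus(str, Enum):
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    WAITING_FOR_REVIEW = "WAITING_FOR_REVIEW"  # new: paused for human review
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
```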

## Frontend Components
- PendingReviewCard for individual review handling
- PendingReviewsList for multiple reviews
- FloatingReviewsPanel for graph builder integration
- Integrated review UI into 3 locations: legacy library, new library,
and graph builder

## Technical Implementation
- Added proper type safety throughout with SafeJson handling
- Optimized database queries using count functions instead of full data
fetching
- Fixed imports to be top-level instead of local
- All formatters and linters pass

## Test plan
- [ ] Test Human In The Loop block creation in graph builder
- [ ] Test block execution pauses and creates pending review
- [ ] Test review UI appears in all 3 locations
- [ ] Test data modification and approval workflow
- [ ] Test rejection workflow
- [ ] Test execution resumes after approval

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added Human-In-The-Loop review workflows to pause executions for human
validation.
* Users can approve or reject pending tasks, optionally editing
submitted data and adding a message.
* New "Waiting for Review" execution status with UI indicators across
run lists, badges, and activity views.
* Review management UI: pending review cards, list view, and a floating
reviews panel for quick access.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 12:07:46 +07:00
Zamil Majdy
ff5dd7a5b4 fix(backend): migrate all query_raw calls to query_raw_with_schema for proper schema handling (#11462)
## Summary

Complete migration of all non-test `query_raw` calls to use
`query_raw_with_schema` for proper PostgreSQL schema context handling.
This resolves the marketplace API failures where queries were looking
for unqualified table names.

## Root Cause

Prisma's `query_raw()` doesn't respect the `schema` parameter in
`DATABASE_URL` (`?schema=platform`) for raw SQL queries, causing queries
to fail when looking for unqualified table names in multi-schema
environments.

## Changes Made

### Files Updated
- **backend/server/v2/store/db.py**: Already updated in previous
commit
- **backend/server/v2/builder/db.py**: Updated `get_suggested_blocks`
query at line 343
- **backend/check_store_data.py**: Updated all 4 `query_raw` calls to
use schema-aware queries
- **backend/check_db.py**: Updated all `query_raw` calls (import
already existed)

### Technical Implementation
- Add import: `from backend.data.db import query_raw_with_schema`
- Replace `prisma.get_client().query_raw()` with
`query_raw_with_schema()`
- Add `{schema_prefix}` placeholder to table references in SQL queries
- Fix f-string template conflicts by using double braces
`{{schema_prefix}}`

### Query Examples

**Before:**
```sql
FROM "StoreAgent"
FROM "AgentNodeExecution" execution
```

**After:**
```sql  
FROM {schema_prefix}"StoreAgent"
FROM {schema_prefix}"AgentNodeExecution" execution
```
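A hedged sketch of what a schema-aware wrapper can look like (the name
matches, but the env-var lookup and body are illustrative):

```python
import os

from prisma import Prisma


async def query_raw_with_schema(db: Prisma, query: str, *args):
    # Inject the configured schema (e.g. "platform") into {schema_prefix}
    # placeholders, since query_raw ignores ?schema= in DATABASE_URL.
    schema = os.getenv("DATABASE_SCHEMA", "")
    schema_prefix = f'"{schema}".' if schema else ""
    return await db.query_raw(query.format(schema_prefix=schema_prefix), *args)
```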

## Impact

- All raw SQL queries now properly respect the platform schema context
- Fixes "relation does not exist" errors in multi-schema environments
- Maintains backward compatibility with public schema deployments
- Code formatting passes with `poetry run format`

## Testing

- All `query_raw` usages in non-test code successfully migrated
- `query_raw_with_schema` automatically handles schema prefix injection
- Existing query logic unchanged, only schema awareness added

## Before/After

**Before:** GET /api/store/agents → "relation 'StoreAgent' does not
exist"
**After:** GET /api/store/agents → Returns store agents correctly

Resolves the marketplace API failures and ensures consistent schema
handling across all raw SQL operations.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 04:04:20 +00:00
Nicholas Tindle
02f8a69c6a feat(platform): add Google Drive Picker field type for enhanced file selection (#11311)
### Changes 🏗️

This PR adds a Google Drive Picker field type to enhance the user
experience of existing Google blocks, replacing manual file ID entry
with a visual file picker.

#### Backend Changes
- **Added  and  types** in :
  - Configurable picker field with OAuth scope management
  - Support for multiselect, folder selection, and MIME type filtering
  - Proper access token handling for file downloads
- **Enhanced Gmail blocks**: Updated attachment fields to use Google
Drive Picker for better UX
- **Enhanced Google Sheets blocks**: Updated spreadsheet selection to
use picker instead of manual ID entry
- **Added utility**: Async file download with virus scanning and 100MB
size limit

#### Frontend Changes  
- **Enhanced GoogleDrivePicker component**: Improved UI with folder icon
and multiselect messaging
- **Integrated picker in form renderers**: Auto-renders for fields with
format
- **Added shared GoogleDrivePickerInput component**: Eliminates code
duplication between NodeInputs and RunAgentInputs
- **Added type definitions**: Complete TypeScript support for picker
schemas and responses

#### Key Features
- 🎯 **Visual file selection**: Replace manual Google Drive file ID entry
with intuitive picker
- 📁 **Flexible configuration**: Support for documents, spreadsheets,
folders, and custom MIME types
- 🔒 **Minimal OAuth scopes**: Uses scope for security (only access to
user-selected files)
- **Enhanced UX**: Seamless integration in both block configuration
and agent run modals
- 🛡️ **Security**: Virus scanning and file size limits for downloaded
attachments

#### Migration Impact
- **Backward compatible**: Existing blocks continue to work with manual
ID entry
- **Progressive enhancement**: New picker fields provide better UX for
the same functionality
- **No breaking changes**: all existing blocks should be unaffected

This enhancement improves the user experience of Google blocks without
introducing new systems or breaking existing functionality.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test multiple of the new blocks (note: the create spreadsheet
block should not be used for now, as it uses the API rather than the
Drive picker)
  - [x] chain the blocks together and pass values between them

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 03:01:29 +00:00
Toran Bruce Richards
e983d5c49a fix(backend): Implement passed uploaded media support for AI image customizer block (#11441)
- Added `store_media_file` utility to convert local file paths to Data
URIs for image processing.
- Updated `AIImageCustomizerBlock` to utilize processed images in model
execution, improving compatibility with Replicate API.
- Added optional Aspect ratio input to AIImageCustomizerBlock

This change enhances the image handling capabilities of the AI image
customizer, ensuring that images are properly formatted for external
processing.
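The conversion itself is simple; a sketch of the "local file → Data URI"
step that `store_media_file(..., return_content=True)` performs for the
block (illustrative, not the utility's actual code):

```python
import base64
import mimetypes


def to_data_uri(path: str) -> str:
    # Encode a local media file as a data URI so Replicate can consume it
    # without access to local storage.
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```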

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created agent using AI Image Customizer block attached to agent
file input
  - [x] Run agent, confirmed block is working
- [x] Confirm block is still working in original direct file upload
setup.


### Testing Results

#### Before (dev cloud):
<img width="836" height="592" alt="image"
src="https://github.com/user-attachments/assets/88c75668-c5c9-44bb-bec5-6554088a0cb7"
/>


#### After (local):
<img width="827" height="587" alt="image"
src="https://github.com/user-attachments/assets/04fea431-70a5-4173-bc84-d354c03d7174"
/>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Preprocesses input images to data URIs and adds an `aspect_ratio`
option, wiring both through to Replicate in `AIImageCustomizerBlock`.
> 
> - **Backend**
>   - **`backend/blocks/ai_image_customizer.py`**:
> - Preprocesses input images via `store_media_file(...,
return_content=True)` to Data URIs before invoking Replicate.
> - Adds `AspectRatio` enum and `aspect_ratio` input; passed through
`run_model` and included in Replicate input.
>     - Updates block test input accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4116cf80d7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-11-27 00:41:45 +00:00
Lluis Agusti
bdb94a3cf9 hotfix(frontend): clear cache account on logout 2025-11-26 23:33:17 +07:00
Lluis Agusti
8daec53230 hotfix(frontend): add profile loading state and error boundary 2025-11-26 22:42:42 +07:00
Ubbe
ec6f593edc fix(frontend): code scanning vulnerability (#11459)
## Changes 🏗️

Addresses this code scanning alert
[security/code-scanning/156](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/156)

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] No prototype pollution
2025-11-26 12:25:21 +00:00
Lluis Agusti
e6ed83462d fix(frontend): update next.config.mjs to not use standalone output 2025-11-26 19:13:33 +07:00
Nicholas Tindle
1851264a6a [Snyk] Security upgrade @sentry/nextjs from 10.22.0 to 10.27.0 (#11451)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 3 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

|  | Issue |  
:-------------------------:|:-------------------------
![high
severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/h.png
'high severity') | Insertion of Sensitive Information Into Sent Data
<br/>[SNYK-JS-SENTRYCORE-14105053](https://snyk.io/vuln/SNYK-JS-SENTRYCORE-14105053)
![high
severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/h.png
'high severity') | Insertion of Sensitive Information Into Sent Data
<br/>[SNYK-JS-SENTRYNEXTJS-14105054](https://snyk.io/vuln/SNYK-JS-SENTRYNEXTJS-14105054)
![high
severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/h.png
'high severity') | Insertion of Sensitive Information Into Sent Data
<br/>[SNYK-JS-SENTRYNODECORE-14105055](https://snyk.io/vuln/SNYK-JS-SENTRYNODECORE-14105055)




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Learn about vulnerability in an interactive lesson of Snyk
Learn.](https://learn.snyk.io/?loc&#x3D;fix-pr)


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-11-25 17:32:40 +00:00
Reinier van der Leer
8f25d43089 Merge branch 'master' into dev 2025-11-25 17:55:14 +01:00
Reinier van der Leer
0c435c4afa dx: Update Node.js versions in GitHub workflows to match frontend requirement (#11449)
This unbreaks the Claude Code and Copilot workflows in our repo.

- Follow-up to #11288

### Changes 🏗️

- Update `node-version` on `actions/setup-node@v4` from v21 to v22
2025-11-25 14:48:57 +00:00
dependabot[bot]
18002cb8f0 chore(frontend/deps-dev): bump cross-env from 7.0.3 to 10.1.0 in /autogpt_platform/frontend (#11353)
Bumps [cross-env](https://github.com/kentcdodds/cross-env) from 7.0.3 to
10.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kentcdodds/cross-env/releases">cross-env's
releases</a>.</em></p>
<blockquote>
<h2>v10.1.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v10.0.0...v10.1.0">10.1.0</a>
(2025-09-29)</h1>
<h3>Features</h3>
<ul>
<li>add support for default value syntax (<a
href="152ae6a85b">152ae6a</a>)</li>
</ul>
<p>For example:</p>
<pre lang="json"><code>&quot;dev:server&quot;: &quot;cross-env wrangler
dev --port ${PORT:-8787}&quot;,
</code></pre>
<p>If <code>PORT</code> is already set, use that value, otherwise
fallback to <code>8787</code>.</p>
<p>Learn more about <a
href="https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html">Shell
Parameter Expansion</a></p>
<h2>v10.0.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v9.0.0...v10.0.0">10.0.0</a>
(2025-07-25)</h1>
<p>TL;DR: You should probably not have to change anything if:</p>
<ul>
<li>You're using a modern maintained version of Node.js (v20+ is
tested)</li>
<li>You're only using the CLI (most of you are as that's the intended
purpose)</li>
</ul>
<p>In this release (which should have been v8 except I had some issues
with automated releases 🙈), I've updated all the things and modernized
the package. This happened in <a
href="https://redirect.github.com/kentcdodds/cross-env/issues/261">#261</a></p>
<p>Was this needed? Not really, but I just thought it'd be fun to
modernize this package.</p>
<p>Here's the highlights of what was done.</p>
<ul>
<li>Replace Jest with Vitest for testing</li>
<li>Convert all source files from .js to .ts with proper TypeScript
types</li>
<li>Use zshy for ESM-only builds (removes CJS support)</li>
<li>Adopt <code>@​epic-web/config</code> for TypeScript, ESLint, and
Prettier</li>
<li>Update to Node.js &gt;=20 requirement</li>
<li>Remove kcd-scripts dependency</li>
<li>Add comprehensive e2e tests with GitHub Actions matrix testing</li>
<li>Update GitHub workflow with caching and cross-platform testing</li>
<li>Modernize documentation and remove outdated sections</li>
<li>Update all dependencies to latest versions</li>
<li>Add proper TypeScript declarations and exports</li>
</ul>
<p>The tool maintains its original functionality while being completely
modernized with the latest tooling and best practices</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>This is a major rewrite that changes the module format from CommonJS
to ESM-only. The package now requires Node.js &gt;=20 and only exports
ESM modules (not relevant in most cases).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="152ae6a85b"><code>152ae6a</code></a>
feat: add support ofr default value syntax</li>
<li><a
href="bd70d1ab25"><code>bd70d1a</code></a>
chore: upgrade zshy</li>
<li><a
href="8e0b190df9"><code>8e0b190</code></a>
chore(ci): get coverage</li>
<li><a
href="8635e80e81"><code>8635e80</code></a>
fix(release): manually release a major version</li>
<li><a
href="3a58f22360"><code>3a58f22</code></a>
chore: fix npmrc registry</li>
<li><a
href="b70bfff5ec"><code>b70bfff</code></a>
chore(ci): add names to steps and workflows</li>
<li><a
href="cc5759dc36"><code>cc5759d</code></a>
fix(release): manually release a major version</li>
<li><a
href="080a859190"><code>080a859</code></a>
chore: remove publish script</li>
<li><a
href="31e5bc70e7"><code>31e5bc7</code></a>
chore(ci): restore built files</li>
<li><a
href="81e9c34f55"><code>81e9c34</code></a>
chore(ci): add back semantic-release</li>
<li>Additional commits viewable in <a
href="https://github.com/kentcdodds/cross-env/compare/v7.0.3...v10.1.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cross-env&package-manager=npm_and_yarn&previous-version=7.0.3&new-version=10.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-25 14:28:01 +00:00
Ubbe
240a65e7b3 fix(frontend): show spinner login/logout (#11447)
## Changes 🏗️

Show spinners on the login/logout buttons while they are being
processed.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login with password: there is a spinner on the button while
logging in
  - [x] Logout: there is a spinner on the button while logging out
2025-11-25 20:16:16 +07:00
Ubbe
07368468a4 fix(frontend): make agent output clickable (#11433)
### Changes 🏗️

<img width="1490" height="432" alt="Screenshot 2025-11-24 at 23 26 12"
src="https://github.com/user-attachments/assets/e5a149f2-7751-4276-9b76-707db7afdd46"
/>


The agent output buttons on the new run page weren't always clickable
due to a missing `z-index`.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open run in new runs page
  - [x] Output buttons are clickable without issues
2025-11-25 20:15:46 +07:00
Ubbe
52aac09577 feat(frontend): environment aware favicon (#11431)
## Changes 🏗️

Change the favicon colour in the tab depending on the environment, so
when you have multiple tabs open (production, staging, local...) it is
clear which maps to what.

### Local ( orange )

<img width="257" height="40" alt="Screenshot 2025-11-24 at 22 38 27"
src="https://github.com/user-attachments/assets/705ddf6b-cc4a-498a-ad15-ed2c60f6b397"
/>

### Dev ( green )

<img width="263" height="40" alt="Screenshot 2025-11-24 at 22 45 20"
src="https://github.com/user-attachments/assets/eda3ba16-8646-4032-ad3c-7a8fc4db778c"
/>

### Example

<img width="513" height="41" alt="Screenshot 2025-11-24 at 22 45 09"
src="https://github.com/user-attachments/assets/1a43f860-536a-465e-9c10-a68c5218a58c"
/>

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app and the favicon colour matches the env
2025-11-25 20:15:31 +07:00
Bently
64a775dfa7 feat(backend/blocks): Add GPT-5.1 and GPT-5.1-codex (#11406)
This PR adds the latest gpt-5.1 and gpt-5.1-codex LLMs from OpenAI, and
updates the price of the gpt-5-chat model

https://platform.openai.com/docs/models/gpt-5.1
https://platform.openai.com/docs/models/gpt-5.1-codex

I also had to add a new Codex block, as it uses a different OpenAI
API and has options the main LLMs don't use

<img width="231" height="755" alt="image"
src="https://github.com/user-attachments/assets/a4056633-7b0f-446f-ae86-d7755c5b88ec"
/>


#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the latest gpt-5.1 llm
  - [x] Test the latest gpt-5.1-codex block

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 09:33:11 +00:00
Bently
5d97706bb8 feat(backend/blocks): Add claude opus 4.5 (#11446)
This PR adds the latest [claude opus
4.5](https://www.anthropic.com/news/claude-opus-4-5) model to the
platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and use the llm to make sure it works
2025-11-25 09:11:02 +00:00
dependabot[bot]
244f3c7c71 chore(backend/deps-dev): bump faker from 37.8.0 to 38.2.0 in /autogpt_platform/backend (#11435)
Bumps [faker](https://github.com/joke2k/faker) from 37.8.0 to 38.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v38.2.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.2.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.1.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.1.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.0.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.0.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.12.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.12.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.11.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.11.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.10.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.10.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.9.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.9.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.1.0...v38.2.0">v38.2.0
- 2025-11-19</a></h3>
<ul>
<li>Add localized UniqueProxy. Thanks <a
href="https://github.com/azmeuk"><code>@​azmeuk</code></a></li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.0.0...v38.1.0">v38.1.0
- 2025-11-19</a></h3>
<ul>
<li>Add <code>person</code> provider for <code>ar_DZ</code> locale.
Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
<li>Add <code>person</code>, <code>phone_number</code>,
<code>date_time</code> for <code>fr_DZ</code> locale. Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.12.0...v38.0.0">v38.0.0
- 2025-11-11</a></h3>
<ul>
<li>Drop support for Python 3.9</li>
<li>Add support for Python 3.14</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.11.0...v37.12.0">v37.12.0
- 2025-10-07</a></h3>
<ul>
<li>Add french VAT number. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.9.0...v37.11.0">v37.11.0
- 2025-10-07</a></h3>
<ul>
<li>Add French company APE code. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.8.0...v37.9.0">v37.9.0
- 2025-10-07</a></h3>
<ul>
<li>Add names generation to <code>en_KE</code> locale. Thanks <a
href="https://github.com/titustum"><code>@​titustum</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="337f8faea2"><code>337f8fa</code></a>
Bump version: 38.1.0 → 38.2.0</li>
<li><a
href="d8fb7f20fa"><code>d8fb7f2</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="243e3174c0"><code>243e317</code></a>
lint docs</li>
<li><a
href="e398287902"><code>e398287</code></a>
📝 Update docs</li>
<li><a
href="3cc7f7750f"><code>3cc7f77</code></a>
feat: localized UniqueProxy (<a
href="https://redirect.github.com/joke2k/faker/issues/2279">#2279</a>)</li>
<li><a
href="8ba30da5f7"><code>8ba30da</code></a>
Bump version: 38.0.0 → 38.1.0</li>
<li><a
href="921bde120f"><code>921bde1</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="702e23b8e3"><code>702e23b</code></a>
fix newline</li>
<li><a
href="d5051a98db"><code>d5051a9</code></a>
add_faker_pk_pypi_link (<a
href="https://redirect.github.com/joke2k/faker/issues/2281">#2281</a>)</li>
<li><a
href="050de370cc"><code>050de37</code></a>
Add <code>person</code> provider for <code>ar_DZ</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2271">#2271</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.8.0...v38.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=faker&package-manager=pip&previous-version=37.8.0&new-version=38.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-25 09:05:48 +00:00
Ubbe
355219acbd feat(frontend): fix tab titles (#11432)
## Changes 🏗️

Add some missing page titles, most importantly the one missing on the
new runs page.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Page titles are there
2025-11-25 15:47:38 +07:00
Ubbe
1ab66eaed4 fix(frontend): login/signup pages away redirects (#11430)
## Changes 🏗️

When the user is logged in and tries to navigate to `/login` or
`/signup` manually, redirect them away to `/marketplace`.
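
In a Next.js app this kind of guard is often done in middleware; a
hedged sketch (the cookie name and the actual location of the check are
assumptions):

```typescript
// middleware.ts (illustrative only; the real session check is app-specific)
import { NextRequest, NextResponse } from "next/server";

export function middleware(req: NextRequest) {
  const isLoggedIn = Boolean(req.cookies.get("sb-access-token")); // assumed cookie name
  const { pathname } = req.nextUrl;

  if (isLoggedIn && (pathname === "/login" || pathname === "/signup")) {
    // Already authenticated: bounce away from the auth pages.
    return NextResponse.redirect(new URL("/marketplace", req.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ["/login", "/signup"] };
```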

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Go to `/login` or `/signup`
  - [x] You are redirected back into `/marketplace`
2025-11-25 15:47:29 +07:00
Bently
126d5838a0 feat(backend/blocks): add latest grok models (#11422)
This PR adds some of the latest grok models to the platform:
``x-ai/grok-4-fast``, ``x-ai/grok-4.1-fast`` and ``ai/grok-code-fast-1``

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test all of the latest grok models to make sure they work and they
do!

<img width="1089" height="714" alt="image"
src="https://github.com/user-attachments/assets/0d1e3984-69e8-432b-982a-b04c16bc4f41"
/>
2025-11-24 13:25:48 +00:00
Bently
643aea849b feat(backend/blocks): Add google banana pro (#11425)
This PR adds the latest Google Banana Pro image generator and editor to
the platform and fixes up some of the prices for the image generation
models.

I asked for ``Generate a image of a dog on a skateboard`` and this is
what I got:
<img width="2048" height="2048" alt="image"
src="https://github.com/user-attachments/assets/9b6c16d8-df8f-4fb6-a009-d6d342f9beb7"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test the image generator and image editor block using the latest
google banana pro model and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-24 13:23:54 +00:00
Swifty
3b092f34d8 feat(platform): Add Get Linear Issues Block (#11415)
Added the ability to get all issues for a given project.

### Changes 🏗️

- added API query
- added new models
- added new block that gets all issues for a given project (see the
sketch below)
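
A rough sketch of the kind of GraphQL call such a block wraps (field
names are from Linear's public schema as best recalled; verify against
the API docs before relying on them):

```typescript
// Sketch only: query fields and the auth header format should be double-checked.
const PROJECT_ISSUES_QUERY = `
  query ProjectIssues($projectId: String!) {
    project(id: $projectId) {
      issues {
        nodes { id identifier title }
      }
    }
  }
`;

async function getProjectIssues(apiKey: string, projectId: string) {
  const res = await fetch("https://api.linear.app/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: apiKey },
    body: JSON.stringify({ query: PROJECT_ISSUES_QUERY, variables: { projectId } }),
  });
  const { data } = await res.json();
  return data.project.issues.nodes;
}
```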

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have ensured the new block works in dev
  - [x] I have ensured the other linear blocks still work
2025-11-24 11:43:10 +00:00
Swifty
0921d23628 fix(block): Improve error handling of SendEmailBlock (#11420)
Currently, if the SMTP server is not configured, it results in a
platform error. This PR simplifies the error handling.

### Changes 🏗️
 
- removed default value for SMTP server host
- capture common errors and yield them as the block's error output (see
the sketch below)
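
The block itself is Python, but the pattern is easy to sketch: catch
expected transport failures and surface them as the block's error
output instead of letting them escape as platform errors. A TypeScript
sketch with a hypothetical send helper:

```typescript
// Sketch of the error-as-output pattern; names are hypothetical, not the block's code.
type BlockResult = { output: "ok" } | { output: "error"; message: string };

async function runSendEmail(send: () => Promise<void>): Promise<BlockResult> {
  try {
    await send();
    return { output: "ok" };
  } catch (err) {
    // Known, user-fixable failures (bad host, auth, connection refused)
    // become an "error" output rather than an unhandled platform error.
    return {
      output: "error",
      message: err instanceof Error ? err.message : String(err),
    };
  }
}
```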

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checked all tests still pass
2025-11-24 11:42:38 +00:00
Lluis Agusti
0edc669874 refactor(frontend): debug preview stealing dev 2025-11-21 10:39:12 +07:00
Abhimanyu Yadav
e64d3d9b99 feat(frontend): implement agent outputs viewer and improve UI/UX in new builder (#11421)
This PR introduces several improvements to the new builder experience:

**1. Agent Outputs Feature** 
- Implemented a new `AgentOutputs` component that displays execution
outputs from OUTPUT blocks
- Added a slide-out sheet UI to view agent outputs with proper
formatting
- Integrated with existing output renderers from the library view
- Shows output block names, descriptions, and rendered values
- Added beta badge to indicate feature is still experimental

**2. UI/UX Improvements** 🎨
- Fixed graph loading spinner color from violet to neutral zinc for
better consistency
- Adjusted node shadow styling for better visual hierarchy (reduced
shadow when not selected)
- Fixed credential field button spacing to prevent layout overflow
- Improved array editor widget delete button positioning
- Added proper link handling for integration redirects (opens in new
tab)
- Fixed object editor to handle null values gracefully

**3. Performance & State Management** 🚀
- Fixed race condition in run input dialog by awaiting execution before
closing
- Added proper history initialization after graph loads
- Added `outputSchema` to graph store for tracking output blocks
- Fixed search bar to maintain query state properly
- Added automatic fit view on graph load for better initial viewport

**4. Build Actions Bar** 🔧
- Reduced padding for more compact appearance
- Enabled/disabled Agent Outputs button based on presence of output
blocks
- Removed loading icon from manual run button when not executing

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and executed an agent with OUTPUT blocks to verify outputs
display correctly
- [x] Tested output viewer with different data types (text, JSON,
images, etc.)
- [x] Verified credential field layouts don't overflow in constrained
spaces
  - [x] Tested array editor delete functionality and button positioning
- [x] Confirmed graph loads with proper fit view and history
initialization
  - [x] Tested run input dialog closes only after execution starts
  - [x] Verified integration links open in new tabs
  - [x] Tested object editor with null values
2025-11-20 17:06:02 +00:00
Lluis Agusti
41dc39b97d fix(frontend): preview banner amend 2025-11-20 23:38:51 +07:00
Ubbe
80e573f33b feat(frontend): PR preview banner (#11412)
## Changes 🏗️

<img width="900" height="757" alt="Screenshot 2025-11-19 at 12 18 38"
src="https://github.com/user-attachments/assets/e2c2a4cf-a05e-431e-853d-fb0a68729e54"
/>

When the dev environment is used for a PR preview, show a banner at the
top of the page to indicate this.
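
Presumably this reduces to a conditional render keyed off an env var; a
sketch (the markup is illustrative, but `NEXT_PUBLIC_PREVIEW_STEALING_DEV`
is the variable named in the configuration note below):

```typescript
// Sketch: render the banner only when the dev environment serves a PR preview.
export function PreviewBanner() {
  if (process.env.NEXT_PUBLIC_PREVIEW_STEALING_DEV !== "true") return null;
  return (
    <div role="status">
      This deployment is a PR preview running against the dev environment.
    </div>
  );
}
```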

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a PR preview against Dev
  - [x] Check it shows the banner once this is merged
  - [x] Or try locally with the env var set

### For configuration changes:

`NEXT_PUBLIC_PREVIEW_STEALING_DEV` is set programmatically via our Infra
CI.
2025-11-20 23:08:32 +07:00
dependabot[bot]
06d20e7e4c chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 3 updates (#11411)
Bumps the development-dependencies group with 3 updates in the
/autogpt_platform/backend directory:
[pre-commit](https://github.com/pre-commit/pre-commit),
[pyright](https://github.com/RobertCraigie/pyright-python) and
[ruff](https://github.com/astral-sh/ruff).

Updates `pre-commit` from 4.3.0 to 4.4.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/releases">pre-commit's
releases</a>.</em></p>
<blockquote>
<h2>pre-commit v4.4.0</h2>
<h3>Features</h3>
<ul>
<li>Add <code>--fail-fast</code> option to <code>pre-commit run</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3528">#3528</a>
PR by <a
href="https://github.com/JulianMaurin"><code>@​JulianMaurin</code></a>.</li>
</ul>
</li>
<li>Upgrade <code>ruby-build</code> / <code>rbenv</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3566">#3566</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3565">#3565</a>
issue by <a
href="https://github.com/MRigal"><code>@​MRigal</code></a>.</li>
</ul>
</li>
<li>Add <code>language: unsupported</code> / <code>language:
unsupported_script</code> as aliases for <code>language: system</code> /
<code>language: script</code> (which will eventually be deprecated).
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
<li>Add support docker-in-docker detection for cgroups v2.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
PR by <a
href="https://github.com/br-rhrbacek"><code>@​br-rhrbacek</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3360">#3360</a>
issue by <a
href="https://github.com/JasonAlt"><code>@​JasonAlt</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Handle when docker gives <code>SecurityOptions: null</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3537">#3537</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3514">#3514</a>
issue by <a
href="https://github.com/jenstroeger"><code>@​jenstroeger</code></a>.</li>
</ul>
</li>
<li>Fix error context for invalid <code>stages</code> in
<code>.pre-commit-config.yaml</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md">pre-commit's
changelog</a>.</em></p>
<blockquote>
<h1>4.4.0 - 2025-11-08</h1>
<h3>Features</h3>
<ul>
<li>Add <code>--fail-fast</code> option to <code>pre-commit run</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3528">#3528</a>
PR by <a
href="https://github.com/JulianMaurin"><code>@​JulianMaurin</code></a>.</li>
</ul>
</li>
<li>Upgrade <code>ruby-build</code> / <code>rbenv</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3566">#3566</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3565">#3565</a>
issue by <a
href="https://github.com/MRigal"><code>@​MRigal</code></a>.</li>
</ul>
</li>
<li>Add <code>language: unsupported</code> / <code>language:
unsupported_script</code> as aliases
for <code>language: system</code> / <code>language: script</code> (which
will eventually be
deprecated).
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
<li>Add support docker-in-docker detection for cgroups v2.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
PR by <a
href="https://github.com/br-rhrbacek"><code>@​br-rhrbacek</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3360">#3360</a>
issue by <a
href="https://github.com/JasonAlt"><code>@​JasonAlt</code></a>.</li>
</ul>
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Handle when docker gives <code>SecurityOptions: null</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3537">#3537</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3514">#3514</a>
issue by <a
href="https://github.com/jenstroeger"><code>@​jenstroeger</code></a>.</li>
</ul>
</li>
<li>Fix error context for invalid <code>stages</code> in
<code>.pre-commit-config.yaml</code>.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
PR by <a
href="https://github.com/asottile"><code>@​asottile</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="17cf886473"><code>17cf886</code></a>
v4.4.0</li>
<li><a
href="cb63a5cb9a"><code>cb63a5c</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3535">#3535</a>
from br-rhrbacek/fix-cgroups</li>
<li><a
href="f80801d75a"><code>f80801d</code></a>
Fix docker-in-docker detection for cgroups v2</li>
<li><a
href="9143fc3545"><code>9143fc3</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3577">#3577</a>
from pre-commit/language-unsupported</li>
<li><a
href="725acc969a"><code>725acc9</code></a>
rename system and script languages to unsupported /
unsupported_script</li>
<li><a
href="3815e2e6d8"><code>3815e2e</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3576">#3576</a>
from pre-commit/fix-stages-config-error</li>
<li><a
href="aa2961c122"><code>aa2961c</code></a>
fix missing context in error for stages</li>
<li><a
href="46297f7cd6"><code>46297f7</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3575">#3575</a>
from pre-commit/rm-python3-hooks-repo</li>
<li><a
href="95eec75004"><code>95eec75</code></a>
rm python3_hooks_repo</li>
<li><a
href="5e4b3546f3"><code>5e4b354</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3574">#3574</a>
from pre-commit/rm-hook-with-spaces-test</li>
<li>Additional commits viewable in <a
href="https://github.com/pre-commit/pre-commit/compare/v4.3.0...v4.4.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.406 to 1.1.407
<details>
<summary>Commits</summary>
<ul>
<li><a
href="53e8efb463"><code>53e8efb</code></a>
Pyright NPM Package update to 1.1.407 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/356">#356</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.406...v1.1.407">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.13.3 to 0.14.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.14.5</h2>
<h2>Release Notes</h2>
<p>Released on 2025-11-13.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Apply <code>SIM113</code> when index
variable is of type <code>int</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21395">#21395</a>)</li>
<li>[<code>pydoclint</code>] Fix false positive when Sphinx directives
follow a &quot;Raises&quot; section (<code>DOC502</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20535">#20535</a>)</li>
<li>[<code>pydoclint</code>] Support NumPy-style comma-separated
parameters (<code>DOC102</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20972">#20972</a>)</li>
<li>[<code>refurb</code>] Auto-fix annotated assignments
(<code>FURB101</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21278">#21278</a>)</li>
<li>[<code>ruff</code>] Ignore <code>str()</code> when not used for
simple conversion (<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21330">#21330</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix syntax error false positive on alternative <code>match</code>
patterns (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21362">#21362</a>)</li>
<li>[<code>flake8-simplify</code>] Fix false positive for iterable
initializers with generator arguments (<code>SIM222</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21187">#21187</a>)</li>
<li>[<code>pyupgrade</code>] Fix false positive on relative imports from
local <code>.builtins</code> module (<code>UP029</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21309">#21309</a>)</li>
<li>[<code>pyupgrade</code>] Consistently set the deprecated tag
(<code>UP035</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21396">#21396</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>refurb</code>] Detect empty f-strings (<code>FURB105</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21348">#21348</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add option to provide a reason to <code>--add-noqa</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21294">#21294</a>)</li>
<li>Add upstream linter URL to <code>ruff linter
--output-format=json</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21316">#21316</a>)</li>
<li>Add color to <code>--help</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21337">#21337</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Add a new &quot;Opening a PR&quot; section to the contribution guide
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21298">#21298</a>)</li>
<li>Added the PyScripter IDE to the list of &quot;Who is using
Ruff?&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21402">#21402</a>)</li>
<li>Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21409">#21409</a>)</li>
<li>[<code>flake8-annotations</code>] Add link to
<code>allow-star-arg-any</code> option (<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21326">#21326</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>[<code>configuration</code>] Improve error message when
<code>line-length</code> exceeds <code>u16::MAX</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21329">#21329</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/njhearp"><code>@​njhearp</code></a></li>
<li><a href="https://github.com/11happy"><code>@​11happy</code></a></li>
<li><a href="https://github.com/hugovk"><code>@​hugovk</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/pyscripter"><code>@​pyscripter</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.14.5</h2>
<p>Released on 2025-11-13.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Apply <code>SIM113</code> when index
variable is of type <code>int</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21395">#21395</a>)</li>
<li>[<code>pydoclint</code>] Fix false positive when Sphinx directives
follow a &quot;Raises&quot; section (<code>DOC502</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20535">#20535</a>)</li>
<li>[<code>pydoclint</code>] Support NumPy-style comma-separated
parameters (<code>DOC102</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20972">#20972</a>)</li>
<li>[<code>refurb</code>] Auto-fix annotated assignments
(<code>FURB101</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21278">#21278</a>)</li>
<li>[<code>ruff</code>] Ignore <code>str()</code> when not used for
simple conversion (<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21330">#21330</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix syntax error false positive on alternative <code>match</code>
patterns (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21362">#21362</a>)</li>
<li>[<code>flake8-simplify</code>] Fix false positive for iterable
initializers with generator arguments (<code>SIM222</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21187">#21187</a>)</li>
<li>[<code>pyupgrade</code>] Fix false positive on relative imports from
local <code>.builtins</code> module (<code>UP029</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21309">#21309</a>)</li>
<li>[<code>pyupgrade</code>] Consistently set the deprecated tag
(<code>UP035</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21396">#21396</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>refurb</code>] Detect empty f-strings (<code>FURB105</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21348">#21348</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add option to provide a reason to <code>--add-noqa</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21294">#21294</a>)</li>
<li>Add upstream linter URL to <code>ruff linter
--output-format=json</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21316">#21316</a>)</li>
<li>Add color to <code>--help</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21337">#21337</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Add a new &quot;Opening a PR&quot; section to the contribution guide
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21298">#21298</a>)</li>
<li>Added the PyScripter IDE to the list of &quot;Who is using
Ruff?&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21402">#21402</a>)</li>
<li>Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21409">#21409</a>)</li>
<li>[<code>flake8-annotations</code>] Add link to
<code>allow-star-arg-any</code> option (<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21326">#21326</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>[<code>configuration</code>] Improve error message when
<code>line-length</code> exceeds <code>u16::MAX</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21329">#21329</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/njhearp"><code>@​njhearp</code></a></li>
<li><a href="https://github.com/11happy"><code>@​11happy</code></a></li>
<li><a href="https://github.com/hugovk"><code>@​hugovk</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/pyscripter"><code>@​pyscripter</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="87dafb8787"><code>87dafb8</code></a>
Bump 0.14.5 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21435">#21435</a>)</li>
<li><a
href="9e80e5a3a6"><code>9e80e5a</code></a>
[ty] Support <code>type[…]</code> and <code>Type[…]</code> in implicit
type aliases (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21421">#21421</a>)</li>
<li><a
href="f9cc26aa12"><code>f9cc26a</code></a>
[ty] Respect notebook cell boundaries when adding an auto import (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21322">#21322</a>)</li>
<li><a
href="d49c326309"><code>d49c326</code></a>
Update PyCharm setup instructions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21409">#21409</a>)</li>
<li><a
href="e70fccbf25"><code>e70fccb</code></a>
[ty] Improve LSP test server logging (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21432">#21432</a>)</li>
<li><a
href="90b32f3b3b"><code>90b32f3</code></a>
[ty] Ensure annotation/type expressions in stub files are always
deferred (<a
href="https://redirect.github.com/astral-sh/ruff/issues/2">#2</a>...</li>
<li><a
href="99694b6e4a"><code>99694b6</code></a>
Use <code>profiling</code> profile for <code>cargo test(linux,
release)</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21429">#21429</a>)</li>
<li><a
href="67e54fffe1"><code>67e54ff</code></a>
[ty] Fix panic for cyclic star imports (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21428">#21428</a>)</li>
<li><a
href="a01b0d7780"><code>a01b0d7</code></a>
[ty] Press 'enter' to rerun all mdtests (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21427">#21427</a>)</li>
<li><a
href="04ab9170d6"><code>04ab917</code></a>
[ty] Further improve subscript assignment diagnostics (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21411">#21411</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.13.3...0.14.5">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-20 07:47:06 +00:00
Bently
07b5fe859a feat(platform/backend): add gemini-3-pro-preview (#11413)
This adds gemini-3-pro-preview from OpenRouter.

https://openrouter.ai/google/gemini-3-pro-preview

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the Gemini 3 model in the LLM blocks and it works
2025-11-19 14:56:54 +00:00
Abhimanyu Yadav
746dbbac84 refactor(frontend): improve new builder performance and UX with position handling and store optimizations (#11397)
This PR introduces several performance and user experience improvements
to the new builder, focusing on node positioning, state management
optimizations, and visual enhancements.

The new builder had several issues that impacted developer experience
and runtime performance:
- Inefficient store subscriptions causing unnecessary re-renders
- No intelligent node positioning when adding blocks via clicking
- useEffect dependencies causing potential stale closures
- Width constraints missing on form fields affecting layout consistency

### Changes 🏗️

#### Performance Optimizations
- **Store subscription optimization**: Added `useShallow` from zustand
to prevent unnecessary re-renders in
[NodeContainer](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx)
and
[NodeExecutionBadge](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeExecutionBadge.tsx)
(see the sketch after this list)
- **useEffect cleanup**: Split combined useEffects in
[useFlow](file:///app/(platform)/build/hooks/useFlow.ts) for clearer
dependencies and better performance
- **Memoization**: Added `memo` to
[NewControlPanel](file:///app/(platform)/build/components/NewControlPanel/NewControlPanel.tsx)
to prevent unnecessary re-renders
- **Callback optimization**: Wrapped `onDrop` handler in `useCallback`
to prevent recreation on every render
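
For reference, the `useShallow` pattern looks roughly like this (the
store shape is illustrative, not the builder's actual one):

```typescript
import { create } from "zustand";
import { useShallow } from "zustand/react/shallow";

// Illustrative store shape.
interface BuilderState {
  selectedNodeId: string | null;
  executionStatus: string;
  setSelected: (id: string | null) => void;
}

const useBuilderStore = create<BuilderState>((set) => ({
  selectedNodeId: null,
  executionStatus: "idle",
  setSelected: (id) => set({ selectedNodeId: id }),
}));

// A selector that returns a fresh object re-renders on every store change;
// useShallow compares the picked fields shallowly and skips re-renders
// when none of them changed.
function useNodeSelection() {
  return useBuilderStore(
    useShallow((s) => ({ id: s.selectedNodeId, status: s.executionStatus })),
  );
}
```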

#### UX Improvements  
- **Smart node positioning**: Implemented a `findFreePosition` algorithm
in [helper.ts](file:///app/(platform)/build/components/helper.ts)
(sketched after this list) that:
  - Automatically finds non-overlapping positions for new nodes
  - Tries right, left, then below existing nodes
  - Falls back to far-right position if no space available
- **Click-to-add blocks**: Added click handlers to blocks that:
  - Add the block at an intelligent position
- Automatically pan viewport to center the new node with smooth
animation
- **Visual feedback**: Added loading state with spinner icon for agent
blocks during fetch
- **Form field width**: Added `max-w-[340px]` constraint to prevent
overflow in
[FieldTemplate](file:///components/renderers/input-renderer/templates/FieldTemplate.tsx)
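
A sketch of the positioning algorithm as described (node size and
spacing constants are assumptions):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

const GAP = 40; // assumed spacing between nodes
const DEFAULT_SIZE = { width: 320, height: 200 }; // assumed node size

function overlaps(a: Rect, b: Rect): boolean {
  return (
    a.x < b.x + b.width + GAP && a.x + a.width + GAP > b.x &&
    a.y < b.y + b.height + GAP && a.y + a.height + GAP > b.y
  );
}

// Try right of, left of, then below the anchor node; if every candidate
// collides with an existing node, fall back to the far right of the graph.
function findFreePosition(anchor: Rect, existing: Rect[], size = DEFAULT_SIZE) {
  const candidates = [
    { x: anchor.x + anchor.width + GAP, y: anchor.y }, // right
    { x: anchor.x - size.width - GAP, y: anchor.y }, // left
    { x: anchor.x, y: anchor.y + anchor.height + GAP }, // below
  ];
  for (const pos of candidates) {
    const rect = { ...pos, ...size };
    if (!existing.some((node) => overlaps(rect, node))) return pos;
  }
  const farRight = existing.length
    ? Math.max(...existing.map((node) => node.x + node.width))
    : anchor.x + anchor.width;
  return { x: farRight + GAP, y: anchor.y };
}
```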


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create from scratch and execute an agent with at least 3 blocks
  - [x] Test adding blocks via drag-and-drop ensures no overlapping
  - [x] Test adding blocks via click positions them intelligently
  - [x] Test viewport animation when adding blocks via click
- [x] Import an agent from file upload, and confirm it executes
correctly
  - [x] Test loading spinner appears when adding agents from "My Agents"
- [x] Verify performance improvements by checking React DevTools for
reduced re-renders
2025-11-19 13:52:18 +00:00
Zamil Majdy
901bb31e14 feat(backend): parameterize activity status generation with customizable prompts (#11407)
## Summary

Implement comprehensive parameterization of the activity status
generation system to enable custom prompts for admin analytics
dashboard.

## Changes Made

### Core Function Enhancement (`activity_status_generator.py`)
- **Extract hardcoded prompts to constants**: `DEFAULT_SYSTEM_PROMPT`
and `DEFAULT_USER_PROMPT`
- **Add prompt parameters**: `system_prompt`, `user_prompt` with
defaults to maintain backward compatibility
- **Template substitution system**: User prompt supports
`{{GRAPH_NAME}}` and `{{EXECUTION_DATA}}` placeholders
- **Skip existing flag**: `skip_existing` parameter allows admin to
force regeneration of existing data
- **Maintain manager compatibility**: All existing calls continue to
work with default parameters

### Admin API Enhancement (`execution_analytics_routes.py`)
- **Custom prompt fields**: `system_prompt` and `user_prompt` optional
fields in `ExecutionAnalyticsRequest`
- **Skip existing control**: `skip_existing` boolean flag for admin
regeneration option
- **Template documentation**: Clear documentation of placeholder system
in field descriptions
- **Backward compatibility**: All existing API calls work unchanged

### Template System Design
- **Simple placeholder replacement**: `{{GRAPH_NAME}}` → actual graph
name, `{{EXECUTION_DATA}}` → JSON execution data
- **No dependencies**: Uses simple `string.replace()` for maximum
compatibility
- **JSON safety**: Execution data properly serialized as indented JSON
- **Validation tested**: Template substitution verified to work
correctly
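
Since the scheme is plain string replacement, it is easy to sketch;
shown here in TypeScript for illustration (the real implementation is
Python in `activity_status_generator.py`):

```typescript
// Sketch of the {{PLACEHOLDER}} substitution described above.
function renderUserPrompt(
  template: string,
  graphName: string,
  executionData: unknown,
): string {
  return template
    .replaceAll("{{GRAPH_NAME}}", graphName)
    .replaceAll("{{EXECUTION_DATA}}", JSON.stringify(executionData, null, 2));
}

// Example: a custom admin prompt using both placeholders.
const prompt = renderUserPrompt(
  "Evaluate the run of {{GRAPH_NAME}} below:\n{{EXECUTION_DATA}}",
  "Blog Writer",
  { status: "COMPLETED", nodesExecuted: 7 },
);
```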

## Key Features

### For Regular Users (Manager Integration)
- **No changes required**: Existing manager.py calls work unchanged
- **Default behavior preserved**: Same prompts and logic as before
- **Feature flag compatibility**: LaunchDarkly integration unchanged

### For Admin Analytics Dashboard
- **Custom system prompts**: Admins can override the AI evaluation
criteria
- **Custom user prompts**: Admins can modify the analysis instructions
with execution data templates
- **Force regeneration**: `skip_existing=False` allows reprocessing
existing executions with new prompts
- **Complete model list**: Access to all LLM models from `llm.py` (70+
models including GPT, Claude, Gemini, etc.)

## Technical Validation
- Template substitution tested and working
- Default behavior preserved for existing code
- Admin API parameter validation working
- All imports and function signatures correct
- Backward compatibility maintained

## Use Cases Enabled
- **A/B testing**: Compare different prompt strategies on same execution
data
- **Custom evaluation**: Tailor success criteria for specific graph
types
- **Prompt optimization**: Iterate on prompt design based on admin
feedback
- **Bulk reprocessing**: Regenerate activity status with improved
prompts

## Testing
- Template substitution functionality verified
- Function signatures and imports validated
- Code formatting and linting passed
- Backward compatibility confirmed

## Breaking Changes
None - all existing functionality preserved with default parameters.

## Related Issues
Resolves the requirement to expose prompt customization on the frontend
execution analytics dashboard.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-19 13:38:08 +00:00
Swifty
9438817702 fix(platform): Capture Sentry Block Errors Correctly (#11404)
Currently we are capturing block errors via the scope only; this change
captures the error directly.

### Changes 🏗️

- capture the error as well as the scope in the executor manager
- Update the block error message to include additional details
- remove the `__str__` function from `BlockError` as it is no longer needed
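
The change itself is in the Python executor; the pattern, capturing the
exception explicitly instead of only enriching the scope, is the same
in any Sentry SDK. A JavaScript-SDK sketch, not the executor's actual
code:

```typescript
import * as Sentry from "@sentry/node";

function reportBlockError(err: unknown, blockName: string, nodeId: string) {
  Sentry.withScope((scope) => {
    scope.setTag("block", blockName);
    scope.setContext("execution", { nodeId });
    // Previously only the scope was populated; now the error event is
    // captured directly so it actually reaches Sentry.
    Sentry.captureException(err);
  });
}
```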

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Checked that errors are still captured in dev
2025-11-19 12:21:47 +01:00
Abhimanyu Yadav
184a73de7d fix(frontend): Add custom validator to handle short-text and long-text formats in form renderer (#11395)
The rjsf library was throwing validation errors for our custom format
types `short-text` and `long-text` because these are not standard JSON
Schema formats. This was causing form validation to fail even though
these formats are valid in our application context.

<img width="792" height="85" alt="Screenshot 2025-11-18 at 9 39 08 AM"
src="https://github.com/user-attachments/assets/c75c584f-b991-483c-8779-fc93877028e0"
/>

### Changes 🏗️

- Created a custom validator using `@rjsf/validator-ajv8`'s
`customizeValidator` function
- Added support for `short-text` and `long-text` custom formats that
accept any string value
- Replaced the default validator with our custom validator in the
FormRenderer component
- Disabled strict mode and format validation in AJV options to prevent
validation errors for non-standard formats
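
A minimal sketch of that validator setup (the format checks simply
accept any string; AJV strict mode and format validation are relaxed):

```typescript
import { customizeValidator } from "@rjsf/validator-ajv8";

// Register the app-specific formats so AJV stops rejecting them.
const validator = customizeValidator({
  customFormats: {
    "short-text": (value: string) => typeof value === "string",
    "long-text": (value: string) => typeof value === "string",
  },
  ajvOptionsOverrides: {
    strict: false, // tolerate non-standard schema keywords
    validateFormats: false, // don't hard-fail on unknown formats
  },
});

// Usage: pass it to the form instead of the default validator, e.g.
// <Form schema={schema} validator={validator} /> with Form from @rjsf/core.
```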

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create an agent with input blocks that use short-text format
  - [x] Create an agent with input blocks that use long-text format
  - [x] Execute the agent and verify no validation errors appear
  - [x] Verify that form submission works correctly with both formats
- [x] Test that other standard formats (email, URL, etc.) still work as
expected
2025-11-19 04:51:25 +00:00
Abhimanyu Yadav
1154f86a5c feat(frontend): add inline node title editing with double-click (#11370)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11368

This PR adds the ability to rename nodes directly in the flow editor by
double-clicking on their titles.


https://github.com/user-attachments/assets/1de3fc5c-f859-425e-b4cf-dfb21c3efe3d

### Changes 🏗️

- **Added inline node title editing functionality:**
  - Users can now double-click on any node title to enter edit mode
  - Custom titles are saved on Enter key or blur, canceled on Escape key
- Custom node names are persisted in the node's metadata as
`customized_name`
  - Added tooltip to display full title when text is truncated

- **Modified node data handling:**
- Updated `nodeStore` to include `customized_name` in metadata when
converting nodes
- Modified `helper.ts` to pass metadata (including custom titles) to
custom nodes
  - Added metadata property to `CustomNodeData` type

- **UI improvements:**
  - Added hover cursor indication for editable titles
  - Implemented proper focus management during editing
  - Maintained consistent styling between display and edit modes
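
A stand-alone sketch of the interaction (the real component also writes
the name into node metadata as `customized_name`):

```typescript
import { useState } from "react";

// Double-click to edit; Enter or blur saves, Escape cancels.
export function EditableNodeTitle(props: {
  title: string;
  onRename: (name: string) => void;
}) {
  const [editing, setEditing] = useState(false);
  const [draft, setDraft] = useState(props.title);

  if (!editing) {
    return (
      <span title={props.title} onDoubleClick={() => setEditing(true)}>
        {props.title}
      </span>
    );
  }

  const save = () => {
    props.onRename(draft.trim() || props.title);
    setEditing(false);
  };

  return (
    <input
      autoFocus
      value={draft}
      onChange={(e) => setDraft(e.target.value)}
      onBlur={save}
      onKeyDown={(e) => {
        if (e.key === "Enter") save();
        if (e.key === "Escape") {
          setDraft(props.title); // revert to the original name
          setEditing(false);
        }
      }}
    />
  );
}
```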

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Double-click on various node types to enter edit mode
  - [x] Type new names and press Enter to save
  - [x] Press Escape to cancel editing and revert to original name
  - [x] Click outside the input field to save changes
  - [x] Verify custom names persist after page refresh
  - [x] Test with long node names to ensure tooltip appears
  - [x] Verify custom names are saved with the graph
- [x] Test editing on all node types (standard, input, output, webhook,
etc.)
2025-11-19 04:51:11 +00:00
Zamil Majdy
73c93cf554 fix(backend): resolve production failures with comprehensive token handling and conversation safety fixes (#11394)
## Summary

Resolves multiple production failures including execution
**6239b448-0434-4687-a42b-9ff0ddf01c1d** where AI Text Generator failed
with `'NoneType' object is not iterable`.

This implements comprehensive fixes addressing both the root cause
(unrealistic token limits) and masking issues (Sentry SDK bug +
conversation history null safety).

## Root Cause Analysis

Three interconnected issues caused production failures:

### 1. Unrealistic Perplexity Token Limits 
- **PERPLEXITY_SONAR**: 127,000 max_output_tokens (equivalent to ~95,000
words!)
- **PERPLEXITY_SONAR_DEEP_RESEARCH**: 128,000 max_output_tokens
- **Problem**: Newsletter generation defaulted to 127K output tokens 
- **Result**: Exceeded OpenRouter's 128K total limit, causing API
failures

### 2. Sentry SDK OpenAI Integration Bug 🐛
- **Location**: `sentry_sdk/integrations/openai.py:157`
- **Bug**: `for choice in response.choices:` failed when `choices=None`
- **Impact**: Masked real token limit errors with confusing TypeError

### 3. Conversation History Null Safety Issues ⚠️
- **Problem**: `get_pending_tool_calls()` expected non-null
conversation_history
- **Impact**: SmartDecisionMaker crashes when conversation_history is
None
- **Pattern**: Common in various LLM block scenarios

## Changes Made

### Fix 1: Realistic Perplexity Token Limits (`backend/blocks/llm.py`)

```python
# Before (PROBLEMATIC)
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 127000)
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata("open_router", 128000, 128000)

# After (FIXED)
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 8000)
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata("open_router", 128000, 16000)
```

**Rationale:**
- **8K tokens** (SONAR): Matches industry standard, sufficient for long
content (6K words)
- **16K tokens** (DEEP_RESEARCH): Higher limit for research, supports
very long content (12K words)
- **Industry pattern**: 3-4% of context window (consistent with other
OpenRouter models)

### Fix 2: Sentry SDK Upgrade (`pyproject.toml`)

- **Upgrade**: `^2.33.2` → `^2.44.0`  
- **Result**: OpenAI integration bug fixed in SDK (no code changes
needed)

### Fix 3: Conversation History Null Safety
(`backend/blocks/smart_decision_maker.py`)

```python
# Before
def get_pending_tool_calls(conversation_history: list[Any]) -> dict[str, int]:

# After  
def get_pending_tool_calls(conversation_history: list[Any] | None) -> dict[str, int]:
    if not conversation_history:
        return {}
```

- **Added**: Proper null checking for conversation_history parameter
- **Prevents**: `'NoneType' object is not iterable` errors
- **Impact**: Improves SmartDecisionMaker reliability across all
scenarios

## Impact & Benefits

### 🎯 Production Reliability
- **Prevents token limit errors** for realistic content generation
- **Clear error handling** without masked Sentry TypeError crashes
- **Better conversation safety** with proper null checking
- **Multiple failure scenarios resolved** comprehensively

### 📈 User Experience
- **Faster responses** (reasonable output lengths)
- **Lower costs** (more focused content generation)
- **More stable workflows** with better error handling
- **Maintains flexibility** - users can override with explicit
`max_tokens`

### 🔧 Technical Improvements
- **Follows industry standards** - aligns with other OpenRouter models
- **Breaking change risk: LOW** - users can override if needed
- **Root cause resolution** - fixes error chain at source
- **Defensive programming** - better null safety patterns

## Validation

### Industry Analysis 
- Large context models typically use 8K-16K output limits (not 127K)
- Newsletter generation needs 650-10K tokens typically, not 127K tokens
- Pattern analysis of 13 OpenRouter models confirms 3-4% context ratio

### Production Testing 
- **Before**: Newsletter generation → 127K tokens → API failure → Sentry
crash
- **After**: Newsletter generation → 8K tokens → successful completion
- **Error handling**: Clear token limit errors instead of confusing
TypeErrors
- **Null safety**: Conversation history None/undefined handled
gracefully

### Dependencies 
- **Sentry SDK**: Confirmed 2.44.0 fixes OpenAI integration crashes
- **Poetry lock**: All dependencies updated successfully
- **Backward compatibility**: Maintained for existing workflows

## Related Issues

- Fixes flowExecutionID **6239b448-0434-4687-a42b-9ff0ddf01c1d** 
- Resolves AI Text Generator reliability issues
- Improves overall platform token handling and conversation safety
- Addresses multiple production failure patterns comprehensively

## Breaking Changes Assessment

**Risk Level**: 🟡 **LOW-MEDIUM**

- **Perplexity limits**: Users relying on 127K+ output would be limited
(likely unintentional usage)
- **Override available**: Users can explicitly set `max_tokens` for
custom limits
- **Conversation safety**: Only improves reliability, no breaking
changes
- **Most use cases**: Unaffected or improved by realistic defaults

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-18 22:32:58 +00:00
Zamil Majdy
02757d68f3 fix(backend): resolve marketplace agent access in get_graph_execution endpoint (#11396)
## Summary
Fixes critical issue where `GET
/graphs/{graph_id}/executions/{graph_exec_id}` failed for marketplace
agents with "Graph not found" errors due to incorrect version access
checking.

## Root Cause
The endpoint was checking access to the **latest version** of a graph
instead of the **specific version used in the execution**. This broke
marketplace agents when:

1. User executes a marketplace agent (e.g., v3)
2. Graph owner later publishes a new version (e.g., v4) 
3. User tries to view execution details
4. **BUG**: Code checked access to latest version (v4) instead of
execution version (v3)
5. If v4 wasn't published to marketplace → access denied → "Graph not
found"

## Original Problematic Code
```python
# routers/v1.py - get_graph_execution (WRONG ORDER)
graph = await graph_db.get_graph(graph_id=graph_id, user_id=user_id)  #  Uses LATEST version
if not graph:
    raise HTTPException(404, f"Graph #{graph_id} not found")

result = await execution_db.get_graph_execution(...)  # Gets execution data
```

## Solution
**Reordered operations** to check access against the **execution's
specific version**:

```python
# NEW CODE (CORRECT ORDER)
result = await execution_db.get_graph_execution(...)  #  Get execution FIRST

if not await graph_db.get_graph(
    graph_id=result.graph_id,
    version=result.graph_version,  #  Use execution's version, not latest!
    user_id=user_id,
):
    raise HTTPException(404, f"Graph #{graph_id} not found")
```

### Key Changes Made

1. **Fixed version access logic** (routers/v1.py:1075-1095):
   - Reordered operations to get execution data first 
   - Check access using `result.graph_version` instead of latest version
   - Applied same fix to external API routes

2. **Enhanced `get_graph()` marketplace fallback**
(data/graph.py:919-935):
   - Added proper marketplace lookup when user doesn't own the graph
   - Supports version-specific marketplace access checking
   - Maintains security by only allowing approved, non-deleted listings

3. **Activity status generator fix**
(activity_status_generator.py:139-144):
   - Use `skip_access_check=True` for internal system operations

4. **Missing block handling** (data/graph.py:94-103):
- Added `_UnknownBlockBase` placeholder for graceful handling of deleted
blocks

## Example Scenario Fixed
1. **User**: Installs marketplace agent "Blog Writer" v3
2. **Owner**: Later publishes v4 (not to marketplace yet)  
3. **User**: Runs the agent (executes v3)
4. **Before**: Viewing execution details fails because code checked v4
access
5. **After**:  Viewing execution details works because code checks v3
access

## Impact
-  **Marketplace agents work correctly**: Users can view execution
details for any marketplace agent version they've used
-  **Backward compatibility**: Existing owned graphs continue working
-  **Security maintained**: Only allows access to versions user
legitimately executed
-  **Version-aware access control**: Proper access checking for
specific versions, not just latest

## Testing
- [x] Marketplace agents: Execution details now accessible for all
executed versions
- [x] Owned graphs: Continue working as before
- [x] Version scenarios: Access control works correctly for specific
versions
- [x] Missing blocks: Graceful handling without errors

**Root issue resolved**: Version mismatch between execution version and
access check version that was breaking marketplace agent execution
viewing.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-18 15:44:10 +00:00
Reinier van der Leer
2569576d78 fix(frontend/builder): Fix save-and-run with no agent inputs (#11401)
- Resolves #11390

This unbreaks the last step of the Builder tutorial :)

### Changes 🏗️

- Give `isSaving` time to propagate before calling dependent callback
`saveAndRun`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run through Builder tutorial; Run (with implicit save) should work
at once
2025-11-18 12:49:37 +00:00
Ubbe
3b34c04a7a fix(frontend): logout console issues (#11400)
## Changes 🏗️

Fixed the logout errors by removing duplicate redirects. `serverLogout`
was calling `redirect("/login")` (which throws `NEXT_REDIRECT`), and
then `useSupabaseStore` was also calling `router.refresh()`, causing
conflicts.

Updated `serverLogout` to return a result object instead of redirecting,
and moved the redirect to the client using `router.push("/login")` after
logout completes. This removes the `NEXT_REDIRECT` error and ensures a
single redirect.
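
Roughly, the new shape of the server action (helper names are
assumptions; the sign-out call is Supabase-specific in the real code):

```typescript
"use server";

type LogoutResult = { success: boolean; error?: string };

export async function serverLogout(): Promise<LogoutResult> {
  try {
    // Stand-in for the actual supabase.auth.signOut() call.
    await performSignOut();
    // No redirect() here: redirect() throws NEXT_REDIRECT internally,
    // which clashed with the client-side router.refresh().
    return { success: true };
  } catch (err) {
    return {
      success: false,
      error: err instanceof Error ? err.message : String(err),
    };
  }
}

async function performSignOut(): Promise<void> {
  /* supabase.auth.signOut() lives here in the real implementation */
}
```

On the client, the caller awaits `serverLogout()` and only then calls
`router.push("/login")`, so exactly one redirect happens.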

<img width="800" height="706" alt="Screenshot 2025-11-18 at 16 14 54"
src="https://github.com/user-attachments/assets/38e0e55c-f48d-4b25-a07b-d4729e229c70"
/>

Also addressed 401 errors during logout. Hooks like `useCredits` were
still making API calls after logout, causing "Authorization header is
missing" errors. Added a check in `_makeClientRequest` to detect
logout-in-progress and suppress authentication errors during that
window. This prevents console noise and avoids unnecessary error
handling.

<img width="800" height="742" alt="Screenshot 2025-11-18 at 16 14 45"
src="https://github.com/user-attachments/assets/6fb2270a-97a0-4411-9e5a-9b4b52117af3"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Log out of your account
  - [x] There are no errors showing up on the browser devtools
2025-11-18 16:41:51 +07:00
Abhimanyu Yadav
34c9ecf6bc refactor(frontend): improve customNode reusability and add multi-block support (#11368)
This refactor improves developer experience (DX) by creating a more
maintainable and extensible architecture.

The previous `CustomNode` implementation had several issues:
- Code was duplicated across different node types (StandardNodeBlock,
OutputBlock, etc.)
- Poor separation of concerns with all logic in a single component
- Limited flexibility for handling different block types
- Inconsistent handle display logic across different node types

<img width="2133" height="831" alt="Screenshot 2025-11-12 at 9 25 10 PM"
src="https://github.com/user-attachments/assets/02864bba-9ffe-4629-98ab-1c43fa644844"
/>

## Changes 🏗️

- **Refactored CustomNode structure**:
- Extracted reusable components:
[`NodeContainer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx),
[`NodeHeader`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeHeader.tsx),
[`NodeAdvancedToggle`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeAdvancedToggle.tsx),
[`WebhookDisclaimer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx)
- Removed `StandardNodeBlock.tsx` and consolidated logic into
[`CustomNode.tsx`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/CustomNode.tsx)
- Moved
[`StickyNoteBlock`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/StickyNoteBlock.tsx)
to components folder for better organization

- **Added BlockUIType-specific logic**:
- Implemented conditional handle display based on block type (INPUT,
WEBHOOK, WEBHOOK_MANUAL blocks don't show handles)
- Added special handling for AGENT blocks with dynamic input/output
schemas
- Added webhook-specific disclaimer component with library agent
integration
  - Fixed OUTPUT block's name field to not show input handle

- **Enhanced FormCreator**:
  - Added `showHandles` prop for granular control
- Added `className` prop for styling flexibility (used for webhook
opacity)

- **Improved nodeStore**:
  - Added `getNodeBlockUIType` method for retrieving node UI types

- **UI/UX improvements**:
- Fixed duplicate gap classes in
[`BuilderActions`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderActions/BuilderActions.tsx)
- Added proper styling for webhook blocks (disabled state with reduced
opacity)
  - Improved field template spacing for specific block types

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create and test a standard node block with input/output handles
  - [x] Create and test INPUT block (verify no input handles)
  - [x] Create and test OUTPUT block (verify name field has no handle)
- [x] Create and test WEBHOOK block (verify disclaimer appears and form
is disabled)
  - [x] Create and test AGENT block with custom schemas
  - [x] Create and test sticky note block
  - [x] Verify advanced toggle works for all node types
  - [x] Test node execution badges display correctly
  - [x] Verify node selection highlighting works
2025-11-18 03:11:32 +00:00
Swifty
a66219fc1f fix(platform): Remove un-runnable agents from schedule (#11374)
Currently, when an agent fails validation during a scheduled run, we
raise an error and then try again, regardless of why it failed.

This change removes the agent's schedule and notifies the user.

### Changes 🏗️

- add schedule_id to the GraphExecutionJobArgs
- add agent_name to the GraphExecutionJobArgs
- Delete schedule on GraphValidationError
- Notify the user with a message that includes the agent name

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have ensured the scheduler tests work with these changes
2025-11-17 15:24:40 +00:00
Bently
8b3a741f60 refactor(turnstile): Remove turnstile (#11387)
This PR removes turnstile from the platform.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test to make sure that Turnstile is gone (it will be)
  - [x] Test logging in without Turnstile to make sure it still works
  - [x] Test registering a new account without Turnstile and it works
2025-11-17 15:14:31 +00:00
Bently
7c48598f44 fix(frontend): Replace question mark icon with "Give Feedback" text button (#11381)
## Summary
- Replaced the question mark icon with explicit "Give Feedback" text in
the feedback button
- Applied consistent styling to match the "Tutorial" button
- Removed QuestionMarkCircledIcon dependency from TallyPopup component

## Motivation
Users reported not knowing what the question mark icon was for, which
prevented them from discovering the feedback feature. Making the button
text-based and explicit removes this confusion.

## Changes
- Removed `QuestionMarkCircledIcon` import and icon element
- Changed button to display only "Give Feedback" text
- Added consistent styling (height, rounded corners, background color)
to match Tutorial button
- Button text can wrap to two lines if needed for better readability

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Check the UI to see that the question mark on the tally button has
been replaced with "Give Feedback"


Before
<img width="618" height="198" alt="image"
src="https://github.com/user-attachments/assets/0d4803eb-9a05-4a43-aaff-cc43b6d0cda4"
/>


After
<img width="298" height="126" alt="image"
src="https://github.com/user-attachments/assets/c1e1c3b5-94b4-4ad9-87e9-a0feca1143e3"
/>

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-17 12:14:05 +00:00
Bently
804e3b403a feat(frontend): add mobile warning banner to login and signup pages (#11383)
## Summary
Adds a non-blocking warning banner to Login and Sign Up pages that
alerts mobile users about potential limitations in the mobile
experience.

## Changes
- Created `MobileWarningBanner` component in `src/components/auth/`
- Integrated banner into Login page (`/login`)
- Integrated banner into Sign Up page (`/signup`)
- Banner displays only on mobile devices (viewports < 768px)
- Uses existing `useBreakpoint` hook for responsive detection

## Design Details
- **Position**: Appears below the login/signup card (after the bottom
"Sign up"/"Log in" links)
- **Style**: Amber-themed warning banner with DeviceMobile icon
- **Message**: 
  - Title: "Heads up: AutoGPT works best on desktop"
- Description: "Some features may be limited on mobile. For the best
experience, consider switching to a desktop."
- **Behavior**: Non-blocking, no user interaction required
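
A sketch of the banner, assuming `useBreakpoint` exposes an `isMobile` flag for viewports under 768px (hook path and return shape are assumptions):

```tsx
"use client";

import { useBreakpoint } from "@/hooks/useBreakpoint"; // path assumed

export function MobileWarningBanner() {
  const { isMobile } = useBreakpoint();
  if (!isMobile) return null; // desktop (≥ 768px): render nothing

  return (
    <div role="status" className="mt-4 rounded-lg bg-amber-50 p-3 text-amber-900">
      <p className="font-medium">Heads up: AutoGPT works best on desktop</p>
      <p className="text-sm">
        Some features may be limited on mobile. For the best experience,
        consider switching to a desktop.
      </p>
    </div>
  );
}
```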

<img width="342" height="81" alt="image"
src="https://github.com/user-attachments/assets/b6584299-b388-4d8d-b951-02bd95915566"
/>

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
	- [x] Verified banner appears on mobile viewports (< 768px)
	- [x] Verified banner is hidden on desktop viewports (≥ 768px)
	- [x] Tested on Login page
	- [x] Tested on Sign Up page

<img width="342" height="758" alt="image"
src="https://github.com/user-attachments/assets/077b3e0a-ab9c-41c7-83b7-7ee80a3396fd"
/>

<img width="342" height="759" alt="image"
src="https://github.com/user-attachments/assets/77a64b28-748b-4d97-bd7c-67c55e5e9f22"
/>

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-17 11:31:59 +00:00
Lluis Agusti
9c3f679f30 fix(frontend): simplify login redirect 2025-11-15 01:14:22 +07:00
Lluis Agusti
9977144b3d fix(frontend): onboarding fixes... (2) 2025-11-15 00:58:13 +07:00
Lluis Agusti
81d61a0c94 fix(frontend): redirects improvements... 2025-11-15 00:31:42 +07:00
Ubbe
e1e0fb7b25 fix(frontend): post login/signup/onboarding redirect clash (#11382)
## Changes 🏗️

### Issue 1: login/signup redirect conflict

There are 2 hooks, both on the login and signup pages, that attempt to
call `router.push` once a user logs in or is created.

The main offender seems to be this hook:
```tsx
  // Redirect already-logged-in users away from /login and /signup
  useEffect(() => {
    if (user) router.push("/");
  }, [user]);
```
This hook exists on both pages to prevent logged-in users from
accessing `/login` or `/signup`. When a user signs up or logs in and
needs onboarding, a later `router.push` redirects them there, which
conflicts with the one done in this hook.
 
**Solution**

I moved the logic from that hook into `middleware.ts`, which is a
better place for it. It no longer conflicts with the onboarding
redirects done on those pages.
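
A sketch of the middleware-based guard (`getUserFromRequest` is a hypothetical stand-in for the real session check):

```ts
// middleware.ts
import { NextResponse, type NextRequest } from "next/server";

declare function getUserFromRequest(req: NextRequest): Promise<{ id: string } | null>;

export async function middleware(req: NextRequest) {
  const user = await getUserFromRequest(req);
  // Logged-in users never reach /login or /signup, so the page-level
  // `if (user) router.push("/")` effect is no longer needed there
  if (user) {
    return NextResponse.redirect(new URL("/", req.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ["/login", "/signup"] };
```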

### Issue 2: onboarding server redirects

Potential race condition: both the server component and the client
`<OnboardingProvider />` perform redirects. The server component
redirects happen first, but if onboarding state changes after mount, the
provider can redirect again, causing rapid mount/unmount cycles.

**Solution**

Centralize all onboarding redirects in `/onboarding`, which is now a
client component that performs client-side redirects only and displays
a spinner while doing so.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested locally login/logout/signup and trying to access `/login`
and `/signup` being logged in
2025-11-14 15:30:58 +01:00
Reinier van der Leer
a054740aac dx(frontend): Fix source map upload configuration (#11378)
Source maps aren't being uploaded to Sentry, so debugging errors in
production is really hard.

### Changes 🏗️

- Fix config so source maps are found and uploaded to Sentry
- Disable deleting source maps after upload (so they are available in
the browser)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested locally
2025-11-13 17:23:37 +01:00
Abhimanyu Yadav
f78a6df96c feat(frontend): add static style and beads in custom edge (#11364)
<!-- Clearly explain the need for these changes: -->
This PR enhances the visual feedback in the flow editor by adding
animated "beads" that travel along edges during execution. This provides
users with clear, real-time visualization of data flow and execution
progress through the graph, making it easier to understand which
connections are active and track execution state.


https://github.com/user-attachments/assets/df4a4650-8192-403f-a200-15f6af95e384

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- **Added new edge data types and structure:**
- Added `CustomEdgeData` type with `isStatic`, `beadUp`, `beadDown`, and
`beadData` properties (see the sketch after this list)
  - Created `CustomEdge` type extending XYEdge with custom data
  
- **Implemented bead animation components:**
- Added `JSBeads.tsx` - JavaScript-based animation component with
real-time updates
- Added `SVGBeads.tsx` - SVG-based animation component (for future
consideration)
  - Added helper functions for path calculations and bead positioning
  
- **Updated edge rendering:**
  - Modified `CustomEdge` component to display beads during execution
  - Added static edge styling with dashed lines (`stroke-dasharray: 6`)
- Improved visual hierarchy with different stroke styles for
selected/unselected states
  
- **Refactored edge management:**
- Converted `edgeStore` from using `Connection` type to `CustomEdge`
type
- Added `updateEdgeBeads` and `resetEdgeBeads` methods for bead state
management
  - Updated `copyPasteStore` to work with new edge structure
  
- **Added support for static outputs:**
  - Added `staticOutput` property to `CustomNodeData`
- Static edges show continuous bead animation while regular edges show
one-time animation
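
A rough sketch of the edge data shape those bullets describe (exact property types are assumptions):

```ts
import type { Edge } from "@xyflow/react";

type CustomEdgeData = {
  isStatic: boolean; // static edges render dashed and animate continuously
  beadUp: number; // incremented when an execution starts on this edge
  beadDown: number; // incremented when that execution completes
  beadData: unknown[]; // payloads currently travelling along the edge
};

type CustomEdge = Edge & { data?: CustomEdgeData };
```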

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a flow with multiple blocks and verify beads animate along
edges during execution
- [x] Test that beads increment when execution starts (`beadUp`) and
decrement when completed (`beadDown`)
- [x] Verify static edges display with dashed lines and continuous
animation
  - [x] Confirm copy/paste operations preserve edge data and bead states
  - [x] Test edge animations performance with complex graphs (10+ nodes)
  - [x] Verify bead animations complete properly before disappearing
- [x] Test that multiple beads can animate on the same edge for
concurrent executions
- [x] Verify edge selection/deletion still works with new visualization
- [x] Test that bead state resets properly when starting new executions
2025-11-13 15:36:40 +00:00
Reinier van der Leer
4d43570552 Merge branch 'master' into dev 2025-11-13 12:44:43 +01:00
Bently
d3d78660da fix(frontend/builder): Generate unique IDs for copied blocks (#11344)
## Changes 🏗️

- Clear backend_id when pasting blocks to prevent duplicate ID errors
- Add copy/paste functionality to new FlowEditor
- Ensure pasted blocks use newly generated UUIDs when saving

Fixes issue where copying and pasting blocks would fail with 'Unique
constraint failed' error because the old backend_id was being reused
instead of the new node ID.
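
A sketch of the paste path (field names assumed): the clone drops `backend_id` and takes a fresh UUID, so saving never reuses the old database id.

```ts
import { v4 as uuid } from "uuid";

type BuilderNode = {
  id: string;
  data: { backend_id?: string } & Record<string, unknown>;
};

function cloneForPaste(node: BuilderNode): BuilderNode {
  return {
    ...node,
    id: uuid(), // newly generated frontend node id
    data: { ...node.data, backend_id: undefined }, // backend assigns a new id on save
  };
}
```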

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add any block to the builder, save and run the graph, and wait for
it to finish; then copy the first block and paste it, try to use it, and
it should now work without any issues/errors



https://github.com/user-attachments/assets/c24f9a9a-8e4f-4988-8731-cddc34a0da13

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-11-13 09:48:49 +00:00
Reinier van der Leer
749be06599 fix(blocks/ai): Make AI List Generator block more reliable (#11317)
- Resolves #11305

### Changes 🏗️

Make `AIListGeneratorBlock` more reliable:
- Leverage `AIStructuredResponseGenerator`'s robust
prompt/retry/validate logic
  - Use JSON format instead of Python list format
  - Add `force_json_output` toggle
- Fix output instructions in prompt (only string values allowed)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Works without `force_json_output`
  - [x] Works with `force_json_output`
  - [x] Retry mechanism works as intended
2025-11-13 09:46:23 +00:00
Krzysztof Czerwinski
27d886f05c feat(platform): WebSocket Onboarding notifications (#11335)
Use WebSocket notifications from the backend to display confetti.

### Changes 🏗️

- Send WebSocket notifications to the browser when new onboarding steps
are completed
- Handle WebSocket notification events in the Wallet and use them
instead of frontend-based logic to play confetti (fixes confetti
appearing on every refresh)
- Scroll to newly completed tasks when wallet opens just before confetti
plays
- Fix: make `Run again` button complete `RE_RUN_AGENT` task

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confetti are displayed when previously uncompleted tasks are
completed
  - [x] Confetti do not appear on page refresh
  - [x] Wallet scrolls on open before confetti is displayed
  - [x] `Run again` button completes `RE_RUN_AGENT` task
2025-11-13 09:45:13 +00:00
Abhimanyu Yadav
be01a1316a fix(tests/frontend): resolve flaky test due to duplicate heading elements on marketplace page (#11369)
This PR fixes a flaky test issue in the signup flow where Playwright's
strict mode was failing due to duplicate heading elements on the
marketplace page.

### Problem
The test was failing intermittently with the following error:

```
Error: strict mode violation: getByText('Bringing you AI agents designed by thinkers from around the world') resolved to 2 elements
```

This occurred because the marketplace page contains two identical `<h3>`
elements with the same text, causing Playwright's strict mode to throw
an error when trying to select a single element.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run E2E tests locally multiple times to ensure no flakiness
  - [x] Check CI pipeline runs successfully
2025-11-13 09:11:55 +00:00
Bently
32bb6705d1 fix(frontend/library): fix schedule display issues for recurring schedules (#11362)
Fixes two related bugs in the agent scheduling UI that caused confusion
for users setting up recurring schedules:

1. **"on day nan of every month" display bug**: When scheduling an agent
to repeat every N days (e.g., "every 2 days"), the schedule info panel
incorrectly displayed "on day nan of every month" instead of the correct
"Every N days at HH:MM" format.

2. **Confusing time picker for hourly intervals**: When setting up a
schedule with "every N hours", the UI displayed a time picker labeled
"at 9 o'clock" which was confusing because the time setting is ignored
for hourly intervals. Users were unclear about what this setting meant
or if it had any effect.

### Changes 🏗️

**Fixed `humanizeCronExpression` function**
(`autogpt_platform/frontend/src/lib/cron-expression-utils.ts`):
- Reordered cron expression parsing logic to handle day intervals
(`*/N`) before monthly checks
- Added `!dayOfMonth.startsWith("*/")` guard to monthly and yearly
checks to prevent misinterpreting day intervals as monthly day lists
- This ensures expressions like `0 9 */2 * *` (every 2 days at 9:00) are
correctly displayed as "Every 2 days at 09:00" instead of "on day nan of
every month"

**Updated `CronScheduler` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/CronScheduler.tsx`):
- Hide `TimeAt` component for custom intervals with unit "hours" (time
is ignored for hourly intervals)
- Pass context-aware label to `TimeAt`: "Starting at" for custom day
intervals, "At" for other frequencies
- This clarifies that the time setting is the starting time for day
intervals and removes confusion for hourly intervals

**Enhanced `TimeAt` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/TimeAt.tsx`):
- Added optional `label` prop (defaults to "At") to allow context-aware
labeling
- Component now displays "Starting at" when used with custom day
intervals for better clarity

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Schedule an agent with "Custom" frequency, "Every 2 days" interval
- verify it displays as "Every 2 days at HH:MM" in the schedule info
panel (not "on day nan of every month")
- [x] Schedule an agent with "Monthly" frequency - verify it displays
correctly (e.g., "On day 1, 15 of every month at HH:MM")


<img width="845" height="388" alt="image"
src="https://github.com/user-attachments/assets/02ed0b73-bf5e-48fd-a7b0-6f4d4687eb13"
/>
<img width="839" height="374" alt="image"
src="https://github.com/user-attachments/assets/be62eee2-3fdd-4b20-aecf-669c3c6c6fb2"
/>
2025-11-13 08:13:21 +00:00
Reinier van der Leer
536e2a5ec8 fix(blocks): Make Smart Decision Maker tool pin handling consistent and reliable (#11363)
- Resolves #11345

### Changes 🏗️

- Move tool use routing logic from frontend to backend: routing info was
being baked into graph links by the frontend, inconsistently, causing
issues
- Rework tool use routing to use target node ID instead of target block
name
- Add a bit of magic to `NodeOutputs` component to show tool node title
instead of ID

DX:
- Removed `build` from `.prettierignore` -> re-enable formatting for
builder components

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Use SDM block in a graph; verify it works
  - [x] Use SDM block with agent executor block as tool; verify it works
  - Tests for `parse_execution_output` pass (checked by CI)
2025-11-12 18:55:38 +01:00
Abhimanyu Yadav
a3e5f7fce2 fix(frontend): consolidate graph save logic to prevent duplicate event listeners (#11367)
## Summary

This PR fixes an issue where multiple keyboard save event listeners were
being registered when the same save hook was used in multiple
components, causing the graph to be saved multiple times (3x) when using
Ctrl/Cmd+S.

## Changes

- **Created a centralized `useSaveGraph` hook** in
`/hooks/useSaveGraph.ts` that encapsulates all graph saving logic
- **Refactored `useNewSaveControl`** to use the new centralized hook
instead of duplicating save logic
- **Updated `useRunGraph` and `useScheduleGraph`** to use the
centralized `useSaveGraph` hook directly
- **Simplified the save control component** by removing redundant logic
and using cleaner naming conventions

## Problem

The previous implementation had the save logic duplicated in
`useNewSaveControl`, and when this hook was used in multiple places
(NewSaveControl component, RunGraph, ScheduleGraph), each instance would
register its own keyboard event listener for Ctrl/Cmd+S. This caused:
- Multiple save requests being sent simultaneously
- "Unique constraint failed on the fields: ('id', 'version')" errors
from the backend
- Poor performance due to unnecessary re-renders

## Solution

By centralizing the save logic in a dedicated `useSaveGraph` hook:
- Save logic is now in one place, making it easier to maintain
- Components can use the save functionality without registering
duplicate event listeners
- The keyboard shortcut listener is only registered once in the
`useNewSaveControl` hook
- Other components (RunGraph, ScheduleGraph) can call `saveGraph`
directly without side effects
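
A sketch of the split (internals assumed): one hook owns the save logic, and only `useNewSaveControl` registers the keyboard listener.

```ts
import { useCallback, useEffect } from "react";

declare function saveGraphToBackend(): Promise<void>; // stand-in for the real mutation

export function useSaveGraph() {
  const saveGraph = useCallback(() => saveGraphToBackend(), []);
  return { saveGraph };
}

export function useNewSaveControl() {
  const { saveGraph } = useSaveGraph();
  // The Ctrl/Cmd+S listener lives only here, so it is registered exactly once
  useEffect(() => {
    const onKeyDown = (e: KeyboardEvent) => {
      if ((e.metaKey || e.ctrlKey) && e.key === "s") {
        e.preventDefault();
        void saveGraph();
      }
    };
    window.addEventListener("keydown", onKeyDown);
    return () => window.removeEventListener("keydown", onKeyDown);
  }, [saveGraph]);
  return { saveGraph };
}
```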

## Testing
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified Ctrl/Cmd+S saves the graph only once
  - [x] Tested save functionality from Save Control popup
- [x] Confirmed Run Graph and Schedule Graph still save before execution
  - [x] Verified no duplicate save requests in network tab
  - [x] Checked that save toast notifications appear correctly
2025-11-12 14:16:30 +00:00
Swifty
d674cb80e2 refactor(backend): Improve Block Error Handling (#11366)
We need a way to differentiate between serious errors that cause on call
alerts and block errors.
This PR address this need by ensuring all errors that occur during
execution of a block are of Subtype BlockError

### Changes 🏗️

- Introduced `BlockError` and its subtypes
- Updated current errors that are emitted by blocks to use `BlockError`
- Updated the executor manager to wrap errors emitted while running a
block that are not of type `BlockError` in `BlockUnknownError`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] checked tests still work
  - [x] Ensured block error message is readable and useful
2025-11-12 13:58:42 +00:00
Abhimanyu Yadav
c3a6235cee feat(frontend): integrate drag-and-drop functionality in new builder (#11341)
In this PR, I’ve added drag-and-drop functionality to the new builder
using built-in HTML drag-and-drop.



https://github.com/user-attachments/assets/b27c281e-6216-4131-9a89-e10b0dd56a8f

### Changes

- Added ReactFlowProvider to manage flow state in BuilderPage and Flow
components.
- Implemented drag-and-drop support for blocks in the NewControlPanel,
allowing users to drag blocks from the menu and drop them onto the
canvas.
- Enhanced the Block component to handle drag events and provide visual
feedback during dragging.
- Updated useFlow hook to include onDragOver and onDrop handlers for
managing block placement.
- Adjusted nodeStore to accept position parameters for added blocks,
improving placement accuracy.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I’ve tried dragging and dropping multiple blocks, and it works
perfectly as shown in the video.
2025-11-11 06:05:06 +00:00
Abhimanyu Yadav
43638defa2 feat(frontend): add context menu in custom node (#11342)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11339

In this PR, I’ve added a dropdown menu to the custom node. This allows
you to delete a node, copy a node, and if the node is an agent node, you
can also navigate to that specific agent.

<img width="633" height="403" alt="Screenshot 2025-11-08 at 7 24 38 PM"
src="https://github.com/user-attachments/assets/89dd2906-95f5-40a5-82d1-de05075e4f30"
/>

### Changes
- Added context menu to custom nodes with copy, delete, and open agent
options
- Added performance optimization with memo for custom edge

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All three buttons are working perfectly.
  - [x] The “Go to graph” option is only visible in the sub-graph node.
2025-11-11 06:04:56 +00:00
Abhimanyu Yadav
9371528aab feat(frontend): add copy paste functionality in new builder (#11339)
In the new builder, I’ve added a copy-paste functionality using the
keyboard.


https://github.com/user-attachments/assets/3106ae86-3f47-4807-a598-9c0b166eaae9

### Changes 🏗️

- Added useCopyPasteKeyboard hook for handling keyboard shortcuts
- Created new copyPasteStore for state management
- Implemented performance optimizations (memo on CustomEdge)
- Updated nodeStore and edgeStore to support the functionality

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The copy-paste functionality is working correctly as you can see
in the video.
2025-11-10 15:39:53 +00:00
Zamil Majdy
711c439642 fix(frontend): enable admin impersonation for server-side rendered requests (#11343)
## Summary
Fix admin impersonation not working for graph execution requests that
are server-side rendered.

## Problem
- Build page uses SSR, so API calls go through _makeServerRequest
instead of _makeClientRequest
- Server-side requests cannot access sessionStorage where impersonation
ID is stored
- Graph execution requests were missing X-Act-As-User-Id header

## Simple Solution
1. **Store impersonation in cookie** (useAdminImpersonation.ts): 
   - Set/clear cookie alongside sessionStorage for server access
2. **Read cookie on server** (_makeServerRequest in client.ts):
   - Check for impersonation cookie using Next.js cookies() API  
   - Create fake Request with X-Act-As-User-Id header
   - Pass to existing makeAuthenticatedRequest flow
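
A sketch of the server-side read (the cookie name is an assumption):

```ts
import { cookies } from "next/headers";

export async function getImpersonationHeader(): Promise<Record<string, string>> {
  const actAsUserId = (await cookies()).get("impersonated-user-id")?.value;
  // Mirror the client path: forward the impersonation target when present
  return actAsUserId ? { "X-Act-As-User-Id": actAsUserId } : {};
}
```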

## Changes Made
- useAdminImpersonation.ts: 2 lines to set/clear cookie
- client.ts: 1 method to read cookie and create header  
- No changes to existing proxy/header/helpers logic

## Result
- Graph execution requests now include the impersonation header
- Works for both client-side and server-side rendered requests
- Minimal changes, leverages existing header forwarding logic
- Backward compatible with all existing functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 21:23:23 +07:00
Ubbe
33989f09d0 feat(frontend): supabase + zustand for speed (#11333)
### Changes 🏗️

This change uses [Zustand](https://github.com/pmndrs/zustand) (a
lightweight state management library) to centralize authentication state
across the app. Previously, each component mounting `useSupabase()`
would create its own local state, causing duplicate API calls and
inconsistent user data. Now, user state is cached globally with Zustand
- when multiple components need auth data, they share the same cached
state instead of each fetching separately. This reduces server load and
improves app responsiveness.
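
A minimal sketch of the shared store (state shape assumed):

```ts
import { create } from "zustand";
import type { User } from "@supabase/supabase-js";

type SupabaseStore = {
  user: User | null;
  isUserLoading: boolean;
  setUser: (user: User | null) => void;
};

export const useSupabaseStore = create<SupabaseStore>((set) => ({
  user: null,
  isUserLoading: true,
  // Every component reading `user` shares this one cached value, so mounting
  // useSupabase() in many places no longer triggers duplicate fetches
  setUser: (user) => set({ user, isUserLoading: false }),
}));
```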

**File structure:**
```
src/lib/supabase/hooks/
├── useSupabase.ts           # React hook interface (modified)
├── useSupabaseStore.ts      # Zustand state management (new)
└── helpers.ts               # Pure business logic (new)
```

**What was extracted to helpers:**
- `ensureSupabaseClient()` - Singleton client initialization
- `fetchUser()` - User fetching with error handling  
- `validateSession()` - Session validation logic
- `refreshSession()` - Session refresh logic
- `handleStorageEvent()` - Cross-tab logout handling

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified no TypeScript errors in modified files
  - [x] Tested login flow works correctly
  - [x] Tested logout flow works correctly
  - [x] Verified session validation on tab focus/visibility
  - [x] Tested cross-tab logout synchronization
  - [x] Confirmed WebSocket disconnection on logout
2025-11-10 12:08:37 +00:00
Zamil Majdy
d6ee402483 feat(platform): Add execution analytics admin endpoint with feature flag bypass (#11327)
This PR adds a comprehensive execution analytics admin endpoint that
generates AI-powered activity summaries and correctness scores for graph
executions, with proper feature flag bypass for admin use.

### Changes 🏗️

**Backend Changes:**
- Added admin endpoint: `/api/executions/admin/execution_analytics`
- Implemented feature flag bypass with `skip_feature_flag=True`
parameter for admin operations
- Fixed async database client usage (`get_db_async_client`) to resolve
async/await errors
- Added batch processing with configurable size limits to handle large
datasets
- Comprehensive error handling and logging for troubleshooting
- Renamed entire feature from "Activity Backfill" to "Execution
Analytics" for clarity

**Frontend Changes:**
- Created clean admin UI for execution analytics generation at
`/admin/execution-analytics`
- Built form with graph ID input, model selection dropdown, and optional
filters
- Implemented results table with status badges and detailed execution
information
- Added CSV export functionality for analytics results
- Integrated with generated TypeScript API client for proper
authentication
- Added proper error handling with toast notifications and loading
states

**Database & API:**
- Fixed critical async/await issue by switching from sync to async
database client
- Updated router configuration and endpoint naming for consistency
- Generated proper TypeScript types and API client integration
- Applied feature flag filtering at API level while bypassing for admin
operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

**Test Plan:**
- [x] Admin can access execution analytics page at
`/admin/execution-analytics`
- [x] Form validation works correctly (requires graph ID, validates
inputs)
- [x] API endpoint `/api/executions/admin/execution_analytics` responds
correctly
- [x] Authentication works properly through generated API client
- [x] Analytics generation works with different LLM models (gpt-4o-mini,
gpt-4o, etc.)
- [x] Results display correctly with appropriate status badges
(success/failed/skipped)
- [x] CSV export functionality downloads correct data
- [x] Error handling displays appropriate toast messages
- [x] Feature flag bypass works for admin users (generates analytics
regardless of user flags)
- [x] Batch processing handles multiple executions correctly
- [x] Loading states show proper feedback during processing

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] No configuration changes required for this feature

**Related to:** PR #11325 (base correctness score functionality)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
2025-11-10 10:27:44 +00:00
Lluis Agusti
08a4a6652d fix(frontend): onboarding provider calls only when user is logged in 2025-11-10 16:53:55 +07:00
Lluis Agusti
58928b516b fix(frontend): onboarding provider calls only when user is logged in 2025-11-10 16:52:39 +07:00
Krzysztof Czerwinski
18bb78d93e feat(platform): WebSocket-based notifications (#11297)
This enables real-time notifications from the backend to the browser via
WebSocket, using a Redis bus to move notifications from the REST process
to the WebSocket process.
This is needed for the (follow-up) backend completion of onboarding
tasks with instant notifications.

### Changes 🏗️

- Add new `AsyncRedisNotificationEventBus` to enable publishing
notifications to the Redis event bus
- Consume notifications in `ws_api.py` similarly to execution events and
send them via WebSocket
- Store WebSocket user connections in `ConnectionManager`
- Add relevant tests and types

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Notifications are sent to the frontend
2025-11-09 01:42:20 +00:00
Swifty
8058b9487b feat(platform): Chat UI refinements with simplified tool status indicators (#11337) 2025-11-07 22:49:03 +01:00
Ubbe
e68896a25a feat(backend): allow regex on CORS allowed origins (#11336)
## Changes 🏗️

Allow dynamic URLs in the CORS config by matching them via regex. This
helps because we currently have front-end preview deployments which are
isolated ( _nice, they don't pollute or override other domains_ ) like:
```
https://autogpt-git-{branch_name}-{commit}-significant-gravitas.vercel.app
```
The front-end builds and works there, but as soon as you log in, any API
requests to endpoints that need auth will fail due to CORS, given our
current CORS config does not support dynamically generated domains.

### Changes

After these changes we can specify dynamic domains to be allowed under
CORS. I also disabled `localhost` origins when the API runs in
production, for safety.

### Before

```yml
cors:
  allowOrigin: "https://dev-builder.agpt.co" # could only specify full URL strings, not dynamic ones
```

### After

```yml
cors:
  allowOrigins:
    - "https://dev-builder.agpt.co"
    - "regex:https://autogpt-git-[a-z0-9-]+\\.vercel\\.app" # dynamic domains supported via regex
```

### Files

- add `build_cors_params` utility to parse literal/regex origins and
block localhost in production (`backend/server/utils/cors.py`)
- apply the helper in both `AgentServer` and `WebsocketServer` so CORS
logic and validations remain consistent
- add reusable `override_config` testing helper and update existing
WebSocket tests to cover the shared CORS behavior
- introduce targeted unit tests for the new CORS helper
(`backend/server/utils/cors_test.py`)

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will know once we made the origin config changes on infra and
test with this...
2025-11-07 23:28:14 +07:00
Ubbe
dfed092869 dx(frontend): make preview deploys work + minor improvements (#11329)
## Changes 🏗️

Make sure we can login on preview deployments generated by Vercel to
test Front-end changes. As of now, the Cloudflare CAPTCHA verification
fails, we don't need to have it active there.

### Minor improvements

<img width="1599" height="755" alt="Screenshot 2025-11-06 at 16 18 10"
src="https://github.com/user-attachments/assets/0a3fb1f3-2d4d-49fe-885f-10f141dc0ce4"
/>

Prevent the following build error:
```
15:58:01.507
     at j (.next/server/app/(no-navbar)/onboarding/reset/page.js:1:5125)
15:58:01.507
     at <unknown> (.next/server/chunks/5826.js:2:14221)
15:58:01.507
     at b.handleCallbackErrors (.next/server/chunks/5826.js:43:43068)
15:58:01.507
     at <unknown> (.next/server/chunks/5826.js:2:14194) {
15:58:01.507
   description: "Route /onboarding/reset couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
15:58:01.507
   digest: 'DYNAMIC_SERVER_USAGE'
15:58:01.507
 }
 ```
by making the reset onboarding route a client one. I made a new component, `<LoadingSpinner />`, and that page will show it while onboarding is being reset.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] You can login/signup on the app and use it in the preview URL generated by Vercel
2025-11-07 21:03:57 +07:00
Swifty
5559d978d7 fix(platform): chat duplicate messages (#11332) 2025-11-06 17:20:46 +01:00
Bently
dcecb17bd1 feat(backend): Remove deprecated LLM models and add migration script (#11331)
These models have become deprecated
- deepseek-r1-distill-llama-70b
- gemma2-9b-it
- llama3-70b-8192
- llama3-8b-8192
- google/gemini-flash-1.5

I have removed them and set up a migration that converts all the old
versions of the models to new versions. The model changes are as
follows:

- llama3-70b-8192 → llama-3.3-70b-versatile
- llama3-8b-8192 → llama-3.1-8b-instant
- google/gemini-flash-1.5 → google/gemini-2.5-flash
- deepseek-r1-distill-llama-70b → gpt-5-chat-latest
- gemma2-9b-it → gpt-5-chat-latest 

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Check to see if old models were removed
- [x] Check to see if the migration worked and converted old models to
new ones in the graph
2025-11-06 12:36:42 +00:00
Swifty
a056d9e71a feature(backend): Limit Chat to Auth Users, Limit Agent Runs Per Chat (#11330) 2025-11-06 13:11:15 +01:00
Swifty
42b643579f feat(frontend): Chat UI Frontend (#11290) 2025-11-06 11:37:36 +01:00
seer-by-sentry[bot]
5b52ca9227 fix(platform): Implement backwards-compatible Set operations for input validation (#11197)
### Changes 🏗️

- Replaces `isSupersetOf` and `difference` Set operations with
backwards-compatible implementations using `Array.from` and
`every`/`filter` methods.
- This ensures compatibility with older JavaScript environments that may
not fully support modern Set operations.
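
Generic stand-ins for the two methods (not the exact PR code, but the same `Array.from` + `every`/`filter` approach):

```ts
function isSupersetOf<T>(a: Set<T>, b: Set<T>): boolean {
  // a ⊇ b when every element of b is also in a
  return Array.from(b).every((item) => a.has(item));
}

function difference<T>(a: Set<T>, b: Set<T>): Set<T> {
  // Elements of a that are not in b
  return new Set(Array.from(a).filter((item) => !b.has(item)));
}
```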

Fixes
[BUILDER-451](https://sentry.io/organizations/significant-gravitas/issues/6952591149/).
The issue was that: ES2024 Set methods `isSupersetOf` and `difference`
are unsupported in iOS Safari 16.7, causing a TypeError during component
render.

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 2032240

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6952591149/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [ ] Test on iOS Safari 16.7 to ensure no TypeError occurs during
component render.
- [ ] Verify that the replaced `isSupersetOf` and `difference`
implementations function correctly in other supported browsers.

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
2025-11-06 16:55:47 +07:00
Bently
8e83586d13 feat(frontend): Cookie consent banner and settings (#11306)
Implements a cookie consent banner and settings modal for GDPR
compliance, allowing users to manage preferences for analytics and
monitoring cookies. Integrates consent checks with Sentry, Vercel
Analytics, and Google Analytics, ensuring tracking is only enabled with
user permission. Refactors dialog components for improved layout and
adds consent management utilities and hooks.

#### For code changes:
- [x] Banner appears at bottom of page on first visit with rounded
corners and proper spacing (40px margins)
- [x] Banner shows three buttons: "Reject All", "Settings", and "Accept
All"
- [x] Clicking "Accept All" hides banner and enables
analytics/monitoring
- [x] Clicking "Reject All" hides banner and keeps analytics/monitoring
disabled
- [x] Banner does not reappear after consent is given (check
localStorage: `autogpt_cookie_consent`)

**Cookie Settings Modal:**
- [x] Clicking "Settings" button opens the Cookie Settings modal
- [x] Modal displays three categories: Essential Cookies (always
active), Analytics & Performance (toggle), Error Monitoring & Session
Replay (toggle)
- [x] Clicking "Save Preferences" saves custom settings and closes modal
- [x] Clicking "Accept All" enables all cookies and closes modal
- [x] Clicking "Reject All" disables optional cookies and closes modal
- [x] Modal can be closed with X button or clicking outside

**Consent Persistence:**
- [x] Refresh page after giving consent - banner should not reappear
- [x] Clear localStorage and refresh - banner should reappear
- [x] Consent choices persist across browser sessions

<img width="1123" height="126" alt="image"
src="https://github.com/user-attachments/assets/7425efab-b5cc-4449-802d-0e12bd65053b"
/>

<img width="1124" height="372" alt="image"
src="https://github.com/user-attachments/assets/2f28919a-97e8-44f5-9021-70d3836bb996"
/>
2025-11-06 09:25:43 +00:00
seer-by-sentry[bot]
df9850a141 fix(frontend): prevent state updates on unmounted OnboardingProvider (#11211)
<!-- Clearly explain the need for these changes: -->
Fixes
[BUILDER-48G](https://sentry.io/organizations/significant-gravitas/issues/6960009111/).
The issue was that: Asynchronous API update scheduled via
`setTimeout(0)` in `OnboardingProvider` creates a race condition,
causing React-DOM's portal cleanup (`removeChild`) to fail during
concurrent component unmounting.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Prevents state updates and API calls after the `OnboardingProvider`
component has been unmounted.
- Introduces a `isMounted` ref to track the component's mount status.
- Uses a `pendingUpdatesRef` to manage and cancel pending API update
promises on unmount, preventing memory leaks and errors.
- Ensures that API update errors are only logged if the component is
still mounted.
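
A simplified sketch of the guard (the provider's real bookkeeping also tracks pending promises):

```tsx
import { useEffect, useRef } from "react";

export function useIsMounted() {
  const isMounted = useRef(true);
  useEffect(() => {
    isMounted.current = true;
    return () => {
      // Flipped on cleanup so in-flight callbacks can bail out
      isMounted.current = false;
    };
  }, []);
  return isMounted;
}

// Illustrative use inside the provider:
//   const isMounted = useIsMounted();
//   updateOnboarding(data).catch((err) => {
//     if (isMounted.current) console.error(err); // only log while mounted
//   });
```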


This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2058387

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6960009111/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding works and does not throw errors when unmounted

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-06 16:20:23 +07:00
Reinier van der Leer
cbe4086e79 feat(frontend/marketplace): Update agent download UI (#11322)
- Resolves #11314

### Changes 🏗️

- Change "Download agent" CTA button to action link at bottom of summary
agent info
- Move agent ratings above CTA buttons to prevent it from jumping on
page load
- Update vertical spacings to more closely match designs

![example of new
look](https://github.com/user-attachments/assets/2db4d21e-b4f0-4930-9648-d4d4b20ef6fe)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Designer approves of new look
  - [x] Tests pass
2025-11-06 10:18:29 +01:00
seer-by-sentry[bot]
fbd5f34a61 fix(frontend): Ensure agent version is fetched when active_version_id exists (#11291)
### Changes 🏗️

Fixes
[BUILDER-4HJ](https://sentry.io/organizations/significant-gravitas/issues/6979388537/).
The issue was that: Server-side rendering failed to retrieve the
Supabase access token, causing authenticated API calls to omit the
Authorization header.

- Ensures that the agent version is fetched only when
`creator_agent.active_version_id` exists and the status code is 200.
- Enables the `prefetchGetV2GetAgentByStoreIdQuery` query when
`creator_agent.active_version_id` exists.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2234004

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6979388537/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Loading marketplace works...

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-06 08:29:43 +00:00
Ubbe
0b680d4990 feat(frontend): <Button variant="link" /> (#11320)
## Changes 🏗️

<img width="800" height="800" alt="Screenshot 2025-11-04 at 23 05 22"
src="https://github.com/user-attachments/assets/ecb3f442-8f1b-4a80-a6c9-0c4b6d5e0427"
/>

New `<Button variant="link" />` for when you need to render an HTML
`<button>` but with our link styles.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Looks good
2025-11-06 15:06:56 +07:00
Zamil Majdy
6037f80502 feat(backend): Add correctness score to execution activity generation (#11325)
## Summary
Add AI-generated correctness score field to execution activity status
generation to provide quantitative assessment of how well executions
achieved their intended purpose.

New page:
<img width="1000" height="229" alt="image"
src="https://github.com/user-attachments/assets/5cb907cf-5bc7-4b96-8128-8eecccde9960"
/>

Old page:
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/ece0dfab-1e50-4121-9985-d585f7fcd4d2"
/>


## What Changed
- Added `correctness_score` field (float 0.0-1.0) to
`GraphExecutionStats` model
- **REFACTORED**: Removed duplicate `llm_utils.py` and reused existing
`AIStructuredResponseGeneratorBlock` logic
- Updated activity status generator to use structured responses instead
of plain text
- Modified prompts to include correctness assessment with 5-tier scoring
system:
  - 0.0-0.2: Failure
  - 0.2-0.4: Poor 
  - 0.4-0.6: Partial Success
  - 0.6-0.8: Mostly Successful
  - 0.8-1.0: Success
- Updated manager.py to extract and set both activity_status and
correctness_score
- Fixed tests to work with existing structured response interface

## Technical Details
- **Code Reuse**: Eliminated duplication by using existing
`AIStructuredResponseGeneratorBlock` instead of creating new LLM
utilities
- Added JSON validation with retry logic for malformed responses
- Maintained backward compatibility for existing activity status
functionality
- Score is clamped to valid 0.0-1.0 range and validated
- All type errors resolved and linting passes

## Test Plan
- [x] All existing tests pass with refactored structure
- [x] Structured LLM call functionality tested with success and error
cases
- [x] Activity status generation tested with various execution scenarios
- [x] Integration tests verify both fields are properly set in execution
stats
- [x] No code duplication - reuses existing block logic

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
2025-11-06 04:42:13 +00:00
Nicholas Tindle
37b3e4e82e feat(blocks)!: Update Exa search block to match latest API specification (#11185)
BREAKING CHANGE: Removed deprecated use_auto_prompt field from Input
schema. Existing workflows using this field will need to be updated to
use the type field set to "auto" instead.

## Summary of Changes 📝

This PR comprehensively updates all Exa search blocks to match the
latest Exa API specification and adds significant new functionality
through the Websets API integration.

### Core API Updates 🔄

- **Migration to Exa SDK**: Replaced manual API calls with the official
`exa_py` AsyncExa SDK across all blocks for better reliability and
maintainability
- **Removed deprecated fields**: Eliminated
`use_auto_prompt`/`useAutoprompt` field (breaking change)
- **Fixed incomplete field definitions**: Corrected `user_location`
field definition
- **Added new input fields**: Added `moderation` and `context` fields
for enhanced content filtering

### Enhanced Content Settings 🛠️

- **Text field improvements**: Support both boolean and advanced object
configurations
- **New content options**: 
  - Added `livecrawl` settings (never, fallback, always, preferred)
  - Added `subpages` support for deeper content retrieval
  - Added `extras` settings for links and images
  - Added `context` settings for additional contextual information
- **Updated settings**: Enhanced `highlight` and `summary`
configurations with new query and schema options

### Comprehensive Cost Tracking 💰

- Added detailed cost tracking models:
  - `CostDollars` for monetary costs
  - `CostCredits` for API credit tracking
  - `CostDuration` for time-based costs
- New output fields: `request_id`, `resolved_search_type`,
`cost_dollars`
- Improved response handling to conditionally yield fields based on
availability

### New Websets API Integration 🚀

Added eight new specialized blocks for Exa's Websets API:
- **`websets.py`**: Core webset management (create, get, list, delete)
- **`websets_search.py`**: Search operations within websets
- **`websets_items.py`**: Individual item management (add, get, update,
delete)
- **`websets_enrichment.py`**: Data enrichment operations
- **`websets_import_export.py`**: Bulk import/export functionality
- **`websets_monitor.py`**: Monitor and track webset changes
- **`websets_polling.py`**: Poll for updates and changes

### New Special-Purpose Blocks 🎯

- **`code_context.py`**: Code search capabilities for finding relevant
code snippets from open source repositories, documentation, and Stack
Overflow
- **`research.py`**: Asynchronous research capabilities that explore the
web, gather sources, synthesize findings, and return structured results
with citations

### Code Organization Improvements 📁

- **Removed legacy code**: Deleted `model.py` file containing deprecated
API models
- **Centralized helpers**: Consolidated shared models and utilities in
`helpers.py`
- **Improved modularity**: Each webset operation is now in its own
dedicated file

### Other Changes 🔧

- Updated `.gitignore` for better development workflow
- Updated `CLAUDE.md` with project-specific instructions
- Updated documentation in `docs/content/platform/new_blocks.md` with
error handling, data models, and file input guidelines
- Improved webhook block implementations with SDK integration

### Files Changed 📂

- **Modified (11 files)**:
  - `.gitignore`
  - `autogpt_platform/CLAUDE.md`
  - `autogpt_platform/backend/backend/blocks/exa/answers.py`
  - `autogpt_platform/backend/backend/blocks/exa/contents.py`
  - `autogpt_platform/backend/backend/blocks/exa/helpers.py`
  - `autogpt_platform/backend/backend/blocks/exa/search.py`
  - `autogpt_platform/backend/backend/blocks/exa/similar.py`
  - `autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets.py`
  - `docs/content/platform/new_blocks.md`

- **Added (8 files)**:
  - `autogpt_platform/backend/backend/blocks/exa/code_context.py`
  - `autogpt_platform/backend/backend/blocks/exa/research.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_import_export.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_items.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_monitor.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_polling.py`
  - `autogpt_platform/backend/backend/blocks/exa/websets_search.py`

- **Deleted (1 file)**:
  - `autogpt_platform/backend/backend/blocks/exa/model.py`

### Migration Guide 🚦

For users with existing workflows using the deprecated `use_auto_prompt`
field:
1. Remove the `use_auto_prompt` field from your input configuration
2. Set the `type` field to `ExaSearchTypes.AUTO` (or "auto" in JSON) to
achieve the same behavior
3. Review any custom content settings as the structure has been enhanced

### Testing Recommendations 

- Test existing workflows to ensure they handle the breaking change
- Verify cost tracking fields are properly returned
- Test new content settings options (livecrawl, subpages, extras,
context)
- Validate websets functionality if using the new Websets API blocks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] made + ran a test agent for the blocks and flows between them
[Exa
Tests_v44.json](https://github.com/user-attachments/files/23226143/Exa.Tests_v44.json)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Migrates Exa blocks to AsyncExa SDK, adds comprehensive
Websets/research/code-context blocks, updates existing
search/content/answers/similar, deletes legacy models, adjusts
tests/docs; breaking: remove `use_auto_prompt` in favor of
`type="auto"`.
> 
> - **Backend — Exa integration (SDK migration & BREAKING)**:
> - Replace manual HTTP calls with `exa_py.AsyncExa` across `search`,
`similar`, `contents`, `answers`, and webhooks; richer outputs
(citations, context, costs, resolved search type).
>   - BREAKING: remove `Input.use_auto_prompt`; use `type = "auto"`.
> - Centralize models/utilities in `exa/helpers.py` (content settings,
cost models, result mappers).
> - **New Blocks**:
> - **Websets**: management (`websets.py`), searches, items,
enrichments, imports/exports, monitors, polling (new files under
`exa/websets_*`).
> - **Research**: async research task create/get/wait/list
(`exa/research.py`).
> - **Code Context**: code snippet/context retrieval
(`exa/code_context.py`).
> - **Removals**:
>   - Delete deprecated `exa/model.py`.
> - **Docs & DX**:
> - Update `docs/new_blocks.md` (error handling, models, file input) and
`CLAUDE.md`; ignore backend logs in `.gitignore`.
> - **Frontend Tests**:
> - Split/extend “e” block tests and improve block add robustness in
Playwright (`build.spec.ts`, `build.page.ts`).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e5e572322. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added multiple Exa research and webset management blocks for task
creation, monitoring, and completion tracking.
* Introduced new search capabilities including code context retrieval,
content search, and enhanced filtering options.
* Added webset enrichment, import/export, and item management
functionality.
  * Expanded search with location-based and category filters.

* **Documentation**
* Updated guidance on error handling, data models, and file input
handling.

* **Refactor**
* Modernized backend API integration with improved response structure
and error reporting.
  * Simplified configuration options for search operations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-05 19:52:48 +00:00
Reinier van der Leer
de7c5b5c31 Merge branch 'master' into dev 2025-11-05 20:17:27 +01:00
Reinier van der Leer
d68dceb9c1 fix(backend/executor): Improve graph execution permission check (#11323)
- Resolves #11316
- Durable fix to replace #11318

### Changes 🏗️

- Expand graph execution permissions check
  - Don't require library membership for execution as a sub-graph (see the sketch below)
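
A minimal sketch of the broadened check, assuming hypothetical helper
names (`is_in_library`, `is_listed_in_marketplace`) rather than the actual
executor code:

```python
async def is_in_library(user_id: str, graph_id: str) -> bool:
    ...  # placeholder: query the user's library

async def is_listed_in_marketplace(graph_id: str) -> bool:
    ...  # placeholder: check for an approved store listing

async def can_execute_graph(user_id: str, graph_id: str, *, as_sub_graph: bool) -> bool:
    # Library membership still grants execution access.
    if await is_in_library(user_id, graph_id):
        return True
    # Sub-graph runs no longer require library membership: a graph that is
    # publicly available in the Marketplace may be executed as well.
    return as_sub_graph and await is_listed_in_marketplace(graph_id)
```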

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can run sub-agent with non-latest graph version
- [x] Can run sub-agent that is available in Marketplace but not added
to Library
2025-11-05 17:13:41 +00:00
Zamil Majdy
193866232c hotfix(backend): fix rate-limited messages blocking queue by republishing to back (#11326)
## Summary
Fix critical queue blocking issue where rate-limited user messages
prevent other users' executions from being processed, causing the 135
late executions reported in production.

## Root Cause Analysis
When a user exceeds `max_concurrent_graph_executions_per_user` (25), the
executor uses `basic_nack(requeue=True)` which sends the message to the
**FRONT** of the RabbitMQ queue. This creates an infinite blocking loop
where:
1. Rate-limited message goes to front of queue
2. Gets processed, hits rate limit again  
3. Goes back to front of queue
4. Blocks all other users' messages indefinitely

## Solution Implementation

### 🔧 Core Changes
- **New setting**: `requeue_by_republishing` (default: `True`) in
`backend/util/settings.py` (sketched after this list)
- **Smart `_ack_message`**: Automatically uses republishing when
`requeue=True` and setting enabled
- **Efficient implementation**: Uses existing `self.run_client`
connection instead of creating new ones
- **Integration test**: Real RabbitMQ test validates queue ordering
behavior
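
As a rough illustration, the new flag could be a standard Pydantic
settings field (the field name comes from this PR; the surrounding class
and base are assumptions):

```python
from pydantic import Field
from pydantic_settings import BaseSettings

class ExecutorSettings(BaseSettings):  # class name assumed for illustration
    # When True, rate-limited messages are requeued by republishing them to
    # the BACK of the queue instead of basic_nack(requeue=True), which would
    # return them to the FRONT.
    requeue_by_republishing: bool = Field(default=True)
```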

### 🔄 Technical Implementation
**Before (blocking):**
```python
basic_nack(delivery_tag, requeue=True)  # Goes to FRONT of queue 
```

**After (non-blocking):**
```python
if requeue and self.config.requeue_by_republishing:
    # First: Republish to BACK of queue
    self.run_client.publish_message(...)
    # Then: Reject without requeue
    basic_nack(delivery_tag, requeue=False)
```
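
For reference, the same republish-then-reject pattern as a self-contained
sketch using plain `pika` (function and queue names here are illustrative,
not the project's actual code):

```python
import pika
from pika.adapters.blocking_connection import BlockingChannel

def requeue_to_back(channel: BlockingChannel, queue_name: str,
                    body: bytes, delivery_tag: int) -> None:
    # Publish a fresh copy to the BACK of the queue (the default exchange
    # routes by queue name)...
    channel.basic_publish(exchange="", routing_key=queue_name, body=body)
    # ...then reject the original delivery WITHOUT requeueing, so it cannot
    # jump back to the front and starve other users' messages.
    channel.basic_nack(delivery_tag=delivery_tag, requeue=False)
```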

### 📊 Impact
- **Other users' executions no longer blocked** by rate-limited users
- **Fair queue processing** - FIFO behavior maintained for all users
- **Rate limiting still works** - just doesn't block others
- **Configurable** - can revert to old behavior with `requeue_by_republishing=False`
- **Zero performance impact** - uses existing connections

## Test Plan
- **Integration test**: `test_requeue_integration.py` validates real
RabbitMQ queue ordering
- **Scenario testing**: Confirms rate-limited messages go to back of
queue
- **Cross-user validation**: Verifies other users' messages process
correctly
- **Setting test**: Confirms configuration loads with correct defaults

## Deployment Strategy
This is a **hotfix** that can be deployed immediately:
- **Backward compatible**: Old behavior available via config
- **Safe default**: New behavior is safer than current state
- **No breaking changes**: All existing functionality preserved
- **Immediate relief**: Resolves production queue blocking

## Files Modified
- `backend/executor/manager.py`: Enhanced `_ack_message` logic and
`_requeue_message_to_back` method
- `backend/util/settings.py`: Added `requeue_by_republishing`
configuration field
- `test_requeue_integration.py`: Integration test for queue ordering
validation

## Related Issues
Fixes the 135 late executions issue where messages were stuck in QUEUED
state despite available executor capacity (583m/600m utilization).

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-05 16:24:07 +00:00
Krzysztof Czerwinski
979826f559 fix(platform): Wallet fixes (#11304)
### Changes 🏗️

- Unmask for Sentry:
  - Agent name & creator on onboarding cards
  - Edge paths
  - Block I/O names
- Prevent firing `onClick` when onboarding agents are loading
- Prevent confetti on null elements and top-left corner
- Fix tooltip on Wallet hover
- Fix `0` appearing in place of notification dot on the Wallet button

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding works and can be completed
  - [x] Wallet confetti works properly
  - [x] Tooltip works
2025-11-05 14:17:43 +00:00
Swifty
2f87e13d17 feat(platform): Chat system backend (#11230)
Implements foundational backend infrastructure for chat-based agent
interaction system. Users will be able to discover, configure, and run
marketplace agents through conversational AI.

**Note:** Chat routes are behind a feature flag 

### Changes 🏗️

**Core Chat System:**
- Chat service with LLM orchestration (Claude 3.5 Sonnet, Haiku, GPT-4)
- REST API routes for sessions and messages
- Database layer for chat persistence
- System prompts and configuration

**5 Conversational Tools:**
1. `find_agent` - Search marketplace by keywords (schema sketch below)
2. `get_agent_details` - Fetch agent info, inputs, credentials
3. `get_required_setup_info` - Check user readiness, missing credentials
4. `run_agent` - Execute agents immediately
5. `setup_agent` - Configure scheduled execution with cron
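
For illustration, a tool like `find_agent` might be declared to the LLM
with a JSON-schema spec along these lines (the exact definition isn't
shown in this PR, so the field names here are assumptions):

```python
# Hypothetical tool declaration for LLM tool-calling; the real definition
# in the chat service may differ.
FIND_AGENT_TOOL = {
    "name": "find_agent",
    "description": "Search the marketplace for agents matching the given keywords.",
    "input_schema": {
        "type": "object",
        "properties": {
            "keywords": {"type": "string", "description": "Search terms"},
        },
        "required": ["keywords"],
    },
}
```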

**Testing:**
- 28 tests across chat tools (23 passing, 5 skipped for scheduler)
- Test fixtures for simple, LLM, and Firecrawl agents
- Service and data layer tests

**Bug Fixes:**
- Fixed `setup_agent.py` to create schedules instead of immediate
execution
- Fixed graph lookup to use UUID instead of username/slug
- Fixed credential matching by provider/type instead of ID
- Fixed internal tool calls to use `._execute()` instead of `.execute()`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All 28 chat tool tests pass (23 pass, 5 skip - require scheduler)
  - [x] Code formatting and linting pass
  - [x] Tool execution flow validated through unit tests
  - [x] Agent discovery, details, and execution tested
  - [x] Credential parsing and matching tested

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - all existing settings compatible.
2025-11-05 13:49:01 +00:00
Zamil Majdy
2ad5a88a5c feat(frontend): Change copywriting for execution task summary (#11324)
### Changes 🏗️

Change copywriting for execution task summary

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manual review
2025-11-05 12:59:21 +00:00
Abhimanyu Yadav
e9cd40c0d4 fix(frontend): Correctly rendering all types of outputs in the custom node in the new builder (#11258)
Currently, we render all output types as plain text, even when the output
is a video, image, or another media type. This PR fixes that by rendering
each output type correctly. Some output actions also weren't working, so
those are fixed as well.

<img width="1486" height="1080" alt="Screenshot 2025-10-27 at 4 36
33 PM"
src="https://github.com/user-attachments/assets/4e4ee43f-5400-477e-8fa9-2914acf11466"
/>

<img width="463" height="683" alt="Screenshot 2025-10-27 at 4 39 00 PM"
src="https://github.com/user-attachments/assets/bfc09c00-58dd-4a0d-96a2-aa51cc282797"
/>

<img width="1455" height="753" alt="Screenshot 2025-10-27 at 4 36 56 PM"
src="https://github.com/user-attachments/assets/52870ffe-3e47-4b0f-bfa3-8d8bbe38cbbd"
/>

<img width="1131" height="1062" alt="Screenshot 2025-10-27 at 4 37
17 PM"
src="https://github.com/user-attachments/assets/e55040e9-33e6-45a8-8397-bf912e93840f"
/>

### Changes 🏗️

- Add a new design for the node output.
- Render the correct HTML tag for each type.
- Make all the output actions below the data section work, such as
viewing the complete data or copying it.
- Add a “View more” button: only two output pins are shown by default;
if there are more, all output can be viewed in a dialog box.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] Able to render different types of output data correctly.
   - [x] All output actions work correctly.

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-11-05 07:01:49 +00:00
dependabot[bot]
4744675ef9 chore(frontend/deps-dev): bump the development-dependencies group across 1 directory with 13 updates (#11288)
Bumps the development-dependencies group with 13 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests) | `4.1.1` | `4.1.2` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.55.0` | `1.56.1` |
| [@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query) | `5.86.0` | `5.91.2` |
| [@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools) | `5.87.3` | `5.90.2` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `24.3.1` | `24.9.2` |
| [axe-playwright](https://github.com/abhinaba-ghosh/axe-playwright) | `2.1.0` | `2.2.2` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.4` | `13.3.2` |
| [msw](https://github.com/mswjs/msw) | `2.11.1` | `2.11.6` |
| [msw-storybook-addon](https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon) | `2.0.5` | `2.0.6` |
| [orval](https://github.com/orval-labs/orval) | `7.11.2` | `7.15.0` |
| [pbkdf2](https://github.com/browserify/pbkdf2) | `3.1.3` | `3.1.5` |
| [prettier-plugin-tailwindcss](https://github.com/tailwindlabs/prettier-plugin-tailwindcss) | `0.6.14` | `0.7.1` |
| [typescript](https://github.com/microsoft/TypeScript) | `5.9.2` | `5.9.3` |


Updates `@chromatic-com/storybook` from 4.1.1 to 4.1.2

Updates `@playwright/test` from 1.55.0 to 1.56.1

Updates `@tanstack/eslint-plugin-query` from 5.86.0 to 5.91.2

Updates `@tanstack/react-query-devtools` from 5.87.3 to 5.90.2

Updates `@types/node` from 24.3.1 to 24.9.2

Updates `axe-playwright` from 2.1.0 to 2.2.2

Updates `chromatic` from 13.1.4 to 13.3.2

Updates `msw` from 2.11.1 to 2.11.6

Updates `msw-storybook-addon` from 2.0.5 to 2.0.6

Updates `orval` from 7.11.2 to 7.15.0
<br />

Updates `pbkdf2` from 3.1.3 to 3.1.5
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/browserify/pbkdf2/blob/master/CHANGELOG.md">pbkdf2's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/browserify/pbkdf2/compare/v3.1.4...v3.1.5">v3.1.5</a>
- 2025-09-23</h2>
<h3>Commits</h3>
<ul>
<li>[Fix] only allow finite iterations <a
href="67bd94dbbf"><code>67bd94d</code></a></li>
<li>[Fix] restore node 0.10 support <a
href="8f59d962f7"><code>8f59d96</code></a></li>
<li>[Fix] check parameters before the &quot;no Promise&quot; bailout <a
href="d2dc5f052c"><code>d2dc5f0</code></a></li>
</ul>
<h2><a
href="https://github.com/browserify/pbkdf2/compare/v3.1.3...v3.1.4">v3.1.4</a>
- 2025-09-22</h2>
<h3>Commits</h3>
<ul>
<li>[Deps] update <code>create-hash</code>, <code>ripemd160</code>,
<code>sha.js</code>, <code>to-buffer</code> <a
href="8dbf49b382"><code>8dbf49b</code></a></li>
<li>[meta] update repo URLs <a
href="d15bc351de"><code>d15bc35</code></a></li>
<li>[Dev Deps] update <code>@ljharb/eslint-config</code> <a
href="aaf870b1d1"><code>aaf870b</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3687905291"><code>3687905</code></a>
v3.1.5</li>
<li><a
href="67bd94dbbf"><code>67bd94d</code></a>
[Fix] only allow finite iterations</li>
<li><a
href="8f59d962f7"><code>8f59d96</code></a>
[Fix] restore node 0.10 support</li>
<li><a
href="d2dc5f052c"><code>d2dc5f0</code></a>
[Fix] check parameters before the &quot;no Promise&quot; bailout</li>
<li><a
href="b2ad6154b9"><code>b2ad615</code></a>
v3.1.4</li>
<li><a
href="8dbf49b382"><code>8dbf49b</code></a>
[Deps] update <code>create-hash</code>, <code>ripemd160</code>,
<code>sha.js</code>, <code>to-buffer</code></li>
<li><a
href="aaf870b1d1"><code>aaf870b</code></a>
[Dev Deps] update <code>@ljharb/eslint-config</code></li>
<li><a
href="d15bc351de"><code>d15bc35</code></a>
[meta] update repo URLs</li>
<li>See full diff in <a
href="https://github.com/browserify/pbkdf2/compare/v3.1.3...v3.1.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `prettier-plugin-tailwindcss` from 0.6.14 to 0.7.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/prettier-plugin-tailwindcss/releases">prettier-plugin-tailwindcss's
releases</a>.</em></p>
<blockquote>
<h2>v0.7.1</h2>
<h3>Fixed</h3>
<ul>
<li>Match against correct name of dynamic attributes when using regexes
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/410">#410</a>)</li>
</ul>
<h2>v0.7.0</h2>
<h3>Added</h3>
<ul>
<li>Format quotes in <code>@source</code>, <code>@plugin</code>, and
<code>@config</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/387">#387</a>)</li>
<li>Sort in function calls in Twig (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/358">#358</a>)</li>
<li>Sort in callable template literals (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Sort in function calls mixed with property accesses (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Support regular expression patterns for attributes (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
<li>Support regular expression patterns for function names (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Improved monorepo support by loading Tailwind CSS relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Improved monorepo support by loading v3 configs relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Fallback to Tailwind CSS v4 instead of v3 by default (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/390">#390</a>)</li>
<li>Don't augment global Prettier <code>ParserOptions</code> and
<code>RequiredOptions</code> types (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/354">#354</a>)</li>
<li>Drop support for <code>prettier-plugin-import-sort</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/385">#385</a>)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Handle quote escapes in LESS when sorting <code>@apply</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/392">#392</a>)</li>
<li>Fix whitespace removal inside nested concat and template expressions
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/396">#396</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/prettier-plugin-tailwindcss/blob/main/CHANGELOG.md">prettier-plugin-tailwindcss's
changelog</a>.</em></p>
<blockquote>
<h2>[0.7.1] - 2025-10-17</h2>
<h3>Fixed</h3>
<ul>
<li>Match against correct name of dynamic attributes when using regexes
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/410">#410</a>)</li>
</ul>
<h2>[0.7.0] - 2025-10-14</h2>
<h3>Added</h3>
<ul>
<li>Format quotes in <code>@source</code>, <code>@plugin</code>, and
<code>@config</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/387">#387</a>)</li>
<li>Sort in function calls in Twig (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/358">#358</a>)</li>
<li>Sort in callable template literals (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Sort in function calls mixed with property accesses (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/367">#367</a>)</li>
<li>Support regular expression patterns for attributes (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
<li>Support regular expression patterns for function names (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/405">#405</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Improved monorepo support by loading Tailwind CSS relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Improved monorepo support by loading v3 configs relative to the
input file instead of prettier config file (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/386">#386</a>)</li>
<li>Fallback to Tailwind CSS v4 instead of v3 by default (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/390">#390</a>)</li>
<li>Don't augment global Prettier <code>ParserOptions</code> and
<code>RequiredOptions</code> types (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/354">#354</a>)</li>
<li>Drop support for <code>prettier-plugin-import-sort</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/385">#385</a>)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Handle quote escapes in LESS when sorting <code>@apply</code> (<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/392">#392</a>)</li>
<li>Fix whitespace removal inside nested concat and template expressions
(<a
href="https://redirect.github.com/tailwindlabs/prettier-plugin-tailwindcss/pull/396">#396</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a0fea3f3c2"><code>a0fea3f</code></a>
0.7.1</li>
<li><a href="https://github.com/tailwindlabs/pre...

_Description has been truncated_

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-11-04 22:48:21 +07:00
Zamil Majdy
910fd2640d hotfix(backend): Temporarily disable library existence check for graph execution (#11318)
### Changes 🏗️

`add_store_agent_to_library` does not add subagents to the user library,
so this check can cause issues.

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-11-04 13:54:48 +00:00
Ubbe
eae2616fb5 fix(frontend): marketplace breadcrumbs typo (#11315)
## Changes 🏗️

Fixing a ✍🏽 typo found by @Pwuts

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] No typos on the breadcrumbs
2025-11-04 13:28:29 +00:00
Abhimanyu Yadav
ad3ea59d90 feat(frontend): Add cron-based scheduling functionality to new builder with input/credential support (#11312)
This PR introduces scheduling functionality to the new builder, allowing
users to create cron-based schedules for automated graph execution with
configurable inputs and credentials.


https://github.com/user-attachments/assets/20c1359f-a3d6-47bf-a881-4f22c657906c

## What's New

### 🚀 Features

#### Scheduling Infrastructure
- **CronSchedulerDialog Component**: Interactive dialog for creating
scheduled runs with:
  - Schedule name configuration
  - Cron expression builder with visual UI
  - Timezone support (displays user timezone or defaults to UTC)
  - Integration with backend scheduling API

- **ScheduleGraph Component**: New action button in builder actions
toolbar
  - Clock icon button to initiate scheduling workflow
  - Handles conditional flow based on input/credential requirements

#### Enhanced Input Management
- **Unified RunInputDialog**: Refactored to support both manual runs and
scheduled runs
- Dynamic "purpose" prop (`"run"` | `"schedule"`) for contextual
behavior
  - Seamless credential and input collection flow
  - Transitions to cron scheduler when scheduling

#### Builder Actions Improvements
- **New Action Buttons Layout**: Three primary actions in the builder
toolbar:
  1. Agent Outputs (placeholder for future implementation)
  2. Run Graph (play/stop button with gradient styling)
  3. Schedule Graph (clock icon for scheduling)

## Technical Details

### New Components
- `CronSchedulerDialog` - Main scheduling dialog component
- `useCronSchedulerDialog` - Hook managing scheduling logic and API
calls
- `ScheduleGraph` - Schedule button component
- `useScheduleGraph` - Hook for scheduling flow control
- `AgentOutputs` - Placeholder component for future outputs feature

### Modified Components
- `BuilderActions` - Added new action buttons
- `RunGraph` - Enhanced with tooltip support
- `RunInputDialog` - Made multi-purpose for run/schedule
- `useRunInputDialog` - Added scheduling dialog state management

### API Integration
- Uses `usePostV1CreateExecutionSchedule` for schedule creation
- Fetches user timezone with `useGetV1GetUserTimezone`
- Validates and passes graph ID, version, inputs, and credentials

## User Experience

1. **Without Inputs/Credentials**: 
   - Click schedule button → Opens cron scheduler directly

2. **With Inputs/Credentials**:
   - Click schedule button → Opens input dialog
   - Fill required fields → Click "Schedule Run"
   - Configure cron expression → Create schedule

3. **Timezone Awareness**:
   - Shows user's configured timezone
   - Warns if no timezone is set (defaults to UTC)
   - Provides link to timezone settings
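
For illustration, the validate-and-preview step behind the dialog can be sketched with the `croniter` package (an assumption for this sketch; the dialog's actual cron handling may differ):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

from croniter import croniter


def next_run(cron_expr: str, tz_name: str | None) -> datetime:
    """Validate a cron expression and preview its next run time."""
    tz = ZoneInfo(tz_name or "UTC")  # no user timezone -> default to UTC
    if not croniter.is_valid(cron_expr):
        raise ValueError(f"Invalid cron expression: {cron_expr!r}")
    return croniter(cron_expr, datetime.now(tz)).get_next(datetime)
```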

## Testing Checklist

- [x] Create a schedule without inputs/credentials
- [x] Create a schedule with required inputs
- [x] Create a schedule with credentials
- [x] Verify timezone display (with and without user timezone)
2025-11-04 11:37:10 +00:00
Reinier van der Leer
69b6b732a2 feat(frontend/ui): Increase contrast of Switch component (#11309)
- Resolves #11308

### Changes 🏗️

- Change background color of `Switch` in unchecked state from
`neutral-200` to `zinc-300`

Before / after:
<center>
<img width="48%" alt="before"
src="https://github.com/user-attachments/assets/d23c9531-2f7e-49d3-8a92-f4ad40e9fa14"
/>

<img width="48%" alt="after"
src="https://github.com/user-attachments/assets/9f27fbee-081e-4b26-8b24-74d5d5cdcef8"
/>
</center>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Visually verified new look
2025-11-04 08:10:24 +00:00
Abhimanyu Yadav
b1a2d21892 feat(frontend): add undo/redo functionality with keyboard shortcuts to flow builder (#11307)
This PR introduces comprehensive undo/redo functionality to the flow
builder, allowing users to revert and restore changes to their
workflows. The implementation includes keyboard shortcuts (Ctrl/Cmd+Z
for undo, Ctrl/Cmd+Y for redo) and visual controls in the UI.



https://github.com/user-attachments/assets/514253a6-4e86-4ac5-96b4-992180fb3b00



### What's New 🚀
- **Undo/Redo State Management**: Implemented a dedicated Zustand store
(`historyStore`) that tracks up to 50 historical states of nodes and
connections
- **Keyboard Shortcuts**: Added cross-platform keyboard shortcuts:
  - `Ctrl/Cmd + Z` for undo
  - `Ctrl/Cmd + Y` for redo
- **UI Controls**: Added dedicated undo/redo buttons to the control
panel with:
  - Visual feedback when actions are available/disabled
  - Tooltips for better user guidance
  - Proper accessibility attributes
- **Automatic History Tracking**: Integrated history tracking into node
operations (add, remove, position changes, data updates)

### Technical Details 🔧

#### Architecture
- **History Store** (`historyStore.ts`): Manages past and future states
using a stack-based approach
  - Stores snapshots of nodes and connections
  - Implements state deduplication to prevent duplicate history entries
  - Limits history to 50 states to manage memory usage

- **Integration Points**:
- `nodeStore.ts`: Modified to push state changes to history on relevant
operations
- `Flow.tsx`: Added the new `useFlowRealtime` hook for real-time updates
- `NewControlPanel.tsx`: Integrated the new `UndoRedoButtons` component
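
The store's mechanics reduce to two stacks. A minimal sketch, shown in Python for brevity (the real `historyStore.ts` is a Zustand store holding node/connection snapshots):

```python
from dataclasses import dataclass, field
from typing import Any

MAX_HISTORY = 50  # cap from the description above


@dataclass
class History:
    past: list[Any] = field(default_factory=list)
    future: list[Any] = field(default_factory=list)

    def push(self, snapshot: Any) -> None:
        # Deduplicate: skip snapshots identical to the latest entry.
        if self.past and self.past[-1] == snapshot:
            return
        self.past.append(snapshot)
        if len(self.past) > MAX_HISTORY:
            self.past.pop(0)  # drop the oldest state to bound memory
        self.future.clear()  # a fresh edit invalidates the redo stack

    def undo(self, current: Any) -> Any | None:
        if not self.past:
            return None  # nothing to undo; the button stays disabled
        self.future.append(current)
        return self.past.pop()

    def redo(self, current: Any) -> Any | None:
        if not self.future:
            return None
        self.past.append(current)
        return self.future.pop()
```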

#### UI Improvements
- **Enhanced Control Panel Button**: Updated to support different HTML
elements (button/div) with proper role attributes for accessibility
- **Block Menu Tooltips**: Added tooltips to improve user guidance
- **Responsive UI**: Adjusted tooltip delays for better responsiveness
(100ms delay)

### Testing Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description 
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new flow with multiple nodes and verify undo/redo works
for node additions
  - [x] Move nodes and verify position changes can be undone/redone
  - [x] Delete nodes and verify deletions can be undone
- [x] Test keyboard shortcuts (Ctrl/Cmd+Z and Ctrl/Cmd+Y) on different
platforms
- [x] Verify undo/redo buttons are disabled when no history is available
- [x] Test with complex flows (10+ nodes) to ensure performance remains
good
2025-11-04 05:17:29 +00:00
Zamil Majdy
a78b08f5e7 feat(platform): implement admin user impersonation with header-based authentication (#11298)
## Summary

Implement comprehensive admin user impersonation functionality to enable
admins to act on behalf of any user for debugging and support purposes.

## 🔐 Security Features

- **Admin Role Validation**: Only users with 'admin' role can
impersonate others
- **Header-Based Authentication**: Uses `X-Act-As-User-Id` header for
impersonation requests
- **Comprehensive Audit Logging**: All impersonation attempts logged
with admin details
- **Secure Error Handling**: Proper HTTP 403/401 responses for
unauthorized access
- **SSR Safety**: Client-side environment checks prevent server-side
rendering issues

## 🏗️ Architecture

### Backend Implementation (`autogpt_libs/auth/dependencies.py`)
- Enhanced `get_user_id` FastAPI dependency to process impersonation
headers
- Admin role verification using existing `verify_user()` function
- Audit trail logging with admin email, user ID, and target user
- Seamless integration with all existing routes using `get_user_id`
dependency
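
Roughly, the dependency works like the sketch below, assuming a decoded JWT payload with `sub`, `role`, and `email` claims (a simplified stand-in for the real `get_user_id`, not its exact code):

```python
import logging

from fastapi import HTTPException, Request

logger = logging.getLogger(__name__)

IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"


def resolve_user_id(request: Request, jwt_payload: dict) -> str:
    """Return the effective user id, honoring admin impersonation."""
    target_user_id = request.headers.get(IMPERSONATION_HEADER_NAME)
    if not target_user_id:
        return jwt_payload["sub"]  # no impersonation requested
    if jwt_payload.get("role") != "admin":
        raise HTTPException(status_code=403, detail="Admin role required")
    # Audit trail: record who is acting as whom on every request.
    logger.info(
        "Admin %s (%s) acting as user %s",
        jwt_payload.get("email"), jwt_payload["sub"], target_user_id,
    )
    return target_user_id
```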

### Frontend Implementation
- **React Hook**: `useAdminImpersonation` for state management and API
calls
- **Security Banner**: Prominent warning when impersonation is active
- **Admin Panel**: Control interface for starting/stopping impersonation
- **Session Persistence**: Maintains impersonation state across page
refreshes
- **Full Page Refresh**: Ensures all data updates correctly on state
changes

### API Integration
- **Header Forwarding**: All API requests include impersonation header
when active
- **Proxy Support**: Next.js API proxy forwards headers to backend
- **Generated Hooks**: Compatible with existing React Query API hooks
- **Error Handling**: Graceful fallback for storage/authentication
failures

## 🎯 User Experience

### For Admins
1. Navigate to `/admin/impersonation` 
2. Enter target user ID (UUID format with validation)
3. System displays security banner during active impersonation
4. All API calls automatically use impersonated user context
5. Click "Stop Impersonation" to return to admin context

### Security Notice
- **Audit Trail**: All impersonation logged with `logger.info()`
including admin email
- **Session Isolation**: Impersonation state stored in sessionStorage
(not persistent)
- **No Token Manipulation**: Uses header-based approach, preserving
admin's JWT
- **Role Enforcement**: Backend validates admin role on every
impersonated request

## 🔧 Technical Details

### Constants & Configuration
- `IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"`
- `IMPERSONATION_STORAGE_KEY = "admin-impersonate-user-id"`
- Centralized in `frontend/src/lib/constants.ts` and
`autogpt_libs/auth/dependencies.py`

### Code Quality Improvements
- **DRY Principle**: Eliminated duplicate header forwarding logic
- **Icon Compliance**: Uses Phosphor Icons per coding guidelines  
- **Type Safety**: Proper TypeScript interfaces and error handling
- **SSR Compatibility**: Environment checks for client-side only
operations
- **Error Consistency**: Uniform silent failure with logging approach

### Testing
- Updated backend auth dependency tests for new function signatures
- Added Mock Request objects for comprehensive test coverage
- Maintained existing test functionality while extending capabilities

## 🚀 CodeRabbit Review Responses

All CodeRabbit feedback has been addressed:

1.  **DRY Principle**: Refactored duplicate header forwarding logic
2.  **Icon Library**: Replaced lucide-react with Phosphor Icons  
3.  **SSR Safety**: Added environment checks for sessionStorage
4.  **UI Improvements**: Synchronous initialization prevents flicker
5.  **Error Handling**: Consistent silent failure with logging
6.  **Backend Validation**: Confirmed comprehensive security
implementation
7.  **Type Safety**: Addressed TypeScript concerns
8.  **Code Standards**: Followed all coding guidelines and best
practices

## 🧪 Testing Instructions

1. **Login as Admin**: Ensure user has admin role
2. **Navigate to Panel**: Go to `/admin/impersonation`
3. **Test Impersonation**: Enter valid user UUID and start impersonation
4. **Verify Banner**: Security banner should appear at top of all pages
5. **Test API Calls**: Verify credits/graphs/etc show impersonated
user's data
6. **Check Logging**: Backend logs should show impersonation audit trail
7. **Stop Impersonation**: Verify return to admin context works
correctly

## 📝 Files Modified

### Backend
- `autogpt_libs/auth/dependencies.py` - Core impersonation logic
- `autogpt_libs/auth/dependencies_test.py` - Updated test signatures

### Frontend
- `src/hooks/useAdminImpersonation.ts` - State management hook
- `src/components/admin/AdminImpersonationBanner.tsx` - Security warning
banner
- `src/components/admin/AdminImpersonationPanel.tsx` - Admin control
interface
- `src/app/(platform)/admin/impersonation/page.tsx` - Admin page
- `src/app/(platform)/admin/layout.tsx` - Navigation integration
- `src/app/(platform)/layout.tsx` - Banner integration
- `src/lib/autogpt-server-api/client.ts` - Header injection for API
calls
- `src/lib/autogpt-server-api/helpers.ts` - Header forwarding logic
- `src/app/api/proxy/[...path]/route.ts` - Proxy header forwarding
- `src/app/api/mutators/custom-mutator.ts` - Enhanced error handling
- `src/lib/constants.ts` - Shared constants

## 🔒 Security Compliance

- **Authorization**: Admin role required for impersonation access
- **Authentication**: Uses existing JWT validation with additional role
checks
- **Audit Logging**: Comprehensive logging of all impersonation
activities
- **Error Handling**: Secure error responses without information leakage
- **Session Management**: Temporary sessionStorage without persistent
data
- **Header Validation**: Proper sanitization and validation of
impersonation headers

This implementation provides a secure, auditable, and user-friendly
admin impersonation system that integrates seamlessly with the existing
AutoGPT Platform architecture.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
  * Admin user impersonation to view the app as another user.
* New "User Impersonation" admin page for entering target user IDs and
managing sessions.
  * Sidebar link for quick access to the impersonation page.
* Persistent impersonation state that updates app data (e.g., credits)
and survives page reloads.
* Top warning banner when impersonation is active with a Stop
Impersonation control.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-04 03:51:28 +00:00
Ubbe
5359f20070 feat(frontend): Google Drive Picker component (#11286)
## Changes 🏗️

<img width="800" height="876" alt="Screenshot_2025-10-29_at_22 56 43"
src="https://github.com/user-attachments/assets/e1d9cf62-0a81-4658-82c2-6e673d636479"
/>

New `<GoogleDrivePicker />` component that, when rendered:
- re-uses existing Google credentials OR asks the user to SSO
- uses the Google Drive Picker script to launch a modal for the user to
select files

We will need this 3 new environment variables on the Front-end for it to
work:
```
# Google Drive Picker
NEXT_PUBLIC_GOOGLE_CLIENT_ID=
NEXT_PUBLIC_GOOGLE_API_KEY=
NEXT_PUBLIC_GOOGLE_APP_ID=
```
Updated `.env.default` with them.

### Next

We need to figure out how to map this to an agent input type and update
the Back-end to accept the files as input.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I tried the whole flow

### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-11-03 13:48:28 +00:00
Abhimanyu Yadav
427c7eb1d4 feat(frontend): Add dynamic input dialog for agent execution with credential support (#11301)
### Changes 🏗️

This PR enhances the agent execution functionality by introducing a
dynamic input dialog that collects both regular inputs and credentials
before running agents.

<img width="1309" height="826" alt="Screenshot 2025-11-03 at 10 16
38 AM"
src="https://github.com/user-attachments/assets/2015da5d-055d-49c5-8e7e-31bd0fe369f4"
/>

#### New Features
- **Dynamic Input Dialog**: Added a new `RunInputDialog` component that
automatically detects when agents require inputs or credentials and
prompts users before execution
- **Credential Management**: Integrated credential input handling
directly into the execution flow, supporting various credential types
(API keys, OAuth, passwords)
- **Enhanced Run Controls**: Improved the `RunGraph` component with
better state management and visual feedback for running/stopping agents
- **Form Renderer**: Created a new unified `FormRenderer` component for
consistent input rendering across the application

#### 🔧 Refactoring
- **Input Renderer Migration**: Moved input renderer components from
FlowEditor-specific location to a shared components directory for better
reusability:
  - Migrated fields (AnyOfField, CredentialField, ObjectField)
- Migrated widgets (ArrayEditor, DateInput, SelectWidget, TextInput,
etc.)
  - Migrated templates (FieldTemplate, ArrayFieldTemplate)
- **State Management**: Enhanced `graphStore` with schemas for inputs
and credentials, including helper methods to check for their presence
- **Component Organization**: Restructured BuilderActions components for
better modularity

#### 🗑️ Cleanup
- Removed outdated FlowEditor documentation files (FORM_CREATOR.md,
README.md)
- Removed deprecated `RunGraph` and `useRunGraph` implementations from
FlowEditor
- Consolidated duplicate functionality into new shared components

#### 🎨 UI/UX Improvements
- Added gradient styling to Run/Stop button for better visual appeal
- Improved dialog layout with clear sections for Credentials and Inputs
- Enhanced form fields with size variants (small, medium, large) for
better responsiveness
- Added loading states and proper error handling during execution

### Technical Details
- The new system automatically detects input requirements from the graph
schema
- Credentials are handled separately with special UI treatment based on
credential type
- The dialog only appears when inputs or credentials are actually
required
- Execution flow: Save graph → Check for inputs/credentials → Show
dialog if needed → Execute with provided values

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create an agent without inputs and verify it runs directly without
dialog
- [x] Create an agent with input blocks and verify the dialog appears
with correct fields
- [x] Create an agent requiring credentials and verify credential
selection/creation works
  - [x] Test agent execution with both inputs and credentials
  - [x] Verify Stop Agent functionality during execution
  - [x] Test error handling for invalid inputs or missing credentials
  - [x] Verify that the dialog closes properly after submission
  - [x] Test that execution state is properly reflected in the UI
2025-11-03 12:05:45 +00:00
Krzysztof Czerwinski
c17a2f807d fix(frontend): Reset beads on run (#11303)
Beads are reset when saving but not on run which can result in beads
from previous runs accumulating on the opened graph.

### Changes 🏗️

- Move bead reset code to function and call it before run

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Beads reset on every run
2025-11-03 09:23:39 +00:00
Krzysztof Czerwinski
f80739d38c Merge branch 'master' into dev 2025-11-03 10:28:57 +09:00
Krzysztof Czerwinski
f97e19f418 hotfix: Patch onboarding (#11299)
### Changes 🏗️

- Prevent removing progress of user onboarding tasks by merging arrays
on the backend instead of replacing them
- New endpoint for onboarding reset
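
The merge amounts to an order-preserving union, e.g.:

```python
def merge_completed_tasks(existing: list[str], incoming: list[str]) -> list[str]:
    """Union of task ids that keeps first-seen order, so finished
    onboarding steps are never dropped by a partial update."""
    return list(dict.fromkeys([*existing, *incoming]))
```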

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tasks are not being reset
  - [x] `/onboarding/reset` works
2025-11-01 10:19:55 +01:00
Reinier van der Leer
42b9facd4a hotfix(backend/scheduler): Bump apscheduler to DST-fixed version 3.11.1 (#11294)
- #11273

- Bump `apscheduler` to v3.11.1 which contains a fix for the issue

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
  - [x] CI passes
2025-10-31 23:09:28 +01:00
Reinier van der Leer
a02b8d9ad7 fix(backend/scheduler): Bump apscheduler to DST-fixed version 3.11.1 (#11294)
- #11273

### Changes 🏗️

- Bump `apscheduler` to v3.11.1 which contains a fix for the issue

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
  - [x] CI passes
2025-10-31 21:40:44 +00:00
Nicholas Tindle
834617d221 hotfix(backend): Clarify prompt requirements for list generation for our friend claude (#11293) 2025-10-31 12:28:05 -05:00
Lluis Agusti
e6fb649ced Merge 'master' into 'dev' 2025-10-30 20:05:55 +07:00
Zamil Majdy
2f8cdf62ba feat(backend): Standardize error handling with BlockSchemaInput & BlockSchemaOutput base class (#11257)
<!-- Clearly explain the need for these changes: -->

This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`

**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override error field with more specific descriptions
when needed
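
The inheritance shape, roughly (the `SchemaField` helper and field defaults are simplified here; this is a sketch, not the platform's exact code):

```python
from pydantic import BaseModel


class BlockSchema(BaseModel):
    """Stand-in for the existing block schema base class."""


class BlockSchemaInput(BlockSchema):
    """Base for block inputs; added for symmetry and future extension."""


class BlockSchemaOutput(BlockSchema):
    """Base for block outputs carrying the standardized error field."""

    # Inherited by every block; a block may still override this field
    # with a more specific description when needed.
    error: str = ""


class Output(BlockSchemaOutput):
    result: str  # a block now declares only its own fields
```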

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
  - [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
  - [x] Validated core schema inheritance chain works correctly

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes were needed for this refactoring.*

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 12:28:08 +00:00
seer-by-sentry[bot]
3dc5208f71 feat(backend): Increase max_field_size in aiohttp requests (#11261)
### Changes 🏗️

- Increased `max_field_size` in `aiohttp.ClientSession` to 16KB to
handle servers with large headers (e.g., long CSP headers).
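
A minimal sketch of the change (the real session setup carries more configuration):

```python
import aiohttp


async def fetch_text(url: str) -> str:
    # The default header-field limit is 8 KB; 16 KB tolerates very long
    # headers such as large Content-Security-Policy values.
    async with aiohttp.ClientSession(max_field_size=16384) as session:
        async with session.get(url) as response:
            return await response.text()
```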

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x]  Add unit test that checks it can now parse headers over 8k size

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 10:41:22 +00:00
Ubbe
04493598e2 fix(frontend): more wallet popover fixes (#11285)
## Changes 🏗️

<img width="800" height="547" alt="Screenshot 2025-10-29 at 22 11 35"
src="https://github.com/user-attachments/assets/5c700ddc-d770-48ef-9847-7e652c5dedcb"
/>
<br /><br />

- Use
[`react-currency-input-field`](https://www.npmjs.com/package/react-currency-input-field)
for `<Input type="amount" />` under the hood
  - so it formats numbers nicely with `,` and `.`
- Simplify form logic
- Make the popover cover the trigger button when open
- Re-organize imports
- Show a `$` prefix in front of the amount inputs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open the wallet with credits enabled
  - [x] Play with the inputs

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 14:44:29 +04:00
seer-by-sentry[bot]
4140331731 fix(blocks/llm): Validate LLM summary responses are strings (#11275)
### Changes 🏗️

- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.

Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue was that: LLM returned list of strings instead of single
string summary, causing `_combine_summaries` to fail on `join`.
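
The guard amounts to something like this sketch (names are hypothetical):

```python
def ensure_str_summary(value: object, field_name: str = "summary") -> str:
    """Raise a descriptive ValueError when the LLM returns a non-string."""
    if not isinstance(value, str):
        raise ValueError(
            f"Expected LLM {field_name} to be a string, "
            f"got {type(value).__name__}: {value!r}"
        )
    return value
```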

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 09:52:50 +00:00
Swifty
594b1adcf7 fix(frontend): Fix marketplace sort by (#11284)
Marketplace sort by functionality was not working on the frontend. This
PR fixes it.

### Changes 🏗️

- Add type hints for sort by
- Fix marketplace sort by drop downs


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested locally
2025-10-30 08:46:11 +00:00
Swifty
cab6590908 fix(frontend): Safely parse error response body in handleFetchError (#11274)
### Changes 🏗️

- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.

Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue was that: Frontend error handler unconditionally calls
`response.json()` on a non-JSON HTML error page starting with 'A'.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test Plan:
    - [x] Created unit tests for the issue that caused the error
    - [x] Created unit tests to ensure responses are parsed gracefully
2025-10-29 16:22:47 +00:00
Swifty
a1ac109356 fix(backend): Further enhance sanitization of SQL raw queries (#11279)
### Changes 🏗️

Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.

**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via
`sorted_by` parameter
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
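
Schematically, the pattern looks like this sketch (table name, sort columns, and the `query_raw` helper are illustrative, not the actual store query):

```python
ALLOWED_SORT_COLUMNS = {"updated_at", "runs", "rating"}  # whitelist


async def search_agents(db, term: str, sorted_by: str, page: int, size: int):
    # ORDER BY cannot be parameterized, so validate against a whitelist.
    order_column = sorted_by if sorted_by in ALLOWED_SORT_COLUMNS else "updated_at"
    sql = f"""
        SELECT * FROM "StoreAgent"
        WHERE name ILIKE $1
        ORDER BY {order_column} DESC
        LIMIT $2 OFFSET $3
    """
    # User inputs travel as positional parameters, never interpolated.
    return await db.query_raw(sql, f"%{term}%", size, (page - 1) * size)
```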

**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories are safely handled
- All 10 tests in db_test.py pass successfully

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All existing tests pass (10/10 tests passing)
  - [x] New security tests validate SQL injection prevention
  - [x] Verified parameterized queries handle malicious input safely
  - [x] Code formatting passes (`poetry run format`)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes required for this security fix*
2025-10-29 15:21:27 +00:00
Zamil Majdy
5506d59da1 fix(backend/executor): make graph execution permission check version-agnostic (#11283)
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.

## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph  
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`

## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.

## Solution

### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)

### 2. Enhance Documentation (backend/data/graph.py)  
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility

## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify:
- Graph exists in user's library  
- Graph is not deleted/archived
- User has execution permissions
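
In essence (data access mocked with a dict; the real function lives in `backend/data/graph.py`):

```python
class GraphNotInLibraryError(Exception):
    """Raised when a graph is missing from the user's library."""


def check_library_permission(
    library: dict, user_id: str, graph_id: str, graph_version: int | None = None
) -> None:
    entry = library.get((user_id, graph_id))
    if entry is None or entry.get("is_deleted"):
        raise GraphNotInLibraryError(graph_id)
    # The version is only pinned when explicitly requested; sub-graphs
    # may legitimately reference older versions no longer in the library.
    if graph_version is not None and entry.get("version") != graph_version:
        raise GraphNotInLibraryError(f"{graph_id} v{graph_version}")
```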

## Impact
- Parent graphs can execute even when they reference older sub-graph
versions
- Sub-graph updates don't break existing parent graphs
- Maintains security: still checks library membership and permissions
- No breaking changes: version-specific validation still available
when needed

## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)  
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)

## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks

## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior

Closes #[issue-number-if-applicable]

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 14:13:23 +00:00
Ubbe
749341100b fix(frontend): prevent Wallet rendering twice (#11282)
## Changes 🏗️

The `<Wallet />` was being rendered twice (one hidden with CSS `hidden`)
because of the Navbar layout, which caused logic issues within the
wallet. I changed it to render conditionally via JavaScript instead,
which is better practice than `hidden`, especially for components with
actual logic.

I also moved the component files closer to where they are used (in the
navbar).

I have a Cursor plugin that removes unused imports but annoyingly
re-organizes them, hence the changes around that...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] There is only 1 Wallet in the DOM
2025-10-29 14:09:13 +00:00
Zamil Majdy
4922f88851 feat(backend/executor): Implement cascading stop for nested graph executions (#11277)
## Summary
Fixes critical issue where child executions spawned by
`AgentExecutorBlock` continue running after parent execution is stopped.
Implements parent-child execution tracking and recursive cascading stop
logic to ensure entire execution trees are terminated together.

## Background
When a parent graph execution containing `AgentExecutorBlock` nodes is
stopped, only the parent was terminated. Child executions continued
running, leading to:
- Orphaned child executions consuming credits
- No user control over execution trees
- Race conditions where children start after parent stops
- Resource leaks from abandoned executions

## Core Changes

### 1. Database Schema (`schema.prisma` + migration)
```sql
-- Add nullable parent tracking field
ALTER TABLE "AgentGraphExecution" ADD COLUMN "parentGraphExecutionId" TEXT;

-- Add self-referential foreign key with graceful deletion
ALTER TABLE "AgentGraphExecution" ADD CONSTRAINT "AgentGraphExecution_parentGraphExecutionId_fkey" 
  FOREIGN KEY ("parentGraphExecutionId") REFERENCES "AgentGraphExecution"("id") 
  ON DELETE SET NULL ON UPDATE CASCADE;

-- Add index for efficient child queries
CREATE INDEX "AgentGraphExecution_parentGraphExecutionId_idx" 
  ON "AgentGraphExecution"("parentGraphExecutionId");
```

### 2. Parent ID Propagation (`backend/blocks/agent.py`)
```python
# Extract current graph execution ID and pass as parent to child
execution = add_graph_execution(
    # ... other params
    parent_graph_exec_id=graph_exec_id,  # NEW: Track parent relationship
)
```

### 3. Data Layer (`backend/data/execution.py`)
```python
async def get_child_graph_executions(parent_exec_id: str) -> list[GraphExecution]:
    """Get all child executions of a parent execution."""
    children = await AgentGraphExecution.prisma().find_many(
        where={"parentGraphExecutionId": parent_exec_id, "isDeleted": False}
    )
    return [GraphExecution.from_db(child) for child in children]
```

### 4. Cascading Stop Logic (`backend/executor/utils.py`)
```python
async def stop_graph_execution(
    user_id: str,
    graph_exec_id: str,
    wait_timeout: float = 15.0,
    cascade: bool = True,  # NEW parameter
):
    # 1. Find all child executions
    if cascade:
        children = await _get_child_executions(graph_exec_id)
        
        # 2. Stop all children recursively in parallel
        if children:
            await asyncio.gather(
                *[stop_graph_execution(user_id, child.id, wait_timeout, True) 
                  for child in children],
                return_exceptions=True,  # Don't fail parent if child fails
            )
    
    # 3. Stop the parent execution
    # ... existing stop logic
```

### 5. Race Condition Prevention (`backend/executor/manager.py`)
```python
# Before executing queued child, check if parent was terminated
if parent_graph_exec_id:
    parent_exec = get_db_client().get_graph_execution_meta(parent_graph_exec_id, user_id)
    if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED:
        # Skip execution, mark child as terminated
        get_db_client().update_graph_execution_stats(
            graph_exec_id=graph_exec_id,
            status=ExecutionStatus.TERMINATED,
        )
        return  # Don't start orphaned child
```

## How It Works

### Before (Broken)
```
User stops parent execution
    ↓
Parent terminates ✓
    ↓
Child executions keep running ✗
    ↓
User cannot stop children ✗
```

### After (Fixed)
```
User stops parent execution
    ↓
Query database for all children
    ↓
Recursively stop all children in parallel
    ↓
Wait for children to terminate
    ↓
Stop parent execution
    ↓
All executions in tree stopped ✓
```

### Race Prevention
```
Child in QUEUED status
    ↓
Parent stopped
    ↓
Child picked up by executor
    ↓
Pre-flight check: parent TERMINATED?
    ↓
Yes → Skip execution, mark child TERMINATED
    ↓
Child never runs ✓
```

## Edge Cases Handled
- **Deep nesting** - Recursive cascading handles multi-level trees
- **Queued children** - Pre-flight check prevents execution
- **Race conditions** - Child spawned during stop operation
- **Partial failures** - `return_exceptions=True` continues on error
- **Multiple children** - Parallel stop via `asyncio.gather()`
- **No parent** - Backward compatible (nullable field)
- **Already completed** - Existing status check handles it

## Performance Impact
- **Stop operation**: O(depth) with parallel execution vs O(1) before
- **Memory**: +36 bytes per execution (one UUID reference)
- **Database**: +1 query per tree level, indexed for efficiency

## API Changes (Backward Compatible)

### `stop_graph_execution()` - New Optional Parameter
```python
# Before
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0)

# After  
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0, cascade: bool = True)
```
**Default `cascade=True`** means existing callers get the new behavior
automatically.

### `add_graph_execution()` - New Optional Parameter
```python
async def add_graph_execution(..., parent_graph_exec_id: Optional[str] = None)
```

## Security & Safety
- **User verification** - Users can only stop their own executions
(parent + children)
- **No cycles** - Self-referential FK prevents infinite loops
- **Graceful degradation** - Errors in child stops don't block parent
stop
- **Rate limits** - Existing execution rate limits still apply

## Testing Checklist

### Database Migration
- [x] Migration runs successfully  
- [x] Prisma client regenerates without errors
- [x] Existing tests pass

### Core Functionality  
- [ ] Manual test: Stop parent with running child → child stops
- [ ] Manual test: Stop parent with queued child → child never starts
- [ ] Unit test: Cascading stop with multiple children
- [ ] Unit test: Deep nesting (3+ levels)
- [ ] Integration test: Race condition prevention

## Breaking Changes
**None** - All changes are backward compatible with existing code.

## Rollback Plan
If issues arise:
1. **Code rollback**: Revert PR, redeploy
2. **Database rollback**: Drop column and constraints (non-destructive)

---

**Note**: This branch contains additional unrelated changes from merging
with `dev`. The core cascading stop feature involves only:
- `schema.prisma` + migration
- `backend/data/execution.py` 
- `backend/executor/utils.py`
- `backend/blocks/agent.py`
- `backend/executor/manager.py`

All other file changes are from dev branch updates and not part of this
feature.

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Nested graph executions: parent-child tracking and retrieval of child
executions

* **Improvements**
* Cascading stop: stopping a parent optionally terminates child
executions
  * Parent execution IDs propagated through runs and surfaced in logs
  * Per-user/graph concurrent execution limits enforced

* **Bug Fixes**
* Skip enqueuing children if parent is terminated; robust handling when
parent-status checks fail

* **Tests**
  * Updated tests to cover parent linkage in graph creation
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 11:11:22 +00:00
Zamil Majdy
5fb142c656 fix(backend/executor): ensure cluster lock release on all execution submission failures (#11281)
## Root Cause
During rolling deployment, execution
`97058338-052a-4528-87f4-98c88416bb7f` got stuck in QUEUED state
because:

1. Pod acquired cluster lock successfully during shutdown  
2. Subsequent setup operations failed (ThreadPoolExecutor shutdown,
resource exhaustion, etc.)
3. **No error handling existed** around the critical section after lock
acquisition
4. Cluster lock remained stuck in Redis for 5 minutes (TTL timeout)
5. Other pods couldn't acquire the lock, leaving execution permanently
queued

## The Fix

### Problem: Critical Section Not Protected
The original code had no error handling for the entire critical section
after successful lock acquisition:
```python
# Original code - no error handling after lock acquired
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock
    
# CRITICAL SECTION - any failure here leaves lock stuck
self._execution_locks[graph_exec_id] = cluster_lock  # Could fail: memory
logger.info("Acquired cluster lock...")              # Could fail: logging  
cancel_event = threading.Event()                     # Could fail: resources
future = self.executor.submit(...)                   # Could fail: shutdown
self.active_graph_runs[...] = (future, cancel_event) # Could fail: memory
```

### Solution: Wrap Entire Critical Section  
Protect ALL operations after successful lock acquisition:
```python
# Fixed code - comprehensive error handling
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock

# Wrap ENTIRE critical section after successful acquisition
try:
    self._execution_locks[graph_exec_id] = cluster_lock
    logger.info("Acquired cluster lock...")
    cancel_event = threading.Event()
    future = self.executor.submit(...)
    self.active_graph_runs[...] = (future, cancel_event)
except Exception as e:
    # Release cluster lock before requeue
    cluster_lock.release()
    del self._execution_locks[graph_exec_id] 
    _ack_message(reject=True, requeue=True)
    return
```

### Why This Comprehensive Approach Works
- **Complete protection**: Any failure in critical section → lock
released
- **Proper cleanup order**: Lock released → message requeued → another
pod can try
- **Uses existing infrastructure**: Leverages established
`_ack_message()` requeue logic
- **Handles all scenarios**: ThreadPoolExecutor shutdown, resource
exhaustion, memory issues, logging failures

## Protected Failure Scenarios
1. **Memory exhaustion**: `_execution_locks` assignment or
`active_graph_runs` assignment
2. **Resource exhaustion**: `threading.Event()` creation fails
3. **ThreadPoolExecutor shutdown**: `executor.submit()` with "cannot
schedule new futures after shutdown"
4. **Logging system failures**: `logger.info()` calls fail
5. **Any unexpected exceptions**: Network issues, disk problems, etc.

## Validation
- All existing tests pass
- Maintains exact same success path behavior
- Comprehensive error handling for all failure points
- Minimal code change with maximum protection

## Impact
- **Eliminates stuck executions** during pod lifecycle events (rolling
deployments, scaling, crashes)
- **Faster recovery**: Immediate requeue vs 5-minute Redis TTL wait
- **Higher reliability**: Handles ANY failure in the critical section
- **Production-ready**: Comprehensive solution for distributed lock
management

This prevents the exact race condition that caused execution
`97058338-052a-4528-87f4-98c88416bb7f` to be stuck for >300 seconds,
plus many other potential failure scenarios.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 08:56:24 +00:00
Pratyush Singh
e14594ff4a fix: handle oversized notifications by sending summary email (#11119) (#11130)
📨 Fix: Handle Oversized Notification Emails

## Summary

This PR adds logic to detect and handle oversized notification emails
exceeding Postmark's 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.

## Changes

- Added size check in `EmailSender.send_templated()`
- Sends a summary email when the payload exceeds ~4.5 MB
- Prevents infinite retries and queue clogging
- Added logs for oversized detection

Fixes #11119
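
For illustration, the size-guard pattern amounts to something like this (a minimal sketch; the threshold and function names are assumptions, not the actual `EmailSender` internals):

```python
import json
import logging

logger = logging.getLogger(__name__)

# Postmark rejects messages over 5 MB; stay under it with a safety margin.
MAX_EMAIL_BYTES = int(4.5 * 1024 * 1024)

def send_with_size_guard(send_full, send_summary, payload: dict) -> None:
    """Send the full email unless it exceeds the limit; then send a summary."""
    size = len(json.dumps(payload).encode("utf-8"))
    if size > MAX_EMAIL_BYTES:
        logger.warning("Notification email is %d bytes; sending summary instead", size)
        send_summary(payload)  # lightweight stats + dashboard link
    else:
        send_full(payload)
```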

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-10-29 00:57:13 +00:00
Zamil Majdy
de70ede54a fix(backend): prevent execution of deleted agents and cleanup orphaned resources (#11243)
## Summary
Fix critical bug where deleted agents continue running scheduled and
triggered executions indefinitely, consuming credits without user
control.

## Problem
When agents are deleted from user libraries, their schedules and webhook
triggers remain active, leading to:
- Uncontrolled resource consumption
- "Unknown agent" executions that charge credits
- No way for users to stop orphaned executions
- Accumulation of orphaned database records

## Solution

### 1. Prevention: Library Validation Before Execution
- Add `is_graph_in_user_library()` function with efficient database
queries
- Validate graph accessibility before all executions in
`validate_and_construct_node_execution_input()`
- Use specific `GraphNotInLibraryError` for clear error handling

### 2. Cleanup: Remove Schedules & Webhooks on Deletion
- Enhanced `delete_library_agent()` to clean up associated schedules and
webhooks
- Comprehensive cleanup functions for both scheduled and triggered
executions
- Proper database transaction handling

### 3. Error-Based Cleanup: Handle Existing Orphaned Resources
- Catch `GraphNotInLibraryError` in scheduler and webhook handlers
- Automatically clean up orphaned resources when execution fails
- Graceful degradation without breaking existing workflows

### 4. Migration: Clean Up Historical Orphans
- SQL migration to remove existing orphaned schedules and webhooks
- Performance index for faster cleanup queries
- Proper logging and error handling

## Key Changes

### Core Library Validation
```python
# backend/data/graph.py - Single source of truth
async def is_graph_in_user_library(graph_id: str, user_id: str, graph_version: Optional[int] = None) -> bool:
    where_clause = {"userId": user_id, "agentGraphId": graph_id, "isDeleted": False, "isArchived": False}
    if graph_version is not None:
        where_clause["agentGraphVersion"] = graph_version
    count = await LibraryAgent.prisma().count(where=where_clause)
    return count > 0
```

### Enhanced Agent Deletion
```python
# backend/server/v2/library/db.py
async def delete_library_agent(library_agent_id: str, user_id: str, soft_delete: bool = True) -> None:
    # ... existing deletion logic ...
    await _cleanup_schedules_for_graph(graph_id=graph_id, user_id=user_id)
    await _cleanup_webhooks_for_graph(graph_id=graph_id, user_id=user_id)
```

### Execution Prevention
```python
# backend/executor/utils.py
if not await gdb.is_graph_in_user_library(graph_id=graph_id, user_id=user_id, graph_version=graph.version):
    raise GraphNotInLibraryError(f"Graph #{graph_id} is not accessible in your library")
```

### Error-Based Cleanup
```python
# backend/executor/scheduler.py & backend/server/integrations/router.py
except GraphNotInLibraryError as e:
    logger.warning(f"Execution blocked for deleted/archived graph {graph_id}")
    await _cleanup_orphaned_resources_for_graph(graph_id, user_id)
```

## Technical Implementation

### Database Efficiency
- Use `count()` instead of `find_first()` for faster queries
- Add performance index: `idx_library_agent_user_graph_active`
- Follow existing `prisma.is_connected()` patterns

### Error Handling Hierarchy
- **`GraphNotInLibraryError`**: Specific exception for deleted/archived
graphs
- **`NotAuthorizedError`**: Generic authorization errors (preserved for
user ID mismatches)
- Clear error messages for better debugging

### Code Organization
- Single source of truth for library validation in
`backend/data/graph.py`
- Import from centralized location to avoid duplication
- Top-level imports following codebase conventions

## Testing & Validation

### Functional Testing
- Library validation prevents execution of deleted agents
- Cleanup functions remove schedules and webhooks properly
- Error-based cleanup handles orphaned resources gracefully
- Migration removes existing orphaned records

### Integration Testing
- All existing tests pass (including `test_store_listing_graph`)
- No breaking changes to existing functionality
- Proper error propagation and handling

### Performance Testing
- Efficient database queries with proper indexing
- Minimal overhead for normal execution flows
- Cleanup operations don't impact performance

## Impact

### User Experience
- 🎯 **Immediate**: Deleted agents stop running automatically
- 🎯 **Ongoing**: No more unexpected credit charges from orphaned
executions
- 🎯 **Cleanup**: Historical orphaned resources are removed

### System Reliability
- 🔒 **Security**: Users can only execute agents they have access to
- 🧹 **Cleanup**: Automatic removal of orphaned database records
- 📈 **Performance**: Efficient validation with minimal overhead

### Developer Experience
- 🎯 **Clear Errors**: Specific exception types for better debugging
- 🔧 **Maintainable**: Centralized library validation logic
- 📚 **Documented**: Comprehensive error handling patterns

## Files Modified
- `backend/data/graph.py` - Library validation function
- `backend/server/v2/library/db.py` - Enhanced agent deletion with
cleanup
- `backend/executor/utils.py` - Execution validation and prevention
- `backend/executor/scheduler.py` - Error-based cleanup for schedules
- `backend/server/integrations/router.py` - Error-based cleanup for
webhooks
- `backend/util/exceptions.py` - Specific error type for deleted graphs
-
`migrations/20251023000000_cleanup_orphaned_schedules_and_webhooks/migration.sql`
- Historical cleanup

## Breaking Changes
None. All changes are backward compatible and preserve existing
functionality.

## Follow-up Tasks
- [ ] Monitor cleanup effectiveness in production
- [ ] Consider adding metrics for orphaned resource detection
- [ ] Potential optimization of cleanup batch operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-28 23:48:35 +00:00
Ubbe
59657eb42e fix(frontend): onboarding step 5 adjustments (#11276)
## Changes 🏗️

A couple of improvements on **Onboarding Step 5**:
- Show a spinner when the page is loading ( better contrast / context
than skeleton in this case )
- Prevent the run button being disabled if credentials failed to load
- while this is good/expected behavior, it will help us debug the issue
in production where credentials failed to load silently, given running
the agent it'll throw an error we can see

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new account/signup
  - [x] On Onboarding Step 5 test the above
2025-10-28 13:58:04 +00:00
Reinier van der Leer
5e5f45a713 fix(backend): Fix various warnings (#11252)
- Resolves #11251

This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)

### Changes 🏗️

- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain serialized name through a field alias
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
  - Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
2025-10-28 13:18:45 +00:00
Ubbe
320fb7d83a fix(frontend): waitlist modal copy (#11263)
### Changes 🏗️

### Before

<img width="800" height="649" alt="Screenshot_2025-10-23_at_00 44 59"
src="https://github.com/user-attachments/assets/fd717d39-772a-4331-bc54-4db15a9a3107"
/>

### After

<img width="800" height="555" alt="Screenshot 2025-10-27 at 23 19 10"
src="https://github.com/user-attachments/assets/64878bd0-3a96-4b3a-8344-1a88c89de52e"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Try to signup with a non-approved email
  - [x] You see the modal with an updated copy
2025-10-28 11:08:06 +00:00
Ubbe
54552248f7 fix(frontend): login not visible mobile (#11245)
## Changes 🏗️

The mobile 📱 experience is still a mess but this helps a little.

### Before

<img width="350" height="395" alt="Screenshot 2025-10-24 at 18 26 18"
src="https://github.com/user-attachments/assets/75eab232-8c37-41e7-a51d-dbe07db336a0"
/>

### After

<img width="350" height="406" alt="Screenshot 2025-10-24 at 18 25 54"
src="https://github.com/user-attachments/assets/ecbd8bbd-8a94-4775-b990-c8b51de48cf9"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app
  - [x] Check the Tally popup button copy
  - [x] The button still works
2025-10-28 14:00:50 +04:00
Ubbe
d8a5780ea2 fix(frontend): feedback button copy (#11246)
## Changes 🏗️

<img width="800" height="827" alt="Screenshot 2025-10-24 at 17 45 48"
src="https://github.com/user-attachments/assets/ab18361e-6c58-43e9-bea6-c9172d06c0e7"
/>

- Shows the text `Give feedback` so the button is more explicit 🏁 
- Refactor the component to stick to [new code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md)

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app
  - [x] Check the Tally popup button copy
  - [x] The button still works
2025-10-28 14:00:33 +04:00
seer-by-sentry[bot]
377657f8a1 fix(backend): Extract response from LLM response dictionary (#11262)
### Changes 🏗️

- Modifies the LLM block to extract the actual response from the
dictionary returned by the LLM, instead of yielding the entire
dictionary. This addresses
[AUTOGPT-SERVER-6EY](https://sentry.io/organizations/significant-gravitas/issues/6950850822/).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] After applying the fix, I ran the agent that triggered the Sentry
error and confirmed that it now completes successfully without errors.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-28 08:43:29 +00:00
seer-by-sentry[bot]
ff71c940c9 fix(backend): Properly encode hostname in URL validation (#11259)
Fixes
[AUTOGPT-SERVER-6KZ](https://sentry.io/organizations/significant-gravitas/issues/6976926125/).
The issue was that: Redirect handling strips the URL scheme, causing
subsequent requests to fail validation and hit a 404.

- Ensures the hostname in the URL is properly IDNA-encoded after
validation.
- Reconstructs the netloc with the encoded hostname and preserves the
port if it exists.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2204774

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6976926125/?seerDrawer=true)

### Changes 🏗️

**backend/util/request.py:**
- Fixed URL validation to properly preserve port numbers when
reconstructing netloc
- Ensures IDNA-encoded hostname is combined with port (if present)
before URL reconstruction

**Test Results:**
- Tested request to https://www.target.com/ (original failing URL from
Sentry issue)
- Status: 200, content retrieved successfully (339,846 bytes)
- Port preservation verified for URLs with explicit ports

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested request to https://www.target.com/ (original failing URL)
  - [x] Verified status code 200 and successful content retrieval
  - [x] Verified port preservation in URL validation


#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)


Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-28 08:43:14 +00:00
Reinier van der Leer
9967b3a7ce fix(frontend/builder): Fix unnecessary graph re-saving (#11145)
- Resolves #10980
- 2nd attempt after #11075 broke some things

Fixes unnecessary graph re-saving when no changes were made after
initial save. More specifically, this PR fixes two causes of this issue:
- Frontend node IDs were being compared to backend IDs, which won't
match if the graph has been modified and saved since loading.
- `fillDefaults` was being applied to all nodes (including existing
ones) on element creation, and empty values were being stripped
*post-save* with `removeEmptyStringsAndNulls`. This invisible
auto-modification of node input data meant that in some common cases the
graph would never be in sync with the backend.

### Changes 🏗️

- Fix node ID handling
- Use `node.data.backend_id ?? node.id` instead of `node.id` in
`prepareSaveableGraph`
    - Also map link source/sink IDs to their corresponding backend IDs
  - Add note about `node.data.backend_id` to `_saveAgent`
  - Use `node.data.backend_id || node.id` as display ID in `CustomNode`

- Prevent auto-modification of node input data on existing nodes
- Prune empty values (`undefined`, `null`, `""`) from node input data
*pre-save* instead of post-save
- Related: improve typing and functionality of
`fillObjectDefaultsFromSchema` (moved and renamed from `fillDefaults`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Node display ID updates on save
- [x] Clicking save a second time (without making more changes) doesn't
cause re-save
- [x] Updating nodes with dynamic input links (e.g. Create Dictionary
Block) doesn't make the links disappear


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Prevented unintended auto-modification of existing nodes during
editing
* Improved consistency of node and connection identifiers in saved
graphs

* **Improvements**
  * Enhanced node title display logic for clearer node identification
* Optimized data cleanup utilities for more robust input processing in
the builder

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-27 16:49:02 +00:00
Bently
9db443960a feat(blocks/claude): Remove Claude 3.5 Sonnet and Haiku model (#11260)
Removes `CLAUDE_3_5_SONNET` and `CLAUDE_3_5_HAIKU` from the `LlmModel` enum,
model metadata, and cost configuration, since they are deprecated.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify the models are gone from the LLM blocks
2025-10-27 16:49:02 +00:00
Ubbe
9316100864 fix(frontend): agent activity graph names (#11233)
## Changes 🏗️

We were fetching only the first 15 library agents to compute the agent
map on the Agent Activity dropdown. We suspect that is causing some
agent executions to show up as `Unknown agent`.

With this change, I fetch all the library agents upfront (_without
blocking page load_) and cache them in the browser, so we have all the
details needed to render the agent runs. This cache is re-used in the
library as well for a fast initial load on the agents list page.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First request populates cache; subsequent identical requests hit
cache
- [x] Editing an agent invalidates relevant cache keys and serves fresh
data
  - [x] Different query params generate distinct cache entries
  - [x] Cache layer gracefully falls back to live data on errors
  - [x] 404 behavior for unknown agents unchanged

### For configuration changes:

None
2025-10-27 20:08:21 +04:00
Ubbe
cbe0cee0fc fix(frontend): Credentials disabling onboarding Run button (#11244)
## Changes 🏗️

The onboarding `Run` button is disabled sometimes when an agent
requiring credentials is selected. We think this can be because the
credentials load _async_ by a sub-component ( `<CredentialsInputs />` ),
and there wasn't a way for the parent component to know whether they
loaded or not.

- Refactored **Step 5** of onboarding to adhere to our code conventions
  - split concerns and colocated state
  - used generated API hooks
  - the UI will only render once API calls succeed
- Created a system where ``<CredentialsInputs />` notify the parent
component when they load
- Did minor adjustments here and there

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I will know once I find an agent with credentials that I can
run....


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added visual agent selection card displaying agent details during
onboarding
  * Introduced credentials input management for agent configuration
  * Added onboarding guidance for initiating agent runs

* **Improvements**
  * Enhanced onboarding flow with improved state management
  * Refined login state handling
  * Adjusted spacing in agent rating display

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-27 19:53:14 +04:00
Swifty
b31d60276a fix(backend/store): Sanitize all sql terms (#11228)
Categories and creators were not sanitized in the full-text search.

- apply sanitization to categories and creators

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run tests to check it still works
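
For illustration, term sanitization of the kind this fix extends to categories and creators (an assumed shape, not the actual store code):

```python
import re

def sanitize_fts_term(term: str) -> str:
    """Strip tsquery metacharacters from a user-supplied search term.

    Every interpolated term (search query, categories, creators) should
    pass through the same filter before reaching to_tsquery().
    """
    # Drop characters with special meaning in tsquery syntax
    cleaned = re.sub(r"[&|!():'\\<>]", " ", term)
    # Collapse whitespace and join words with AND
    return " & ".join(cleaned.split())
```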
2025-10-27 13:16:37 +01:00
Swifty
7cbb1ed859 fix(backend/store): Sanitize all sql terms (#11228)
Categories and creators were not sanitized in the full-text search.

### Changes 🏗️

- apply sanitization to categories and creators

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run tests to check it still works
2025-10-27 12:59:05 +01:00
Toran Bruce Richards
b52e95e1fc fix(blocks): Add missing error output pins to all Firecrawl blocks (#11256)
Added error output pins to all Firecrawl blocks as standard on the
AutoGPT platform. The base block execution code already handles error
yielding, so no try-catch logic was needed.

- FirecrawlScrapeBlock: Added error output pin for scrape failures
- FirecrawlCrawlBlock: Added error output pin for crawl failures
- FirecrawlExtractBlock: Added error output pin for extraction failures
- FirecrawlMapBlock: Added error output pin for map failures
- FirecrawlSearchBlock: Added error output pin for search failures

Resolves #11253


Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-10-27 08:36:28 +00:00
Reinier van der Leer
e06e7ff33f fix(backend): Implement graceful shutdown in AppService to prevent RPC errors (#11240)
We're currently seeing errors in the `DatabaseManager` while it's
shutting down, like:

```
WARNING [DatabaseManager] Termination request: SystemExit; 0 executing cleanup.
INFO [DatabaseManager]  Disconnecting Database...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection started...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection completed successfully.
INFO [DatabaseManager] Terminated.
ERROR POST /create_or_add_to_user_notification_batch failed: Failed to create or add to notification batch for user {user_id} and type AGENT_RUN: NoneType: None
```

This indicates two issues:
- The service doesn't wait for pending RPC calls to finish before
terminating
- We're using `logger.exception` outside an error handling context,
causing the confusing and not much useful `NoneType: None` to be printed
instead of error info
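
For context, `logger.exception` attaches `sys.exc_info()` to the log record; outside an `except` block there is no active exception, which is exactly where the puzzling `NoneType: None` comes from:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("demo")

logger.exception("outside a handler")  # message followed by "NoneType: None"

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("inside a handler")  # message followed by the traceback
```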

### Changes 🏗️

- Implement graceful shutdown in `AppService` so in-flight RPC calls can
finish
  - Add tests for graceful shutdown
  - Prevent `AppService` accepting new requests during shutdown
- Rework `AppService` lifecycle management; add support for async
`lifespan`
- Fix `AppService` endpoint error logging
- Improve logging in `AppProcess` and `AppService`
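
The drain part of the shutdown boils down to a pattern like this (hypothetical names; the real `AppService` ties this into its RPC server and lifespan handling):

```python
import threading

class GracefulService:
    """Refuse new work once shutdown starts, then wait out in-flight calls."""

    def __init__(self) -> None:
        self._shutting_down = threading.Event()
        self._in_flight = 0
        self._idle = threading.Condition()

    def handle_rpc(self, fn, *args):
        if self._shutting_down.is_set():
            raise RuntimeError("service is shutting down")  # reject new requests
        with self._idle:
            self._in_flight += 1
        try:
            return fn(*args)
        finally:
            with self._idle:
                self._in_flight -= 1
                self._idle.notify_all()

    def shutdown(self, timeout: float = 30.0) -> None:
        self._shutting_down.set()  # stop accepting new requests
        with self._idle:           # let pending RPC calls finish
            self._idle.wait_for(lambda: self._in_flight == 0, timeout=timeout)
        # ...then disconnect the database and exit
```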

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Deploy to Dev cluster, then `kubectl rollout restart` the different
services a few times
    - [x] -> `DatabaseManager` doesn't break on re-deployment
    - [x] -> `Scheduler` doesn't break on re-deployment
    - [x] -> `NotificationManager` doesn't break on re-deployment
2025-10-25 14:47:19 +00:00
Bently
f4ba02f2f1 feat(blocks/revid): Add cost configs for revid video blocks (#11242)
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify AIShortformVideoCreatorBlock costs 307 credits when executed
  - [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
  - [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
2025-10-24 18:35:37 +01:00
Abhimanyu Yadav
acb946801b feat(frontend): add agent execution functionality in new builder (#11186)
This PR implements real-time agent execution functionality in the new
flow editor, enabling users to run, monitor, and view results of their
agent workflows directly within the builder interface.


https://github.com/user-attachments/assets/8a730e08-f88d-49d4-be31-980e2c7a2f83

#### Key Features Added:

##### 1. **Agent Execution Controls**
- Added "Run Agent" / "Stop Agent" button with gradient styling in the
builder interface
- Implemented execution state management through a new `graphStore` for
tracking running status
- Save graph automatically before execution to ensure latest changes are
persisted

##### 2. **Real-time Execution Monitoring**
- Implemented WebSocket-based real-time updates for node execution
status via `useFlowRealtime` hook
- Subscribe to graph execution events and node execution events for live
status tracking
- Visual execution status badges on nodes showing states: `QUEUED`,
`RUNNING`, `COMPLETED`, `FAILED`, etc.
   - Animated gradient border effect when agent is actively running

##### 3. **Node Execution Results Display**
- New `NodeDataRenderer` component to display input/output data for each
executed node
   - Collapsible result sections with formatted JSON display
- Prepared UI for future functionality: copy, info, and expand actions
for node data

#### Technical Implementation:

- **State Management**: Extended `nodeStore` with execution status and
result tracking methods
- **WebSocket Integration**: Real-time communication for execution
updates without polling
- **Component Architecture**: Modular components for execution controls,
status display, and result rendering
- **Visual Feedback**: Color-coded status badges and animated borders
for clear execution state indication


#### TODO Items for Future PRs:
- Complete implementation of node result action buttons (copy, info,
expand)
- Add agent output display component
- Implement schedule run functionality
- Handle credential and input parameters for graph execution
- Add tooltips for better UX

### Checklist

- [x] Create a new agent with at least 3 blocks and verify execution
starts correctly
- [x] Verify real-time status updates appear on nodes during execution
- [x] Confirm execution results display in the node output sections
- [x] Verify the animated border appears when agent is running
- [x] Check that node status badges show correct states (QUEUED,
RUNNING, COMPLETED, etc.)
- [x] Test WebSocket reconnection after connection loss
- [x] Verify graph is saved before execution begins
2025-10-24 12:05:09 +00:00
Bently
48ff225837 feat(blocks/revid): Add cost configs for revid video blocks (#11242)
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify AIShortformVideoCreatorBlock costs 307 credits when executed
  - [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
  - [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
2025-10-23 09:46:22 +00:00
Nicholas Tindle
e2a9923f30 feat(frontend): Improve waitlist error display & messages (#11206)
Improves the "not on waitlist" error display based on feedback.

- Follow-up to #11198
  - Follow-up to #11196

### Changes 🏗️

- Use standard `ErrorCard`
- Improve text strings
- Merge `isWaitlistError` and `isWaitlistErrorFromParams`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] We need to test in dev because we don't have a waitlist locally,
and will revert if it doesn't work
- Deploy to dev environment, sign up with a non-approved account, and
see if the error appears
2025-10-22 13:37:42 +00:00
Reinier van der Leer
39792d517e fix(frontend): Filter out undefined query params in API requests (#11238)
Part of our effort to eliminate preventable warnings and errors.

- Resolves #11237

### Changes 🏗️

- Exclude `undefined` query params in API requests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Open the Builder without a `flowVersion` URL parameter
    - [x] -> `GET /api/library/agents/by-graph/{graph_id}` succeeds
  - Open the builder with a `flowVersion` URL parameter
    - [x] -> version is correctly included in request URL parameters
2025-10-22 13:25:34 +00:00
Bently
a6a2f71458 Merge commit from fork
* Replace urllib with Requests in RSS block to prevent SSRF

* Format
2025-10-22 14:18:34 +01:00
Bently
788b861bb7 Merge commit from fork 2025-10-22 14:17:26 +01:00
Ubbe
e203e65dc4 feat(frontend): setup datafast custom events (#11231)
## Changes 🏗️

- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
  - track `add_to_library`
  - track `download_agent`
  - track `run_agent`
  - track `schedule_agent` 
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Analytics load correctly locally
  - [x] Events fire in production
 
### For configuration changes:

Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
2025-10-22 16:56:30 +04:00
Ubbe
bd03697ff2 fix(frontend): URL substring sanitization issue (#11232)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)

To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.

**File/Location:**  
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.

**Methods/Imports/Definitions:**  
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).

---


_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-22 16:56:12 +04:00
Reinier van der Leer
efd37b7a36 fix(frontend): Limit Sentry console capture to warnings and errors (#11223)
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.

### Changes 🏗️

- Limit Sentry console capture to warnings and worse

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed
2025-10-22 09:49:25 +00:00
Zamil Majdy
bb0b45d7f7 fix(backend): Make Jinja Error on TextFormatter as value error (#11236)
<!-- Clearly explain the need for these changes: -->

This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError
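
A minimal sketch of the conversion (the real `TextFormatter` configures its own Jinja environment):

```python
import logging

from jinja2 import Environment
from jinja2.exceptions import TemplateError

logger = logging.getLogger(__name__)

def format_string(template: str, **values) -> str:
    """Render a Jinja2 template, surfacing template problems as ValueError."""
    try:
        return Environment().from_string(template).render(**values)
    except TemplateError as e:
        logger.warning("Template rendering failed: %s", e)
        # ValueError is treated as a 400 client error upstream, not a 500
        raise ValueError(f"Invalid template: {e}") from e
```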

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan: -->
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
  - [x] Confirmed that valid templates continue to work correctly
  - [x] Checked that warning logs are generated for template errors
  - [x] Validated that the exception chain is preserved with `from e`

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-10-22 09:38:02 +00:00
Reinier van der Leer
04df981115 fix(backend): Fix structured logging for cloud environments (#11227)
- Resolves #11226

### Changes 🏗️

- Drop use of `CloudLoggingHandler` which docs state isn't for use in
GKE
- For cloud logging, output only structured log entries to `stdout`
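
The pattern, roughly: GKE's logging agent parses JSON lines on `stdout` and picks up fields such as `severity`, so a formatter along these lines replaces the dedicated handler (simplified sketch; the real formatter maps more fields):

```python
import json
import logging
import sys

class StructuredFormatter(logging.Formatter):
    """Emit one JSON object per log line for the cloud logging agent."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "severity": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
            "timestamp": self.formatTime(record),
        })

handler = logging.StreamHandler(sys.stdout)  # stdout, not the Cloud Logging API
handler.setFormatter(StructuredFormatter())
logging.getLogger().addHandler(handler)
```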

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test deploy to dev and check logs
2025-10-21 12:48:41 +00:00
Swifty
d25997b4f2 Revert "Merge branch 'swiftyos/secrt-1709-store-provider-names-and-en… (#11225)
Reverts the changes that made provider blocks store provider names in the DB.

### Changes 🏗️

- Revert the change

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have reverted the merge
2025-10-21 09:12:00 +00:00
Zamil Majdy
11d55f6055 fix(backend/executor): Avoid running direct query in executor (#11224)
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups

## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall
back to database manager client
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
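
The resilient-client pattern looks roughly like this (stub clients stand in for Prisma and the DatabaseManager RPC client; the `@cached` decorator from this PR would wrap the outer function):

```python
import asyncio

class PrismaStub:
    """Stand-in for the in-process Prisma client."""
    def is_connected(self) -> bool:
        return False  # executors often have no direct DB connection

    async def get_user(self, user_id: str) -> dict:
        return {"id": user_id, "source": "direct"}

class DbManagerStub:
    """Stand-in for the database manager's RPC client."""
    async def get_user_by_id(self, user_id: str) -> dict:
        return {"id": user_id, "source": "rpc"}

prisma, db_manager = PrismaStub(), DbManagerStub()

async def get_user_context(user_id: str) -> dict:
    """Prefer the in-process client; fall back to the DatabaseManager."""
    if prisma.is_connected():
        return await prisma.get_user(user_id)
    return await db_manager.get_user_by_id(user_id)

print(asyncio.run(get_user_context("u1")))  # {'id': 'u1', 'source': 'rpc'}
```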

## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-21 08:46:40 +00:00
Ubbe
063dc5cf65 refactor(frontend): standardise with environment service (#11209)
## Changes 🏗️

Standardize all runtime environment checks on the Front-end, and their
associated conditions, to run against a single environment service where
all the environment config is centralized and hence easier to manage.

This helps prevent the typos and bugs that come from manually asserting
against environment variables (which are typed as `string`); the helper
functions are easier to read and re-use across the codebase.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app and click around
  - [x] Everything is smooth
  - [x] Test on the CI and types are green  

### For configuration changes:

None 🙏🏽
2025-10-21 08:44:34 +00:00
Ubbe
b7646f3e58 docs(frontend): contributing guidelines (#11210)
## Changes 🏗️

Document how to contribute on the Front-end so it is easier for
non-regular contributors.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack

### For configuration changes:

None
2025-10-21 08:26:51 +00:00
Ubbe
0befaf0a47 feat(frontend): update tooltip and alert styles (#11212)
## Changes 🏗️

Matching updated changes in AutoGPT design system:

<img width="283" height="156" alt="Screenshot 2025-10-20 at 23 55 15"
src="https://github.com/user-attachments/assets/3a2e0ee7-cd53-4552-b72b-42f4631f1503"
/>
<img width="427" height="92" alt="Screenshot 2025-10-20 at 23 55 25"
src="https://github.com/user-attachments/assets/95344765-2155-4861-abdd-f5ec1497ace2"
/>
<img width="472" height="85" alt="Screenshot 2025-10-20 at 23 55 30"
src="https://github.com/user-attachments/assets/31084b40-0eea-4feb-a627-c5014790c40d"
/>
<img width="370" height="87" alt="Screenshot 2025-10-20 at 23 55 35"
src="https://github.com/user-attachments/assets/a81dba12-a792-4d41-b269-0bc32fc81271"
/>


## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check the stories for Tooltip and Alerts, they look good


#### For configuration changes:
None
2025-10-21 08:14:28 +00:00
Reinier van der Leer
93f58dec5e Merge branch 'master' into dev 2025-10-21 08:49:12 +02:00
Reinier van der Leer
3da595f599 fix(backend): Only try to initialize LaunchDarkly once (#11222)
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This causes 5 second extra latency on the flag check when LD is down, such as now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.

- Follow-up to #11221

### Changes 🏗️

- Only try to init LaunchDarkly once
- Improve surrounding log statements in the `feature_flag` module
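
The guard amounts to something like this (`_connect` is a stand-in for the SDK call; the real module wraps the LaunchDarkly client):

```python
import logging

logger = logging.getLogger(__name__)

_client = None
_init_attempted = False  # module-level guard: only ever try to initialize once

def _connect():
    """Stand-in for the real LaunchDarkly SDK initialization."""
    raise ConnectionError("LaunchDarkly is down")

def feature_enabled(flag: str, default: bool = False) -> bool:
    """Check a flag without re-initializing (and re-timing-out) on every call."""
    global _client, _init_attempted
    if _client is None and not _init_attempted:
        _init_attempted = True
        try:
            _client = _connect()
        except Exception as e:
            logger.warning("LaunchDarkly init failed, using defaults: %s", e)
    if _client is None:
        return default  # no 5-second retry on every block execution
    return _client.variation(flag, default)
```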

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - This is a critical hotfix; we'll see its effect once deployed
2025-10-21 08:46:07 +02:00
Reinier van der Leer
e5e60921a3 fix(backend): Handle LaunchDarkly init failure (#11221)
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.

### Changes 🏗️

- Wrap `LaunchDarklyIntegration` init in a try/except

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - We'll see if it works once it deploys
2025-10-21 07:53:40 +02:00
Copilot
90af8f8e1a feat(backend): Add language fallback for YouTube transcription block (#11057)
## Problem

The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.

**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)

**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ! 
No transcripts were found for any of the requested language codes: ('en',)

For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```

## Solution

Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:

1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
   - Manually created transcripts (any language)
   - Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all

**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ")  #  Raises NoTranscriptFound

# After: Video with only Hungarian transcript  
get_transcript("3AMl5d2NKpQ")  #  Returns Hungarian transcript
```
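
The fallback, sketched against youtube_transcript_api's pre-1.0 interface (the block's actual code may differ in detail):

```python
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi

def get_transcript(video_id: str):
    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        # 1. Prefer English for backward compatibility
        return transcript_list.find_transcript(["en"]).fetch()
    except NoTranscriptFound:
        # 2. Fall back to any manual transcript, then any generated one
        manual = [t for t in transcript_list if not t.is_generated]
        candidates = manual or list(transcript_list)
        if candidates:
            return candidates[0].fetch()
        raise  # 3. No transcripts exist at all
```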

## Changes

- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order

## Testing

- All 7 new unit tests pass
- Block integration test passes
- Full test suite: 621 passed, 0 failed (no regressions)
- Code formatting and linting pass

## Impact

This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:

- Videos in any language can now be transcribed
- English is still preferred when available
- No breaking changes to existing functionality
- Graceful degradation to available languages

Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `www.youtube.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python3`
(dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> Issue Title: if theres only one lanague available for transcribe
youtube return that langage not an error
> Issue Description: `Could not retrieve a transcript for the video
https://www.youtube.com/watch?v=3AMl5d2NKpQ! This is most likely caused
by: No transcripts were found for any of the requested language codes:
('en',) For this video (3AMl5d2NKpQ) transcripts are available in the
following languages: (MANUALLY CREATED) None (GENERATED) - hu
("Hungarian (auto-generated)") (TRANSLATION LANGUAGES) None If you are
sure that the described cause is not responsible for this error and that
a transcript should be retrievable, please create an issue at
https://github.com/jdepoix/youtube-transcript-api/issues. Please add
which version of youtube_transcript_api you are using and provide the
information needed to replicate the error. Also make sure that there are
no open issues which already describe your problem!` you can use this
video to test:
[https://www.youtube.com/watch?v=3AMl5d2NKpQ\`](https://www.youtube.com/watch?v=3AMl5d2NKpQ%60)
> Fixes
https://linear.app/autogpt/issue/OPEN-2626/if-theres-only-one-lanague-available-for-transcribe-youtube-return
> 
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10637).
All replies are displayed in both locations.
> 
> 


</details>



---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-21 02:31:33 +00:00
Nicholas Tindle
eba67e0a4b fix(platform/blocks): update linear oauth to use refresh tokens (#10998)
<!-- Clearly explain the need for these changes: -->

### Need 💡

This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.

### Changes 🏗️

-   **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests.
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
    -   Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
-   **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
-   **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.
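
The refresh flow, sketched (the endpoint URL and field names are assumptions based on Linear's OAuth documentation):

```python
import base64

import requests

TOKEN_URL = "https://api.linear.app/oauth/token"  # assumed endpoint

def refresh_access_token(client_id: str, client_secret: str, refresh_token: str) -> dict:
    """Exchange a refresh token for a new short-lived access token,
    authenticating with HTTP Basic as Linear recommends."""
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    resp = requests.post(
        TOKEN_URL,
        headers={"Authorization": f"Basic {basic}"},
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # access_token, expires_in, refresh_token, ...
```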

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
  - [x] Tested token expiration logic with a 5-minute buffer.
  - [x] Validated migration detection for old vs. new token types.
  - [x] Checked code syntax and import compatibility.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)

<a
href="https://cursor.com/background-agent?bcId=bc-95f4c668-f7fa-4057-87e5-622ac81c0783"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg"><img alt="Open in
Cursor"
src="https://cursor.com/open-in-cursor.svg"></picture></a>&nbsp;<a
href="https://cursor.com/agents?id=bc-95f4c668-f7fa-4057-87e5-622ac81c0783"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg"><img alt="Open in Web"
src="https://cursor.com/open-in-web.svg"></picture></a>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-20 20:44:58 +00:00
Nicholas Tindle
47bb89caeb fix(backend): Disable LaunchDarkly integration in metrics.py (#11217) 2025-10-20 14:07:21 -05:00
Ubbe
271a520afa feat(frontend): setup DataFast analytics (#11182)
## Changes 🏗️

Following https://datafa.st/docs/nextjs-app-router

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform

### For configuration changes:

None
2025-10-20 16:18:04 +04:00
Swifty
3988057032 Merge branch 'swiftyos/secrt-1712-remove-error-handling-form-store-routes' into dev 2025-10-18 12:28:25 +02:00
Swifty
a6c6e48f00 Merge branch 'swiftyos/open-2791-featplatform-add-easy-test-data-creation' into dev 2025-10-18 12:28:17 +02:00
Swifty
e72ce2f9e7 Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' into dev 2025-10-18 12:27:58 +02:00
Swifty
bd7a79a920 Merge branch 'swiftyos/secrt-1706-improve-store-search' into dev 2025-10-18 12:27:31 +02:00
Nicholas Tindle
3f546ae845 fix(frontend): improve waitlist error display for users not on allowlist (#11198)
fix issue with identifying errors :(

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] We have to test in dev due to waitlist integration, so we are
merging; will revert if it fails

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 05:14:05 +00:00
Nicholas Tindle
097a19141d fix(frontend): improve waitlist error display for users not on allowlist (#11196)
## Summary

This PR improves the user experience for users who are not on the
waitlist during sign-up. When a user attempts to sign up or log in with
an email that's not on the allowlist, they now see a clear, helpful
modal with a direct call-to-action to join the waitlist.

Fixes
[OPEN-2794](https://linear.app/autogpt/issue/OPEN-2794/display-waitlist-error-for-users-not-on-waitlist-during-sign-up)

## Changes

-  Updated `EmailNotAllowedModal` with improved messaging and a "Join
Waitlist" button
- 🔧 Fixed OAuth provider signup/login to properly display the waitlist
modal
- 📝 Enhanced auth-code-error page to detect and display
waitlist-specific errors
- 💬 Added helpful guidance about checking email address and Discord
support link
- 🎯 Consistent waitlist error handling across all auth flows (regular
signup, OAuth, error pages)

## Test Plan

Tested locally by:
1. Attempting signup with non-allowlisted email - modal appears 
2. Attempting Google SSO with non-allowlisted account - modal appears 
3. Modal shows "Join Waitlist" button that opens
https://agpt.co/waitlist 
4. Help text about checking email and Discord support is visible 

## Screenshots

The new waitlist modal includes:
- Clear "Join the Waitlist" title
- Explanation that platform is in closed beta
- "Join Waitlist" button (opens in new tab)
- Help text about checking email address
- Discord support link for users who need help

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 03:37:31 +00:00
Swifty
c958c95d6b fix incorrect type import 2025-10-17 20:36:49 +02:00
Swifty
3e50cbd2cb fix import 2025-10-17 19:19:17 +02:00
Swifty
1b69f1644d revert frontend type change 2025-10-17 17:26:08 +02:00
Swifty
d9035a233c Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' of github.com:Significant-Gravitas/AutoGPT into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:20:27 +02:00
Swifty
972cbfc3de fix tests 2025-10-17 17:20:05 +02:00
Swifty
8f861b1bb2 removed error handling from routes 2025-10-17 17:08:17 +02:00
Swifty
fa2731bb8b Merge branch 'dev' into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:06:09 +02:00
Swifty
2dc0c97a52 Add block registry and updated 2025-10-17 16:49:04 +02:00
Zamil Majdy
0bb2b87c32 fix(backend): resolve UserBalance migration issues and credit spending bug (#11192)
## Summary
Fix critical UserBalance migration and spending issues affecting users
with credits from transaction history but no UserBalance records.

## Root Issues Fixed

### Issue 1: UserBalance Migration Complexity
- **Problem**: Complex data migration with timestamp logic issues and
potential race conditions
- **Solution**: Simplified to idempotent table creation only,
application handles auto-population

### Issue 2: Credit Spending Bug  
- **Problem**: Users with $10.0 from transaction history couldn't spend
$0.16
- **Root Cause**: `_add_transaction` and `_enable_transaction` only
checked UserBalance table, returning 0 balance for users without records
- **Solution**: Enhanced both methods with transaction history fallback
logic

### Issue 3: Exception Handling Inconsistency
- **Problem**: Raw SQL unique violations raised different exception
types than Prisma ORM
- **Solution**: Convert raw SQL unique violations to
`UniqueViolationError` at source

## Changes Made

### Migration Cleanup
- **Idempotent operations**: Use `CREATE TABLE IF NOT EXISTS`, `CREATE
INDEX IF NOT EXISTS`
- **Inline foreign key**: Define constraint within `CREATE TABLE`
instead of separate `ALTER TABLE`
- **Removed data migration**: Application creates UserBalance records
on-demand
- **Safe to re-run**: No errors if table/index/constraint already exists

### Credit Logic Fixes
- **Enhanced `_add_transaction`**: Added transaction history fallback in
`user_balance_lock` CTE
- **Enhanced `_enable_transaction`**: Added same fallback logic for
payment fulfillment
- **Exception normalization**: Convert raw SQL unique violations to
`UniqueViolationError`
- **Simplified `onboarding_reward`**: Use standardized
`UniqueViolationError` catching

### SQL Fallback Pattern
```sql
COALESCE(
    (SELECT balance FROM UserBalance WHERE userId = ? FOR UPDATE),
    -- Fallback: compute from transaction history if UserBalance doesn't exist
    (SELECT COALESCE(ct.runningBalance, 0) 
     FROM CreditTransaction ct 
     WHERE ct.userId = ? AND ct.isActive = true AND ct.runningBalance IS NOT NULL 
     ORDER BY ct.createdAt DESC LIMIT 1),
    0
) as balance
```

## Impact

### Before
- Users with transaction history but no UserBalance couldn't spend credits
- Migration had complex timestamp logic with potential bugs
- Raw SQL and Prisma exceptions handled differently
- Error: "Insufficient balance of $10.0, where this will cost $0.16"

### After
- Seamless spending for all users regardless of UserBalance record existence
- Simple, idempotent migration that's safe to re-run
- Consistent exception handling across all credit operations
- Automatic UserBalance record creation during first transaction
- Backward compatible - existing users unaffected

## Business Value
- **Eliminates user frustration**: Users can spend their credits
immediately
- **Smooth migration path**: From old User.balance to new UserBalance
table
- **Better reliability**: Atomic operations with proper error handling
- **Maintainable code**: Consistent patterns across credit operations

## Test Plan
- [ ] Manual testing with users who have transaction history but no
UserBalance records
- [ ] Verify migration can be run multiple times safely
- [ ] Test spending credits works for all user scenarios
- [ ] Verify payment fulfillment (`_enable_transaction`) works correctly
- [ ] Add comprehensive test coverage for this scenario

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 19:46:13 +07:00
Swifty
a1d9b45238 updated openapi spec 2025-10-17 14:01:37 +02:00
Swifty
29895c290f store providers in db 2025-10-17 13:34:35 +02:00
Zamil Majdy
73c0b6899a fix(backend): Remove advisory locks for atomic credit operations (#11143)
## Problem
High QPS failures on `spend_credits` operations due to lock contention
from `pg_advisory_xact_lock` causing serialization and seconds of wait
time.

## Solution 
Replace PostgreSQL advisory locks with atomic database operations using
CTEs (Common Table Expressions).

### Key Changes
- **Add persistent balance column** to User table for O(1) balance
lookups
- **Atomic CTE-based operations** for all credit transactions using
UPDATE...RETURNING pattern
- **Comprehensive concurrency tests** with 7 test scenarios including
stress testing
- **Remove all advisory lock usage** from the credit system

### Implementation Details
1. **Migration**: Adds balance column with backfill from transaction
history
2. **Atomic Operations**: All credit operations now use single atomic
CTEs that update balance and create a transaction in one query (see the
sketch after this list)
3. **Race Condition Prevention**: WHERE clauses in UPDATE statements
ensure balance never goes negative
4. **BetaUserCredit Compatibility**: Preserved monthly refill logic with
updated `_add_transaction` signature
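
A minimal sketch of what one such atomic operation might look like through
prisma-client-py's raw-query API; the table and column names here are
illustrative, not the exact production schema:

```python
from prisma import Prisma

# One data-modifying CTE: the UPDATE only fires when the balance covers
# the amount, so the balance can never go negative and no advisory lock
# is needed. The transaction row is inserted in the same statement.
SPEND_SQL = """
WITH updated AS (
    UPDATE "User"
    SET balance = balance - $2
    WHERE id = $1 AND balance >= $2
    RETURNING balance
)
INSERT INTO "CreditTransaction" ("userId", amount, "runningBalance")
SELECT $1, -$2, balance FROM updated
RETURNING "runningBalance";
"""

async def spend_credits(db: Prisma, user_id: str, amount: int) -> int:
    rows = await db.query_raw(SPEND_SQL, user_id, amount)
    if not rows:  # UPDATE matched no row, i.e. insufficient balance
        raise ValueError(f"Insufficient balance to spend {amount}")
    return rows[0]["runningBalance"]
```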

### Performance Impact
- Eliminated lock contention bottlenecks
- O(1) balance lookups instead of O(n) transaction aggregation
- Atomic operations prevent race conditions without locks
- Supports high QPS without serialization delays

### Testing
- All existing tests pass
- New concurrency test suite (`credit_concurrency_test.py`) with:
  - Concurrent spends from same user
  - Insufficient balance handling
  - Mixed operations (spends, top-ups, balance checks)
  - Race condition prevention
  - Integer overflow protection
  - Stress testing with 100 concurrent operations

### Breaking Changes
None - all existing APIs maintain compatibility

🤖 Generated with [Claude Code](https://claude.ai/code)

## Summary by CodeRabbit

* **New Features**
* Enhanced top‑up flows with top‑up types, clearer credit→dollar
formatting, and idempotent onboarding rewards.

* **Bug Fixes**
* Fixed race conditions for concurrent spends/top‑ups, added
integer‑overflow and underflow protection, stronger input validation,
and improved refund/dispute handling.

* **Refactor**
* Persisted per‑user balance with atomic updates for reliable balances;
admin history now prefetches balances.

* **Tests**
* Added extensive concurrency, refund, ceiling/underflow and migration
test suites.

* **Chores**
* Database migration to add persisted user balance; APIKey status
extended (SUSPENDED).

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-17 17:05:05 +07:00
Zamil Majdy
4c853a54d7 Merge commit 'e4bc728d40332e7c2b1edec5f1b200f1917950e2' into HEAD 2025-10-17 16:43:23 +07:00
Zamil Majdy
dfdd632161 fix(backend/util): handle nested Pydantic models in SafeJson (#11188)
## Summary

Fixes a critical serialization bug introduced in PR #11187 where
`SafeJson` failed to serialize dictionaries containing Pydantic models,
causing 500 Internal Server Errors in the executor service.

## Problem

The error manifested as:
```
CRITICAL: Operation Approaching Failure Threshold: Service communication: '_call_method_async'
Current attempt: 50/50
Error: HTTPServerError: HTTP 500: Server error '500 Internal Server Error' 
for url 'http://autogpt-database-manager.prod-agpt.svc.cluster.local:8005/create_graph_execution'
```

Root cause in `create_graph_execution`
(backend/data/execution.py:656-657):
```python
"credentialInputs": SafeJson(credential_inputs) if credential_inputs else Json({})
```

Where `credential_inputs: Mapping[str, CredentialsMetaInput]` is a dict
containing Pydantic models.

After PR #11187's refactor, `_sanitize_value()` only converted top-level
BaseModel instances to dicts, but didn't handle BaseModel instances
nested inside dicts/lists/tuples. This caused Prisma's JSON serializer
to fail with:
```
TypeError: Type <class 'backend.data.model.CredentialsMetaInput'> not serializable
```

## Solution

Added BaseModel handling to `_sanitize_value()` to recursively convert
Pydantic models to dicts before sanitizing:

```python
elif isinstance(value, BaseModel):
    # Convert Pydantic models to dict and recursively sanitize
    return _sanitize_value(value.model_dump(exclude_none=True))
```

This ensures all nested Pydantic models are properly serialized
regardless of nesting depth.
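
For context, a minimal self-contained sketch of the recursive sanitizer
shape this fix implies (the real `_sanitize_value` lives in
`backend/util/json.py`; the control-character handling here is simplified):

```python
import re
from pydantic import BaseModel

# Characters PostgreSQL's jsonb cannot store (tab/newline/CR are kept).
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def _sanitize_value(value):
    if isinstance(value, str):
        return CONTROL_CHARS.sub("", value)
    if isinstance(value, BaseModel):
        # The fix: dump nested Pydantic models to dicts, then recurse.
        return _sanitize_value(value.model_dump(exclude_none=True))
    if isinstance(value, dict):
        return {k: _sanitize_value(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [_sanitize_value(v) for v in value]
    return value
```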

## Changes

- **backend/util/json.py**: Added BaseModel check to `_sanitize_value()`
function
- **backend/util/test_json.py**: Added 6 comprehensive tests covering:
  - Dict containing Pydantic models
  - Deeply nested Pydantic models  
  - Lists of Pydantic models in dicts
  - The exact CredentialsMetaInput scenario
  - Complex mixed structures
  - Models with control characters

## Testing

- All new tests pass
- Verified fix resolves the production 500 error
- Code formatted with `poetry run format`

## Related

- Fixes issues introduced in PR #11187
- Related to executor service 500 errors in production

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 09:27:09 +00:00
Swifty
1ed224d481 simplify test and add reset-db make command 2025-10-17 11:12:00 +02:00
Swifty
3b5d919399 fix formatting 2025-10-17 10:56:45 +02:00
Swifty
3c16de22ef add test data creation to makefile and test it 2025-10-17 10:51:58 +02:00
Zamil Majdy
e4bc728d40 Revert "Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)""
This reverts commit 8258338caf.
2025-10-17 15:25:30 +07:00
Swifty
2c6d85d15e feat(platform): Shared cache (#11150)
### Problem
When running multiple backend pods in production, requests can be routed
to different pods causing inconsistent cache states. Additionally, the
current cache implementation in `autogpt_libs` doesn't support shared
caching across processes, leading to data inconsistency and redundant
cache misses.

### Changes 🏗️

- **Moved cache implementation from autogpt_libs to backend**
(`/backend/backend/util/cache.py`)
  - Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
  - Centralized cache utilities within the backend module
  - Updated all import statements across the codebase

- **Implemented Redis-based shared caching**
- Added `shared_cache` parameter to `@cached` decorator for
cross-process caching
  - Implemented Redis connection pooling for efficient cache operations
  - Added support for cache key pattern matching and bulk deletion
  - Added TTL refresh on cache access with `refresh_ttl_on_get` option

- **Enhanced cache functionality**
  - Added thundering herd protection with double-checked locking (see the sketch after this list)
  - Implemented thread-local caching with `@thread_cached` decorator
- Added cache management methods: `cache_clear()`, `cache_info()`,
`cache_delete()`
  - Added support for both sync and async functions

- **Updated store caching** (`/backend/server/v2/store/cache.py`)
  - Enabled shared caching for all store-related cache functions
  - Set appropriate TTL values (5-15 minutes) for different cache types
  - Added `clear_all_caches()` function for cache invalidation

- **Added Redis configuration**
  - Added Redis connection settings to backend settings
  - Configured dedicated connection pool for cache operations
  - Set up binary mode for pickle serialization
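
A compressed sketch of how such a decorator can combine a shared Redis
cache with double-checked locking (async-only here; connection settings,
key derivation, and the real API surface are simplifications):

```python
import asyncio
import functools
import pickle

import redis.asyncio as redis

_pool = redis.Redis()  # assumes the Redis settings described above
_locks: dict[str, asyncio.Lock] = {}

def cached(ttl: int = 300, shared_cache: bool = False):
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            key = f"cache:{fn.__qualname__}:{args}:{sorted(kwargs.items())}"
            if shared_cache:
                raw = await _pool.get(key)
                if raw is not None:
                    return pickle.loads(raw)  # binary mode: pickle payloads
            # Thundering herd protection: one caller recomputes per key.
            lock = _locks.setdefault(key, asyncio.Lock())
            async with lock:
                if shared_cache:
                    raw = await _pool.get(key)  # double-check under the lock
                    if raw is not None:
                        return pickle.loads(raw)
                result = await fn(*args, **kwargs)
                if shared_cache:
                    await _pool.set(key, pickle.dumps(result), ex=ttl)
                return result
        return wrapper
    return decorator
```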

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify Redis connection and cache operations work correctly
  - [x] Test shared cache across multiple backend instances
  - [x] Verify cache invalidation with `clear_all_caches()`
- [x] Run cache tests: `poetry run pytest
backend/backend/util/cache_test.py`
  - [x] Test thundering herd protection under concurrent load
  - [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
  - [x] Test thread-local caching for request-scoped data
  - [x] Ensure no performance regression vs in-memory cache

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Redis cache configuration uses existing Redis service settings
(REDIS_HOST, REDIS_PORT, REDIS_PASSWORD)
  - No new environment variables required
2025-10-17 07:56:01 +00:00
Bentlybro
8258338caf Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)"
This reverts commit e62a56e8ba.
2025-10-17 08:31:23 +01:00
Zamil Majdy
374f35874c feat(platform): Add LaunchDarkly flag for platform payment system (#11181)
## Summary

Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.

- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation  
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function

## Key Changes

### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now
takes a `user_id` parameter and uses LaunchDarkly to determine which
credit model to return (sketched after this list):
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
  
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system

- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
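
A hedged sketch of the per-user selection described above; the
flag-evaluation helper is a stand-in, not the exact API:

```python
from enum import Enum

class Flag(str, Enum):
    ENABLE_PLATFORM_PAYMENT = "enable-platform-payment"

class UserCredit: ...       # payment system enabled, no monthly refills
class BetaUserCredit: ...   # current behavior with monthly refills

async def flag_is_on(flag: Flag, user_id: str) -> bool:
    """Stand-in for the LaunchDarkly per-user evaluation."""
    return False  # the real helper asks LaunchDarkly with user context

async def get_user_credit_model(user_id: str):
    # Per-user flag evaluation picks the credit model at request time.
    if await flag_is_on(Flag.ENABLE_PLATFORM_PAYMENT, user_id):
        return UserCredit()
    return BetaUserCredit()
```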

### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation

### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function

## Deployment Strategy

This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated

## Test Plan

- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 06:11:39 +00:00
Zamil Majdy
e62a56e8ba fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)
## Summary

Fixes the `Invalid \escape` error occurring in
`/upsert_execution_output` endpoint by completely rewriting the SafeJson
implementation.

## Problem

- Error: `POST /upsert_execution_output failed: Invalid \escape: line 1
column 36404 (char 36403)`
- Caused by data containing literal backslash-u sequences (e.g.,
`\u0000` as text, not actual null characters)
- Previous implementation tried to remove problematic escape sequences
from JSON strings
- This created invalid JSON when it removed `\\u0000` and left invalid
sequences like `\w`

## Solution

Completely rewrote SafeJson to work on Python data structures instead of
JSON strings:

1. **Direct data sanitization**: Recursively walks through dicts, lists,
and tuples to remove control characters directly from strings
2. **No JSON string manipulation**: Avoids all escape sequence parsing
issues
3. **More efficient**: Eliminates the serialize → sanitize → deserialize
cycle
4. **Preserves valid content**: Backslashes, paths, and literal text are
correctly preserved

## Changes

- Removed `POSTGRES_JSON_ESCAPES` regex (no longer needed)
- Added `_sanitize_value()` helper function for recursive sanitization
- Simplified `SafeJson()` to convert Pydantic models and sanitize data
structures
- Added `import json  # noqa: F401` for backwards compatibility

## Testing

- Verified fix resolves the `Invalid \escape` error
- All existing SafeJson unit tests pass
- Problematic data with literal escape sequences no longer causes errors
- Code formatted with `poetry run format`

## Technical Details

**Before (JSON string approach):**
```python
# Serialize to JSON string
json_string = dumps(data)
# Remove escape sequences from string (BREAKS!)
sanitized = regex.sub("", json_string)
# Parse back (FAILS with Invalid \escape)
return Json(json.loads(sanitized))
```

**After (data structure approach):**
```python
# Convert Pydantic to dict
data = model.model_dump() if isinstance(data, BaseModel) else data
# Recursively sanitize strings in data structure
sanitized = _sanitize_value(data)
# Return as Json (no parsing needed)
return Json(sanitized)
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 05:56:08 +00:00
Abhimanyu Yadav
f3f9a60157 feat(frontend): add extra info in custom node in new builder (#11172)
Currently, we don't add category and cost information to custom nodes in
the new builder. With this change, we render the correct information, and
costs are displayed accurately based on the selected discriminator value.

<img width="441" height="781" alt="Screenshot 2025-10-15 at 2 43 33 PM"
src="https://github.com/user-attachments/assets/8199cfa7-4353-4de2-8c15-b68aa86e458c"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All information is displayed correctly.
- [x] I've tried changing the discriminator value and we're getting the
correct cost for the selected value.
2025-10-17 04:35:22 +00:00
Swifty
3ed1c93ec0 Merge branch 'dev' into swiftyos/secrt-1706-improve-store-search 2025-10-16 15:10:01 +02:00
Swifty
773f545cfd update existing rows when migration is ran 2025-10-16 13:38:01 +02:00
Swifty
84ad4a9f95 updated migration and query 2025-10-16 13:06:47 +02:00
Swifty
8610118ddc ai sucks - fixing 2025-10-16 12:14:26 +02:00
Bently
9469b9e2eb feat(platform/backend): Add Claude Haiku 4.5 model support (#11179)
### Changes 🏗️

- **Added Claude Haiku 4.5 model support** (`claude-haiku-4-5-20251001`; see the sketch after this list)
- Added model to `LlmModel` enum in
`autogpt_platform/backend/backend/blocks/llm.py`
- Configured model metadata with 200k context window and 64k max output
tokens
- Set pricing to 4 credits per million tokens in
`backend/data/block_cost_config.py`
  
- **Classic Forge Integration**
- Added `CLAUDE4_5_HAIKU_v1` to Anthropic provider in
`classic/forge/forge/llm/providers/anthropic.py`
- Configured with $1/1M prompt tokens and $5/1M completion tokens
pricing
  - Enabled function call API support
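
The registration boils down to an enum entry plus metadata and cost table
rows; a hedged sketch (the metadata fields are illustrative, not the exact
classes in `llm.py`):

```python
from enum import Enum

class LlmModel(str, Enum):
    CLAUDE_HAIKU_4_5 = "claude-haiku-4-5-20251001"

# 200k context window, 64k max output tokens.
MODEL_METADATA = {
    LlmModel.CLAUDE_HAIKU_4_5: {
        "provider": "anthropic",
        "context_window": 200_000,
        "max_output_tokens": 64_000,
    },
}

# block_cost_config.py equivalent: 4 credits per million tokens.
MODEL_COST = {LlmModel.CLAUDE_HAIKU_4_5: 4}
```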

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
- [x] Verify Claude Haiku 4.5 model appears in the LLM block model
selection dropdown
- [x] Test basic text generation using Claude Haiku 4.5 in an agent
workflow

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Configuration changes</summary>

  - No environment variable changes required
  - No docker-compose changes needed
- Model configuration is handled through existing Anthropic API
integration
</details>




https://github.com/user-attachments/assets/bbc42c47-0e7c-4772-852e-55aa91f4d253

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
2025-10-16 10:11:38 +00:00
Swifty
ebb4ebb025 include partial types in second place 2025-10-16 12:10:38 +02:00
Swifty
cb532e1c4d update docker file to include partial types 2025-10-16 12:08:04 +02:00
Zamil Majdy
b7ae2c2fd2 fix(backend): move DatabaseError to backend.util.exceptions for better layer separation (#11177)
## Summary

Move DatabaseError from store-specific exceptions to generic backend
exceptions for proper layer separation, while also fixing store
exception inheritance to ensure proper HTTP status codes.

## Problem

1. **Poor Layer Separation**: DatabaseError was defined in
store-specific exceptions but represents infrastructure concerns that
affect the entire backend
2. **Incorrect HTTP Status Codes**: Store exceptions inherited from
Exception instead of ValueError, causing 500 responses for client errors
3. **Reusability Issues**: Other backend modules couldn't use
DatabaseError for DB operations
4. **Blanket Catch Issues**: Store-specific catches were affecting
generic database operations

## Solution

### Move DatabaseError to Generic Location
- Move from backend.server.v2.store.exceptions to
backend.util.exceptions
- Update all 23 references in backend/server/v2/store/db.py to use new
location
- Remove from StoreError inheritance hierarchy

### Fix Complete Store Exception Hierarchy
- MediaUploadError: Changed from Exception to ValueError inheritance
(client errors → 400)
- StoreError: Changed from Exception to ValueError inheritance (business
logic errors → 400)
- Store NotFound exceptions: Changed to inherit from NotFoundError (→
404)
- DatabaseError: Now properly inherits from Exception (infrastructure
errors → 500)

## Benefits

### Proper Layer Separation
- Database errors are infrastructure concerns, not store-specific
business logic
- Store exceptions focus on business validation and client errors  
- Clean separation between infrastructure and business logic layers

### Correct HTTP Status Codes
- DatabaseError: 500 (server infrastructure errors)
- Store NotFound errors: 404 (via existing NotFoundError handler)
- Store validation errors: 400 (via existing ValueError handler)
- Media upload errors: 400 (client validation errors)

### Architectural Improvements
- DatabaseError now reusable across entire backend
- Eliminates blanket catch issues affecting generic DB operations
- All store exceptions use global exception handlers properly
- Future store exceptions automatically get proper status codes

## Files Changed

- **backend/util/exceptions.py**: Add DatabaseError class
- **backend/server/v2/store/exceptions.py**: Remove DatabaseError, fix
inheritance hierarchy
- **backend/server/v2/store/db.py**: Update all DatabaseError references
to new location

## Result

- **No more stack trace spam**: Expected business logic errors handled properly
- **Proper HTTP semantics**: 500 for infrastructure, 400/404 for client errors
- **Better architecture**: Clean layer separation and reusable components
- **Fixes original issue**: AgentNotFoundError now returns 404 instead of 500

This addresses the logging issue mentioned by @zamilmajdy while also
implementing the architectural improvements suggested by @Pwuts.
2025-10-16 09:51:58 +00:00
Swifty
794aee25ab add full text search 2025-10-16 11:49:36 +02:00
Abhimanyu Yadav
8b995c2394 feat(frontend): add saving ability in new builder (#11148)
This PR introduces saving functionality to the new builder interface,
allowing users to save and update agent flows. The implementation
includes both UI components and backend integration for persistent
storage of agent configurations.



https://github.com/user-attachments/assets/95ee46de-2373-4484-9f34-5f09aa071c5e


### Key Features Added:

#### 1. **Save Control Component** (`NewSaveControl`)
- Added a new save control popover in the control panel with form inputs
for agent name, description, and version display
- Integrated with the new control panel as a primary action button with
a floppy disk icon
- Supports keyboard shortcuts (Ctrl+S / Cmd+S) for quick saving

#### 2. **Graph Persistence Logic**
- Implemented `useNewSaveControl` hook to handle:
  - Creating new graphs via `usePostV1CreateNewGraph`
  - Updating existing graphs via `usePutV1UpdateGraphVersion`
- Intelligent comparison to prevent unnecessary saves when no changes
are made
  - URL parameter management for flowID and flowVersion tracking

#### 3. **Loading State Management**
- Added `GraphLoadingBox` component to display a loading indicator while
graphs are being fetched
- Enhanced `useFlow` hook with loading state tracking
(`isFlowContentLoading`)
- Improved UX with clear visual feedback during graph operations

#### 4. **Component Reorganization**
- Refactored components from `NewBlockMenu` to `NewControlPanel`
directory structure for better organization:
- Moved all block menu related components under
`NewControlPanel/NewBlockMenu/`
- Separated save control into its own module
(`NewControlPanel/NewSaveControl/`)
  - Improved modularity and separation of concerns

#### 5. **State Management Enhancements**
- Added `controlPanelStore` for managing control panel states (e.g.,
save popover visibility)
- Enhanced `nodeStore` with `getBackendNodes()` method for retrieving
nodes in backend format
- Added `getBackendLinks()` to `edgeStore` for consistent link
formatting

### Technical Improvements:

- **Graph Comparison Logic**: Implemented `graphsEquivalent()` helper to
deeply compare saved and current graph states, preventing redundant
saves
- **Form Validation**: Used Zod schema validation for save form inputs
with proper constraints
- **Error Handling**: Comprehensive error handling with user-friendly
toast notifications
- **Query Invalidation**: Proper cache invalidation after successful
saves to ensure data consistency

### UI/UX Enhancements:

- Clean, modern save dialog with clear labeling and placeholder text
- Real-time version display showing the current graph version
- Disabled state for save button during operations to prevent double
submissions
- Toast notifications for success and error states
- Higher z-index for GraphLoadingBox to ensure visibility over other
elements

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving is working perfectly. All nodes, links, their positions,
and hardcoded data are saved correctly.
  - [x] If there are no changes, the user cannot save the graph.
2025-10-16 08:06:18 +00:00
Zamil Majdy
12b1067017 fix(backend/store): improve store exception hierarchy for proper HTTP status codes (#11176)
## Summary

Fix store exception hierarchy to prevent ERROR level stack trace spam
for expected business logic errors and ensure proper HTTP status codes.

## Problem

The original error from production logs showed AgentNotFoundError for
non-existent agents like autogpt/domain-drop-catcher was:
- Returning 500 status codes instead of 404 
- Generating ERROR level stack traces in logs for expected not found
scenarios
- Bypassing global exception handlers due to improper inheritance

## Root Cause

Store exceptions inherited from Exception instead of ValueError, causing
them to bypass the global ValueError handler (400) and fall through to
the generic Exception handler (500) with full stack traces.

## Solution

Create proper exception hierarchy for ALL store-related errors by
making:
- MediaUploadError inherit from ValueError instead of Exception
- StoreError inherit from ValueError instead of Exception  
- Store NotFound exceptions inherit from NotFoundError (which inherits
from ValueError)

## Changes Made

1. MediaUploadError: Changed from Exception to ValueError inheritance
2. StoreError: Changed from Exception to ValueError inheritance  
3. Store NotFound exceptions: Changed to inherit from NotFoundError (sketched below)
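
The resulting hierarchy, sketched with the class names from this PR
(module layout simplified):

```python
class NotFoundError(ValueError):
    """Caught by the global handler and mapped to 404."""

class StoreError(ValueError):
    """Business-logic errors: mapped to 400 by the ValueError handler."""

class MediaUploadError(ValueError):
    """Client upload errors: mapped to 400."""

class AgentNotFoundError(NotFoundError):
    """Expected 'not found': 404, no ERROR-level stack trace."""
```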

## Benefits

- Correct HTTP status codes: Not found errors return 404, validation
errors return 400
- No more 500 stack trace spam for expected business logic errors
- Clean consistent error handling using existing global handlers
- Future-proof: Any new store exceptions automatically get proper status
codes

## Result

- AgentNotFoundError for autogpt/domain-drop-catcher now returns 404
instead of 500
- InvalidFileTypeError, VirusDetectedError, etc. now return 400 instead
of 500
- No ERROR level stack traces for expected client errors
- Proper HTTP semantics throughout the store API
2025-10-16 04:36:49 +00:00
Zamil Majdy
ba53cb78dc fix(backend/util): comprehensive SafeJson sanitization to prevent PostgreSQL null character errors (#11174)
## Summary
Fix critical SafeJson function to properly sanitize JSON-encoded Unicode
escape sequences that were causing PostgreSQL 22P05 errors when null
characters in web scraped content were stored as "\u0000" in the
database.

## Root Cause Analysis
The existing SafeJson function in backend/util/json.py:
1. Only removed raw control characters (\x00-\x08, etc.) using
POSTGRES_CONTROL_CHARS regex
2. Failed to handle JSON-encoded Unicode escape sequences (\u0000,
\u0001, etc.)
3. When web scraping returned content with null bytes, these were
JSON-encoded as "\u0000" strings
4. PostgreSQL rejected these Unicode escape sequences, causing 22P05
errors

## Changes Made

### Enhanced SafeJson Function (backend/util/json.py:147-153)
- **Add POSTGRES_JSON_ESCAPES regex**: Comprehensive pattern targeting
all PostgreSQL-incompatible Unicode and single-char JSON escape
sequences (an illustrative reconstruction follows this list)
- **Unicode escapes**: \u0000-\u0008, \u000B-\u000C, \u000E-\u001F,
\u007F (preserves \u0009=tab, \u000A=newline, \u000D=carriage return)
- **Single-char escapes**: \b (backspace), \f (form feed) with negative
lookbehind/lookahead to preserve file paths like "C:\\file.txt"
- **Two-pass sanitization**: Remove JSON escape sequences first, then
raw characters as fallback
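
An illustrative reconstruction of the first pass, operating on an
already-JSON-encoded string (this is not the exact production pattern,
and the second raw-character pass is omitted):

```python
import re

POSTGRES_JSON_ESCAPES = re.compile(
    r"\\u000[0-8BbCcEeFf]"  # \u0000-\u0008, \u000B, \u000C, \u000E, \u000F
    r"|\\u001[0-9A-Fa-f]"   # \u0010-\u001F
    r"|\\u007[Ff]"          # \u007F (DEL)
    r"|(?<!\\)\\[bf]"       # lone \b or \f, but not the "\\f" in "C:\\file"
)

dirty = '{"text": "null\\u0000byte", "path": "C:\\\\file.txt"}'
print(POSTGRES_JSON_ESCAPES.sub("", dirty))
# -> {"text": "nullbyte", "path": "C:\\file.txt"}
```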

### Comprehensive Test Coverage (backend/util/test_json.py:219-414)
Added 11 new test methods covering:
- **Control character sanitization**: Verify dangerous characters (\x00,
\x07, \x0C, etc.) are removed while preserving safe whitespace (\t, \n,
\r)
- **Web scraping content**: Simulate SearchTheWebBlock scenarios with
null bytes and control characters
- **Code preservation**: Ensure legitimate file paths, JSON strings,
regex patterns, and programming code are preserved
- **Unicode escape handling**: Test both \u0000-style and \b/\f-style
escape sequences
- **Edge case protection**: Prevent over-matching of legitimate
sequences like "mybfile.txt" or "\\u0040"
- **Mixed content scenarios**: Verify only problematic sequences are
removed while preserving legitimate content

## Validation Results
- All 24 SafeJson tests pass, including 11 new comprehensive sanitization tests
- Control characters properly removed: \x00, \x01, \x08, \x0C, \x1F, \x7F
- Safe characters preserved: \t (tab), \n (newline), \r (carriage return)
- File paths preserved: "C:\\Users\\file.txt", "\\\\server\\share"
- Programming code preserved: regex patterns, JSON strings, SQL escapes
- Unicode escapes sanitized: \u0000 → removed, \u0048 ("H") → preserved if valid
- No false positives: legitimate sequences not accidentally removed
- `poetry run format` succeeds without errors

## Impact
- **Prevents PostgreSQL 22P05 errors**: No more null character database
rejections from web scraping
- **Maintains data integrity**: Legitimate content preserved while
dangerous characters removed
- **Comprehensive protection**: Handles both raw bytes and JSON-encoded
escape sequences
- **Web scraping reliability**: SearchTheWebBlock and similar blocks now
store content safely
- **Backward compatibility**: Existing SafeJson behavior unchanged for
legitimate content

## Test Plan
- [x] All existing SafeJson tests pass (24/24)
- [x] New comprehensive sanitization tests pass (11/11)
- [x] Control character removal verified
- [x] Legitimate content preservation verified
- [x] Web scraping scenarios tested
- [x] Code formatting and type checking passes

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-15 21:25:30 +00:00
Reinier van der Leer
f9778cc87e fix(blocks): Unhide "Add to Dictionary" block's dictionary input (#11175)
The `dictionary` input on the Add to Dictionary block is hidden, even
though it is the main input.

### Changes 🏗️

- Mark `dictionary` explicitly as not advanced (so not hidden by
default)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed
2025-10-15 15:04:56 +00:00
Nicholas Tindle
b230b1b5cf feat(backend): Add Sentry user and tag tracking to node execution (#11170)
Integrates Sentry SDK to set user and contextual tags during node
execution for improved error tracking and user count analytics. Ensures
Sentry context is properly set and restored, and exceptions are captured
with relevant context before scope restoration.
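
A minimal sketch of the pattern (tag names and the execution call are
placeholders, not the exact implementation):

```python
import sentry_sdk

def execute_node_with_tracking(node_exec, user_id: str, graph_id: str):
    # An isolated scope keeps user/tag context from leaking into other
    # events; exceptions are captured while the context is still set.
    with sentry_sdk.push_scope() as scope:
        scope.set_user({"id": user_id})
        scope.set_tag("graph_id", graph_id)
        scope.set_tag("node_id", node_exec.node_id)
        try:
            return node_exec.run()
        except Exception as exc:
            sentry_sdk.capture_exception(exc)
            raise
```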

### Changes 🏗️
Adds sentry tracking to block failures

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test to make sure the userid and block details show up in Sentry
  - [x] make sure other errors aren't contaminated 


## Summary by CodeRabbit

- New Features
- Added conditional support for feature flags when configured, enabling
targeted rollouts and experiments without impacting unconfigured
environments.

- Chores
- Enhanced error monitoring with richer contextual data during node
execution to improve stability and diagnostics.
- Updated metrics initialization to dynamically include feature flag
integrations when available, without altering behavior for unconfigured
setups.

2025-10-15 14:33:08 +00:00
Reinier van der Leer
1925e77733 feat(backend): Include default input values in graph export (#11173)
Since #10323, one-time graph inputs are no longer stored on the input
nodes (#9139), so we can reasonably assume that the default value set by
the graph creator will be safe to export.

### Changes 🏗️

- Don't strip the default value from input nodes in
`NodeModel.stripped_for_export(..)`, except for inputs marked as
`secret` (see the sketch below)
- Update and expand tests for graph export secrets stripping mechanism
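
A hedged sketch of the new stripping rule (dict-based; the real method
lives on `NodeModel` and the field names are illustrative):

```python
def stripped_for_export(node: dict) -> dict:
    exported = dict(node)
    defaults = dict(exported.get("input_default", {}))
    for name, subschema in exported.get("input_schema", {}).items():
        if subschema.get("secret"):      # only secret inputs are stripped
            defaults.pop(name, None)
    exported["input_default"] = defaults  # other defaults survive export
    return exported
```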

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Expanded tests pass
- Relatively simple change with good test coverage, no manual test
needed
2025-10-15 14:04:44 +00:00
Copilot
9bc9b53b99 fix(backend): Add channel ID support to SendDiscordMessageBlock for consistency with other Discord blocks (#11055)
## Problem

The `SendDiscordMessageBlock` only accepted channel names, while other
Discord blocks like `SendDiscordFileBlock` and `SendDiscordEmbedBlock`
accept both channel IDs and channel names. This inconsistency made it
difficult to use channel IDs with the message sending block, which is
often more reliable and direct than name-based lookup.

## Solution

Updated `SendDiscordMessageBlock` to accept both channel IDs and channel
names through the `channel_name` field, matching the implementation
pattern used in other Discord blocks.

### Changes Made

1. **Enhanced channel resolution logic** to try parsing the input as a
channel ID first, then fall back to name-based search:
   ```python
   # Try to parse as channel ID first
   try:
       channel_id = int(channel_name)
       channel = client.get_channel(channel_id)
   except ValueError:
       # Not an ID, treat as channel name
       # ... search guilds for matching channel name
   ```

2. **Updated field descriptions** to clarify the dual functionality:
- `channel_name`: Now describes that it accepts "Channel ID or channel
name"
   - `server_name`: Clarified as "only needed if using channel name"

3. **Added type checking** to ensure the resolved channel can send
messages before attempting to send (a fuller sketch follows this list)

4. **Updated documentation** to reflect the new capability
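
A fuller sketch of the resolution logic under these assumptions
(discord.py client; the helper name is invented for illustration):

```python
import discord

def resolve_channel(client: discord.Client, channel_ref: str,
                    server_name: str | None = None):
    try:
        # A purely numeric value is treated as a channel ID.
        channel = client.get_channel(int(channel_ref))
    except ValueError:
        # Otherwise search guilds, optionally narrowed by server name.
        channel = next(
            (ch for guild in client.guilds
             if server_name is None or guild.name == server_name
             for ch in guild.text_channels if ch.name == channel_ref),
            None,
        )
    if channel is None:
        raise ValueError(f"Channel not found: {channel_ref!r}")
    if not isinstance(channel, discord.abc.Messageable):
        raise ValueError("Resolved channel cannot receive messages")
    return channel
```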

## Backward Compatibility

**Fully backward compatible**: The field name remains `channel_name`
(not renamed), and all existing workflows using channel names will
continue to work exactly as before.

**New capability**: Users can now also provide channel IDs (e.g.,
`"123456789012345678"`) for more direct channel targeting.

## Testing

- All existing tests pass, including `SendDiscordMessageBlock` and all
other Discord block tests
- Implementation verified to match the pattern used in
`SendDiscordFileBlock` and `SendDiscordEmbedBlock`
- Code passes all linting, formatting, and type checking

Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10909

<details>

<summary>Original prompt</summary>

> Issue Title: SendDiscordMessage needs to take a channel id as an
option under channelname the same as the other discord blocks
> Issue Description: with how we can process the other discord blocks we
should do the same here with the identifiers being allowed to be a
channel name or id. we can't rename the field though or that will break
backwards compatibility
> Fixes
https://linear.app/autogpt/issue/OPEN-2701/senddiscordmessage-needs-to-take-a-channel-id-as-an-option-under
> 
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User 055a3053-5ab6-449a-bcfa-990768594185:
> the ones with boxes around them need confirmed for lables but yeah its
related but not dupe
> 
> Comment by User 264d7bf4-db2a-46fa-a880-7d67b58679e6:
> this might be a duplicate since there is a related ticket but not sure
> 
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10909).
All replies are displayed in both locations.
> 
> 


</details>


## Summary by CodeRabbit

* New Features
* Send Discord Message block now accepts a channel ID in addition to
channel name.
  * Server name is only required when using a channel name.
* Improved channel detection and validation with clearer errors if the
channel isn’t found.

* Documentation
* Updated block documentation to reflect support for channel ID or name
and clarify when server name is needed.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bently <Github@bentlybro.com>
2025-10-15 13:04:53 +00:00
Toran Bruce Richards
adfa75eca8 feat(blocks): Add references output pin to Fact Checker block (#11166)
Closes #11163

## Summary
Expanded the Fact Checker block to yield its references list from the
Jina AI API response.

## Changes 🏗️
- Added `Reference` TypedDict to define the structure of reference
objects
- Added `references` field to the Output schema
- Modified the `run` method to extract and yield references from the API
response
- Added fallback to empty list if references are not present (see the sketch below)
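
A hedged sketch of the shape involved; the `Reference` field names are
inferred from the release notes below, not taken from the Jina AI docs:

```python
from typing import TypedDict

class Reference(TypedDict):
    url: str
    keyQuote: str
    isSupportive: bool

def yield_outputs(data: dict):
    # After parsing the Jina AI response inside run():
    yield "factuality", data.get("factuality")
    yield "result", data.get("result")
    yield "reason", data.get("reason")
    yield "references", data.get("references", [])  # fallback: empty list
```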

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the Fact Checker block schema includes the new
references field
- [x] Confirmed that references are properly extracted from the API
response when present
- [x] Tested the fallback behavior when references are not in the API
response
- [x] Ensured backward compatibility - existing functionality remains
unchanged
- [x] Verified the Reference TypedDict matches the expected API response
structure

Generated with [Claude Code](https://claude.ai/code)

## Summary by CodeRabbit

* **New Features**
* Fact-checking results now include a references list to support
verification.
* Each reference provides a URL, a key quote, and an indicator showing
whether it supports the claim.
* References are presented alongside factuality, result, and reasoning
when available; otherwise, an empty list is returned.
* Enhances transparency and traceability of fact-check outcomes without
altering existing result fields.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-15 10:19:43 +00:00
seer-by-sentry[bot]
0f19d01483 fix(frontend): Improve error handling for invalid agent files (#11165)
### Changes 🏗️

<img width="672" height="761" alt="Screenshot 2025-10-14 at 16 12 50"
src="https://github.com/user-attachments/assets/9e664ade-10fe-4c09-af10-b26d10dca360"
/>


Fixes
[BUILDER-3YG](https://sentry.io/organizations/significant-gravitas/issues/6942679655/).
The issue was that a user uploaded an incompatible external agent persona
file lacking the required flow graph keys (`nodes`, `links`).

- Improves error handling when an invalid agent file is uploaded.
- Provides a more user-friendly error message indicating the file must
be a valid agent.json file exported from the AutoGPT platform.
- Clears the invalid file from the form and resets the agent object to
null.

This fix was generated by Seer in Sentry, triggered by Toran Bruce
Richards. 👁️ Run ID: 1943626

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6942679655/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test that uploading an invalid agent file (e.g., missing `nodes`
or `links`) triggers the improved error handling and displays the
user-friendly error message.
- [x] Verify that the invalid file is cleared from the form after the
error, and the agent object is reset to null.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-10-15 09:55:00 +00:00
Abhimanyu Yadav
112c39f6a6 fix(frontend): fix auto select credential mechanism in new builder (#11171)
We’re currently facing two problems with credentials:

1. When we change the discriminator input value, the form data
credential field should be cleaned up completely.
2. When I select a different discriminator and if that provider has a
value, it should select the latest one.

So, in this PR, I've addressed both issues.

### Changes 🏗️
- Updated CredentialField to utilize a new setCredential function for
managing selected credentials.
- Implemented logic to auto-select the latest credential when none is
selected and clear the credential if the provider changes.
- Improved SelectCredential component with a defaultValue prop and
adjusted styling for better UI consistency.
- Removed unnecessary console logs from helper functions to clean up the
code.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Credential selection works perfectly with both the discriminator
and normal addition.
  - [x] Auto-select credential is also working.
2025-10-15 08:39:05 +00:00
Toran Bruce Richards
22946f4617 feat(blocks): add dedicated Perplexity block (#11164)
Fixes #11162

## Summary

Implements a new Perplexity block that allows users to query
Perplexity's sonar models via OpenRouter with support for extracting URL
citations and annotations.

## Changes

- Add new block for Perplexity sonar models (sonar, sonar-pro,
sonar-deep-research)
- Support model selection for all three Perplexity models
- Implement annotations output pin for URL citations and source
references (sketched below)
- Integrate with OpenRouter API for accessing Perplexity models
- Follow existing block patterns from AI text generator block
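
A hedged sketch of the output side of the block (the OpenRouter request
itself is omitted; the response shape assumed here follows the usual
chat-completion layout):

```python
PERPLEXITY_MODELS = ["sonar", "sonar-pro", "sonar-deep-research"]

def yield_block_outputs(completion: dict):
    message = completion["choices"][0]["message"]
    yield "response", message.get("content", "")
    # URL citations surface as message annotations when provided.
    yield "annotations", message.get("annotations", [])
```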

## Test Plan

- Block successfully instantiates
- Block is properly loaded by the dynamic loading system
- Output fields include response and annotations as required

Generated with [Claude Code](https://claude.ai/code)

## Summary by CodeRabbit

- New Features
- Added a Perplexity integration block to query Sonar models via
OpenRouter.
- Supports multiple model options, optional system prompt, and
adjustable max tokens.
- Returns concise responses with citation-style annotations extracted
from the model output.
  - Provides clear error messages when requests fail.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-15 08:34:37 +00:00
Ubbe
938834ac83 dx(frontend): enable Next.js sourcemaps for Sentry (#11161)
## Changes 🏗️

Next.js Sourcemaps aren't working on production, followed:

- https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/
- https://docs.sentry.io/organization/integrations/deployment/vercel/

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see once deployed ...

### For configuration changes:

None
2025-10-15 12:47:14 +04:00
Zamil Majdy
934cb3a9c7 feat(backend): Make execution limit per user per graph and reduce to 25 (#11169)
## Summary
- Changed max_concurrent_graph_executions_per_user from 50 to 25
concurrent executions
- Updated the limit to be per user per graph instead of globally per
user
- Users can now run different graphs concurrently without being limited
by executions of other graphs
- Enhanced database query to filter by both user_id and graph_id

## Changes Made
- **Settings**: Reduced default limit from 50 to 25 and updated
description to clarify per-graph scope
- **Database Layer**: Modified `get_graph_executions_count` to accept
optional `graph_id` parameter
- **Executor Manager**: Updated rate limiting logic to check
per-user-per-graph instead of per-user globally (sketched below)
- **Logging**: Enhanced warning messages to include graph_id context
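
A hedged sketch of the per-user-per-graph check; `get_graph_executions_count`
is the helper named above, with an assumed signature and the surrounding
plumbing simplified:

```python
MAX_CONCURRENT_EXECUTIONS_PER_USER_PER_GRAPH = 25

async def get_graph_executions_count(
    user_id: str,
    graph_id: str | None = None,
    statuses: list[str] | None = None,
) -> int:
    """Stub for the DB-layer helper (signature assumed)."""
    return 0  # the real helper counts matching graph executions

async def check_execution_limit(user_id: str, graph_id: str) -> None:
    running = await get_graph_executions_count(
        user_id=user_id,
        graph_id=graph_id,  # the new optional filter
        statuses=["QUEUED", "RUNNING"],
    )
    if running >= MAX_CONCURRENT_EXECUTIONS_PER_USER_PER_GRAPH:
        raise RuntimeError(
            f"User {user_id} already has {running} running executions "
            f"of graph {graph_id}; the limit is 25 per user per graph"
        )
```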

## Test plan
- [ ] Verify that users can run up to 25 concurrent executions of the
same graph
- [ ] Verify that users can run different graphs concurrently without
interference
- [ ] Test rate limiting behavior when limit is exceeded for a specific
graph
- [ ] Confirm logging shows correct graph_id context in rate limit
messages

## Impact
This change improves the user experience by allowing concurrent
execution of different graphs while still preventing resource exhaustion
from running too many instances of the same graph.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-15 00:02:55 +00:00
seer-by-sentry[bot]
7b8499ec69 feat(backend): Prevent duplicate slugs for store submissions (#11155)
<!-- Clearly explain the need for these changes: -->
This PR prevents users from creating multiple store submissions with the
same slug, which could lead to confusion and potential conflicts in the
marketplace. Each user's submissions should have unique slugs to ensure
proper identification and navigation.

### Changes 🏗️

- **Backend**: Added validation to check for existing slugs before
creating new store submissions in `backend/server/v2/store/db.py` (sketched below)
- **Backend**: Introduced new `SlugAlreadyInUseError` exception in
`backend/server/v2/store/exceptions.py` for clearer error handling
- **Backend**: Updated store routes to return HTTP 409 Conflict when
slug is already in use in `backend/server/v2/store/routes.py`
- **Backend**: Cleaned up test file in
`backend/server/v2/store/db_test.py`
- **Frontend**: Enhanced error handling in the publish agent modal to
display specific error messages to users in
`frontend/src/components/contextual/PublishAgentModal/components/AgentInfoStep/useAgentInfoStep.ts`
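
A hedged sketch of the db-layer check and its HTTP mapping; the Prisma
query shape and field names are assumptions based on this description:

```python
from prisma.models import StoreListing

class SlugAlreadyInUseError(ValueError):
    """Raised when a user already has a submission with this slug."""

async def assert_slug_available(user_id: str, slug: str) -> None:
    existing = await StoreListing.prisma().find_first(
        where={"owningUserId": user_id, "slug": slug}
    )
    if existing is not None:
        raise SlugAlreadyInUseError(
            f"Slug '{slug}' is already used by one of your submissions"
        )

# In the route, the exception maps to HTTP 409 Conflict:
#     except SlugAlreadyInUseError as e:
#         raise fastapi.HTTPException(status_code=409, detail=str(e))
```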

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a store submission with a specific slug
- [x] Attempt to add another store submission with the same slug for the
same user - Verify a 409 conflict error is returned with appropriate
error message
- [x] Add a store submission with the same slug, but for a different
user - Verify the submission is successful
- [x] Verify frontend displays the specific error message when slug
conflict occurs
  - [x] Existing tests pass without modification

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-14 11:14:00 +00:00
Abhimanyu Yadav
63076a67e1 fix(frontend): fix client side error handling in custom mutator (#11160)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11159

Currently, we’re not throwing errors for client-side requests in the
custom mutator. This way, we’re ignoring the client-side request error.
If we do encounter an error, we send it as a normal response object.
That’s why our onError callback on React Query mutation and hasError
isn’t working in the query. To fix this, in this PR, we’re throwing the
client-side error.

### Changes 🏗️
- We’re throwing errors for both server-side and client-side requests.  
- Why server-side? So the client cache isn’t hydrated with the error.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All end-to-end functionality is working properly.
- [x] I’ve manually checked all the pages and they are all functioning
correctly.
2025-10-14 08:41:57 +00:00
Abhimanyu Yadav
41260a7b4a fix(frontend): fix publish agent behavior when user is logged out (#11159)
When a user clicks the “Become a Creator” button on the marketplace
page, we send an unauthorised request to the server to get a list of
agents. In this PR, I’ve fixed this by checking if the user is logged
in. If they’re not, I’ll show them a UI to log in or create an account.
 
<img width="967" height="605" alt="Screenshot 2025-10-14 at 12 04 52 PM"
src="https://github.com/user-attachments/assets/95079d9c-e6ef-4d75-9422-ce4fb138e584"
/>

### Changes
- Modify the publish agent test to detect the correct text even when the
user is logged out.
- Use Supabase helpers to check if the user is logged in. If not,
display the appropriate UI.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] The login UI is correctly displayed when the user is logged out.
- [x] The correct UI is also displayed when the user is logged in.
  - [x] The login and signup buttons are working perfectly.
2025-10-14 08:41:49 +00:00
Ubbe
5f2d4643f8 feat(frontend): dynamic search terms (#11156)
## Changes 🏗️

<img width="800" height="664" alt="Screenshot 2025-10-14 at 14 09 54"
src="https://github.com/user-attachments/assets/73f6277a-6bef-40f9-b208-31aba0cfc69f"
/>

<img width="600" height="773" alt="Screenshot 2025-10-14 at 14 10 05"
src="https://github.com/user-attachments/assets/c88cb22f-1597-4216-9688-09c19030df89"
/>

Allow to manage on the fly which search terms appear on the Marketplace
page via Launch Darkly dashboard. There is a new flag configured there:
`marketplace-search-terms`:
- **enabled** → `["Foo", "Bar"]` → the terms that will appear
- **disabled** → `[ "Marketing", "SEO", "Content Creation",
"Automation", "Fun"]` → the default ones show

### Small fix

Fix the following browser console warning about `onLoadingComplete`
being deprecated...
<img width="600" height="231" alt="Screenshot 2025-10-14 at 13 55 45"
src="https://github.com/user-attachments/assets/1b26e228-0902-4554-9f8c-4839f8d4ed83"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran the flag locally and verified it shows the terms set on Launch
Darkly

### For configuration changes:

Launch Darkly new flag needs to be configured on all environments.
2025-10-14 06:43:56 +01:00
Krzysztof Czerwinski
9c8652b273 feat(backend): Whitelist Onboarding Agents (#11149)
Some agents aren't suitable for onboarding. This adds a per-store-agent
setting to allow them for onboarding. In case no agent is allowed, we fall
back to the former search.

### Changes 🏗️

- Add `useForOnboarding` to `StoreListing` model and `StoreAgent` view
(with migration; see the sketch below)
- Remove filtering of agents with empty input schema or credentials 
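
A hedged sketch of the selection-with-fallback; the query shape and the
fallback helper name are assumptions based on the change list above:

```python
from prisma.models import StoreAgent

async def legacy_onboarding_search(limit: int) -> list[StoreAgent]:
    """Placeholder for the former search behavior."""
    return []

async def pick_onboarding_agents(limit: int) -> list[StoreAgent]:
    allowed = await StoreAgent.prisma().find_many(
        where={"useForOnboarding": True}, take=limit
    )
    if len(allowed) >= limit:
        return allowed
    # Fall back to the former search when too few agents are allowed.
    return await legacy_onboarding_search(limit)
```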

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Only allowed agents are displayed
- [x] Fallback to the old system in case there aren't enough allowed
agents
2025-10-13 15:05:22 +00:00
Swifty
58ef687a54 fix(platform): Disable logging store terms (#11147)
There is concern that the write load on the database may derail the
performance optimisations.
This hotfix comments out the code that adds the search terms to the db,
so we can discuss how best to do this in a way that won't bring down the
db.

### Changes 🏗️

- commented out the code to log store terms to the db

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] check search still works in dev
2025-10-13 13:17:04 +00:00
Ubbe
c7dcbc64ec fix(frontend): ask for credentials in onboarding agent run (#11146)
## Changes 🏗️

<img width="800" height="852" alt="Screenshot_2025-10-13_at_19 20 47"
src="https://github.com/user-attachments/assets/2fc150b9-1053-4e25-9018-24bcc2d93b43"
/>

<img width="800" height="669" alt="Screenshot 2025-10-13 at 19 23 41"
src="https://github.com/user-attachments/assets/9078b04e-0f65-42f3-ac4a-d2f3daa91215"
/>

- Onboarding “Run” step now renders required credentials (e.g., Google
OAuth) and includes them in execution.
- Run button remains disabled until required inputs and credentials are
provided.
- Logic extracted and strongly typed; removed `any` usage.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan ( _once merged
in dev..._ )
  - [ ] Select an onboarding agent that requires Google OAuth:
  - [ ] Credentials selector appears.
  - [ ] After selecting/signing in, “Run agent” enables.
  - [ ] Run succeeds and navigates to the next step.

### For configuration changes:

None
2025-10-13 12:51:45 +00:00
Ubbe
99ac206272 fix(frontend): handle websocket disconnect issue (#11144)
## Changes 🏗️

I found that if I logged out while an agent was running, WebSockets
would sometimes keep connections open but fail to connect (given there
is no token anymore), causing strange behavior down the line on the
login screen.

Two root causes surfaced after inspecting the browser logs 🧐
- WebSocket connections were attempted with an empty token right after
logout, yielding `wss://.../ws?token=` and repeated `1006`
connection-refused loops.
- During logout, sockets in `CONNECTING` state weren't being closed, so
the browser kept trying to finish the handshake, and connections were
reattempted shortly after failing.

The fix:
- Guard `connectWebSocket()` to no-op if a logout/disconnect intent is
set, and to skip connecting when no token is available.
- Treat `CONNECTING` sockets as closeable in `disconnectWebSocket()` and
clear `wsConnecting` to avoid stale pending Promises
- Left existing heartbeat/reconnect logic intact, but it now won’t run
when we’re logging out or when we can’t get a token.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login and run an agent that takes long to run
  - [x] Logout
  - [x] Check the browser console and you don't see any socket errors
  - [x] The login screen behaves ok   

### For configuration changes:

Noop
2025-10-13 12:10:16 +00:00
Abhimanyu Yadav
f67d78df3e feat(frontend): Implement discriminator logic in the new builder’s credential system. (#11124)
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
and https://github.com/Significant-Gravitas/AutoGPT/pull/11122

In this PR, I’ve added support for credential discrimination: users can
now choose a credential type based on other input values.


https://github.com/user-attachments/assets/6cedc59b-ec84-4ae2-bb06-59d891916847

### Changes 🏗️
- Updated CredentialsField to utilize credentialProvider from schema.
- Refactored helper functions to filter credentials based on the
selected provider.
- Modified APIKeyCredentialsModal and PasswordCredentialsModal to accept
provider as a prop.
- Improved FieldTemplate to dynamically display the correct credential
provider.
- Added getCredentialProviderFromSchema function to manage
multi-provider scenarios.

### Checklist 📋

#### For code changes:
- [x] Credential input is correctly updating based on other input
values.
- [x] Credential can be added correctly.
2025-10-13 12:08:10 +00:00
Swifty
e32c509ccc feat(backend): Simplify caching to just store routes (#11140)
### Problem
Caching needed to be limited to just the main marketplace routes.

### Changes 🏗️

- **Simplified store cache implementation** in
`backend/server/v2/store/cache.py`
  - Streamlined caching logic for better maintainability
  - Reduced complexity while maintaining performance
  
- **Added cache invalidation on store updates**
  - Implemented cache clearing when new agents are added to the store
- Added invalidation logic in admin store routes
(`admin_store_routes.py`)
  - Ensures all pods reflect the latest store state after modifications

- **Updated store database operations** in
`backend/server/v2/store/db.py`
  - Modified to work with the new cache structure
  
- **Added cache deletion tests** (`test_cache_delete.py`)
  - Validates cache invalidation works correctly
  - Ensures cache consistency across operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify store listings are cached correctly
  - [x] Upload a new agent to the store and confirm cache is invalidated
2025-10-13 07:25:59 +00:00
seer-by-sentry[bot]
20acd8b51d fix(backend): Improve Postmark error handling and logging for notification delivery (#11052)
<!-- Clearly explain the need for these changes: -->
Fixes
[AUTOGPT-SERVER-5K6](https://sentry.io/organizations/significant-gravitas/issues/6887660207/).
The issue was that: Batch sending fails due to malformed data (422) and
inactive recipients (406); the 406 error is misclassified as a size
limit failure.

- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.


This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 1675950

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6887660207/?seerDrawer=true)

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
- Also disables this in prod to prevent scaling issues until we work out
some of the more critical issues
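
To make the skip-and-continue behavior concrete, here is a hedged sketch; `PostmarkError`, `send_templated`, and the way status codes are exposed are illustrative stand-ins, not the real client's API:

```python
import logging

logger = logging.getLogger(__name__)

class PostmarkError(Exception):
    """Stand-in for the Postmark client's error type (assumption)."""
    def __init__(self, status_code: int, message: str):
        super().__init__(message)
        self.status_code = status_code

def send_batch(notifications, send_templated) -> None:
    for i, notification in enumerate(notifications):
        try:
            send_templated(notification)
        except PostmarkError as e:
            if e.status_code == 406:  # inactive recipient: skip, don't retry
                logger.warning("Inactive recipient at index %d: %s", i, e)
            elif e.status_code == 422:  # malformed data: skip and log
                logger.error("Malformed notification at index %d: %s", i, e)
            else:
                raise  # unexpected errors still fail the batch
        except ValueError as e:  # e.g. oversized notification
            logger.error("Skipping notification at index %d: %s", i, e)
```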

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test sending notifications with invalid email addresses to ensure
406 errors are handled correctly.
- [x] Test sending notifications with malformed data to ensure 422
errors are handled correctly.
- [x] Test sending oversized notifications to ensure they are skipped
and logged correctly.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- New Features
  - None

- Bug Fixes
- Individual email failures no longer abort a batch; processing
continues after per-recipient errors.
- Specific handling for inactive recipients and malformed messages to
prevent repeated delivery attempts.

- Chores
  - Improved error logging and diagnostics for email delivery scenarios.

- Tests
- Added tests covering email-sending error cases, user-deactivation on
inactive addresses, and batch-continuation behavior.

- Documentation
  - None
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-13 07:16:48 +00:00
Nicholas Tindle
a49c957467 Revert "fix(frontend/builder): Sync frontend node IDs with backend after save" (#11142)
Reverts Significant-Gravitas/AutoGPT#11075
2025-10-13 07:16:02 +00:00
Abhimanyu Yadav
cf6e724e99 feat(platform): load graph on new builder (#11141)
In this PR, I’ve added functionality to fetch a graph based on the
flowID and flowVersion provided in the URL. Once the graph is fetched,
we add the nodes and links from the graph data to the new builder.

<img width="1512" height="982" alt="Screenshot 2025-10-11 at 10 26
07 AM"
src="https://github.com/user-attachments/assets/2f66eb52-77b2-424c-86db-559ea201b44d"
/>


### Changes
- Added `get_specific_blocks` route in `routes.py`.
- Created `get_block_by_id` function in `db.py`.
- Add a new hook `useFlow.ts` to load the graph and populate it in the
flow editor.
- Updated frontend components to reflect changes in block handling.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to load the graph correctly.
  - [x] Able to populate it on the builder.
2025-10-11 15:28:37 +00:00
Reinier van der Leer
b67555391d fix(frontend/builder): Sync frontend node IDs with backend after save (#11075)
- Resolves #10980

Fixes unnecessary graph re-saving when no changes were made after
initial save. The issue occurred because frontend node IDs weren't
synced with backend IDs after save operations.

### Changes 🏗️

- Update actual node.id to match backend node ID after save
- Update edge references with new node IDs
- Properly sync visual editor state with backend

### Test Plan 📋

- [x] TypeScript compilation passes  
- [x] Pre-commit hooks pass
- [x] Manual test: Save graph, verify no re-save needed on subsequent
save/run
2025-10-11 01:12:19 +00:00
Zamil Majdy
05a72f4185 feat(backend): implement user rate limiting for concurrent graph executions (#11128)
## Summary
Add configurable rate limiting to prevent users from exceeding the
maximum number of concurrent graph executions, defaulting to 50 per
user.

## Changes Made

### Configuration (`backend/util/settings.py`)
- Add `max_concurrent_graph_executions_per_user` setting (default: 50,
range: 1-1000)
- Configurable via environment variables or settings file

### Database Query Function (`backend/data/execution.py`) 
- Add `get_graph_executions_count()` function for efficient count
queries
- Supports filtering by user_id, statuses, and time ranges
- Used to check current RUNNING/QUEUED executions per user

### Database Manager Integration (`backend/executor/database.py`)
- Expose `get_graph_executions_count` through DatabaseManager RPC
interface
- Follows existing patterns for database operations
- Enables proper service-to-service communication

### Rate Limiting Logic (`backend/executor/manager.py`)
- Inline rate limit check in `_handle_run_message()` before cluster lock
- Use existing `db_client` pattern for consistency
- Reject and requeue executions when limit exceeded
- Graceful error handling - proceed if rate limit check fails
- Enhanced logging with user_id and current/max execution counts

## Technical Implementation
- **Database approach**: Query actual execution statuses for accuracy
- **RPC pattern**: Use DatabaseManager client following existing
codebase patterns
- **Fail-safe design**: Proceed with execution if rate limit check fails
- **Requeue on limit**: Rejected executions are requeued for later
processing
- **Early rejection**: Check rate limit before expensive cluster lock
operations

## Rate Limiting Flow
1. Parse incoming graph execution request
2. Query database via RPC for user's current RUNNING/QUEUED execution
count
3. Compare against configured limit (default: 50)
4. If limit exceeded: reject and requeue message
5. If within limit: proceed with normal execution flow

## Configuration Example
```env
MAX_CONCURRENT_GRAPH_EXECUTIONS_PER_USER=25  # Reduce to 25 for stricter limits
```
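
A rough sketch of the check described in the flow above (names are illustrative except `get_graph_executions_count`, which the PR adds):

```python
ACTIVE_STATUSES = ["RUNNING", "QUEUED"]

def is_over_execution_limit(db_client, user_id: str, limit: int = 50) -> bool:
    """Return True when the user is at or over the concurrent-execution limit."""
    try:
        current = db_client.get_graph_executions_count(
            user_id=user_id, statuses=ACTIVE_STATUSES
        )
    except Exception:
        # Fail-safe design: if the rate-limit check itself fails,
        # proceed with the execution rather than blocking the user.
        return False
    return current >= limit
```

If this returns True, the message is rejected and requeued; otherwise execution proceeds as normal.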

## Test plan
- [x] Basic functionality tested - settings load correctly, database
function works
- [x] ExecutionManager imports and initializes without errors
- [x] Database manager exposes the new function through RPC
- [x] Code follows existing patterns and conventions
- [ ] Integration testing with actual rate limiting scenarios
- [ ] Performance testing to ensure minimal impact on execution pipeline

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 08:02:34 +07:00
Swifty
36f634c417 fix(backend): Update store agent view to return only latest version (#11065)
This PR fixes duplicate agent listings on the marketplace home page by
updating the StoreAgent view to return only the latest approved version
of each agent.

### Changes 🏗️

- Updated `StoreAgent` database view to filter for only the latest
approved version per listing
- Added CTE (Common Table Expression) `latest_versions` to efficiently
determine the maximum version for each store listing
- Modified the join logic to only include the latest version instead of
all approved versions
- Updated `versions` array field to contain only the single latest
version

**Technical details:**
- The view now uses a `latest_versions` CTE that groups by
`storeListingId` and finds `MAX(version)` for approved submissions
- Join condition ensures only the latest version is included:
`slv.version = lv.latest_version`
- This prevents duplicate entries for agents with multiple approved
versions

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified marketplace home page shows no duplicate agents
- [x] Confirmed only latest version is displayed for agents with
multiple approved versions
  - [x] Checked that agent details page still functions correctly
  - [x] Validated that run counts and ratings are still accurate

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-10-10 09:31:36 +00:00
Swifty
18e169aa51 feat(platform): Log Marketplace Search Terms (#11092)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <Pwuts@users.noreply.github.com>
2025-10-10 11:33:28 +02:00
Swifty
c5b90f7b09 feat(platform): Simplify running of core docker services (#11113)
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
2025-10-10 11:32:46 +02:00
Ubbe
a446c1acc9 fix(frontend): improve navbar on mobile (#11137)
## Changes 🏗️

Make the navigation bar look nice across screen sizes 📱 

<img width="1229" height="388" alt="Screenshot 2025-10-10 at 17 53 48"
src="https://github.com/user-attachments/assets/037a9957-9c0b-4e2c-9ef5-af198fdce923"
/>

<img width="700" height="392" alt="Screenshot 2025-10-10 at 17 53 42"
src="https://github.com/user-attachments/assets/bf9a0f83-a528-4613-83e7-6e204078b905"
/>

<img width="500" height="377" alt="Screenshot 2025-10-10 at 17 52 24"
src="https://github.com/user-attachments/assets/2209d4f3-a41a-4700-894b-5e6e7c15fefb"
/>

<img width="300" height="381" alt="Screenshot 2025-10-10 at 17 52 16"
src="https://github.com/user-attachments/assets/1c87d545-784e-47b5-b23c-6f37cfae489b"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to the platform and resize the window down
- [x] The navbar looks good across screen sizes and everything is
aligned and accessible

### For configuration changes:

None
2025-10-10 09:10:24 +00:00
Ubbe
59d242f69c fix(frontend): remove agent activity flag (#11136)
## Changes 🏗️

The Agent Activity Dropdown is now stable: it has been 100% exposed to
users in production for a few weeks, so there is no need to keep it
behind a flag anymore.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to AutoGPT
- [x] The bell on the navbar is always present even if the flag on
Launch Darkly is turned off

### For configuration changes:

None
2025-10-10 09:08:42 +00:00
Abhimanyu Yadav
a2cd5d9c1f feat(frontend): add support for user password credentials in new FlowEditor (#11122)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107

In this PR, I’ve added a way to add a username and password as
credentials in the new builder.


https://github.com/user-attachments/assets/b896ea62-6a6d-487c-99a3-727cef4ad9a5

### Changes 🏗️
- Introduced PasswordCredentialsModal to handle user password
credentials.
- Updated useCredentialField to support user password type.
- Refactored APIKeyCredentialsModal to remove unnecessary onSuccess
prop.
- Enhanced the CredentialsField component to conditionally render the
new password modal based on supported credential types.

### Checklist 📋

#### For code changes:
- [x] Ability to add username and password correctly.
- [x] The username and password are visible in the credentials list
after adding it.
2025-10-10 07:15:21 +00:00
Abhimanyu Yadav
df5b348676 feat(frontend): add search functionality in new block menu (#11121)
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11120

In this PR, I’ve added search functionality to the new block menu with
pagination.



https://github.com/user-attachments/assets/4c199997-4b5a-43c7-83b6-66abb1feb915



### Changes 🏗️
- Add a frontend for the search list with pagination functionality.
- Updated the search route to use GET method.
- Removed the SearchRequest model and replaced it with individual query
parameters.

### Checklist 📋

#### For code changes:
- [x] The search functionality is working perfectly.
- [x] If the search query doesn’t exist, it correctly displays a “No
Result” UI.
2025-10-09 12:28:12 +00:00
Bently
4856bd1f3a fix(backend): prevent sub-agent execution visibility across users (#11132)
Fixes an issue where sub-agent executions triggered by one user were
visible in the original agent author's execution library.

## Solution

Fixed the user_id attribution in
`autogpt_platform/backend/backend/executor/manager.py` by ensuring that
sub-agent executions always use the actual executor's user_id rather
than the agent author's user_id stored in node defaults.

### Changes
- Added user_id override in `execute_node()` function when preparing
AgentExecutorBlock input (line 194)
- Ensures sub-agent executions are correctly attributed to the user
running them, not the agent author
- Maintains proper privacy isolation between users in marketplace agent
scenarios
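
A hypothetical sketch of the fix (names are illustrative and do not mirror `manager.py` exactly):

```python
def prepare_agent_executor_input(
    node_default_input: dict, provided_input: dict, executing_user_id: str
) -> dict:
    """Build the AgentExecutorBlock input with correct user attribution."""
    input_data = {**node_default_input, **provided_input}
    # Always attribute the sub-agent run to the user executing the graph,
    # never to the agent author's user_id stored in the node defaults.
    input_data["user_id"] = executing_user_id
    return input_data
```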

### Security Impact
- **Before**: When User B downloaded and ran a marketplace agent
containing sub-agents owned by User A, the sub-agent executions appeared
in User A's library
- **After**: Sub-agent executions now only appear in the library of the
user who actually ran them
- Prevents unauthorized access to execution data and user privacy
violations

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Test plan: -->
  - [x] Create an agent with sub-agents as User A
  - [x] Publish agent to marketplace
  - [x] Run the agent as User B
- [x] Verify User A cannot see User B's sub-agent executions in their
library
  - [x] Verify User B can see their own sub-agent executions
  - [x] Verify primary agent executions remain correctly filtered
2025-10-09 11:17:26 +00:00
Abhimanyu Yadav
2e1d3dd185 refactor(frontend): replace context api in new block menu with zustand store (#11120)
Currently, we use the Context API for the block menu provider. To access
some of its state outside the BlockMenuProvider wrapper (for instance,
in the tutorial), we would need to move the wrapper higher up the tree,
perhaps to the top of the builder tree, which would cause unnecessary
re-renders. Therefore, we should create a block menu zustand store so
that we can easily access the state in other parts of the builder.

### Changes 🏗️
- Deleted `block-menu-provider.tsx` file.
- Updated BlockMenu, BlockMenuContent, BlockMenuDefaultContent, and
other components to utilize blockMenuStore instead of
BlockMenuStateProvider.
- Adjusted imports and context usage accordingly.

### Checklist 📋
- [x] Changes have been clearly listed.
- [x] Code has been tested and verified.
- [x] I’ve checked every part of the block menu where we used the
context API and it’s working perfectly.
- [x] Ability to use block menu state in other parts of the builder.
2025-10-09 11:04:42 +00:00
Abhimanyu Yadav
ff72343035 feat(frontend): add UI for sticky notes on new builder (#11123)
Currently, the new builder doesn’t support sticky notes; we render them
as normal nodes with an input. This PR adds a dedicated UI for them.

<img width="1512" height="982" alt="Screenshot 2025-10-08 at 4 12 58 PM"
src="https://github.com/user-attachments/assets/be716e45-71c6-4cc4-81ba-97313426222f"
/>

To add sticky notes, go to the search menu of the block menu and search
for “Note block”. Then, add them from there.

### Changes 🏗️
- Updated CustomNodeData to include uiType.
- Conditional rendering in CustomNode based on uiType.
- Added a custom sticky note UI component called `StickyNoteBlock.tsx`.
- Adjusted FormCreator and FieldTemplate to pass and utilize uiType.
- Enhanced TextInputWidget to render differently based on uiType.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to attach sticky notes to the builder.
- [x] Able to accurately capture data while writing on sticky notes and
data is persistent also
2025-10-09 06:48:19 +00:00
Abhimanyu Yadav
7982c34450 feat(frontend): add oauth2 credential support in new builder (#11107)
In this PR, I have added support for OAuth2 in the new builder.


https://github.com/user-attachments/assets/89472ebb-8ec2-467a-9824-79a80a71af8a

### Changes 🏗️
- Updated the FlowEditor to support OAuth2 credential selection.
- Improved the UI for API key and OAuth2 modals, enhancing user
experience.
- Refactored credential field components for better modularity and
maintainability.
- Updated OpenAPI documentation to reflect changes in OAuth flow
endpoints.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to create OAuth credentials
  - [x] OAuth2 is correctly selected using the Credential Selector.
2025-10-09 06:47:15 +00:00
Zamil Majdy
59c27fe248 feat(backend): implement comprehensive rate-limited Discord alerting system (#11106)
## Summary
Implement comprehensive Discord alerting system with intelligent rate
limiting to prevent spam and provide proper visibility into system
failures across retry mechanisms and execution errors.

## Key Features

### 🚨 Rate-Limited Discord Alerting Infrastructure
- **Reusable rate-limited alerts**: `send_rate_limited_discord_alert()`
function for any Discord alerts
- **5-minute rate limiting**: Prevents spam for identical error
signatures (function+error+context)
- **Thread-safe**: Proper locking for concurrent alert attempts
- **Configurable channels**: Support custom Discord channels or default
to PLATFORM
- **Graceful failure handling**: Alert failures don't break main
application flow

### 🔄 Enhanced Retry Alert System
- **Unified threshold alerting**: Both general retries and
infrastructure retries alert at EXCESSIVE_RETRY_THRESHOLD (50 attempts)
- **Critical retry alerts**: Early warning when operations approach
failure threshold
- **Infrastructure monitoring**: Dedicated alerts for database, Redis,
RabbitMQ connection issues
- **Rate limited**: All retry alerts use rate limiting to prevent
overwhelming Discord channels

### 📊 Unknown Execution Error Alerts
- **Automatic error detection**: Alert for unexpected graph execution
failures
- **Rich context**: Include user ID, graph ID, execution ID, error type
and message
- **Filtered alerts**: Skip known errors (InsufficientBalanceError,
ModerationError)
- **Proper error tracking**: Ensure execution_stats.error is set for all
error types

## Technical Implementation

### Rate Limiting Strategy
```python
# Create unique signatures based on function+error+context
error_signature = f"{context}:{func_name}:{type(exception).__name__}:{str(exception)[:100]}"
```
- **5-minute windows**: ALERT_RATE_LIMIT_SECONDS = 300 prevents
duplicate alerts
- **Memory efficient**: Only store last alert timestamp per unique error
signature
- **Context awareness**: Same error in different contexts can send
separate alerts

### Alerting Hierarchy
1. **50 attempts**: Critical alert warning about approaching failure
(EXCESSIVE_RETRY_THRESHOLD)
2. **100 attempts**: Final infrastructure failure (conn_retry max_retry)
3. **Unknown execution errors**: Immediate rate-limited alerts for
unexpected failures

## Files Modified

### Core Implementation
- `backend/executor/manager.py`: Unknown execution error alerts with
rate limiting
- `backend/util/retry.py`: Comprehensive rate-limited alerting
infrastructure
- `backend/util/retry_test.py`: Full test coverage for rate limiting
functionality (14 tests)

### Code Quality Improvements
- **Inlined alert messages**: Eliminated unnecessary temporary variables
- **Simplified logic**: Removed excessive comments and redundant alerts
- **Consistent patterns**: All alert functions follow same clean code
style
- **DRY principle**: Reusable rate-limited alert system for future
monitoring needs

## Benefits

### 🛡️ Prevents Alert Spam
- **Rate limiting**: No more overwhelming Discord channels with
duplicate alerts
- **Intelligent deduplication**: Same errors rate limited while
different errors get through
- **Thread safety**: Concurrent operations handled correctly

### 🔍 Better System Visibility  
- **Unknown errors**: Issues that need investigation are properly
surfaced
- **Infrastructure monitoring**: Early warning for
database/Redis/RabbitMQ issues
- **Rich context**: All necessary debugging information included in
alerts

### 🧹 Maintainable Codebase
- **Reusable infrastructure**: `send_rate_limited_discord_alert()` for
future monitoring
- **Clean, consistent code**: Inlined messages, simplified logic, proper
abstractions
- **Comprehensive testing**: Rate limiting edge cases and real-world
scenarios covered

## Validation Results
- All 14 retry tests pass, including comprehensive rate limiting coverage
- Manager execution tests pass, validating integration with the execution flow
- Thread safety validated with concurrent alert attempt tests
- Real-world scenarios tested, including the specific spend_credits spam issue that motivated this work
- Code formatting, linting, and type checking all pass

## Before/After Comparison

### Before
- No rate limiting → Discord spam for repeated errors
- Unknown execution errors not monitored → Issues went unnoticed  
- Inconsistent alerting thresholds → Confusing monitoring
- Verbose code with temporary variables → Harder to maintain

### After
- Rate-limited intelligent alerting prevents spam
- Unknown execution errors properly monitored with context
- Unified 50-attempt threshold for consistent monitoring
- Clean, maintainable code with reusable infrastructure

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-09 08:22:15 +07:00
Zamil Majdy
c7575dc579 fix(backend): implement rate limiting for critical retry alerts to prevent spam (#11127)
## Summary
Fix the critical issue where retry failure alerts were being spammed
when service communication failed repeatedly.

## Problem
The service communication retry mechanism was sending a critical Discord
alert every time it hit the 50-attempt threshold, with no rate limiting.
This caused alert spam when the same operation (like spend_credits) kept
failing repeatedly with the same error.

## Solution

### Rate Limiting Implementation
- Add ALERT_RATE_LIMIT_SECONDS = 300 (5 minutes) between identical
alerts
- Create _should_send_alert() function with thread-safe rate limiting
using _alert_rate_limiter dict
- Generate unique signatures based on
context:func_name:exception_type:exception_message
- Only send alert if sufficient time has passed since last identical
alert
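
A minimal sketch of the described mechanism, assuming the module-level state named in this PR (the real implementation may differ in detail):

```python
import threading
import time

ALERT_RATE_LIMIT_SECONDS = 300  # 5 minutes between identical alerts

_alert_rate_limiter: dict[str, float] = {}
_rate_limiter_lock = threading.Lock()

def _should_send_alert(signature: str) -> bool:
    """Return True if no identical alert was sent within the rate window."""
    now = time.time()
    with _rate_limiter_lock:
        last_sent = _alert_rate_limiter.get(signature)
        if last_sent is not None and now - last_sent < ALERT_RATE_LIMIT_SECONDS:
            return False  # rate limited; the caller logs a warning instead
        _alert_rate_limiter[signature] = now
        return True
```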

### Enhanced Logging  
- Rate-limited alerts log as warnings instead of being silently dropped
- Add full exception tracebacks for final failures and every 10th retry
attempt
- Improve alert message clarity and add note about rate limiting
- Better structured logging with exception types and details

### Error Context Preservation
- Maintain all original retry behavior and exception handling
- Preserve critical alerting for genuinely new issues  
- Clean up alert message (removed accidental paste from error logs)

## Technical Details
- Thread-safe implementation using threading.Lock() for rate limiter
access
- Signature includes first 100 chars of exception message for
granularity
- Memory efficient - only stores last alert timestamp per unique error
type
- No breaking changes to existing retry functionality

## Impact
- **Eliminates alert spam**: Same failing operation only alerts once per
5 minutes
- **Preserves critical alerts**: New/different failures still trigger
immediate alerts
- **Better debugging**: Enhanced logging provides full exception context
- **Maintains reliability**: All retry logic works exactly as before

## Testing
- Rate limiting tested with multiple scenarios
- Import compatibility verified
- No regressions in retry functionality
- Alert generation works for new vs repeated errors

## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

## How Has This Been Tested?
- Manual testing of rate limiting functionality with different error
scenarios
- Import verification to ensure no regressions
- Code formatting and linting compliance

## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A -
internal utility)
- [x] My changes generate no new warnings
- [x] Any dependent changes have been merged and published in downstream
modules (N/A)
2025-10-09 05:53:10 +07:00
Ubbe
73603a8ce5 fix(frontend): onboarding re-directs (#11126)
## Changes 🏗️

We weren't awaiting the onboarding-enabled check, and we were also
redirecting to a wrong onboarding URL.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new user
  - [x] Re-directs well to onboarding
  - [x] Complete up to Step 5 and logout
  - [x] Login again
  - [x] You are on Step 5  

#### For configuration changes:

None
2025-10-08 15:18:25 +00:00
Ubbe
e562ca37aa fix(frontend): login redirects + onboarding (#11125)
## Changes 🏗️

### Fix re-direct bugs

Sometimes the app will re-direct to a strange URL after login:
```
http://localhost:3000/marketplace,%20/marketplace
```
It looks like a race condition, because the redirect to `/marketplace`
was done in a [server
action](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
rather than in the browser.

**Fixed by**

Moving the login / signup server actions to Next.js API endpoints. In
this way the login/signup pages just call an API endpoint and handle its
response without having to hassle with serverless 💆🏽

### Wallet layout flash

<img width="800" height="744" alt="Screenshot 2025-10-08 at 22 52 03"
src="https://github.com/user-attachments/assets/7cb85fd5-7dc4-4870-b4e1-173cc8148e51"
/>

The wallet popover would sometimes flash after login, because it was
re-rendering once onboarding and credits data loaded.

**Fixed by**

Only rendering once we have onboarding and credits data; without that
data the popover is useless and causes flashes.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login / Signup to the app with email and Google
  - [x] Works fine
  - [x] Onboarding popover does not flash
  - [x] Onboarding and marketplace re-directs work   

### For configuration changes:

None
2025-10-08 18:35:45 +04:00
Nicholas Tindle
f906fd9298 fix(backend): Allow Project.content to be optional for linear search projects (#11118)
Changed the type of the 'content' field in the Project model to accept
None, making it optional instead of required. Linear doesn't always
return data here if it's not set by the user.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- Makes the content optional
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manually test it works with our data


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved handling of projects with no content by making content
optional.
- Prevents validation errors during project creation, import, or sync
when content is missing.
- Enhances compatibility with integrations that may omit content fields.
- No impact on existing projects with content; behavior remains
unchanged.
  - No user action required.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-07 20:04:37 +00:00
seer-by-sentry[bot]
9e79add436 fix(backend): Change progress type to float in Linear Project (#11117)
### Changes 🏗️

- Changed the type of the `progress` field in the `LinearTask` model
from `int` to `float` to fix
[BUILDER-3V5](https://sentry.io/organizations/significant-gravitas/issues/6929150079/).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Root cause analysis confirms fix -- testing will need to occur in
dev environment before release to prod but this should merge now


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- New Features
- Progress indicators now support decimal values, allowing more precise
tracking (e.g., 42.5% instead of 42%). This enables finer-grained
updates in the interface and any integrations consuming progress data.
- Users may notice smoother progress changes during long-running tasks,
with improved accuracy in percentage displays across relevant views and
APIs.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-07 17:59:26 +00:00
Nicholas Tindle
de6f4fca23 Merge branch 'master' into dev 2025-10-07 11:13:38 -05:00
Nicholas Tindle
fb4b8ed9fc feat: track users with sentry on client side (not backend yet) (#11077)
<!-- Clearly explain the need for these changes: -->
We need to be able to measure impact by user count, which means we need
to track users.
### Changes 🏗️
- Attaches user id to frontend actions (which hopefully propagate to the
backend in some places)
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test login -> shows on sentry
  - [x] Test logout -> no longer shows on sentry
2025-10-07 15:35:57 +00:00
Zamil Majdy
f3900127d7 feat(backend): instrument prometheus for internal services (#11114)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

Instrument Prometheus for internal services

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing tests
2025-10-07 21:34:38 +07:00
Abhimanyu Yadav
7c47f54e25 feat(frontend): add an API key modal for adding credentials in new builder. (#11105)
In this PR, I’ve added an API Key modal to the new builder so users can
add API key credentials.


https://github.com/user-attachments/assets/68da226c-3787-4950-abb0-7a715910355e

### Changes
- Updated the credential field to support API key.
- Added a modal for creating new API keys and improved the selection UI
for credentials.
- Refactored components for better modularity and maintainability.
- Enhanced styling and user experience in the FlowEditor components.
- Updated OpenAPI documentation for better clarity on credential
operations.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to create API key perfectly.
  - [x] can select the correct credentials.
2025-10-07 11:19:17 +00:00
Lluis Agusti
927042d93e fix(frontend): more turnstile experiments (2) 2025-10-07 00:40:49 +09:00
Lluis Agusti
4244979a45 fix(frontend): more turnstile experiments 2025-10-07 00:22:20 +09:00
Lluis Agusti
aa27365e7f fix(frontend): fix captcha reset 2025-10-06 23:57:42 +09:00
Nicholas Tindle
b86aa8b14e feat(frontend): launchdarkly tracking on frontend browser (#11076)
<!-- Clearly explain the need for these changes: -->
We struggle to identify where issues are coming from feature flags and
which are from normal use. This adds that split on the frontend.

### Changes 🏗️
Include sentry in the LD initialization
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test that launch darkly flags get attached to the frontend
(browser only)
2025-10-06 13:48:13 +00:00
Lluis Agusti
e7ab2626f5 fix(frontend): remove captcha ref reset 2025-10-06 22:34:08 +09:00
Ubbe
ff58ce174b fix(frontend): possible login issues related to turnstile (#11094)
## Changes 🏗️

We are seeing login and authentication issues in production and staging.
Locally though, the app behaves fine. We also had issues related to the
CAPTCHA in the past.

Our CAPTCHA code is less than ideal, with some heavy `useEffect` hooks
that load the Turnstile script into the DOM. I have the impression that
it is loading the script multiple times (due to effect dependency arrays
not being set correctly), or something similar is causing the associated
login issues.

Created a new Turnstile component using
[`react-turnstile`](https://docs.page/marsidev/react-turnstile) that is
way simpler and should hopefully be more stable.

I also fixed an issue with the Credits popover layout rendering cropped
on the window.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login/logout on the app multiple times with Turnstile ON,
everything is stable
  - [x] Credits popover appears on the right place 

### For configuration changes:

None
2025-10-06 12:59:27 +00:00
Abhimanyu Yadav
2d8ab6b7c0 feat(frontend): add selecting UI for custom node in new builder (#11091)
React Flow has built-in functionality to select multiple nodes using
`cmd` + click. You can also use rectangle selection by holding the shift
key. However, we need a custom design for a node once it’s selected.

<img width="845" height="510" alt="Screenshot 2025-10-06 at 12 41 16 PM"
src="https://github.com/user-attachments/assets/c91f22e3-2211-46b6-b3d3-fbc89148e99a"
/>

### Tests
- [x] Selection UI is visible after selecting a node, using cmd + click,
and after rectangle selection.
2025-10-06 12:53:59 +00:00
Abhimanyu Yadav
a7306970b8 refactor(frontend): simplify marketplace search page and update data fetching (#11061)
This PR refactors the marketplace search page to improve code
maintainability, readability, and follows modern React patterns by
extracting complex logic into a custom hook and creating dedicated
components.

### 🔄 Changes

#### **Architecture Improvements**
- **Component Extraction**: Replaced the monolithic `SearchResults`
component with a cleaner `MainSearchResultPage` component that focuses
solely on presentation
- **Custom Hook Pattern**: Extracted all business logic and state
management into `useMainSearchResultPage` hook for better separation of
concerns
- **Loading State Component**: Added dedicated
`MainSearchResultPageLoading` component for consistent loading UI

#### **Code Simplification**
- **Reduced search page to 19 lines** (from 175 lines) by removing
inline logic and state management
- **Centralized data fetching** using auto-generated API endpoints
(`useGetV2ListStoreAgents`, `useGetV2ListStoreCreators`)
- **Improved error handling** with dedicated error states and loading
states

#### **Feature Updates**
- **Sort Options**: Commented out "Most Recent" and "Highest Rated" sort
options due to backend limitations (no date/rating data available)
- **Client-side Sorting**: Implemented client-side sorting for "runs"
and "rating" as a temporary solution
- **Search Filters**: Maintained filter functionality for
agents/creators with improved state management

### 📊 Impact

- **Better Developer Experience**: Code is now more modular and easier
to understand
- **Improved Maintainability**: Business logic separated from
presentation layer
- **Future-Ready**: Structure prepared for backend improvements when
date/rating data becomes available
- **Type Safety**: Leveraging TypeScript with auto-generated API types

### 🧪 Testing Checklist

- [x] Search functionality works correctly with various search terms
- [x] Filter chips correctly toggle between "All", "Agents", and
"Creators"
- [x] Sort dropdown displays only "Most Runs" option
- [x] Client-side sorting correctly sorts agents and creators by runs
- [x] Loading state displays while fetching data
- [x] Error state displays when API calls fail
- [x] "No results found" message appears for empty searches
- [x] Search bar in results page is functional
- [x] Results display correctly with proper layout and styling
2025-10-06 12:53:45 +00:00
Abhimanyu Yadav
c42f94ce2a feat(frontend): add new credential field for new builder (#11066)
In this PR, I’ve added a feature to select a credential from a list and
also provided a UI to create a new credential if desired.

<img width="443" height="157" alt="Screenshot 2025-10-06 at 9 28 07 AM"
src="https://github.com/user-attachments/assets/d9e72a14-255d-45b6-aa61-b55c2465dd7e"
/>

#### Frontend Changes:
- **Refactored credential field** from a single component to a modular
architecture:
  - Created `CredentialField/` directory with separated concerns
- Added `SelectCredential.tsx` component for credential selection UI
with provider details display
- Implemented `useCredentialField.ts` custom hook for credential data
fetching with 10-minute caching
- Added `helpers.ts` with credential filtering and provider name
formatting utilities
  - Added loading states with skeleton UI while fetching credentials

- **Enhanced UI/UX features**:
- Dropdown selector showing credentials with provider, title, username,
and host details
  - Visual key icon for each credential option
  - Placeholder "Add API Key" button (implementation pending)
  - Loading skeleton UI for better perceived performance
  - Smart filtering of credentials based on provider requirements

- **Template improvements**:
- Updated `FieldTemplate.tsx` to properly handle credential field
display
- Special handling for credential field labels showing provider-specific
names
  - Removed input handle for credential fields in the node editor

#### Backend Changes:
- **API Documentation improvements**:
- Added OpenAPI summaries to `/credentials` endpoint ("List
Credentials")
- Added summary to `/{provider}/credentials/{cred_id}` endpoint ("Get
Specific Credential By ID")

### Test Plan 📋

   - [x] Navigate to the flow builder
   - [x] Add a block that requires credentials (e.g., API block)
- [x] Verify the credential dropdown loads and displays available
credentials
- [x] Check that only credentials matching the provider requirements are
shown
2025-10-06 12:52:45 +00:00
Zamil Majdy
4e1557e498 fix(backend): Add dynamic input pin support for Smart Decision Maker Block (#11082)
## Summary

- Centralize dynamic field delimiters and helpers in
backend/data/dynamic_fields.py.
- Refactor SmartDecisionMaker: build function signatures with
dynamic-field mapping and re-map tool outputs back to original dynamic
names.
- Deterministic retry loop with retry-only feedback to avoid polluting
final conversation history.
- Update executor/utils.py and data/graph.py to use centralized
utilities.
- Update and extend tests: dynamic-field E2E flow, mapping verification,
output yielding, and retry validation; switch mocked llm_call to
AsyncMock; align tool-name expectations.
- Add a single-tool fallback in schema lookup to support mocked
scenarios.

## Validation

- Full backend test suite: 1125 passed, 88 skipped, 53 warnings (local).
- Backend lint/format pass.

## Scope

- Minimal and localized to SmartDecisionMaker and dynamic-field
utilities; unrelated pyright warnings remain unchanged.

## Risks/Mitigations

- Behavior is backward-compatible; dynamic-field constants are
centralized and reused.
- Output re-mapping only affects SmartDecisionMaker tool outputs and
matches existing link naming conventions.

## Checklist

- [x] Formatted and linted
- [x] All updated tests pass locally
- [x] No secrets introduced

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-04 14:23:13 +00:00
seer-by-sentry[bot]
7f8cf36ceb feat(frontend): Add description to Upload Agent dialog (#11053)
### Changes 🏗️

- Added a description to the Upload Agent dialog to provide more context
for users. Fixes
[BUILDER-3N1](https://sentry.io/organizations/significant-gravitas/issues/6915512912/).
The issue was that: DialogContent in LibraryUploadAgentDialog lacks an
accessible description, violating WAI-ARIA standards.

<img width="2066" height="1740" alt="image"
src="https://github.com/user-attachments/assets/c876fb33-4375-4a66-a6a2-6b13c00ef8d3"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test it works
  - [x] Get design approval

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-03 16:38:49 +00:00
Ubbe
0978566089 fix(frontend): performance and layout issues (#11036)
## Changes 🏗️

### Performance (Onboarding) 🐎 

- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
 
**Result:** faster initial render and smoother onboarding flows under
load.

### Layout and overflow fixes 📐 

- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.

**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.

### New Generic Error pages

- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
  - [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently

### For configuration changes:

None
2025-10-03 22:41:01 +09:00
Zamil Majdy
8b4eb6f87c fix(backend): resolve SmartDecisionMaker ChatCompletionMessage error and enhance tool call token counting (#11059)
## Summary
Fix two critical production issues affecting SmartDecisionMaker
functionality and prompt compression accuracy.

### 🔧 Changes Made

#### Issue 1: SmartDecisionMaker ChatCompletionMessage Error
**Problem**: PR #11015 introduced code that appended
`response.raw_response` (ChatCompletionMessage object) directly to
conversation history, causing `'ChatCompletionMessage' object has no
attribute 'get'` errors.

**Root Cause**: ChatCompletionMessage objects don't have `.get()` method
but conversation history processing expects dictionary objects with
`.get()` capability.

**Solution**: Created `_convert_raw_response_to_dict()` helper function
for type-safe conversion:
- **Helper function**: Safely converts raw_response to dictionary format for conversation history
- **Type safety**: Handles OpenAI (ChatCompletionMessage), Anthropic (Message), and Ollama (string) responses
- **Preserves context**: Maintains conversation flow for multi-turn tool calling scenarios
- **DRY principle**: Single helper used in both validation error path (line 624) and success path (line 681)
- **No breaking changes**: Tool call continuity preserved for complex workflows
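
A hedged sketch of the helper (the real implementation in `smart_decision_maker.py` may differ in detail):

```python
def _convert_raw_response_to_dict(raw_response) -> dict:
    """Convert a provider response object into a conversation-history dict."""
    if isinstance(raw_response, str):
        # Ollama returns plain text
        return {"role": "assistant", "content": raw_response}
    if hasattr(raw_response, "model_dump"):
        # OpenAI ChatCompletionMessage and Anthropic Message are pydantic
        # models, so they expose model_dump()
        return raw_response.model_dump()
    return dict(raw_response)  # already dict-like
```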

#### Issue 2: Tool Call Token Counting in Prompt Compression
**Problem**: `_msg_tokens()` function only counted tokens in 'content'
field, severely undercounting tool calls which store data in different
fields (tool_calls, function.arguments, etc.).

**Root Cause**: Tool calls have no 'content' to calculate length of,
causing massive token undercounting during prompt compression that could
lead to context overflow.

**Solution**: Enhanced `_msg_tokens()` to handle both OpenAI and
Anthropic tool call formats:
- **OpenAI format**: Count tokens in `tool_calls[].id`, `type`, `function.name`, `function.arguments`
- **Anthropic format**: Count tokens in `content[].tool_use` (`id`, `name`, `input`) and `content[].tool_result`
- **Backward compatibility**: Regular string content counted exactly as before
- **Comprehensive testing**: Added 11 unit tests in `prompt_test.py`
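
A rough sketch of the enhanced counting; `count_tokens()` is a crude stand-in for the real tokenizer call, and the field names follow the OpenAI and Anthropic tool-call shapes listed above:

```python
import json

def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # stand-in for a real tokenizer call

def _msg_tokens(msg: dict) -> int:
    tokens = 0
    content = msg.get("content")
    if isinstance(content, str):  # regular message: counted as before
        tokens += count_tokens(content)
    elif isinstance(content, list):  # Anthropic-style content blocks
        for block in content:
            if block.get("type") == "tool_use":
                tokens += count_tokens(block.get("id", ""))
                tokens += count_tokens(block.get("name", ""))
                tokens += count_tokens(json.dumps(block.get("input", {})))
            elif block.get("type") == "tool_result":
                tokens += count_tokens(json.dumps(block.get("content", "")))
    for call in msg.get("tool_calls") or []:  # OpenAI-style tool calls
        tokens += count_tokens(call.get("id", ""))
        tokens += count_tokens(call.get("type", ""))
        fn = call.get("function", {})
        tokens += count_tokens(fn.get("name", ""))
        tokens += count_tokens(fn.get("arguments", ""))
    return tokens
```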

### 📊 Validation Results
- **SmartDecisionMaker errors resolved**: No more ChatCompletionMessage.get() failures
- **Token counting accuracy**: OpenAI tool calls 9+ tokens vs previous 3-4 wrapper-only tokens
- **Token counting accuracy**: Anthropic tool calls 13+ tokens vs previous 3-4 wrapper-only tokens
- **Backward compatibility**: Regular messages maintain exact same token count
- **Type safety**: 0 type errors in both modified files
- **Test coverage**: All 11 new unit tests pass + existing SmartDecisionMaker tests pass
- **Multi-turn conversations**: Tool call workflows continue working correctly

### 🎯 Impact
- **Resolves Sentry issue OPEN-2750**: ChatCompletionMessage errors
eliminated
- **Prevents context overflow**: Accurate token counting during prompt
compression for long tool call conversations
- **Production stability**: SmartDecisionMaker retry mechanism works
correctly with proper conversation flow
- **Resource efficiency**: Better memory management through accurate
token accounting
- **Zero breaking changes**: Full backward compatibility maintained

### 🧪 Test Plan
- [x] Verified SmartDecisionMaker no longer crashes with
ChatCompletionMessage errors
- [x] Validated tool call token counting accuracy with comprehensive
unit tests (11 tests all pass)
- [x] Confirmed backward compatibility for regular message token
counting
- [x] Tested both OpenAI and Anthropic tool call formats
- [x] Verified type safety with pyright checks
- [x] Ensured conversation history flows correctly with helper function
- [x] Confirmed multi-turn tool calling scenarios work with preserved
context

### 📝 Files Modified
- `backend/blocks/smart_decision_maker.py` - Added
`_convert_raw_response_to_dict()` helper for safe conversion
- `backend/util/prompt.py` - Enhanced tool call token counting for
accurate prompt compression
- `backend/util/prompt_test.py` - Comprehensive unit tests for token
counting (11 tests)

### Ready for Review
Both fixes are critical for production stability and have been
thoroughly tested with zero breaking changes. The helper function
approach ensures type safety while preserving essential conversation
context for complex tool calling workflows.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-03 00:25:21 +00:00
Reinier van der Leer
4b7d17b9d2 refactor(blocks/code): Clean up & rename code execution blocks (#11019)
The code execution blocks' implementations are heavily duplicated and
their names aren't very clear.
E.g. the "InstantiationBlock" just shows up as "Instantiation" in the
block list.

I would've done this in #11017 but kept the refactoring separate for
easier reviewing.

### Changes 🏗️

- Rename "Code Execution" block to "Execute Code"
- Rename "Instantiation" block to "Instantiate Code Sandbox"
- Rename "Step Execution" block to "Execute Code Step"
- Deduplicate implementation of the three code execution blocks
- Add `dispose_sandbox` toggle to "Execute Code" and "Execute Code Step"
blocks
- Note: it's default `True` on the Execute Code block, default `False`
on the Execute Code Step block
- Update block and input descriptions to clarify behavior
- Fix all linting issues

<details>
<summary>Screenshots</summary>

![the three blocks as they look
now](https://github.com/user-attachments/assets/8e4274f7-e006-440c-b2b8-980df546186d)
![updated block names and descriptions in the block
list](https://github.com/user-attachments/assets/866c3d9e-13ea-4fc0-87de-a5257bafb6d4)
![the new dispose_sandbox toggle on the Execute Code
block](https://github.com/user-attachments/assets/56815dbb-f313-4308-81dd-50d949d9eafb)
![the new dispose_sandbox toggle on the Execute Code Step
block](https://github.com/user-attachments/assets/469c140c-4cd2-4210-97b2-f27fc91778de)

</details>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all code execution blocks manually
2025-10-02 22:50:49 +00:00
dependabot[bot]
0fc6a44389 chore(backend/deps-dev): Bump the development-dependencies group across 1 directory with 4 updates (#10946)
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/backend directory:
[faker](https://github.com/joke2k/faker),
[pyright](https://github.com/RobertCraigie/pyright-python),
[pytest-mock](https://github.com/pytest-dev/pytest-mock) and
[ruff](https://github.com/astral-sh/ruff).

Updates `faker` from 37.6.0 to 37.8.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v37.8.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.8.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.7.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.7.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.7.0...v37.8.0">v37.8.0
- 2025-09-15</a></h3>
<ul>
<li>Add Automotive providers for <code>ja_JP</code> locale. Thanks <a
href="https://github.com/ItoRino424"><code>@​ItoRino424</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.7.0">v37.7.0
- 2025-09-15</a></h3>
<ul>
<li>Add Nigerian name locales (<code>yo_NG</code>, <code>ha_NG</code>,
<code>ig_NG</code>, <code>en_NG</code>). Thanks <a
href="https://github.com/ifeoluwaoladeji"><code>@​ifeoluwaoladeji</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4bde8f57ad"><code>4bde8f5</code></a>
Bump version: 37.7.0 → 37.8.0</li>
<li><a
href="f542f364cb"><code>f542f36</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="e28d7cb909"><code>e28d7cb</code></a>
fix test</li>
<li><a
href="e4305b0e29"><code>e4305b0</code></a>
fix padding</li>
<li><a
href="a359441a81"><code>a359441</code></a>
💄 format code</li>
<li><a
href="0e3f0bdf81"><code>0e3f0bd</code></a>
Add Automotive providers for <code>ja_JP</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2251">#2251</a>)</li>
<li><a
href="d4fa69dfc7"><code>d4fa69d</code></a>
Bump version: 37.6.0 → 37.7.0</li>
<li><a
href="f636f06a37"><code>f636f06</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="9a482dd25b"><code>9a482dd</code></a>
💄 Format code</li>
<li><a
href="2493b2d51a"><code>2493b2d</code></a>
fix: fix minor grammar typo (<a
href="https://redirect.github.com/joke2k/faker/issues/2259">#2259</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.8.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.404 to 1.1.405
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e211ec8df8"><code>e211ec8</code></a>
Pyright NPM Package update to 1.1.405 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/353">#353</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.404...v1.1.405">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-mock` from 3.14.1 to 3.15.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/releases">pytest-mock's
releases</a>.</em></p>
<blockquote>
<h2>v3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/529">#529</a>:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>v3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/pull/524">#524</a>:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/blob/main/CHANGELOG.rst">pytest-mock's
changelog</a>.</em></p>
<blockquote>
<h2>3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><code>[#529](https://github.com/pytest-dev/pytest-mock/issues/529)
&lt;https://github.com/pytest-dev/pytest-mock/issues/529&gt;</code>_:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><code>[#524](https://github.com/pytest-dev/pytest-mock/issues/524)
&lt;https://github.com/pytest-dev/pytest-mock/pull/524&gt;</code>_:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e1b5c62a38"><code>e1b5c62</code></a>
Release 3.15.1</li>
<li><a
href="184eb190d6"><code>184eb19</code></a>
Set <code>spy_return_iter</code> only when explicitly requested (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/537">#537</a>)</li>
<li><a
href="4fa0088a0a"><code>4fa0088</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/536">#536</a>)</li>
<li><a
href="f5aff33ce7"><code>f5aff33</code></a>
Fix test failure with pytest 8+ and verbose mode (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/535">#535</a>)</li>
<li><a
href="adc41873c9"><code>adc4187</code></a>
Bump actions/setup-python from 5 to 6 in the github-actions group (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/533">#533</a>)</li>
<li><a
href="95ad570060"><code>95ad570</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/532">#532</a>)</li>
<li><a
href="e696bf02c1"><code>e696bf0</code></a>
Fix standalone mock support (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/531">#531</a>)</li>
<li><a
href="5b29b03ce9"><code>5b29b03</code></a>
Fix gen-release-notes script</li>
<li><a
href="7d22ef4e56"><code>7d22ef4</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/528">#528</a>
from pytest-dev/release-3.15.0</li>
<li><a
href="90b29f89e2"><code>90b29f8</code></a>
Update CHANGELOG for 3.15.0</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-mock/compare/v3.14.1...v3.15.1">compare
view</a></li>
</ul>
</details>
<br />
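
A hedged usage sketch of the `mocker.spy` iterator behavior described in the
release notes above (assumes pytest-mock >= 3.15.1 and the standard `mocker`
fixture):

```python
class Source:
    def numbers(self):
        return iter([1, 2, 3])

def test_spy_iterator(mocker):
    src = Source()
    # As of 3.15.1, duplicate_iterators=True must be passed explicitly
    # for the spy to keep a copy of a returned iterator.
    spy = mocker.spy(src, "numbers", duplicate_iterators=True)
    assert list(src.numbers()) == [1, 2, 3]        # caller consumes its copy
    assert list(spy.spy_return_iter) == [1, 2, 3]  # the spy kept a duplicate
```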

Updates `ruff` from 0.12.11 to 0.13.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<h2>Release Notes</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move more imports into <code>TYPE_CHECKING</code> blocks
(<code>TC001</code>, <code>TC002</code>, and <code>TC003</code>), use
PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and unquote more annotations
(<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local directories with the same name as a third-party dependency, for
example. See the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name or prefix. As noted below, the two remaining deprecated rules were
also removed in this release, so this won't affect any current rules,
but it will still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a> (usually resolving to
<code>~/.config/ruff/ruff.toml</code>), like on Linux. The fallback and
accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow-dag-no-schedule-argument"><code>airflow-dag-no-schedule-argument</code></a>
(<code>AIR002</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-removal"><code>airflow3-removal</code></a>
(<code>AIR301</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-moved-to-provider"><code>airflow3-moved-to-provider</code></a>
(<code>AIR302</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-update"><code>airflow3-suggested-update</code></a>
(<code>AIR311</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-to-move-to-provider"><code>airflow3-suggested-to-move-to-provider</code></a>
(<code>AIR312</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/long-sleep-not-forever"><code>long-sleep-not-forever</code></a>
(<code>ASYNC116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/f-string-number-format"><code>f-string-number-format</code></a>
(<code>FURB116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/os-symlink"><code>os-symlink</code></a>
(<code>PTH211</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/generic-not-last-base-class"><code>generic-not-last-base-class</code></a>
(<code>PYI059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/redundant-none-literal"><code>redundant-none-literal</code></a>
(<code>PYI061</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/pytest-raises-ambiguous-pattern"><code>pytest-raises-ambiguous-pattern</code></a>
(<code>RUF043</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unused-unpacked-variable"><code>unused-unpacked-variable</code></a>
(<code>RUF059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/useless-class-metaclass-type"><code>useless-class-metaclass-type</code></a>
(<code>UP050</code>)</li>
</ul>
<p>The following behaviors have been stabilized:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration
guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move
more imports into <code>TYPE_CHECKING</code> blocks (<code>TC001</code>,
<code>TC002</code>, and <code>TC003</code>),
use PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and
unquote more annotations (<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local
directories with the same name as a third-party dependency, for example.
See
the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name
or prefix. As noted below, the two remaining deprecated rules were also
removed in this release, so this won't affect any current rules, but it
will
still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was
deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a>
(usually resolving to <code>~/.config/ruff/ruff.toml</code>), like on
Linux. The
fallback and accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a1fdd66f10"><code>a1fdd66</code></a>
Bump 0.13.0 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20336">#20336</a>)</li>
<li><a
href="8770b95509"><code>8770b95</code></a>
[ty] introduce <code>DivergentType</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20312">#20312</a>)</li>
<li><a
href="65982a1e14"><code>65982a1</code></a>
[ty] Use 'unknown' specialization for upper bound on Self (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20325">#20325</a>)</li>
<li><a
href="57d1f7132d"><code>57d1f71</code></a>
[ty] Simplify unions of enum literals and subtypes thereof (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20324">#20324</a>)</li>
<li><a
href="7a75702237"><code>7a75702</code></a>
Ignore deprecated rules unless selected by exact code (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20167">#20167</a>)</li>
<li><a
href="9ca632c84f"><code>9ca632c</code></a>
Stabilize adding future import via config option (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20277">#20277</a>)</li>
<li><a
href="64fe7d30a3"><code>64fe7d3</code></a>
[<code>flake8-errmsg</code>] Stabilize extending
<code>raw-string-in-exception</code> (<code>EM101</code>) to ...</li>
<li><a
href="beeeb8d5c5"><code>beeeb8d</code></a>
Stabilize the remaining Airflow rules (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20250">#20250</a>)</li>
<li><a
href="b6fca52855"><code>b6fca52</code></a>
[<code>flake8-bugbear</code>] Stabilize support for non-context-manager
calls in `assert...</li>
<li><a
href="ac7f882c78"><code>ac7f882</code></a>
[<code>flake8-commas</code>] Stabilize support for trailing comma checks
in type paramet...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.11...0.13.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-10-02 20:57:18 +00:00
dependabot[bot]
f5ee579ab2 chore(backend/deps): Bump firecrawl-py from 2.16.3 to 4.3.1 in /autogpt_platform/backend (#10809)
Bumps [firecrawl-py](https://github.com/firecrawl/firecrawl) from 2.16.3
to 4.3.1.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/firecrawl/firecrawl/commits">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=firecrawl-py&package-manager=pip&previous-version=2.16.3&new-version=4.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrade firecrawl-py to v4.3.6 and refactor firecrawl blocks to new v4
API, formats handling, method names, and response fields.
> 
> - **Dependencies**
> - Bump `firecrawl-py` from `2.16.3` to `4.3.6` (adds `httpx`, updates
`pydantic>=2`).
> - **Firecrawl API migration**
>   - Centralize `ScrapeFormat` in `backend/blocks/firecrawl/_api.py`.
> - Add `_format_utils.convert_to_format_options` to map `ScrapeFormat`
(incl. `screenshot@fullPage`) to v4 `FormatOption`/`ScreenshotFormat`.
> - Switch to v4 types (`firecrawl.v2.types.ScrapeOptions`); adopt
snake_case fields (`only_main_content`, `max_age`, `wait_for`).
> - Rename methods: `crawl_url` → `crawl`, `scrape_url` → `scrape`,
`map_url` → `map`.
> - Normalize response attributes: `rawHtml` → `raw_html`,
`changeTracking` → `change_tracking`.
> - **Blocks**
> - `crawl.py`, `scrape.py`, `search.py`: use new formats conversion and
updated options/fields; adjust iteration over results (`search`: iterate
`web` when present).
> - `map.py`: return both `links` and detailed `results`
(url/title/description) and update output schema accordingly.
> - **Project files**
> - Update `pyproject.toml` and `poetry.lock` for new dependency
versions.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d872f2e82b. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
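
A rough before/after sketch of the v2 → v4 migration summarized above; the
client class name and exact call signatures are assumptions, not verified
against firecrawl-py 4.3.x:

```python
from firecrawl import Firecrawl  # assumed v4 entry point

app = Firecrawl(api_key="fc-...")

# old v2 style: app.scrape_url(url, params={"onlyMainContent": True})
# new v4 style: renamed method plus snake_case options
doc = app.scrape(
    "https://example.com",
    only_main_content=True,  # was onlyMainContent
    wait_for=1000,           # was waitFor (milliseconds)
)
print(doc.raw_html)  # was doc.rawHtml
```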

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-02 20:14:18 +00:00
Zamil Majdy
57a06f7088 fix(blocks, security): Fixes for various DoS vulnerabilities (#10798)
This PR addresses multiple critical and medium security vulnerabilities
that could lead to Denial of Service (DoS) attacks. All fixes implement
defense-in-depth strategies with comprehensive testing.

### Changes 🏗️

#### **Critical Security Fixes:**

1. **GHSA-m2wr-7m3r-p52c - ReDoS in CodeExtractionBlock** 
- Fixed catastrophic backtracking in regex patterns `\s+[\s\S]*?` and
`\s+(.*?)`
   - Replaced with a safer pattern: `[ \t]*\n([\s\S]*?)` (the literal
newline removes the overlap with `\s+` that enabled the backtracking)
   - Files: `backend/blocks/code_extraction_block.py`

2. **GHSA-955p-gpfx-r66j - AITextSummarizerBlock Memory Amplification**
   - Added 1MB text size limit and 100 chunk maximum
   - Prevents 10K input → 50G memory amplification attacks
   - Files: `backend/blocks/llm.py`

3. **GHSA-5cqw-g779-9f9x - RSS Feed XML Bomb DoS**
   - Added 10MB feed size limit and 30s timeout (see the bounded-read
sketch below)
   - Prevents deep XML parsing memory exhaustion
   - Files: `backend/blocks/rss.py`

4. **GHSA-7g34-7fvq-xxq6 - File Storage Disk Exhaustion**
   - Added 100MB per file and 1GB per execution directory limits
   - Prevents disk space exhaustion from file uploads
   - Files: `backend/util/file.py`

5. **GHSA-pppq-xx2w-7jpq - ExtractTextInformationBlock ReDoS**
   - Added 1MB text limit, 1000 match limit, and 5s timeout protection
(see the regex-timeout sketch below)
   - Prevents lookahead pattern memory exhaustion
   - Files: `backend/blocks/text.py`

6. **GHSA-vw3v-whvp-33v5 - Docker Logging Disk Exhaustion**
- Added log rotation limits at Docker (10MB × 3 files) and application
levels
   - Prevents unbounded log growth causing disk exhaustion
- Files: `docker-compose.platform.yml`,
`autogpt_libs/autogpt_libs/logging/config.py`
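
A minimal sketch of the size-cap-plus-timeout idea behind fixes 1 and 5 —
assumptions only, since the shipped code in `backend/blocks/text.py` is not
shown here:

```python
import re
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

MAX_TEXT_SIZE = 1_000_000  # 1MB input cap
MAX_MATCHES = 1000

_pool = ThreadPoolExecutor(max_workers=4)

def safe_findall(pattern: str, text: str, timeout: float = 5.0) -> list[str]:
    # Cap the input first: a Python regex running in a thread cannot be
    # force-killed, so bounding the input is the primary defense.
    text = text[:MAX_TEXT_SIZE]
    future = _pool.submit(re.findall, pattern, text)
    try:
        return future.result(timeout=timeout)[:MAX_MATCHES]
    except FuturesTimeout:
        return []  # runaway pattern: give up instead of hanging the block
```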

#### **Additional Security Improvements:**

7. **StepThroughItemsBlock DoS Prevention**
   - Added 10,000 item limit and 1MB input size limit
   - Prevents large iteration DoS attacks
   - Files: `backend/blocks/iteration.py`

8. **XMLParserBlock XML Bomb Prevention**
   - Added 10MB XML input size limit
   - Files: `backend/blocks/xml_parser.py`
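
And a hedged sketch of the bounded-read pattern behind fixes 3 and 8 (the
actual implementation in `backend/blocks/rss.py` may differ in its details):

```python
import requests

MAX_FEED_BYTES = 10 * 1024 * 1024  # 10MB cap (fixes 3 and 8)
FETCH_TIMEOUT = 30  # seconds per connect/read operation

def fetch_feed_bounded(url: str) -> bytes:
    if not url.startswith(("http://", "https://")):
        raise ValueError("unsupported URL scheme")
    buf = bytearray()
    with requests.get(url, stream=True, timeout=FETCH_TIMEOUT) as resp:
        resp.raise_for_status()
        # Stream in chunks so an attacker-controlled feed can't balloon
        # memory before the size check runs.
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            buf.extend(chunk)
            if len(buf) > MAX_FEED_BYTES:
                raise ValueError("feed exceeds 10MB limit")
    return bytes(buf)
```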

#### **Code Quality:**
- Fixed Python 3.10 typing compatibility issues
- Added comprehensive security test suite
- All code formatted and linted

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created comprehensive security test suite covering all
vulnerabilities
  - [x] Verified ReDoS patterns are fixed and don't cause timeouts
  - [x] Confirmed memory limits prevent amplification attacks
  - [x] Tested file size limits prevent disk exhaustion
  - [x] Validated log rotation prevents unbounded growth
  - [x] Ensured backward compatibility for normal usage

#### For configuration changes:
- [x] `docker-compose.yml` is updated with logging limits
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Test Plan 🧪

**Security Tests:**
1. **ReDoS Protection**: Tested with malicious regex inputs (large
spaces) - completes without hanging
2. **Memory Limits**: Verified 2MB text input gets truncated to 1MB,
chunk limits enforced
3. **File Size Limits**: Confirmed 200MB files rejected, directory size
limits enforced
4. **Iteration Limits**: Tested 20K item arrays rejected, large JSON
strings rejected
5. **Timeout Protection**: Dangerous regex patterns timeout after 5s
instead of hanging

**Compatibility Tests:**
- Normal functionality preserved for all blocks
- Existing tests pass with new security limits
- Performance impact minimal for typical usage

### Security Impact 🛡️

**Before:** Multiple attack vectors could cause:
- CPU exhaustion (ReDoS attacks)
- Memory exhaustion (amplification attacks)  
- Disk exhaustion (file/log bombs)
- Service unavailability

**After:** All attack vectors mitigated with:
- Input validation and size limits
- Timeout protections
- Resource quotas
- Defense-in-depth approach

All fixes maintain backward compatibility while preventing DoS attacks.

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds robust DoS protections across blocks (regex, memory, iteration,
XML/RSS, file I/O) and enables app/Docker log rotation with
comprehensive tests.
> 
> - **Security hardening**:
> - Replace unsafe regex in `backend/blocks/code_extraction_block.py` to
prevent ReDoS; add safer extraction/removal patterns.
> - Constrain LLM summarizer chunking in `backend/blocks/llm.py` (1MB
cap, chunk/overlap validation, chunk count limit).
> - Limit RSS fetching in `backend/blocks/rss.py` (scheme validation,
10MB cap, timeout, bounded read) and return empty on failure.
>   - Impose XML size limit (10MB) in `backend/blocks/xml_parser.py`.
> - Add file upload/download limits in `backend/util/file.py`
(100MB/file, 1GB dir quota) and enforce scanning before write.
> - Enable rotating file logs in `autogpt_libs/logging/config.py` (size
+ backups) and Docker json-file log rotation in
`docker-compose.platform.yml`.
> - **Iteration block**:
> - Add item count/string size limits; fix yielded key for dicts; cap
iterations in `backend/blocks/iteration.py`.
> - **Tests**:
> - New `backend/blocks/test/test_security_fixes.py` covering ReDoS,
timeouts, memory/size and iteration limits, XML/file constraints.
> - **Misc**:
> - Typing fallback for `NotRequired` in `activity_status_generator.py`.
>   - Dependency updates in `backend/poetry.lock`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
500e1578b1. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <Pwuts@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-02 12:55:55 +00:00
Zamil Majdy
258bf0b1a5 fix(backend): improve activity status generation accuracy and handle missing blocks gracefully (#11039)
## Summary
Fix critical issues where activity status generator incorrectly reported
failed executions as successful, and enhance AI evaluation logic to be
more accurate about actual task accomplishment.

## Changes Made

### 1. Missing Block Handling (`backend/data/graph.py`)
- **Replace ValueError with graceful degradation**: When blocks are
deleted/missing, return `_UnknownBlock` placeholder instead of crashing
- **Comprehensive interface implementation**: `_UnknownBlock` implements
all expected Block methods to prevent type errors
- **Warning logging**: Log missing blocks for debugging without breaking
execution flow
- **Removed unnecessary caching**: Direct constructor calls instead of
cached wrapper functions
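
A sketch of what such a placeholder could look like — illustrative only,
since the real `Block` interface in `backend/data/graph.py` is much richer:

```python
import logging

logger = logging.getLogger(__name__)

class _UnknownBlock:
    """Stands in for a deleted/missing block so graph loading can't crash."""

    name = "Unknown Block"

    def __init__(self, block_id: str):
        self.id = block_id
        logger.warning("Block %s is missing; substituting placeholder", block_id)

    async def run(self, *_args, **_kwargs):
        # Yields nothing, so downstream analysis sees "no outputs"
        # instead of a raised ValueError.
        return
        yield  # unreachable; makes this an (empty) async generator
```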

### 2. Enhanced Activity Status AI Evaluation
(`backend/executor/activity_status_generator.py`)

#### Intention-Based Success Evaluation
- **Graph description analysis**: AI now reads graph description FIRST
to understand intended purpose
- **Purpose-driven evaluation**: Success is measured against what the
graph was designed to accomplish
- **Critical output analysis**: Enhanced detection of missing outputs
from key blocks (Output, Post, Create, Send, Publish, Generate)
- **Sub-agent failure detection**: Better identification when
AgentExecutorBlock produces no outputs

#### Improved Prompting
- **Intent-specific examples**: 'blog writing' → check for blog content,
'email automation' → check for sent emails
- **Primary evaluation criteria**: 'Did this execution accomplish what
the graph was designed to do?'
- **Enhanced checklist**: 7-point analysis including graph description
matching
- **Technical vs. goal completion**: Distinguish between workflow steps
completing vs. actual user goals achieved

#### Removed Database Error Handling
- **Eliminated try-catch blocks**: No longer needed around
`get_graph_metadata` and `get_graph` calls
- **Direct database calls**: Simplified error handling after fixing
missing block root cause
- **Cleaner code flow**: More predictable execution path without
redundant error handling

## Problem Solved
- **False success reports**: AI previously marked executions as
'successful' when critical output blocks produced no results
- **Missing block crashes**: System would fail when trying to analyze
executions with deleted/missing blocks
- **Intent-blind evaluation**: AI evaluated technical completion instead
of actual goal achievement
- **Database service errors**: 500 errors when missing blocks caused
graph loading failures

## Business Impact
- **More accurate user feedback**: Users get honest assessment of
whether their automations actually worked
- **Better task completion detection**: Clear distinction between
'workflow completed' vs. 'goal achieved'
- **Improved reliability**: System handles edge cases gracefully without
crashing
- **Enhanced user trust**: Truthful reporting builds confidence in the
platform

## Testing
- Tested with problematic executions that previously showed false
successes
- Confirmed missing block handling works without warnings
- Verified enhanced prompt correctly identifies failures
- Database calls work without try-catch protection

## Example Before/After

**Before (False Success):**
```
Graph: "Automated SEO Blog Writer"
Status: " I successfully completed your blog writing task!"
Reality: No blog content was actually created (critical output blocks had no outputs)
```

**After (Accurate Failure Detection):**
```
Graph: "Automated SEO Blog Writer"  
Status: " The task failed because the blog post creation step didn't produce any output."
Reality: Correctly identifies that the intended blog writing goal was not achieved
```

## Files Modified
- `backend/data/graph.py`: Missing block graceful handling with complete
interface
- `backend/executor/activity_status_generator.py`: Enhanced AI
evaluation with intention-based analysis

## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality) 
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream
modules

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 12:28:57 +00:00
Ubbe
4a1cb6d64b fix(frontend): performance and layout issues (#11036)
## Changes 🏗️

### Performance (Onboarding) 🐎 

- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
 
**Result:** faster initial render and smoother onboarding flows under
load.

### Layout and overflow fixes 📐 

- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.

**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.

### New Generic Error pages

- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
  - [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently

### For configuration changes:

None
2025-10-02 10:21:31 +00:00
Copilot
7c9db7419a fix(frontend): Display run cost correctly - convert cents to dollars in run detail components (#10997)
Fixed costs being displayed as raw cent values instead of properly
formatted dollar amounts in the frontend monitoring and agent run detail
pages.

## Problem
The platform was showing costs incorrectly in two key areas:
- **Monitoring page**: Total cost displayed as raw cents with incorrect
"seconds" unit (e.g., "Total cost: 150 seconds")
- **Agent run details**: Individual run costs displayed as raw cents
(e.g., "Cost: $150" for what should be $1.50)

## Solution
Updated the affected components to properly convert cents to dollars
with consistent formatting:

**FlowRunsStatus.tsx** - Fixed total cost calculation and display:
```tsx
// Before
{filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0)} seconds

// After  
${(filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0) / 100).toFixed(2)}
```

**RunDetailHeader.tsx** - Fixed individual run cost display:
```tsx
// Before
Cost: ${run.stats.cost}

// After
Cost: ${(run.stats.cost / 100).toFixed(2)}
```

## Validation
- Backend correctly stores costs in cents (verified in models and
database schemas)
- Email notification templates already handle the conversion properly
using `(credits_used|float)/100`
- Other components use the existing `formatCredits()` utility which
correctly converts cents to dollars
- No security vulnerabilities introduced (CodeQL verification passed)
- All linting and formatting checks pass

The fix ensures users now see accurate dollar amounts (e.g., $1.50
instead of $150 or "150 seconds") across the platform's cost reporting
interfaces.

![Cost Display Fix
Demo](https://github.com/user-attachments/assets/13c75a1d-7c78-4c11-9293-3dcf4c443097)

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `checkpoint.prisma.io`
> - Triggering commands: Prisma CLI telemetry checks via the cached
> prisma-python node binary, during two `prisma generate` runs and one
> `prisma migrate deploy` run (dns block)
> - `fonts.googleapis.com`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
> - `o1.ingest.sentry.io`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> 
> ----
> 
> *This section details on the original issue you should resolve*
> 
> <issue_title>Costs are being shown as dollars rather than cents based
on the new runs page</issue_title>
> <issue_description></issue_description>
> 
> ## Comments on the Issue (you are @copilot in this section)
> 
> <comments>
> </comments>
> 


</details>
Fixes Significant-Gravitas/AutoGPT#10886


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-02 10:08:03 +00:00
Krzysztof Czerwinski
18bbd8e572 fix(frontend): Fix confetti (#11031)
### Changes 🏗️

- Fix not being able to complete `MARKETPLACE_RUN_AGENT` task
- Fix confetti shooting on every refresh
- Fix confetti shooting from top-left corner

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Bugs eradicated
2025-10-02 03:19:25 +00:00
Zamil Majdy
047f011520 fix(platform): resolve authentication performance bottlenecks and improve reliability (#11028)
## Summary
Fix critical authentication performance bottlenecks causing infinite
loading during login and malformed redirect URL handling.

## Root Cause Analysis
- **OnboardingProvider** was running expensive `isOnboardingEnabled()`
database queries on every route for all users
- **Timezone detection** was calling backend APIs during authentication
flow instead of only during onboarding
- **Malformed redirect URLs** like `/marketplace,%20/marketplace`
causing authentication callback failures
- **Arbitrary setTimeout** creating race conditions instead of proper
authentication state management

## Changes Made

### 1. Backend: Cache Expensive Onboarding Queries
(`backend/data/onboarding.py`)
- Add `@cached(maxsize=1, ttl_seconds=300)` decorator to
`onboarding_enabled()` (see the sketch after this list)
- Cache expensive database queries for 5 minutes to prevent repeated
execution during auth
- Optimize query with `take=MIN_AGENT_COUNT + 1` to stop counting early
- Fix typo: "Onboading" → "Onboarding"
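
A minimal sketch of a `@cached(maxsize=1, ttl_seconds=300)`-style decorator,
assuming an async `onboarding_enabled()` with hashable arguments — the
platform's actual decorator may well differ:

```python
import functools
import time

def cached(maxsize: int = 1, ttl_seconds: int = 300):
    def decorator(fn):
        store: dict = {}  # args -> (expires_at, value); assumes hashable args

        @functools.wraps(fn)
        async def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # fresh cache hit: skip the expensive query
            value = await fn(*args)
            if len(store) >= maxsize:
                store.clear()  # crude eviction; adequate for maxsize=1
            store[args] = (now + ttl_seconds, value)
            return value

        return wrapper
    return decorator
```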

### 2. Frontend: Optimize OnboardingProvider
(`providers/onboarding/onboarding-provider.tsx`)
- **Route-based optimization**: Only call `isOnboardingEnabled()` when
user is actually on `/onboarding/*` routes
- **Preserve functionality**: Still fetch `getUserOnboarding()` for step
completion tracking on all routes
- **Smart redirects**: Only handle onboarding completion redirects when
on onboarding routes
- **Performance improvement**: Eliminates expensive database calls for
95% of page loads

### 3. Frontend: Fix Timezone Detection Race Conditions
(`hooks/useOnboardingTimezoneDetection.ts`)
- **Remove setTimeout hack**: Replace arbitrary 1000ms timeout with
proper authentication state checks
- **Add route filtering**: Only run timezone detection on
`/onboarding/*` routes using `pathname.startsWith()`
- **Proper auth dependencies**: Use `useSupabase()` hook to wait for
`user` and `!isUserLoading`
- **Fire-and-forget updates**: Change from `mutateAsync()` to `mutate()`
to prevent blocking UI

### 4. Frontend: Apply Fire-and-Forget Pattern
(`hooks/useTimezoneDetection.ts`)
- Change timezone auto-detection from `mutateAsync()` to `mutate()`
- Prevents blocking user interactions during background timezone updates
- API still executes successfully, user doesn't wait for response

### 5. Frontend: Enhanced URL Validation (`auth/callback/route.ts`)
- **Add malformed URL detection**: Check for commas and spaces in
redirect URLs
- **Constants**: Use `DEFAULT_REDIRECT_PATH = "/marketplace"` instead of
hardcoded strings
- **Better error handling**: Try-catch with fallback to safe default
path
- **Path depth limits**: Reject suspiciously deep URLs (>5 segments)
- **Race condition mitigation**: Default to `/marketplace` for corrupted
URLs with warning logs
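
The callback route itself is TypeScript; the sketch below merely restates
the validation rules above in Python for illustration (the constant names
are assumptions):

```python
DEFAULT_REDIRECT_PATH = "/marketplace"
MAX_PATH_DEPTH = 5

def sanitize_redirect(path: str) -> str:
    try:
        if "," in path or " " in path:       # malformed, e.g. "/marketplace, /marketplace"
            return DEFAULT_REDIRECT_PATH
        if not path.startswith("/") or path.startswith("//"):
            return DEFAULT_REDIRECT_PATH     # only same-origin relative paths
        depth = len([seg for seg in path.split("/") if seg])
        if depth > MAX_PATH_DEPTH:
            return DEFAULT_REDIRECT_PATH     # reject suspiciously deep URLs
        return path
    except Exception:
        return DEFAULT_REDIRECT_PATH         # fall back to the safe default
```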

## Technical Implementation

### Performance Optimizations
- **Database caching**: 5-minute cache prevents repeated expensive
onboarding queries
- **Route-aware logic**: Heavy operations only run where needed
(`/onboarding/*` routes)
- **Non-blocking updates**: Timezone updates don't block authentication
flow
- **Proper state management**: Wait for actual authentication instead of
arbitrary delays

### Authentication Flow Improvements
- **Eliminate race conditions**: No more setTimeout guessing - wait for
proper auth state
- **Faster auth**: Remove blocking timezone API calls during login flow
- **Better UX**: Handle malformed URLs gracefully instead of failing

## Files Changed
- `backend/data/onboarding.py` - Add caching to expensive queries
- `providers/onboarding/onboarding-provider.tsx` - Route-based
optimization
- `hooks/useOnboardingTimezoneDetection.ts` - Proper auth state + route
filtering + fire-and-forget
- `hooks/useTimezoneDetection.ts` - Fire-and-forget pattern
- `auth/callback/route.ts` - Enhanced URL validation

## Impact
- **Eliminates infinite loading** during authentication flow
- **Improves auth response times** from 5-11+ seconds to sub-second
- **Prevents malformed redirect URLs** that confused users
- **Reduces database load** through intelligent caching  
- **Maintains all existing functionality** with better performance
- **Eliminates race conditions** from arbitrary timeouts

## Validation
- All pre-commit hooks pass (format, lint, typecheck)
- No breaking changes to existing functionality
- Backward compatible with all onboarding flows
- Enhanced error logging and graceful fallbacks

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 01:26:49 +00:00
Reinier van der Leer
d11917eb10 feat(blocks): Improve data output of code execution block (#11017)
- Resolves #11016

### Changes 🏗️

- Add more extensive outputs to Code Execution Block
- Rename "Response" output to "Main Text Output"

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Object outputs can be accessed now
2025-10-01 10:38:04 +00:00
Copilot
4663066e65 feat(blocks): Implement AI Condition Block for natural language condition evaluation (#10996)
This PR implements the AI Condition Block as requested in issue
AUTOMAT-60. The new block enables users to define conditional logic
using natural language descriptions instead of traditional comparison
operators, while maintaining the same yes/no data pass-through
functionality as the existing ConditionBlock.

## Overview

The AI Condition Block uses Large Language Models to evaluate conditions
written in plain English, such as:
- "the input is the body of an email"
- "the input is a City in the USA"
- "the input is an error or a refusal"

## Key Features

**Natural Language Processing**: Users can express complex conditions in
everyday English rather than programming logic, making agent workflows
more intuitive and accessible.

**Consistent Interface**: Maintains the same input/output schema as the
standard ConditionBlock:
- Boolean `result` output indicating condition evaluation
- `yes_output` and `no_output` for conditional data flow
- Optional custom values for yes/no cases

**Robust Error Handling**: Defaults to `false` on AI evaluation failures
to ensure safe operation and prevent workflow interruption.

**Performance Optimized**: Uses minimal token limits (10 tokens) for
true/false responses to reduce latency and API costs.

## Implementation Details

The block is implemented as `AIConditionBlock` in
`backend/blocks/ai_condition.py` and inherits from `AIBlockBase`
following established platform patterns. It includes:

- Proper LLM integration with credential management
- Token usage tracking and statistics
- Comprehensive test mocking for reliable CI/CD
- Full documentation with examples and use cases
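
A hedged sketch of the evaluate-then-route flow this implies; `llm_call` is
treated here as an injected coroutine, and the prompt and parsing details are
simplified stand-ins for the block's real internals in
`backend/blocks/ai_condition.py`:

```python
async def evaluate_condition(llm_call, condition: str, value) -> bool:
    prompt = (
        f"Condition: {condition}\nInput: {value!r}\n"
        "Answer with exactly 'true' or 'false'."
    )
    # A tiny token cap keeps latency and API cost minimal for a boolean answer.
    response = await llm_call(prompt, max_tokens=10)
    text = response.strip().lower()
    if text in ("true", "false"):
        return text == "true"
    # Fallback token matching; ambiguity or failure defaults to False.
    return "true" in text and "false" not in text

def route(result: bool, value, yes_value=None, no_value=None) -> dict:
    # Mirrors ConditionBlock: boolean result plus yes/no data pass-through.
    if result:
        return {"result": True, "yes_output": yes_value if yes_value is not None else value}
    return {"result": False, "no_output": no_value if no_value is not None else value}
```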

## Use Cases

This block enables more sophisticated conditional logic for:
- **Content Classification**: Automatically categorize text, emails, or
documents
- **Data Validation**: Validate inputs using natural language rules
- **Smart Routing**: Route data based on AI-evaluated conditions
- **Error Detection**: Identify and handle error messages or problematic
inputs
- **Quality Control**: Check content against flexible quality standards

## Testing

The implementation includes comprehensive testing that integrates with
the existing platform test suite. All tests pass, including:
- Unit tests with proper LLM response mocking
- Code quality checks (linting, formatting, type checking)
- Security analysis via CodeQL
- Integration testing to ensure proper block discovery and loading

The block is automatically discovered by the platform's block loading
system and is immediately available for use in agent workflows.

## PR Checklist

- [x] **Have you listed your changes in the description?**
  - Added new `AIConditionBlock` in `backend/blocks/ai_condition.py`
- Added comprehensive documentation in
`docs/content/platform/blocks/ai_condition.md`
  - Implemented natural language condition evaluation using LLMs

- [x] **Have you included a test plan?**
  - Unit tests with mocked LLM responses
  - Integration tests for block discovery and loading
  - Error handling validation
  - Token usage tracking verification

- [x] **Have you tested your changes according to the test plan?**
  - All existing tests pass
  - Linting and formatting checks pass
  - Type checking passes
  - Security analysis via CodeQL passes
- Fixed `json_format` parameter to `force_json_output` per recent API
changes

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `api.openai.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python
/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/pytest
backend/blocks/test/test_block.py::test_available_blocks -k
AIConditionBlock -v` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> Issue Title: AI Condition Block
> Issue Description: A version of the condition/if block that uses an AI
powered condition.
>
> It should have the same yes/no data pass throughs, as well as
outputting a result Boolean.
>
> The condition is plaintext English, provided by the user, and could be
anything.
>
> e.g
> If `[the input] is the body of an email`
> If `[the input] is a City in the USA`
> If `[the input] is an error or a refusal`
> Fixes https://linear.app/autogpt/issue/AUTOMAT-60/ai-condition-block
>
>
> Comment by User 4bcbb358-1758-43e4-abef-a0a42b63442f:
> 📋 I need a **repo** label on this issue to determine which GitHub
repository to work in.
>
> Please add a repo label to this issue with the format
`owner/repository-name` (e.g., `github/copilot`), then I'll
automatically start working on it!
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
>


</details>



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces `AIConditionBlock` that uses an LLM to evaluate
natural-language conditions and outputs boolean result with yes/no
pass-through, plus accompanying documentation.
> 
> - **Backend**:
>   - **New block**: `backend/blocks/ai_condition.py`
> - Evaluates natural-language conditions via `llm_call` using
selectable `LlmModel` and credentials.
> - Parses strict true/false responses (with fallback token matching),
yields `result`, `yes_output`/`no_output`, and `error` on
ambiguity/failure.
> - Tracks token usage via `NodeExecutionStats`; includes test
inputs/mocks and `force_json_output=False`.
> - **Docs**:
> - Adds `docs/content/platform/blocks/ai_condition.md` with usage,
inputs/outputs, examples, and considerations.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
06e9586bd3. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-10-01 05:02:57 +00:00
Krzysztof Czerwinski
48a0faa611 feat(frontend): Restore onboarding steps (#11027)
Wallet update removed `BUILDER_OPEN` and `BUILDER_RUN_AGENT`.

### Changes 🏗️

- Restore completion codepaths for `BUILDER_OPEN` and
`BUILDER_RUN_AGENT` for analytical purposes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tasks are completed silently
2025-10-01 04:53:51 +00:00
Nicholas Tindle
70d00b4104 fix(ci): Delete pr_reviewer section in .pr_agent.toml (#11024)
Remove pr_reviewer section from configuration

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Removes the out-of-config status section.
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] validated by global config
2025-10-01 03:01:24 +00:00
Nicholas Tindle
aad0434cb2 feat(frontend): Enhance Sentry integration and TallyPopup telemetry (#10862)
Added Sentry captureConsoleIntegration and extraErrorDataIntegration to
client, edge, and server configs. Improved replay integration with
unmasking support. Updated TallyPopup to collect and expose Sentry
replay data, user agent, and page URL for enhanced telemetry and
debugging. Improved event handling and error logging for Tally events.
Marked CustomNode title for Sentry unmasking.

### Changes 🏗️
Reconfigures Sentry. Passes the Sentry replay ID to Tally alongside a
prefilled email, plus non-user-identifying attributes such as the
platform URL, the full page URL, and whether the user is authenticated.
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the results show up in sentry
  - [x] Test the url works in tally
2025-10-01 03:00:20 +00:00
Krzysztof Czerwinski
f33ec1f2ec feat(platform): New retention-focused tasks and wallet update (#10977)
### Changes 🏗️

- Rename wallet and update design
- Update tasks and add Hidden Tasks section
- Update onboarding backend code and related db migration
- Add progress bar for some tasks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tasks can be finished
  - [x] Finished tasks add correct amount of credits

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-01 01:29:30 +00:00
dependabot[bot]
e68b873bcf chore(frontend/deps): Bump @faker-js/faker from 9.9.0 to 10.0.0 in /autogpt_platform/frontend (#10806)
Bumps [@faker-js/faker](https://github.com/faker-js/faker) from 9.9.0 to
10.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/releases"><code>@​faker-js/faker</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v10.0.0</h2>
<h2>New &amp; Noteworthy</h2>
<ul>
<li>esm only (for cjs support look into migration guide, we got you
covered 😉)</li>
<li>remove v9 deprecations</li>
<li>change default error strategy to 'fail' in word module</li>
<li>remove invalid credit card issuer patterns</li>
<li>see our <a
href="https://v10.fakerjs.dev/guide/upgrading.html">migration
guide</a></li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>ci: use node 24 by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3543">faker-js/faker#3543</a></li>
<li>infra: stop using node 18 by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3536">faker-js/faker#3536</a></li>
<li>infra: use import.meta.dirname by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3542">faker-js/faker#3542</a></li>
<li>chore(deps): update devdependencies (major) by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3512">faker-js/faker#3512</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3555">faker-js/faker#3555</a></li>
<li>chore(deps): update dependency <code>@​vitest/eslint-plugin</code>
to v1.3.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3554">faker-js/faker#3554</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3556">faker-js/faker#3556</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3557">faker-js/faker#3557</a></li>
<li>feat!: esm only by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3540">faker-js/faker#3540</a></li>
<li>refactor!: remove deprecations by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3553">faker-js/faker#3553</a></li>
<li>docs: migration guide for v10 by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3559">faker-js/faker#3559</a></li>
<li>infra: more precise engines field by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3561">faker-js/faker#3561</a></li>
<li>refactor(word)!: change default error strategy to 'fail' by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3560">faker-js/faker#3560</a></li>
<li>chore(release): 10.0.0-beta.0 by <a
href="https://github.com/fakerjs-bot"><code>@​fakerjs-bot</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3565">faker-js/faker#3565</a></li>
<li>docs: Minor improvements to migration guide by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3569">faker-js/faker#3569</a></li>
<li>chore(deps): update pnpm to v10.13.1 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3570">faker-js/faker#3570</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3571">faker-js/faker#3571</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3572">faker-js/faker#3572</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3562">faker-js/faker#3562</a></li>
<li>chore(deps): update dependency typescript to v5.9.2 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3576">faker-js/faker#3576</a></li>
<li>chore(deps): update pnpm to v10.14.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3579">faker-js/faker#3579</a></li>
<li>chore(deps): update
mcr.microsoft.com/devcontainers/typescript-node:22 docker digest to
2baa40a by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3575">faker-js/faker#3575</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3577">faker-js/faker#3577</a></li>
<li>chore(deps): update eslint (major) by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3580">faker-js/faker#3580</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3578">faker-js/faker#3578</a></li>
<li>feat(locale): extended list of colors in Polish by <a
href="https://github.com/pkuczynski"><code>@​pkuczynski</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3586">faker-js/faker#3586</a></li>
<li>refactor(locale): remove invalid credit card issuer patterns by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3568">faker-js/faker#3568</a></li>
<li>docs: update migration guide with findings from playground update by
<a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3587">faker-js/faker#3587</a></li>
<li>chore: fix typo in test by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3591">faker-js/faker#3591</a></li>
<li>chore(deps): update all non-major dependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3596">faker-js/faker#3596</a></li>
<li>chore(deps): update amannn/action-semantic-pull-request action to v6
by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3598">faker-js/faker#3598</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3599">faker-js/faker#3599</a></li>
<li>chore(deps): update actions/checkout action to v5 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3597">faker-js/faker#3597</a></li>
<li>chore(deps): update dependency cypress to v15 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3603">faker-js/faker#3603</a></li>
<li>chore(deps): update dependency vitepress to v1.6.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3601">faker-js/faker#3601</a></li>
<li>chore(deps): pin dependency node to 24.6.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3600">faker-js/faker#3600</a></li>
<li>chore(deps): update dependency typescript-eslint to v8.40.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3602">faker-js/faker#3602</a></li>
<li>chore(deps): update dependency eslint-plugin-jsdoc to v54 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3604">faker-js/faker#3604</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3584">faker-js/faker#3584</a></li>
<li>chore(release): 10.0.0 by <a
href="https://github.com/fakerjs-bot"><code>@​fakerjs-bot</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3605">faker-js/faker#3605</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/blob/next/CHANGELOG.md"><code>@​faker-js/faker</code>'s
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/faker-js/faker/compare/v10.0.0-beta.0...v10.0.0">10.0.0</a>
(2025-08-21)</h2>
<h3>New Locales</h3>
<ul>
<li><strong>locale:</strong> extended list of colors in Polish (<a
href="https://redirect.github.com/faker-js/faker/issues/3586">#3586</a>)
(<a
href="9940d54f75">9940d54</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>locales:</strong> add animal vocabulary(bear, bird, cat,
rabbit, pet_name) in Korean (<a
href="https://redirect.github.com/faker-js/faker/issues/3535">#3535</a>)
(<a
href="0d2143c75d">0d2143c</a>)</li>
</ul>
<h3>Changed Locales</h3>
<ul>
<li><strong>locale:</strong> remove invalid credit card issuer patterns
(<a
href="https://redirect.github.com/faker-js/faker/issues/3568">#3568</a>)
(<a
href="9783d95a8e">9783d95</a>)</li>
</ul>
<h2><a
href="https://github.com/faker-js/faker/compare/v9.9.0...v10.0.0-beta.0">10.0.0-beta.0</a>
(2025-07-09)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>
<p><strong>word:</strong> change default error strategy to 'fail' (<a
href="https://redirect.github.com/faker-js/faker/issues/3560">#3560</a>)</p>
</li>
<li>
<p>remove deprecations (<a
href="https://redirect.github.com/faker-js/faker/issues/3553">#3553</a>)</p>
</li>
<li>
<p>esm only (<a
href="https://redirect.github.com/faker-js/faker/issues/3540">#3540</a>)</p>
</li>
<li>
<p>remove deprecations (<a
href="https://redirect.github.com/faker-js/faker/issues/3553">#3553</a>)
(<a
href="623d2741a4">623d274</a>)</p>
</li>
<li>
<p><strong>word:</strong> change default error strategy to 'fail' (<a
href="https://redirect.github.com/faker-js/faker/issues/3560">#3560</a>)
(<a
href="93416f71cf">93416f7</a>)</p>
</li>
</ul>
<h3>Features</h3>
<ul>
<li>esm only (<a
href="https://redirect.github.com/faker-js/faker/issues/3540">#3540</a>)
(<a
href="160960b797">160960b</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="51943aecb9"><code>51943ae</code></a>
chore(release): 10.0.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3605">#3605</a>)</li>
<li><a
href="96d7517b9b"><code>96d7517</code></a>
chore(deps): lock file maintenance (<a
href="https://redirect.github.com/faker-js/faker/issues/3584">#3584</a>)</li>
<li><a
href="2eb6fa0a7a"><code>2eb6fa0</code></a>
chore(deps): update dependency eslint-plugin-jsdoc to v54 (<a
href="https://redirect.github.com/faker-js/faker/issues/3604">#3604</a>)</li>
<li><a
href="1fcfe4830d"><code>1fcfe48</code></a>
chore(deps): pin dependency node to 24.6.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3600">#3600</a>)</li>
<li><a
href="2bd4807fa2"><code>2bd4807</code></a>
chore(deps): update dependency typescript-eslint to v8.40.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3602">#3602</a>)</li>
<li><a
href="09a88eb100"><code>09a88eb</code></a>
chore(deps): update dependency vitepress to v1.6.4 (<a
href="https://redirect.github.com/faker-js/faker/issues/3601">#3601</a>)</li>
<li><a
href="5418574bf7"><code>5418574</code></a>
chore(deps): update dependency cypress to v15 (<a
href="https://redirect.github.com/faker-js/faker/issues/3603">#3603</a>)</li>
<li><a
href="9e4f463ecf"><code>9e4f463</code></a>
chore(deps): update actions/checkout action to v5 (<a
href="https://redirect.github.com/faker-js/faker/issues/3597">#3597</a>)</li>
<li><a
href="287ecdaa39"><code>287ecda</code></a>
chore(deps): update devdependencies (<a
href="https://redirect.github.com/faker-js/faker/issues/3599">#3599</a>)</li>
<li><a
href="2b1495956f"><code>2b14959</code></a>
chore(deps): update amannn/action-semantic-pull-request action to v6 (<a
href="https://redirect.github.com/faker-js/faker/issues/3598">#3598</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/faker-js/faker/compare/v9.9.0...v10.0.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@faker-js/faker&package-manager=npm_and_yarn&previous-version=9.9.0&new-version=10.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades `@faker-js/faker` to v10 and updates test utilities to
dynamically import Faker and make password generation async.
> 
> - **Frontend dependencies**:
>   - Bump `@faker-js/faker` from `9.9.0` to `10.0.0`.
> - **Tests**:
> - Replace static imports with dynamic `import("@faker-js/faker")` in
`src/tests/utils/{auth.ts,signup.ts}`.
> - Change `generateTestPassword` to `async` returning `Promise<string>`
to use ESM Faker.
> - Adjust test user creation to use dynamically generated
`email`/`password` via Faker.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
334f4a264d. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-30 21:12:18 +00:00
Nicholas Tindle
4530e97e59 feat(platform/blocks): Add table input UI and builder block (#10829)
<!-- Clearly explain the need for these changes: -->


https://github.com/user-attachments/assets/909a6ecf-5731-424c-8dee-fe25db907365


### Need 💡

This PR introduces a new "Table Input" block and corresponding UI
component, allowing users to easily input structured, tabular data
directly within the agent builder. This addresses the need for a
user-friendly way to define custom column headers and populate rows of
data, which is then output as a list of dictionaries.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

* **New `TableInputBlock` (backend):** A new block
(`backend/backend/blocks/table_input.py`) has been added. It defines an
`Input` schema with `headers` (a list of strings for column names) and
`value` (a list of dictionaries representing table rows). The block
outputs the `value` data in the specified dictionary format.
* **New `NodeTableInput` Component (frontend):** A new React component
(`frontend/src/components/node-table-input.tsx`) was created to render
an editable table UI, supporting dynamic row addition/removal and cell
editing.
*   **Frontend Integration:**
* `NodeGenericInputField` and `NodeObjectInputTree` were updated to pass
`parentContext` down the component hierarchy.
* `NodeArrayInput` was modified to conditionally render the new
`NodeTableInput` component. It now detects when an array field
(`selfKey` is "value") is part of a parent context that defines
`headers`, indicating it should be rendered as a table.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a "Table Input" block to the builder.
  - [x] Define custom headers (e.g., "Name", "Email").
  - [x] Add several rows of data using the table UI.
- [x] Verify that adding, editing, and removing rows works as expected.
- [x] Connect the output of the "Table Input" block to another block
(e.g., a "Print" block) and confirm the output format is a list of
dictionaries with the defined headers as keys.
  - [x] Test with an empty table (no rows).
  - [x] Test with no headers defined (should default).
- [x] Test that an empty row returns empty data (is this good
behavior?)


Example output of the block:
```json
{
  "advanced": false,
  "column_headers": [
    "Col 1",
    "Col 2",
    "Col 3"
  ],
  "name": "table_input",
  "value": [
    {
      "Col 1": "row 1",
      "Col 2": "row 1",
      "Col 3": "row 1"
    },
    {
      "Col 1": "val 1",
      "Col 2": "val 2",
      "Col 3": "val 3"
    }
  ]
}
```
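
As a rough illustration of the headers-to-records mapping this block performs, here is a minimal sketch; `rows_to_records` and the default-header scheme are assumptions for illustration, not the block's actual code:

```python
def rows_to_records(headers: list[str], rows: list[list[str]]) -> list[dict[str, str]]:
    """Turn raw table rows into a list of dicts keyed by column header."""
    # Default headers when none are defined, mirroring the
    # "no headers defined (should default)" test case above.
    if not headers and rows:
        headers = [f"Col {i + 1}" for i in range(len(rows[0]))]
    return [dict(zip(headers, row)) for row in rows]

# Reproduces the shape of the block output shown above.
records = rows_to_records(
    ["Col 1", "Col 2", "Col 3"],
    [["row 1", "row 1", "row 1"], ["val 1", "val 2", "val 3"]],
)
```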

---
<a
href="https://cursor.com/background-agent?bcId=bc-b8d31867-1034-4374-852c-b92ca69cc399">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg">
<img alt="Open in Cursor" src="https://cursor.com/open-in-cursor.svg">
  </picture>
</a>
<a
href="https://cursor.com/agents?id=bc-b8d31867-1034-4374-852c-b92ca69cc399">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg">
    <img alt="Open in Web" src="https://cursor.com/open-in-web.svg">
  </picture>
</a>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-30 19:41:03 +00:00
Bently
477c261488 feat(blocks): Add claude-sonnet-4.5 (#11023)
## Summary
Adds the claude-sonnet-4.5 model to the platform and sets its price to 9.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] test the new claude-sonnet-4.5 model on the platform to make sure
it works
2025-09-30 19:30:58 +00:00
dependabot[bot]
8ac2228e1e chore(frontend/deps): Upgrade @sentry/nextjs from 9.42.0 to 10.8.0 (#10802)
Bumps [@sentry/nextjs](https://github.com/getsentry/sentry-javascript)
from 9.42.0 to 10.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@​elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.59 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>22.2 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>38.94 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>76.4 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>66.43 KB</td>
</tr>
</tbody>
</table>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@​elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>10.7.0</h2>
<h3>Important Changes</h3>
<ul>
<li><strong>feat(cloudflare): Add
<code>instrumentPrototypeMethods</code> option to instrument RPC methods
for DurableObjects (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17424">#17424</a>)</strong></li>
</ul>
<p>By default, <code>Sentry.instrumentDurableObjectWithSentry</code>
will not wrap any RPC methods on the prototype. To enable wrapping for
RPC methods, set <code>instrumentPrototypeMethods</code> to
<code>true</code> or, if performance is a concern, a list of only the
methods you want to instrument:</p>
<pre lang="js"><code>&lt;/tr&gt;&lt;/table&gt; 
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd8458e659"><code>bd8458e</code></a>
release: 10.8.0</li>
<li><a
href="dbdddc896f"><code>dbdddc8</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17481">#17481</a>
from getsentry/prepare-release/10.8.0</li>
<li><a
href="f5d4bd616e"><code>f5d4bd6</code></a>
meta(changelog): Update changelog for 10.8.0</li>
<li><a
href="dfdc3b0ab9"><code>dfdc3b0</code></a>
test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17470">#17470</a>)</li>
<li><a
href="895b38590c"><code>895b385</code></a>
fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17438">#17438</a>)</li>
<li><a
href="e6e20d847c"><code>e6e20d8</code></a>
feat(sveltekit): Add Compatibility for builtin SvelteKit Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17423">#17423</a>)</li>
<li><a
href="7e24422327"><code>7e24422</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17472">#17472</a>
from getsentry/master</li>
<li><a
href="27e97b0cec"><code>27e97b0</code></a>
Merge branch 'release/10.7.0'</li>
<li><a
href="b7e4816824"><code>b7e4816</code></a>
release: 10.7.0</li>
<li><a
href="0bc8417d50"><code>0bc8417</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17471">#17471</a>
from getsentry/prepare-release/10.7.0</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.42.0...10.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@sentry/nextjs&package-manager=npm_and_yarn&previous-version=9.42.0&new-version=10.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades `@sentry/nextjs` to 10.15.0, updating numerous related
`@sentry/*`, OpenTelemetry (v2), and build/dev dependencies via the
lockfile.
> 
> - **Dependencies (frontend)**:
>   - Upgrade `@sentry/nextjs` from `9.42.0` to `10.15.0`.
>   - Cascading updates in `pnpm-lock.yaml`:
> - `@sentry/*` packages (`browser`, `core`, `node`, `opentelemetry`,
`react`, `vercel-edge`, `webpack-plugin`, `bundler-plugin-core`, `cli`,
etc.).
> - OpenTelemetry stack to newer major versions
(`@opentelemetry/core`/`resources`/`sdk-trace-base` 2.x; multiple
`instrumentation-*` packages).
> - Build tooling: `rollup` 4.52.x and platform binaries;
`@rollup/plugin-*`.
> - Misc dev typings and utilities (e.g., `@types/mysql`, `@types/pg`,
`debug`, `@prisma/instrumentation`).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
5b4b37e551. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-30 16:42:05 +00:00
Zamil Majdy
91dd9364bb fix(backend): implement retry mechanism for SmartDecisionMaker tool call validation (#11015)
<!-- Clearly explain the need for these changes: -->

This PR fixes a critical production issue where SmartDecisionMakerBlock
was silently accepting tool calls with typo'd parameter names (e.g.,
'maximum_keyword_difficulty' instead of 'max_keyword_difficulty'),
causing downstream blocks to receive null values and execution failures.

The solution implements comprehensive parameter validation with
automatic retry when the LLM provides malformed tool calls, giving the
LLM specific feedback to correct the errors.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

**Core Validation & Retry Logic
(`backend/blocks/smart_decision_maker.py`)**
- Add tool call parameter validation against function schema
- Implement retry mechanism using existing `create_retry_decorator` from
`backend.util.retry`
- Validate provided parameters against expected schema properties and
required fields
- Generate specific error messages for unknown parameters (typos) and
missing required parameters
- Add error feedback to conversation history for LLM learning on retry
attempts
- Use `input_data.retry` field to configure number of retry attempts
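
A minimal sketch of the validate-then-retry loop described above, assuming validation raises `ValueError` and its message is appended to the conversation as feedback (function names here are illustrative, not the block's actual API):

```python
async def call_llm_with_validation(llm_call, messages, validate, max_retries: int):
    """Retry the LLM call, feeding validation errors back as conversation context."""
    conversation = list(messages)
    last_error = None
    for _attempt in range(max_retries + 1):
        response = await llm_call(conversation)
        try:
            validate(response)  # Raises ValueError on typo'd or missing parameters
            return response
        except ValueError as e:
            last_error = e
            # Give the model specific feedback so it can correct the tool call.
            conversation.append(
                {"role": "user", "content": f"Invalid tool call: {e}. Please retry."}
            )
    raise last_error
```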

**Comprehensive Test Coverage
(`backend/blocks/test/test_smart_decision_maker.py`)**
- Add `test_smart_decision_maker_parameter_validation` with 4
comprehensive test scenarios:
1. Tool call with typo'd parameter (should retry and eventually fail
with clear error)
2. Tool call missing required parameter (should fail immediately with
clear error)
  3. Valid tool call with optional parameter missing (should succeed)
  4. Valid tool call with all parameters provided (should succeed)
- Verify retry mechanism works correctly and respects retry count
- Mock LLM responses for controlled testing of validation logic

**Load Tests Documentation Update (`load-tests/README.md`)**
- Update documentation to reflect current orchestrator-based
architecture
- Remove references to deprecated `run-tests.js` and
`comprehensive-orchestrator.js`
- Streamline documentation to focus on working
`orchestrator/orchestrator.js`
- Update NPM scripts and command examples for current workflow
- Clean up outdated file references to match actual infrastructure

**Production Impact**
- **Prevents silent failures**: Tool call parameter typos now cause
retries instead of null downstream values
- **Maintains compatibility**: No breaking changes to existing
SmartDecisionMaker functionality
- **Improves reliability**: LLM receives feedback to correct parameter
errors
- **Configurable retries**: Uses existing `retry` field for user control
- **Accurate documentation**: Load-tests docs now match actual working
infrastructure

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run existing SmartDecisionMaker tests to ensure no regressions:
`poetry run pytest backend/blocks/test/test_smart_decision_maker.py
-xvs` (all 4 tests passed)
- [x] Run new parameter validation test specifically: `poetry run pytest
backend/blocks/test/test_smart_decision_maker.py::test_smart_decision_maker_parameter_validation
-xvs` (passed with retry behavior confirmed)
- [x] Verify retry mechanism works by checking log output for retry
attempts (confirmed in test logs)
- [x] Test tool call validation with different scenarios: typos, missing
params, valid calls (all scenarios covered and working)
- [x] Run code formatting and linting: `poetry run format` (all
formatters passed)
- [x] Verify no breaking changes to existing SmartDecisionMaker
functionality (all existing tests pass)
- [x] Verify load-tests documentation accuracy (README now matches
actual orchestrator infrastructure)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes were needed as this uses existing
retry infrastructure and block schema validation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-30 16:18:05 +00:00
Zamil Majdy
f314fbf14f fix(backend): resolve two critical long-running agent execution failures (#11011)
## Summary

Fix two production issues causing agent execution failures that occurred
this morning:

1. **AsyncRedisLock Release Error** (ExecutionID:
08b2c251-ee27-45de-b88d-1792823ca3ee)
   - Error: "Cannot release a lock that's no longer owned" 
- Root cause: Race condition where lock expires during long database
operations
   - Location: backend/executor/manager.py synchronized context manager

2. **Tool Call Parameter Validation** (ExecutionID:
766fd9a0-5f22-4a77-96e8-14c9d02f3292)
- Issue: LLM used typo'd parameter 'maximum_keyword_difficulty' instead
of 'max_keyword_difficulty'
- SmartDecisionMakerBlock silently accepted typo, setting correct
parameter to null
- Result: Downstream blocks received null values causing execution
failures

## Changes Made

### AsyncRedisLock Error Handling
- Add try-catch blocks around AsyncRedisLock.release() calls in
ExecutionManager and OAuth refresh
- Prevent crashes when locks expire between ownership check and release
- Log warnings instead of crashing execution

### Tool Call Parameter Validation  
- **Reject unknown parameters**: Raise ValueError for typo'd parameter
names with detailed error messages
- **Allow optional parameters**: Only validate missing REQUIRED
parameters
- **Safe parameter access**: Use .get() to handle optional parameters
with defaults
- **Clean code**: Extract parameters object once to eliminate
duplication

## Technical Implementation

**Lock Release Protection:**
```python
if await lock.locked() and await lock.owned():
    try:
        await lock.release()
    except Exception as e:
        logger.warning(f"Failed to release lock for key {key}: {e}")
```

**Parameter Validation Logic:**
```python
# Get parameters schema from tool definition
if tool_def and "function" in tool_def and "parameters" in tool_def["function"]:
    parameters = tool_def["function"]["parameters"]
    expected_args = set(parameters.get("properties", {}))
    required_params = set(parameters.get("required", []))

    # Detect parameter typos and missing required params
    unexpected_args = provided_args - expected_args
    missing_required_args = required_params - provided_args

    if unexpected_args or missing_required_args:
        error_msg = (
            f"Unexpected parameters: {sorted(unexpected_args)}; "
            f"missing required parameters: {sorted(missing_required_args)}"
        )
        raise ValueError(error_msg)  # Detailed error explaining the problem
```

## Testing

- [x] All existing tests pass
- [x] Lock error handling prevents execution crashes  
- [x] Tool validation catches typos while allowing optional parameters
- [x] Maintains backward compatibility with existing workflows

## Impact

- No more "Cannot release a lock" crashes during long database
operations
- Tool calls with typo'd parameters are rejected with clear error
messages
- Optional parameters work correctly with default values
- Production stability improved with graceful error handling

## Files Modified

- `backend/executor/manager.py` - AsyncRedisLock error handling in
synchronized context
- `backend/integrations/creds_manager.py` - OAuth refresh lock error
handling
- `backend/blocks/smart_decision_maker.py` - Tool call parameter
validation with typo detection

Fixes two critical production failures that were causing 2/5 agent runs
to fail this morning.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 15:34:20 +00:00
Zamil Majdy
a97ff641c3 feat(backend): optimize FastAPI endpoints performance and alert system (#11000)
## Summary

Comprehensive performance optimization fixing event loop binding issues
and addressing all PR feedback.

### Original Performance Issues Fixed

**Event Loop Binding Problems:**
- JWT authentication dependencies were synchronous, causing thread pool
bottlenecks under high concurrency
- FastAPI's default thread pool (40 threads) was insufficient for
high-load scenarios
- Backend services lacked proper event loop configuration

**Security & Performance Improvements:**
- Security middleware converted from BaseHTTPMiddleware to pure ASGI for
better performance
- Added blocks endpoint to cacheable paths for improved response times
- Cross-platform uvloop detection with Windows compatibility

### Key Changes Made

#### 1. JWT Authentication Async Conversion
- **Files**: `autogpt_libs/auth/dependencies.py`,
`autogpt_libs/auth/jwt_utils.py`
- **Change**: Convert all JWT functions to async (`requires_user`,
`requires_admin_user`, `get_user_id`, `get_jwt_payload`)
- **Impact**: Eliminates thread pool blocking, improves concurrency
handling
- **Tests**: All 25+ authentication tests updated to async patterns
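
For context, the sync-to-async distinction matters because FastAPI runs plain `def` dependencies in its thread pool while `async def` dependencies run directly on the event loop. A minimal sketch of the pattern, with `verify_jwt` as a hypothetical stand-in for the real verification logic:

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

async def verify_jwt(token: str) -> dict:
    # Stand-in for real verification (signature check, expiry, claims).
    if not token:
        raise HTTPException(status_code=401, detail="Invalid token")
    return {"sub": "user-id"}

async def requires_user(authorization: str | None = Header(None)) -> dict:
    """Async dependency: awaited on the event loop, not in the thread pool."""
    if not authorization or not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Not authenticated")
    return await verify_jwt(authorization.removeprefix("Bearer "))

@app.get("/me")
async def me(user: dict = Depends(requires_user)):
    return user
```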

#### 2. FastAPI Thread Pool Optimization  
- **File**: `backend/server/rest_api.py:82-93`
- **Change**: Configure thread pool size via
`config.fastapi_thread_pool_size`
- **Default**: Increased from 40 to higher limit for sync operations
- **Impact**: Better handling of remaining sync dependencies

#### 3. Performance-Optimized Security Middleware
- **File**: `backend/server/middleware/security.py`
- **Change**: Pure ASGI implementation replacing BaseHTTPMiddleware
- **Headers**: HTTP spec compliant capitalization
(X-Content-Type-Options, X-Frame-Options, etc.)
- **Caching**: Added `/api/blocks` and `/api/v1/blocks` to cacheable
paths
- **Impact**: Reduced middleware overhead, improved header compliance
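
As a rough sketch of the BaseHTTPMiddleware-to-ASGI move, a pure ASGI middleware injects headers by wrapping `send` instead of re-materializing request/response objects (header values here are illustrative, not the actual middleware):

```python
class SecurityHeadersMiddleware:
    """Pure ASGI middleware: passes messages through with no re-wrapping."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        async def send_with_headers(message):
            if message["type"] == "http.response.start":
                headers = message.setdefault("headers", [])
                headers.append((b"X-Content-Type-Options", b"nosniff"))
                headers.append((b"X-Frame-Options", b"DENY"))
            await send(message)

        await self.app(scope, receive, send_with_headers)
```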

#### 4. Cross-Platform Event Loop Configuration
- **File**: `backend/server/rest_api.py:311-312`
- **Change**: Platform-aware uvloop detection: `'uvloop' if
platform.system() != 'Windows' else 'auto'`
- **Impact**: Windows compatibility while maintaining Unix performance
benefits
- **Verified**: 'auto' is valid uvicorn default parameter

#### 5. Enhanced Caching Infrastructure
- **File**: `autogpt_libs/utils/cache.py:118-132`
- **Change**: Per-event-loop asyncio.Lock instances prevent cross-loop
deadlocks
- **Impact**: Thread-safe caching across multiple event loops
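
The per-event-loop lock idea matters because an `asyncio.Lock` created on one loop cannot safely be awaited from another. A minimal sketch of the pattern, with names assumed for illustration:

```python
import asyncio
from weakref import WeakKeyDictionary

_locks_by_loop: "WeakKeyDictionary[asyncio.AbstractEventLoop, asyncio.Lock]" = (
    WeakKeyDictionary()
)

def get_cache_lock() -> asyncio.Lock:
    """Return a lock bound to the currently running event loop."""
    loop = asyncio.get_running_loop()
    lock = _locks_by_loop.get(loop)
    if lock is None:
        lock = asyncio.Lock()
        _locks_by_loop[loop] = lock
    return lock
```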

#### 6. Database Query Limits & Performance
- **Files**: Multiple data layer files
- **Change**: Added configurable limits to prevent unbounded queries
- **Constants**: `MAX_GRAPH_VERSIONS_FETCH=50`,
`MAX_USER_API_KEYS_FETCH=500`, etc.
- **Impact**: Consistent performance regardless of data volume

#### 7. OpenAPI Documentation Improvements
- **File**: `backend/server/routers/v1.py:68-85`
- **Change**: Added proper response model and schema for blocks endpoint
- **Impact**: Better API documentation and type safety

#### 8. Error Handling & Retry Logic Fixes
- **File**: `backend/util/retry.py:63`
- **Change**: Accurate retry threshold comments referencing
EXCESSIVE_RETRY_THRESHOLD
- **Impact**: Clear documentation for debugging retry scenarios

### ntindle Feedback Addressed

- **HTTP Header Capitalization**: All headers now use proper HTTP spec
capitalization
- **Windows uvloop Compatibility**: Clean platform detection with inline
conditional
- **OpenAPI Response Model**: Blocks endpoint properly documented in
schema
- **Retry Comment Accuracy**: References actual threshold constants
instead of hardcoded numbers
- **Code Cleanliness**: Inline conditionals preferred over verbose if
statements

### Performance Testing Results

**Before Optimization:**
- High latency under concurrent load
- Thread pool exhaustion at ~40 concurrent requests
- Event loop binding issues causing timeouts

**After Optimization:**
- Improved concurrency handling with async JWT pipeline
- Configurable thread pool scaling
- Cross-platform event loop optimization
- Reduced middleware overhead

### Backward Compatibility

- **All existing functionality preserved**
- **No breaking API changes**
- **Enhanced test coverage with async patterns**
- **Windows and Unix compatibility maintained**

### Files Modified

**Core Authentication & Performance:**
- `autogpt_libs/auth/dependencies.py` - Async JWT dependencies
- `autogpt_libs/auth/jwt_utils.py` - Async JWT utilities  
- `backend/server/rest_api.py` - Thread pool config + uvloop detection
- `backend/server/middleware/security.py` - ASGI security middleware

**Database & Limits:**
- `backend/data/includes.py` - Performance constants and configurable
includes
- `backend/data/api_key.py`, `backend/data/credit.py`,
`backend/data/graph.py`, `backend/data/integrations.py` - Query limits

**Caching & Infrastructure:**
- `autogpt_libs/utils/cache.py` - Per-event-loop lock safety
- `backend/server/routers/v1.py` - OpenAPI improvements
- `backend/util/retry.py` - Comment accuracy

**Testing:**
- `autogpt_libs/auth/dependencies_test.py` - 25+ async test conversions
- `autogpt_libs/auth/jwt_utils_test.py` - Async JWT test patterns

Ready for review and production deployment. 🚀

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 05:32:48 +00:00
Zamil Majdy
114f604d7b Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-09-27 18:43:26 +07:00
Zamil Majdy
3abea1ed96 fix(backend): prevent duplicate graph executions across multiple executor pods (#11008)
## Problem
Multiple executor pods could simultaneously execute the same graph,
leading to:
- Duplicate executions and wasted resources
- Inconsistent execution states and results
- Race conditions in graph execution management
- Inefficient resource utilization in cluster environments

## Solution
Implement distributed locking using ClusterLock to ensure only one
executor pod can process a specific graph execution at a time.

## Key Changes

### Core Fix: Distributed Execution Coordination
- **ClusterLock implementation**: Redis-based distributed locking
prevents duplicate executions
- **Atomic lock acquisition**: Only one executor can hold the lock for a
specific graph execution
- **Automatic lock expiry**: Prevents deadlocks if executor pods crash
or become unresponsive
- **Graceful degradation**: System continues operating even if Redis
becomes temporarily unavailable

### Technical Implementation
- Move ClusterLock to `backend/executor/` alongside ExecutionManager
(its primary consumer)
- Comprehensive integration tests (27 test scenarios) ensure reliability
under all conditions
- Redis client compatibility for different deployment configurations
- Rate-limited lock refresh to minimize Redis load

### Reliability Improvements
- **Context manager support**: Automatic lock cleanup prevents resource
leaks
- **Ownership verification**: Locks can only be refreshed/released by
the owner
- **Concurrency testing**: Thread-safe operations verified under high
contention
- **Error handling**: Robust failure scenarios including network
partitions
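
A minimal sketch of the locking pattern described above, using atomic `SET NX EX` for acquisition and an ownership check on release (a simplified stand-in, not the actual `ClusterLock` implementation):

```python
import uuid

class SimpleClusterLock:
    """One pod per graph execution: hold a Redis key for the execution ID."""

    def __init__(self, redis, key: str, ttl_seconds: int = 60):
        self.redis, self.key, self.ttl = redis, key, ttl_seconds
        self.token = str(uuid.uuid4())  # Identifies this pod as the owner

    async def acquire(self) -> bool:
        # SET NX EX is atomic: exactly one pod can create the key, and the
        # TTL releases it automatically if the owning pod crashes.
        return bool(await self.redis.set(self.key, self.token, nx=True, ex=self.ttl))

    async def release(self) -> None:
        # Ownership check before delete; a production lock would make this
        # check-and-delete atomic with a Lua script.
        if await self.redis.get(self.key) == self.token.encode():
            await self.redis.delete(self.key)

    async def __aenter__(self):
        if not await self.acquire():
            raise RuntimeError(f"{self.key} is already claimed by another pod")
        return self

    async def __aexit__(self, *exc):
        await self.release()
```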

## Test Coverage
- Concurrent executor coordination (prevents duplicate executions)
- Lock expiry and refresh mechanisms (prevents deadlocks)
- Redis connection failures (graceful degradation)
- Thread safety under high load (production scenarios)
- Long-running executions with periodic refresh

## Impact
- **No more duplicate executions**: Eliminates wasted compute resources
and inconsistent results
- **Improved reliability**: Robust distributed coordination across
executor pods
- **Better resource utilization**: Only one pod processes each execution
- **Scalable architecture**: Supports multiple executor pods without
conflicts

## Validation
- All integration tests pass
- Existing ExecutionManager functionality preserved
- No breaking changes to APIs
- Production-ready distributed locking

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-27 11:42:40 +00:00
Abhimanyu Yadav
da6e1ad26d refactor(frontend): enhance builder UI for better performance (#10922)
### Changes 🏗️

This PR introduces a new high-performance builder interface for the
AutoGPT platform, implementing a React Flow-based visual editor with
optimized state management and rendering.

#### Key Changes:

1. **New Flow Editor Implementation**
   - Built on React Flow for efficient graph rendering and interaction
- Implements a node-based visual workflow builder with custom nodes and
edges
- Dynamic form generation using React JSON Schema Form (RJSF) for block
inputs
   - Intelligent connection handling with visual feedback

2. **State Management Optimization**  
   - Added Zustand for lightweight, performant state management
   - Separated node and edge stores for better data isolation
   - Reduced unnecessary re-renders through granular state updates

3. **Dual Builder View (Temporary)**
   - Added toggle between old and new builder implementations
   - Allows A/B testing and gradual migration
   - Feature flagged for controlled rollout

4. **Enhanced UI Components**
- Custom form widgets for various input types (date, time, file, etc.)
   - Array and object editors with improved UX
   - Connection handles with visual state indicators
   - Advanced mode toggle for complex configurations

5. **Architecture Improvements**
   - Modular component structure for better code organization
   - Comprehensive documentation for the new system
   - Type-safe implementation with TypeScript

#### Dependencies Added:
- `zustand` (v5.0.2) - State management
- `@rjsf/core` (v5.22.8) - JSON Schema Form core
- `@rjsf/utils` (v5.22.8) - RJSF utilities  
- `@rjsf/validator-ajv8` (v5.22.8) - Schema validation

### Performance Improvements 🚀

- **Reduced Re-renders**: Zustand's shallow comparison and selective
subscriptions minimize unnecessary component updates
- **Optimized Graph Rendering**: React Flow provides efficient
canvas-based rendering for large workflows
- **Lazy Loading**: Components are loaded on-demand reducing initial
bundle size
- **Memoized Computations**: Heavy calculations are cached to avoid
redundant processing

### Test Plan 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
#### Test Checklist:
- [x] Create a new agent from scratch with at least 5 blocks
- [x] Connect blocks and verify connections render correctly
- [x] Switch between old and new builder views 
- [x] Test all form input types (text, number, boolean, array, object)
- [x] Verify data persistence when switching views
- [x] Test advanced mode toggle functionality
- [x] Performance test with 50+ blocks to verify smooth interaction

### Migration Strategy

The implementation includes a temporary toggle to switch between the old
and new builder. This allows for:
- Gradual user migration
- A/B testing to measure performance improvements
- Fallback option if issues are discovered
- Collecting user feedback before full rollout

### Documentation

Comprehensive documentation has been added:
- `/components/FlowEditor/docs/README.md` - Architecture overview and
store management
- `/components/FlowEditor/docs/FORM_CREATOR.md` - Detailed form system
documentation

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-26 10:42:05 +00:00
Swifty
634fffb967 fix(blocks): Handle NoneType in DataForSEO Blocks and Add missing Err (#11004)
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.

### Changes 🏗️

1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
   - Prevents TypeError when API returns None for items field
   - Ensures robust handling of unexpected API responses

2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
   - Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully
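
Both fixes follow the same small pattern; a hedged sketch of the shape of a block's `run` generator (`fetch` and the pin names are illustrative assumptions):

```python
async def run_with_error_pin(fetch, input_data):
    """Async generator mirroring a block's `run`: yields (pin, value) pairs."""
    try:
        response = await fetch(input_data)
        # Null check: the API can return None for "items"; always iterate a list.
        for item in response.get("items") or []:
            yield "keyword", item
    except Exception as e:
        # Surface failures on the error output pin instead of crashing the run.
        yield "error", str(e)
```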

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
  - [x] Tested both Related Keywords and Keyword Suggestions blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---

Fixes #10990
Fixes #10981

Generated with [Claude Code](https://claude.ai/code)


Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2025-09-26 11:23:14 +02:00
Toran Bruce Richards
f3ec426c82 fix(blocks): Handle NoneType in DataForSEO Blocks and Add missing Error pins (#10995)
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.

### Changes 🏗️

1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
   - Prevents TypeError when API returns None for items field
   - Ensures robust handling of unexpected API responses

2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
   - Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
  - [x] Tested both Related Keywords and Keyword Suggestions blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---

Fixes #10990
Fixes #10981

Generated with [Claude Code](https://claude.ai/code)
2025-09-25 20:22:10 +00:00
Reinier van der Leer
0b267f573e feat(blocks): Improve JSON generation+parsing in AI Structured Response block (#10960)
The AI Structured Response Generator block currently doesn't support
responses that aren't pure JSON. This prohibits multi-step prompting
because reasoning content is not allowed in the response, which in turn
limits performance.

### Changes 🏗️

- Adjust prompt to enclose JSON in pre-defined tags so we can extract it
from a response that isn't pure JSON
- Adjust mechanism to extract and parse JSON
- Add `force_json_output` input (advanced, default `False`)
- Update incorrect `max_output_tokens` values for Claude 4 and 3.7 to
prevent responses from being cut off due to `max_tokens`
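
A minimal sketch of the tag-based extraction idea, assuming `<json>...</json>` as the pre-defined tags (the actual tag names are set by the block's prompt):

```python
import json
import re

JSON_TAGS = re.compile(r"<json>\s*(.*?)\s*</json>", re.DOTALL)

def extract_json(response: str) -> dict:
    """Pull the tagged JSON out of a response that may also contain reasoning text."""
    match = JSON_TAGS.search(response)
    payload = match.group(1) if match else response  # fall back to treating it as pure JSON
    return json.loads(payload)
```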

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] LLMs correctly follow response generation instructions
- [x] LLMs follow system response format instructions even if user
prompt contains conflicting instructions
  - [x] JSON is extracted from response successfully
  - [x] `force_json_output` works (at least for models that support it)

Tested with Claude 4 Sonnet, various GPT models, and Llama 3.3 70B.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-25 17:42:25 +00:00
Reinier van der Leer
7bd571d9ce fix(blocks): Default disable HTML escaping in all blocks with templating features (#10955)
- Resolves #10954

Unnecessary escaping distorts content and so should be disabled wherever
the output isn't used in HTML.

### Changes 🏗️

- Disable HTML escaping on prompt value insertion in AI blocks
- Make HTML escaping optional in text formatting and output blocks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x]
`SandboxedEnvironment(autoescape=False).from_string(template_str).render(values)`
doesn't escape characters with HTML entities
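
For reference, the behavior under test, using Jinja2's sandboxed environment with autoescape disabled:

```python
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment(autoescape=False)  # no HTML entity escaping
rendered = env.from_string("{{ a }} & {{ b }}").render(a="<tag>", b='"quoted"')
assert rendered == '<tag> & "quoted"'  # characters pass through verbatim
```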
2025-09-25 12:04:38 +00:00
Zamil Majdy
7a331651ba feat(backend): enhance database indexes for AgentGraph and AgentGraphExecution performance (#10985)
## Summary

Enhances database performance by improving indexes on `AgentGraph` and
`AgentGraphExecution` tables for better query efficiency.

### Changes 🏗️

- **Database Schema**: Updated Prisma schema to enhance database indexes
- Modified `AgentGraph` index from `[userId, isActive]` to `[userId,
isActive, id, version]` for better compound query performance
- Enhanced `AgentGraphExecution` index from `[userId]` to `[userId,
isDeleted, createdAt]` to support filtered queries with sorting
- **Migration**: Auto-generated Prisma migration to implement the index
changes
- Drops existing indexes: `AgentGraph_userId_isActive_idx` and
`AgentGraphExecution_userId_idx`
- Creates new compound indexes:
`AgentGraph_userId_isActive_id_version_idx` and
`AgentGraphExecution_userId_isDeleted_createdAt_idx`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified migration runs successfully
  - [x] Confirmed database queries continue to work with new indexes
  - [x] Tested that existing functionality remains unaffected

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 09:05:53 +00:00
Zamil Majdy
5bc69adc33 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-09-25 16:09:01 +07:00
Krzysztof Czerwinski
f4bcc8494f hotfix: Fix Agent node missing inputs and outputs (#10987)
Restore `include=AGENT_GRAPH_INCLUDE` that is needed to build schema
from the nodes.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I/O is back on the Agent node
2025-09-25 08:56:32 +00:00
Zamil Majdy
4c000086e6 feat(backend): implement clean k6 load testing infrastructure (#10978)
## Summary

Implement comprehensive k6 load testing infrastructure for the AutoGPT
Platform with clean file organization, unified test runner, and cloud
integration.

## Key Features

### 🗂️ Clean File Organization
- **tests/basic/**: Simple validation tests (connectivity, single
endpoints)
- **tests/api/**: Core functionality tests (API endpoints, graph
execution)
- **tests/marketplace/**: User-facing feature tests (public/library
access)
- **tests/comprehensive/**: End-to-end scenario tests (complete user
journeys)
- **orchestrator/**: Advanced test orchestration for full suites

### 🚀 Unified Test Runner
- **Single entry point**: `run-tests.js` for both local and cloud
execution
- **7 available tests**: From basic connectivity to comprehensive
platform journeys
- **Flexible execution**: Run individual tests, comma-separated lists,
or all tests
- **Auto-configuration**: Different VU/duration settings for local vs
cloud execution

### 🔐 Advanced Authentication
- **Pre-authenticated tokens**: 24-hour JWT tokens eliminate Supabase
rate limiting
- **Configurable generation**: Default 10 tokens, scalable to 150+ for
high concurrency
- **Graceful handling**: Proper auth failure detection and recovery
- **ES module compatible**: Modern JavaScript with full import/export
support

### ☁️ k6 Cloud Integration
- **Cloud execution**: Tests run on k6 cloud infrastructure for
consistent results
- **Real-time monitoring**: Live dashboards with performance metrics
- **URL tracking**: Automatic test result URL capture and storage
- **Sequential orchestration**: Proper delays between tests for resource
management

## Test Coverage

### Performance Validated
- **Core API**: 100 VUs successfully testing `/api/credits`,
`/api/graphs`, `/api/blocks`, `/api/executions`
- **Graph Execution**: 80 VUs for complete workflow pipeline testing
- **Marketplace**: 150 VUs for public browsing, 100 VUs for
authenticated library operations
- **Authentication**: 150+ concurrent users with pre-authenticated token
scaling

### User Journey Simulation
- **Dashboard workflows**: Credits checking, graph management, execution
monitoring
- **Marketplace browsing**: Public search, agent discovery, category
filtering
- **Library operations**: Agent adding, favoriting, forking, detailed
views
- **Complete workflows**: End-to-end platform usage with realistic user
behavior

## Technical Implementation

### ES Module Compatibility
- Full ES module support with modern JavaScript imports/exports
- Proper module execution patterns for Node.js compatibility
- Clean separation between CommonJS legacy and modern ES modules

### Error Handling & Monitoring  
- **Separate metrics**: HTTP status, authentication, JSON validation,
overall success
- **Graceful degradation**: Auth failures don't crash VUs, proper error
tracking
- **Performance thresholds**: Configurable P95/P99 latency and error
rate limits
- **Custom counters**: Track operation types, success rates, user
journey completion

### Infrastructure Benefits
- **Rate limit protection**: Pre-auth tokens prevent Supabase auth
bottlenecks
- **Scalable testing**: Support for 150+ concurrent users with proper
token management
- **Cloud consistency**: Tests run on dedicated k6 cloud servers for
reliable results
- **Development workflow**: Local execution for debugging, cloud for
performance validation

## Usage

### Quick Start
```bash
# Setup and verification
export SUPABASE_SERVICE_KEY="your-service-key"
node generate-tokens.js
node run-tests.js verify

# Local testing (development)
node run-tests.js run core-api-test DEV

# Cloud testing (performance)
node run-tests.js cloud all DEV
```

### NPM Scripts
```bash
npm run verify    # Quick setup check
npm test         # All tests locally  
npm run cloud    # All tests in k6 cloud
```

## Validation Results

- **Authentication**: 100% success rate with fresh 24-hour tokens
- **File Structure**: All imports and references verified correct
- **Test Execution**: All 7 tests execute successfully with proper metrics
- **Cloud Integration**: k6 cloud execution working with proper credentials
- **Documentation**: Complete README with usage examples and troubleshooting

## Files Changed

### Core Infrastructure
- `run-tests.js`: Unified test runner supporting local/cloud execution
- `generate-tokens.js`: ES module compatible token generation with
24-hour expiry
- `README.md`: Comprehensive documentation with updated file references

### Organized Test Structure  
- `tests/basic/connectivity-test.js`: Basic connectivity validation
- `tests/basic/single-endpoint-test.js`: Individual API endpoint testing
- `tests/api/core-api-test.js`: Core authenticated API endpoints
- `tests/api/graph-execution-test.js`: Graph workflow pipeline testing  
- `tests/marketplace/public-access-test.js`: Public marketplace browsing
- `tests/marketplace/library-access-test.js`: Authenticated
marketplace/library operations
- `tests/comprehensive/platform-journey-test.js`: Complete user journey
simulation

### Configuration
- `configs/environment.js`: Environment URLs and performance settings
- `package.json`: NPM scripts and dependencies for unified workflow

This infrastructure provides a solid foundation for continuous
performance monitoring and load testing of the AutoGPT Platform.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-25 12:51:54 +07:00
Krzysztof Czerwinski
9c6cc5b29d Merge branch 'dev' 2025-09-25 13:28:17 +09:00
Toran Bruce Richards
b34973ca47 feat: Add 'depth' parameter to DataForSEO Related Keywords block (#10983)
Fixes #10982

<!-- Clearly explain the need for these changes: -->
The DataForSEO Related Keywords block was missing the `depth` parameter,
which is a critical parameter that controls the comprehensiveness of
keyword research. The depth parameter determines the number of related
keywords returned by the API, ranging from 1 keyword at depth 0 to
approximately 4680 keywords at depth 4.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Added `depth` parameter to the DataForSEO Related Keywords block as an
integer input field (range 0-4)
- Added `depth` parameter to the `related_keywords` method signature in
the API client
- Updated the API client to include the depth parameter in the request
payload when provided
- Added documentation explaining the depth parameter's effect on the
number of returned keywords
- Fixed missing parameter in function signature that was causing runtime
errors
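
A hedged sketch of what the new input could look like, following the platform's usual `SchemaField` pattern; import paths, defaults, and the description text are assumptions:

```python
from typing import Optional

from backend.data.block import BlockSchema  # assumed import path
from backend.data.model import SchemaField  # assumed import path

class Input(BlockSchema):
    depth: Optional[int] = SchemaField(
        default=None,  # omitted -> not sent to the API, preserving old behavior
        ge=0,
        le=4,
        description="Related-keywords search depth: 0 returns ~1 keyword, 4 up to ~4680.",
    )
```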

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified the depth parameter appears correctly in the block UI
with appropriate range validation (0-4)
  - [x] Confirmed the parameter is passed correctly to the API client
- [x] Tested that omitting the depth parameter doesn't break existing
functionality (defaults to None)
- [x] Verified the implementation follows the existing pattern for
optional parameters in the DataForSEO blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: No configuration changes were required for this feature addition.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-09-24 21:29:47 +00:00
Nicholas Tindle
2bc6a56877 fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary
- Fixed "Timeout context manager should be used inside a task" error
occurring intermittently in FileInput blocks when downloading files from
Google Cloud Storage
- Implemented proper async session management for GCS client to ensure
operations run within correct task context
- Added comprehensive logging to help diagnose and monitor the issue in
production

## Changes
### Core Fix
- Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh
GCS client and session for each download operation
- This ensures the aiohttp session is always created within the proper
async task context, preventing the timeout error
- The fix trades a small amount of efficiency for reliability, but only
affects download operations
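
The core fix boils down to constructing the session inside the running task. A minimal sketch with `gcloud-aio-storage` (bucket and blob names are placeholders):

```python
import aiohttp
from gcloud.aio.storage import Storage

async def retrieve_file_gcs(bucket: str, blob_name: str) -> bytes:
    # The aiohttp session (and its timeout context) is created inside the
    # current async task, which avoids "Timeout context manager should be
    # used inside a task" at the cost of a fresh client per download.
    async with aiohttp.ClientSession() as session:
        client = Storage(session=session)
        return await client.download(bucket, blob_name)
```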

### Logging Enhancements
- Added detailed logging in `store_media_file()` to track execution
context and async task state
- Enhanced `scan_content_safe()` to specifically catch and log timeout
errors with CRITICAL level
- Added context logging in virus scanner around `asyncio.create_task()`
calls
- Upgraded key debug logs to info level for visibility in production

### Code Quality
- Fixed unbound variable issue where `async_client` could be referenced
before initialization
- Replaced bare `except:` clauses with proper exception handling
- Fixed unused parameters warning in `__aexit__` method

## Testing
- The timeout error was occurring intermittently in production when
FileInput blocks processed GCS files
- With these changes, the error should be eliminated as the session is
always created in the correct context
- Comprehensive logging allows monitoring of the fix effectiveness in
production


## Context
The root cause was that `gcloud-aio-storage` was creating its internal
aiohttp session/timeout context outside of an async task context when
called by the executor. This happened intermittently depending on how
the executor scheduled block execution.

## Related Issues
- Addresses timeout errors reported in FileInput block execution
- Improves reliability of file uploads from the platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test a multiple file input agent and it works
  - [x] Test the agent that is causing the failure and it works

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 16:21:41 -05:00
Nicholas Tindle
87c773d03a fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary
- Fixed "Timeout context manager should be used inside a task" error
occurring intermittently in FileInput blocks when downloading files from
Google Cloud Storage
- Implemented proper async session management for GCS client to ensure
operations run within correct task context
- Added comprehensive logging to help diagnose and monitor the issue in
production

## Changes
### Core Fix
- Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh
GCS client and session for each download operation
- This ensures the aiohttp session is always created within the proper
async task context, preventing the timeout error
- The fix trades a small amount of efficiency for reliability, but only
affects download operations

### Logging Enhancements
- Added detailed logging in `store_media_file()` to track execution
context and async task state
- Enhanced `scan_content_safe()` to specifically catch and log timeout
errors with CRITICAL level
- Added context logging in virus scanner around `asyncio.create_task()`
calls
- Upgraded key debug logs to info level for visibility in production

### Code Quality
- Fixed unbound variable issue where `async_client` could be referenced
before initialization
- Replaced bare `except:` clauses with proper exception handling
- Fixed unused parameters warning in `__aexit__` method

## Testing
- The timeout error was occurring intermittently in production when
FileInput blocks processed GCS files
- With these changes, the error should be eliminated as the session is
always created in the correct context
- Comprehensive logging allows monitoring of the fix effectiveness in
production


## Context
The root cause was that `gcloud-aio-storage` was creating its internal
aiohttp session/timeout context outside of an async task context when
called by the executor. This happened intermittently depending on how
the executor scheduled block execution.

## Related Issues
- Addresses timeout errors reported in FileInput block execution
- Improves reliability of file uploads from the platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test a multiple file input agent and it works
  - [x] Test the agent that is causing the failure and it works

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 21:06:51 +00:00
Swifty
ebeefc96e8 feat(backend): implement caching layer for store API endpoints (Part 1) (#10975)
## Summary
This PR introduces comprehensive caching for the Store API endpoints to
improve performance and reduce database load. This is **Part 1** in a
series of PRs to add comprehensive caching across our entire API.

### Key improvements:
- Implements caching layer using the existing `@cached` decorator from
`autogpt_libs.utils.cache`
- Reduces database queries by 80-90% for frequently accessed public data
- Built-in thundering herd protection prevents database overload during
cache expiry
- Selective cache invalidation ensures data freshness when mutations
occur

## Details

### Cached endpoints with TTLs:
- **Public data (5-10 min TTL):**
  - `/agents` - Store agents list (2 min)
  - `/agents/{username}/{agent_name}` - Agent details (5 min)
  - `/graph/{store_listing_version_id}` - Agent graphs (10 min)
  - `/agents/{store_listing_version_id}` - Agent by version (10 min)
  - `/creators` - Creators list (5 min)
  - `/creator/{username}` - Creator details (5 min)

- **User-specific data (1 min TTL):**
  - `/profile` - User profiles (5 min)
  - `/myagents` - User's own agents (1 min)
  - `/submissions` - User's submissions (1 min)

### Cache invalidation strategy:
- Profile updates → clear user's profile cache
- New reviews → clear specific agent cache + agents list
- New submissions → clear agents list + user's caches
- Submission edits → clear related version caches

### Cache management endpoints:
- `GET /cache/info` - Monitor cache statistics
- `POST /cache/clear` - Clear all caches
- `POST /cache/clear/{cache_name}` - Clear specific cache
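
A simplified stand-in for the `@cached` decorator, just to illustrate the TTL plus thundering-herd mechanics; the real implementation lives in `autogpt_libs.utils.cache` and its API may differ:

```python
import asyncio
import time
from functools import wraps

def cached(ttl: float):
    store: dict = {}   # key -> (timestamp, value)
    locks: dict = {}   # key -> asyncio.Lock

    def decorator(fn):
        @wraps(fn)
        async def wrapper(*args):
            key = args
            hit = store.get(key)
            if hit and time.monotonic() - hit[0] < ttl:
                return hit[1]  # fresh cache hit
            lock = locks.setdefault(key, asyncio.Lock())
            async with lock:  # concurrent misses wait here instead of stampeding the DB
                hit = store.get(key)
                if hit and time.monotonic() - hit[0] < ttl:
                    return hit[1]  # another coroutine refreshed it while we waited
                value = await fn(*args)
                store[key] = (time.monotonic(), value)
                return value
        return wrapper
    return decorator
```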

## Changes  
<!-- REQUIRED: Bullet point summary of changes -->
- Added caching decorators to all suitable GET endpoints in store routes
- Implemented cache invalidation on data mutations (POST/PUT/DELETE)
- Added cache management endpoints for monitoring and manual clearing
- Created comprehensive test suite for cache_delete functionality
- Verified thundering herd protection works correctly

## Testing
<!-- How to test your changes -->
- Created comprehensive test suite (`test_cache_delete.py`) validating:
  - Selective cache deletion works correctly
  - Cache entries are properly invalidated on mutations
  - Other cache entries remain unaffected
  - `cache_info()` accurately reflects state
- Tested thundering herd protection with concurrent requests
- Verified all endpoints return correct data with and without cache

## Checklist
<!-- REQUIRED: Be sure to check these off before marking the PR ready
for review. -->
- [x] I have self-reviewed this PR's diff, line by line
- [x] I have updated and tested the software architecture documentation
(if applicable)
- [x] I have run the agent to verify that it still works (if applicable)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 10:01:52 +00:00
Nicholas Tindle
83fe8d5b94 fix(backend): make preset migration not crash the system (#10966)
<!-- Clearly explain the need for these changes: -->
For those who develop blocks, they may or may not exist in the code at
the same time as the database.
> Create block in one branch, test, then move to another branch the
block is not in

This migration will prevent startup in that case.

### Changes 🏗️
Adds a try except around the migration
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test that startup actually works

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-24 14:23:22 +07:00
Zamil Majdy
50689218ed feat(backend): implement comprehensive load testing performance fixes + database health improvements (#10965) 2025-09-24 14:22:57 +07:00
Aayush Shah
ddff09a8e4 feat(blocks): add NotionReadPage block (#10760)
Introduces a Notion Read Page block that fetches a page by ID via the
Notion REST API. This is a first step toward Notion integration in the
AutoGPT Platform.

Motivation: Notion was not integrated yet. I'm starting with a small
block to add capability incrementally.

### Notes
- I referred to the Todoist block implementation as a reference since
I’m a beginner.
- This is my first PR here  
- The block passed `docker compose run --rm rest_server pytest -q`
successfully

<!-- Clearly explain the need for these changes: -->

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test plan
- [x] Ran `docker compose run --rm rest_server pytest -q
backend/blocks/test/test_block.py -k notion`
- [x] Confirmed tests passed (2 passed, 652 deselected, warnings only).
- [x] Ran poetry run format to fix linters and tests

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2025-09-19 18:54:47 +00:00
Ubbe
0c363a1cea fix(frontend): force dynamic rendering on marketplace (#10957)
## Changes 🏗️

When building on Vercel:
```
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
Failed to get server auth token: Error: Dynamic server usage: Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error
    at r (.next/server/chunks/8450.js:22:7298)
    at n (.next/server/chunks/4735.js:1:37020)
    at g (.next/server/chunks/555.js:1:31925)
    at m (.next/server/chunks/555.js:1:87056)
    at h (.next/server/chunks/555.js:1:932)
    at k (.next/server/chunks/555.js:1:25195)
    at queryFn (.next/server/chunks/555.js:1:25590)
    at Object.f [as fn] (.next/server/chunks/2744.js:1:316625)
    at q (.next/server/chunks/2744.js:1:312288)
    at Object.start (.next/server/chunks/2744.js:1:312830) {
  description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
  digest: 'DYNAMIC_SERVER_USAGE'
}
```
That's because the `/marketplace` page prefetches the store agents data
on the server, and that query uses `cookies` for Auth. In theory, those
endpoints can be called without auth, but I think being logged in
affects the results.

The simplest fix for now is to tell Next.js not to render the page
statically, and instead render it on the fly with caching. According to
AI, we shouldn't see much difference performance-wise:

> Short answer: Usually no noticeable slowdown. You'll trade a small
> TTFB increase (server renders per request) for correct behavior with
> cookies. Overall interactivity stays the same since we still dehydrate
> React Query data.
> Why it's fine:
> Server already had to fetch marketplace data; doing it at request-time
> vs build-time is roughly the same cost for users.
> Hydration uses the prefetched data, avoiding extra client round-trips.
> If you want extra speed:
> If those endpoints don't need auth, we can skip reading cookies during
> server prefetch and enable ISR (e.g., revalidate=60) for partial
> caching.
> Or move the cookie-dependent parts to the client and keep the page
> static.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Page load marketplace is fine and not slow 
  - [x] No build cookies errors 


### For configuration changes:

None
2025-09-19 08:24:08 +00:00
Ubbe
e5d870a348 refactor(frontend): move old components to __legacy__ (#10953)
## Changes 🏗️

Moving non-design-system (old) components to a `components/__legacy__`
folder 📁 so it is more obvious to developers that they should not
import or use them in new features. What is now top-level in
`/components` is what is actively maintained.

Documents some existing components like `<Alert />`. More on this coming
in follow-up PRs.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and types pass on the CI
  - [x] Run app locally, click around, looks good 

### For configuration changes:

None
2025-09-18 21:37:43 +00:00
Reinier van der Leer
3f19cba28f fix(frontend/builder): Fix moved blocks disappearing on save (#10951)
- Resolves #10926
- Fixes a bug introduced in #10779

### Changes 🏗️

- Fix `.metadata.position` in graph save payload
- Make node reconciliation after graph save more robust

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Moved nodes don't disappear on graph save
2025-09-18 13:34:06 +00:00
Reinier van der Leer
a978e91271 fix(ci, backend): Update Redis image & amend config to work with it (#10952)
CI is currently broken because Bitnami has pulled all `bitnami/redis`
images.
The current official Redis image on Docker Hub is `redis`.

### Changes 🏗️

- Replace `bitnami/redis:6.2` by `redis:latest` in Backend CI workflow
file
- Make `REDIS_PASSWORD` optional in the backend settings

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI no longer broken
2025-09-18 13:02:49 +00:00
Ubbe
f283e6c514 refactor(frontend): cleanup of components folder (2/3) (#10942)
## Changes 🏗️

Following up my initial PR to tidy up the `components` folder
https://github.com/Significant-Gravitas/AutoGPT/pull/10940.

This is mostly moving files around and renaming some + documenting them
on the design system as needed. Should be pretty safe as long as types
on the CI pass.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Click around, looks ok
  - [x] Test and types pass on the CI  

### For configuration changes:

None
2025-09-18 16:21:18 +09:00
Ubbe
9fc2101e7e refactor(frontend): tidy up on components folder (#10940)
## Changes 🏗️

Re-organise the `components` folder, moving things which are not re-used
across screens or part of the design system out of it.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] It works and test/types pass CI wise 

### For configuration changes:

None
2025-09-17 12:56:49 +00:00
Bentlybro
634f826d82 Merge branch 'master' into dev 2025-09-17 11:35:29 +01:00
Ubbe
6d6bf308fc fix(frontend): marketplace page load and caching (#10934)
## Changes 🏗️

### **Server-Side:**
- **ISR Cache**: Page cached for 60 seconds, served instantly
- **Prefetch**: All API calls made on server, not client
- **Static Generation**: HTML pre-rendered with data
- **Streaming**: Loading states show immediately

### **Client-Side:**
- **No API Calls**: Data hydrated from server cache
- **Fast Hydration**: React Query uses prefetched data
- **Smart Caching**: 60s stale time prevents unnecessary requests
- **Progressive Loading**: Suspense boundaries for better UX

### **🔄 Caching Strategy:**

1. **Server**: ISR cache (60s) → API calls → Static HTML
2. **CDN**: Cached HTML served instantly
3. **Client**: Hydrated data from server → No additional API calls
4. **Background**: ISR regenerates stale pages automatically

### **🎯 Result:**
- **First Visit**: Instant HTML + hydrated data (no client API calls)
- **Subsequent Visits**: Instant cached page
- **Background Updates**: Automatic revalidation every 60s
- **Optimal Performance**: Server-side rendering + client-side caching

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Marketplace page loads are faster 

### For configuration changes:

None
2025-09-17 07:23:43 +00:00
Nicholas Tindle
dd84fb5c66 feat(platform): Add public share links for agent run results (#10938)
<!-- Clearly explain the need for these changes: -->
This PR adds the ability for users to share their agent run results
publicly via shareable links. Users can generate a public link that
allows anyone to view the outputs of a specific agent execution without
requiring authentication. This feature enables users to share their
agent results with clients, colleagues, or the community.


https://github.com/user-attachments/assets/5508f430-07d0-4cd3-87bc-301b0b005cce


### Changes 🏗️

#### Backend Changes
- **Database Schema**: Added share tracking fields to
`AgentGraphExecution` model in Prisma schema:
  - `isShared`: Boolean flag to track if execution is shared
  - `shareToken`: Unique token for the share URL
  - `sharedAt`: Timestamp when sharing was enabled

- **API Endpoints**: Added three new REST endpoints in
`/backend/backend/server/routers/v1.py`:
- `POST /graphs/{graph_id}/executions/{graph_exec_id}/share`: Enable
sharing for an execution
- `DELETE /graphs/{graph_id}/executions/{graph_exec_id}/share`: Disable
sharing
- `GET /share/{share_token}`: Retrieve shared execution data (public
endpoint)

- **Data Models**:
- Created `SharedExecutionResponse` model for public-safe execution data
- Added `ShareRequest` and `ShareResponse` Pydantic models for type-safe
API responses
  - Updated `GraphExecutionMeta` to include share status fields

- **Security**:
- All share management endpoints verify user ownership before allowing
changes
- Public endpoint only exposes OUTPUT block data, no intermediate
execution details
  - Share tokens are UUIDs for security
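
A minimal sketch of the share-token flow, with an in-memory dict standing in for the new database fields; route shapes follow the PR, everything else is illustrative:

```python
import uuid
from fastapi import FastAPI, HTTPException

app = FastAPI()
shares: dict[str, str] = {}  # share_token -> graph_exec_id (stand-in for the DB fields)

@app.post("/graphs/{graph_id}/executions/{graph_exec_id}/share")
async def enable_share(graph_id: str, graph_exec_id: str) -> dict:
    token = str(uuid.uuid4())  # share tokens are UUIDs, per the security notes above
    shares[token] = graph_exec_id
    return {"share_token": token}

@app.get("/share/{share_token}")
async def get_shared(share_token: str) -> dict:
    exec_id = shares.get(share_token)
    if exec_id is None:  # invalid or disabled links return 404
        raise HTTPException(status_code=404, detail="Share link not found")
    return {"graph_exec_id": exec_id}  # the real endpoint returns only OUTPUT block data
```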

#### Frontend Changes
- **ShareButton Component**
(`/frontend/src/components/ShareButton.tsx`):
  - Modal dialog for managing share settings
  - Copy-to-clipboard functionality for share links
  - Clear warnings about public accessibility
  - Uses Orval-generated API hooks for enable/disable operations

- **Share Page**
(`/frontend/src/app/(no-navbar)/share/[token]/page.tsx`):
  - Clean, navigation-free page for viewing shared executions
- Reuses existing `RunOutputs` component for consistent output rendering
  - Proper error handling for invalid/disabled share links
  - Loading states during data fetch

- **API Integration**:
- Fixed custom mutator to properly set Content-Type headers for POST
requests with empty bodies
  - Generated TypeScript types via Orval for type-safe API calls

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Test plan: -->
- [x] Enable sharing for an agent execution and verify share link is
generated
  - [x] Copy share link and verify it copies to clipboard
- [x] Open share link in incognito/private browser and verify outputs
are displayed
  - [x] Disable sharing and verify share link returns 404
- [x] Try to enable/disable sharing for another user's execution (should
fail with 404)
  - [x] Verify share page shows proper loading and error states
- [x] Test that only OUTPUT blocks are shown in shared view, no
intermediate data
2025-09-17 06:21:33 +00:00
Zamil Majdy
33679f3ffe feat(platform): Add instructions field to agent submissions (#10931)
## Summary

Added an optional "Instructions" field for agent submissions to help
users understand how to run agents and what to expect.

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/015c4f0b-4bdd-48df-af30-9e52ad283e8b"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/3242cee8-a4ad-4536-bc12-64b491a8ef68"
/>

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a9b63e1c-94c0-41a4-a44f-b9f98e446793"
/>


### Changes Made

**Backend:**
- Added `instructions` field to `AgentGraph` and `StoreListingVersion`
database models
- Updated `StoreSubmission`, `LibraryAgent`, and related Pydantic models
- Modified store submission API routes to handle instructions parameter
- Updated all database functions to properly save/retrieve instructions
field
- Added graceful handling for cases where database doesn't yet have the
field

**Frontend:**
- Added instructions field to agent submission flow (PublishAgentModal)
- Positioned below "Recommended Schedule" section as specified
- Added instructions display in library/run flow (RunAgentModal)  
- Positioned above credentials section with informative blue styling
- Added proper form validation with 2000 character limit
- Updated all TypeScript types and API client interfaces

### Key Features

- Optional field - fully backward compatible
- Proper positioning in both submission and run flows
- Character limit validation (2000 chars)
- User-friendly display with "How to use this agent" styling
- Only shows when instructions are provided

### Testing

- Verified Pydantic model validation works correctly
- Confirmed schema validation enforces character limits
- Tested graceful handling of missing database fields
- Code formatting and linting completed

## Test plan

- [ ] Test agent submission with instructions field
- [ ] Test agent submission without instructions (backward
compatibility)
- [ ] Verify instructions display correctly in run modal
- [ ] Test character limit validation
- [ ] Verify database migrations work properly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-17 03:55:45 +00:00
Abhimanyu Yadav
fc8c5ccbb6 feat(backend): enhance agent retrieval logic in store agent page (#10933)
This PR enhances the agent retrieval logic in the store database to
ensure accurate fetching of the latest approved agent versions. The
changes address scenarios where agents may have multiple versions with
different approval statuses.

## 🔧 Changes Made

### Enhanced Agent Retrieval Logic (`get_store_agent_details`)
- **Active Version Priority**: Added logic to prioritize fetching agents
based on the `activeVersionId` when available
- **Fallback to Latest Approved**: When no active version is set, the
system now falls back to the latest approved version (sorted by version
number descending)
- **Improved Accuracy**: Ensures users always see the most relevant
agent version based on the current store listing state

### Improved Agent Filtering (`get_my_agents`)
- **Enhanced Store Listing Filter**: Modified the filter to only include
store listings that have at least one available version
- **Nested Version Check**: Added nested filtering to check for
`isAvailable: true` in the versions, preventing empty or unavailable
listings from appearing
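
In prisma-client-py terms, the selection logic is roughly as follows; the field and status names are taken from the PR text and may not match the schema exactly:

```python
from prisma.models import StoreListingVersion

async def resolve_version(listing):
    if listing.activeVersionId:  # active version takes priority
        return await StoreListingVersion.prisma().find_unique(
            where={"id": listing.activeVersionId}
        )
    # No active version set: fall back to the latest approved version.
    return await StoreListingVersion.prisma().find_first(
        where={"storeListingId": listing.id, "submissionStatus": "APPROVED"},
        order={"version": "desc"},
    )
```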

## Testing Checklist

- [x] Test fetching agent details with an active version set
- [x] Test fetching agent details without an active version (should fall
back to latest approved)
- [x] Test `get_my_agents` returns only agents with available store
listing versions
- [x] Verify no agents with only unavailable versions appear in results
- [x] Test with agents having multiple versions with different approval
statuses
2025-09-17 02:57:39 +00:00
Reinier van der Leer
7d2ab61546 feat(platform): Disable Trigger Setup through Builder (#10418)
We want users to set up triggers through the Library rather than the
Builder.

- Resolves #10413


https://github.com/user-attachments/assets/515ed80d-6569-4e26-862f-2a663115218c

### Changes 🏗️

- Update node UI to push users to Library for trigger set-up and
management
  - Add note redirecting to Library for trigger set-up
  - Remove webhook status indicator and webhook URL section
- Add `libraryAgent: LibraryAgent` to `BuilderContext` for access inside
`CustomNode`
  - Move library agent loader from `FlowEditor` to `useAgentGraph`

- Implement `migrate_legacy_triggered_graphs` migrator function
- Remove `on_node_activate` hook (which previously handled webhook
setup)
- Propagate `created_at` from DB to `GraphModel` and
`LibraryAgentPreset` models

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing node triggers are converted to triggered presets (visible
in the Library)
    - [x] Converted triggered presets work
  - [x] Trigger node inputs are disabled and handles are hidden
- [x] Trigger node message links to the correct Library Agent when saved
2025-09-16 22:52:51 +00:00
Reinier van der Leer
c2f11dbcfa fix(blocks): Fix feedback loops in AI Structured Response Generator (#10932)
Improve the overall reliability of the AI Structured Response Generator
block from ~40% to ~100%. This block has been giving me a lot of hassle
over the past week and this improvement is an easy win.

- Resolves #10916

### Changes 🏗️

- Improve reliability of AI Structured Response Generator block
  - Fix feedback loops (total success rate ~40% -> 100%)
  - Improve system prompt (one-shot success rate ~40% -> ~76%)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] JSON decode errors are turned into a useful feedback message
  - [x] LLM effectively corrects itself based on the feedback message
2025-09-16 22:50:29 +00:00
Nicholas Tindle
f82adeb959 feat(library): Add agent favoriting functionality (#10828)
### Need 💡

This PR introduces the ability for users to "favorite" agents in the
library view, enhancing agent discoverability and organization.
Favorited agents will be visually marked with a heart icon and
prioritized in the library list, appearing at the top. This feature is
distinct from pinning specific agent runs.

### Changes 🏗️

*   **Backend:**
* Updated `LibraryAgent` model in `backend/server/v2/library/model.py`
to include the `is_favorite` field when fetching from the database.
*   **Frontend:**
* Updated `LibraryAgent` TypeScript type in
`autogpt-server-api/types.ts` to include `is_favorite`.
* Modified `LibraryAgentCard.tsx` to display a clickable heart icon,
indicating the favorite status.
* Implemented a click handler on the heart icon to toggle the
`is_favorite` status via an API call, including loading states and toast
notifications.
* Updated `useLibraryAgentList.ts` to implement client-side sorting,
ensuring favorited agents appear at the top of the list.
* Updated `openapi.json` to include `is_favorite` in the `LibraryAgent`
schema and regenerated frontend API types.
    *   Installed `@orval/core` for API generation.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify that the heart icon is displayed correctly on
`LibraryAgentCard` for both favorited (filled red) and unfavorited
(outlined gray) agents.
  - [x] Click the heart icon on an unfavorited agent:
    - [x] Confirm the icon changes to filled red.
    - [x] Verify a "Added to favorites" toast notification appears.
    - [x] Confirm the agent moves to the top of the library list.
- [x] Check that the agent card does not navigate to the agent details
page.
  - [x] Click the heart icon on a favorited agent:
    - [x] Confirm the icon changes to outlined gray.
    - [x] Verify a "Removed from favorites" toast notification appears.
- [x] Confirm the agent's position adjusts in the list (no longer at the
very top unless other sorting criteria apply).
- [x] Check that the agent card does not navigate to the agent details
page.
- [x] Test the loading state: rapidly click the heart icon and observe
the `opacity-50 cursor-not-allowed` styling.
- [x] Verify that the sorting correctly places all favorited agents at
the top, maintaining their original relative order within the favorited
group, and the same for unfavorited agents.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-16 22:43:50 +00:00
Nicholas Tindle
6f08a1cca7 fix: the api key credentials weren't registering correctly (#10936) 2025-09-16 13:27:04 -05:00
Ubbe
1ddf92eed4 fix(frontend): new agent run page design refinements (#10924)
## Changes 🏗️

Implements all the following changes...

1. Reduced the margins between the runs on the left-hand side to around
`6px`
2. Make agent inputs full width
3. Make "Schedule setup" section displayed in a second modal
4. When an agent is running, we should not show the "Delete agent"
button
5. Copy changes around the actions for agent/runs
6. Large button height should be `52px`
7. Fix margins between + New Run button and the runs & scheduled menu
8. Make border white on cards

Also... 
- improve the naming of some components to reflect better their
context/usage
- show on the inputs section when an agent is using already API keys or
credentials
- fix runs/schedules not auto-selecting once created

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent runs page enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-16 14:34:52 +00:00
Reinier van der Leer
4c0dd27157 dx(platform): Add manual dispatch to deploy workflows (#10918)
When deploying from the infra repo, migrations aren't run which can
cause issues. We need to be able to manually dispatch deployment from
this repo so that the migrations are run as well.

### Changes 🏗️

- add manual dispatch to deploy workflows

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Either it works or it doesn't but this PR won't break anything
existing
2025-09-16 15:57:56 +02:00
Zamil Majdy
17fcf68f2e feat: Separate OpenAI key for smart agent execution summary and other internal AI calls (#10930)
### Changes 🏗️

Separate the API key for internal usage (smart agent execution summary)
and block usage.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual test after deployment
2025-09-16 09:14:27 +00:00
Reinier van der Leer
381558342a fix(frontend/builder): Fix moved blocks disappearing on no-op save (#10927)
- Resolves #10926

### Changes 🏗️

- Fix save no-op if graph has no changes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving a graph after only moving nodes doesn't make those nodes
disappear
2025-09-16 08:15:10 +00:00
Zamil Majdy
1fdc02467b feat(backend): Add comprehensive Prometheus instrumentation for observability (#10923)
## Summary
- Implement comprehensive Prometheus metrics instrumentation for all
FastAPI services
- Add custom business metrics for graph/block executions
- Enable dual publishing to both Grafana Cloud and internal Prometheus

## Related Infrastructure PR
-
https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/214

## Changes

### 📊 Metrics Infrastructure
- Added `prometheus-fastapi-instrumentator` dependency for automatic
HTTP metrics
- Created centralized `instrumentation.py` module for consistent metrics
across services
- Instrumented REST API, WebSocket, and External API services

### 📈 Automatic HTTP Metrics
All FastAPI services now automatically collect:
- **Request latency**: Histogram with custom buckets (10ms to 60s)
- **Request/response size**: Track payload sizes
- **Request counts**: By method, endpoint, and status code
- **Active requests**: Real-time count of in-progress requests
- **Error rates**: 4xx and 5xx responses

### 🎯 Custom Business Metrics
Added domain-specific metrics:
- **Graph executions**: Count by status (success/error/validation_error)
- **Block executions**: Count and duration by block_type and status
- **WebSocket connections**: Active connection gauge
- **Database queries**: Duration histogram by operation and table
- **RabbitMQ messages**: Count by queue and status
- **Authentication**: Attempts by method and status
- **API key usage**: By provider and block type
- **Rate limiting**: Hit count by endpoint

### 🔌 Service Endpoints
Each service exposes metrics at `/metrics`:
- REST API (port 8006): `/metrics`
- WebSocket (port 8001): `/metrics`
- External API: `/external-api/metrics`
- Executor (port 8002): Already had metrics, now enhanced

### 🏷️ Kubernetes Integration
Updated Helm charts with pod annotations:
```yaml
prometheus.io/scrape: "true"
prometheus.io/port: "8006"  # or appropriate port
prometheus.io/path: "/metrics"
```
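
Wiring this up is mostly two pieces: the instrumentator for the automatic HTTP metrics, plus hand-defined business metrics. A sketch (metric names are illustrative):

```python
from fastapi import FastAPI
from prometheus_client import Counter, Histogram
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Automatic HTTP metrics (latency, sizes, counts), exposed at /metrics.
Instrumentator().instrument(app).expose(app, endpoint="/metrics")

# Custom business metrics like those listed above.
graph_executions = Counter(
    "graph_executions_total", "Graph executions by status", ["status"]
)
block_duration = Histogram(
    "block_execution_seconds", "Block execution duration", ["block_type", "status"]
)

graph_executions.labels(status="success").inc()
block_duration.labels(block_type="llm", status="success").observe(0.42)
```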

## Testing
- [x] Install dependencies: `poetry install`
- [x] Run services: `poetry run serve`
- [x] Check metrics endpoints are accessible
- [x] Verify metrics are being collected
- [x] Confirm Grafana Agent can scrape metrics
- [x] Test graph/block execution tracking
- [x] Verify WebSocket connection metrics

## Performance Impact
- Minimal overhead (~1-2ms per request)
- Metrics are collected asynchronously
- Can be disabled via `ENABLE_METRICS=false` env var

## Next Steps
1. Deploy to dev environment
2. Configure Grafana Cloud dashboards
3. Set up alerting rules based on metrics
4. Add more custom business metrics as needed

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-16 12:58:04 +07:00
Nicholas Tindle
f262bb9307 fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
- Added `user_timezone` parameter to `add_graph_execution_schedule()`
method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
- Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display
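
The scheduler-side change amounts to building the trigger in the user's zone. A hedged sketch with APScheduler:

```python
from zoneinfo import ZoneInfo
from apscheduler.triggers.cron import CronTrigger

def build_trigger(cron: str, user_timezone: str | None) -> CronTrigger:
    # Fall back to UTC (the real service logs a warning) when no timezone is given.
    tz = ZoneInfo(user_timezone) if user_timezone else ZoneInfo("UTC")
    # The cron expression is evaluated in the user's zone, so "0 14 * * *"
    # keeps firing at 2:00 PM local time across DST transitions.
    return CronTrigger.from_crontab(cron, timezone=tz)
```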

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
- Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 18:18:03 -05:00
Nicholas Tindle
5a6978b07d feat(frontend): Add expandable view for block output (#10773)
### Need for these changes 💥


https://github.com/user-attachments/assets/5b9007a1-0c49-44c6-9e8b-52bf23eec72c


Users currently cannot view the full output result from a block when
inspecting the Output Data History panel or node previews, as the
content is clipped. This makes debugging and analysis of complex outputs
difficult, forcing users to copy data to external editors. This feature
improves developer efficiency and user experience, especially for blocks
with large or nested responses, and reintroduces a highly requested
functionality that existed previously.

### Changes 🏗️

* **New `ExpandableOutputDialog` component:** Introduced a reusable
modal dialog (`ExpandableOutputDialog.tsx`) designed to display
complete, untruncated output data.
* **`DataTable.tsx` enhancement:** Added an "Expand" button (Maximize2
icon) to each data entry in the Output Data History panel. This button
appears on hover and opens the `ExpandableOutputDialog` for a full view
of the data.
* **`NodeOutputs.tsx` enhancement:** Integrated the "Expand" button into
node output previews, allowing users to view full output data directly
from the node details.
* The `ExpandableOutputDialog` provides a large, scrollable content
area, displaying individual items in organized cards, with options to
copy individual items or all data, along with execution ID and pin name
metadata.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Navigate to an agent session with executed blocks.
  - [x] Open the Output Data History panel.
  - [x] Hover over a data entry to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full, untruncated content.
  - [x] Verify scrolling works for large outputs within the dialog.
  - [x] Test "Copy Item" and "Copy All" buttons within the dialog.
  - [x] Navigate to a custom node in the graph.
  - [x] Inspect a node's output (if applicable).
  - [x] Hover over the output data to reveal the "Expand" button.
- [x] Click the "Expand" button and verify the `ExpandableOutputDialog`
opens, displaying the full content.

---
Linear Issue:
[OPEN-2593](https://linear.app/autogpt/issue/OPEN-2593/add-expandable-view-for-full-block-output-preview)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-15 14:19:40 +00:00
Nicholas Tindle
339ec733cb fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️

This PR restores and improves timezone awareness in the scheduler
service to correctly handle daylight savings time (DST) transitions. The
changes ensure that scheduled agents run at the correct local time even
when crossing DST boundaries.

#### Backend Changes:
- **Scheduler Service (`scheduler.py`):**
- Added `user_timezone` parameter to `add_graph_execution_schedule()`
method
  - CronTrigger now uses the user's timezone instead of hardcoded UTC (see the sketch after this list)
  - Added timezone field to `GraphExecutionJobInfo` for visibility
  - Falls back to UTC with a warning if no timezone is provided
  - Extracts and includes timezone information from job triggers

- **API Router (`v1.py`):**
  - Added optional `timezone` field to `ScheduleCreationRequest`
- Fetches user's saved timezone from profile if not provided in request
  - Passes timezone to scheduler client when creating schedules
  - Converts `next_run_time` back to user timezone for display
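
A minimal sketch of the timezone-aware trigger behavior described above, assuming APScheduler's `CronTrigger` (the function name `build_cron_trigger` is illustrative, not the actual service code):

```python
import logging
from zoneinfo import ZoneInfo

from apscheduler.triggers.cron import CronTrigger

logger = logging.getLogger(__name__)


def build_cron_trigger(cron: str, user_timezone: str | None) -> CronTrigger:
    # Fall back to UTC with a warning if no timezone is provided.
    if not user_timezone:
        logger.warning("No user timezone provided; falling back to UTC")
        user_timezone = "UTC"
    # The trigger evaluates the cron expression in the user's timezone, so a
    # "daily at 14:00" schedule keeps firing at 14:00 local time across DST.
    return CronTrigger.from_crontab(cron, timezone=ZoneInfo(user_timezone))


# Example: 2:00 PM every day in America/New_York, before and after DST.
trigger = build_cron_trigger("0 14 * * *", "America/New_York")
# For display, a next_run_time can be converted back to the user's timezone:
# next_run_time.astimezone(ZoneInfo("America/New_York"))
```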

#### Frontend Changes:
- **Schedule Creation Modal:**
  - Now sends user's timezone with schedule creation requests
- Uses browser's local timezone if user hasn't set one in their profile

- **Schedule Display Components:**
  - Updated to show timezone information in schedule details
  - Improved formatting of schedule information in monitoring views
  - Fixed schedule table display to properly show timezone-aware times

- **Cron Expression Utils:**
  - Removed UTC conversion logic from `formatTime()` function
  - Cron expressions are now stored in the schedule's timezone
  - Simplified humanization logic since no conversion is needed

- **API Types & OpenAPI:**
  - Added `timezone` field to schedule-related types
  - Updated OpenAPI schema to include timezone parameter

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  
### Test Plan 🧪

#### 1. Schedule Creation Tests
- [ ] Create a new schedule and verify the timezone is correctly saved
- [ ] Create a schedule without specifying timezone - should use user's
profile timezone
- [ ] Create a schedule when user has no profile timezone - should
default to UTC with warning

#### 2. Daylight Savings Time Tests
- [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone
(e.g., America/New_York)
- [ ] Verify the schedule runs at 2:00 PM local time before DST
transition
- [ ] Verify the schedule still runs at 2:00 PM local time after DST
transition
- [ ] Check that the next_run_time adjusts correctly across DST
boundaries

#### 3. Display and UI Tests
- [ ] Verify timezone is displayed in schedule details view
- [ ] Verify schedule times are shown in user's local timezone in
monitoring page
- [ ] Verify cron expression humanization shows correct local times
- [ ] Check that schedule table shows timezone information

#### 4. API Tests
- [ ] Test schedule creation API with timezone parameter
- [ ] Test schedule creation API without timezone parameter
- [ ] Verify GET schedules endpoint returns timezone information
- [ ] Verify next_run_time is converted to user timezone in responses

#### 5. Edge Cases
- [ ] Test with various timezones (UTC, EST, PST, Europe/London,
Asia/Tokyo)
- [ ] Test with invalid timezone strings - should handle gracefully
- [ ] Test scheduling at DST transition times (2:00 AM during spring
forward)
- [ ] Verify existing schedules without timezone info default to UTC

#### 6. Regression Tests
- [ ] Verify existing schedules continue to work
- [ ] Verify schedule deletion still works
- [ ] Verify schedule listing endpoints work correctly
- [ ] Check that scheduled graph executions trigger as expected

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-15 06:15:52 +00:00
Ubbe
6575b655f0 fix(frontend): improve agent runs page loading state (#10914)
## Changes 🏗️


https://github.com/user-attachments/assets/356e5364-45be-4f6e-bd1c-cc8e42bf294d

And also tidy up some of the logic around hooks. I also added an
`okData` helper to avoid having to type-cast (`as`) so much with the
generated types (given the `response` is a union depending on `status:
200 | 400 | 401` ...).

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run PR locally with the `new-agent-runs` flag enabled
  - [x] Check the nice loading state 

### For configuration changes:

None
2025-09-15 04:56:26 +00:00
Ubbe
7c2df24d7c fix(frontend): delete actions behind dialogs in agent runs view (#10915)
## Changes 🏗️

<img width="800" height="630" alt="Screenshot 2025-09-12 at 17 38 34"
src="https://github.com/user-attachments/assets/103d7e10-e924-4831-b0e7-b7df608a205f"
/>

<img width="800" height="524" alt="Screenshot 2025-09-12 at 17 38 30"
src="https://github.com/user-attachments/assets/aeec2ac7-4bea-4ec9-be0c-4491104733cb"
/>

<img width="800" height="750" alt="Screenshot 2025-09-12 at 17 38 26"
src="https://github.com/user-attachments/assets/e0b28097-8352-4431-ae4a-9dc3e3bcf9eb"
/>

- All the `Delete` actions on the new Agent Library Runs page should be
behind confirmation dialogs
- Re-arrange the file structure a bit 💆🏽 
- Make the buttons' min-width a bit more generous

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above 

#### For configuration changes:

None
2025-09-15 04:55:58 +00:00
Reinier van der Leer
23eafa178c fix(backend/db): Unbreak store materialized views refresh job (#10906)
- Resolves #10898

### Changes 🏗️

- Fix and re-create `refresh_store_materialized_views` DB function and
its pg_cron job

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without issues (locally)
  - [x] Refresh function can be run without issues (locally)
2025-09-14 23:31:18 +00:00
Zamil Majdy
27fccdbf31 fix(backend/executor): Make graph execution status transitions atomic and enforce state machine (#10863)
## Summary
- Fixed race condition issues in `update_graph_execution_stats` function
- Implemented atomic status transitions using database-level constraints
- Added state machine enforcement to prevent invalid status transitions
- Eliminated code duplication and improved error handling

## Problem
The `update_graph_execution_stats` function had race condition
vulnerabilities where concurrent status updates could cause invalid
transitions like RUNNING → QUEUED. The function was not durable and
could result in executions moving backwards in their lifecycle, causing
confusion and potential system inconsistencies.

## Root Cause Analysis
1. **Race Conditions**: The function used a broad OR clause that allowed
updates from multiple source statuses without validating the specific
transition
2. **No Atomicity**: No atomic check to ensure the status hadn't changed
between read and write operations
3. **Missing State Machine**: No enforcement of valid state transitions
according to execution lifecycle rules

## Solution Implementation

### 1. Atomic Status Transitions
- Use database-level atomicity by including the current allowed source
statuses in the WHERE clause during updates
- This ensures only valid transitions can occur at the database level

### 2. State Machine Enforcement
Define valid transitions as a module constant
`VALID_STATUS_TRANSITIONS`:
- `INCOMPLETE` → `QUEUED`, `RUNNING`, `FAILED`, `TERMINATED`
- `QUEUED` → `RUNNING`, `FAILED`, `TERMINATED`  
- `RUNNING` → `COMPLETED`, `TERMINATED`, `FAILED`
- `TERMINATED` → `RUNNING` (for resuming halted execution)
- `COMPLETED` and `FAILED` are terminal states with no allowed
transitions

### 3. Improved Error Handling
- Early validation with clear error messages for invalid parameters
- Graceful handling when transitions fail - return current state instead
of None
- Proper logging of invalid transition attempts

### 4. Code Quality Improvements
- Eliminated code duplication in fetch logic
- Added proper type hints and casting
- Made status transitions constant for better maintainability

## Benefits
- **Prevents Invalid Regressions**: No more RUNNING → QUEUED transitions
- **Atomic Operations**: Database-level consistency guarantees
- **Clear Error Messages**: Better debugging and monitoring
- **Maintainable Code**: Clean logic flow without duplication
- **Race Condition Safe**: Handles concurrent updates gracefully

## Test Plan
- [x] Function imports and basic structure validation
- [x] Code formatting and linting checks pass
- [x] Type checking passes for modified files
- [x] Pre-commit hooks validation

## Technical Details
The key insight is using the database query itself to enforce valid
transitions by filtering on allowed source statuses in the WHERE clause.
This makes the operation truly atomic and eliminates the race condition
window that existed in the previous implementation.
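
A rough sketch of that idea, assuming a Prisma-style async client (the model and field names below are hypothetical, not the actual schema):

```python
# Valid source -> target transitions, as listed above.
VALID_STATUS_TRANSITIONS: dict[str, set[str]] = {
    "INCOMPLETE": {"QUEUED", "RUNNING", "FAILED", "TERMINATED"},
    "QUEUED": {"RUNNING", "FAILED", "TERMINATED"},
    "RUNNING": {"COMPLETED", "TERMINATED", "FAILED"},
    "TERMINATED": {"RUNNING"},  # resuming a halted execution
    "COMPLETED": set(),  # terminal
    "FAILED": set(),  # terminal
}


async def try_transition(db, execution_id: str, new_status: str) -> bool:
    # Source statuses from which `new_status` is a legal transition.
    allowed_sources = [
        source
        for source, targets in VALID_STATUS_TRANSITIONS.items()
        if new_status in targets
    ]
    # The WHERE clause makes the check-and-set atomic: the row is updated
    # only if its current status is still one of the allowed sources, so a
    # concurrent writer cannot slip in an invalid regression in between.
    updated = await db.graphexecution.update_many(
        where={"id": execution_id, "status": {"in": allowed_sources}},
        data={"status": new_status},
    )
    return updated > 0  # False: transition rejected, or lost the race
```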

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-14 23:31:02 +00:00
Reinier van der Leer
fb8fbc9d1f fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 19:42:16 +02:00
Reinier van der Leer
6a86e70fd6 fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes.

### Changes 🏗️

- Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION`
so that `CreditTransactions` are not automatically deleted when we
delete a user's data.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migration applies without problems
2025-09-12 17:11:31 +00:00
Ubbe
6a2d7e0fb0 fix(frontend): handle avatar missing images better (#10903)
## Changes 🏗️

I think this helps `next/image` be more tolerant when optimising
images from certain origins, according to Claude.

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Deploy preview to dev
  - [x] Verify avatar images load better 

### For configuration changes:

None
2025-09-12 02:56:24 +00:00
Nicholas Tindle
3d6ea3088e fix(backend): Add Airtable record normalization and upsert features (#10908)
Introduces normalization of Airtable record outputs to include all
fields with appropriate empty values and optional field metadata.
Enhances record creation to support finding existing records by
specified fields and updating them if found, enabling upsert-like
behavior. Updates block schemas and logic for list, get, and create
operations to support these new features.

### Changes 🏗️
- Allows normalization of the responses of the Airtable blocks
- Allows you to use the create block to find ones already made (a rough
sketch of this upsert flow follows below)
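
A rough sketch of the find-then-update upsert flow and of the response normalization, written directly against the Airtable REST API (illustrative code, not the shipped block implementation; `match_fields` names the fields used to locate an existing record):

```python
import requests

API = "https://api.airtable.com/v0"


def upsert_record(token: str, base_id: str, table: str,
                  match_fields: dict, fields: dict) -> dict:
    headers = {"Authorization": f"Bearer {token}"}
    # Find an existing record whose match_fields all equal the given values.
    formula = "AND(" + ",".join(
        f"{{{name}}}='{value}'" for name, value in match_fields.items()
    ) + ")"
    found = requests.get(
        f"{API}/{base_id}/{table}",
        headers=headers,
        params={"filterByFormula": formula, "maxRecords": 1},
    ).json().get("records", [])
    if found:
        # Update the existing record instead of creating a duplicate.
        resp = requests.patch(
            f"{API}/{base_id}/{table}/{found[0]['id']}",
            headers=headers, json={"fields": fields},
        )
    else:
        resp = requests.post(
            f"{API}/{base_id}/{table}",
            headers=headers, json={"fields": fields},
        )
    resp.raise_for_status()
    return resp.json()


def normalize_record(record: dict, field_names: list[str]) -> dict:
    # Airtable omits empty fields from its responses; fill them in so
    # every known field is present with an empty value.
    fields = record.get("fields", {})
    return {name: fields.get(name, "") for name in field_names}
```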

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test that it doesn't break existing agents
  - [x] Test that the results for checkboxes are returned
2025-09-11 20:26:34 +00:00
Nicholas Tindle
64b4480b1e Merge branch 'master' into dev 2025-09-11 15:07:22 -05:00
Swifty
f490b01abb feat(frontend): Add Vercel Analytics and Speed Insights (#10904)
## Summary
- Added Vercel Analytics for tracking page views and user interactions
- Added Vercel Speed Insights for monitoring Web Vitals and performance
metrics
- Fixed incorrect placement of SpeedInsights component (was between html
and head tags)

## Changes
- Import Analytics and SpeedInsights components from Vercel packages
- Place both components correctly within the body tag
- Ensure proper HTML structure and adherence to Next.js best practices

## Test plan
- [x] Verify components are imported correctly
- [x] Confirm no HTML validation errors
- [x] Test that analytics work when deployed to Vercel
- [x] Verify Speed Insights metrics are being collected
2025-09-11 10:58:11 +00:00
Bentlybro
e56a4a135d Revert "fix(backend): Add Airtable record normalization + find/create base (#10891)"
This reverts commit 5da41e0753.
2025-09-10 14:36:40 +01:00
Ubbe
e70c970ab6 feat(frontend): new <Avatar /> component using next/image (#10897)
## Changes 🏗️

<img width="800" height="648" alt="Screenshot 2025-09-10 at 22 00 01"
src="https://github.com/user-attachments/assets/eb396d62-01f2-45e5-9150-4e01dfcb71d0"
/><br />

Adds a new `<Avatar />` component and uses it across the app. It is a
copy of
[shadcn/avatar](https://duckduckgo.com/?q=shadcn+avatar&t=brave&ia=web)
with the following modifications:
- renders images with
[`next/image`](https://duckduckgo.com/?q=next+image&t=brave&ia=web) by
default
  - this ensures avatars rendered in the app are optimised and resized ✔️
  - it will work as long as all the domains are white-listed in
`nextjs.config.mjs`
- allows bypassing it and using a normal `<img />` tag via an `as` prop
if needed
  - sometimes we might need to render images from a dynamic cdn 🤷🏽‍♂️

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] ...

### For configuration changes:

None
2025-09-10 13:25:09 +00:00
Abhimanyu Yadav
3bbce71678 feat(builder): Block menu redesign - part 3 (#10864)
### Changes 🏗️

#### Block Menu Redesign - Part 3

This PR continues the block menu redesign effort, implementing the new
content sections and improving the overall user experience. The changes
focus on better organization, pagination, error handling, and visual
consistency.

#### Key Features Implemented:

**1. New Content Organization**
- **All Blocks Content**: Complete listing of all available blocks with
category-based organization and infinite scroll support
(`AllBlocksContent/`)
- **My Agents Content**: Display and manage user's own agents with
pagination (`MyAgentsContent/`)
- **Marketplace Agents Content**: Browse and add marketplace agents with
improved loading states (`MarketplaceAgentsContent/`)
- **Integration Blocks**: Dedicated view for integration-specific blocks
with better filtering (`IntegrationBlocks/`)
- **Suggestion Content**: Smart suggestions based on user context and
search history (`SuggestionContent/`)
- **Integrations Content**: Browse available integrations in a dedicated
view (`IntegrationsContent/`)

**2. Enhanced UI Components**
- **Paginated Lists**: New pagination components for blocks and
integrations (`PaginatedBlocksContent/`, `PaginatedIntegrationList/`)
- **Block List**: Reusable block list component with consistent styling
(`BlockList/`)
- **Improved Error Handling**: Comprehensive error states with retry
functionality across all content types
- **Loading States**: Skeleton loaders for better perceived performance

**3. Infrastructure Improvements**
- **Centralized Styles**: New `style.ts` file for consistent styling
across components
- **Better State Management**: Enhanced context provider with improved
menu state handling
- **Mock Flag Support**: Added feature flags for testing new block
features
- **Default State Enum**: Refactored to use enums for menu default
states

**4. Visual Assets**
- Added 50+ new integration icons/logos for better visual representation
- Updated existing integration images for consistency

**5. Code Quality**
- Improved error handling with proper error cards and retry mechanisms
- Consistent formatting and import organization
- Enhanced TypeScript types and interfaces
- Better separation of concerns with dedicated hooks for each content
type

#### Technical Details:
- **Files Changed**: 96 files
- **Additions**: 1,380 lines
- **Deletions**: 162 lines
- **New Components**: 10+ new React components with dedicated hooks
- **Integration Icons**: 50+ new PNG images for various integrations

#### Breaking Changes:
None - All changes are backwards compatible

---

### Test Plan 📋

- [x] Create a new agent and verify all blocks are accessible
- [x] Test infinite scroll in "All Blocks" view
- [x] Verify pagination works correctly in marketplace agents view
- [x] Test error states by simulating network failures
- [x] Check that all new integration icons display correctly
- [x] Test adding agents from marketplace view
- [x] Ensure skeleton loaders appear during data fetching

> Generated by claude
2025-09-10 11:58:07 +00:00
Abhimanyu Yadav
34fbf4377f fix(frontend): allow lazy loading of images (#10895)
The `next/image` component has built-in lazy loading enabled, but in
some components we were bypassing it with the `priority` flag, so I
have reverted that in this PR.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Lazy loading is working perfectly locally.
2025-09-10 11:38:37 +00:00
dependabot[bot]
f682ef885a chore(frontend/deps-dev): Bump 16 dev dependencies to newer minor versions (#10837)
Bumps the development-dependencies group with 16 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
|
[@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests)
| `4.1.0` | `4.1.1` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.2`
| `1.55.0` |
|
[@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links)
| `9.1.2` | `9.1.4` |
|
[@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding)
| `9.1.2` | `9.1.4` |
|
[@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs)
| `9.1.2` | `9.1.4` |
|
[@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query)
| `5.83.1` | `5.86.0` |
|
[@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools)
| `5.84.2` | `5.86.0` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `24.2.1` | `24.3.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.3` |
`13.1.4` |
| [concurrently](https://github.com/open-cli-tools/concurrently) |
`9.2.0` | `9.2.1` |
|
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
| `15.4.6` | `15.5.2` |
|
[eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin)
| `9.1.2` | `9.1.4` |
| [msw](https://github.com/mswjs/msw) | `2.10.4` | `2.11.1` |
|
[storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core)
| `9.1.2` | `9.1.4` |


Updates `@chromatic-com/storybook` from 4.1.0 to 4.1.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/releases"><code>@​chromatic-com/storybook</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v4.1.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/389">#389</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/blob/v4.1.1/CHANGELOG.md"><code>@​chromatic-com/storybook</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v4.1.1 (Wed Aug 20 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Broaden version-range for storybook peerDependency <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/389">#389</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e38f120b09"><code>e38f120</code></a>
Bump version to: 4.1.1 [skip ci]</li>
<li><a
href="adb8570cbb"><code>adb8570</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="9e38298e21"><code>9e38298</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/390">#390</a>
from chromaui/next</li>
<li><a
href="7b3a184d0a"><code>7b3a184</code></a>
Merge branch 'main' into next</li>
<li><a
href="2faff8b888"><code>2faff8b</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/389">#389</a>
from chromaui/ndelangen/sb10-beta-compatibility</li>
<li><a
href="51015a75f2"><code>51015a7</code></a>
Update yarn.lock to add support for Storybook 10.0.0-0</li>
<li><a
href="6f050adca2"><code>6f050ad</code></a>
Update package.json</li>
<li>See full diff in <a
href="https://github.com/chromaui/addon-visual-tests/compare/v4.1.0...v4.1.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `@playwright/test` from 1.54.2 to 1.55.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/microsoft/playwright/releases"><code>@​playwright/test</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v1.55.0</h2>
<h2>New APIs</h2>
<ul>
<li>New Property <a
href="https://playwright.dev/docs/api/class-teststepinfo#test-step-info-title-path">testStepInfo.titlePath</a>
Returns the full title path starting from the test file, including test
and step titles.</li>
</ul>
<h2>Codegen</h2>
<ul>
<li>Automatic <code>toBeVisible()</code> assertions: Codegen can now
generate automatic <code>toBeVisible()</code> assertions for common UI
interactions. This feature can be enabled in the Codegen settings
UI.</li>
</ul>
<h2>Breaking Changes</h2>
<ul>
<li>⚠️ Dropped support for Chromium extension manifest v2.</li>
</ul>
<h2>Miscellaneous</h2>
<ul>
<li>Added support for Debian 13 &quot;Trixie&quot;.</li>
</ul>
<h2>Browser Versions</h2>
<ul>
<li>Chromium 140.0.7339.16</li>
<li>Mozilla Firefox 141.0</li>
<li>WebKit 26.0</li>
</ul>
<p>This version was also tested against the following stable
channels:</p>
<ul>
<li>Google Chrome 139</li>
<li>Microsoft Edge 139</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f992162f04"><code>f992162</code></a>
chore: mark v1.55.0 (<a
href="https://redirect.github.com/microsoft/playwright/issues/37121">#37121</a>)</li>
<li><a
href="4a92ea0025"><code>4a92ea0</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37113">#37113</a>):
docs: add release-notes for v1.55</li>
<li><a
href="aa05507bba"><code>aa05507</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/37114">#37114</a>):
test: move browser._launchServer in child process</li>
<li><a
href="27ae7dc639"><code>27ae7dc</code></a>
test: tree gardening (<a
href="https://redirect.github.com/microsoft/playwright/issues/37107">#37107</a>)</li>
<li><a
href="cd09d859a9"><code>cd09d85</code></a>
test: unflake &quot;should pick element&quot; (<a
href="https://redirect.github.com/microsoft/playwright/issues/37103">#37103</a>)</li>
<li><a
href="72e47728ba"><code>72e4772</code></a>
chore(trace-viewer): remove unused code (<a
href="https://redirect.github.com/microsoft/playwright/issues/37097">#37097</a>)</li>
<li><a
href="5b8c7d648a"><code>5b8c7d6</code></a>
chore(dotnet): float is non-nullable (<a
href="https://redirect.github.com/microsoft/playwright/issues/37095">#37095</a>)</li>
<li><a
href="c7bf035c35"><code>c7bf035</code></a>
test(webkit): closing dialog &gt; contenteditable (<a
href="https://redirect.github.com/microsoft/playwright/issues/37084">#37084</a>)</li>
<li><a
href="9fd6986f8d"><code>9fd6986</code></a>
test: skip debug-controller tests in driver mode (<a
href="https://redirect.github.com/microsoft/playwright/issues/37090">#37090</a>)</li>
<li><a
href="4c2f44d591"><code>4c2f44d</code></a>
test(bidi): use the nightly channel only for Firefox in CI (<a
href="https://redirect.github.com/microsoft/playwright/issues/37086">#37086</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/microsoft/playwright/compare/v1.54.2...v1.55.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-a11y` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-a11y</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-a11y</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/a11y">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-docs` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-docs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-docs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="0f86613a92"><code>0f86613</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32287">#32287</a>
from storybookjs/shilman/error-utm</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li><a
href="f8ff03a47d"><code>f8ff03a</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs/issues/32238">#32238</a>
from storybookjs/sidnioulz/issue-31436-table</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/docs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-links` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-links</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-links</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/links">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-onboarding` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-onboarding</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-onboarding</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/onboarding">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/nextjs` from 9.1.2 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.1.4</h2>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>v9.1.3</h2>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<ul>
<li>Angular: Properly merge builder options and browserTarget options -
<a
href="https://redirect.github.com/storybookjs/storybook/pull/32272">#32272</a>,
thanks <a
href="https://github.com/kroeder"><code>@​kroeder</code></a>!</li>
<li>Core: Optimize bundlesize, by reusing internal/babel in
mocking-utils - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32350">#32350</a>,
thanks <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>!</li>
<li>Svelte &amp; Vue: Add framework-specific <code>docgen</code> option
to disable docgen processing - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32319">#32319</a>,
thanks <a
href="https://github.com/copilot-swe-agent"><code>@​copilot-swe-agent</code></a>!</li>
<li>Svelte: Support <code>@sveltejs/vite-plugin-svelte</code> v6 - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32320">#32320</a>,
thanks <a
href="https://github.com/JReinhold"><code>@​JReinhold</code></a>!</li>
</ul>
<h2>9.1.3</h2>
<ul>
<li>Docs: Move button in ArgsTable heading to fix screenreader
announcements - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32238">#32238</a>,
thanks <a
href="https://github.com/Sidnioulz"><code>@​Sidnioulz</code></a>!</li>
<li>Mock: Catch errors when transforming preview files - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32216">#32216</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Next.js: Fix version mismatch error in Webpack - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32306">#32306</a>,
thanks <a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>!</li>
<li>Telemetry: Disambiguate traffic coming from error/upgrade links - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32287">#32287</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
<li>Telemetry: Disambiguate unattributed traffic from Onboarding - <a
href="https://redirect.github.com/storybookjs/storybook/pull/32286">#32286</a>,
thanks <a
href="https://github.com/shilman"><code>@​shilman</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f02684ad1"><code>9f02684</code></a>
Bump version from &quot;9.1.3&quot; to &quot;9.1.4&quot; [skip ci]</li>
<li><a
href="ce3915727c"><code>ce39157</code></a>
Bump version from &quot;9.1.2&quot; to &quot;9.1.3&quot; [skip ci]</li>
<li><a
href="1dba82472a"><code>1dba824</code></a>
Next.js: Avoid multiple webpack versions at runtime</li>
<li><a
href="a0151efbce"><code>a0151ef</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32306">#32306</a>
from storybookjs/valentin/fix-nextjs-webpack-error</li>
<li><a
href="730bbf04ed"><code>730bbf0</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32284">#32284</a>
from storybookjs/shilman/package-json-keywords</li>
<li><a
href="bbb4ffe209"><code>bbb4ffe</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32286">#32286</a>
from storybookjs/shilman/configure-utm</li>
<li><a
href="2bae930c30"><code>2bae930</code></a>
Merge pull request <a
href="https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs/issues/32283">#32283</a>
from storybookjs/shilman/readme-utm-params</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.1.4/code/frameworks/nextjs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/eslint-plugin-query` from 5.83.1 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/eslint-plugin-query</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li>See full diff in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/eslint-plugin-query">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/react-query-devtools` from 5.84.2 to 5.86.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/query/releases"><code>@​tanstack/react-query-devtools</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.86.0</h2>
<p>Version 5.86.0 - 9/4/25, 9:27 AM (Manual Release)</p>
<h2>Changes</h2>
<p>Note: This release contains BREAKING CHANGES for the
<code>experimental_streamedQuery</code> API:</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>query-core: add custom reducer support to streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9532">#9532</a>)
(8f24580) by <a
href="https://github.com/marcog83"><code>@​marcog83</code></a></li>
</ul>
<p>BREAKING CHANGE: The <code>maxChunks</code> parameter has been
removed from <code>streamedQuery</code>.
Use a custom <code>reducer</code> function to control data aggregation
behavior instead.</p>
<p>BREAKING CHANGE: When using a custom reducer function with
<code>streamedQuery</code>,
the <code>initialValue</code> parameter is now required and must be
provided.</p>
<ul>
<li>rename queryFn to streamFn in streamedQuery (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9606">#9606</a>)
(b25412a) by Dominik Dorfmeister</li>
</ul>
<p>BREAKING CHANGE: <code>queryFn</code> has been renamed to
<code>streamFn</code></p>
<h3>Chore</h3>
<ul>
<li>tsconfig.json: simplify &quot;include&quot; patterns by
consolidating file extensions and directory paths (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9547">#9547</a>)
(7306474) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Test</h3>
<ul>
<li>react-query/useMutationState: clarify assertions and improve code
formatting (<a
href="https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools/issues/9611">#9611</a>)
(43049c5) by <a
href="https://github.com/sukvvon"><code>@​sukvvon</code></a></li>
</ul>
<h3>Other</h3>
<ul>
<li>(c75a994) by BennettLiam</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/eslint-plugin-query@5.86.0</code></li>
<li><code>@tanstack/query-async-storage-persister@5.86.0</code></li>
<li><code>@tanstack/query-broadcast-client-experimental@5.86.0</code></li>
<li><code>@tanstack/query-core@5.86.0</code></li>
<li><code>@tanstack/query-devtools@5.86.0</code></li>
<li><code>@tanstack/query-persist-client-core@5.86.0</code></li>
<li><code>@tanstack/query-sync-storage-persister@5.86.0</code></li>
<li><code>@tanstack/react-query@5.86.0</code></li>
<li><code>@tanstack/react-query-devtools@5.86.0</code></li>
<li><code>@tanstack/react-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/react-query-next-experimental@5.86.0</code></li>
<li><code>@tanstack/solid-query@5.86.0</code></li>
<li><code>@tanstack/solid-query-devtools@5.86.0</code></li>
<li><code>@tanstack/solid-query-persist-client@5.86.0</code></li>
<li><code>@tanstack/svelte-query@5.86.0</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1599bb4742"><code>1599bb4</code></a>
release: v5.86.0</li>
<li><a
href="7306474eee"><code>7306474</code></a>
chore(tsconfig.json): simplify 'include' patterns by consolidating file
exten...</li>
<li><a
href="0a35234ce4"><code>0a35234</code></a>
release: v5.85.9</li>
<li><a
href="aec19c93a5"><code>aec19c9</code></a>
release: v5.85.8</li>
<li><a
href="a978b34b1b"><code>a978b34</code></a>
release: v5.85.7</li>
<li><a
href="b998f68a57"><code>b998f68</code></a>
release: v5.85.6</li>
<li><a
href="0a0e01443d"><code>0a0e014</code></a>
release: v5.85.5</li>
<li><a
href="ef0a9d2c7e"><code>ef0a9d2</code></a>
release: v5.85.4</li>
<li><a
href="b6516bd25e"><code>b6516bd</code></a>
release: v5.85.3</li>
<li><a
href="e14f39c6ee"><code>e14f39c</code></a>
release: v5.85.2</li>
<li>Additional commits viewable in <a
href="https://github.com/TanStack/query/commits/v5.86.0/packages/react-query-devtools">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/node` from 24.2.1 to 24.3.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />

Updates `chromatic` from 13.1.3 to 13.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/releases">chromatic's
releases</a>.</em></p>
<blockquote>
<h2>v13.1.4</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/chromatic-cli/blob/main/CHANGELOG.md">chromatic's
changelog</a>.</em></p>
<blockquote>
<h1>v13.1.4 (Fri Aug 29 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Feat:Fix outdated and incorrect links in the CLI <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1202">#1202</a>
(<a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a>)</li>
<li>Show setup URL on build errors when onboarding. <a
href="https://redirect.github.com/chromaui/chromatic-cli/pull/1201">#1201</a>
(<a href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<ul>
<li><a
href="https://github.com/jonniebigodes"><code>@​jonniebigodes</code></a></li>
<li>John Hobbs (<a
href="https://github.com/jmhobbs"><code>@​jmhobbs</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c85259331b"><code>c852593</code></a>
Bump version to: 13.1.4 [skip ci]</li>
<li><a
href="aae7c41b86"><code>aae7c41</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="062d1fadee"><code>062d1fa</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1202">#1202</a>
from chromaui/feat_fix_links</li>
<li><a
href="15f72ae8ea"><code>15f72ae</code></a>
Removes references to the viewlayer used by noCSFGlobs</li>
<li><a
href="1a0e200bce"><code>1a0e200</code></a>
Remove viewlayer from nocsf file</li>
<li><a
href="f4c52ab0af"><code>f4c52ab</code></a>
Attempt to fix linting errors</li>
<li><a
href="0db944b07c"><code>0db944b</code></a>
Feat:Fix outdated and incorrect links in the CLI</li>
<li><a
href="8879737ab0"><code>8879737</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/chromatic-cli/issues/1201">#1201</a>
from chromaui/jmhobbs/cap-3287-chromatic-cicli-issue...</li>
<li><a
href="d3c25a21a1"><code>d3c25a2</code></a>
Improve links during onboarding with system errors</li>
<li><a
href="cce1b29cdf"><code>cce1b29</code></a>
Show setup URL on build errors when onboarding.</li>
<li>See full diff in <a
href="https://github.com/chromaui/chromatic-cli/compare/v13.1.3...v13.1.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `concurrently` from 9.2.0 to 9.2.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/open-cli-tools/concurrently/releases">concurrently's
releases</a>.</em></p>
<blockquote>
<h2>v9.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>chore: update eslint-plugin-simple-import-sort from v10 to v12 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/551">open-cli-tools/concurrently#551</a></li>
<li>chore: update eslint-config-prettier from v9 to v10 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/552">open-cli-tools/concurrently#552</a></li>
<li>Remove lodash by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/555">open-cli-tools/concurrently#555</a></li>
<li>chore: update coveralls-next from v4 to v5 by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/557">open-cli-tools/concurrently#557</a></li>
<li>Replace jest with vitest by <a
href="https://github.com/gustavohenke"><code>@​gustavohenke</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/554">open-cli-tools/concurrently#554</a></li>
<li>Upgrade to pnpm v10 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/558">open-cli-tools/concurrently#558</a></li>
<li>chore: remove unused eslint-plugin-jest by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/559">open-cli-tools/concurrently#559</a></li>
<li>Minor dependency updates by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/560">open-cli-tools/concurrently#560</a></li>
<li>Migrate to ESLint v9 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/561">open-cli-tools/concurrently#561</a></li>
<li>Update shell-quote to 1.8.3 by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/562">open-cli-tools/concurrently#562</a></li>
<li>Full coverage by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/563">open-cli-tools/concurrently#563</a></li>
<li>Update GH actions/workflows, enable NPM provenance by <a
href="https://github.com/paescuj"><code>@​paescuj</code></a> in <a
href="https://redirect.github.com/open-cli-tools/concurrently/pull/564">open-cli-tools/concurrently#564</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="414cd016c6"><code>414cd01</code></a>
9.2.1</li>
<li><a
href="0dfedb028c"><code>0dfedb0</code></a>
Update GH actions/workflows, enable npm provenance (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/564">#564</a>)</li>
<li><a
href="ee81511999"><code>ee81511</code></a>
Remove obsolete tsdk config</li>
<li><a
href="09d3d7b11f"><code>09d3d7b</code></a>
Full coverage (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/563">#563</a>)</li>
<li><a
href="8cfc6a6cb4"><code>8cfc6a6</code></a>
Update shell-quote to 1.8.3 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/562">#562</a>)</li>
<li><a
href="4c403f8b01"><code>4c403f8</code></a>
Migrate to ESLint v9 (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/561">#561</a>)</li>
<li><a
href="8bfcaf7828"><code>8bfcaf7</code></a>
Minor dependency updates (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/560">#560</a>)</li>
<li><a
href="389fec4830"><code>389fec4</code></a>
Enable watch mode &amp; coverage for unit tests by default</li>
<li><a
href="7993ce6817"><code>7993ce6</code></a>
chore: remove unused eslint-plugin-jest (<a
href="https://redirect.github.com/open-cli-tools/concurrently/issues/559">#559</a>)</li>
<li><a
href="58300f45eb"><code>58300f4</code></a>
Remove obsolete .npmrc file</li>
<li>Additional commits viewable in <a
href="https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-config-next` from 15.4.6 to 15.5.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: disable unknownatrules lint rule entirely (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83059">#83059</a>)</li>
<li>revert: add ?dpl to fonts in /_next/static/media (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83062">#83062</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a> and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: aliased navigations should apply scroll handling (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82900">#82900</a>)</li>
<li>Turbopack: fix invalid NFT entry with file behind symlink (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82887">#82887</a>)</li>
<li>fix: typesafe linking to route handlers and pages API routes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82858">#82858</a>)</li>
<li>fix: change &quot;noUnknownAtRules&quot; to &quot;warn&quot; for
Biome (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82974">#82974</a>)</li>
<li>fix: add path normalization to getRelativePath for Windows (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82918">#82918</a>)</li>
<li>feat: add typesafety with config.typedRoutes to redirect() and
permanentRedirect() (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82860">#82860</a>)</li>
<li>fix: avoid importing types that will be unused (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82856">#82856</a>)</li>
<li>fix: update the config.api.responseLimit type (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82852">#82852</a>)</li>
<li>fix: update validation return types (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82854">#82854</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a>, <a
href="https://github.com/mischnic"><code>@​mischnic</code></a>, and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1-canary.26</h2>
<h3>Core Changes</h3>
<ul>
<li>Upgrade React from <code>2805f0ed-20250903</code> to
<code>3302d1f7-20250903</code>: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83400">#83400</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>Fixed broken link in backend-for-frontend.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83310">#83310</a></li>
<li>Update next-response.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83302">#83302</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/geraldochristiano"><code>@​geraldochristiano</code></a>
and <a href="https://github.com/blntgvn42"><code>@​blntgvn42</code></a>
for helping!</p>
<h2>v15.5.1-canary.25</h2>
<h3>Core Changes</h3>
<ul>
<li>fix(ppr): fix prerender info matching for rewritten paths: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83359">#83359</a></li>
<li>fix: shrink bottom stack container height: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83377">#83377</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="497ec6aa08"><code>497ec6a</code></a>
v15.5.2</li>
<li><a
href="cc68ced552"><code>cc68ced</code></a>
v15.5.1</li>
<li><a
href="7e08c8223d"><code>7e08c82</code></a>
v15.5.0</li>
<li>...

_Description has been truncated_

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 10:21:53 +00:00
Reinier van der Leer
2ffd249aac fix(backend/external-api): Improve security & reliability of API key storage (#10796)
Our API key generation, storage, and verification system has a couple of
issues that need to be ironed out before full-scale deployment.

### Changes 🏗️

- Move from unsalted SHA256 to salted Scrypt hashing for API keys
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
- [refactor(backend): Clean up API key DB & API code
(#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
  - Rename models and properties in `backend.data.api_key` for clarity
- Eliminate redundant/custom/boilerplate error handling/wrapping in API
key endpoint call stack
- Remove redundant/inaccurate `response_model` declarations from API key
endpoints

Dependencies for `autogpt_libs`:
- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency
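
For illustration, a minimal sketch of the salted-Scrypt approach using the `cryptography` package (function names and parameters here are assumptions, not the actual `backend.data.api_key` API):

```python
import os

from cryptography.exceptions import InvalidKey
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def _kdf(salt: bytes) -> Scrypt:
    # Scrypt objects are single-use, so build a fresh one per operation.
    return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)


def hash_api_key(key: str) -> tuple[bytes, bytes]:
    """Hash an API key with a fresh random salt; store both hash and salt."""
    salt = os.urandom(16)
    return _kdf(salt).derive(key.encode()), salt


def verify_api_key(key: str, stored_hash: bytes, salt: bytes) -> bool:
    """Re-derive with the stored salt; Scrypt.verify raises on mismatch."""
    try:
        _kdf(salt).verify(key.encode(), stored_hash)
        return True
    except InvalidKey:
        return False
```

Unlike unsalted SHA256, a per-key random salt means two identical keys produce different stored hashes, and precomputed rainbow tables are useless.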

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Performing these actions through the UI (still) works:
    - [x] Creating an API key
    - [x] Listing owned API keys
    - [x] Deleting an owned API key
  - [x] Newly created API key can be used in Swagger UI
  - [x] Existing API key can be used in Swagger UI
  - [x] Existing API key is re-encrypted with salt on use
2025-09-10 09:34:49 +00:00
Ubbe
986245ec43 feat(frontend): run agent page improvements (#10879)
## Changes 🏗️

- Add all the cron scheduling options (_yearly, monthly, weekly, custom, etc..._) using the new Design System components
- Add missing agent/run actions: export agent + delete agent

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `new-agent-runs` enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-10 07:44:51 +00:00
Bentlybro
f89717153f Merge branch 'master' into dev 2025-09-10 08:52:35 +01:00
Nicholas Tindle
5da41e0753 fix(backend): Add Airtable record normalization + find/create base (#10891)
## Summary
Fixes a critical issue with the Airtable API where empty/false fields are
completely omitted from responses, causing inconsistent data structures.
Also improves the create base block to prevent duplicate bases.

The Airtable API has a problematic behavior where it omits fields with
"empty" values from responses:
- Unchecked checkboxes are missing entirely instead of returning `false`
- Empty number fields are missing instead of returning `0`
- This makes it impossible to distinguish between "field doesn't exist"
and "field is false/empty"
- Users were getting inconsistent record structures that broke their
workflows

### Changes 🏗️


#### 1. **Added Record Normalization**
(`backend/blocks/airtable/_api.py`)
- New `get_table_schema()` function to fetch table field definitions
- New `get_empty_value_for_field()` to determine appropriate empty
values per field type
- New `normalize_records()` to fill in missing fields with proper
defaults:
  - Checkbox → `false`
  - Number/Currency/Percent/Duration/Rating → `0`
  - Text fields → `""`
  - Multiple selects/attachments/collaborators → `[]`
  - Dates/Single selects → `null`
- New `get_base_tables()` to fetch tables for a base
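
A rough sketch of the normalization idea described above (the field-type names and schema shape are illustrative, not the exact `_api.py` implementation):

```python
# Map Airtable field types to the empty value they should default to.
EMPTY_VALUES = {
    "checkbox": False,
    "number": 0, "currency": 0, "percent": 0, "duration": 0, "rating": 0,
    "singleLineText": "", "multilineText": "",
    "multipleSelects": [], "multipleAttachments": [], "multipleCollaborators": [],
    "date": None, "singleSelect": None,
}


def normalize_records(records: list[dict], schema_fields: list[dict]) -> list[dict]:
    """Fill in fields the Airtable API omitted with type-appropriate empties."""
    normalized = []
    for record in records:
        fields = dict(record.get("fields", {}))
        for field in schema_fields:
            if field["name"] not in fields:
                fields[field["name"]] = EMPTY_VALUES.get(field["type"])
        normalized.append({**record, "fields": fields})
    return normalized
```

This makes "field is false/empty" explicit in every record instead of indistinguishable from "field doesn't exist".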

#### 2. **Enhanced List and Get Record Blocks**
(`backend/blocks/airtable/records.py`)
- Added `normalize_output` parameter (defaults to `true`) - ensures all
fields are present
- Added `include_field_metadata` parameter to optionally include field
type information
- When normalization is enabled, fetches schema once and normalizes all
records
- Can be disabled by setting `normalize_output=false` for raw Airtable
response

#### 3. **Simplified Create Records Block**
- Added `skip_normalization` parameter (default `false`) - normalized
output by default
- Records now always include all fields with proper empty values

#### 4. **Enhanced Create Base Block**
(`backend/blocks/airtable/bases.py`)
- Added `find_existing` parameter (defaults to `true`) to prevent
duplicate bases
- When finding an existing base, now fetches and returns table
information
- Added `was_created` output field to indicate whether base was created
or found

### Testing 📋

-  All Airtable block tests pass
-  Tested normalization with records containing missing checkbox fields
-  Verified all field types get appropriate empty values
-  Tested create base find-or-create functionality
-  Ran `poetry run format` and `poetry run lint`

### Migration Guide

This update makes the blocks behave more predictably:
- **List/Get Records**: All fields are now included by default. Set
`normalize_output: false` if you need the raw Airtable response
- **Create Records**: Simply creates records, no more upsert confusion
- **Create Base**: Prevents duplicate bases by default. Set
`find_existing: false` to force creation

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes were required - all changes are code-only.
2025-09-10 04:57:26 +00:00
Nicholas Tindle
cddeb185a8 feat(blocks): Add Gmail Draft Reply Block (#10882)
### Changes 🏗️

This PR adds a new `GmailDraftReplyBlock` that enables creating draft
replies to existing Gmail email threads. This block complements the
existing Gmail blocks by providing specialized functionality for
replying within email conversations.

**New Block: GmailDraftReplyBlock**
- **Purpose**: Creates draft replies to Gmail threads with intelligent
content type detection
- **Key Features**:
-  Automatic HTML detection: Draft replies containing HTML tags are
formatted as text/html
-  No hard-wrap for plain text: Plain text draft replies preserve
natural line flow (prevents 78-character hard-wrap issue)
-  Manual content type override: Use content_type parameter to force
specific format ("auto", "plain", or "html")
-  Reply-all functionality: Option to reply to all original recipients
-  Thread preservation: Maintains proper email threading with
In-Reply-To and References headers
  -  Full Unicode/emoji support with UTF-8 encoding
  -  File attachment support

**Implementation Details**:
- Retrieves parent message metadata to build proper reply context
- Automatically determines recipients based on reply mode (reply vs
reply-all)
- Adds "Re:" prefix to subject if not already present
- Maintains email thread continuity with proper headers
- Supports OAuth scopes: `gmail.modify` and `gmail.readonly`
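
For reference, a hedged sketch of how the reply headers might be assembled with the standard library (`_make_mime_text` is named in the PR; everything else here is illustrative):

```python
from email.mime.text import MIMEText


def build_reply_mime(body: str, subject: str, to: str,
                     parent_message_id: str, references: str) -> MIMEText:
    """Assemble a reply that Gmail will thread under the parent message."""
    # Auto-detect HTML vs plain text, mirroring the block's "auto" mode.
    subtype = "html" if "<" in body and ">" in body else "plain"
    msg = MIMEText(body, subtype, "utf-8")  # UTF-8 covers Unicode/emoji
    msg["To"] = to
    # Add the "Re:" prefix only if it is not already present.
    msg["Subject"] = subject if subject.lower().startswith("re:") else f"Re: {subject}"
    # Threading headers keep the reply inside the original conversation.
    msg["In-Reply-To"] = parent_message_id
    msg["References"] = f"{references} {parent_message_id}".strip()
    return msg
```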

**Inputs**:
- `threadId`: Thread ID to reply in
- `parentMessageId`: ID of the message being replied to  
- `to`, `cc`, `bcc`: Optional recipient lists
- `replyAll`: Boolean to reply to all original recipients
- `subject`: Optional custom subject
- `body`: Email body (plain text or HTML)
- `content_type`: Optional content type override
- `attachments`: Optional file attachments

**Outputs**:
- `draftId`: Created draft ID
- `messageId`: Draft message ID
- `threadId`: Thread ID
- `status`: Draft creation status
- `error`: Error message if any

Closes #10846

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  - [x] Block includes test input/output configuration
  - [x] Mock test handler implemented for unit testing
  - [x] Proper error handling included
  - [x] OAuth authentication properly configured
- [x] Content type detection logic tested (auto-detects HTML vs plain
text)
  - [x] Threading headers properly maintained for email conversations

<details>
  <summary>Additional Testing Notes</summary>
  
  - The block uses the existing Gmail authentication infrastructure
  - Test credentials and mock outputs are configured for CI/CD
- The `_make_mime_text` helper function ensures proper content
formatting
  - Reply-all logic properly deduplicates recipients
</details>

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes required - uses existing Google OAuth
configuration.

<details>
  <summary>Configuration Compatibility</summary>

  - Uses existing `GOOGLE_OAUTH_IS_CONFIGURED` flag
  - Leverages existing Google OAuth credentials system
  - No new environment variables or services required
</details>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2025-09-10 04:24:15 +00:00
Reinier van der Leer
08a3fd6d26 Revert "fix(backend): Improve GitHub PR URL validation and API URL generation" (#10890)
Reverts Significant-Gravitas/AutoGPT#10317

The changes omit the hostname from the "prepared" URL, breaking some
GitHub blocks.
2025-09-09 21:47:51 +00:00
Nicholas Tindle
39b30bc82c ci: Add 'Bash(gh pr edit:*)' to allowed tools for claude (#10885) 2025-09-09 12:36:46 -05:00
Reinier van der Leer
2df0e2b750 fix(backend/api): Fix & add tests for APIKeyAuthenticator (#10881)
- Resolves #10875

### Changes 🏗️

- Fix use of `super().__call__` in `APIKeyAuthenticator.__call__`
- Fix non-ASCII API key validation
- Add tests for `APIKeyAuthenticator`
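
A minimal sketch of the `super().__call__` pattern, assuming the authenticator subclasses FastAPI's `APIKeyHeader` (the class internals here are assumptions, not the actual implementation):

```python
from fastapi import HTTPException, Request
from fastapi.security import APIKeyHeader


class APIKeyAuthenticator(APIKeyHeader):
    async def __call__(self, request: Request) -> str:
        # Let the parent class extract the API key header from the request.
        api_key = await super().__call__(request)
        # Reject missing or non-ASCII keys up front instead of failing later.
        if api_key is None or not api_key.isascii():
            raise HTTPException(status_code=401, detail="Invalid API key")
        return api_key
```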

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test implementations have been verified manually
  - [x] All the new tests pass
2025-09-09 16:09:51 +00:00
Ubbe
925f249ce1 feat(frontend): use new output renderers on new run page (#10876)
## Changes 🏗️

Integrating the great work @ntindle did on the rich agent output
renderers into the new Agent run page in the library 💜

- Implemented enhanced output rendering in `<RunDetails />` using the
shared output-renderers
- Added `<RunOutputs />` sub-component at
`RunDetails/components/RunOutputs.tsx` that:
- [x] builds items from `run.outputs`, extracts metadata, picks a
renderer via `globalRegistry`, and falls back to `TextRenderer`
- [x] renders `<OutputActions />` for copy/download and a list of
`<OutputItems />`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run an agent on the view which outputs rich content
- [x] See the output use the new renderers; for example, code is
highlighted

### For configuration changes:

None
2025-09-09 05:44:37 +00:00
Ubbe
e8cf3edbf4 feat(frontend): add timezone to new library agent page (#10874)
## Changes 🏗️

<img width="800" height="756" alt="Screenshot 2025-09-09 at 14 03 24"
src="https://github.com/user-attachments/assets/65f3e3a8-1ce0-491c-824a-601a494d3a36"
/>

<img width="600" height="493" alt="Screenshot 2025-09-09 at 14 03 28"
src="https://github.com/user-attachments/assets/457b37a3-6b3b-49b8-912c-c72cf06e8e58"
/>

Following the nice changes @ntindle did regarding timezones, bring them
into the new page:
- display the timezone when scheduling an agent on the new modal
- display the timezone for a schedule on the new schedule details view

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `agent-new-runs` flag ON
  - [x] Open an agent on the new page
- [x] On the new modal, create a schedule; it displays the timezone alert
  - [x] Once created, on the schedule view, it displays the timezone   

### For configuration changes:

None
2025-09-09 05:37:48 +00:00
Nicholas Tindle
dc03ea718c fix(backend): Report process errors to Sentry before cleanup (#10873)
Adds error reporting to Sentry for exceptions (excluding
KeyboardInterrupt and SystemExit) before executing process cleanup.
Silently ignores failures if Sentry is unavailable.
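
A sketch of the described behavior (the actual hook lives in the process-cleanup code; the service wrapper here is illustrative):

```python
import sentry_sdk


def run_service(service) -> None:
    try:
        service.run()
    except (KeyboardInterrupt, SystemExit):
        raise  # intentional shutdowns are not reported
    except BaseException as exc:
        try:
            sentry_sdk.capture_exception(exc)  # report before cleanup runs
        except Exception:
            pass  # silently ignore if Sentry is unavailable
        raise
    finally:
        service.cleanup()
```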


### Changes 🏗️
Adds Sentry error reporting to the process cleanup path
Adds the ability to disable Sentry

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all services with manual exception raising
  - [x] Remove those exceptions
  - [x] Make sure they show up in sentry
2025-09-09 04:56:51 +00:00
Nicholas Tindle
dbee580d80 build(frontend): Increase build process memory limit to 4 GiB (#10861)
Increase memory limit of frontend build process to 4GiB to fix
out-of-memory build failures
2025-09-08 05:04:17 +00:00
Nicholas Tindle
0325ec0a2c fix(frontend): Fix environment variable handling in Docker builds for dev/prod deployments (#10859)
Sentry was not being enabled in dev/prod deployments because environment
variables were being incorrectly overwritten during the Docker build
process.

### Changes 🏗️

- Fixed Dockerfile environment variable merging logic to prevent
`.env.default` from overwriting `.env.production` values
- Added `NODE_ENV=production` to build stage to ensure Next.js looks for
production env files
- Updated env file merging to only run when not in CI/CD (when
`.env.production` doesn't exist)
- When `.env.production` exists (CI/CD), now merges defaults with
production values properly

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Verify local Docker builds still work with `docker compose up`
- [ ] Verify dev deployment has `NEXT_PUBLIC_APP_ENV=dev` in built
JavaScript
- [ ] Verify prod deployment has `NEXT_PUBLIC_APP_ENV=prod` in built
JavaScript
- [ ] Verify Sentry is enabled in dev/prod deployments
(`isProdOrDev=true`)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Technical Details

**Root Cause:**
1. CI/CD workflow creates `.env.production` with correct values (e.g.,
`NEXT_PUBLIC_APP_ENV=dev`)
2. Dockerfile's env merging logic always created `.env` from
`.env.default`
3. Next.js loads `.env.production` first, then `.env` second
4. Since `.env` is loaded after `.env.production`, it overwrites the
values
5. `.env.default` has `NEXT_PUBLIC_APP_ENV=local`, causing `getAppEnv()`
to return "local" instead of "dev"/"prod"
6. This made `isProdOrDev` evaluate to `false`, disabling Sentry

**Solution:**
The Dockerfile now checks if `.env.production` exists:
- If yes (CI/CD): Merges `.env.default` + `.env.production` →
`.env.production` (production values take precedence)
- If no (local): Merges `.env.default` + `.env` → `.env` (user values
take precedence)

This ensures production deployments get the correct environment
variables while preserving local development workflow.
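
The merge semantics can be summarized in a few lines of Python (illustrative only; the real logic is shell in the Dockerfile):

```python
def merge_env(defaults: dict[str, str], overrides: dict[str, str]) -> dict[str, str]:
    """Later dict wins: override values take precedence over defaults."""
    return {**defaults, **overrides}


# CI/CD:  .env.production exists -> defaults + production, production wins
# Local:  no .env.production     -> defaults + user .env, user values win
merged = merge_env({"NEXT_PUBLIC_APP_ENV": "local"}, {"NEXT_PUBLIC_APP_ENV": "dev"})
assert merged["NEXT_PUBLIC_APP_ENV"] == "dev"
```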

🤖 Description generated + Investigation assisted with [Claude
Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-06 17:05:45 +00:00
Ubbe
3952a1a226 feat(frontend): smol improvements on new run view (#10854)
## Changes 🏗️

<img width="800" height="790" alt="Screenshot 2025-09-05 at 17 22 36"
src="https://github.com/user-attachments/assets/8b22424c-1968-4c4f-9eed-3d5d5185751d"
/>

- Make a nicer empty state and display it when there are no
runs/schedules
- Rename search param to `executionId` to mirror what was on the old
page
- Reduce polling during an active execution to 1.5s (3s is too slow,
maybe...)
- Make sure the run details page also updates when a run is happening

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Tested the above 

### For configuration changes:

None

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-09-06 09:02:25 +00:00
Krzysztof Czerwinski
cfc975d39b feat(backend): Type for API block data response (#10763)
Moving to auto-generated frontend types caused returned blocks data to
no longer have proper typing.

### Changes 🏗️

- Add `BlockInfo` model and `get_info` function that returns it to the
`Block` class, including costs
- Move `BlockCost` and `BlockCostType` to `block.py` to prevent circular
imports
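
A rough idea of the shape (field names are assumptions based on the description, not the actual model):

```python
from pydantic import BaseModel


class BlockCost(BaseModel):
    cost_type: str   # e.g. "run"
    cost_amount: int


class BlockInfo(BaseModel):
    """Typed block data returned by the API, including costs."""
    id: str
    name: str
    description: str
    costs: list[BlockCost]


# A Block's get_info() would return this model, so generated frontend
# types regain proper typing for block data:
info = BlockInfo(id="b1", name="Example", description="...",
                 costs=[BlockCost(cost_type="run", cost_amount=1)])
print(info.model_dump())
```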

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Endpoints using the new type work correctly

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-06 04:21:48 +00:00
Zamil Majdy
46e0f6cc45 feat(platform): Add recommended run schedule for agent execution (#10827)
## Summary

<img width="1000" alt="Screenshot 2025-09-02 at 9 46 49 PM"
src="https://github.com/user-attachments/assets/d78100c7-7974-4d37-a788-757764d8b6b7"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 24 PM"
src="https://github.com/user-attachments/assets/cd092963-8e26-4198-b65a-4416b2307a50"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 22 30 PM"
src="https://github.com/user-attachments/assets/e16b3bdb-c48c-4dec-9281-b2a35b3e21d0"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 38 PM"
src="https://github.com/user-attachments/assets/11d74a39-f4b4-4fce-8d30-0e6a925f3a9b"
/>

• Added recommended schedule cron expression as an optional input
throughout the platform
• Implemented complete data flow from builder → store submission → agent
library → run page
• Fixed UI layout issues including button text overflow and ensured
proper component reusability

## Changes

### Backend
- Added `recommended_schedule_cron` field to `AgentGraph` schema and
database migration
- Updated API models (`LibraryAgent`, `MyAgent`,
`StoreSubmissionRequest`) to include the new field
- Enhanced store submission approval flow to persist recommended
schedule to database

### Frontend
- Added recommended schedule input to builder page (SaveControl
component) with overflow-safe styling
- Updated store submission modal (PublishAgentModal) with schedule
configuration
- Enhanced agent run page with schedule tip display and pre-filled
schedule dialog
- Refactored `CronSchedulerDialog` with discriminated union types for
better reusability
- Fixed layout issues including button text truncation and popover width
constraints
- Implemented robust cron expression parsing with 100% reversibility
between UI and cron format

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-06 03:57:03 +02:00
Nicholas Tindle
c03af5c196 feat(frontend): allow sentry disable in dev (#10858)

### Changes 🏗️

Vercel is logging things it shouldn't.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verified in vercel
2025-09-05 20:29:39 +00:00
Nicholas Tindle
00cbfb8f80 feat(frontend): re-enable sentry in dev (#10857)

### Changes 🏗️
- We want sentry to actually work so we can do testing

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We're just re-enabling; it works in prod
2025-09-05 19:38:10 +00:00
Ubbe
3beafae955 feat(frontend): new agent library run page (#10835)
## Changes 🏗️

This is the new **Agent Library Run** page. Sorry in advance for the
massive PR 🙏🏽. I got carried away and it has been tricky to split it
(_maybe I abused the agent too much_ 🤔)

<img width="800" height="1085" alt="Screenshot 2025-09-04 at 13 58 33"
src="https://github.com/user-attachments/assets/b709edb9-d2b5-48ad-a04d-dddf10c89af3"
/>

<img width="800" height="338" alt="Screenshot 2025-09-04 at 13 54 51"
src="https://github.com/user-attachments/assets/efa28be2-d2dd-477f-af13-33ddd1d639dd"
/>

<img width="800" height="598" alt="Screenshot 2025-09-04 at 13 54 18"
src="https://github.com/user-attachments/assets/806ab620-3492-4c5b-b4e2-f17b89756dd8"
/>

- Schedules are now on the sidebar tabbed along with runs
- The whole UI has been updated to match the new designs and design
system
- There is no more "run draft" view as the modal is in charge of new
runs now 💪🏽
- The page is responsive and mobile friendly 📱 




https://github.com/user-attachments/assets/0e483062-0e50-4fa6-aaad-a1f6766df931

### Safety

I understand this is a lot of changes. However, it is all behind a
feature flag, `new-agent-runs`; when OFF, it will display the old
library agent view. The old library agent view can still be accessed
under `/library/legacy/{id}` for reference 👍🏽

### Testing

I haven't added any tests for now... 💆🏽 I want to get this enabled on dev so
we can start running our agents there through the new page and modal and
start catching edge-cases.

Tests will come later in the form of E2E for the happy paths, and
probably I will introduce [Vitest](https://vitest.dev/) + [Testing
Library](https://testing-library.com/) for the finer details...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the above

### For configuration changes:

None, the feature flag is already configured 🙏🏽

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-05 06:39:54 +00:00
seer-by-sentry[bot]
9cd186a2f3 fix(backend): Improve GitHub PR URL validation and API URL generation (#10317)
### Changes 🏗️

Fixes
[AUTOGPT-SERVER-4EN](https://sentry.io/organizations/significant-gravitas/issues/6731949478/).
The issue was that: Issue URL passed to PR file reader, regex failed,
leading to issue API call, returning object iterated as keys, causing
AttributeError.

- Refactor `prepare_pr_api_url` to improve validation of GitHub PR URLs.
- Update regex to specifically target github.com URLs.
- Raise ValueError with a descriptive message for invalid URLs.
- Correctly construct the API URL using the extracted repository path.
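
A hedged sketch of the described validation (note the follow-up revert in #10890 above: the merged version dropped the hostname from the prepared URL, so this illustration keeps the full API host):

```python
import re

PR_URL_RE = re.compile(r"^https://github\.com/([^/]+/[^/]+)/pull/(\d+)$")


def prepare_pr_api_url(pr_url: str) -> str:
    """Validate a github.com PR URL and build the matching API URL."""
    match = PR_URL_RE.match(pr_url)
    if not match:
        raise ValueError(f"Not a valid GitHub pull request URL: {pr_url!r}")
    repo_path, pr_number = match.groups()
    # Keep the full API host; omitting it is what broke the reverted version.
    return f"https://api.github.com/repos/{repo_path}/pulls/{pr_number}"
```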

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 265077

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6731949478/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test plan:
- [x] Provide an invalid GitHub PR URL and verify that a ValueError is
raised with a descriptive message.
- [x] Provide a valid GitHub PR URL and verify that the API URL is
correctly constructed.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
2025-09-05 06:35:45 +00:00
Nicholas Tindle
dcf26bd3d4 ci: update Claude code action (#10851)
Claude Code now uses `prompt`, not a system prompt
### Changes 🏗️

Swaps to `prompt` from the custom system prompt
### Checklist 📋

#### For code changes:
N/A
2025-09-05 06:33:09 +00:00
Bently
b97f097c9d feat(blocks): add AI Image Customizer block using Google's Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images
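
A minimal sketch of the underlying Replicate call (the block wraps this with platform credentials and output handling; the input keys are assumptions):

```python
import replicate

# Run Google's image model via Replicate; returns the generated image output.
output = replicate.run(
    "google/gemini-2.5-flash-image",
    input={
        "prompt": "A golden retriever wearing sunglasses",
        "image_input": [],       # optional reference image URLs (assumed key)
        "output_format": "png",  # jpg or png, per the block's options
    },
)
print(output)
```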

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-05 06:31:12 +01:00
Reinier van der Leer
ce24975a9d fix(backend): Unbreak get_webhook query (#10850)
- Resolves #10849

### Changes 🏗️

- Use `AGENT_PRESET_INCLUDE` in `INTEGRATION_WEBHOOK_INCLUDE` so the
`AgentPreset.from_db(..)` doesn't break

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Webhook ingress works
2025-09-04 19:00:05 +00:00
Bently
75c90e49ce feat(blocks): add AI Image Customizer block using Google's Nano Banana (#10845)
Add new AutoGPT Platform Block that uses google/gemini-2.5-flash-image
model via Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-04 18:51:32 +00:00
Zamil Majdy
2e38f132e7 feat(backend/store): implement sub-agent approval support (#10842)
## Summary
Implement comprehensive sub-agent approval flow following the business
requirements from the flow diagram.

<img width="1956" height="1448" alt="image"
src="https://github.com/user-attachments/assets/8de35e5b-9d3e-4dc2-bff0-47b49dbebc83"
/>


### Key Features
-  **Auto-approve sub-agents** when main agent is approved
-  **Handle all scenarios**: new listings, existing versions, missing
versions
-  **Transaction safety** with atomic operations via database
transactions
-  **Parallel processing** using asyncio.gather for performance
optimization
-  **Hidden from store** with isAvailable=false for all sub-agents

### Implementation Details
- **Replaced** `_get_missing_sub_store_listing` with comprehensive
`_handle_sub_agent_approvals`
- **Added** `_approve_sub_agent` function with early returns for clean,
readable code flow
- **Used** `transaction()` context manager to ensure data consistency
across operations
- **Process sub-agents in parallel** while maintaining transaction
integrity

### Business Logic Flow
1. **Check if sub-agent is already listed** in store
2. **If not listed**: create new store listing with `isAvailable=false`
3. **If listed but not approved**: approve the correct version  
4. **If correct version not listed**: create store listing version and
approve it
5. **If already approved**: no action needed (early return)

All sub-agents remain **hidden from public store** while being
internally approved for system use.
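
A condensed sketch of that flow (the two function names come from the PR; the transaction context and lookup helpers are placeholders, not the real `db.py` API):

```python
import asyncio


async def _handle_sub_agent_approvals(main_agent, sub_agents) -> None:
    """On main-agent approval, approve all sub-agents atomically, in parallel."""
    async with transaction():  # placeholder for the DB transaction context
        await asyncio.gather(*(_approve_sub_agent(sub) for sub in sub_agents))


async def _approve_sub_agent(sub_agent) -> None:
    listing = await get_store_listing(sub_agent)  # placeholder lookup
    if listing is None:
        # Not listed: create a new listing, hidden from the public store.
        await create_listing(sub_agent, is_available=False)
        return
    if listing.is_approved(sub_agent.version):
        return  # already approved: nothing to do (early return)
    if not listing.has_version(sub_agent.version):
        await create_listing_version(listing, sub_agent.version)
    await approve_version(listing, sub_agent.version)
```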

## Files Changed
- `backend/server/v2/store/db.py` - Core implementation of sub-agent
approval logic

## Test Plan
- [ ] Verify main agent approval triggers sub-agent approvals
- [ ] Test all sub-agent scenarios: new, existing unapproved, existing
approved
- [ ] Confirm sub-agents remain hidden (`isAvailable=false`)
- [ ] Validate transaction rollback on failures
- [ ] Check parallel processing works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 17:59:37 +00:00
Nicholas Tindle
5be6987d58 ci: Add model argument to Claude workflow (#10843) 2025-09-04 10:22:53 -05:00
Nicholas Tindle
b59592be9b feat(ci): Claude workflow for Dependabot PR analysis (#10841)
Created workflow to analyze Dependabot PRs with Claude, including
detailed dependency analysis and changelog review.


### Changes 🏗️
Adds workflow for claude to do dependabot

### Checklist 📋

#### For code changes:
N/A

#### For configuration changes:
N/A

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-09-04 14:48:07 +00:00
dependabot[bot]
7ea17df9ed chore(frontend/deps): Bump recharts from 2.15.3 to 3.1.2 in /autogpt_platform/frontend (#10807)
Bumps [recharts](https://github.com/recharts/recharts) from 2.15.3 to
3.1.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/recharts/recharts/releases">recharts's
releases</a>.</em></p>
<blockquote>
<h2>v3.1.2</h2>
<h2>What's Changed</h2>
<h3>Fix</h3>
<ul>
<li><code>Label/Polar Charts</code>: <code>Label</code> viewbox should
now be present in polar charts and address <a
href="https://redirect.github.com/recharts/recharts/issues/6030">recharts/recharts#6030</a>
by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6180">recharts/recharts#6180</a></li>
<li>Add LRU cache for string size measurements (<a
href="https://redirect.github.com/recharts/recharts/issues/3955">#3955</a>)
by <a
href="https://github.com/shreedharbhat98"><code>@​shreedharbhat98</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6176">recharts/recharts#6176</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2">https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2</a></p>
<h2>v3.1.1</h2>
<h2>What's Changed</h2>
<h3>Fix</h3>
<ul>
<li><code>General</code>: Don't apply duplicate IDs in the DOM by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6111">recharts/recharts#6111</a></li>
<li><code>Stacked Area/Bar</code>: give all graphical items their own
unique identifier and use that to select stacked data. Fixes issue where
stacked charts could not be created from the graphical item
<code>data</code> prop <a
href="https://redirect.github.com/recharts/recharts/issues/6073">recharts/recharts#6073</a>
by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a></li>
<li><code>Stacked Area/Bar</code>: exclude stacked axis domain when not
relevant for axis by <a
href="https://github.com/rinkstiekema"><code>@​rinkstiekema</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6162">recharts/recharts#6162</a>
fixes issue where numeric stacked charts would not render correctly</li>
<li><code>Area Chart</code>: ranged area chart - show active dot on both
points instead of just the top one by <a
href="https://github.com/sroy8091"><code>@​sroy8091</code></a> in <a
href="https://redirect.github.com/recharts/recharts/pull/6116">recharts/recharts#6116</a>
fixes <a
href="https://redirect.github.com/recharts/recharts/issues/6080">#6080</a></li>
<li><code>Polar Charts/Label</code>: fix <code>Label</code> in polar
charts by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6126">recharts/recharts#6126</a></li>
<li><code>Scatter/ErrorBar</code>: choose implicit Scatter ErrorBar
direction based on chart layout (to be the same as 2.x) by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6159">recharts/recharts#6159</a></li>
<li><code>X/YAxis/Reference Components</code>: allow axis values and
reference items to render when there is no data but there is a
domain/explicit ticks set by <a
href="https://github.com/ethphan"><code>@​ethphan</code></a> in <a
href="https://redirect.github.com/recharts/recharts/pull/6161">recharts/recharts#6161</a></li>
<li><code>X/YAxis</code>: pass axis padding info to custom tick
components by <a
href="https://github.com/shreedharbhat98"><code>@​shreedharbhat98</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6163">recharts/recharts#6163</a></li>
</ul>
<h3>Chore / Testing</h3>
<ul>
<li>good progress on our journey to enable
<code>strictNullChecks</code></li>
<li>addition of playwright visual regression tests to CI</li>
<li>split <code>Animate</code> into <code>JavascriptAnimate</code> and
<code>CSSTransitionAnimate</code> by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6175">recharts/recharts#6175</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/sroy8091"><code>@​sroy8091</code></a>
made their first contribution in <a
href="https://redirect.github.com/recharts/recharts/pull/6116">recharts/recharts#6116</a></li>
<li><a href="https://github.com/ethphan"><code>@​ethphan</code></a> made
their first contribution in <a
href="https://redirect.github.com/recharts/recharts/pull/6161">recharts/recharts#6161</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1">https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1</a></p>
<h2>v3.1.0</h2>
<h2>What's Changed</h2>
<p>Bug fixes (old and new) and a few new hooks post 3.0 launch!</p>
<h3>Feat</h3>
<p>More hooks!</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="079545c631"><code>079545c</code></a>
3.1.2</li>
<li><a
href="c6c26b5eae"><code>c6c26b5</code></a>
Label viewbox in polar charts (<a
href="https://redirect.github.com/recharts/recharts/issues/6180">#6180</a>)</li>
<li><a
href="6c4acb7299"><code>6c4acb7</code></a>
Add CompositeAnimationManager to allow testing for multiple concurrent
animat...</li>
<li><a
href="e56913daed"><code>e56913d</code></a>
feat: implement LRU cache for string size measurements (<a
href="https://redirect.github.com/recharts/recharts/issues/3955">#3955</a>)
(<a
href="https://redirect.github.com/recharts/recharts/issues/6176">#6176</a>)</li>
<li><a
href="cf648902cd"><code>cf64890</code></a>
3.1.1</li>
<li><a
href="c4ec635cf5"><code>c4ec635</code></a>
Split Animate into JavascriptAnimate and CSSTransitionAnimate (<a
href="https://redirect.github.com/recharts/recharts/issues/6175">#6175</a>)</li>
<li><a
href="e243a12d02"><code>e243a12</code></a>
Add animation tests for Rectangle (<a
href="https://redirect.github.com/recharts/recharts/issues/6174">#6174</a>)</li>
<li><a
href="a1e07aaa1c"><code>a1e07aa</code></a>
docs: convert class components to functional components (<a
href="https://redirect.github.com/recharts/recharts/issues/6172">#6172</a>)</li>
<li><a
href="1f8d4cb84d"><code>1f8d4cb</code></a>
fix: Exclude stacked axis domain when not relevant for axis (<a
href="https://redirect.github.com/recharts/recharts/issues/6162">#6162</a>)</li>
<li><a
href="fe28f2e63a"><code>fe28f2e</code></a>
chore(deps-dev): bump the storybook group with 8 updates (<a
href="https://redirect.github.com/recharts/recharts/issues/6166">#6166</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/recharts/recharts/compare/v2.15.3...v3.1.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=recharts&package-manager=npm_and_yarn&previous-version=2.15.3&new-version=3.1.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:22:30 +00:00
Reinier van der Leer
22e692bdda fix(frontend): Update in-view run with real-time updates (#10839)
- Resolves #10838

### Changes 🏗️

- Update `selectedRun` with received graph execution update if
applicable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Agent outputs appear in real-time
2025-09-04 13:43:44 +00:00
Nicholas Tindle
901e9eba5d feat(frontend): Enhanced output rendering system for agent runs (#10819)
## Summary
Introduces a modular, extensible output renderer system supporting
multiple content types (text, code, images, videos, JSON, markdown) for
agent run outputs. The system includes smart clipboard operations,
concatenated downloads, and rich markdown rendering with LaTeX math and
video embedding support.

## Changes 🏗️

### Core Output Rendering System
- **Added extensible renderer architecture**
(`output-renderers/types.ts`)
  - Plugin-based system with priority ordering
  - Registry pattern for automatic renderer selection
  - Support for custom metadata and MIME types
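
As a rough sketch of the registry pattern described above (the names here are hypothetical, not the actual `output-renderers/types.ts` API), renderers register with a priority and the highest-priority renderer that claims a value is selected:

```ts
import type { ReactNode } from "react";

// Hypothetical renderer interface; the real types live in output-renderers/types.ts.
export interface OutputRenderer {
  name: string;
  priority: number; // higher-priority renderers are tried first
  canRender(value: unknown, mimeType?: string): boolean;
  render(value: unknown): ReactNode;
}

const registry: OutputRenderer[] = [];

export function registerRenderer(renderer: OutputRenderer): void {
  registry.push(renderer);
  registry.sort((a, b) => b.priority - a.priority); // keep highest priority first
}

// Automatic renderer selection: first renderer that can handle the value wins.
export function selectRenderer(value: unknown, mimeType?: string): OutputRenderer | undefined {
  return registry.find((r) => r.canRender(value, mimeType));
}
```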

### Output Renderers
- **TextRenderer**: Plain text with proper formatting and line breaks
- **CodeRenderer**: Syntax-highlighted code blocks with language
detection
- **JSONRenderer**: Collapsible, formatted JSON with syntax highlighting
- **ImageRenderer**: Image display with support for URLs, data URIs, and
file uploads
- **VideoRenderer**: Embedded video player for YouTube, Vimeo, and
direct video files
- **MarkdownRenderer**: Rich markdown with:
  - GitHub Flavored Markdown (tables, task lists, strikethrough)
- LaTeX math rendering via KaTeX (inline `$...$` and display `$$...$$`)
  - Syntax highlighting via highlight.js
  - Video embedding (YouTube/Vimeo URLs auto-convert to embeds)
  - Clickable heading anchors
  - Dark mode support
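
A plausible wiring of the remark/rehype plugins listed under "Dependencies Added" below (the component name and props here are hypothetical, not the actual MarkdownRenderer):

```tsx
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm"; // tables, task lists, strikethrough
import remarkMath from "remark-math"; // $...$ and $$...$$ math syntax
import rehypeKatex from "rehype-katex"; // render math via KaTeX
import rehypeHighlight from "rehype-highlight"; // syntax highlighting via highlight.js
import rehypeSlug from "rehype-slug"; // heading ids
import rehypeAutolinkHeadings from "rehype-autolink-headings"; // clickable heading anchors
import "katex/dist/katex.min.css";

export function MarkdownOutput({ value }: { value: string }) {
  return (
    <ReactMarkdown
      remarkPlugins={[remarkGfm, remarkMath]}
      rehypePlugins={[rehypeKatex, rehypeHighlight, rehypeSlug, rehypeAutolinkHeadings]}
    >
      {value}
    </ReactMarkdown>
  );
}
```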

### User Interface Components
- **OutputItem**: Individual output display with renderer selection
- **OutputActions**: Hover-based action buttons for:
  - Copy to clipboard with smart MIME type detection
- Download with intelligent concatenation (text files merge, binaries
separate)
  - Share functionality (placeholder for future implementation)
- **AgentRunOutputView**: Main output view component with feature flag
integration

### Clipboard & Download Features
- Smart clipboard operations using native ClipboardItem API
- MIME type detection and browser capability checking
- Fallback strategies for unsupported content types
- Concatenated downloads for text-based outputs
- Individual downloads for binary content
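
A sketch of the copy path with capability checking and a plain-text fallback (the helper name is hypothetical; `ClipboardItem.supports()` only exists in newer browsers, hence the guarded access):

```ts
// Hypothetical helper: copy an output with its MIME type, with fallbacks.
async function copyOutput(content: string, mimeType: string): Promise<void> {
  const hasClipboardItem = typeof ClipboardItem !== "undefined";
  const supportsFn = hasClipboardItem
    ? (ClipboardItem as unknown as { supports?: (type: string) => boolean }).supports
    : undefined;
  const typeSupported = supportsFn ? supportsFn(mimeType) : hasClipboardItem;

  if (typeSupported && mimeType !== "text/plain") {
    const blob = new Blob([content], { type: mimeType });
    await navigator.clipboard.write([new ClipboardItem({ [mimeType]: blob })]);
    return;
  }
  // Fallback strategy for unsupported content types: plain-text copy.
  await navigator.clipboard.writeText(content);
}
```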

### Feature Flag Integration
- Added `ENABLE_ENHANCED_OUTPUT_HANDLING` flag to LaunchDarkly
- Backwards compatible with existing output display
- Graceful fallback for disabled feature flag

### Styling & UX
- Max width constraints (950px card, 660px content)
- Hover-based action buttons for clean interface
- Dark mode support across all renderers
- Responsive design for various content types
- Loading states and error handling

## Test Plan 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test Scenarios:
- [x] **Basic Output Rendering**
  - [x] Execute agent with text output - verify proper formatting
  - [x] Execute agent with JSON output - verify collapsible tree view
  - [x] Execute agent with code output - verify syntax highlighting
  
- [x] **Rich Content**
  - [x] Test markdown rendering with headers, lists, tables
  - [x] Test LaTeX math expressions (inline and display)
  - [x] Test code blocks within markdown
  - [x] Test task lists and strikethrough
  
- [x] **Media Handling**
  - [x] Upload and display PNG/JPEG images
  - [x] Test video URL embedding (YouTube/Vimeo)
  - [x] Test direct video file playback
  
- [x] **Clipboard Operations**
  - [x] Copy plain text output
  - [x] Copy formatted code
  - [x] Copy JSON data
  - [x] Copy markdown content
  - [x] Verify fallback for unsupported MIME types
  
- [x] **Download Functionality**
  - [x] Download single text output
  - [x] Download multiple text outputs (verify concatenation)
  - [x] Download mixed content (verify separate files)
  - [x] Download images and binary content
  
- [x] **Feature Flag**
  - [x] Enable flag - verify enhanced rendering
  - [x] Disable flag - verify fallback to original view
  - [x] Check backwards compatibility
 
  
- [x] **Edge Cases**
  - [x] Large JSON objects (performance)
  - [x] Very long text outputs
  - [x] Mixed content types in single run
  - [x] Malformed markdown
  - [x] Invalid video URLs

## Dependencies Added
- `react-markdown` (9.0.3) - Already present
- `remark-gfm` (4.0.1) - GitHub Flavored Markdown
- `remark-math` (6.0.0) - LaTeX math support
- `rehype-katex` (7.0.1) - Math rendering
- `katex` (0.16.22) - Math typesetting
- `rehype-highlight` (7.0.2) - Syntax highlighting
- `highlight.js` (11.11.1) - Highlighting library
- `rehype-slug` (6.0.0) - Heading anchors
- `rehype-autolink-headings` (7.1.0) - Clickable headings

## Notes
- Mermaid diagram support was attempted but removed due to compatibility
issues
- Share functionality is stubbed out for future implementation
- PNG file upload rendering issue has logging in place for debugging
- All components follow existing UI patterns and use Tailwind CSS

## Screenshots
<img width="1656" height="1250" alt="image"
src="https://github.com/user-attachments/assets/af7542fe-db89-4521-aaf5-19e33d48a409"
/>
## Related Issues
- Implements SECRT-1209

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-04 13:37:26 +00:00
dependabot[bot]
bbd6709bd6 chore(backend/deps): Bump cryptography from 43.0.3 to 45.0.7 in /autogpt_platform/backend (#10810)
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.3
to 45.0.7.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>45.0.7 - 2025-09-01</p>
<ul>
<li>Added a function to support an upcoming <code>pyOpenSSL</code> release.</li>
</ul>
<p>45.0.6 - 2025-08-05</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.2.</li>
</ul>
<p>45.0.5 - 2025-07-02</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.1.</li>
</ul>
<p>45.0.4 - 2025-06-09</p>
<ul>
<li>Fixed decrypting PKCS#8 files encrypted with SHA1-RC4. (This is not considered secure, and is supported only for backwards compatibility.)</li>
</ul>
<p>45.0.3 - 2025-05-25</p>
<ul>
<li>Fixed decrypting PKCS#8 files encrypted with long salts (this impacts keys encrypted by Bouncy Castle).</li>
<li>Fixed decrypting PKCS#8 files encrypted with DES-CBC-MD5. While wildly insecure, this remains prevalent.</li>
</ul>
<p>45.0.2 - 2025-05-17</p>
<ul>
<li>Fixed using <code>mypy</code> with <code>cryptography</code> on older versions of Python.</li>
</ul>
<p>45.0.1 - 2025-05-17</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.0.</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f52a3e1496"><code>f52a3e1</code></a>
prep for a 45.0.7 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13378">#13378</a>)</li>
<li><a
href="66198c23c9"><code>66198c2</code></a>
Bump for release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13249">#13249</a>)</li>
<li><a
href="3e53a233b6"><code>3e53a23</code></a>
Bump for 45.0.5 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13135">#13135</a>)</li>
<li><a
href="678c0c59f7"><code>678c0c5</code></a>
prepare for 45.0.4 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13058">#13058</a>)</li>
<li><a
href="5038495987"><code>5038495</code></a>
backports for 45.0.3 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12979">#12979</a>)</li>
<li><a
href="f81c07535d"><code>f81c075</code></a>
Backport mypy fixes for release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12930">#12930</a>)</li>
<li><a
href="8ea28e0bc7"><code>8ea28e0</code></a>
bump for 45.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12922">#12922</a>)</li>
<li><a
href="67840977c9"><code>6784097</code></a>
bump for 45 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12886">#12886</a>)</li>
<li><a
href="2d9c1c9cbe"><code>2d9c1c9</code></a>
bump MSRV to 1.74 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12919">#12919</a>)</li>
<li><a
href="6c18874cc2"><code>6c18874</code></a>
Bump BoringSSL, OpenSSL, AWS-LC in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/12918">#12918</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/43.0.3...45.0.7">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=43.0.3&new-version=45.0.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:35:07 +00:00
dependabot[bot]
3ec1721d6d chore(libs/deps-dev): Bump ruff from 0.12.9 to 0.12.11 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10804)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.12.9 to 0.12.11
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.12.11</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Extend <code>AIR311</code> and
<code>AIR312</code> rules (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20082">#20082</a>)</li>
<li>[<code>airflow</code>] Replace wrong path
<code>airflow.io.storage</code> with <code>airflow.io.store</code>
(<code>AIR311</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20081">#20081</a>)</li>
<li>[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx-in-async-function</code>
(<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20091">#20091</a>)</li>
<li>[<code>flake8-logging-format</code>] Add auto-fix for f-string
logging calls (<code>G004</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19303">#19303</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add autofix for
<code>PTH211</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20009">#20009</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Make <code>PTH100</code> fix
unsafe because it can change behavior (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20100">#20100</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>pyflakes</code>, <code>pylint</code>] Fix false positives
caused by <code>__class__</code> cell handling (<code>F841</code>,
<code>PLE0117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20048">#20048</a>)</li>
<li>[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code>
matching for top-level modules (<code>F401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20115">#20115</a>)</li>
<li>[<code>ruff</code>] Fix false positive for t-strings in
<code>default-factory-kwarg</code> (<code>RUF026</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20032">#20032</a>)</li>
<li>[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19647">#19647</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Handle empty t-strings in
<code>unnecessary-empty-iterable-within-deque-call</code>
(<code>RUF037</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20045">#20045</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix incorrect <code>D413</code> links in docstrings convention FAQ
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/20089">#20089</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Update links to the table showing
the correspondence between <code>os</code> and <code>pathlib</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20103">#20103</a>)</li>
</ul>
<h2>Contributors</h2>
<ul>
<li><a
href="https://github.com/AlexWaygood"><code>@​AlexWaygood</code></a></li>
<li><a href="https://github.com/Avasam"><code>@​Avasam</code></a></li>
<li><a
href="https://github.com/BurntSushi"><code>@​BurntSushi</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a
href="https://github.com/Glyphack"><code>@​Glyphack</code></a></li>
<li><a
href="https://github.com/JelleZijlstra"><code>@​JelleZijlstra</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/MatthewMckee4"><code>@​MatthewMckee4</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a
href="https://github.com/PrettyWood"><code>@​PrettyWood</code></a></li>
<li><a href="https://github.com/Renkai"><code>@​Renkai</code></a></li>
<li><a href="https://github.com/TaKO8Ki"><code>@​TaKO8Ki</code></a></li>
<li><a
href="https://github.com/amyreese"><code>@​amyreese</code></a></li>
<li><a href="https://github.com/carljm"><code>@​carljm</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/dhruvmanila"><code>@​dhruvmanila</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/github-actions"><code>@​github-actions</code></a></li>
<li><a
href="https://github.com/hamirmahal"><code>@​hamirmahal</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.12.11</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Extend <code>AIR311</code> and
<code>AIR312</code> rules (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20082">#20082</a>)</li>
<li>[<code>airflow</code>] Replace wrong path
<code>airflow.io.storage</code> with <code>airflow.io.store</code>
(<code>AIR311</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20081">#20081</a>)</li>
<li>[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx-in-async-function</code>
(<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20091">#20091</a>)</li>
<li>[<code>flake8-logging-format</code>] Add auto-fix for f-string
logging calls (<code>G004</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19303">#19303</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add autofix for
<code>PTH211</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20009">#20009</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Make <code>PTH100</code> fix
unsafe because it can change behavior (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20100">#20100</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>pyflakes</code>, <code>pylint</code>] Fix false positives
caused by <code>__class__</code> cell handling (<code>F841</code>,
<code>PLE0117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20048">#20048</a>)</li>
<li>[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code>
matching for top-level modules (<code>F401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20115">#20115</a>)</li>
<li>[<code>ruff</code>] Fix false positive for t-strings in
<code>default-factory-kwarg</code> (<code>RUF026</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20032">#20032</a>)</li>
<li>[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19647">#19647</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Handle empty t-strings in
<code>unnecessary-empty-iterable-within-deque-call</code>
(<code>RUF037</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20045">#20045</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix incorrect <code>D413</code> links in docstrings convention FAQ
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/20089">#20089</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Update links to the table showing
the correspondence between <code>os</code> and <code>pathlib</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20103">#20103</a>)</li>
</ul>
<h2>0.12.10</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Implement fix for
<code>maxsplit</code> without separator (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19851">#19851</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add fixes for <code>PTH102</code>
and <code>PTH103</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19514">#19514</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>isort</code>] Handle multiple continuation lines after module
docstring (<code>I002</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19818">#19818</a>)</li>
<li>[<code>pyupgrade</code>] Avoid reporting <code>__future__</code>
features as unnecessary when they are used (<code>UP010</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19769">#19769</a>)</li>
<li>[<code>pyupgrade</code>] Handle nested <code>Optional</code>s
(<code>UP045</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19770">#19770</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pycodestyle</code>] Make <code>E731</code> fix unsafe instead
of display-only for class assignments (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19700">#19700</a>)</li>
<li>[<code>pyflakes</code>] Add secondary annotation showing previous
definition (<code>F811</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19900">#19900</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix description of global config file discovery strategy (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19188">#19188</a>)</li>
<li>Update outdated links to <a
href="https://typing.python.org/en/latest/source/stubs.html">https://typing.python.org/en/latest/source/stubs.html</a>
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19992">#19992</a>)</li>
<li>[<code>flake8-annotations</code>] Remove unused import in example
(<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20000">#20000</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c2bc15bc15"><code>c2bc15b</code></a>
Bump 0.12.11 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20136">#20136</a>)</li>
<li><a
href="e586f6dcc4"><code>e586f6d</code></a>
[ty] Benchmarks for problematic implicit instance attributes cases (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20133">#20133</a>)</li>
<li><a
href="76a6b7e3e2"><code>76a6b7e</code></a>
[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code> matching
for top-level modules (`F4...</li>
<li><a
href="1ce65714c0"><code>1ce6571</code></a>
Move GitLab output rendering to <code>ruff_db</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20117">#20117</a>)</li>
<li><a
href="d9aaacd01f"><code>d9aaacd</code></a>
[ty] Evaluate reachability of non-definitely-bound to Ambiguous (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19579">#19579</a>)</li>
<li><a
href="18eaa659c1"><code>18eaa65</code></a>
[ty] Introduce a representation for the top/bottom materialization of an
inva...</li>
<li><a
href="af259faed5"><code>af259fa</code></a>
[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx</code> (<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20091">#20091</a>)</li>
<li><a
href="d75ef3823c"><code>d75ef38</code></a>
[ty] print diagnostics with fully qualified name to disambiguate some
cases (...</li>
<li><a
href="89ca493fd9"><code>89ca493</code></a>
[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (#...</li>
<li><a
href="4b80f5fa4f"><code>4b80f5f</code></a>
[ty] Optimize TDD atom ordering (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20098">#20098</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.9...0.12.11">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.12.9&new-version=0.12.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:26:59 +00:00
Swifty
8c8a2ab0c2 feat(blocks): Add Bannerbear API block for text overlay on images (#10768)
## Summary
- Implemented a new Bannerbear API block that enables adding text
overlays to images using template designs
- Block supports customizable text styling (color, font, size, weight,
alignment)
- Always uses synchronous API mode for immediate image generation
results


[agent_ead942d9-58a2-4be6-bdb3-99010c489466.json](https://github.com/user-attachments/files/22027352/agent_ead942d9-58a2-4be6-bdb3-99010c489466.json)

<img width="140" height="572" alt="Screenshot 2025-08-28 at 16 28 35"
src="https://github.com/user-attachments/assets/096b532b-31dc-4ca6-bd68-c00b7594426c"
/>


## Features
- **Text overlay capabilities**: Add multiple text layers to images
using Bannerbear templates
- **Customizable styling**: Support for color, font family, font size,
font weight, and text alignment
- **Image support**: Optional ability to add images to templates
- **Smart field handling**: Only sends non-empty optional parameters to
the API
- **Webhook & metadata**: Advanced options for webhook notifications and
custom metadata

## Implementation Details
- Created provider configuration with API key authentication
- Implemented `BannerbearTextOverlayBlock` with proper input/output
schemas
- Extracted API calls to private method `_make_api_request()` for test
mocking support
- Follows SDK guide patterns and integrates with AutoGPT platform
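
As a sketch of the "smart field handling" above, i.e. forwarding only non-empty optional parameters (field names follow Bannerbear's images API as documented; the builder itself is hypothetical, and the real block is Python):

```ts
// Hypothetical payload builder for the Bannerbear images endpoint.
interface TextLayer {
  name: string; // template layer to modify
  text: string;
  color?: string; // e.g. "#FF0000"
  font_family?: string; // e.g. "Roboto"
}

function buildPayload(
  templateId: string,
  layers: TextLayer[],
  opts: { webhookUrl?: string; metadata?: string } = {},
): Record<string, unknown> {
  const payload: Record<string, unknown> = {
    template: templateId,
    // Drop empty optional fields so the API only receives meaningful values.
    modifications: layers.map((layer) =>
      Object.fromEntries(
        Object.entries(layer).filter(([, v]) => v !== undefined && v !== ""),
      ),
    ),
  };
  if (opts.webhookUrl) payload.webhook_url = opts.webhookUrl;
  if (opts.metadata) payload.metadata = opts.metadata;
  return payload;
}
```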

## Use Case
This block will be used in the Ad generator agent for creating dynamic
marketing materials and social media graphics with text overlays.

## Test plan
- [x] Block imports successfully
- [x] Block instantiates with unique ID
- [x] Code passes linting and formatting checks
- [x] Manual testing with actual Bannerbear API key
- [x] Integration testing with Ad generator agent
2025-09-04 13:15:00 +00:00
Krzysztof Czerwinski
4041e1f39c feat(backend/infra): Cleanup platform and db docker compose files (#10830)
Supabase `db/docker/docker-compose.yml` overrides env vars set in
`autogpt_platform/.env` file. This PR fixes that and simplifies the
compose files further.

### Changes 🏗️

`autogpt_platform/docker-compose.platform.yml`:
- Move hardcoded `DATABASE_URL` and `DIRECT_URL` to `x-backend-env` on
top as it repeats for most services.
- Remove `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from
`rabbitmq` service and use env files instead

`autogpt_platform/db/docker/docker-compose.yml`:
- Remove hardcoded env vars from `x-supabase-env` - these are already
defined in `.env`
- Remove env vars from services that are already defined in `.env` files

*Changes to db compose file only affect self-hosted Supabase*

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Platform, db works when self-hosting
2025-09-04 12:01:49 +00:00
Nicholas Tindle
7cbdc1ad1a security(frontend): Upgrade next from 15.4.6 to 15.4.7 (#10820)

### Snyk has created this PR to fix 1 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

|  | Issue |  
:-------------------------:|:-------------------------
![high
severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/h.png
'high severity') | Server-side Request Forgery (SSRF)
<br/>[SNYK-JS-NEXT-12299318](https://snyk.io/vuln/SNYK-JS-NEXT-12299318)




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Server-side Request Forgery
(SSRF)](https://learn.snyk.io/lesson/ssrf-server-side-request-forgery/?loc&#x3D;fix-pr)


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-04 09:18:12 +00:00
Reinier van der Leer
2a19aa0ed3 fix(frontend/library): Show total runs count above runs list (#10832)
- Resolves #10831

### Changes 🏗️

- Show number of total runs instead of currently loaded runs
- Show loading spinner instead of zero while loading

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Counter shows number of total runs, even if it exceeds number of
currently loaded items
2025-09-04 06:43:19 +00:00
Nicholas Tindle
6d39dfe382 feat(frontend): Add admin button to user profile dropdown (#10774)

### Need 💡

This PR addresses Linear issue
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)
by adding an "Admin" button to the user account dropdown menu. This
button is only visible to users with an "admin" role and provides direct
navigation to the admin marketplace management page, making existing
admin functionalities accessible from the new UI.

### Changes 🏗️


- **Added Admin Icon**: Integrated `IconSliders` into the `IconType`
enum and `getAccountMenuOptionIcon` function.
- **Dynamic Menu Generation**: Introduced
`getAccountMenuItems(userRole?: string)` to dynamically construct the
account menu. This function conditionally adds an "Admin" menu item
(linking to `/admin/marketplace`) if the `userRole` is "admin".
- **Navbar Integration**: Updated `NavbarView.tsx` to utilize the
`useSupabase` hook to retrieve the current user's role and then render
the account menu using the new dynamic `getAccountMenuItems` function
for both desktop and mobile views.
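
A minimal sketch of what the dynamic menu construction plausibly looks like (the item shape and the non-admin entries here are hypothetical):

```ts
type AccountMenuItem = { icon: string; text: string; link: string };

function getAccountMenuItems(userRole?: string): AccountMenuItem[] {
  const items: AccountMenuItem[] = [
    { icon: "IconEdit", text: "Edit profile", link: "/profile" },
  ];
  if (userRole === "admin") {
    // Only admins get this entry; it links to the admin marketplace page.
    items.push({ icon: "IconSliders", text: "Admin", link: "/admin/marketplace" });
  }
  items.push({ icon: "IconLogOut", text: "Log out", link: "/logout" });
  return items;
}
```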

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Log in as an admin user and verify the "Admin" button appears in
the account dropdown.
- [x] Click the "Admin" button and confirm navigation to
`/admin/marketplace`.
- [x] Log in as a non-admin user and verify the "Admin" button does not
appear in the account dropdown.
- [x] Verify all other existing menu items (e.g., "Edit profile", "Log
out") function correctly for both admin and non-admin users.
  - [x] Test the above scenarios on both desktop and mobile views.

---
Linear Issue:
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-03 09:26:24 +00:00
Ubbe
57ecc10535 fix(frontend): typed inputs in new run modal (#10799)
## Changes 🏗️

###  Make the **Credentials Inputs** show up on the new Run Agent Modal
<img width="450" height="784" alt="Screenshot 2025-09-02 at 00 54 19"
src="https://github.com/user-attachments/assets/26ad8242-a1bc-45f6-9149-a3d207683679"
/>


### Fixes on other modals...

<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 40"
src="https://github.com/user-attachments/assets/fa2f9ed9-207b-4599-9e60-3e37c4be6ea9"
/>
 
<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 44"
src="https://github.com/user-attachments/assets/92e062a7-f161-423e-b6c9-f998fbdef102"
/>

<img width="450" height="634" alt="Screenshot 2025-09-02 at 00 47 06"
src="https://github.com/user-attachments/assets/93009809-0df0-44c5-b2d2-a9aa0f501312"
/>


- always use buttons/inputs in small size (due to the tight space)
- use components from the design system
- always use pretty scrollbars
- Configure [`tailwind-scrollbar`](https://github.com/adoxography/tailwind-scrollbar) for pretty scrollbars
- prevent content in the dialog from overflowing when scrollable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent run modal flag enabled
  - [x] Check the above 

#### For configuration changes:

None
2025-09-03 06:58:07 +00:00
Reinier van der Leer
4928ce3f90 feat(library): Create presets from runs (#10823)
- Resolves #9307

### Changes 🏗️

- feat(library): Create presets from runs
  - Prevent creating preset from run with unknown credentials
- Fix running presets with credentials
  - Add `credential_inputs` parameter to `execute_preset` endpoint

API:
- Return `GraphExecutionMeta` from `*/execute` endpoints

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent that *does not* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
- Go to `/library/agents/[id]` for an agent that *does* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Error toast appears with descriptive message
- Initiate a new run; once finished, click "Create preset from run";
fill out the dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
2025-09-03 01:26:12 +00:00
Reinier van der Leer
e16e69ca55 feat(library, executor): Make "Run Again" work with credentials (#10821)
- Resolves [OPEN-2549: Make "Run again" work with credentials in
`AgentRunDetailsView`](https://linear.app/autogpt/issue/OPEN-2549/make-run-again-work-with-credentials-in-agentrundetailsview)
- Resolves #10237

### Changes 🏗️

- feat(frontend/library): Make "Run Again" button work for runs with
credentials
- feat(backend/executor): Store passed-in credentials on
`GraphExecution`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` for an agent with credentials inputs
  - Run the agent manually
    - [x] -> runs successfully
- [x] -> "Run again" shows among the action buttons on the newly created
run
  - Click "Run again"
    - [x] -> runs successfully
2025-09-02 18:34:56 +00:00
Nicholas Tindle
4bcc73f784 ci: Update GitHub Actions workflow for Claude integration (#10822) 2025-09-02 17:06:49 +01:00
Nicholas Tindle
c8240a4d6b ci: Modify Claude workflow permissions and action version (#10818) 2025-09-02 08:43:54 -07:00
Ubbe
f669db4a10 fix(frontend): prevent using mock feature flags (#10792)
## Changes 🏗️

Make sure `NEXT_PUBLIC_PW_TEST` is set only when running Playwright.
This forces the app to use "mock" feature flags, so the tests run stably
and predictably despite changes on LaunchDarkly.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x]  should not have `PW_TEST=true` ...

### For configuration changes:

None
2025-09-02 14:18:18 +00:00
Reinier van der Leer
0e755a5c85 feat(platform/library): Support UX for manual-setup triggers (#10309)
- Resolves #10234

### Preview

#### Manual setup triggers
![preview of the setup screen for manual-setup
triggers](https://github.com/user-attachments/assets/295d2968-ad11-4291-b360-2eb2acb03397)
![preview of the view for an active manual-setup
trigger](https://github.com/user-attachments/assets/d0ae2246-2305-48f5-aea8-8adb37336401)

#### Auto-setup triggers
![preview of the view for an active auto-setup
trigger](https://github.com/user-attachments/assets/63856311-fc99-450c-ae1f-86951e40dc26)

### Changes 🏗️

- Add "Trigger status" section to `AgentRunDraftView`
- Add `AgentPreset.webhook`, so we can show webhook URL in library
  - Add `AGENT_PRESET_INCLUDE` to `backend.data.includes`
- Add `BaseGraph.trigger_setup_info` (computed field)
- Rename `LibraryAgentTriggerInfo` to `GraphTriggerInfo`; move to
`backend.data.graph`

Refactor:
- Move contents of `@/components/agents/` to
`@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/`
- Fix small type difference between legacy & generated
`LibraryAgent.image_url`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setting up GitHub trigger works
  - [x] Setting up manual trigger works
  - [x] Enabling/disabling manual trigger through Library works
2025-09-02 10:23:32 +00:00
Reinier van der Leer
dfdc71f97f feat(backend/external-api): Make API key auth work in Swagger UI (#10783)
![Swagger UI API key auth
dialog](https://github.com/user-attachments/assets/02026802-51f9-410d-bdb8-53840d5eb17b)

- Resolves #10782

### Changes 🏗️

- Use `Security(..)` for security dependencies
- Minor tweaks to auth mechanism (similar to #10720)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API key auth feature appears in Swagger UI
  - [ ] API key auth *works* in Swagger UI (@ntindle wanna test this?)
2025-09-02 09:24:14 +00:00
Krzysztof Czerwinski
def008408c feat(frontend): Preserve openapi.json spec file on generation failure (#10791)
`openapi.json` file is cleared when script fails to retrieve api spec
from the server. This shouldn't happen and it breaks building docker
containers.

### Changes 🏗️

Use temp file during generation to prevent actual file clearing on
failure.
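
A sketch of the temp-file approach, assuming a Node script (`fetchOpenApiSpec` is a stand-in for the real spec retrieval):

```ts
import { renameSync, writeFileSync } from "node:fs";

declare function fetchOpenApiSpec(): Promise<unknown>; // hypothetical; throws on failure

async function generateSpec(outPath: string): Promise<void> {
  const spec = await fetchOpenApiSpec(); // if this throws, outPath is never touched
  const tmpPath = `${outPath}.tmp`;
  writeFileSync(tmpPath, JSON.stringify(spec, null, 2));
  renameSync(tmpPath, outPath); // swap in the new file only after a complete, successful write
}
```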

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Spec file doesn't get cleared on failure
  - [x] Spec file is correctly generated
  - [x] Works when frontend is run in docker container
2025-09-02 01:37:54 +00:00
Swifty
916d0adabb feat(frontend): Add graph search functionality to builder (#10776)
## Summary
- Added search functionality to find nodes in the graph by block type,
node ID, and input/output names
- Search icon added to both new and old control panels
- Implemented node highlighting on hover and navigation on click


https://github.com/user-attachments/assets/8cc69186-5582-446d-b2cd-601de992144f



## Changes
- Created `GraphSearchMenu` component for the new control panel
- Created `GraphSearchControl` component for the old control panel  
- Added `GraphSearchContent` component with search UI similar to
BlockMenu
- Implemented `useGraphSearch` hook with fuzzy search logic
- Added node highlighting without viewport movement on hover
- Added node navigation with centering and highlighting on selection

## Features
- Search by block type name, node ID, or input/output field names
- Real-time filtering with keyboard navigation support
- Visual feedback with node highlighting on hover
- Click to navigate and center on selected node
- Consistent styling with BlockMenu including category colors
- Works in both old and new control panels
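
A rough sketch of what `useGraphSearch` might look like, with plain substring matching standing in for the fuzzy logic (the node shape is hypothetical):

```ts
import { useMemo } from "react";

type SearchableNode = { id: string; blockName: string; fieldNames: string[] };

function matches(node: SearchableNode, query: string): boolean {
  const q = query.toLowerCase();
  return (
    node.blockName.toLowerCase().includes(q) ||
    node.id.toLowerCase().includes(q) ||
    node.fieldNames.some((f) => f.toLowerCase().includes(q))
  );
}

export function useGraphSearch(nodes: SearchableNode[], query: string): SearchableNode[] {
  return useMemo(
    () => (query.trim() ? nodes.filter((n) => matches(n, query)) : []),
    [nodes, query],
  );
}
```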

## Test plan
- [x] Test search functionality in both old and new control panels
- [x] Verify search by block type name works
- [x] Verify search by node ID works  
- [x] Verify search by input/output names works
- [x] Test keyboard navigation (arrow keys and enter)
- [x] Verify node highlighting on hover
- [x] Verify node navigation on click
- [x] Check popover alignment with control panel top
2025-09-01 18:42:13 +00:00
Abhimanyu Yadav
417ee7f0e1 feat(frontend): redesign-block-menu-part-2 (#10793)
In this PR, I have added:

- a search input
- conditional rendering of the search page and the default page
- a sidebar for the default page (with the correct data)

### Screenshot

<img width="1512" height="982" alt="Screenshot 2025-09-01 at 12 28
34 PM"
src="https://github.com/user-attachments/assets/891ab99f-dde5-47b8-a980-a700845f10c2"
/>

#### Checklist:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-09-01 10:04:29 +00:00
Abhimanyu Yadav
ae4c9897b4 refactor(frontend): Revamp marketplace agent page data fetching and structure (#10756)
- Updated the agent page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainAgentPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

> It’s important to note that I haven’t changed any UI in this, as it’s
out of scope for this PR.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have manually tested both `Add to Library` and `Download`
functions, and they are working correctly.
  - [x] All fetching functions are working perfectly.
  - [x] All end-to-end tests are also working correctly.
2025-09-01 05:40:01 +00:00
Ubbe
7544028b94 fix(frontend): use new dialog in schedule agent (#10786)
## Changes 🏗️

Should fix the issue where sometimes the schedule modal wouldn't appear
when clicking on the CTA.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Set up schedules multiple times; they look good on the modal
  - [x] Run the scheduled agent from monitor, and confirm it executes correctly

#### For configuration changes:

None
2025-08-30 04:47:12 +00:00
Ubbe
ad49946890 feat(frontend): New Run Agent Modal (2/2) (#10769)
## Changes 🏗️

<img width="400" height="821" alt="Screenshot 2025-08-28 at 23 57 41"
src="https://github.com/user-attachments/assets/f5f7c0a6-0b87-4c1f-b644-3ee2ddd1db95"
/>

<img width="400" height="822" alt="Screenshot 2025-08-28 at 23 57 47"
src="https://github.com/user-attachments/assets/120dbb60-d9e1-4a4a-a593-971badb4a97a"
/>

This is the final piece of work on the new **Run Agent Modal**... It is
all behind a feature flag, so I'm relatively comfortable it is safe. The
idea is to test with the team once it lands in dev, trying different
combinations of agent inputs, credentials, and schedules...

I have moved and tidied a lot of the original logic around running
agents. Most importantly, I have made all the dynamic inputs adhere to
the design system.

### AI changes summary  

- Allow to run schedules in the main modal body
- Integrate and tidy old logic around dynamic run agent inputs 
- Integrate and tidy old logic around credentials inputs
- Refactor: `<TypeBasedInputs />` to use Design System components
(`atoms/Input`, `atoms/Select`, `molecules/MultiToggle`, and native
date/time picker via `<Input />` using the browser's date picker )
- Added support for `type="date"` and `type="datetime-local"` to `<Input
/>` ( _for the above_ )
- On the `<Select />` component:
  - added `size` prop (`small` | `medium`).
- added rich items: `icon`, `disabled`, `separator`, `onSelect`, and
`renderItem` prop.
- stories updated/added for size variants, icons/separators, and custom
rendering.
- Added and documented to the design system:
  - `molecules/TimePicker` + story.
- `atoms/FileInput`: added `accept` and `maxFileSize` props; story
documents constraints.
- `atoms/Progress` stories (Basic, CustomMax, Sizes, Live) with
fixed-width container.
  - `atoms/Switch` stories (Basic, Disabled, WithLabel).
  - `molecules/Dialog` story: Modal-over-Modal example.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open Storybook and verify new/updated stories render correctly.
  - [x] In app, validate modals open/close correctly using DS `Dialog`.
- [x] Validate DS Select rich items (icon, separator, disabled, action)
behave as expected.
  - [x] Run lints and ensure no errors.
  - [x] Manually test File upload constraints (type/size) and progress.

### For configuration changes:
None
2025-08-29 11:38:36 +00:00
Reinier van der Leer
b333d56492 fix(frontend): Fix checking of Date and NaN input values (#10781)
Date values were being rejected as "empty" by the run input form.

![screenshot demonstrating the
issue](https://github.com/user-attachments/assets/cf89414d-9b27-4da5-8a4a-64d9439c7084)

### Changes 🏗️

- Specifically handle `Date` type values in `isEmpty`
- Specifically handle `NaN` values in `isEmpty`
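
For illustration, here is a Python analogue of the corrected check (the actual implementation is TypeScript; this sketch only mirrors the semantics described above):

```python
import math
from datetime import date, datetime


def is_empty(value) -> bool:
    """Sketch of the described isEmpty semantics (not the real implementation)."""
    if value is None:
        return True
    # Date objects carry a value even though they expose no enumerable keys,
    # so they must not be treated as empty.
    if isinstance(value, (date, datetime)):
        return False
    # NaN is technically a number, but not a usable input value.
    if isinstance(value, float) and math.isnan(value):
        return True
    if isinstance(value, (str, list, tuple, set, dict)):
        return len(value) == 0
    return False
```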

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Date values are no longer rejected as "empty"
2025-08-29 11:03:15 +00:00
Abhimanyu Yadav
b23cb14e49 test(frontend): fix library flaky e2e test (#10764)
A test in one of my PRs is failing with something like…

<img width="1044" height="452" alt="Screenshot 2025-08-28 at 9 39 07 AM"
src="https://github.com/user-attachments/assets/9c8b8996-50a2-44c6-8a2c-c3904f07ced5"
/>

This PR fixes it.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All E2E tests are now working correctly.
2025-08-29 10:19:01 +00:00
Swifty
e71a44521a feat(frontend): Add editable node titles with hover pencil icon (#10779)
## Summary
- Adds ability to edit custom node titles by clicking a pencil icon that
appears on hover
- Custom titles are saved in node metadata and persist across saves
- Original node type is shown in tooltip when hovering over custom
titles



https://github.com/user-attachments/assets/a0a41ac9-1ffb-44c8-9e1c-f4c42e032b49



## Changes
- **CustomNode.tsx**: 
  - Added inline title editing with pencil icon on hover
  - Implemented state management for title editing mode
  - Added tooltip to show original node type for custom titles
  - Prevents custom names from being copied when duplicating nodes

- **useAgentGraph.tsx**:
- Updated graph save/load logic to preserve metadata including custom
titles
  - Ensures metadata persistence through all node operations

## Technical Details
- Uses existing `metadata` JSON field in AgentNode model (no database
changes needed)
- Stores custom title in `metadata.customized_name`
- Backward compatible - nodes without custom titles display normally
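
In Python terms, the backward-compatible title lookup amounts to something like this (a sketch only; `block_name` is an assumed field name, and the real logic lives in `CustomNode.tsx`):

```python
def display_title(node: dict) -> str:
    # Prefer the user's custom name; fall back to the node's original name.
    metadata = node.get("metadata") or {}
    return metadata.get("customized_name") or node["block_name"]
```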

## Test Plan
- [x] Hover over node title shows pencil icon
- [x] Click pencil icon to edit title
- [x] Press Enter or blur to save, Escape to cancel
- [x] Custom title persists after saving graph
- [x] Tooltip shows original node type when hovering over custom title
- [x] Copying node doesn't copy custom name
- [x] Backward compatible with existing graphs
2025-08-29 10:02:35 +00:00
Swifty
15aa091f65 feat(frontend): add drag and drop functionality from blocks menu to flow canvas (#10777)
Closes #1055
2025-08-29 10:55:43 +02:00
Swifty
82618aede0 Merge remote-tracking branch 'origin/dev' 2025-08-27 11:39:16 +02:00
Swifty
c4483fa6c7 Merge branch 'dev' 2025-08-20 15:22:08 +02:00
Swifty
c2af8c1a6a Reapply "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 260dd526c9.
2025-08-20 15:13:46 +02:00
Nicholas Tindle
483c399812 Revert "feat(platform): add ability to reject approved agents in admin dashboard"
This reverts commit 92515b3683.
2025-08-19 00:49:30 -05:00
Nicholas Tindle
260dd526c9 Revert "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 105d5dc7e9, reversing
changes made to 92515b3683.
2025-08-19 00:49:10 -05:00
Nicholas Tindle
75a159db01 Revert "style(frontend): format code and improve dialog titles in approve-reject buttons"
This reverts commit 62032e6584.
2025-08-19 00:48:22 -05:00
Nicholas Tindle
62032e6584 style(frontend): format code and improve dialog titles in approve-reject buttons 2025-08-18 23:32:43 -05:00
Nicholas Tindle
105d5dc7e9 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard 2025-08-18 22:59:27 -05:00
Nicholas Tindle
92515b3683 feat(platform): add ability to reject approved agents in admin dashboard
- Modified backend review_store_submission to handle rejecting approved agents
- Added logic to update StoreListing when rejecting approved agents
- Updated UI to show "Revoke" button for approved agents
- Only shows approve button for pending agents
- Updates dialog text appropriately for revoking vs rejecting

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-18 10:35:25 -05:00
1499 changed files with 168116 additions and 27081 deletions


@@ -16,6 +16,7 @@
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
!autogpt_platform/backend/gen_prisma_types_stub.py
# Platform - Market
!autogpt_platform/market/market/


@@ -12,6 +12,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
- **Infrastructure** - Docker configurations, CI/CD, and development tools
**Primary Languages & Frameworks:**
- **Backend**: Python 3.10-3.13, FastAPI, Prisma ORM, PostgreSQL, RabbitMQ
- **Frontend**: TypeScript, Next.js 15, React, Tailwind CSS, Radix UI
- **Development**: Docker, Poetry, pnpm, Playwright, Storybook
@@ -23,15 +24,17 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
**Always run these commands in the correct directory and in this order:**
1. **Initial Setup** (required once):
```bash
# Clone and enter repository
git clone <repo> && cd AutoGPT
# Start all services (database, redis, rabbitmq, clamav)
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
2. **Backend Setup** (always run before backend development):
```bash
cd autogpt_platform/backend
poetry install # Install dependencies
@@ -48,6 +51,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
### Runtime Requirements
**Critical:** Always ensure Docker services are running before starting development:
```bash
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
@@ -58,6 +62,7 @@ cd autogpt_platform && docker compose --profile local up deps --build --detach
### Development Commands
**Backend Development:**
```bash
cd autogpt_platform/backend
poetry run serve # Start development server (port 8000)
@@ -68,6 +73,7 @@ poetry run lint # Lint code (ruff) - run after format
```
**Frontend Development:**
```bash
cd autogpt_platform/frontend
pnpm dev # Start development server (port 3000) - use for active development
@@ -81,23 +87,27 @@ pnpm storybook # Start component development server
### Testing Strategy
**Backend Tests:**
- **Block Tests**: `poetry run pytest backend/blocks/test/test_block.py -xvs` (validates all blocks)
- **Specific Block**: `poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[BlockName]' -xvs`
- **Snapshot Tests**: Use `--snapshot-update` when output changes, always review with `git diff`
**Frontend Tests:**
- **E2E Tests**: Always run `pnpm dev` before `pnpm test` (Playwright requires running instance)
- **Component Tests**: Use Storybook for isolated component development
### Critical Validation Steps
**Before committing changes:**
1. Run `poetry run format` (backend) and `pnpm format` (frontend)
2. Ensure all tests pass in modified areas
3. Verify Docker services are still running
4. Check that database migrations apply cleanly
**Common Issues & Workarounds:**
- **Prisma issues**: Run `poetry run prisma generate` after schema changes
- **Permission errors**: Ensure Docker has proper permissions
- **Port conflicts**: Check the `docker-compose.yml` file for the current list of exposed ports. You can list all mapped ports with:
@@ -108,6 +118,7 @@ pnpm storybook # Start component development server
### Core Architecture
**AutoGPT Platform** (`autogpt_platform/`):
- `backend/` - FastAPI server with async support
- `backend/backend/` - Core API logic
- `backend/blocks/` - Agent execution blocks
@@ -121,6 +132,7 @@ pnpm storybook # Start component development server
- `docker-compose.yml` - Development stack orchestration
**Key Configuration Files:**
- `pyproject.toml` - Python dependencies and tooling
- `package.json` - Node.js dependencies and scripts
- `schema.prisma` - Database schema and migrations
@@ -136,6 +148,7 @@ pnpm storybook # Start component development server
### Development Workflow
**GitHub Actions**: Multiple CI/CD workflows in `.github/workflows/`
- `platform-backend-ci.yml` - Backend testing and validation
- `platform-frontend-ci.yml` - Frontend testing and validation
- `platform-fullstack-ci.yml` - End-to-end integration tests
@@ -146,11 +159,13 @@ pnpm storybook # Start component development server
### Key Source Files
**Backend Entry Points:**
- `backend/backend/server/server.py` - FastAPI application setup
- `backend/backend/data/` - Database models and user management
- `backend/blocks/` - Agent execution blocks and logic
**Frontend Entry Points:**
- `frontend/src/app/layout.tsx` - Root application layout
- `frontend/src/app/page.tsx` - Home page
- `frontend/src/lib/supabase/` - Authentication and database client
@@ -160,6 +175,7 @@ pnpm storybook # Start component development server
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include:
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
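
As a rough illustration of those requirements (a sketch only — the import paths, `BlockSchema`/`SchemaField` names, and generator-style `run` signature are assumptions, not confirmed by this diff):

```python
# Minimal block sketch; import paths and signatures are assumed, not verified.
from backend.data.block import Block, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class ReverseTextBlock(Block):
    class Input(BlockSchema):
        text: str = SchemaField(description="Text to reverse")

    class Output(BlockSchema):
        reversed: str = SchemaField(description="The input text, reversed")

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Blocks yield (output_name, value) pairs; errors surface on "error".
        try:
            yield "reversed", input_data.text[::-1]
        except Exception as e:
            yield "error", str(e)
```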
@@ -167,6 +183,7 @@ Agents are built using a visual block-based system where each block performs a s
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
@@ -174,13 +191,15 @@ Agents are built using a visual block-based system where each block performs a s
## Environment Configuration
### Configuration Files Priority Order
1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides)
2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides)
3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides)
4. Docker Compose `environment:` sections override file-based config
5. Shell environment variables have highest precedence
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
@@ -189,6 +208,7 @@ Agents are built using a visual block-based system where each block performs a s
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
@@ -198,6 +218,7 @@ Agents are built using a visual block-based system where each block performs a s
7. Consider how inputs/outputs connect with other blocks in graph editor
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
@@ -205,21 +226,76 @@ Agents are built using a visual block-based system where each block performs a s
5. Run `poetry run test` to verify changes
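
For instance, a route plus Pydantic model pair following those steps might look like this (hypothetical names, not an actual endpoint from the codebase):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class AgentSummary(BaseModel):
    id: str
    name: str


@router.get("/agents/{agent_id}", response_model=AgentSummary)
async def get_agent(agent_id: str) -> AgentSummary:
    # A real handler would query the data layer here.
    return AgentSummary(id=agent_id, name="example")
```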
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
**📖 Complete Frontend Guide**: See `autogpt_platform/frontend/CONTRIBUTING.md` and `autogpt_platform/frontend/.cursorrules` for comprehensive patterns and conventions.
**Quick Reference:**
**Component Structure:**
- Separate render logic from data/behavior
- Structure: `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Exception: Small components (3-4 lines of logic) can be inline
- Render-only components can be direct files without folders
**Data Fetching:**
- Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Generated via Orval from backend OpenAPI spec
- Pattern: `use{Method}{Version}{OperationName}`
- Example: `useGetV2ListLibraryAgents`
- Regenerate with: `pnpm generate:api`
- **Never** use deprecated `BackendAPI` or `src/lib/autogpt-server-api/*`
**Code Conventions:**
- Use function declarations for components and handlers (not arrow functions)
- Only arrow functions for small inline lambdas (map, filter, etc.)
- Components: `PascalCase`, Hooks: `camelCase` with `use` prefix
- No barrel files or `index.ts` re-exports
- Minimal comments (code should be self-documenting)
**Styling:**
- Use Tailwind CSS utilities only
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
- Only use Phosphor Icons (`@phosphor-icons/react`)
- Prefer design tokens over hardcoded values
**Error Handling:**
- Render errors: Use `<ErrorCard />` component
- Mutation errors: Display with toast notifications
- Manual exceptions: Use `Sentry.captureException()`
- Global error boundaries already configured
**Testing:**
- Add/update Storybook stories for UI components (`pnpm storybook`)
- Run Playwright E2E tests with `pnpm test`
- Verify in Chromatic after PR
**Architecture:**
- Default to client components ("use client")
- Server components only for SEO or extreme TTFB needs
- Use React Query for server state (via generated hooks)
- Co-locate UI state in components/hooks
### Security Guidelines
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`):
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)
- Prevents sensitive data caching in browsers/proxies
- Add new cacheable endpoints to `CACHEABLE_PATHS`
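
A rough sketch of that behavior (assumed Starlette-style middleware; the `CACHEABLE_PATHS` entries below are illustrative — the real allow list lives in `security.py`):

```python
from starlette.middleware.base import BaseHTTPMiddleware

# Illustrative entries only; the real allow list lives in security.py.
CACHEABLE_PATHS = {"/health", "/static"}


class CacheProtectionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        if not any(request.url.path.startswith(p) for p in CACHEABLE_PATHS):
            # Default: forbid caching so sensitive data never sticks
            # in browsers or proxies.
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```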
### CI/CD Alignment
The repository has comprehensive CI workflows that test:
- **Backend**: Python 3.11-3.13, services (Redis/RabbitMQ/ClamAV), Prisma migrations, Poetry lock validation
- **Frontend**: Node.js 21, pnpm, Playwright with Docker Compose stack, API schema validation
- **Integration**: Full-stack type checking and E2E testing
@@ -229,6 +305,7 @@ Match these patterns when developing locally - the copilot setup environment mir
## Collaboration with Other AI Assistants
This repository is actively developed with assistance from Claude (via CLAUDE.md files). When working on this codebase:
- Check for existing CLAUDE.md files that provide additional context
- Follow established patterns and conventions already in the codebase
- Maintain consistency with existing code style and architecture
@@ -237,8 +314,9 @@ This repository is actively developed with assistance from Claude (via CLAUDE.md
## Trust These Instructions
These instructions are comprehensive and tested. Only perform additional searches if:
1. Information here is incomplete for your specific task
2. You encounter errors not covered by the workarounds
3. You need to understand implementation details not covered above
For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root.


@@ -0,0 +1,97 @@
name: Auto Fix CI Failures
on:
workflow_run:
workflows: ["CI"]
types:
- completed
permissions:
contents: write
pull-requests: write
actions: read
issues: write
id-token: write # Required for OIDC token exchange
jobs:
auto-fix:
if: |
github.event.workflow_run.conclusion == 'failure' &&
github.event.workflow_run.pull_requests[0] &&
!startsWith(github.event.workflow_run.head_branch, 'claude-auto-fix-ci-')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup git identity
run: |
git config --global user.email "claude[bot]@users.noreply.github.com"
git config --global user.name "claude[bot]"
- name: Create fix branch
id: branch
run: |
BRANCH_NAME="claude-auto-fix-ci-${{ github.event.workflow_run.head_branch }}-${{ github.run_id }}"
git checkout -b "$BRANCH_NAME"
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
- name: Get CI failure details
id: failure_details
uses: actions/github-script@v7
with:
script: |
const run = await github.rest.actions.getWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const failedJobs = jobs.data.jobs.filter(job => job.conclusion === 'failure');
let errorLogs = [];
for (const job of failedJobs) {
const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
job_id: job.id
});
errorLogs.push({
jobName: job.name,
logs: logs.data
});
}
return {
runUrl: run.data.html_url,
failedJobs: failedJobs.map(j => j.name),
errorLogs: errorLogs
};
- name: Fix CI failures with Claude
id: claude
uses: anthropics/claude-code-action@v1
with:
prompt: |
/fix-ci
Failed CI Run: ${{ fromJSON(steps.failure_details.outputs.result).runUrl }}
Failed Jobs: ${{ join(fromJSON(steps.failure_details.outputs.result).failedJobs, ', ') }}
PR Number: ${{ github.event.workflow_run.pull_requests[0].number }}
Branch Name: ${{ steps.branch.outputs.branch_name }}
Base Branch: ${{ github.event.workflow_run.head_branch }}
Repository: ${{ github.repository }}
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

.github/workflows/claude-dependabot.yml (new file, +379 lines)

@@ -0,0 +1,379 @@
# Claude Dependabot PR Review Workflow
#
# This workflow automatically runs Claude analysis on Dependabot PRs to:
# - Identify dependency changes and their versions
# - Look up changelogs for updated packages
# - Assess breaking changes and security impacts
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured
name: Claude Dependabot PR Review
on:
pull_request:
types: [opened, synchronize]
jobs:
dependabot-review:
# Only run on Dependabot PRs
if: github.actor == 'dependabot[bot]'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Dependabot Analysis
id: claude_review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |
You are Claude, an AI assistant specialized in reviewing Dependabot dependency update PRs.
Your primary tasks are:
1. **Analyze the dependency changes** in this Dependabot PR
2. **Look up changelogs** for all updated dependencies to understand what changed
3. **Identify breaking changes** and assess potential impact on the AutoGPT codebase
4. **Provide actionable recommendations** for the development team
## Analysis Process:
1. **Identify Changed Dependencies**:
- Use git diff to see what dependencies were updated
- Parse package.json, poetry.lock, requirements files, etc.
- List all package versions: old → new
2. **Changelog Research**:
- For each updated dependency, look up its changelog/release notes
- Use WebFetch to access GitHub releases, NPM package pages, and PyPI project pages; the PR description should also contain some details
- Focus on versions between the old and new versions
- Identify: breaking changes, deprecations, security fixes, new features
3. **Breaking Change Assessment**:
- Categorize changes: BREAKING, MAJOR, MINOR, PATCH, SECURITY
- Assess impact on AutoGPT's usage patterns
- Check if AutoGPT uses affected APIs/features
- Look for migration guides or upgrade instructions
4. **Codebase Impact Analysis**:
- Search the AutoGPT codebase for usage of changed APIs
- Identify files that might be affected by breaking changes
- Check test files for deprecated usage patterns
- Look for configuration changes needed
## Output Format:
Provide a comprehensive review comment with:
### 🔍 Dependency Analysis Summary
- List of updated packages with version changes
- Overall risk assessment (LOW/MEDIUM/HIGH)
### 📋 Detailed Changelog Review
For each updated dependency:
- **Package**: name (old_version → new_version)
- **Changes**: Summary of key changes
- **Breaking Changes**: List any breaking changes
- **Security Fixes**: Note security improvements
- **Migration Notes**: Any upgrade steps needed
### ⚠️ Impact Assessment
- **Breaking Changes Found**: Yes/No with details
- **Affected Files**: List AutoGPT files that may need updates
- **Test Impact**: Any tests that may need updating
- **Configuration Changes**: Required config updates
### 🛠️ Recommendations
- **Action Required**: What the team should do
- **Testing Focus**: Areas to test thoroughly
- **Follow-up Tasks**: Any additional work needed
- **Merge Recommendation**: APPROVE/REVIEW_NEEDED/HOLD
### 📚 Useful Links
- Links to relevant changelogs, migration guides, documentation
Be thorough but concise. Focus on actionable insights that help the development team make informed decisions about the dependency updates.


@@ -30,18 +30,302 @@ jobs:
github.event.issue.author_association == 'COLLABORATOR'
)
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout repository
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@v1.3.1
with:
large-packages: false # slow
docker-images: false # limited benefit
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus
additional_permissions: |
actions: read


@@ -72,13 +72,13 @@ jobs:
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22"
- name: Enable corepack
run: corepack enable
@@ -108,6 +108,16 @@ jobs:
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Free up disk space
run: |
# Remove large unused tools to free disk space for Docker builds
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker system prune -af
df -h
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -299,4 +309,4 @@ jobs:
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"


@@ -5,6 +5,13 @@ on:
branches: [ dev ]
paths:
- 'autogpt_platform/**'
workflow_dispatch:
inputs:
git_ref:
description: 'Git ref (branch/tag) of AutoGPT to deploy'
required: true
default: 'master'
type: string
permissions:
contents: 'read'
@@ -19,6 +26,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.inputs.git_ref || github.ref_name }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -48,4 +57,4 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_dev
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: '{"ref": "${{ github.event.inputs.git_ref || github.ref }}", "repository": "${{ github.repository }}"}'


@@ -3,6 +3,7 @@ name: AutoGPT Platform - Deploy Prod Environment
on:
release:
types: [published]
workflow_dispatch:
permissions:
contents: 'read'
@@ -17,6 +18,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.ref_name || 'master' }}
- name: Set up Python
uses: actions/setup-python@v5
@@ -36,7 +39,7 @@ jobs:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
DIRECT_URL: ${{ secrets.BACKEND_DATABASE_URL }}
trigger:
needs: migrate
runs-on: ubuntu-latest
@@ -47,4 +50,5 @@ jobs:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_prod
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'
client-payload: |
{"ref": "${{ github.ref_name || 'master' }}", "repository": "${{ github.repository }}"}


@@ -37,9 +37,7 @@ jobs:
services:
redis:
image: bitnami/redis:6.2
env:
REDIS_PASSWORD: testpassword
image: redis:latest
ports:
- 6379:6379
rabbitmq:
@@ -136,7 +134,7 @@ jobs:
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
run: poetry run prisma generate && poetry run gen-prisma-stub
- id: supabase
name: Start Supabase
@@ -204,7 +202,6 @@ jobs:
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!
env:


@@ -12,6 +12,10 @@ on:
- "autogpt_platform/frontend/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || format('{0}-{1}', github.ref, github.event.pull_request.number || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -30,7 +34,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -62,7 +66,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -97,7 +101,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -138,7 +142,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -160,7 +164,7 @@ jobs:
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml up -d
NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache


@@ -12,6 +12,10 @@ on:
- "autogpt_platform/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -30,7 +34,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
@@ -66,7 +70,7 @@ jobs:
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable


@@ -11,7 +11,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9
- uses: actions/stale@v10
with:
# operations-per-run: 5000
stale-issue-message: >


@@ -61,6 +61,6 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@v6
with:
sync-labels: true

.gitignore (+1 line)

@@ -178,3 +178,4 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json
/autogpt_platform/backend/logs


@@ -1,6 +1,3 @@
[pr_reviewer]
num_code_suggestions=0
[pr_code_suggestions]
commitable_code_suggestions=false
num_code_suggestions=0


@@ -61,24 +61,41 @@ poetry run pytest path/to/test.py --snapshot-update
```bash
# Install dependencies
cd frontend && npm install
cd frontend && pnpm i
# Generate API client from OpenAPI spec
pnpm generate:api
# Start development server
npm run dev
pnpm dev
# Run E2E tests
npm run test
pnpm test
# Run Storybook for component development
npm run storybook
pnpm storybook
# Build production
npm run build
pnpm build
# Format and lint
pnpm format
# Type checking
npm run types
pnpm types
```
**📖 Complete Guide**: See `/frontend/CONTRIBUTING.md` and `/frontend/.cursorrules` for comprehensive frontend patterns.
**Key Frontend Conventions:**
- Separate render logic from data/behavior in components
- Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Use function declarations (not arrow functions) for components/handlers
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Only use Phosphor Icons
- Never use `src/components/__legacy__/*` or deprecated `BackendAPI`
## Architecture Overview
### Backend Architecture
@@ -92,11 +109,16 @@ npm run types
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Framework**: Next.js 15 App Router (client-first approach)
- **Data Fetching**: Type-safe generated API hooks via Orval + React Query
- **State Management**: React Query for server state, co-located UI state in components/hooks
- **Component Structure**: Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: Radix UI primitives with Tailwind CSS styling
- **UI Components**: shadcn/ui (Radix UI primitives) with Tailwind CSS styling
- **Icons**: Phosphor Icons only
- **Feature Flags**: LaunchDarkly integration
- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions
- **Testing**: Playwright for E2E, Storybook for component development
### Key Concepts
@@ -150,6 +172,7 @@ Key models (defined in `/backend/schema.prisma`):
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
@@ -157,6 +180,7 @@ Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
@@ -168,6 +192,8 @@ Quick steps:
Note: when making many new blocks, analyze the interface of each block and consider whether they would work well together in a graph-based editor or would struggle to connect productively.
e.g.: do the inputs and outputs tie together well?
If you get any pushback or hit complex block conditions, check the new_blocks guide in the docs.
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
@@ -177,10 +203,20 @@ ex: do the inputs and outputs tie well together?
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
- Add `usePageName.ts` hook for logic
- Put sub-components in local `components/` folder
2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Regenerate with `pnpm generate:api`
- Pattern: `use{Method}{Version}{OperationName}`
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E
6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
### Security Implementation

autogpt_platform/Makefile (new file, +63 lines)

@@ -0,0 +1,63 @@
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend load-store-agents
# Run just Supabase + Redis + RabbitMQ
start-core:
docker compose up -d deps
# Stop core services
stop-core:
docker compose stop deps
reset-db:
rm -rf db/docker/volumes/db/data
cd backend && poetry run prisma migrate deploy
cd backend && poetry run prisma generate
cd backend && poetry run gen-prisma-stub
# View logs for core services
logs-core:
docker compose logs -f deps
# Run formatting and linting for backend and frontend
format:
cd backend && poetry run format
cd frontend && pnpm format
cd frontend && pnpm lint
init-env:
cp -n .env.default .env || true
cd backend && cp -n .env.default .env || true
cd frontend && cp -n .env.default .env || true
# Run migrations for backend
migrate:
cd backend && poetry run prisma migrate deploy
cd backend && poetry run prisma generate
cd backend && poetry run gen-prisma-stub
run-backend:
cd backend && poetry run app
run-frontend:
cd frontend && pnpm dev
test-data:
cd backend && poetry run python test/test_data_creator.py
load-store-agents:
cd backend && poetry run load-store-agents
help:
@echo "Usage: make <target>"
@echo "Targets:"
@echo " start-core - Start just the core services (Supabase, Redis, RabbitMQ) in background"
@echo " stop-core - Stop the core services"
@echo " reset-db - Reset the database by deleting the volume"
@echo " logs-core - Tail the logs for core services"
@echo " format - Format & lint backend (Python) and frontend (TypeScript) code"
@echo " migrate - Run backend database migrations"
@echo " run-backend - Run the backend FastAPI server"
@echo " run-frontend - Run the frontend Next.js development server"
@echo " test-data - Run the test data creator"
@echo " load-store-agents - Load store agents from agents/ folder into test database"


@@ -38,6 +38,37 @@ To run the AutoGPT Platform, follow these steps:
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Running Just Core services
You can now use the following commands to run just the core services.
```
# For help
make help
# Run just Supabase + Redis + RabbitMQ
make start-core
# Stop core services
make stop-core
# View logs from core services
make logs-core
# Run formatting and linting for backend and frontend
make format
# Run migrations for backend database
make migrate
# Run backend server
make run-backend
# Run frontend development server
make run-frontend
```
### Docker Compose Commands
Here are some useful Docker Compose commands for managing your AutoGPT Platform:


@@ -1,35 +0,0 @@
import hashlib
import secrets
from typing import NamedTuple
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
raw: str
prefix: str
postfix: str
hash: str
class APIKeyManager:
PREFIX: str = "agpt_"
PREFIX_LENGTH: int = 8
POSTFIX_LENGTH: int = 8
def generate_api_key(self) -> APIKeyContainer:
"""Generate a new API key with all its parts."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
return APIKeyContainer(
raw=raw_key,
prefix=raw_key[: self.PREFIX_LENGTH],
postfix=raw_key[-self.POSTFIX_LENGTH :],
hash=hashlib.sha256(raw_key.encode()).hexdigest(),
)
def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)


@@ -0,0 +1,81 @@
import hashlib
import secrets
from typing import NamedTuple
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
key: str
head: str
tail: str
hash: str
salt: str
class APIKeySmith:
PREFIX: str = "agpt_"
HEAD_LENGTH: int = 8
TAIL_LENGTH: int = 8
def generate_key(self) -> APIKeyContainer:
"""Generate a new API key with secure hashing."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
hash, salt = self.hash_key(raw_key)
return APIKeyContainer(
key=raw_key,
head=raw_key[: self.HEAD_LENGTH],
tail=raw_key[-self.TAIL_LENGTH :],
hash=hash,
salt=salt,
)
def verify_key(
self, provided_key: str, known_hash: str, known_salt: str | None = None
) -> bool:
"""
Verify an API key against a known hash (+ salt).
Supports verifying both legacy SHA256 and secure Scrypt hashes.
"""
if not provided_key.startswith(self.PREFIX):
return False
# Handle legacy SHA256 hashes (migration support)
if known_salt is None:
legacy_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(legacy_hash, known_hash)
try:
salt_bytes = bytes.fromhex(known_salt)
provided_hash = self._hash_key_with_salt(provided_key, salt_bytes)
return secrets.compare_digest(provided_hash, known_hash)
except (ValueError, TypeError):
return False
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
if not raw_key.startswith(self.PREFIX):
raise ValueError("Key without 'agpt_' prefix would fail validation")
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()
def _generate_salt(self) -> bytes:
"""Generate a random salt for hashing."""
return secrets.token_bytes(32)
def _hash_key_with_salt(self, raw_key: str, salt: bytes) -> str:
"""Hash API key using Scrypt with salt."""
kdf = Scrypt(
length=32,
salt=salt,
n=2**14, # CPU/memory cost parameter
r=8, # Block size parameter
p=1, # Parallelization parameter
)
key_hash = kdf.derive(raw_key.encode())
return key_hash.hex()
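
For orientation, a minimal usage sketch of the new keysmith, assuming the import path used by the tests below. Note that with n=2**14 and r=8, Scrypt uses roughly 128 * r * n = 16 MiB of memory per hash, which is what makes it costly to brute-force:

```python
from autogpt_libs.api_key.keysmith import APIKeySmith

keysmith = APIKeySmith()

# Issue a key: show `container.key` to the user once; persist only
# head/tail (for display) plus hash and salt (for verification).
container = keysmith.generate_key()

# Later: verify a presented key against the stored hash + salt.
assert keysmith.verify_key(container.key, container.hash, container.salt)

# Legacy rows (SHA256, no stored salt) still verify with known_salt=None,
# and can be upgraded in place once the raw key is presented again.
new_hash, new_salt = keysmith.hash_key(container.key)
assert keysmith.verify_key(container.key, new_hash, new_salt)
```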

View File

@@ -0,0 +1,79 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
def test_generate_api_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
assert key.key.startswith(keysmith.PREFIX)
assert key.head == key.key[: keysmith.HEAD_LENGTH]
assert key.tail == key.key[-keysmith.TAIL_LENGTH :]
assert len(key.hash) == 64 # 32 bytes hex encoded
assert len(key.salt) == 64 # 32 bytes hex encoded
def test_verify_new_secure_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test correct key validates
assert keysmith.verify_key(key.key, key.hash, key.salt) is True
# Test wrong key fails
wrong_key = f"{keysmith.PREFIX}wrongkey123"
assert keysmith.verify_key(wrong_key, key.hash, key.salt) is False
def test_verify_legacy_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}legacykey123"
legacy_hash = hashlib.sha256(legacy_key.encode()).hexdigest()
# Test legacy key validates without salt
assert keysmith.verify_key(legacy_key, legacy_hash) is True
# Test wrong legacy key fails
wrong_key = f"{keysmith.PREFIX}wronglegacy"
assert keysmith.verify_key(wrong_key, legacy_hash) is False
def test_rehash_existing_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}migratekey123"
# Migrate the legacy key
new_hash, new_salt = keysmith.hash_key(legacy_key)
# Verify migrated key works
assert keysmith.verify_key(legacy_key, new_hash, new_salt) is True
# Verify different key fails with migrated hash
wrong_key = f"{keysmith.PREFIX}wrongkey"
assert keysmith.verify_key(wrong_key, new_hash, new_salt) is False
def test_invalid_key_prefix():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test key without proper prefix fails
invalid_key = "invalid_prefix_key"
assert keysmith.verify_key(invalid_key, key.hash, key.salt) is False
def test_secure_hash_requires_salt():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Secure hash without salt should fail
assert keysmith.verify_key(key.key, key.hash) is False
def test_invalid_salt_format():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Invalid salt format should fail gracefully
assert keysmith.verify_key(key.key, key.hash, "invalid_hex") is False

View File

@@ -1,5 +1,10 @@
from .config import verify_settings
from .dependencies import get_user_id, requires_admin_user, requires_user
from .dependencies import (
get_optional_user_id,
get_user_id,
requires_admin_user,
requires_user,
)
from .helpers import add_auth_responses_to_openapi
from .models import User
@@ -8,6 +13,7 @@ __all__ = [
"get_user_id",
"requires_admin_user",
"requires_user",
"get_optional_user_id",
"add_auth_responses_to_openapi",
"User",
]

View File

@@ -4,13 +4,55 @@ FastAPI dependency functions for JWT-based authentication and authorization.
These are the high-level dependency functions used in route definitions.
"""
import logging
import fastapi
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from .jwt_utils import get_jwt_payload, verify_user
from .models import User
optional_bearer = HTTPBearer(auto_error=False)
def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
# Header name for admin impersonation
IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"
logger = logging.getLogger(__name__)
def get_optional_user_id(
credentials: HTTPAuthorizationCredentials | None = fastapi.Security(
optional_bearer
),
) -> str | None:
"""
Attempts to extract the user ID ("sub" claim) from a Bearer JWT if provided.
This dependency allows for both authenticated and anonymous access. If a valid bearer token is
supplied, it parses the JWT and extracts the user ID. If the token is missing or invalid, it returns None,
treating the request as anonymous.
Args:
credentials: Optional HTTPAuthorizationCredentials object from FastAPI Security dependency.
Returns:
The user ID (str) extracted from the JWT "sub" claim, or None if no valid token is present.
"""
if not credentials:
return None
try:
# Parse JWT token to get user ID
from autogpt_libs.auth.jwt_utils import parse_jwt_token
payload = parse_jwt_token(credentials.credentials)
return payload.get("sub")
except Exception as e:
logger.debug(f"Auth token validation failed (anonymous access): {e}")
return None
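
A sketch of how a route might consume this dependency to serve both anonymous and signed-in callers; the endpoint path and response shape here are illustrative, not taken from the diff:

```python
import fastapi
from autogpt_libs.auth import get_optional_user_id

app = fastapi.FastAPI()

@app.get("/search")  # hypothetical endpoint
async def search(
    user_id: str | None = fastapi.Depends(get_optional_user_id),
):
    # Anonymous callers get public results; authenticated callers can
    # additionally be served their own private content.
    return {"personalized": user_id is not None}
```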
async def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid authenticated user.
@@ -20,7 +62,9 @@ def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User
return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
async def requires_admin_user(
jwt_payload: dict = fastapi.Security(get_jwt_payload),
) -> User:
"""
FastAPI dependency that requires a valid admin user.
@@ -30,16 +74,44 @@ def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -
return verify_user(jwt_payload, admin_only=True)
def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
async def get_user_id(
request: fastapi.Request, jwt_payload: dict = fastapi.Security(get_jwt_payload)
) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.
Supports admin impersonation via X-Act-As-User-Id header:
- If the header is present and user is admin, returns the impersonated user ID
- Otherwise returns the authenticated user's own ID
- Logs all impersonation actions for audit trail
Raises:
HTTPException: 401 for authentication failures or missing user ID
HTTPException: 403 if non-admin tries to use impersonation
"""
# Get the authenticated user's ID from JWT
user_id = jwt_payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
# Check for admin impersonation header
impersonate_header = request.headers.get(IMPERSONATION_HEADER_NAME, "").strip()
if impersonate_header:
# Verify the authenticated user is an admin
authenticated_user = verify_user(jwt_payload, admin_only=False)
if authenticated_user.role != "admin":
raise fastapi.HTTPException(
status_code=403, detail="Only admin users can impersonate other users"
)
# Log the impersonation for audit trail
logger.info(
f"Admin impersonation: {authenticated_user.user_id} ({authenticated_user.email}) "
f"acting as user {impersonate_header} for requesting {request.method} {request.url}"
)
return impersonate_header
return user_id
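
On the client side, impersonation is then just an extra header on an admin-authenticated request; a hedged sketch (the endpoint URL is hypothetical):

```python
import httpx

admin_jwt = "<admin JWT>"  # placeholder token

# Downstream handlers that depend on get_user_id will see "user-123"
# as the acting user, and the impersonation is logged for audit.
resp = httpx.get(
    "http://localhost:8006/api/graphs",  # hypothetical endpoint
    headers={
        "Authorization": f"Bearer {admin_jwt}",
        "X-Act-As-User-Id": "user-123",
    },
)
```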

View File

@@ -4,9 +4,10 @@ Tests the full authentication flow from HTTP requests to user validation.
"""
import os
from unittest.mock import Mock
import pytest
from fastapi import FastAPI, HTTPException, Security
from fastapi import FastAPI, HTTPException, Request, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
@@ -45,7 +46,8 @@ class TestAuthDependencies:
"""Create a test client."""
return TestClient(app)
def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
@pytest.mark.asyncio
async def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
@@ -53,12 +55,13 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
user = await requires_user(jwt_payload)
assert isinstance(user, User)
assert user.user_id == "user-123"
assert user.role == "user"
def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
@pytest.mark.asyncio
async def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
"sub": "admin-456",
@@ -69,28 +72,31 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
user = await requires_user(jwt_payload)
assert user.user_id == "admin-456"
assert user.role == "admin"
def test_requires_user_missing_sub(self):
@pytest.mark.asyncio
async def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
await requires_user(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_requires_user_empty_sub(self):
@pytest.mark.asyncio
async def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
await requires_user(jwt_payload)
assert exc_info.value.status_code == 401
def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
@pytest.mark.asyncio
async def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
"sub": "admin-789",
@@ -101,51 +107,62 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_admin_user(jwt_payload)
user = await requires_admin_user(jwt_payload)
assert user.user_id == "admin-789"
assert user.role == "admin"
def test_requires_admin_user_with_regular_user(self):
@pytest.mark.asyncio
async def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_admin_user(jwt_payload)
await requires_admin_user(jwt_payload)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_requires_admin_user_missing_role(self):
@pytest.mark.asyncio
async def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
with pytest.raises(KeyError):
requires_admin_user(jwt_payload)
await requires_admin_user(jwt_payload)
def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
@pytest.mark.asyncio
async def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = get_user_id(jwt_payload)
user_id = await get_user_id(request, jwt_payload)
assert user_id == "user-id-xyz"
def test_get_user_id_missing_sub(self):
@pytest.mark.asyncio
async def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_get_user_id_none_sub(self):
@pytest.mark.asyncio
async def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
request = Mock(spec=Request)
request.headers = {}
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 401
@@ -170,7 +187,8 @@ class TestAuthDependenciesIntegration:
return _create_token
def test_endpoint_auth_enabled_no_token(self):
@pytest.mark.asyncio
async def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@@ -184,7 +202,8 @@ class TestAuthDependenciesIntegration:
response = client.get("/test")
assert response.status_code == 401
def test_endpoint_with_valid_token(self, create_token):
@pytest.mark.asyncio
async def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@@ -203,7 +222,8 @@ class TestAuthDependenciesIntegration:
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
def test_admin_endpoint_requires_admin_role(self, create_token):
@pytest.mark.asyncio
async def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@@ -240,7 +260,8 @@ class TestAuthDependenciesIntegration:
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
def test_dependency_with_complex_payload(self):
@pytest.mark.asyncio
async def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
"sub": "user-123",
@@ -256,14 +277,15 @@ class TestAuthDependenciesEdgeCases:
"exp": 9999999999,
}
user = requires_user(complex_payload)
user = await requires_user(complex_payload)
assert user.user_id == "user-123"
assert user.email == "test@example.com"
admin = requires_admin_user(complex_payload)
admin = await requires_admin_user(complex_payload)
assert admin.role == "admin"
def test_dependency_with_unicode_in_payload(self):
@pytest.mark.asyncio
async def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
"sub": "user-😀-123",
@@ -272,11 +294,12 @@ class TestAuthDependenciesEdgeCases:
"name": "日本語",
}
user = requires_user(unicode_payload)
user = await requires_user(unicode_payload)
assert "😀" in user.user_id
assert user.email == "测试@example.com"
def test_dependency_with_null_values(self):
@pytest.mark.asyncio
async def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
"sub": "user-123",
@@ -286,18 +309,19 @@ class TestAuthDependenciesEdgeCases:
"metadata": None,
}
user = requires_user(null_payload)
user = await requires_user(null_payload)
assert user.user_id == "user-123"
assert user.email is None
def test_concurrent_requests_isolation(self):
@pytest.mark.asyncio
async def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
payload2 = {"sub": "user-2", "role": "admin"}
# Simulate concurrent processing
user1 = requires_user(payload1)
user2 = requires_admin_user(payload2)
user1 = await requires_user(payload1)
user2 = await requires_admin_user(payload2)
assert user1.user_id == "user-1"
assert user2.user_id == "user-2"
@@ -314,7 +338,8 @@ class TestAuthDependenciesEdgeCases:
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
def test_dependency_error_cases(
@pytest.mark.asyncio
async def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
"""Test that errors propagate correctly through dependencies."""
@@ -325,7 +350,8 @@ class TestAuthDependenciesEdgeCases:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
def test_dependency_valid_user(self):
@pytest.mark.asyncio
async def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
@@ -333,3 +359,196 @@ class TestAuthDependenciesEdgeCases:
# Valid case
user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
assert user.user_id == "user"
class TestAdminImpersonation:
"""Test suite for admin user impersonation functionality."""
@pytest.mark.asyncio
async def test_admin_impersonation_success(self, mocker: MockerFixture):
"""Test admin successfully impersonating another user."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "target-user-123"}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-456", email="admin@example.com", role="admin"
)
# Mock logger to verify audit logging
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should return the impersonated user ID
assert user_id == "target-user-123"
# Should log the impersonation attempt
mock_logger.info.assert_called_once()
log_call = mock_logger.info.call_args[0][0]
assert "Admin impersonation:" in log_call
assert "admin@example.com" in log_call
assert "target-user-123" in log_call
@pytest.mark.asyncio
async def test_non_admin_impersonation_attempt(self, mocker: MockerFixture):
"""Test non-admin user attempting impersonation returns 403."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "target-user-123"}
jwt_payload = {
"sub": "regular-user",
"role": "user",
"email": "user@example.com",
}
# Mock verify_user to return regular user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="regular-user", email="user@example.com", role="user"
)
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
with pytest.raises(HTTPException) as exc_info:
await get_user_id(request, jwt_payload)
assert exc_info.value.status_code == 403
assert "Only admin users can impersonate other users" in exc_info.value.detail
@pytest.mark.asyncio
async def test_impersonation_empty_header(self, mocker: MockerFixture):
"""Test impersonation with empty header falls back to regular user ID."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": ""}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should fall back to the admin's own user ID
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_missing_header(self, mocker: MockerFixture):
"""Test normal behavior when impersonation header is missing."""
request = Mock(spec=Request)
request.headers = {} # No impersonation header
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should return the admin's own user ID
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_audit_logging_details(self, mocker: MockerFixture):
"""Test that impersonation audit logging includes all required details."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": "victim-user-789"}
jwt_payload = {
"sub": "admin-999",
"role": "admin",
"email": "superadmin@company.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-999", email="superadmin@company.com", role="admin"
)
# Mock logger to capture audit trail
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Verify all audit details are logged
assert user_id == "victim-user-789"
mock_logger.info.assert_called_once()
log_message = mock_logger.info.call_args[0][0]
assert "Admin impersonation:" in log_message
assert "superadmin@company.com" in log_message
assert "victim-user-789" in log_message
@pytest.mark.asyncio
async def test_impersonation_header_case_sensitivity(self, mocker: MockerFixture):
"""Test that impersonation header is case-sensitive."""
request = Mock(spec=Request)
# Use wrong case - should not trigger impersonation with this plain-dict mock
# (note: real Starlette request headers are case-insensitive)
request.headers = {"x-act-as-user-id": "target-user-123"}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should fall back to admin's own ID (header case mismatch)
assert user_id == "admin-456"
@pytest.mark.asyncio
async def test_impersonation_with_whitespace_header(self, mocker: MockerFixture):
"""Test impersonation with whitespace in header value."""
request = Mock(spec=Request)
request.headers = {"X-Act-As-User-Id": " target-user-123 "}
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
# Mock verify_user to return admin user data
mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
mock_verify_user.return_value = Mock(
user_id="admin-456", email="admin@example.com", role="admin"
)
# Mock logger
mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
# Should strip whitespace and impersonate successfully
assert user_id == "target-user-123"
mock_logger.info.assert_called_once()

View File

@@ -1,29 +1,25 @@
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from .jwt_utils import bearer_jwt_auth
def add_auth_responses_to_openapi(app: FastAPI) -> None:
"""
Set up custom OpenAPI schema generation that adds 401 responses
Patch a FastAPI instance's `openapi()` method to add 401 responses
to all authenticated endpoints.
This is needed when using HTTPBearer with auto_error=False to get proper
401 responses instead of 403, but FastAPI only automatically adds security
responses when auto_error=True.
"""
# Wrap current method to allow stacking OpenAPI schema modifiers like this
wrapped_openapi = app.openapi
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
version=app.version,
description=app.description,
routes=app.routes,
)
openapi_schema = wrapped_openapi()
# Add 401 response to all endpoints that have security requirements
for path, methods in openapi_schema["paths"].items():
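
Because the rewrite delegates to the wrapped `app.openapi` instead of calling `get_openapi` directly, schema modifiers can now be stacked; a minimal sketch, assuming the helper is applied as in the backend services:

```python
from fastapi import FastAPI
from autogpt_libs.auth import add_auth_responses_to_openapi

app = FastAPI(title="Example")

# Any earlier customization of app.openapi is preserved, since the
# helper wraps the current method rather than rebuilding the schema.
add_auth_responses_to_openapi(app)

schema = app.openapi()  # result is cached on app.openapi_schema
```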

View File

@@ -16,7 +16,7 @@ bearer_jwt_auth = HTTPBearer(
)
def get_jwt_payload(
async def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
"""

View File

@@ -116,32 +116,32 @@ def test_parse_jwt_token_missing_audience():
assert "Invalid token" in str(exc_info.value)
def test_get_jwt_payload_with_valid_token():
async def test_get_jwt_payload_with_valid_token():
"""Test extracting JWT payload with valid bearer token."""
token = create_token(TEST_USER_PAYLOAD)
credentials = HTTPAuthorizationCredentials(scheme="Bearer", credentials=token)
result = jwt_utils.get_jwt_payload(credentials)
result = await jwt_utils.get_jwt_payload(credentials)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
def test_get_jwt_payload_no_credentials():
async def test_get_jwt_payload_no_credentials():
"""Test JWT payload when no credentials provided."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(None)
await jwt_utils.get_jwt_payload(None)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_get_jwt_payload_invalid_token():
async def test_get_jwt_payload_invalid_token():
"""Test JWT payload extraction with invalid token."""
credentials = HTTPAuthorizationCredentials(
scheme="Bearer", credentials="invalid.token.here"
)
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(credentials)
await jwt_utils.get_jwt_payload(credentials)
assert exc_info.value.status_code == 401
assert "Invalid token" in exc_info.value.detail

View File

@@ -4,6 +4,7 @@ import logging
import os
import socket
import sys
from logging.handlers import RotatingFileHandler
from pathlib import Path
from pydantic import Field, field_validator
@@ -93,42 +94,36 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
structured_logging = config.enable_cloud_logging or force_cloud_logging
# Console output handlers
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
if not structured_logging:
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
log_handlers += [stdout, stderr]
log_handlers += [stdout, stderr]
# Cloud logging setup
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import (
BackgroundThreadTransport,
)
else:
# Use Google Cloud Structured Log Handler. Log entries are printed to stdout
# in a JSON format which is automatically picked up by Google Cloud Logging.
from google.cloud.logging.handlers import StructuredLogHandler
client = google.cloud.logging.Client()
# Use BackgroundThreadTransport to prevent blocking the main thread
# and deadlocks when gRPC calls to Google Cloud Logging hang
cloud_handler = CloudLoggingHandler(
client,
name="autogpt_logs",
transport=BackgroundThreadTransport,
)
cloud_handler.setLevel(config.level)
log_handlers.append(cloud_handler)
structured_log_handler = StructuredLogHandler(stream=sys.stdout)
structured_log_handler.setLevel(config.level)
log_handlers.append(structured_log_handler)
# File logging setup
if config.enable_file_logging:
@@ -139,8 +134,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
print(f"Log directory: {config.log_dir}")
# Activity log handler (INFO and above)
activity_log_handler = logging.FileHandler(
config.log_dir / LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits to prevent disk exhaustion
activity_log_handler = RotatingFileHandler(
config.log_dir / LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
activity_log_handler.setLevel(config.level)
activity_log_handler.setFormatter(
@@ -150,8 +150,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
if config.level == logging.DEBUG:
# Debug log handler (all levels)
debug_log_handler = logging.FileHandler(
config.log_dir / DEBUG_LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits
debug_log_handler = RotatingFileHandler(
config.log_dir / DEBUG_LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
debug_log_handler.setLevel(logging.DEBUG)
debug_log_handler.setFormatter(
@@ -160,8 +165,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
log_handlers.append(debug_log_handler)
# Error log handler (ERROR and above)
error_log_handler = logging.FileHandler(
config.log_dir / ERROR_LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits
error_log_handler = RotatingFileHandler(
config.log_dir / ERROR_LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
error_log_handler.setLevel(logging.ERROR)
error_log_handler.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT, no_color=True))
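
The rotation parameters bound total log disk usage: each handler keeps at most one active file plus three backups of 10 MB each, and at most three file handlers exist (the debug handler only at DEBUG level). A quick check of the arithmetic:

```python
# Worst-case disk bound for the rotating handlers above
per_handler = 10 * 1024 * 1024 * (1 + 3)  # maxBytes * (current + backupCount)
handlers = 3                              # activity, debug (DEBUG only), error
assert per_handler * handlers == 120 * 1024 * 1024  # ~120 MB cap at DEBUG
```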
@@ -169,7 +179,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
# Configure the root logger
logging.basicConfig(
format=DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT,
format=(
"%(levelname)s %(message)s"
if structured_logging
else (
DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT
)
),
level=config.level,
handlers=log_handlers,
)
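
A standalone sketch of the structured path above, assuming `google-cloud-logging` is installed; `StructuredLogHandler` needs no network client, it simply writes JSON lines to stdout for the logging agent to collect:

```python
import logging
import sys

from google.cloud.logging.handlers import StructuredLogHandler

handler = StructuredLogHandler(stream=sys.stdout)
handler.setLevel(logging.INFO)

# Mirrors the basicConfig above: keep the format minimal, since the
# handler wraps each record in a JSON envelope for Cloud Logging.
logging.basicConfig(
    format="%(levelname)s %(message)s",
    level=logging.INFO,
    handlers=[handler],
)
logging.getLogger(__name__).info("hello structured logs")
```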

View File

@@ -1,3 +1,5 @@
from typing import Optional
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
@@ -13,8 +15,8 @@ class RateLimitSettings(BaseSettings):
default="6379", description="Redis port", validation_alias="REDIS_PORT"
)
redis_password: str = Field(
default="password",
redis_password: Optional[str] = Field(
default=None,
description="Redis password",
validation_alias="REDIS_PASSWORD",
)

View File

@@ -11,7 +11,7 @@ class RateLimiter:
self,
redis_host: str = RATE_LIMIT_SETTINGS.redis_host,
redis_port: str = RATE_LIMIT_SETTINGS.redis_port,
redis_password: str = RATE_LIMIT_SETTINGS.redis_password,
redis_password: str | None = RATE_LIMIT_SETTINGS.redis_password,
requests_per_minute: int = RATE_LIMIT_SETTINGS.requests_per_minute,
):
self.redis = Redis(

View File

@@ -1,266 +0,0 @@
import inspect
import logging
import threading
import time
from functools import wraps
from typing import (
Awaitable,
Callable,
ParamSpec,
Protocol,
Tuple,
TypeVar,
cast,
overload,
runtime_checkable,
)
P = ParamSpec("P")
R = TypeVar("R")
logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def thread_cached(
func: Callable[P, R] | Callable[P, Awaitable[R]],
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = await cast(Callable[P, Awaitable[R]], func)(
*args, **kwargs
)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
if clear := getattr(func, "clear_cache", None):
clear()
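
For reference, a sketch of the per-thread behavior this decorator provided, assuming the module as it existed before this deletion: each thread gets its own cache, so the same arguments recompute once per thread.

```python
import threading

from autogpt_libs.utils.cache import thread_cached  # pre-removal path


@thread_cached
def connection(name: str) -> str:
    print(f"built in {threading.current_thread().name}")
    return f"conn-{name}"


connection("db")  # computed
connection("db")  # cached for this thread
t = threading.Thread(target=connection, args=("db",))
t.start()  # computed again: the cache is thread-local
t.join()
```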
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
"""Protocol for async functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
return None
def cache_info(self) -> dict[str, int | None]:
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
"""Call the cached function."""
return None # type: ignore
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
"""
TTL (Time To Live) cache decorator for async functions.
Similar to functools.lru_cache but works with async functions and includes optional TTL.
Args:
maxsize: Maximum number of cached entries
ttl_seconds: Time to live in seconds. If None, entries never expire (like lru_cache)
Returns:
Decorator function
Example:
# With TTL
@async_ttl_cache(maxsize=1000, ttl_seconds=300)
async def api_call(param: str) -> dict:
return {"result": param}
# Without TTL (permanent cache like lru_cache)
@async_ttl_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
return {"result": param}
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
# Check if we have a valid cached entry
if key in cache_storage:
if ttl_seconds is None:
# No TTL - return cached result directly
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
else:
# With TTL - check expiration
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
else:
# Expired entry
del cache_storage[key]
logger.debug(
f"Cache entry expired for {async_func.__name__}"
)
# Cache miss or expired - fetch fresh data
logger.debug(
f"Cache miss for {async_func.__name__} with key: {str(key)[:50]}"
)
result = await async_func(*args, **kwargs)
# Store in cache
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Simple cleanup when cache gets too large
if len(cache_storage) > maxsize:
# Remove oldest entries (simple FIFO cleanup)
cutoff = maxsize // 2
oldest_keys = list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
logger.debug(
f"Cache cleanup: removed {len(oldest_keys)} entries for {async_func.__name__}"
)
return result
# Add cache management methods (similar to functools.lru_cache)
def cache_clear() -> None:
cache_storage.clear()
def cache_info() -> dict[str, int | None]:
return {
"size": len(cache_storage),
"maxsize": maxsize,
"ttl_seconds": ttl_seconds,
}
# Attach methods to wrapper
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
return cast(AsyncCachedFunction[P, R], wrapper)
return decorator
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
"""
Process-level cache decorator for async functions (no TTL).
Similar to functools.lru_cache but works with async functions.
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
Returns:
Decorated function or decorator
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)

View File

@@ -1,705 +0,0 @@
"""Tests for the @thread_cached decorator.
This module tests the thread-local caching functionality including:
- Basic caching for sync and async functions
- Thread isolation (each thread has its own cache)
- Cache clearing functionality
- Exception handling (exceptions are not cached)
- Argument handling (positional vs keyword arguments)
"""
import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from unittest.mock import Mock
import pytest
from autogpt_libs.utils.cache import (
async_cache,
async_ttl_cache,
clear_thread_cache,
thread_cached,
)
class TestThreadCached:
def test_sync_function_caching(self):
call_count = 0
@thread_cached
def expensive_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
return x + y
assert expensive_function(1, 2) == 3
assert call_count == 1
assert expensive_function(1, 2) == 3
assert call_count == 1
assert expensive_function(1, y=2) == 3
assert call_count == 2
assert expensive_function(2, 3) == 5
assert call_count == 3
assert expensive_function(1) == 1
assert call_count == 4
@pytest.mark.asyncio
async def test_async_function_caching(self):
call_count = 0
@thread_cached
async def expensive_async_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x + y
assert await expensive_async_function(1, 2) == 3
assert call_count == 1
assert await expensive_async_function(1, 2) == 3
assert call_count == 1
assert await expensive_async_function(1, y=2) == 3
assert call_count == 2
assert await expensive_async_function(2, 3) == 5
assert call_count == 3
def test_thread_isolation(self):
call_count = 0
results = {}
@thread_cached
def thread_specific_function(x: int) -> str:
nonlocal call_count
call_count += 1
return f"{threading.current_thread().name}-{x}"
def worker(thread_id: int):
result1 = thread_specific_function(1)
result2 = thread_specific_function(1)
result3 = thread_specific_function(2)
results[thread_id] = (result1, result2, result3)
with ThreadPoolExecutor(max_workers=3) as executor:
futures = [executor.submit(worker, i) for i in range(3)]
for future in futures:
future.result()
assert call_count >= 2
for thread_id, (r1, r2, r3) in results.items():
assert r1 == r2
assert r1 != r3
@pytest.mark.asyncio
async def test_async_thread_isolation(self):
call_count = 0
results = {}
@thread_cached
async def async_thread_specific_function(x: int) -> str:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return f"{threading.current_thread().name}-{x}"
async def async_worker(worker_id: int):
result1 = await async_thread_specific_function(1)
result2 = await async_thread_specific_function(1)
result3 = await async_thread_specific_function(2)
results[worker_id] = (result1, result2, result3)
tasks = [async_worker(i) for i in range(3)]
await asyncio.gather(*tasks)
for worker_id, (r1, r2, r3) in results.items():
assert r1 == r2
assert r1 != r3
def test_clear_cache_sync(self):
call_count = 0
@thread_cached
def clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 2
assert clearable_function(5) == 10
assert call_count == 1
assert clearable_function(5) == 10
assert call_count == 1
clear_thread_cache(clearable_function)
assert clearable_function(5) == 10
assert call_count == 2
@pytest.mark.asyncio
async def test_clear_cache_async(self):
call_count = 0
@thread_cached
async def clearable_async_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
return x * 2
assert await clearable_async_function(5) == 10
assert call_count == 1
assert await clearable_async_function(5) == 10
assert call_count == 1
clear_thread_cache(clearable_async_function)
assert await clearable_async_function(5) == 10
assert call_count == 2
def test_simple_arguments(self):
call_count = 0
@thread_cached
def simple_function(a: str, b: int, c: str = "default") -> str:
nonlocal call_count
call_count += 1
return f"{a}-{b}-{c}"
# First call with all positional args
result1 = simple_function("test", 42, "custom")
assert call_count == 1
# Same args, all positional - should hit cache
result2 = simple_function("test", 42, "custom")
assert call_count == 1
assert result1 == result2
# Same values but last arg as keyword - creates different cache key
result3 = simple_function("test", 42, c="custom")
assert call_count == 2
assert result1 == result3 # Same result, different cache entry
# Different value - new cache entry
result4 = simple_function("test", 43, "custom")
assert call_count == 3
assert result1 != result4
def test_positional_vs_keyword_args(self):
"""Test that positional and keyword arguments create different cache entries."""
call_count = 0
@thread_cached
def func(a: int, b: int = 10) -> str:
nonlocal call_count
call_count += 1
return f"result-{a}-{b}"
# All positional
result1 = func(1, 2)
assert call_count == 1
assert result1 == "result-1-2"
# Same values, but second arg as keyword
result2 = func(1, b=2)
assert call_count == 2 # Different cache key!
assert result2 == "result-1-2" # Same result
# Verify both are cached separately
func(1, 2) # Uses first cache entry
assert call_count == 2
func(1, b=2) # Uses second cache entry
assert call_count == 2
def test_exception_handling(self):
call_count = 0
@thread_cached
def failing_function(x: int) -> int:
nonlocal call_count
call_count += 1
if x < 0:
raise ValueError("Negative value")
return x * 2
assert failing_function(5) == 10
assert call_count == 1
with pytest.raises(ValueError):
failing_function(-1)
assert call_count == 2
with pytest.raises(ValueError):
failing_function(-1)
assert call_count == 3
assert failing_function(5) == 10
assert call_count == 3
@pytest.mark.asyncio
async def test_async_exception_handling(self):
call_count = 0
@thread_cached
async def async_failing_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01)
if x < 0:
raise ValueError("Negative value")
return x * 2
assert await async_failing_function(5) == 10
assert call_count == 1
with pytest.raises(ValueError):
await async_failing_function(-1)
assert call_count == 2
with pytest.raises(ValueError):
await async_failing_function(-1)
assert call_count == 3
def test_sync_caching_performance(self):
@thread_cached
def slow_function(x: int) -> int:
print(f"slow_function called with x={x}")
time.sleep(0.1)
return x * 2
start = time.time()
result1 = slow_function(5)
first_call_time = time.time() - start
print(f"First call took {first_call_time:.4f} seconds")
start = time.time()
result2 = slow_function(5)
second_call_time = time.time() - start
print(f"Second call took {second_call_time:.4f} seconds")
assert result1 == result2 == 10
assert first_call_time > 0.09
assert second_call_time < 0.01
@pytest.mark.asyncio
async def test_async_caching_performance(self):
@thread_cached
async def slow_async_function(x: int) -> int:
print(f"slow_async_function called with x={x}")
await asyncio.sleep(0.1)
return x * 2
start = time.time()
result1 = await slow_async_function(5)
first_call_time = time.time() - start
print(f"First async call took {first_call_time:.4f} seconds")
start = time.time()
result2 = await slow_async_function(5)
second_call_time = time.time() - start
print(f"Second async call took {second_call_time:.4f} seconds")
assert result1 == result2 == 10
assert first_call_time > 0.09
assert second_call_time < 0.01
def test_with_mock_objects(self):
mock = Mock(return_value=42)
@thread_cached
def function_using_mock(x: int) -> int:
return mock(x)
assert function_using_mock(1) == 42
assert mock.call_count == 1
assert function_using_mock(1) == 42
assert mock.call_count == 1
assert function_using_mock(2) == 42
assert mock.call_count == 2
class TestAsyncTTLCache:
"""Tests for the @async_ttl_cache decorator."""
@pytest.mark.asyncio
async def test_basic_caching(self):
"""Test basic caching functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def cached_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
# Different args - should call function again
result3 = await cached_function(2, 3)
assert result3 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_ttl_expiration(self):
"""Test that cache entries expire after TTL."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def short_lived_cache(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 2
# First call
result1 = await short_lived_cache(5)
assert result1 == 10
assert call_count == 1
# Second call immediately - should use cache
result2 = await short_lived_cache(5)
assert result2 == 10
assert call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Third call after expiration - should call function again
result3 = await short_lived_cache(5)
assert result3 == 10
assert call_count == 2
@pytest.mark.asyncio
async def test_cache_info(self):
"""Test cache info functionality."""
@async_ttl_cache(maxsize=5, ttl_seconds=300)
async def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] == 300
# Add an entry
await info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
@pytest.mark.asyncio
async def test_cache_clear(self):
"""Test cache clearing functionality."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def clearable_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x * 4
# First call
result1 = await clearable_function(2)
assert result1 == 8
assert call_count == 1
# Second call - should use cache
result2 = await clearable_function(2)
assert result2 == 8
assert call_count == 1
# Clear cache
clearable_function.cache_clear()
# Third call after clear - should call function again
result3 = await clearable_function(2)
assert result3 == 8
assert call_count == 2
@pytest.mark.asyncio
async def test_maxsize_cleanup(self):
"""Test that cache cleans up when maxsize is exceeded."""
call_count = 0
@async_ttl_cache(maxsize=3, ttl_seconds=60)
async def size_limited_function(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
# Fill cache to maxsize
await size_limited_function(1) # call_count: 1
await size_limited_function(2) # call_count: 2
await size_limited_function(3) # call_count: 3
info = size_limited_function.cache_info()
assert info["size"] == 3
# Add one more entry - should trigger cleanup
await size_limited_function(4) # call_count: 4
# Cache size should be reduced (cleanup removes oldest entries)
info = size_limited_function.cache_info()
assert info["size"] is not None and info["size"] <= 3 # Should be cleaned up
@pytest.mark.asyncio
async def test_argument_variations(self):
"""Test caching with different argument patterns."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def arg_test_function(a: int, b: str = "default", *, c: int = 100) -> str:
nonlocal call_count
call_count += 1
return f"{a}-{b}-{c}"
# Different ways to call with same logical arguments
result1 = await arg_test_function(1, "test", c=200)
assert call_count == 1
# Same arguments, same order - should use cache
result2 = await arg_test_function(1, "test", c=200)
assert call_count == 1
assert result1 == result2
# Different arguments - should call function
result3 = await arg_test_function(2, "test", c=200)
assert call_count == 2
assert result1 != result3
@pytest.mark.asyncio
async def test_exception_handling(self):
"""Test that exceptions are not cached."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def exception_function(x: int) -> int:
nonlocal call_count
call_count += 1
if x < 0:
raise ValueError("Negative value not allowed")
return x * 2
# Successful call - should be cached
result1 = await exception_function(5)
assert result1 == 10
assert call_count == 1
# Same successful call - should use cache
result2 = await exception_function(5)
assert result2 == 10
assert call_count == 1
# Exception call - should not be cached
with pytest.raises(ValueError):
await exception_function(-1)
assert call_count == 2
# Same exception call - should call again (not cached)
with pytest.raises(ValueError):
await exception_function(-1)
assert call_count == 3
@pytest.mark.asyncio
async def test_concurrent_calls(self):
"""Test caching behavior with concurrent calls."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=60)
async def concurrent_function(x: int) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.05) # Simulate work
return x * x
# Launch concurrent calls with same arguments
tasks = [concurrent_function(3) for _ in range(5)]
results = await asyncio.gather(*tasks)
# All results should be the same
assert all(result == 9 for result in results)
# Note: Due to race conditions, call_count might be up to 5 for concurrent calls
# This tests that the cache doesn't break under concurrent access
assert 1 <= call_count <= 5
class TestAsyncCache:
"""Tests for the @async_cache decorator (no TTL)."""
@pytest.mark.asyncio
async def test_basic_caching_no_ttl(self):
"""Test basic caching functionality without TTL."""
call_count = 0
@async_cache(maxsize=10)
async def cached_function(x: int, y: int = 0) -> int:
nonlocal call_count
call_count += 1
await asyncio.sleep(0.01) # Simulate async work
return x + y
# First call
result1 = await cached_function(1, 2)
assert result1 == 3
assert call_count == 1
# Second call with same args - should use cache
result2 = await cached_function(1, 2)
assert result2 == 3
assert call_count == 1 # No additional call
# Third call after some time - should still use cache (no TTL)
await asyncio.sleep(0.05)
result3 = await cached_function(1, 2)
assert result3 == 3
assert call_count == 1 # Still no additional call
# Different args - should call function again
result4 = await cached_function(2, 3)
assert result4 == 5
assert call_count == 2
@pytest.mark.asyncio
async def test_no_ttl_vs_ttl_behavior(self):
"""Test the difference between TTL and no-TTL caching."""
ttl_call_count = 0
no_ttl_call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # Short TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_call_count
ttl_call_count += 1
return x * 2
@async_cache(maxsize=10) # No TTL
async def no_ttl_function(x: int) -> int:
nonlocal no_ttl_call_count
no_ttl_call_count += 1
return x * 2
# First calls
await ttl_function(5)
await no_ttl_function(5)
assert ttl_call_count == 1
assert no_ttl_call_count == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Second calls after TTL expiry
await ttl_function(5) # Should call function again (TTL expired)
await no_ttl_function(5) # Should use cache (no TTL)
assert ttl_call_count == 2 # TTL function called again
assert no_ttl_call_count == 1 # No-TTL function still cached
@pytest.mark.asyncio
async def test_async_cache_info(self):
"""Test cache info for no-TTL cache."""
@async_cache(maxsize=5)
async def info_test_function(x: int) -> int:
return x * 3
# Check initial cache info
info = info_test_function.cache_info()
assert info["size"] == 0
assert info["maxsize"] == 5
assert info["ttl_seconds"] is None # No TTL
# Add an entry
await info_test_function(1)
info = info_test_function.cache_info()
assert info["size"] == 1
class TestTTLOptional:
"""Tests for optional TTL functionality."""
@pytest.mark.asyncio
async def test_ttl_none_behavior(self):
"""Test that ttl_seconds=None works like no TTL."""
call_count = 0
@async_ttl_cache(maxsize=10, ttl_seconds=None)
async def no_ttl_via_none(x: int) -> int:
nonlocal call_count
call_count += 1
return x**2
# First call
result1 = await no_ttl_via_none(3)
assert result1 == 9
assert call_count == 1
# Wait (would expire if there was TTL)
await asyncio.sleep(0.1)
# Second call - should still use cache
result2 = await no_ttl_via_none(3)
assert result2 == 9
assert call_count == 1 # No additional call
# Check cache info
info = no_ttl_via_none.cache_info()
assert info["ttl_seconds"] is None
@pytest.mark.asyncio
async def test_cache_options_comparison(self):
"""Test different cache options work as expected."""
ttl_calls = 0
no_ttl_calls = 0
@async_ttl_cache(maxsize=10, ttl_seconds=1) # With TTL
async def ttl_function(x: int) -> int:
nonlocal ttl_calls
ttl_calls += 1
return x * 10
@async_cache(maxsize=10) # Process-level cache (no TTL)
async def process_function(x: int) -> int:
nonlocal no_ttl_calls
no_ttl_calls += 1
return x * 10
# Both should cache initially
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Immediate second calls - both should use cache
await ttl_function(3)
await process_function(3)
assert ttl_calls == 1
assert no_ttl_calls == 1
# Wait for TTL to expire
await asyncio.sleep(1.1)
# After TTL expiry
await ttl_function(3) # Should call function again
await process_function(3) # Should still use cache
assert ttl_calls == 2 # TTL cache expired, called again
assert no_ttl_calls == 1 # Process cache never expires

View File

@@ -1002,6 +1002,18 @@ dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "nodeenv"
version = "1.9.1"
description = "Node.js virtual environment builder"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["dev"]
files = [
{file = "nodeenv-1.9.1-py2.py3-none-any.whl", hash = "sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9"},
{file = "nodeenv-1.9.1.tar.gz", hash = "sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f"},
]
[[package]]
name = "opentelemetry-api"
version = "1.35.0"
@@ -1347,6 +1359,27 @@ files = [
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pyright"
version = "1.1.404"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
]
[package.dependencies]
nodeenv = ">=1.6.0"
typing-extensions = ">=4.1"
[package.extras]
all = ["nodejs-wheel-binaries", "twine (>=3.4.1)"]
dev = ["twine (>=3.4.1)"]
nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.4.1"
@@ -1534,31 +1567,31 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.12.9"
version = "0.12.11"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.9-py3-none-linux_armv6l.whl", hash = "sha256:fcebc6c79fcae3f220d05585229463621f5dbf24d79fdc4936d9302e177cfa3e"},
{file = "ruff-0.12.9-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:aed9d15f8c5755c0e74467731a007fcad41f19bcce41cd75f768bbd687f8535f"},
{file = "ruff-0.12.9-py3-none-macosx_11_0_arm64.whl", hash = "sha256:5b15ea354c6ff0d7423814ba6d44be2807644d0c05e9ed60caca87e963e93f70"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d596c2d0393c2502eaabfef723bd74ca35348a8dac4267d18a94910087807c53"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1b15599931a1a7a03c388b9c5df1bfa62be7ede6eb7ef753b272381f39c3d0ff"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3d02faa2977fb6f3f32ddb7828e212b7dd499c59eb896ae6c03ea5c303575756"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:17d5b6b0b3a25259b69ebcba87908496e6830e03acfb929ef9fd4c58675fa2ea"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:72db7521860e246adbb43f6ef464dd2a532ef2ef1f5dd0d470455b8d9f1773e0"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a03242c1522b4e0885af63320ad754d53983c9599157ee33e77d748363c561ce"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fc83e4e9751e6c13b5046d7162f205d0a7bac5840183c5beebf824b08a27340"},
{file = "ruff-0.12.9-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:881465ed56ba4dd26a691954650de6ad389a2d1fdb130fe51ff18a25639fe4bb"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:43f07a3ccfc62cdb4d3a3348bf0588358a66da756aa113e071b8ca8c3b9826af"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:07adb221c54b6bba24387911e5734357f042e5669fa5718920ee728aba3cbadc"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f5cd34fabfdea3933ab85d72359f118035882a01bff15bd1d2b15261d85d5f66"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:f6be1d2ca0686c54564da8e7ee9e25f93bdd6868263805f8c0b8fc6a449db6d7"},
{file = "ruff-0.12.9-py3-none-win32.whl", hash = "sha256:cc7a37bd2509974379d0115cc5608a1a4a6c4bff1b452ea69db83c8855d53f93"},
{file = "ruff-0.12.9-py3-none-win_amd64.whl", hash = "sha256:6fb15b1977309741d7d098c8a3cb7a30bc112760a00fb6efb7abc85f00ba5908"},
{file = "ruff-0.12.9-py3-none-win_arm64.whl", hash = "sha256:63c8c819739d86b96d500cce885956a1a48ab056bbcbc61b747ad494b2485089"},
{file = "ruff-0.12.9.tar.gz", hash = "sha256:fbd94b2e3c623f659962934e52c2bea6fc6da11f667a427a368adaf3af2c866a"},
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
]
[[package]]
@@ -1740,7 +1773,6 @@ files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
]
markers = {dev = "python_version < \"3.11\""}
[[package]]
name = "typing-inspection"
@@ -1897,4 +1929,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "ef7818fba061cea2841c6d7ca4852acde83e4f73b32fca1315e58660002bb0d0"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"


@@ -9,6 +9,7 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
@@ -21,11 +22,12 @@ supabase = "^2.16.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.12.9"
pyright = "^1.1.404"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
[build-system]
requires = ["poetry-core"]


@@ -21,7 +21,7 @@ PRISMA_SCHEMA="postgres/schema.prisma"
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
# REDIS_PASSWORD=
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
@@ -66,6 +66,11 @@ NVIDIA_API_KEY=
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# Notion OAuth App server credentials - https://developers.notion.com/docs/authorization
# Configure a public integration
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):
@@ -129,13 +134,6 @@ POSTMARK_WEBHOOK_TOKEN=
# Error Tracking
SENTRY_DSN=
# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
# Feature Flags
LAUNCH_DARKLY_SDK_KEY=
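Because REDIS_PASSWORD now ships commented out, connecting code has to treat the password as optional. A minimal sketch of that pattern using redis-py (the environment-variable fallbacks here are assumptions, not the backend's actual settings loader):

import os

import redis

client = redis.Redis(
    host=os.getenv("REDIS_HOST", "localhost"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    # An unset or empty REDIS_PASSWORD falls back to an unauthenticated connection.
    password=os.getenv("REDIS_PASSWORD") or None,
)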


@@ -9,4 +9,12 @@ secrets/*
!secrets/.gitkeep
*.ignore.*
*.ign.*
# Load test results and reports
load-tests/*_RESULTS.md
load-tests/*_REPORT.md
load-tests/results/
load-tests/*.json
load-tests/*.log
load-tests/node_modules/*


@@ -9,8 +9,15 @@ WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
# Update package list and install Python and build dependencies
# Install Node.js repository key and setup
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y curl ca-certificates gnupg \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
# Update package list and install Python, Node.js, and build dependencies
RUN apt-get update \
&& apt-get install -y \
python3.13 \
python3.13-dev \
@@ -20,7 +27,9 @@ RUN apt-get update --allow-releaseinfo-change --fix-missing \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client
postgresql-client \
nodejs \
&& rm -rf /var/lib/apt/lists/*
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
@@ -38,7 +47,9 @@ RUN poetry install --no-ansi --no-root
# Generate Prisma client
COPY autogpt_platform/backend/schema.prisma ./
RUN poetry run prisma generate
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
RUN poetry run prisma generate && poetry run gen-prisma-stub
FROM debian:13-slim AS server_dependencies
@@ -54,13 +65,18 @@ ENV PATH=/opt/poetry/bin:$PATH
# Install Python without upgrading system-managed packages
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
# Copy Prisma binaries
# Copy Node.js installation for Prisma
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm
COPY --from=builder /usr/bin/npx /usr/bin/npx
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
@@ -78,6 +94,7 @@ FROM server_dependencies AS migrate
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/backend/data/partial_types.py /app/autogpt_platform/backend/backend/data/partial_types.py
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
FROM server_dependencies AS server
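The builder stage now runs the stub generator immediately after prisma generate. In rough terms, such a script reads the generated types.py and writes a lightweight .pyi beside it, collapsing the expensive TypedDicts to opaque dicts. A simplified sketch of that idea follows; the paths and regex are illustrative, and the real gen_prisma_types_stub.py handles many more cases:

import re
from pathlib import Path

def generate_stub(types_py: Path) -> str:
    # Collapse every generated TypedDict to an opaque dict alias so the
    # type checker no longer has to expand the recursive definitions.
    lines = ["from typing import Any", ""]
    for match in re.finditer(r"^class (\w+)\(TypedDict", types_py.read_text(), re.M):
        lines.append(f"{match.group(1)} = dict[str, Any]")
    return "\n".join(lines) + "\n"

# Paths are illustrative; the real script locates the generated client itself.
Path("prisma/types.pyi").write_text(generate_stub(Path("prisma/types.py")))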


@@ -108,7 +108,7 @@ import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot
from backend.server.v2.myroute import router
from backend.api.features.myroute import router
app = fastapi.FastAPI()
app.include_router(router)
@@ -149,7 +149,7 @@ These provide the easiest way to set up authentication mocking in test modules:
import fastapi
import fastapi.testclient
import pytest
from backend.server.v2.myroute import router
from backend.api.features.myroute import router
app = fastapi.FastAPI()
app.include_router(router)
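A test module built on that setup typically finishes by driving the app through FastAPI's TestClient. A hedged sketch, where the route path and response shape are placeholders rather than the project's real endpoints:

import fastapi
import fastapi.testclient

from backend.api.features.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)

def test_my_route_returns_ok():
    # With authentication mocked at the app level, no real JWT is needed.
    response = client.get("/myroute/items")
    assert response.status_code == 200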


@@ -0,0 +1,242 @@
listing_id,storeListingVersionId,slug,agent_name,agent_video,agent_image,featured,sub_heading,description,categories,useForOnboarding,is_available
6e60a900-9d7d-490e-9af2-a194827ed632,d85882b8-633f-44ce-a315-c20a8c123d19,flux-ai-image-generator,Flux AI Image Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg""]",false,Transform ideas into breathtaking images,"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!","[""creative""]",false,true
f11fc6e9-6166-4676-ac5d-f07127b270c1,c775f60d-b99f-418b-8fe0-53172258c3ce,youtube-transcription-scraper,YouTube Transcription Scraper,https://youtu.be/H8S3pU68lGE,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg""]",false,Fetch the transcriptions from the most popular YouTube videos in your chosen topic,"Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.","[""writing""]",false,true
17908889-b599-4010-8e4f-bed19b8f3446,6e16e65a-ad34-4108-b4fd-4a23fced5ea2,business-ownerceo-finder,Decision Maker Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg""]",false,Contact CEOs today,"Find the key decision-makers you need, fast.
This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you’re looking for and where, and it will:
* Search the area and gather public information
* Return names, roles, and contact details when available
* Provide smart Google search suggestions if details aren’t found
Perfect for:
* B2B sales teams seeking verified leads
* Recruiters sourcing local talent
* Researchers looking to connect with business leaders
Save hours of manual searching and get straight to the people who matter most.","[""business""]",true,true
72beca1d-45ea-4403-a7ce-e2af168ee428,415b7352-0dc6-4214-9d87-0ad3751b711d,smart-meeting-brief,Smart Meeting Prep,https://youtu.be/9ydZR2hkxaY,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png""]",true,Business meeting briefings delivered daily,"Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.
How It Works
1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.
2. It reviews recent email threads with each participant to surface key relationship history and communication context.
3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.
4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.","[""personal""]",true,true
9fa5697a-617b-4fae-aea0-7dbbed279976,b8ceb480-a7a2-4c90-8513-181a49f7071f,automated-support-ai,Automated Support Agent,https://youtu.be/nBMfu_5sgDA,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png""]",true,Automate up to 80 percent of inbound support emails,"Overview:
Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution.
How it Works:
New support emails are routed to the agent.
The agent checks internal documentation for answers.
It measures confidence in the answer found and either replies directly or escalates to a human.
Business Value:
Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.","[""business""]",false,true
2bdac92b-a12c-4131-bb46-0e3b89f61413,31daf49d-31d3-476b-aa4c-099abc59b458,unspirational-poster-maker,Unspirational Poster Maker,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg""]",false,Because adulting is hard,"This witty AI agent generates hilariously relatable ""motivational"" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to ""get it together."" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.","[""creative""]",false,true
9adf005e-2854-4cc7-98cf-f7103b92a7b7,a03b0d8c-4751-43d6-a54e-c3b7856ba4e3,ai-shortform-video-generator-create-viral-ready-content,AI Video Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png""]",false,Create Viral-Ready Shorts Content in Seconds,"OVERVIEW
Transform any trending headline or broad topic into a polished, vertical short-form video in a single run.
The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags.
HOW IT WORKS
1. Input a topic or an exact news headline.
2. The agent fetches live search results and selects the most engaging related story.
3. Key facts are summarised into concise research notes.
4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action.
5. GPT-4o generates an eye-catching title and one or two discoverability hashtags.
6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”).
All voice, style and resolution settings can be adjusted in the Builder before you press ""Run"".
7. Output delivered: Title, Script, Hashtags, Video URL.
KEY USE CASES
- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”).
- Real-time newsjacking with a specific breaking headline.
- Product-launch spotlights and quick event recaps while interest is high.
BUSINESS VALUE
- One-click speed: from idea to finished video in minutes.
- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec.
- No-code workflow: marketers create social video without design or development queues.
- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys.
Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once.
IMPORTANT NOTES
- The agent outputs exactly one video per execution. Run it again for additional shorts.
- Video rendering time varies; AI-generated footage may take several minutes.","[""writing""]",false,true
864e48ef-fee5-42c1-b6a4-2ae139db9fc1,55d40473-0f31-4ada-9e40-d3a7139fcbd4,automated-blog-writer,Automated SEO Blog Writer,https://youtu.be/nKcDCbDVobs,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg""]",true,"Automate research, writing, and publishing for high-ranking blog posts","Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.
How it works:
1. Share your pitch, website, and values.
2. The agent studies your site and uncovers proven SEO opportunities.
3. It spends two hours researching and drafting each post.
4. You set the cadence—publishing runs on autopilot.
Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.
Use cases:
• Founders: Keep your blog active with no time drain.
• Agencies: Deliver scalable SEO content for clients.
• Strategists: Automate execution, focus on strategy.
• Marketers: Drive steady organic growth.
• Local businesses: Capture nearby search traffic.","[""writing""]",false,true
6046f42e-eb84-406f-bae0-8e052064a4fa,a548e507-09a7-4b30-909c-f63fcda10fff,lead-finder-local-businesses,Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp""]",false,Auto-Prospect Like a Pro,"Turbo-charge your local lead generation with the AutoGPT Marketplace’s top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching.
**WHAT IT DOES**
• Searches Google Maps via the official API (no scraping)
• Prompts like “dentists in Chicago” or “coffee shops near me”
• Returns: Name, Website, Rating, Reviews, **Phone & Address**
• Exports instantly to your CRM, sheet, or outreach workflow
**WHY YOU’LL LOVE IT**
✓ Hyper-targeted leads in minutes
✓ Unlimited searches & locations
✓ Zero CAPTCHAs or IP blocks
✓ Works on AutoGPT Cloud or self-hosted (with your API key)
✓ Cut prospecting time by 90%
**PERFECT FOR**
— Marketers & PPC agencies
— SEO consultants & designers
— SaaS founders & sales teams
Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit.
→ Click *Add to Library* and own your market today.","[""business""]",true,true
f623c862-24e9-44fc-8ce8-d8282bb51ad2,eafa21d3-bf14-4f63-a97f-a5ee41df83b3,linkedin-post-generator,LinkedIn Post Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png""]",false,Auto-craft LinkedIn gold,"Create research-driven, high-impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed.
FEATURES
• Automated YouTube research – discovers and analyses top-ranked videos so you don’t have to
• AI-curated synthesis – combines multiple transcripts into one authoritative narrative
• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos
• LinkedIn-optimised output – hook, 2–3 key points, CTA, strategic line breaks, 3–5 hashtags, no markdown
• One-click publish – returns a ready-to-post text block (≤1,300 characters)
HOW IT WORKS
1. Enter a topic and your preferred writing parameters.
2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs.
3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet.
4. The model writes a concise, engaging post designed for maximum LinkedIn engagement.
USE CASES
• Thought-leadership updates backed by fresh video research
• Rapid industry summaries after major events, webinars, or conferences
• Consistent LinkedIn content for busy founders, marketers, and creators
WHY YOU’LL LOVE IT
Save hours of manual research, avoid surface-level hot-takes, and publish posts that showcase real expertise—without the heavy lift.","[""writing""]",true,true
7d4120ad-b6b3-4419-8bdb-7dd7d350ef32,e7bb29a1-23c7-4fee-aa3b-5426174b8c52,youtube-to-linkedin-post-converter,YouTube to LinkedIn Post Converter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png""]",false,Transform Your YouTube Videos into Engaging LinkedIn Posts with AI,"WHAT IT DOES:
This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn.
HOW IT WORKS:
- You provide the URL to the YouTube video (required)
- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.)
- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.)
- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model
- The models extract key insights, memorable quotes, and the main points from the video
- You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement
INPUTS:
- Source YouTube Video – Provide the URL to the YouTube video
- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.)
- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.)
- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.)
OUTPUT:
- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences
Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.","[""writing""]",false,true
c61d6a83-ea48-4df8-b447-3da2d9fe5814,00fdd42c-a14c-4d19-a567-65374ea0e87f,personalized-morning-coffee-newsletter,Personal Newsletter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png""]",false,Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.,"This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.
How It Works
1. Enter your favorite topics, industries, or areas of interest.
2. Choose your tone—professional, casual, or humorous.
3. Set your preferred delivery cadence: daily or weekly.
4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter.
Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.
Use Cases
• Executives: Get a daily digest of market updates and leadership insights.
• Marketers: Receive curated creative trends and campaign inspiration.
• Entrepreneurs: Stay updated on your industry without information overload.","[""research""]",true,true
e2e49cfc-4a39-4d62-a6b3-c095f6d025ff,fc2c9976-0962-4625-a27b-d316573a9e7f,email-address-finder,Email Scout - Contact Finder Assistant,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg""]",false,Find contact details from name and location using AI search,"Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information.
Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like ""Tim Cook, USA"" or ""Sarah Smith, London"" and let the AI assistant do the work of finding potential contact details.
Key Features:
- Quick search from just name and location
- Scans multiple public sources
- Automated AI-powered search process
- Easy to use with simple inputs
Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact.
Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible.","[""""]",false,true
81bcc372-0922-4a36-bc35-f7b1e51d6939,e437cc95-e671-489d-b915-76561fba8c7f,ai-youtube-to-blog-converter,YouTube Video to SEO Blog Writer,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png""]",false,One link. One click. One powerful blog post.,"Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts.
Your videos deserve a second life—in writing.
Make your content work twice as hard by repurposing it into engaging, searchable articles.
Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest.
FEATURES
• CONTENT ANALYSIS
Extracts key points from the video while preserving your message and intent.
• CUSTOMIZABLE OUTPUT
Select a tone that fits your audience: casual, professional, educational, or formal.
• SEO OPTIMIZATION
Automatically creates engaging titles and structured subheadings for better search visibility.
• USER-FRIENDLY
Repurpose your videos into written content to expand your reach and improve accessibility.
Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless.
","[""writing""]",true,true
5c3510d2-fc8b-4053-8e19-67f53c86eb1a,f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9,ai-webpage-copy-improver,AI Webpage Copy Improver,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg""]",false,Boost Your Website's Search Engine Performance,"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.","[""marketing""]",true,true
94d03bd3-7d44-4d47-b60c-edb2f89508d6,b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0,domain-name-finder,Domain Name Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg""]",false,Instantly generate brand-ready domain names that are actually available,"Overview:
Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable.
How It Works
1. Input your product pitch, company name, or core keywords.
2. The agent analyzes brand tone, audience, and industry context.
3. It generates a list of unique, memorable domains that match your criteria.
4. All names are pre-filtered for real-time availability, so you can register immediately.
Business Value
Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.
Key Use Cases
• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.
• Marketers: Test name options across campaigns with instant availability data.
• Entrepreneurs: Validate ideas faster with instant domain options.","[""business""]",false,true
7a831906-daab-426f-9d66-bcf98d869426,516d813b-d1bc-470f-add7-c63a4b2c2bad,ai-function,AI Function,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png""]",false,Never Code Again,"AI FUNCTION MAGIC
Your AI-powered assistant for turning plain-English descriptions into working Python functions.
HOW IT WORKS
1. Describe what the function should do.
2. Specify the inputs it needs.
3. Receive the generated Python code.
FEATURES
- Effortless Function Generation: convert natural-language specs into complete functions.
- Customizable Inputs: define the parameters that matter to you.
- Versatile Use Cases: simulate data, automate tasks, prototype ideas.
- Seamless Integration: add the generated function directly to your codebase.
EXAMPLE
Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.”
Input parameter: number_of_people (default 20)
Result: a list of dictionaries such as
[
{ ""name"": ""Emma Martinez"", ""date_of_birth"": ""19921103"", ""job_title"": ""Data Analyst"", ""age"": 32 },
{ ""name"": ""Liam OConnor"", ""date_of_birth"": ""19850719"", ""job_title"": ""Marketing Manager"", ""age"": 39 },
…18 more entries…
]","[""development""]",false,true
1 listing_id storeListingVersionId slug agent_name agent_video agent_image featured sub_heading description categories useForOnboarding is_available
2 6e60a900-9d7d-490e-9af2-a194827ed632 d85882b8-633f-44ce-a315-c20a8c123d19 flux-ai-image-generator Flux AI Image Generator ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg"] false Transform ideas into breathtaking images Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today! ["creative"] false true
3 f11fc6e9-6166-4676-ac5d-f07127b270c1 c775f60d-b99f-418b-8fe0-53172258c3ce youtube-transcription-scraper YouTube Transcription Scraper https://youtu.be/H8S3pU68lGE ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg"] false Fetch the transcriptions from the most popular YouTube videos in your chosen topic Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content. ["writing"] false true
4 17908889-b599-4010-8e4f-bed19b8f3446 6e16e65a-ad34-4108-b4fd-4a23fced5ea2 business-ownerceo-finder Decision Maker Lead Finder ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg"] false Contact CEOs today Find the key decision-makers you need, fast. This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you’re looking for and where, and it will: * Search the area and gather public information * Return names, roles, and contact details when available * Provide smart Google search suggestions if details aren’t found Perfect for: * B2B sales teams seeking verified leads * Recruiters sourcing local talent * Researchers looking to connect with business leaders Save hours of manual searching and get straight to the people who matter most. ["business"] true true
5 72beca1d-45ea-4403-a7ce-e2af168ee428 415b7352-0dc6-4214-9d87-0ad3751b711d smart-meeting-brief Smart Meeting Prep https://youtu.be/9ydZR2hkxaY ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png"] true Business meeting briefings delivered daily Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls. How It Works 1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day. 2. It reviews recent email threads with each participant to surface key relationship history and communication context. 3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data. 4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points. ["personal"] true true
6 9fa5697a-617b-4fae-aea0-7dbbed279976 b8ceb480-a7a2-4c90-8513-181a49f7071f automated-support-ai Automated Support Agent https://youtu.be/nBMfu_5sgDA ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png"] true Automate up to 80 percent of inbound support emails Overview: Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution. How it Works: New support emails are routed to the agent. The agent checks internal documentation for answers. It measures confidence in the answer found and either replies directly or escalates to a human. Business Value: Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times. ["business"] false true
7 2bdac92b-a12c-4131-bb46-0e3b89f61413 31daf49d-31d3-476b-aa4c-099abc59b458 unspirational-poster-maker Unspirational Poster Maker ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg"] false Because adulting is hard This witty AI agent generates hilariously relatable "motivational" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to "get it together." Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos. ["creative"] false true
8 9adf005e-2854-4cc7-98cf-f7103b92a7b7 a03b0d8c-4751-43d6-a54e-c3b7856ba4e3 ai-shortform-video-generator-create-viral-ready-content AI Video Generator ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png"] false Create Viral-Ready Shorts Content in Seconds OVERVIEW Transform any trending headline or broad topic into a polished, vertical short-form video in a single run. The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags. HOW IT WORKS 1. Input a topic or an exact news headline. 2. The agent fetches live search results and selects the most engaging related story. 3. Key facts are summarised into concise research notes. 4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action. 5. GPT-4o generates an eye-catching title and one or two discoverability hashtags. 6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”). – All voice, style and resolution settings can be adjusted in the Builder before you press "Run". 7. Output delivered: Title, Script, Hashtags, Video URL. KEY USE CASES - Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”). - Real-time newsjacking with a specific breaking headline. - Product-launch spotlights and quick event recaps while interest is high. BUSINESS VALUE - One-click speed: from idea to finished video in minutes. - Consistent brand look: Revid presets keep voice, style and aspect ratio on spec. - No-code workflow: marketers create social video without design or development queues. - Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys. Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once. IMPORTANT NOTES - The agent outputs exactly one video per execution. Run it again for additional shorts. - Video rendering time varies; AI-generated footage may take several minutes. ["writing"] false true
9 864e48ef-fee5-42c1-b6a4-2ae139db9fc1 55d40473-0f31-4ada-9e40-d3a7139fcbd4 automated-blog-writer Automated SEO Blog Writer https://youtu.be/nKcDCbDVobs ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg"] true Automate research, writing, and publishing for high-ranking blog posts Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility. How it works: 1. Share your pitch, website, and values. 2. The agent studies your site and uncovers proven SEO opportunities. 3. It spends two hours researching and drafting each post. 4. You set the cadence—publishing runs on autopilot. Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most. Use cases: • Founders: Keep your blog active with no time drain. • Agencies: Deliver scalable SEO content for clients. • Strategists: Automate execution, focus on strategy. • Marketers: Drive steady organic growth. • Local businesses: Capture nearby search traffic. ["writing"] false true
10 6046f42e-eb84-406f-bae0-8e052064a4fa a548e507-09a7-4b30-909c-f63fcda10fff lead-finder-local-businesses Lead Finder ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp"] false Auto-Prospect Like a Pro Turbo-charge your local lead generation with the AutoGPT Marketplace’s top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching. **WHAT IT DOES** • Searches Google Maps via the official API (no scraping) • Prompts like “dentists in Chicago” or “coffee shops near me” • Returns: Name, Website, Rating, Reviews, **Phone & Address** • Exports instantly to your CRM, sheet, or outreach workflow **WHY YOU’LL LOVE IT** ✓ Hyper-targeted leads in minutes ✓ Unlimited searches & locations ✓ Zero CAPTCHAs or IP blocks ✓ Works on AutoGPT Cloud or self-hosted (with your API key) ✓ Cut prospecting time by 90% **PERFECT FOR** — Marketers & PPC agencies — SEO consultants & designers — SaaS founders & sales teams Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit. → Click *Add to Library* and own your market today. ["business"] true true
11 f623c862-24e9-44fc-8ce8-d8282bb51ad2 eafa21d3-bf14-4f63-a97f-a5ee41df83b3 linkedin-post-generator LinkedIn Post Generator ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png"] false Auto‑craft LinkedIn gold Create research‑driven, high‑impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed. FEATURES • Automated YouTube research – discovers and analyses top‑ranked videos so you don’t have to • AI‑curated synthesis – combines multiple transcripts into one authoritative narrative • Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos • LinkedIn‑optimised output – hook, 2‑3 key points, CTA, strategic line breaks, 3‑5 hashtags, no markdown • One‑click publish – returns a ready‑to‑post text block (≤1 300 characters) HOW IT WORKS 1. Enter a topic and your preferred writing parameters. 2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs. 3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet. 4. The model writes a concise, engaging post designed for maximum LinkedIn engagement. USE CASES • Thought‑leadership updates backed by fresh video research • Rapid industry summaries after major events, webinars, or conferences • Consistent LinkedIn content for busy founders, marketers, and creators WHY YOU’LL LOVE IT Save hours of manual research, avoid surface‑level hot‑takes, and publish posts that showcase real expertise—without the heavy lift. ["writing"] true true
12 7d4120ad-b6b3-4419-8bdb-7dd7d350ef32 e7bb29a1-23c7-4fee-aa3b-5426174b8c52 youtube-to-linkedin-post-converter YouTube to LinkedIn Post Converter ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png"] false Transform Your YouTube Videos into Engaging LinkedIn Posts with AI WHAT IT DOES: This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn. HOW IT WORKS: - You provide the URL to the YouTube video (required) - You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.) - You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.) - The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model - The models extract key insights, memorable quotes, and the main points from the video - You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement INPUTS: - Source YouTube Video – Provide the URL to the YouTube video - Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.) - Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.) - Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.) OUTPUT: - LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding. ["writing"] false true
13 c61d6a83-ea48-4df8-b447-3da2d9fe5814 00fdd42c-a14c-4d19-a567-65374ea0e87f personalized-morning-coffee-newsletter Personal Newsletter ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png"] false Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood. This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained. How It Works 1. Enter your favorite topics, industries, or areas of interest. 2. Choose your tone—professional, casual, or humorous. 3. Set your preferred delivery cadence: daily or weekly. 4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter. Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day. Use Cases • Executives: Get a daily digest of market updates and leadership insights. • Marketers: Receive curated creative trends and campaign inspiration. • Entrepreneurs: Stay updated on your industry without information overload. ["research"] true true
14 e2e49cfc-4a39-4d62-a6b3-c095f6d025ff fc2c9976-0962-4625-a27b-d316573a9e7f email-address-finder Email Scout - Contact Finder Assistant ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg"] false Find contact details from name and location using AI search Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information. Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like "Tim Cook, USA" or "Sarah Smith, London" and let the AI assistant do the work of finding potential contact details. Key Features: - Quick search from just name and location - Scans multiple public sources - Automated AI-powered search process - Easy to use with simple inputs Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact. Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible. [""] false true
15 81bcc372-0922-4a36-bc35-f7b1e51d6939 e437cc95-e671-489d-b915-76561fba8c7f ai-youtube-to-blog-converter YouTube Video to SEO Blog Writer ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png"] false One link. One click. One powerful blog post. Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts. Your videos deserve a second life—in writing. Make your content work twice as hard by repurposing it into engaging, searchable articles. Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest. FEATURES • CONTENT ANALYSIS Extracts key points from the video while preserving your message and intent. • CUSTOMIZABLE OUTPUT Select a tone that fits your audience: casual, professional, educational, or formal. • SEO OPTIMIZATION Automatically creates engaging titles and structured subheadings for better search visibility. • USER-FRIENDLY Repurpose your videos into written content to expand your reach and improve accessibility. Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless. ["writing"] true true
16 5c3510d2-fc8b-4053-8e19-67f53c86eb1a f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9 ai-webpage-copy-improver AI Webpage Copy Improver ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg"] false Boost Your Website's Search Engine Performance Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver. ["marketing"] true true
17 94d03bd3-7d44-4d47-b60c-edb2f89508d6 b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0 domain-name-finder Domain Name Finder ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg"] false Instantly generate brand-ready domain names that are actually available Overview: Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable. How It Works 1. Input your product pitch, company name, or core keywords. 2. The agent analyzes brand tone, audience, and industry context. 3. It generates a list of unique, memorable domains that match your criteria. 4. All names are pre-filtered for real-time availability, so you can register immediately. Business Value Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains. Key Use Cases • Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands. • Marketers: Test name options across campaigns with instant availability data. • Entrepreneurs: Validate ideas faster with instant domain options. ["business"] false true
18 7a831906-daab-426f-9d66-bcf98d869426 516d813b-d1bc-470f-add7-c63a4b2c2bad ai-function AI Function ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png"] false Never Code Again AI FUNCTION MAGIC Your AI‑powered assistant for turning plain‑English descriptions into working Python functions. HOW IT WORKS 1. Describe what the function should do. 2. Specify the inputs it needs. 3. Receive the generated Python code. FEATURES - Effortless Function Generation: convert natural‑language specs into complete functions. - Customizable Inputs: define the parameters that matter to you. - Versatile Use Cases: simulate data, automate tasks, prototype ideas. - Seamless Integration: add the generated function directly to your codebase. EXAMPLE Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.” Input parameter: number_of_people (default 20) Result: a list of dictionaries such as [ { "name": "Emma Martinez", "date_of_birth": "1992‑11‑03", "job_title": "Data Analyst", "age": 32 }, { "name": "Liam O’Connor", "date_of_birth": "1985‑07‑19", "job_title": "Marketing Manager", "age": 39 }, …18 more entries… ] ["development"] false true
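The AI Function listing above describes generating Python from a plain-English spec. As a point of reference, a hand-written version of the `fake_people` function from its example might look like the sketch below; the name pools and the year-based age calculation are illustrative assumptions, not the agent's actual output.

```python
# Hypothetical sketch of the function the "AI Function" agent is asked to
# emulate in the example above; field names match the listing, data is made up.
import random
from datetime import date

FIRST = ["Emma", "Liam", "Olivia", "Noah"]
LAST = ["Martinez", "O'Connor", "Chen", "Okafor"]
JOBS = ["Data Analyst", "Marketing Manager", "Teacher", "Engineer"]

def fake_people(n: int = 20) -> list[dict]:
    """Generate n fake person records with name, DoB, job title, and age."""
    people = []
    for _ in range(n):
        dob = date(random.randint(1950, 2005),
                   random.randint(1, 12),
                   random.randint(1, 28))
        people.append({
            "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
            "date_of_birth": dob.isoformat(),
            "job_title": random.choice(JOBS),
            # Approximate age by year difference, as in the listing's example.
            "age": date.today().year - dob.year,
        })
    return people
```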

File diff suppressed because it is too large

View File

@@ -0,0 +1,590 @@
{
"id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"version": 29,
"is_active": false,
"name": "Unspirational Poster Maker",
"description": "This witty AI agent generates hilariously relatable \"motivational\" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clich\u00e9s and embrace our collective struggles to \"get it together.\" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
},
"metadata": {
"position": {
"x": 2329.937006807125,
"y": 80.49068076698347
}
},
"input_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Theme",
"value": "Cooking"
},
"metadata": {
"position": {
"x": -1219.5966324967521,
"y": 80.50339731789956
}
},
"input_links": [],
"output_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1132.373897280427,
"y": 88.44610377514573
}
},
"input_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 590.7543882245375,
"y": 85.69546832466654
}
},
"input_links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 60.48904654237981,
"y": 86.06183359510214
}
},
"input_links": [
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"prompt": "A cat sprawled dramatically across an important-looking document during a work-from-home meeting, making direct eye contact with the camera while knocking over a coffee mug in slow motion. Text Overlay: \"Chaos is a career path. Be the obstacle everyone has to work around.\"",
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1668.3572666956795,
"y": 89.69665262457966
}
},
"input_links": [
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "<example_output>\nA photo of a sloth lounging on a desk, with its head resting on a keyboard. The keyboard is on top of a laptop with a blank spreadsheet open. A to-do list is placed beside the laptop, with the top item written as \"Do literally anything\". There is a text overlay that says \"If you can't outwork them, outnap them.\".\n</example_output>\n\nCreate a relatable satirical, snarky, user-deprecating motivational style image based on the theme: \"{{THEME}}\".\n\nOutput only the image description and caption, without any additional commentary or formatting.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -561.1139207164056,
"y": 78.60434452403524
}
},
"input_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"output_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:58:34.390Z",
"input_schema": {
"type": "object",
"properties": {
"Theme": {
"advanced": false,
"secret": false,
"title": "Theme",
"default": "Cooking"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"ideogram_api_key_credentials": {
"credentials_provider": [
"ideogram"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "ideogram",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.IDEOGRAM: 'ideogram'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"ideogram_api_key_credentials",
"openai_api_key_credentials"
],
"title": "UnspirationalPosterMakerCredentialsInputSchema",
"type": "object"
}
}
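Reading the Unspirational Poster Maker graph above: the `Theme` input feeds the AI text block through the `prompt_values_#_THEME` link, which fills the `{{THEME}}` placeholder in the prompt; the generated poster description then fans out to four Ideogram image nodes, whose results all land on the single `Generated Image` output. Below is a minimal Python sketch of that flow, assuming `prompt_values` keys substitute into `{{KEY}}` placeholders (a reading of the graph data, not platform code).

```python
# Minimal sketch of the data flow in the graph above. generate_text and
# generate_image are hypothetical stand-ins for the gpt-4o and Ideogram blocks.
import re

def fill_placeholders(prompt: str, values: dict[str, str]) -> str:
    """Replace {{KEY}} placeholders the way the AI block's prompt_values appear to."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: values.get(m.group(1), m.group(0)), prompt)

def run_poster_maker(theme: str, generate_text, generate_image) -> list[str]:
    prompt = fill_placeholders(
        'Create a relatable satirical motivational image based on the theme: "{{THEME}}".',
        {"THEME": theme},
    )
    description = generate_text(prompt)                      # one gpt-4o call
    return [generate_image(description) for _ in range(4)]  # four Ideogram nodes fan out
```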

File diff suppressed because it is too large

View File

@@ -0,0 +1,447 @@
{
"id": "622849a7-5848-4838-894d-01f8f07e3fad",
"version": 18,
"is_active": true,
"name": "AI Function",
"description": "## AI-Powered Function Magic: Never code again!\nProvide a description of a python function and your inputs and AI will provide the results.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "return",
"title": null,
"value": null,
"format": "",
"secret": false,
"advanced": false,
"description": "The value returned by the function"
},
"metadata": {
"position": {
"x": 1598.8622921127233,
"y": 291.59140862204725
}
},
"input_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "o3-mini",
"retry": 3,
"prompt": "{{ARGS}}",
"sys_prompt": "You are now the following python function:\n\n```\n# {{DESCRIPTION}}\n{{FUNCTION}}\n```\n\nThe user will provide your input arguments.\nOnly respond with your `return` value.\nDo not include any commentary or additional text in your response. \nDo not include ``` backticks or any other decorators.",
"ollama_host": "localhost:11434",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 995,
"y": 290.50000000000006
}
},
"input_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"output_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Function Definition",
"title": null,
"value": "def fake_people(n: int) -> list[dict]:",
"secret": false,
"advanced": false,
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -672.6908629664215,
"y": 302.42044359789116
}
},
"input_links": [],
"output_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "844530de-2354-46d8-b748-67306b7bbca1",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Arguments",
"title": null,
"value": "20",
"secret": false,
"advanced": false,
"description": "The function's inputs\n\ne.g \"20\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -158.1623599617334,
"y": 295.410856928333
}
},
"input_links": [],
"output_links": [
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba",
"input_default": {
"name": "Description",
"title": null,
"value": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.",
"secret": false,
"advanced": false,
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": 374.4548658057796,
"y": 290.3779121974126
}
},
"input_links": [],
"output_links": [
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-04-19T17:10:48.857Z",
"input_schema": {
"type": "object",
"properties": {
"Function Definition": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Function Definition",
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"default": "def fake_people(n: int) -> list[dict]:"
},
"Arguments": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Arguments",
"description": "The function's inputs\n\ne.g \"20\"",
"default": "20"
},
"Description": {
"advanced": false,
"anyOf": [
{
"format": "long-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Description",
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"default": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"return": {
"advanced": false,
"secret": false,
"title": "return",
"description": "The value returned by the function"
}
},
"required": [
"return"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"o3-mini"
]
}
},
"required": [
"openai_api_key_credentials"
],
"title": "AIFunctionCredentialsInputSchema",
"type": "object"
}
}
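Note that the AI Function graph above never executes code: its `sys_prompt` makes the model role-play the function and respond with only the `return` value. A compact sketch of how the three inputs are assembled, with `call_llm` as a hypothetical stand-in for the AI text block (the platform's `{{...}}` placeholders are rendered here via `str.format`):

```python
# Sketch of this graph's prompt assembly: DESCRIPTION and FUNCTION fill the
# system prompt, then the user message carries the arguments. call_llm is a
# hypothetical stand-in for the AI block, not a real platform API.
SYS_TEMPLATE = (
    "You are now the following python function:\n\n"
    "```\n# {DESCRIPTION}\n{FUNCTION}\n```\n\n"
    "The user will provide your input arguments.\n"
    "Only respond with your `return` value."
)

def ai_function(call_llm, function_def: str, description: str, args: str) -> str:
    sys_prompt = SYS_TEMPLATE.format(DESCRIPTION=description, FUNCTION=function_def)
    return call_llm(model="o3-mini", system=sys_prompt, prompt=args)

# Example, mirroring the graph's defaults:
# ai_function(call_llm, "def fake_people(n: int) -> list[dict]:",
#             "Generates n examples of fake data representing people.", "20")
```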

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,403 @@
{
"id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"version": 12,
"is_active": true,
"name": "Flux AI Image Generator",
"description": "Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "7482c59d-725f-4686-82b9-0dfdc4e92316",
"block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
"input_default": {
"text": "Press the \"Advanced\" toggle and input your replicate API key.\n\nYou can get one here:\nhttps://replicate.com/account/api-tokens\n"
},
"metadata": {
"position": {
"x": 872.8268131538296,
"y": 614.9436919065381
}
},
"input_links": [],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image"
},
"metadata": {
"position": {
"x": 1453.6844137728922,
"y": 963.2466395125115
}
},
"input_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Image Subject",
"value": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks.",
"description": "The subject of the image"
},
"metadata": {
"position": {
"x": -314.43009631839783,
"y": 962.935949165938
}
},
"input_links": [],
"output_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"block_id": "90f8c45e-e983-4644-aa0b-b4ebe2f531bc",
"input_default": {
"prompt": "dog",
"output_format": "png",
"replicate_model_name": "Flux Pro 1.1"
},
"metadata": {
"position": {
"x": 873.0119949791526,
"y": 966.1604399052493
}
},
"input_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o-mini",
"prompt": "Generate an incredibly detailed, photorealistic image prompt about {{TOPIC}}, describing the camera it's taken with and prompting the diffusion model to use all the best quality techniques.\n\nOutput only the prompt with no additional commentary.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 277.3057034159709,
"y": 962.8382498113764
}
},
"input_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"output_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
},
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T18:46:11.492Z",
"input_schema": {
"type": "object",
"properties": {
"Image Subject": {
"advanced": false,
"secret": false,
"title": "Image Subject",
"description": "The subject of the image",
"default": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image"
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"replicate_api_key_credentials": {
"credentials_provider": [
"replicate"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "replicate",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.REPLICATE: 'replicate'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o-mini"
]
}
},
"required": [
"replicate_api_key_credentials",
"openai_api_key_credentials"
],
"title": "FluxAIImageGeneratorCredentialsInputSchema",
"type": "object"
}
}
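The Flux AI Image Generator graph is a two-stage pipeline: gpt-4o-mini expands the `Image Subject` into a detailed photorealistic prompt, and a Replicate block renders it with Flux Pro 1.1. A rough standalone equivalent using the OpenAI and Replicate Python clients might look like the following; the Flux model slug is an assumption, since the graph only records the label "Flux Pro 1.1".

```python
# Hedged sketch of the two-stage pipeline above. The Replicate model slug is
# an assumption; OPENAI_API_KEY and REPLICATE_API_TOKEN must be set.
from openai import OpenAI
import replicate

def generate_image(subject: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Generate an incredibly detailed, photorealistic image prompt about {subject}. "
                       "Output only the prompt with no additional commentary.",
        }],
    ).choices[0].message.content
    # Stage two: render the prompt with Flux; the result stringifies to a URL.
    output = replicate.run(
        "black-forest-labs/flux-1.1-pro",  # assumed slug for "Flux Pro 1.1"
        input={"prompt": prompt, "output_format": "png"},
    )
    return str(output)
```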

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,505 @@
{
"id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"version": 12,
"is_active": true,
"name": "AI Webpage Copy Improver",
"description": "Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Improved Webpage Copy"
},
"metadata": {
"position": {
"x": 1039.5884372540172,
"y": -0.8359099621230968
}
},
"input_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
},
"metadata": {
"position": {
"x": 1037.7724103954706,
"y": -606.5934325506903
}
},
"input_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Homepage URL",
"value": "https://agpt.co",
"description": "Enter the URL of the homepage you want to improve"
},
"metadata": {
"position": {
"x": -1195.1455674454749,
"y": 0
}
},
"input_links": [],
"output_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"block_id": "436c3984-57fd-4b85-8e9a-459b356883bd",
"input_default": {
"raw_content": false
},
"metadata": {
"position": {
"x": -631.7330786555249,
"y": 1.9638396496230826
}
},
"input_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"output_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Current Webpage Content:\n```\n{{CONTENT}}\n```\n\nBased on the following analysis of the webpage content:\n\n```\n{{ANALYSIS}}\n```\n\nRewrite and improve the content to address the identified issues. Focus on:\n1. Enhancing clarity and readability\n2. Optimizing for SEO (suggest and incorporate relevant keywords)\n3. Improving calls-to-action for better conversion rates\n4. Refining the structure and organization\n5. Maintaining brand consistency while improving the overall tone\n\nProvide the improved content in HTML format inside a code-block with \"```\" backticks, preserving the original structure where appropriate. Also, include a brief summary of the changes made and their potential impact.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 488.37278423303917,
"y": 0
}
},
"input_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"output_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "08612ce2-625b-4c17-accd-3acace7b6477",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Analyze the following webpage content and provide a detailed report on its current state, including strengths and weaknesses in terms of clarity, SEO optimization, and potential for conversion:\n\n{{CONTENT}}\n\nInclude observations on:\n1. Overall readability and clarity\n2. Use of keywords and SEO-friendly language\n3. Effectiveness of calls-to-action\n4. Structure and organization of content\n5. Tone and brand consistency",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -72.66206703605442,
"y": -0.58403945075381
}
},
"input_links": [
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
},
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:47:22.036Z",
"input_schema": {
"type": "object",
"properties": {
"Homepage URL": {
"advanced": false,
"secret": false,
"title": "Homepage URL",
"description": "Enter the URL of the homepage you want to improve",
"default": "https://agpt.co"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Improved Webpage Copy": {
"advanced": false,
"secret": false,
"title": "Improved Webpage Copy"
},
"Original Page Analysis": {
"advanced": false,
"secret": false,
"title": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
}
},
"required": [
"Improved Webpage Copy",
"Original Page Analysis"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"jina_api_key_credentials",
"openai_api_key_credentials"
],
"title": "AIWebpageCopyImproverCredentialsInputSchema",
"type": "object"
}
}
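The AI Webpage Copy Improver graph scrapes the page via a Jina-backed block, has gpt-4o analyze the copy, then feeds both the content and that analysis into a second gpt-4o call that rewrites it. A hedged sketch of the chain, assuming the scrape block wraps the Jina Reader endpoint (`https://r.jina.ai/<url>`) and with `call_llm` standing in for the two AI blocks:

```python
# Sketch of the analyse-then-rewrite chain above, under the assumption that
# the scrape block wraps Jina Reader. call_llm is a hypothetical stand-in.
import os
import requests

def fetch_page_markdown(url: str) -> str:
    """Fetch a page as LLM-ready text via Jina Reader."""
    headers = {"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"}
    resp = requests.get(f"https://r.jina.ai/{url}", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.text

def improve_copy(url: str, call_llm) -> tuple[str, str]:
    content = fetch_page_markdown(url)
    # First gpt-4o call: analyze readability, SEO, CTAs, structure, tone.
    analysis = call_llm(f"Analyze the following webpage content and report on "
                        f"its clarity, SEO, and conversion potential:\n\n{content}")
    # Second gpt-4o call: rewrite using both the content and the analysis.
    improved = call_llm(
        f"Current Webpage Content:\n```\n{content}\n```\n\n"
        f"Based on the following analysis:\n```\n{analysis}\n```\n\n"
        "Rewrite and improve the content."
    )
    return analysis, improved
```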

View File

@@ -0,0 +1,615 @@
{
"id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"version": 29,
"is_active": true,
"name": "Email Address Finder",
"description": "Input information of a business and find their email address",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Address",
"value": "USA"
},
"metadata": {
"position": {
"x": 1047.9357219838776,
"y": 1067.9123910370954
}
},
"input_links": [],
"output_links": [
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0",
"input_default": {
"group": 1,
"pattern": "<email>(.*?)<\\/email>"
},
"metadata": {
"position": {
"x": 3381.2821481740634,
"y": 246.091098184158
}
},
"input_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"output_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Email"
},
"metadata": {
"position": {
"x": 4525.4246310882,
"y": 246.36913665010354
}
},
"input_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"block_id": "87840993-2053-44b7-8da4-187ad4ee518c",
"input_default": {},
"metadata": {
"position": {
"x": 2182.7499999999995,
"y": 242.00001144409185
}
},
"input_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"output_links": [
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Business Name",
"value": "Tim Cook"
},
"metadata": {
"position": {
"x": 1049.9704155272595,
"y": 244.49931152418344
}
},
"input_links": [],
"output_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Email Address of {{NAME}}, {{ADDRESS}}",
"values": {}
},
"metadata": {
"position": {
"x": 1625.25,
"y": 243.25001144409185
}
},
"input_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"output_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Failed to find email. \nResult:\n{{RESULT}}",
"values": {}
},
"metadata": {
"position": {
"x": 3949.7493830805934,
"y": 705.209819698647
}
},
"input_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
}
],
"output_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "claude-sonnet-4-5-20250929",
"prompt": "<business_website>\n{{WEBSITE_CONTENT}}\n</business_website>\n\nExtract the Contact Email of {{BUSINESS_NAME}}.\n\nIf no email that can be used to contact {{BUSINESS_NAME}} is present, output `N/A`.\nDo not share any emails other than the email for this specific entity.\n\nIf multiple present pick the likely best one.\n\nRespond with the email (or N/A) inside <email></email> tags.\n\nExample Response:\n\n<thoughts_or_comments>\nThere were many emails present, but luckily one was for {{BUSINESS_NAME}} which I have included below.\n</thoughts_or_comments>\n<email>\nexample@email.com\n</email>",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 2774.879259081777,
"y": 243.3102035752969
}
},
"input_links": [
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
},
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-01-03T00:46:30.244Z",
"input_schema": {
"type": "object",
"properties": {
"Address": {
"advanced": false,
"secret": false,
"title": "Address",
"default": "USA"
},
"Business Name": {
"advanced": false,
"secret": false,
"title": "Business Name",
"default": "Tim Cook"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Email": {
"advanced": false,
"secret": false,
"title": "Email"
}
},
"required": [
"Email"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"anthropic_api_key_credentials": {
"credentials_provider": [
"anthropic"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "anthropic",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.ANTHROPIC: 'anthropic'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"claude-sonnet-4-5-20250929"
]
}
},
"required": [
"jina_api_key_credentials",
"anthropic_api_key_credentials"
],
"title": "EmailAddressFinderCredentialsInputSchema",
"type": "object"
}
}

View File

@@ -14,19 +14,49 @@ def configured_snapshot(snapshot: Snapshot) -> Snapshot:
@pytest.fixture
def test_user_id() -> str:
"""Test user ID fixture."""
return "test-user-id"
return "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
@pytest.fixture
def admin_user_id() -> str:
"""Admin user ID fixture."""
return "admin-user-id"
return "4e53486c-cf57-477e-ba2a-cb02dc828e1b"
@pytest.fixture
def target_user_id() -> str:
"""Target user ID fixture."""
return "target-user-id"
return "5e53486c-cf57-477e-ba2a-cb02dc828e1c"
@pytest.fixture
async def setup_test_user(test_user_id):
"""Create test user in database before tests."""
from backend.data.user import get_or_create_user
# Create the test user in the database using JWT token format
user_data = {
"sub": test_user_id,
"email": "test@example.com",
"user_metadata": {"name": "Test User"},
}
await get_or_create_user(user_data)
return test_user_id
@pytest.fixture
async def setup_admin_user(admin_user_id):
"""Create admin user in database before tests."""
from backend.data.user import get_or_create_user
# Create the admin user in the database using JWT token format
user_data = {
"sub": admin_user_id,
"email": "test-admin@example.com",
"user_metadata": {"name": "Test Admin"},
}
await get_or_create_user(user_data)
return admin_user_id
@pytest.fixture
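
As a hedged illustration of how these setup fixtures are meant to be consumed, a test can simply depend on setup_test_user to guarantee the user row exists before the test body runs (the test name and assertion here are illustrative):

import pytest


@pytest.mark.asyncio
async def test_requires_persisted_user(setup_test_user: str) -> None:
    # The fixture creates the user via get_or_create_user and returns its ID.
    user_id = setup_test_user
    assert user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"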

View File

@@ -1,13 +1,14 @@
import asyncio
from typing import Dict, Set
from fastapi import WebSocket
from backend.api.model import NotificationPayload, WSMessage, WSMethod
from backend.data.execution import (
ExecutionEventType,
GraphExecutionEvent,
NodeExecutionEvent,
)
from backend.server.model import WSMessage, WSMethod
_EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = {
ExecutionEventType.GRAPH_EXEC_UPDATE: WSMethod.GRAPH_EXECUTION_EVENT,
@@ -19,15 +20,24 @@ class ConnectionManager:
def __init__(self):
self.active_connections: Set[WebSocket] = set()
self.subscriptions: Dict[str, Set[WebSocket]] = {}
self.user_connections: Dict[str, Set[WebSocket]] = {}
async def connect_socket(self, websocket: WebSocket):
async def connect_socket(self, websocket: WebSocket, *, user_id: str):
await websocket.accept()
self.active_connections.add(websocket)
if user_id not in self.user_connections:
self.user_connections[user_id] = set()
self.user_connections[user_id].add(websocket)
def disconnect_socket(self, websocket: WebSocket):
self.active_connections.remove(websocket)
def disconnect_socket(self, websocket: WebSocket, *, user_id: str):
self.active_connections.discard(websocket)
for subscribers in self.subscriptions.values():
subscribers.discard(websocket)
user_conns = self.user_connections.get(user_id)
if user_conns is not None:
user_conns.discard(websocket)
if not user_conns:
self.user_connections.pop(user_id, None)
async def subscribe_graph_exec(
self, *, user_id: str, graph_exec_id: str, websocket: WebSocket
@@ -92,6 +102,26 @@ class ConnectionManager:
return n_sent
async def send_notification(
self, *, user_id: str, payload: NotificationPayload
) -> int:
"""Send a notification to all websocket connections belonging to a user."""
message = WSMessage(
method=WSMethod.NOTIFICATION,
data=payload.model_dump(),
).model_dump_json()
connections = tuple(self.user_connections.get(user_id, set()))
if not connections:
return 0
await asyncio.gather(
*(connection.send_text(message) for connection in connections),
return_exceptions=True,
)
return len(connections)
async def _subscribe(self, channel_key: str, websocket: WebSocket) -> str:
if channel_key not in self.subscriptions:
self.subscriptions[channel_key] = set()
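
A minimal usage sketch for the new send_notification method, assuming a shared ConnectionManager instance; the event name is a placeholder, while the NotificationPayload fields mirror the test below:

from backend.api.conn_manager import ConnectionManager
from backend.api.model import NotificationPayload


async def notify_user(manager: ConnectionManager, user_id: str) -> None:
    # Fans the message out to every websocket the user currently has open.
    n_sent = await manager.send_notification(
        user_id=user_id,
        payload=NotificationPayload(type="info", event="agent_run_completed"),
    )
    if n_sent == 0:
        # The user had no open connections; the event was dropped, so a real
        # caller might fall back to another delivery channel here.
        pass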

View File

@@ -4,13 +4,13 @@ from unittest.mock import AsyncMock
import pytest
from fastapi import WebSocket
from backend.api.conn_manager import ConnectionManager
from backend.api.model import NotificationPayload, WSMessage, WSMethod
from backend.data.execution import (
ExecutionStatus,
GraphExecutionEvent,
NodeExecutionEvent,
)
from backend.server.conn_manager import ConnectionManager
from backend.server.model import WSMessage, WSMethod
@pytest.fixture
@@ -29,8 +29,9 @@ def mock_websocket() -> AsyncMock:
async def test_connect(
connection_manager: ConnectionManager, mock_websocket: AsyncMock
) -> None:
await connection_manager.connect_socket(mock_websocket)
await connection_manager.connect_socket(mock_websocket, user_id="user-1")
assert mock_websocket in connection_manager.active_connections
assert mock_websocket in connection_manager.user_connections["user-1"]
mock_websocket.accept.assert_called_once()
@@ -39,11 +40,13 @@ def test_disconnect(
) -> None:
connection_manager.active_connections.add(mock_websocket)
connection_manager.subscriptions["test_channel_42"] = {mock_websocket}
connection_manager.user_connections["user-1"] = {mock_websocket}
connection_manager.disconnect_socket(mock_websocket)
connection_manager.disconnect_socket(mock_websocket, user_id="user-1")
assert mock_websocket not in connection_manager.active_connections
assert mock_websocket not in connection_manager.subscriptions["test_channel_42"]
assert "user-1" not in connection_manager.user_connections
@pytest.mark.asyncio
@@ -88,6 +91,7 @@ async def test_send_graph_execution_result(
user_id="user-1",
graph_id="test_graph",
graph_version=1,
preset_id=None,
status=ExecutionStatus.COMPLETED,
started_at=datetime.now(tz=timezone.utc),
ended_at=datetime.now(tz=timezone.utc),
@@ -101,6 +105,8 @@ async def test_send_graph_execution_result(
"input_1": "some input value :)",
"input_2": "some *other* input value",
},
credential_inputs=None,
nodes_input_masks=None,
outputs={
"the_output": ["some output value"],
"other_output": ["sike there was another output"],
@@ -204,3 +210,22 @@ async def test_send_execution_result_no_subscribers(
await connection_manager.send_execution_update(result)
mock_websocket.send_text.assert_not_called()
@pytest.mark.asyncio
async def test_send_notification(
connection_manager: ConnectionManager, mock_websocket: AsyncMock
) -> None:
connection_manager.user_connections["user-1"] = {mock_websocket}
await connection_manager.send_notification(
user_id="user-1", payload=NotificationPayload(type="info", event="hey")
)
mock_websocket.send_text.assert_called_once()
sent_message = mock_websocket.send_text.call_args[0][0]
expected_message = WSMessage(
method=WSMethod.NOTIFICATION,
data={"type": "info", "event": "hey"},
).model_dump_json()
assert sent_message == expected_message

View File

@@ -0,0 +1,25 @@
from fastapi import FastAPI
from backend.api.middleware.security import SecurityHeadersMiddleware
from backend.monitoring.instrumentation import instrument_fastapi
from .v1.routes import v1_router
external_api = FastAPI(
title="AutoGPT External API",
description="External API for AutoGPT integrations",
docs_url="/docs",
version="1.0",
)
external_api.add_middleware(SecurityHeadersMiddleware)
external_api.include_router(v1_router, prefix="/v1")
# Add Prometheus instrumentation
instrument_fastapi(
external_api,
service_name="external-api",
expose_endpoint=True,
endpoint="/metrics",
include_in_schema=True,
)
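
A quick smoke check that the instrumentation above is wired in; the host and port are placeholders for wherever the external API is served:

import requests

resp = requests.get("http://localhost:8006/metrics")  # port is an assumption
assert resp.status_code == 200
assert "# HELP" in resp.text  # Prometheus exposition format starts with HELP/TYPE lines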

View File

@@ -0,0 +1,107 @@
from fastapi import HTTPException, Security, status
from fastapi.security import APIKeyHeader, HTTPAuthorizationCredentials, HTTPBearer
from prisma.enums import APIKeyPermission
from backend.data.auth.api_key import APIKeyInfo, validate_api_key
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.auth.oauth import (
InvalidClientError,
InvalidTokenError,
OAuthAccessTokenInfo,
validate_access_token,
)
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
bearer_auth = HTTPBearer(auto_error=False)
async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo:
"""Middleware for API key authentication only"""
if api_key is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="Missing API key"
)
api_key_obj = await validate_api_key(api_key)
if not api_key_obj:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key"
)
return api_key_obj
async def require_access_token(
bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth),
) -> OAuthAccessTokenInfo:
"""Middleware for OAuth access token authentication only"""
if bearer is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Missing Authorization header",
)
try:
token_info, _ = await validate_access_token(bearer.credentials)
except (InvalidClientError, InvalidTokenError) as e:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))
return token_info
async def require_auth(
api_key: str | None = Security(api_key_header),
bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth),
) -> APIAuthorizationInfo:
"""
Unified authentication middleware supporting both API keys and OAuth tokens.
Supports two authentication methods, which are checked in order:
1. X-API-Key header (existing API key authentication)
2. Authorization: Bearer <token> header (OAuth access token)
Returns:
APIAuthorizationInfo: base class of both APIKeyInfo and OAuthAccessTokenInfo.
"""
# Try API key first
if api_key is not None:
api_key_info = await validate_api_key(api_key)
if api_key_info:
return api_key_info
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key"
)
# Try OAuth bearer token
if bearer is not None:
try:
token_info, _ = await validate_access_token(bearer.credentials)
return token_info
except (InvalidClientError, InvalidTokenError) as e:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))
# No credentials provided
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Missing authentication. Provide API key or access token.",
)
def require_permission(permission: APIKeyPermission):
"""
Dependency function for checking specific permissions
(works with API keys and OAuth tokens)
"""
async def check_permission(
auth: APIAuthorizationInfo = Security(require_auth),
) -> APIAuthorizationInfo:
if permission not in auth.scopes:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=f"Missing required permission: {permission.value}",
)
return auth
return check_permission
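
As a sketch of how this middleware is consumed (the route path and handler are illustrative; READ_STORE is one of the permissions used elsewhere in this PR):

from fastapi import APIRouter, Security
from prisma.enums import APIKeyPermission

from backend.api.external.middleware import require_permission
from backend.data.auth.base import APIAuthorizationInfo

router = APIRouter()


@router.get("/whoami")
async def whoami(
    auth: APIAuthorizationInfo = Security(
        require_permission(APIKeyPermission.READ_STORE)
    ),
) -> dict[str, str]:
    # Accepts either an X-API-Key header or an OAuth bearer token; both
    # resolve to an APIAuthorizationInfo carrying user_id and scopes.
    return {"user_id": auth.user_id}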

View File

@@ -0,0 +1,655 @@
"""
External API endpoints for integrations and credentials.
This module provides endpoints for external applications (like Autopilot) to:
- Initiate OAuth flows with custom callback URLs
- Complete OAuth flows by exchanging authorization codes
- Create API key, user/password, and host-scoped credentials
- List and manage user credentials
"""
import logging
from typing import TYPE_CHECKING, Annotated, Any, Literal, Optional, Union
from urllib.parse import urlparse
from fastapi import APIRouter, Body, HTTPException, Path, Security, status
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field, SecretStr
from backend.api.external.middleware import require_permission
from backend.api.features.integrations.models import get_all_provider_names
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.model import (
APIKeyCredentials,
Credentials,
CredentialsType,
HostScopedCredentials,
OAuth2Credentials,
UserPasswordCredentials,
)
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.integrations.oauth import BaseOAuthHandler
logger = logging.getLogger(__name__)
settings = Settings()
creds_manager = IntegrationCredentialsManager()
integrations_router = APIRouter(prefix="/integrations", tags=["integrations"])
# ==================== Request/Response Models ==================== #
class OAuthInitiateRequest(BaseModel):
"""Request model for initiating an OAuth flow."""
callback_url: str = Field(
..., description="The external app's callback URL for OAuth redirect"
)
scopes: list[str] = Field(
default_factory=list, description="OAuth scopes to request"
)
state_metadata: dict[str, Any] = Field(
default_factory=dict,
description="Arbitrary metadata to echo back on completion",
)
class OAuthInitiateResponse(BaseModel):
"""Response model for OAuth initiation."""
login_url: str = Field(..., description="URL to redirect user for OAuth consent")
state_token: str = Field(..., description="State token for CSRF protection")
expires_at: int = Field(
..., description="Unix timestamp when the state token expires"
)
class OAuthCompleteRequest(BaseModel):
"""Request model for completing an OAuth flow."""
code: str = Field(..., description="Authorization code from OAuth provider")
state_token: str = Field(..., description="State token from initiate request")
class OAuthCompleteResponse(BaseModel):
"""Response model for OAuth completion."""
credentials_id: str = Field(..., description="ID of the stored credentials")
provider: str = Field(..., description="Provider name")
type: str = Field(..., description="Credential type (oauth2)")
title: Optional[str] = Field(None, description="Credential title")
scopes: list[str] = Field(default_factory=list, description="Granted scopes")
username: Optional[str] = Field(None, description="Username from provider")
state_metadata: dict[str, Any] = Field(
default_factory=dict, description="Echoed metadata from initiate request"
)
class CredentialSummary(BaseModel):
"""Summary of a credential without sensitive data."""
id: str
provider: str
type: CredentialsType
title: Optional[str] = None
scopes: Optional[list[str]] = None
username: Optional[str] = None
host: Optional[str] = None
class ProviderInfo(BaseModel):
"""Information about an integration provider."""
name: str
supports_oauth: bool = False
supports_api_key: bool = False
supports_user_password: bool = False
supports_host_scoped: bool = False
default_scopes: list[str] = Field(default_factory=list)
# ==================== Credential Creation Models ==================== #
class CreateAPIKeyCredentialRequest(BaseModel):
"""Request model for creating API key credentials."""
type: Literal["api_key"] = "api_key"
api_key: str = Field(..., description="The API key")
title: str = Field(..., description="A name for this credential")
expires_at: Optional[int] = Field(
None, description="Unix timestamp when the API key expires"
)
class CreateUserPasswordCredentialRequest(BaseModel):
"""Request model for creating username/password credentials."""
type: Literal["user_password"] = "user_password"
username: str = Field(..., description="Username")
password: str = Field(..., description="Password")
title: str = Field(..., description="A name for this credential")
class CreateHostScopedCredentialRequest(BaseModel):
"""Request model for creating host-scoped credentials."""
type: Literal["host_scoped"] = "host_scoped"
host: str = Field(..., description="Host/domain pattern to match")
headers: dict[str, str] = Field(..., description="Headers to include in requests")
title: str = Field(..., description="A name for this credential")
# Union type for credential creation
CreateCredentialRequest = Annotated[
CreateAPIKeyCredentialRequest
| CreateUserPasswordCredentialRequest
| CreateHostScopedCredentialRequest,
Field(discriminator="type"),
]
class CreateCredentialResponse(BaseModel):
"""Response model for credential creation."""
id: str
provider: str
type: CredentialsType
title: Optional[str] = None
# ==================== Helper Functions ==================== #
def validate_callback_url(callback_url: str) -> bool:
"""Validate that the callback URL is from an allowed origin."""
allowed_origins = settings.config.external_oauth_callback_origins
try:
parsed = urlparse(callback_url)
callback_origin = f"{parsed.scheme}://{parsed.netloc}"
for allowed in allowed_origins:
# Simple origin matching
if callback_origin == allowed:
return True
# Allow localhost on any port in development (matched via the parsed hostname, not a raw string comparison)
if parsed.hostname == "localhost":
for allowed in allowed_origins:
allowed_parsed = urlparse(allowed)
if allowed_parsed.hostname == "localhost":
return True
return False
except Exception:
return False
def _get_oauth_handler_for_external(
provider_name: str, redirect_uri: str
) -> "BaseOAuthHandler":
"""Get an OAuth handler configured with an external redirect URI."""
# Ensure blocks are loaded so SDK providers are available
try:
from backend.blocks import load_all_blocks
load_all_blocks()
except Exception as e:
logger.warning(f"Failed to load blocks: {e}")
if provider_name not in HANDLERS_BY_NAME:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Provider '{provider_name}' does not support OAuth",
)
# Check if this provider has custom OAuth credentials
oauth_credentials = CREDENTIALS_BY_PROVIDER.get(provider_name)
if oauth_credentials and not oauth_credentials.use_secrets:
import os
client_id = (
os.getenv(oauth_credentials.client_id_env_var)
if oauth_credentials.client_id_env_var
else None
)
client_secret = (
os.getenv(oauth_credentials.client_secret_env_var)
if oauth_credentials.client_secret_env_var
else None
)
else:
client_id = getattr(settings.secrets, f"{provider_name}_client_id", None)
client_secret = getattr(
settings.secrets, f"{provider_name}_client_secret", None
)
if not (client_id and client_secret):
logger.error(f"Attempt to use unconfigured {provider_name} OAuth integration")
raise HTTPException(
status_code=status.HTTP_501_NOT_IMPLEMENTED,
detail={
"message": f"Integration with provider '{provider_name}' is not configured.",
"hint": "Set client ID and secret in the application's deployment environment",
},
)
handler_class = HANDLERS_BY_NAME[provider_name]
return handler_class(
client_id=client_id,
client_secret=client_secret,
redirect_uri=redirect_uri,
)
# ==================== Endpoints ==================== #
@integrations_router.get("/providers", response_model=list[ProviderInfo])
async def list_providers(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[ProviderInfo]:
"""
List all available integration providers.
Returns a list of all providers with their supported credential types.
Most providers support API key credentials, and some also support OAuth.
"""
# Ensure blocks are loaded
try:
from backend.blocks import load_all_blocks
load_all_blocks()
except Exception as e:
logger.warning(f"Failed to load blocks: {e}")
from backend.sdk.registry import AutoRegistry
providers = []
for name in get_all_provider_names():
supports_oauth = name in HANDLERS_BY_NAME
handler_class = HANDLERS_BY_NAME.get(name)
default_scopes = (
getattr(handler_class, "DEFAULT_SCOPES", []) if handler_class else []
)
# Check if provider has specific auth types from SDK registration
sdk_provider = AutoRegistry.get_provider(name)
if sdk_provider and sdk_provider.supported_auth_types:
supports_api_key = "api_key" in sdk_provider.supported_auth_types
supports_user_password = (
"user_password" in sdk_provider.supported_auth_types
)
supports_host_scoped = "host_scoped" in sdk_provider.supported_auth_types
else:
# Fallback for legacy providers
supports_api_key = True # All providers can accept API keys
supports_user_password = name in ("smtp",)
supports_host_scoped = name == "http"
providers.append(
ProviderInfo(
name=name,
supports_oauth=supports_oauth,
supports_api_key=supports_api_key,
supports_user_password=supports_user_password,
supports_host_scoped=supports_host_scoped,
default_scopes=default_scopes,
)
)
return providers
@integrations_router.post(
"/{provider}/oauth/initiate",
response_model=OAuthInitiateResponse,
summary="Initiate OAuth flow",
)
async def initiate_oauth(
provider: Annotated[str, Path(title="The OAuth provider")],
request: OAuthInitiateRequest,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> OAuthInitiateResponse:
"""
Initiate an OAuth flow for an external application.
This endpoint allows external apps to start an OAuth flow with a custom
callback URL. The callback URL must be from an allowed origin configured
in the platform settings.
Returns a login URL to redirect the user to, along with a state token
for CSRF protection.
"""
# Validate callback URL
if not validate_callback_url(request.callback_url):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=(
f"Callback URL origin is not allowed. "
f"Allowed origins: {settings.config.external_oauth_callback_origins}"
),
)
# Validate provider
try:
provider_name = ProviderName(provider)
except ValueError:
# Check if it's a dynamically registered provider
if provider not in HANDLERS_BY_NAME:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Provider '{provider}' not found",
)
provider_name = provider
# Get OAuth handler with external callback URL
handler = _get_oauth_handler_for_external(
provider if isinstance(provider_name, str) else provider_name.value,
request.callback_url,
)
# Store state token with external flow metadata
# Note: initiated_by_api_key_id is only available for API key auth, not OAuth
api_key_id = getattr(auth, "id", None) if auth.type == "api_key" else None
state_token, code_challenge = await creds_manager.store.store_state_token(
user_id=auth.user_id,
provider=provider if isinstance(provider_name, str) else provider_name.value,
scopes=request.scopes,
callback_url=request.callback_url,
state_metadata=request.state_metadata,
initiated_by_api_key_id=api_key_id,
)
# Build login URL
login_url = handler.get_login_url(
request.scopes, state_token, code_challenge=code_challenge
)
# Calculate expiration (10 minutes from now)
from datetime import datetime, timedelta, timezone
expires_at = int((datetime.now(timezone.utc) + timedelta(minutes=10)).timestamp())
return OAuthInitiateResponse(
login_url=login_url,
state_token=state_token,
expires_at=expires_at,
)
@integrations_router.post(
"/{provider}/oauth/complete",
response_model=OAuthCompleteResponse,
summary="Complete OAuth flow",
)
async def complete_oauth(
provider: Annotated[str, Path(title="The OAuth provider")],
request: OAuthCompleteRequest,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> OAuthCompleteResponse:
"""
Complete an OAuth flow by exchanging the authorization code for tokens.
This endpoint should be called after the user has authorized the application
and been redirected back to the external app's callback URL with an
authorization code.
"""
# Verify state token
valid_state = await creds_manager.store.verify_state_token(
auth.user_id, request.state_token, provider
)
if not valid_state:
logger.warning(f"Invalid or expired state token for provider {provider}")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid or expired state token",
)
# Verify this is an external flow (callback_url must be set)
if not valid_state.callback_url:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="State token was not created for external OAuth flow",
)
# Get OAuth handler with the original callback URL
handler = _get_oauth_handler_for_external(provider, valid_state.callback_url)
try:
scopes = valid_state.scopes
scopes = handler.handle_default_scopes(scopes)
credentials = await handler.exchange_code_for_tokens(
request.code, scopes, valid_state.code_verifier
)
# Handle Linear's space-separated scopes
if len(credentials.scopes) == 1 and " " in credentials.scopes[0]:
credentials.scopes = credentials.scopes[0].split(" ")
# Check scope mismatch
if not set(scopes).issubset(set(credentials.scopes)):
logger.warning(
f"Granted scopes {credentials.scopes} for provider {provider} "
f"do not include all requested scopes {scopes}"
)
except Exception as e:
logger.error(f"OAuth2 Code->Token exchange failed for provider {provider}: {e}")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"OAuth2 callback failed to exchange code for tokens: {str(e)}",
)
# Store credentials
await creds_manager.create(auth.user_id, credentials)
logger.info(f"Successfully completed external OAuth for provider {provider}")
return OAuthCompleteResponse(
credentials_id=credentials.id,
provider=credentials.provider,
type=credentials.type,
title=credentials.title,
scopes=credentials.scopes,
username=credentials.username,
state_metadata=valid_state.state_metadata,
)
@integrations_router.get("/credentials", response_model=list[CredentialSummary])
async def list_credentials(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[CredentialSummary]:
"""
List all credentials for the authenticated user.
Returns metadata about each credential without exposing sensitive tokens.
"""
credentials = await creds_manager.store.get_all_creds(auth.user_id)
return [
CredentialSummary(
id=cred.id,
provider=cred.provider,
type=cred.type,
title=cred.title,
scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
username=cred.username if isinstance(cred, OAuth2Credentials) else None,
host=cred.host if isinstance(cred, HostScopedCredentials) else None,
)
for cred in credentials
]
@integrations_router.get(
"/{provider}/credentials", response_model=list[CredentialSummary]
)
async def list_credentials_by_provider(
provider: Annotated[str, Path(title="The provider to list credentials for")],
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_INTEGRATIONS)
),
) -> list[CredentialSummary]:
"""
List credentials for a specific provider.
"""
credentials = await creds_manager.store.get_creds_by_provider(
auth.user_id, provider
)
return [
CredentialSummary(
id=cred.id,
provider=cred.provider,
type=cred.type,
title=cred.title,
scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
username=cred.username if isinstance(cred, OAuth2Credentials) else None,
host=cred.host if isinstance(cred, HostScopedCredentials) else None,
)
for cred in credentials
]
@integrations_router.post(
"/{provider}/credentials",
response_model=CreateCredentialResponse,
status_code=status.HTTP_201_CREATED,
summary="Create credentials",
)
async def create_credential(
provider: Annotated[str, Path(title="The provider to create credentials for")],
request: Union[
CreateAPIKeyCredentialRequest,
CreateUserPasswordCredentialRequest,
CreateHostScopedCredentialRequest,
] = Body(..., discriminator="type"),
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.MANAGE_INTEGRATIONS)
),
) -> CreateCredentialResponse:
"""
Create non-OAuth credentials for a provider.
Supports creating:
- API key credentials (type: "api_key")
- Username/password credentials (type: "user_password")
- Host-scoped credentials (type: "host_scoped")
For OAuth credentials, use the OAuth initiate/complete flow instead.
"""
# Validate provider exists
all_providers = get_all_provider_names()
if provider not in all_providers:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Provider '{provider}' not found",
)
# Create the appropriate credential type
credentials: Credentials
if request.type == "api_key":
credentials = APIKeyCredentials(
provider=provider,
api_key=SecretStr(request.api_key),
title=request.title,
expires_at=request.expires_at,
)
elif request.type == "user_password":
credentials = UserPasswordCredentials(
provider=provider,
username=SecretStr(request.username),
password=SecretStr(request.password),
title=request.title,
)
elif request.type == "host_scoped":
# Convert string headers to SecretStr
secret_headers = {k: SecretStr(v) for k, v in request.headers.items()}
credentials = HostScopedCredentials(
provider=provider,
host=request.host,
headers=secret_headers,
title=request.title,
)
else:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Unsupported credential type: {request.type}",
)
# Store credentials
try:
await creds_manager.create(auth.user_id, credentials)
except Exception as e:
logger.error(f"Failed to store credentials: {e}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to store credentials: {str(e)}",
)
logger.info(f"Created {request.type} credentials for provider {provider}")
return CreateCredentialResponse(
id=credentials.id,
provider=provider,
type=credentials.type,
title=credentials.title,
)
class DeleteCredentialResponse(BaseModel):
"""Response model for deleting a credential."""
deleted: bool = Field(..., description="Whether the credential was deleted")
credentials_id: str = Field(..., description="ID of the deleted credential")
@integrations_router.delete(
"/{provider}/credentials/{cred_id}",
response_model=DeleteCredentialResponse,
)
async def delete_credential(
provider: Annotated[str, Path(title="The provider")],
cred_id: Annotated[str, Path(title="The credential ID to delete")],
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.DELETE_INTEGRATIONS)
),
) -> DeleteCredentialResponse:
"""
Delete a credential.
Note: This does not revoke the tokens with the provider. For full cleanup,
use the main API's delete endpoint which handles webhook cleanup and
token revocation.
"""
creds = await creds_manager.store.get_creds_by_id(auth.user_id, cred_id)
if not creds:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found"
)
if creds.provider != provider:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Credentials do not match the specified provider",
)
await creds_manager.delete(auth.user_id, cred_id)
return DeleteCredentialResponse(deleted=True, credentials_id=cred_id)
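
A hedged client-side walkthrough of the two-step flow above, using the requests library; the base URL, API key, provider, and callback URL are placeholders, while the endpoint paths and JSON fields follow the models in this file:

import requests

BASE_URL = "https://platform.example.com/external-api/v1"  # placeholder
HEADERS = {"X-API-Key": "your-api-key"}  # placeholder

# Step 1: initiate the flow and get a consent URL plus a CSRF state token.
init = requests.post(
    f"{BASE_URL}/integrations/github/oauth/initiate",  # provider is illustrative
    headers=HEADERS,
    json={
        "callback_url": "https://myapp.example.com/oauth/callback",
        "scopes": ["repo"],
        "state_metadata": {"session": "abc123"},
    },
).json()
# Redirect the user to init["login_url"]; the provider redirects back to the
# callback URL with an authorization code.

# Step 2: exchange the code; the platform stores the resulting credentials.
done = requests.post(
    f"{BASE_URL}/integrations/github/oauth/complete",
    headers=HEADERS,
    json={"code": "<code-from-callback>", "state_token": init["state_token"]},
).json()
print(done["credentials_id"], done["state_metadata"])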

View File

@@ -0,0 +1,328 @@
import logging
import urllib.parse
from collections import defaultdict
from typing import Annotated, Any, Literal, Optional, Sequence
from fastapi import APIRouter, Body, HTTPException, Security
from prisma.enums import AgentExecutionStatus, APIKeyPermission
from pydantic import BaseModel, Field
from typing_extensions import TypedDict
import backend.api.features.store.cache as store_cache
import backend.api.features.store.model as store_model
import backend.data.block
from backend.api.external.middleware import require_permission
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data import user as user_db
from backend.data.auth.base import APIAuthorizationInfo
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.executor.utils import add_graph_execution
from backend.util.settings import Settings
from .integrations import integrations_router
from .tools import tools_router
settings = Settings()
logger = logging.getLogger(__name__)
v1_router = APIRouter()
v1_router.include_router(integrations_router)
v1_router.include_router(tools_router)
class UserInfoResponse(BaseModel):
id: str
name: Optional[str]
email: str
timezone: str = Field(
description="The user's last known timezone (e.g. 'Europe/Amsterdam'), "
"or 'not-set' if not set"
)
@v1_router.get(
path="/me",
tags=["user", "meta"],
)
async def get_user_info(
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.IDENTITY)
),
) -> UserInfoResponse:
user = await user_db.get_user_by_id(auth.user_id)
return UserInfoResponse(
id=user.id,
name=user.name,
email=user.email,
timezone=user.timezone,
)
@v1_router.get(
path="/blocks",
tags=["blocks"],
dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))],
)
async def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in backend.data.block.get_blocks().values()]
return [b.to_dict() for b in blocks if not b.disabled]
@v1_router.post(
path="/blocks/{block_id}/execute",
tags=["blocks"],
dependencies=[Security(require_permission(APIKeyPermission.EXECUTE_BLOCK))],
)
async def execute_graph_block(
block_id: str,
data: BlockInput,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.EXECUTE_BLOCK)
),
) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
if not obj:
raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.")
output = defaultdict(list)
async for name, data in obj.execute(data):
output[name].append(data)
return output
@v1_router.post(
path="/graphs/{graph_id}/execute/{graph_version}",
tags=["graphs"],
)
async def execute_graph(
graph_id: str,
graph_version: int,
node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.EXECUTE_GRAPH)
),
) -> dict[str, Any]:
try:
graph_exec = await add_graph_execution(
graph_id=graph_id,
user_id=auth.user_id,
inputs=node_input,
graph_version=graph_version,
)
return {"id": graph_exec.id}
except Exception as e:
msg = str(e).encode().decode("unicode_escape")
raise HTTPException(status_code=400, detail=msg)
class ExecutionNode(TypedDict):
node_id: str
input: Any
output: dict[str, Any]
class GraphExecutionResult(TypedDict):
execution_id: str
status: str
nodes: list[ExecutionNode]
output: Optional[list[dict[str, str]]]
@v1_router.get(
path="/graphs/{graph_id}/executions/{graph_exec_id}/results",
tags=["graphs"],
)
async def get_graph_execution_results(
graph_id: str,
graph_exec_id: str,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.READ_GRAPH)
),
) -> GraphExecutionResult:
graph_exec = await execution_db.get_graph_execution(
user_id=auth.user_id,
execution_id=graph_exec_id,
include_node_executions=True,
)
if not graph_exec:
raise HTTPException(
status_code=404, detail=f"Graph execution #{graph_exec_id} not found."
)
if not await graph_db.get_graph(
graph_id=graph_exec.graph_id,
version=graph_exec.graph_version,
user_id=auth.user_id,
):
raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.")
return GraphExecutionResult(
execution_id=graph_exec_id,
status=graph_exec.status.value,
nodes=[
ExecutionNode(
node_id=node_exec.node_id,
input=node_exec.input_data.get("value", node_exec.input_data),
output={k: v for k, v in node_exec.output_data.items()},
)
for node_exec in graph_exec.node_executions
],
output=(
[
{name: value}
for name, values in graph_exec.outputs.items()
for value in values
]
if graph_exec.status == AgentExecutionStatus.COMPLETED
else None
),
)
##############################################
############### Store Endpoints ##############
##############################################
@v1_router.get(
path="/store/agents",
tags=["store"],
dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))],
response_model=store_model.StoreAgentsResponse,
)
async def get_store_agents(
featured: bool = False,
creator: str | None = None,
sorted_by: Literal["rating", "runs", "name", "updated_at"] | None = None,
search_query: str | None = None,
category: str | None = None,
page: int = 1,
page_size: int = 20,
) -> store_model.StoreAgentsResponse:
"""
Get a paginated list of agents from the store with optional filtering and sorting.
Args:
featured: Filter to only show featured agents
creator: Filter agents by creator username
sorted_by: Sort agents by "runs", "rating", "name", or "updated_at"
search_query: Search agents by name, subheading and description
category: Filter agents by category
page: Page number for pagination (default 1)
page_size: Number of agents per page (default 20)
Returns:
StoreAgentsResponse: Paginated list of agents matching the filters
"""
if page < 1:
raise HTTPException(status_code=422, detail="Page must be greater than 0")
if page_size < 1:
raise HTTPException(status_code=422, detail="Page size must be greater than 0")
agents = await store_cache._get_cached_store_agents(
featured=featured,
creator=creator,
sorted_by=sorted_by,
search_query=search_query,
category=category,
page=page,
page_size=page_size,
)
return agents
@v1_router.get(
path="/store/agents/{username}/{agent_name}",
tags=["store"],
dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))],
response_model=store_model.StoreAgentDetails,
)
async def get_store_agent(
username: str,
agent_name: str,
) -> store_model.StoreAgentDetails:
"""
Get details of a specific store agent by username and agent name.
Args:
username: Creator's username
agent_name: Name/slug of the agent
Returns:
StoreAgentDetails: Detailed information about the agent
"""
username = urllib.parse.unquote(username).lower()
agent_name = urllib.parse.unquote(agent_name).lower()
agent = await store_cache._get_cached_agent_details(
username=username, agent_name=agent_name
)
return agent
@v1_router.get(
path="/store/creators",
tags=["store"],
dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))],
response_model=store_model.CreatorsResponse,
)
async def get_store_creators(
featured: bool = False,
search_query: str | None = None,
sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None = None,
page: int = 1,
page_size: int = 20,
) -> store_model.CreatorsResponse:
"""
Get a paginated list of store creators with optional filtering and sorting.
Args:
featured: Filter to only show featured creators
search_query: Search creators by profile description
sorted_by: Sort by "agent_rating", "agent_runs", or "num_agents"
page: Page number for pagination (default 1)
page_size: Number of creators per page (default 20)
Returns:
CreatorsResponse: Paginated list of creators matching the filters
"""
if page < 1:
raise HTTPException(status_code=422, detail="Page must be greater than 0")
if page_size < 1:
raise HTTPException(status_code=422, detail="Page size must be greater than 0")
creators = await store_cache._get_cached_store_creators(
featured=featured,
search_query=search_query,
sorted_by=sorted_by,
page=page,
page_size=page_size,
)
return creators
@v1_router.get(
path="/store/creators/{username}",
tags=["store"],
dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))],
response_model=store_model.CreatorDetails,
)
async def get_store_creator(
username: str,
) -> store_model.CreatorDetails:
"""
Get details of a specific store creator by username.
Args:
username: Creator's username
Returns:
CreatorDetails: Detailed information about the creator
"""
username = urllib.parse.unquote(username).lower()
creator = await store_cache._get_cached_creator_details(username=username)
return creator
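
A rough client sketch of the execute-then-poll pattern for the graph endpoints above; the graph ID, base URL, and API key are placeholders, and a production client would also handle failure statuses:

import time

import requests

BASE_URL = "https://platform.example.com/external-api/v1"  # placeholder
HEADERS = {"X-API-Key": "your-api-key"}  # placeholder
GRAPH_ID, GRAPH_VERSION = "<graph-id>", 1

# node_input is an embedded body field, so it is nested under its own key.
exec_id = requests.post(
    f"{BASE_URL}/graphs/{GRAPH_ID}/execute/{GRAPH_VERSION}",
    headers=HEADERS,
    json={"node_input": {"Business Name": "Acme", "Address": "USA"}},
).json()["id"]

# `output` stays null until the run reaches COMPLETED.
while True:
    result = requests.get(
        f"{BASE_URL}/graphs/{GRAPH_ID}/executions/{exec_id}/results",
        headers=HEADERS,
    ).json()
    if result["status"] == "COMPLETED":
        break
    time.sleep(2)
print(result["output"])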

View File

@@ -0,0 +1,152 @@
"""External API routes for chat tools - stateless HTTP endpoints.
Note: These endpoints use ephemeral sessions that are not persisted to Redis.
As a result, session-based rate limiting (max_agent_runs, max_agent_schedules)
is not enforced for external API calls. Each request creates a fresh session
with zeroed counters. Rate limiting for external API consumers should be
handled separately (e.g., via API key quotas).
"""
import logging
from typing import Any
from fastapi import APIRouter, Security
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.api.external.middleware import require_permission
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import find_agent_tool, run_agent_tool
from backend.api.features.chat.tools.models import ToolResponseBase
from backend.data.auth.base import APIAuthorizationInfo
logger = logging.getLogger(__name__)
tools_router = APIRouter(prefix="/tools", tags=["tools"])
# Note: We use Security() as a function parameter dependency (auth: APIAuthorizationInfo = Security(...))
# rather than in the decorator's dependencies= list. This avoids duplicate permission checks
# while still enforcing auth AND giving us access to auth for extracting user_id.
# Request models
class FindAgentRequest(BaseModel):
query: str = Field(..., description="Search query for finding agents")
class RunAgentRequest(BaseModel):
"""Request to run or schedule an agent.
The tool automatically handles the setup flow:
- First call returns available inputs so user can decide what values to use
- Returns missing credentials if user needs to configure them
- Executes when inputs are provided OR use_defaults=true
- Schedules execution if schedule_name and cron are provided
"""
username_agent_slug: str = Field(
...,
description="The marketplace agent slug (e.g., 'username/agent-name')",
)
inputs: dict[str, Any] = Field(
default_factory=dict,
description="Dictionary of input values for the agent",
)
use_defaults: bool = Field(
default=False,
description="Set to true to run with default values (user must confirm)",
)
schedule_name: str | None = Field(
None,
description="Name for scheduled execution (triggers scheduling mode)",
)
cron: str | None = Field(
None,
description="Cron expression (5 fields: minute hour day month weekday)",
)
timezone: str = Field(
default="UTC",
description="IANA timezone (e.g., 'America/New_York', 'UTC')",
)
def _create_ephemeral_session(user_id: str | None) -> ChatSession:
"""Create an ephemeral session for stateless API requests."""
return ChatSession.new(user_id)
@tools_router.post(
path="/find-agent",
)
async def find_agent(
request: FindAgentRequest,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.USE_TOOLS)
),
) -> dict[str, Any]:
"""
Search for agents in the marketplace based on capabilities and user needs.
Args:
request: Search query for finding agents
Returns:
List of matching agents or no results response
"""
session = _create_ephemeral_session(auth.user_id)
result = await find_agent_tool._execute(
user_id=auth.user_id,
session=session,
query=request.query,
)
return _response_to_dict(result)
@tools_router.post(
path="/run-agent",
)
async def run_agent(
request: RunAgentRequest,
auth: APIAuthorizationInfo = Security(
require_permission(APIKeyPermission.USE_TOOLS)
),
) -> dict[str, Any]:
"""
Run or schedule an agent from the marketplace.
The endpoint automatically handles the setup flow:
- Returns missing inputs if required fields are not provided
- Returns missing credentials if user needs to configure them
- Executes immediately if all requirements are met
- Schedules execution if schedule_name and cron are provided
For scheduled execution:
- Cron format: "minute hour day month weekday"
- Examples: "0 9 * * 1-5" (9am weekdays), "0 0 * * *" (daily at midnight)
- Timezone: Use IANA timezone names like "America/New_York"
Args:
request: Agent slug, inputs, and optional schedule config
Returns:
- setup_requirements: If inputs or credentials are missing
- execution_started: If agent was run or scheduled successfully
- error: If something went wrong
"""
session = _create_ephemeral_session(auth.user_id)
result = await run_agent_tool._execute(
user_id=auth.user_id,
session=session,
username_agent_slug=request.username_agent_slug,
inputs=request.inputs,
use_defaults=request.use_defaults,
schedule_name=request.schedule_name or "",
cron=request.cron or "",
timezone=request.timezone,
)
return _response_to_dict(result)
def _response_to_dict(result: ToolResponseBase) -> dict[str, Any]:
"""Convert a tool response to a dictionary for JSON serialization."""
return result.model_dump()
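
A hedged example of calling the stateless run-agent endpoint above to schedule a marketplace agent with the documented 5-field cron syntax; the base URL, API key, and agent slug are placeholders:

import requests

BASE_URL = "https://platform.example.com/external-api/v1"  # placeholder
HEADERS = {"X-API-Key": "your-api-key"}  # placeholder

resp = requests.post(
    f"{BASE_URL}/tools/run-agent",
    headers=HEADERS,
    json={
        "username_agent_slug": "someuser/email-address-finder",
        "inputs": {"Business Name": "Acme", "Address": "USA"},
        "schedule_name": "weekday-morning-run",
        "cron": "0 9 * * 1-5",  # 9am on weekdays, per the docstring above
        "timezone": "America/New_York",
    },
).json()
# The response is setup_requirements, execution_started, or error, as
# described in the endpoint docstring.
print(resp)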

View File

@@ -6,12 +6,11 @@ from fastapi import APIRouter, Body, Security
from prisma.enums import CreditTransactionType
from backend.data.credit import admin_get_user_history, get_user_credit_model
from backend.server.v2.admin.model import AddUserCreditsResponse, UserHistoryResponse
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
from .model import AddUserCreditsResponse, UserHistoryResponse
_user_credit_model = get_user_credit_model()
logger = logging.getLogger(__name__)
router = APIRouter(
@@ -33,7 +32,8 @@ async def add_user_credits(
logger.info(
f"Admin user {admin_user_id} is adding {amount} credits to user {user_id}"
)
new_balance, transaction_key = await _user_credit_model._add_transaction(
user_credit_model = await get_user_credit_model(user_id)
new_balance, transaction_key = await user_credit_model._add_transaction(
user_id,
amount,
transaction_type=CreditTransactionType.GRANT,

View File

@@ -1,5 +1,5 @@
import json
from unittest.mock import AsyncMock
from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
@@ -7,16 +7,17 @@ import prisma.enums
import pytest
import pytest_mock
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from prisma import Json
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.admin.credit_admin_routes as credit_admin_routes
import backend.server.v2.admin.model as admin_model
from backend.data.model import UserTransaction
from backend.util.json import SafeJson
from backend.util.models import Pagination
from .credit_admin_routes import router as credit_admin_router
from .model import UserHistoryResponse
app = fastapi.FastAPI()
app.include_router(credit_admin_routes.router)
app.include_router(credit_admin_router)
client = fastapi.testclient.TestClient(app)
@@ -30,19 +31,21 @@ def setup_app_admin_auth(mock_jwt_admin):
def test_add_user_credits_success(
mocker: pytest_mock.MockFixture,
mocker: pytest_mock.MockerFixture,
configured_snapshot: Snapshot,
admin_user_id: str,
target_user_id: str,
) -> None:
"""Test successful credit addition by admin"""
# Mock the credit model
-mock_credit_model = mocker.patch(
-"backend.server.v2.admin.credit_admin_routes._user_credit_model"
-)
+mock_credit_model = Mock()
mock_credit_model._add_transaction = AsyncMock(
return_value=(1500, "transaction-123-uuid")
)
+mocker.patch(
+"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
+return_value=mock_credit_model,
+)
request_data = {
"user_id": target_user_id,
@@ -62,11 +65,17 @@ def test_add_user_credits_success(
call_args = mock_credit_model._add_transaction.call_args
assert call_args[0] == (target_user_id, 500)
assert call_args[1]["transaction_type"] == prisma.enums.CreditTransactionType.GRANT
-# Check that metadata is a Json object with the expected content
-assert isinstance(call_args[1]["metadata"], Json)
-assert call_args[1]["metadata"] == Json(
-{"admin_id": admin_user_id, "reason": "Test credit grant for debugging"}
-)
+# Check that metadata is a SafeJson object with the expected content
+assert isinstance(call_args[1]["metadata"], SafeJson)
+actual_metadata = call_args[1]["metadata"]
+expected_data = {
+"admin_id": admin_user_id,
+"reason": "Test credit grant for debugging",
+}
+# SafeJson inherits from Json which stores parsed data in the .data attribute
+assert actual_metadata.data["admin_id"] == expected_data["admin_id"]
+assert actual_metadata.data["reason"] == expected_data["reason"]
# Snapshot test the response
configured_snapshot.assert_match(
@@ -76,17 +85,19 @@ def test_add_user_credits_success(
def test_add_user_credits_negative_amount(
-mocker: pytest_mock.MockFixture,
+mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test credit deduction by admin (negative amount)"""
# Mock the credit model
-mock_credit_model = mocker.patch(
-"backend.server.v2.admin.credit_admin_routes._user_credit_model"
-)
+mock_credit_model = Mock()
mock_credit_model._add_transaction = AsyncMock(
return_value=(200, "transaction-456-uuid")
)
+mocker.patch(
+"backend.api.features.admin.credit_admin_routes.get_user_credit_model",
+return_value=mock_credit_model,
+)
request_data = {
"user_id": "target-user-id",
@@ -109,12 +120,12 @@ def test_add_user_credits_negative_amount(
def test_get_user_history_success(
-mocker: pytest_mock.MockFixture,
+mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test successful retrieval of user credit history"""
# Mock the admin_get_user_history function
-mock_history_response = admin_model.UserHistoryResponse(
+mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-1",
@@ -140,7 +151,7 @@ def test_get_user_history_success(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -160,12 +171,12 @@ def test_get_user_history_success(
def test_get_user_history_with_filters(
-mocker: pytest_mock.MockFixture,
+mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with search and filter parameters"""
# Mock the admin_get_user_history function
-mock_history_response = admin_model.UserHistoryResponse(
+mock_history_response = UserHistoryResponse(
history=[
UserTransaction(
user_id="user-3",
@@ -184,7 +195,7 @@ def test_get_user_history_with_filters(
)
mock_get_history = mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)
@@ -220,12 +231,12 @@ def test_get_user_history_with_filters(
def test_get_user_history_empty_results(
-mocker: pytest_mock.MockFixture,
+mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
) -> None:
"""Test user credit history with no results"""
# Mock empty history response
-mock_history_response = admin_model.UserHistoryResponse(
+mock_history_response = UserHistoryResponse(
history=[],
pagination=Pagination(
total_items=0,
@@ -236,7 +247,7 @@ def test_get_user_history_empty_results(
)
mocker.patch(
"backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
"backend.api.features.admin.credit_admin_routes.admin_get_user_history",
return_value=mock_history_response,
)

View File

@@ -0,0 +1,474 @@
import asyncio
import logging
from datetime import datetime
from typing import Optional
from autogpt_libs.auth import get_user_id, requires_admin_user
from fastapi import APIRouter, HTTPException, Security
from pydantic import BaseModel, Field
from backend.blocks.llm import LlmModel
from backend.data.analytics import (
AccuracyTrendsResponse,
get_accuracy_trends_and_alerts,
)
from backend.data.execution import (
ExecutionStatus,
GraphExecutionMeta,
get_graph_executions,
update_graph_execution_stats,
)
from backend.data.model import GraphExecutionStats
from backend.executor.activity_status_generator import (
DEFAULT_SYSTEM_PROMPT,
DEFAULT_USER_PROMPT,
generate_activity_status_for_execution,
)
from backend.executor.manager import get_db_async_client
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
class ExecutionAnalyticsRequest(BaseModel):
graph_id: str = Field(..., description="Graph ID to analyze")
graph_version: Optional[int] = Field(None, description="Optional graph version")
user_id: Optional[str] = Field(None, description="Optional user ID filter")
created_after: Optional[datetime] = Field(
None, description="Optional created date lower bound"
)
model_name: str = Field("gpt-4o-mini", description="Model to use for generation")
batch_size: int = Field(
10, description="Batch size for concurrent processing", le=25, ge=1
)
system_prompt: Optional[str] = Field(
None, description="Custom system prompt (default: built-in prompt)"
)
user_prompt: Optional[str] = Field(
None,
description="Custom user prompt with {{GRAPH_NAME}} and {{EXECUTION_DATA}} placeholders (default: built-in prompt)",
)
skip_existing: bool = Field(
True,
description="Whether to skip executions that already have activity status and correctness score",
)
class ExecutionAnalyticsResult(BaseModel):
agent_id: str
version_id: int
user_id: str
exec_id: str
summary_text: Optional[str]
score: Optional[float]
status: str # "success", "failed", "skipped"
error_message: Optional[str] = None
class ExecutionAnalyticsResponse(BaseModel):
total_executions: int
processed_executions: int
successful_analytics: int
failed_analytics: int
skipped_executions: int
results: list[ExecutionAnalyticsResult]
class ModelInfo(BaseModel):
value: str
label: str
provider: str
class ExecutionAnalyticsConfig(BaseModel):
available_models: list[ModelInfo]
default_system_prompt: str
default_user_prompt: str
recommended_model: str
class AccuracyTrendsRequest(BaseModel):
graph_id: str = Field(..., description="Graph ID to analyze", min_length=1)
user_id: Optional[str] = Field(None, description="Optional user ID filter")
days_back: int = Field(30, description="Number of days to look back", ge=7, le=90)
drop_threshold: float = Field(
10.0, description="Alert threshold percentage", ge=1.0, le=50.0
)
include_historical: bool = Field(
False, description="Include historical data for charts"
)
router = APIRouter(
prefix="/admin",
tags=["admin", "execution_analytics"],
dependencies=[Security(requires_admin_user)],
)
@router.get(
"/execution_analytics/config",
response_model=ExecutionAnalyticsConfig,
summary="Get Execution Analytics Configuration",
)
async def get_execution_analytics_config(
admin_user_id: str = Security(get_user_id),
):
"""
Get the configuration for execution analytics including:
- Available AI models with metadata
- Default system and user prompts
- Recommended model selection
"""
logger.info(f"Admin user {admin_user_id} requesting execution analytics config")
# Generate model list from LlmModel enum with provider information
available_models = []
# Function to generate friendly display names from model values
def generate_model_label(model: LlmModel) -> str:
"""Generate a user-friendly label from the model enum value."""
value = model.value
# For all models, convert underscores/hyphens to spaces and title case
# e.g., "gpt-4-turbo" -> "GPT 4 Turbo", "claude-3-haiku-20240307" -> "Claude 3 Haiku"
parts = value.replace("_", "-").split("-")
# Handle provider prefixes (e.g., "google/", "x-ai/")
if "/" in value:
_, model_name = value.split("/", 1)
parts = model_name.replace("_", "-").split("-")
# Capitalize and format parts
formatted_parts = []
for part in parts:
# Skip date-like patterns - check for various date formats:
# - Long dates like "20240307" (8 digits)
# - Year components like "2024", "2025" (4 digit years >= 2020)
# - Month/day components like "04", "16" when they appear to be dates
if part.isdigit():
if len(part) >= 8: # Long date format like "20240307"
continue
elif len(part) == 4 and int(part) >= 2020: # Year like "2024", "2025"
continue
elif len(part) <= 2 and int(part) <= 31: # Month/day like "04", "16"
# Skip if this looks like a date component (basic heuristic)
continue
# Keep version numbers as-is
if part.replace(".", "").isdigit():
formatted_parts.append(part)
# Capitalize normal words
else:
formatted_parts.append(
part.upper()
if part.upper() in ["GPT", "LLM", "API", "V0"]
else part.capitalize()
)
model_name = " ".join(formatted_parts)
# Format provider name for better display
provider_name = model.provider.replace("_", " ").title()
# Return with provider prefix for clarity
return f"{provider_name}: {model_name}"
# Include all LlmModel values (no more filtering by hardcoded list)
recommended_model = LlmModel.GPT4O_MINI.value
for model in LlmModel:
label = generate_model_label(model)
# Add "(Recommended)" suffix to the recommended model
if model.value == recommended_model:
label += " (Recommended)"
available_models.append(
ModelInfo(
value=model.value,
label=label,
provider=model.provider,
)
)
# Sort models by provider and name for better UX
available_models.sort(key=lambda x: (x.provider, x.label))
return ExecutionAnalyticsConfig(
available_models=available_models,
default_system_prompt=DEFAULT_SYSTEM_PROMPT,
default_user_prompt=DEFAULT_USER_PROMPT,
recommended_model=recommended_model,
)
@router.post(
"/execution_analytics",
response_model=ExecutionAnalyticsResponse,
summary="Generate Execution Analytics",
)
async def generate_execution_analytics(
request: ExecutionAnalyticsRequest,
admin_user_id: str = Security(get_user_id),
):
"""
Generate activity summaries and correctness scores for graph executions.
This endpoint:
1. Fetches all completed executions matching the criteria
2. Identifies executions missing activity_status or correctness_score
3. Generates missing data using AI in batches
4. Updates the database with new stats
5. Returns a detailed report of the analytics operation
"""
logger.info(
f"Admin user {admin_user_id} starting execution analytics generation for graph {request.graph_id}"
)
try:
# Validate model configuration
settings = Settings()
if not settings.secrets.openai_internal_api_key:
raise HTTPException(status_code=500, detail="OpenAI API key not configured")
# Get database client
db_client = get_db_async_client()
# Fetch executions to process
executions = await get_graph_executions(
graph_id=request.graph_id,
graph_version=request.graph_version,
user_id=request.user_id,
created_time_gte=request.created_after,
statuses=[
ExecutionStatus.COMPLETED,
ExecutionStatus.FAILED,
ExecutionStatus.TERMINATED,
], # Only process finished executions
)
logger.info(
f"Found {len(executions)} total executions for graph {request.graph_id}"
)
# Filter executions that need analytics generation
executions_to_process = []
for execution in executions:
# Skip if we should skip existing analytics and both activity_status and correctness_score exist
if (
request.skip_existing
and execution.stats
and execution.stats.activity_status
and execution.stats.correctness_score is not None
):
continue
# Add execution to processing list
executions_to_process.append(execution)
logger.info(
f"Found {len(executions_to_process)} executions needing analytics generation"
)
# Create results for ALL executions - processed and skipped
results = []
successful_count = 0
failed_count = 0
# Process executions that need analytics generation
if executions_to_process:
total_batches = len(
range(0, len(executions_to_process), request.batch_size)
)
for batch_idx, i in enumerate(
range(0, len(executions_to_process), request.batch_size)
):
batch = executions_to_process[i : i + request.batch_size]
logger.info(
f"Processing batch {batch_idx + 1}/{total_batches} with {len(batch)} executions"
)
batch_results = await _process_batch(batch, request, db_client)
for result in batch_results:
results.append(result)
if result.status == "success":
successful_count += 1
elif result.status == "failed":
failed_count += 1
# Small delay between batches to avoid overwhelming the LLM API
if batch_idx < total_batches - 1: # Don't delay after the last batch
await asyncio.sleep(2)
# Add ALL executions to results (both processed and skipped)
for execution in executions:
# Skip if already processed (added to results above)
if execution in executions_to_process:
continue
results.append(
ExecutionAnalyticsResult(
agent_id=execution.graph_id,
version_id=execution.graph_version,
user_id=execution.user_id,
exec_id=execution.id,
summary_text=(
execution.stats.activity_status if execution.stats else None
),
score=(
execution.stats.correctness_score if execution.stats else None
),
status="skipped",
error_message=None, # Not an error - just already processed
)
)
response = ExecutionAnalyticsResponse(
total_executions=len(executions),
processed_executions=len(executions_to_process),
successful_analytics=successful_count,
failed_analytics=failed_count,
skipped_executions=len(executions) - len(executions_to_process),
results=results,
)
logger.info(
f"Analytics generation completed: {successful_count} successful, {failed_count} failed, "
f"{response.skipped_executions} skipped"
)
return response
except Exception as e:
logger.exception(f"Error during execution analytics generation: {e}")
raise HTTPException(status_code=500, detail=str(e))
async def _process_batch(
executions, request: ExecutionAnalyticsRequest, db_client
) -> list[ExecutionAnalyticsResult]:
"""Process a batch of executions concurrently."""
async def process_single_execution(execution) -> ExecutionAnalyticsResult:
try:
# Generate activity status and score using the specified model
# Convert stats to GraphExecutionStats if needed
if execution.stats:
if isinstance(execution.stats, GraphExecutionMeta.Stats):
stats_for_generation = execution.stats.to_db()
else:
# Already GraphExecutionStats
stats_for_generation = execution.stats
else:
stats_for_generation = GraphExecutionStats()
activity_response = await generate_activity_status_for_execution(
graph_exec_id=execution.id,
graph_id=execution.graph_id,
graph_version=execution.graph_version,
execution_stats=stats_for_generation,
db_client=db_client,
user_id=execution.user_id,
execution_status=execution.status,
model_name=request.model_name,
skip_feature_flag=True, # Admin endpoint bypasses feature flags
system_prompt=request.system_prompt or DEFAULT_SYSTEM_PROMPT,
user_prompt=request.user_prompt or DEFAULT_USER_PROMPT,
skip_existing=request.skip_existing,
)
if not activity_response:
return ExecutionAnalyticsResult(
agent_id=execution.graph_id,
version_id=execution.graph_version,
user_id=execution.user_id,
exec_id=execution.id,
summary_text=None,
score=None,
status="skipped",
error_message="Activity generation returned None",
)
# Update the execution stats
# Convert GraphExecutionMeta.Stats to GraphExecutionStats for DB compatibility
if execution.stats:
if isinstance(execution.stats, GraphExecutionMeta.Stats):
updated_stats = execution.stats.to_db()
else:
# Already GraphExecutionStats
updated_stats = execution.stats
else:
updated_stats = GraphExecutionStats()
updated_stats.activity_status = activity_response["activity_status"]
updated_stats.correctness_score = activity_response["correctness_score"]
# Save to database with correct stats type
await update_graph_execution_stats(
graph_exec_id=execution.id, stats=updated_stats
)
return ExecutionAnalyticsResult(
agent_id=execution.graph_id,
version_id=execution.graph_version,
user_id=execution.user_id,
exec_id=execution.id,
summary_text=activity_response["activity_status"],
score=activity_response["correctness_score"],
status="success",
)
except Exception as e:
logger.exception(f"Error processing execution {execution.id}: {e}")
return ExecutionAnalyticsResult(
agent_id=execution.graph_id,
version_id=execution.graph_version,
user_id=execution.user_id,
exec_id=execution.id,
summary_text=None,
score=None,
status="failed",
error_message=str(e),
)
# Process all executions in the batch concurrently
return await asyncio.gather(
*[process_single_execution(execution) for execution in executions]
)
@router.get(
"/execution_accuracy_trends",
response_model=AccuracyTrendsResponse,
summary="Get Execution Accuracy Trends and Alerts",
)
async def get_execution_accuracy_trends(
graph_id: str,
user_id: Optional[str] = None,
days_back: int = 30,
drop_threshold: float = 10.0,
include_historical: bool = False,
admin_user_id: str = Security(get_user_id),
) -> AccuracyTrendsResponse:
"""
Get execution accuracy trends with moving averages and alert detection.
Simple single-query approach.
"""
logger.info(
f"Admin user {admin_user_id} requesting accuracy trends for graph {graph_id}"
)
try:
result = await get_accuracy_trends_and_alerts(
graph_id=graph_id,
days_back=days_back,
user_id=user_id,
drop_threshold=drop_threshold,
include_historical=include_historical,
)
return result
except Exception as e:
logger.exception(f"Error getting accuracy trends for graph {graph_id}: {e}")
raise HTTPException(status_code=500, detail=str(e))
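The batch loop in generate_execution_analytics above follows a common pattern: walk the work list in batch_size strides, gather each batch concurrently, and pause between batches so the LLM API is not overwhelmed. A minimal self-contained sketch of just that pattern, where process() is a stand-in for the per-execution analytics call:

import asyncio

async def process(item: int) -> int:
    await asyncio.sleep(0.01)  # stand-in for the per-execution LLM call
    return item * 2

async def run_in_batches(items: list[int], batch_size: int = 10) -> list[int]:
    results: list[int] = []
    total_batches = (len(items) + batch_size - 1) // batch_size
    for batch_idx, start in enumerate(range(0, len(items), batch_size)):
        batch = items[start : start + batch_size]
        # Gathering within a batch bounds concurrency at batch_size.
        results.extend(await asyncio.gather(*(process(x) for x in batch)))
        if batch_idx < total_batches - 1:  # no delay after the last batch
            await asyncio.sleep(2)
    return results

# asyncio.run(run_in_batches(list(range(25)), batch_size=10))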

View File

@@ -7,8 +7,9 @@ import fastapi
import fastapi.responses
import prisma.enums
-import backend.server.v2.store.db
-import backend.server.v2.store.model
+import backend.api.features.store.cache as store_cache
+import backend.api.features.store.db as store_db
+import backend.api.features.store.model as store_model
import backend.util.json
logger = logging.getLogger(__name__)
@@ -23,7 +24,7 @@ router = fastapi.APIRouter(
@router.get(
"/listings",
summary="Get Admin Listings History",
-response_model=backend.server.v2.store.model.StoreListingsWithVersionsResponse,
+response_model=store_model.StoreListingsWithVersionsResponse,
)
async def get_admin_listings_with_versions(
status: typing.Optional[prisma.enums.SubmissionStatus] = None,
@@ -47,7 +48,7 @@ async def get_admin_listings_with_versions(
StoreListingsWithVersionsResponse with listings and their versions
"""
try:
-listings = await backend.server.v2.store.db.get_admin_listings_with_versions(
+listings = await store_db.get_admin_listings_with_versions(
status=status,
search_query=search,
page=page,
@@ -67,11 +68,11 @@ async def get_admin_listings_with_versions(
@router.post(
"/submissions/{store_listing_version_id}/review",
summary="Review Store Submission",
-response_model=backend.server.v2.store.model.StoreSubmission,
+response_model=store_model.StoreSubmission,
)
async def review_submission(
store_listing_version_id: str,
-request: backend.server.v2.store.model.ReviewSubmissionRequest,
+request: store_model.ReviewSubmissionRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
@@ -86,13 +87,21 @@ async def review_submission(
StoreSubmission with updated review information
"""
try:
-submission = await backend.server.v2.store.db.review_store_submission(
+already_approved = await store_db.check_submission_already_approved(
+store_listing_version_id=store_listing_version_id,
+)
+submission = await store_db.review_store_submission(
store_listing_version_id=store_listing_version_id,
is_approved=request.is_approved,
external_comments=request.comments,
internal_comments=request.internal_comments or "",
reviewer_id=user_id,
)
+state_changed = already_approved != request.is_approved
+# Clear caches when the request is approved as it updates what is shown on the store
+if state_changed:
+store_cache.clear_all_caches()
return submission
except Exception as e:
logger.exception("Error reviewing submission: %s", e)
@@ -125,7 +134,7 @@ async def admin_download_agent_file(
Raises:
HTTPException: If the agent is not found or an unexpected error occurs.
"""
-graph_data = await backend.server.v2.store.db.get_agent_as_admin(
+graph_data = await store_db.get_agent_as_admin(
user_id=user_id,
store_listing_version_id=store_listing_version_id,
)

View File

@@ -6,10 +6,11 @@ from typing import Annotated
import fastapi
import pydantic
from autogpt_libs.auth import get_user_id
+from autogpt_libs.auth.dependencies import requires_user
import backend.data.analytics
-router = fastapi.APIRouter()
+router = fastapi.APIRouter(dependencies=[fastapi.Security(requires_user)])
logger = logging.getLogger(__name__)
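The change above moves authentication to the router constructor: dependencies=[fastapi.Security(requires_user)] runs for every route registered on that router, so individual endpoints no longer need their own auth parameter. A minimal sketch of the pattern with a stand-in checker (the real project uses autogpt_libs' requires_user, not this stub):

import fastapi

def requires_user_stub(x_token: str = fastapi.Header(default="")) -> None:
    # Stand-in for the real auth dependency.
    if x_token != "secret":
        raise fastapi.HTTPException(status_code=401, detail="Not authenticated")

router = fastapi.APIRouter(dependencies=[fastapi.Depends(requires_user_stub)])

@router.post("/log_raw_metric")
def log_raw_metric() -> dict[str, str]:
    # The router-level dependency has already rejected unauthenticated calls.
    return {"status": "ok"}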

View File

@@ -0,0 +1,340 @@
"""Tests for analytics API endpoints."""
import json
from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
from .analytics import router as analytics_router
app = fastapi.FastAPI()
app.include_router(analytics_router)
client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module."""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
# =============================================================================
# /log_raw_metric endpoint tests
# =============================================================================
def test_log_raw_metric_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw metric logging."""
mock_result = Mock(id="metric-123-uuid")
mock_log_metric = mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": "page_load_time",
"metric_value": 2.5,
"data_string": "/dashboard",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "metric-123-uuid"
mock_log_metric.assert_called_once_with(
user_id=test_user_id,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",
)
configured_snapshot.assert_match(
json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_metric_success",
)
@pytest.mark.parametrize(
"metric_value,metric_name,data_string,test_id",
[
(100, "api_calls_count", "external_api", "integer_value"),
(0, "error_count", "no_errors", "zero_value"),
(-5.2, "temperature_delta", "cooling", "negative_value"),
(1.23456789, "precision_test", "float_precision", "float_precision"),
(999999999, "large_number", "max_value", "large_number"),
(0.0000001, "tiny_number", "min_value", "tiny_number"),
],
)
def test_log_raw_metric_various_values(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
metric_value: float,
metric_name: str,
data_string: str,
test_id: str,
) -> None:
"""Test raw metric logging with various metric values."""
mock_result = Mock(id=f"metric-{test_id}-uuid")
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"metric_name": metric_name,
"metric_value": metric_value,
"data_string": data_string,
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 200, f"Failed for {test_id}: {response.text}"
configured_snapshot.assert_match(
json.dumps(
{"metric_id": response.json(), "test_case": test_id},
indent=2,
sort_keys=True,
),
f"analytics_metric_{test_id}",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"metric_name": "test"}, "Field required"),
(
{"metric_name": "test", "metric_value": "not_a_number", "data_string": "x"},
"Input should be a valid number",
),
(
{"metric_name": "", "metric_value": 1.0, "data_string": "test"},
"String should have at least 1 character",
),
(
{"metric_name": "test", "metric_value": 1.0, "data_string": ""},
"String should have at least 1 character",
),
],
ids=[
"empty_request",
"missing_metric_value_and_data_string",
"invalid_metric_value_type",
"empty_metric_name",
"empty_data_string",
],
)
def test_log_raw_metric_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid metric requests."""
response = client.post("/log_raw_metric", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_metric_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_metric",
new_callable=AsyncMock,
side_effect=Exception("Database connection failed"),
)
request_data = {
"metric_name": "test_metric",
"metric_value": 1.0,
"data_string": "test",
}
response = client.post("/log_raw_metric", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Database connection failed" in error_detail["message"]
assert "hint" in error_detail
# =============================================================================
# /log_raw_analytics endpoint tests
# =============================================================================
def test_log_raw_analytics_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw analytics logging."""
mock_result = Mock(id="analytics-789-uuid")
mock_log_analytics = mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "user_action",
"data": {
"action": "button_click",
"button_id": "submit_form",
"timestamp": "2023-01-01T00:00:00Z",
"metadata": {"form_type": "registration", "fields_filled": 5},
},
"data_index": "button_click_submit_form",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200, f"Unexpected response: {response.text}"
assert response.json() == "analytics-789-uuid"
mock_log_analytics.assert_called_once_with(
test_user_id,
"user_action",
request_data["data"],
"button_click_submit_form",
)
configured_snapshot.assert_match(
json.dumps({"analytics_id": response.json()}, indent=2, sort_keys=True),
"analytics_log_analytics_success",
)
def test_log_raw_analytics_complex_data(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
) -> None:
"""Test raw analytics logging with complex nested data structures."""
mock_result = Mock(id="analytics-complex-uuid")
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
return_value=mock_result,
)
request_data = {
"type": "agent_execution",
"data": {
"agent_id": "agent_123",
"execution_id": "exec_456",
"status": "completed",
"duration_ms": 3500,
"nodes_executed": 15,
"blocks_used": [
{"block_id": "llm_block", "count": 3},
{"block_id": "http_block", "count": 5},
{"block_id": "code_block", "count": 2},
],
"errors": [],
"metadata": {
"trigger": "manual",
"user_tier": "premium",
"environment": "production",
},
},
"data_index": "agent_123_exec_456",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 200
configured_snapshot.assert_match(
json.dumps(
{"analytics_id": response.json(), "logged_data": request_data["data"]},
indent=2,
sort_keys=True,
),
"analytics_log_analytics_complex_data",
)
@pytest.mark.parametrize(
"invalid_data,expected_error",
[
({}, "Field required"),
({"type": "test"}, "Field required"),
(
{"type": "test", "data": "not_a_dict", "data_index": "test"},
"Input should be a valid dictionary",
),
({"type": "test", "data": {"key": "value"}}, "Field required"),
],
ids=[
"empty_request",
"missing_data_and_data_index",
"invalid_data_type",
"missing_data_index",
],
)
def test_log_raw_analytics_validation_errors(
invalid_data: dict,
expected_error: str,
) -> None:
"""Test validation errors for invalid analytics requests."""
response = client.post("/log_raw_analytics", json=invalid_data)
assert response.status_code == 422
error_detail = response.json()
assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}"
error_text = json.dumps(error_detail)
assert (
expected_error in error_text
), f"Expected '{expected_error}' in error response: {error_text}"
def test_log_raw_analytics_service_error(
mocker: pytest_mock.MockFixture,
test_user_id: str,
) -> None:
"""Test error handling when analytics service fails."""
mocker.patch(
"backend.data.analytics.log_raw_analytics",
new_callable=AsyncMock,
side_effect=Exception("Analytics DB unreachable"),
)
request_data = {
"type": "test_event",
"data": {"key": "value"},
"data_index": "test_index",
}
response = client.post("/log_raw_analytics", json=request_data)
assert response.status_code == 500
error_detail = response.json()["detail"]
assert "Analytics DB unreachable" in error_detail["message"]
assert "hint" in error_detail

View File

@@ -0,0 +1,689 @@
import logging
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from difflib import SequenceMatcher
from typing import Sequence
import prisma
import backend.api.features.library.db as library_db
import backend.api.features.library.model as library_model
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
import backend.data.block
from backend.blocks import load_all_blocks
from backend.blocks.llm import LlmModel
from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema
from backend.data.db import query_raw_with_schema
from backend.integrations.providers import ProviderName
from backend.util.cache import cached
from backend.util.models import Pagination
from .model import (
BlockCategoryResponse,
BlockResponse,
BlockType,
CountResponse,
FilterType,
Provider,
ProviderResponse,
SearchEntry,
)
logger = logging.getLogger(__name__)
llm_models = [name.name.lower().replace("_", " ") for name in LlmModel]
MAX_LIBRARY_AGENT_RESULTS = 100
MAX_MARKETPLACE_AGENT_RESULTS = 100
MIN_SCORE_FOR_FILTERED_RESULTS = 10.0
SearchResultItem = BlockInfo | library_model.LibraryAgent | store_model.StoreAgent
@dataclass
class _ScoredItem:
item: SearchResultItem
filter_type: FilterType
score: float
sort_key: str
@dataclass
class _SearchCacheEntry:
items: list[SearchResultItem]
total_items: dict[FilterType, int]
def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]:
categories: dict[BlockCategory, BlockCategoryResponse] = {}
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
# Skip disabled blocks
if block.disabled:
continue
# Skip blocks that don't have categories (all should have at least one)
if not block.categories:
continue
# Add block to the categories
for category in block.categories:
if category not in categories:
categories[category] = BlockCategoryResponse(
name=category.name.lower(),
total_blocks=0,
blocks=[],
)
categories[category].total_blocks += 1
# Append if the category has less than the specified number of blocks
if len(categories[category].blocks) < category_blocks:
categories[category].blocks.append(block.get_info())
# Sort categories by name
return sorted(categories.values(), key=lambda x: x.name)
def get_blocks(
*,
category: str | None = None,
type: BlockType | None = None,
provider: ProviderName | None = None,
page: int = 1,
page_size: int = 50,
) -> BlockResponse:
"""
Get blocks based on either category, type or provider.
Providing nothing fetches all block types.
"""
# Only one of category, type, or provider can be specified
if (category and type) or (category and provider) or (type and provider):
raise ValueError("Only one of category, type, or provider can be specified")
blocks: list[AnyBlockSchema] = []
skip = (page - 1) * page_size
take = page_size
total = 0
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
# Skip disabled blocks
if block.disabled:
continue
# Skip blocks that don't match the category
if category and category not in {c.name.lower() for c in block.categories}:
continue
# Skip blocks that don't match the type
if (
(type == "input" and block.block_type.value != "Input")
or (type == "output" and block.block_type.value != "Output")
or (type == "action" and block.block_type.value in ("Input", "Output"))
):
continue
# Skip blocks that don't match the provider
if provider:
credentials_info = block.input_schema.get_credentials_fields_info().values()
if not any(provider in info.provider for info in credentials_info):
continue
total += 1
if skip > 0:
skip -= 1
continue
if take > 0:
take -= 1
blocks.append(block)
return BlockResponse(
blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
),
)
def get_block_by_id(block_id: str) -> BlockInfo | None:
"""
Get a specific block by its ID.
"""
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.id == block_id:
return block.get_info()
return None
async def update_search(user_id: str, search: SearchEntry) -> str:
"""
Upsert a search request for the user and return the search ID.
"""
if search.search_id:
# Update existing search
await prisma.models.BuilderSearchHistory.prisma().update(
where={
"id": search.search_id,
},
data={
"searchQuery": search.search_query or "",
"filter": search.filter or [], # type: ignore
"byCreator": search.by_creator or [],
},
)
return search.search_id
else:
# Create new search
new_search = await prisma.models.BuilderSearchHistory.prisma().create(
data={
"userId": user_id,
"searchQuery": search.search_query or "",
"filter": search.filter or [], # type: ignore
"byCreator": search.by_creator or [],
}
)
return new_search.id
async def get_recent_searches(user_id: str, limit: int = 5) -> list[SearchEntry]:
"""
Get the user's most recent search requests.
"""
searches = await prisma.models.BuilderSearchHistory.prisma().find_many(
where={
"userId": user_id,
},
order={
"updatedAt": "desc",
},
take=limit,
)
return [
SearchEntry(
search_query=s.searchQuery,
filter=s.filter, # type: ignore
by_creator=s.byCreator,
search_id=s.id,
)
for s in searches
]
async def get_sorted_search_results(
*,
user_id: str,
search_query: str | None,
filters: Sequence[FilterType],
by_creator: Sequence[str] | None = None,
) -> _SearchCacheEntry:
normalized_filters: tuple[FilterType, ...] = tuple(sorted(set(filters or [])))
normalized_creators: tuple[str, ...] = tuple(sorted(set(by_creator or [])))
return await _build_cached_search_results(
user_id=user_id,
search_query=search_query or "",
filters=normalized_filters,
by_creator=normalized_creators,
)
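The sorted-tuple normalization above is what makes the cache effective: the @cached layer keys on the call arguments, so filters and creators must be hashable and order-insensitive before they reach _build_cached_search_results. A minimal sketch of the same idea using functools.lru_cache in place of the project's cached decorator (which adds TTL and shared storage on top):

from functools import lru_cache

@lru_cache(maxsize=128)
def search(query: str, filters: tuple[str, ...]) -> str:
    # Pretend this is an expensive scored search.
    return f"results for {query!r} with {filters}"

# Both calls normalize to the same key, so the second one is a cache hit:
print(search("api", tuple(sorted({"blocks", "integrations"}))))
print(search("api", tuple(sorted({"integrations", "blocks"}))))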
@cached(ttl_seconds=300, shared_cache=True)
async def _build_cached_search_results(
user_id: str,
search_query: str,
filters: tuple[FilterType, ...],
by_creator: tuple[str, ...],
) -> _SearchCacheEntry:
normalized_query = (search_query or "").strip().lower()
include_blocks = "blocks" in filters
include_integrations = "integrations" in filters
include_library_agents = "my_agents" in filters
include_marketplace_agents = "marketplace_agents" in filters
scored_items: list[_ScoredItem] = []
total_items: dict[FilterType, int] = {
"blocks": 0,
"integrations": 0,
"marketplace_agents": 0,
"my_agents": 0,
}
block_results, block_total, integration_total = _collect_block_results(
normalized_query=normalized_query,
include_blocks=include_blocks,
include_integrations=include_integrations,
)
scored_items.extend(block_results)
total_items["blocks"] = block_total
total_items["integrations"] = integration_total
if include_library_agents:
library_response = await library_db.list_library_agents(
user_id=user_id,
search_term=search_query or None,
page=1,
page_size=MAX_LIBRARY_AGENT_RESULTS,
)
total_items["my_agents"] = library_response.pagination.total_items
scored_items.extend(
_build_library_items(
agents=library_response.agents,
normalized_query=normalized_query,
)
)
if include_marketplace_agents:
marketplace_response = await store_db.get_store_agents(
creators=list(by_creator) or None,
search_query=search_query or None,
page=1,
page_size=MAX_MARKETPLACE_AGENT_RESULTS,
)
total_items["marketplace_agents"] = marketplace_response.pagination.total_items
scored_items.extend(
_build_marketplace_items(
agents=marketplace_response.agents,
normalized_query=normalized_query,
)
)
sorted_items = sorted(
scored_items,
key=lambda entry: (-entry.score, entry.sort_key, entry.filter_type),
)
return _SearchCacheEntry(
items=[entry.item for entry in sorted_items],
total_items=total_items,
)
def _collect_block_results(
*,
normalized_query: str,
include_blocks: bool,
include_integrations: bool,
) -> tuple[list[_ScoredItem], int, int]:
results: list[_ScoredItem] = []
block_count = 0
integration_count = 0
if not include_blocks and not include_integrations:
return results, block_count, integration_count
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
block_info = block.get_info()
credentials = list(block.input_schema.get_credentials_fields().values())
is_integration = len(credentials) > 0
if is_integration and not include_integrations:
continue
if not is_integration and not include_blocks:
continue
score = _score_block(block, block_info, normalized_query)
if not _should_include_item(score, normalized_query):
continue
filter_type: FilterType = "integrations" if is_integration else "blocks"
if is_integration:
integration_count += 1
else:
block_count += 1
results.append(
_ScoredItem(
item=block_info,
filter_type=filter_type,
score=score,
sort_key=_get_item_name(block_info),
)
)
return results, block_count, integration_count
def _build_library_items(
*,
agents: list[library_model.LibraryAgent],
normalized_query: str,
) -> list[_ScoredItem]:
results: list[_ScoredItem] = []
for agent in agents:
score = _score_library_agent(agent, normalized_query)
if not _should_include_item(score, normalized_query):
continue
results.append(
_ScoredItem(
item=agent,
filter_type="my_agents",
score=score,
sort_key=_get_item_name(agent),
)
)
return results
def _build_marketplace_items(
*,
agents: list[store_model.StoreAgent],
normalized_query: str,
) -> list[_ScoredItem]:
results: list[_ScoredItem] = []
for agent in agents:
score = _score_store_agent(agent, normalized_query)
if not _should_include_item(score, normalized_query):
continue
results.append(
_ScoredItem(
item=agent,
filter_type="marketplace_agents",
score=score,
sort_key=_get_item_name(agent),
)
)
return results
def get_providers(
query: str = "",
page: int = 1,
page_size: int = 50,
) -> ProviderResponse:
providers = []
query = query.lower()
skip = (page - 1) * page_size
take = page_size
all_providers = _get_all_providers()
for provider in all_providers.values():
if (
query not in provider.name.value.lower()
and query not in provider.description.lower()
):
continue
if skip > 0:
skip -= 1
continue
if take > 0:
take -= 1
providers.append(provider)
total = len(all_providers)
return ProviderResponse(
providers=providers,
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
),
)
async def get_counts(user_id: str) -> CountResponse:
my_agents = await prisma.models.LibraryAgent.prisma().count(
where={
"userId": user_id,
"isDeleted": False,
"isArchived": False,
}
)
counts = await _get_static_counts()
return CountResponse(
my_agents=my_agents,
**counts,
)
@cached(ttl_seconds=3600)
async def _get_static_counts():
"""
Get counts of blocks, integrations, and marketplace agents.
This is cached to avoid unnecessary database queries and calculations.
"""
all_blocks = 0
input_blocks = 0
action_blocks = 0
output_blocks = 0
integrations = 0
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
all_blocks += 1
if block.block_type.value == "Input":
input_blocks += 1
elif block.block_type.value == "Output":
output_blocks += 1
else:
action_blocks += 1
credentials = list(block.input_schema.get_credentials_fields().values())
if len(credentials) > 0:
integrations += 1
marketplace_agents = await prisma.models.StoreAgent.prisma().count()
return {
"all_blocks": all_blocks,
"input_blocks": input_blocks,
"action_blocks": action_blocks,
"output_blocks": output_blocks,
"integrations": integrations,
"marketplace_agents": marketplace_agents,
}
def _matches_llm_model(schema_cls: type[BlockSchema], query: str) -> bool:
for field in schema_cls.model_fields.values():
if field.annotation == LlmModel:
# Check if query matches any value in llm_models
if any(query in name for name in llm_models):
return True
return False
def _score_block(
block: AnyBlockSchema,
block_info: BlockInfo,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = block_info.name.lower()
description = block_info.description.lower()
score = _score_primary_fields(name, description, normalized_query)
category_text = " ".join(
category.get("category", "").lower() for category in block_info.categories
)
score += _score_additional_field(category_text, normalized_query, 12, 6)
credentials_info = block.input_schema.get_credentials_fields_info().values()
provider_names = [
provider.value.lower()
for info in credentials_info
for provider in info.provider
]
provider_text = " ".join(provider_names)
score += _score_additional_field(provider_text, normalized_query, 15, 6)
if _matches_llm_model(block.input_schema, normalized_query):
score += 20
return score
def _score_library_agent(
agent: library_model.LibraryAgent,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = agent.name.lower()
description = (agent.description or "").lower()
instructions = (agent.instructions or "").lower()
score = _score_primary_fields(name, description, normalized_query)
score += _score_additional_field(instructions, normalized_query, 15, 6)
score += _score_additional_field(
agent.creator_name.lower(), normalized_query, 10, 5
)
return score
def _score_store_agent(
agent: store_model.StoreAgent,
normalized_query: str,
) -> float:
if not normalized_query:
return 0.0
name = agent.agent_name.lower()
description = agent.description.lower()
sub_heading = agent.sub_heading.lower()
score = _score_primary_fields(name, description, normalized_query)
score += _score_additional_field(sub_heading, normalized_query, 12, 6)
score += _score_additional_field(agent.creator.lower(), normalized_query, 10, 5)
return score
def _score_primary_fields(name: str, description: str, query: str) -> float:
score = 0.0
if name == query:
score += 120
elif name.startswith(query):
score += 90
elif query in name:
score += 60
score += SequenceMatcher(None, name, query).ratio() * 50
if description:
if query in description:
score += 30
score += SequenceMatcher(None, description, query).ratio() * 25
return score
def _score_additional_field(
value: str,
query: str,
contains_weight: float,
similarity_weight: float,
) -> float:
if not value or not query:
return 0.0
score = 0.0
if query in value:
score += contains_weight
score += SequenceMatcher(None, value, query).ratio() * similarity_weight
return score
def _should_include_item(score: float, normalized_query: str) -> bool:
if not normalized_query:
return True
return score >= MIN_SCORE_FOR_FILTERED_RESULTS
def _get_item_name(item: SearchResultItem) -> str:
if isinstance(item, BlockInfo):
return item.name.lower()
if isinstance(item, library_model.LibraryAgent):
return item.name.lower()
return item.agent_name.lower()
@cached(ttl_seconds=3600)
def _get_all_providers() -> dict[ProviderName, Provider]:
providers: dict[ProviderName, Provider] = {}
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled:
continue
credentials_info = block.input_schema.get_credentials_fields_info().values()
for info in credentials_info:
for provider in info.provider: # provider is a ProviderName enum member
if provider in providers:
providers[provider].integration_count += 1
else:
providers[provider] = Provider(
name=provider, description="", integration_count=1
)
return providers
@cached(ttl_seconds=3600)
async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]:
suggested_blocks = []
# Sum the number of executions for each block type
# Prisma cannot group by nested relations, so we do a raw query
# Calculate the cutoff timestamp
timestamp_threshold = datetime.now(timezone.utc) - timedelta(days=30)
results = await query_raw_with_schema(
"""
SELECT
agent_node."agentBlockId" AS block_id,
COUNT(execution.id) AS execution_count
FROM {schema_prefix}"AgentNodeExecution" execution
JOIN {schema_prefix}"AgentNode" agent_node ON execution."agentNodeId" = agent_node.id
WHERE execution."endedTime" >= $1::timestamp
GROUP BY agent_node."agentBlockId"
ORDER BY execution_count DESC;
""",
timestamp_threshold,
)
# Get the top blocks based on execution count
# But ignore Input and Output blocks
blocks: list[tuple[BlockInfo, int]] = []
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled or block.block_type in (
backend.data.block.BlockType.INPUT,
backend.data.block.BlockType.OUTPUT,
backend.data.block.BlockType.AGENT,
):
continue
# Find the execution count for this block
execution_count = next(
(row["execution_count"] for row in results if row["block_id"] == block.id),
0,
)
blocks.append((block.get_info(), execution_count))
# Sort blocks by execution count
blocks.sort(key=lambda x: x[1], reverse=True)
suggested_blocks = [block[0] for block in blocks]
# Return the top blocks
return suggested_blocks[:count]
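To see how the weights in _score_primary_fields play out, here is the same logic rerun standalone on invented sample values; the names and queries below are illustrative, not taken from the codebase.

from difflib import SequenceMatcher

def score_primary_fields(name: str, description: str, query: str) -> float:
    # Mirrors _score_primary_fields above: exact > prefix > substring, plus fuzzy bonuses.
    score = 0.0
    if name == query:
        score += 120
    elif name.startswith(query):
        score += 90
    elif query in name:
        score += 60
    score += SequenceMatcher(None, name, query).ratio() * 50
    if description:
        if query in description:
            score += 30
        score += SequenceMatcher(None, description, query).ratio() * 25
    return score

# Exact name match: 120 plus the full similarity bonus and description bonuses.
print(score_primary_fields("http request", "make an http call", "http request"))
# Unrelated name: only the similarity ratios contribute, landing near the
# MIN_SCORE_FOR_FILTERED_RESULTS cutoff of 10.0.
print(score_primary_fields("calculator", "do simple math", "http request"))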

View File

@@ -1,9 +1,10 @@
-from typing import Any, Literal
+from typing import Literal
from pydantic import BaseModel
-import backend.server.v2.library.model as library_model
-import backend.server.v2.store.model as store_model
+import backend.api.features.library.model as library_model
+import backend.api.features.store.model as store_model
from backend.data.block import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
@@ -16,29 +17,34 @@ FilterType = Literal[
BlockType = Literal["all", "input", "action", "output"]
-BlockData = dict[str, Any]
+class SearchEntry(BaseModel):
+search_query: str | None = None
+filter: list[FilterType] | None = None
+by_creator: list[str] | None = None
+search_id: str | None = None
# Suggestions
class SuggestionsResponse(BaseModel):
otto_suggestions: list[str]
-recent_searches: list[str]
+recent_searches: list[SearchEntry]
providers: list[ProviderName]
-top_blocks: list[BlockData]
+top_blocks: list[BlockInfo]
# All blocks
class BlockCategoryResponse(BaseModel):
name: str
total_blocks: int
-blocks: list[BlockData]
+blocks: list[BlockInfo]
-model_config = {"use_enum_values": False} # <== use enum names like "AI"
+model_config = {"use_enum_values": False} # Use enum names like "AI"
# Input/Action/Output and see all for block categories
class BlockResponse(BaseModel):
-blocks: list[BlockData]
+blocks: list[BlockInfo]
pagination: Pagination
@@ -54,27 +60,11 @@ class ProviderResponse(BaseModel):
pagination: Pagination
# Search
-class SearchRequest(BaseModel):
-search_query: str | None = None
-filter: list[FilterType] | None = None
-by_creator: list[str] | None = None
-search_id: str | None = None
-page: int | None = None
-page_size: int | None = None
-class SearchBlocksResponse(BaseModel):
-blocks: BlockResponse
-total_block_count: int
-total_integration_count: int
class SearchResponse(BaseModel):
-items: list[BlockData | library_model.LibraryAgent | store_model.StoreAgent]
+items: list[BlockInfo | library_model.LibraryAgent | store_model.StoreAgent]
+search_id: str
total_items: dict[FilterType, int]
-page: int
-more_pages: bool
+pagination: Pagination
class CountResponse(BaseModel):

View File

@@ -4,15 +4,12 @@ from typing import Annotated, Sequence
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
-import backend.server.v2.builder.db as builder_db
-import backend.server.v2.builder.model as builder_model
-import backend.server.v2.library.db as library_db
-import backend.server.v2.library.model as library_model
-import backend.server.v2.store.db as store_db
-import backend.server.v2.store.model as store_model
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
+from . import db as builder_db
+from . import model as builder_model
logger = logging.getLogger(__name__)
router = fastapi.APIRouter(
@@ -45,7 +42,9 @@ def sanitize_query(query: str | None) -> str | None:
summary="Get Builder suggestions",
response_model=builder_model.SuggestionsResponse,
)
-async def get_suggestions() -> builder_model.SuggestionsResponse:
+async def get_suggestions(
+user_id: Annotated[str, fastapi.Security(get_user_id)],
+) -> builder_model.SuggestionsResponse:
"""
Get all suggestions for the Blocks Menu.
"""
@@ -55,11 +54,7 @@ async def get_suggestions() -> builder_model.SuggestionsResponse:
"Help me create a list",
"Help me feed my data to Google Maps",
],
-recent_searches=[
-"image generation",
-"deepfake",
-"competitor analysis",
-],
+recent_searches=await builder_db.get_recent_searches(user_id),
providers=[
ProviderName.TWITTER,
ProviderName.GITHUB,
@@ -110,6 +105,25 @@ async def get_blocks(
)
@router.get(
"/blocks/batch",
summary="Get specific blocks",
response_model=list[builder_model.BlockInfo],
)
async def get_specific_blocks(
block_ids: Annotated[list[str], fastapi.Query()],
) -> list[builder_model.BlockInfo]:
"""
Get specific blocks by their IDs.
"""
blocks = []
for block_id in block_ids:
block = builder_db.get_block_by_id(block_id)
if block:
blocks.append(block)
return blocks
@router.get(
"/providers",
summary="Get Builder integration providers",
@@ -128,94 +142,71 @@ async def get_providers(
)
-@router.post(
+@router.get(
"/search",
summary="Builder search",
tags=["store", "private"],
response_model=builder_model.SearchResponse,
)
async def search(
-options: builder_model.SearchRequest,
user_id: Annotated[str, fastapi.Security(get_user_id)],
+search_query: Annotated[str | None, fastapi.Query()] = None,
+filter: Annotated[list[builder_model.FilterType] | None, fastapi.Query()] = None,
+search_id: Annotated[str | None, fastapi.Query()] = None,
+by_creator: Annotated[list[str] | None, fastapi.Query()] = None,
+page: Annotated[int, fastapi.Query()] = 1,
+page_size: Annotated[int, fastapi.Query()] = 50,
) -> builder_model.SearchResponse:
"""
Search for blocks (including integrations), marketplace agents, and user library agents.
"""
# If no filters are provided, then we will return all types
-if not options.filter:
-options.filter = [
+if not filter:
+filter = [
"blocks",
"integrations",
"marketplace_agents",
"my_agents",
]
-options.search_query = sanitize_query(options.search_query)
-options.page = options.page or 1
-options.page_size = options.page_size or 50
+search_query = sanitize_query(search_query)
# Blocks&Integrations
blocks = builder_model.SearchBlocksResponse(
blocks=builder_model.BlockResponse(
blocks=[],
pagination=Pagination.empty(),
# Get all possible results
cached_results = await builder_db.get_sorted_search_results(
user_id=user_id,
search_query=search_query,
filters=filter,
by_creator=by_creator,
)
# Paginate results
total_combined_items = len(cached_results.items)
pagination = Pagination(
total_items=total_combined_items,
total_pages=(total_combined_items + page_size - 1) // page_size,
current_page=page,
page_size=page_size,
)
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
paginated_items = cached_results.items[start_idx:end_idx]
# Update the search entry by id
search_id = await builder_db.update_search(
user_id,
builder_model.SearchEntry(
search_query=search_query,
filter=filter,
by_creator=by_creator,
search_id=search_id,
),
total_block_count=0,
total_integration_count=0,
)
if "blocks" in options.filter or "integrations" in options.filter:
blocks = builder_db.search_blocks(
include_blocks="blocks" in options.filter,
include_integrations="integrations" in options.filter,
query=options.search_query or "",
page=options.page,
page_size=options.page_size,
)
# Library Agents
my_agents = library_model.LibraryAgentResponse(
agents=[],
pagination=Pagination.empty(),
)
if "my_agents" in options.filter:
my_agents = await library_db.list_library_agents(
user_id=user_id,
search_term=options.search_query,
page=options.page,
page_size=options.page_size,
)
# Marketplace Agents
marketplace_agents = store_model.StoreAgentsResponse(
agents=[],
pagination=Pagination.empty(),
)
if "marketplace_agents" in options.filter:
marketplace_agents = await store_db.get_store_agents(
creators=options.by_creator,
search_query=options.search_query,
page=options.page,
page_size=options.page_size,
)
more_pages = False
if (
blocks.blocks.pagination.current_page < blocks.blocks.pagination.total_pages
or my_agents.pagination.current_page < my_agents.pagination.total_pages
or marketplace_agents.pagination.current_page
< marketplace_agents.pagination.total_pages
):
more_pages = True
return builder_model.SearchResponse(
items=blocks.blocks.blocks + my_agents.agents + marketplace_agents.agents,
total_items={
"blocks": blocks.total_block_count,
"integrations": blocks.total_integration_count,
"marketplace_agents": marketplace_agents.pagination.total_items,
"my_agents": my_agents.pagination.total_items,
},
page=options.page,
more_pages=more_pages,
items=paginated_items,
search_id=search_id,
total_items=cached_results.total_items,
pagination=pagination,
)
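Because the endpoint now paginates one pre-sorted, cached list instead of merging three independently paginated sources, the page math reduces to ceiling division plus a slice. A small standalone illustration, where items stands in for cached_results.items:

items = [f"result-{i}" for i in range(123)]  # stand-in for cached_results.items
page, page_size = 3, 50

total_pages = (len(items) + page_size - 1) // page_size  # ceiling division -> 3
start_idx = (page - 1) * page_size                       # 100
end_idx = start_idx + page_size                          # 150; slicing clamps to 123
print(total_pages, items[start_idx:end_idx][:2])         # 3 ['result-100', 'result-101']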

View File

@@ -0,0 +1,118 @@
"""Configuration management for chat system."""
import os
from pathlib import Path
from pydantic import Field, field_validator
from pydantic_settings import BaseSettings
class ChatConfig(BaseSettings):
"""Configuration for the chat system."""
# OpenAI API Configuration
model: str = Field(
default="qwen/qwen3-235b-a22b-2507", description="Default model to use"
)
api_key: str | None = Field(default=None, description="OpenAI API key")
base_url: str | None = Field(
default="https://openrouter.ai/api/v1",
description="Base URL for API (e.g., for OpenRouter)",
)
# Session TTL Configuration - 12 hours
session_ttl: int = Field(default=43200, description="Session TTL in seconds")
# System Prompt Configuration
system_prompt_path: str = Field(
default="prompts/chat_system.md",
description="Path to system prompt file relative to chat module",
)
# Streaming Configuration
max_context_messages: int = Field(
default=50, ge=1, le=200, description="Maximum context messages"
)
stream_timeout: int = Field(default=300, description="Stream timeout in seconds")
max_retries: int = Field(default=3, description="Maximum number of retries")
max_agent_runs: int = Field(default=3, description="Maximum number of agent runs")
max_agent_schedules: int = Field(
default=3, description="Maximum number of agent schedules"
)
@field_validator("api_key", mode="before")
@classmethod
def get_api_key(cls, v):
"""Get API key from environment if not provided."""
if v is None:
# Try to get from environment variables
# First check for CHAT_API_KEY (Pydantic prefix)
v = os.getenv("CHAT_API_KEY")
if not v:
# Fall back to OPEN_ROUTER_API_KEY
v = os.getenv("OPEN_ROUTER_API_KEY")
if not v:
# Fall back to OPENAI_API_KEY
v = os.getenv("OPENAI_API_KEY")
return v
@field_validator("base_url", mode="before")
@classmethod
def get_base_url(cls, v):
"""Get base URL from environment if not provided."""
if v is None:
# Check for OpenRouter or custom base URL
v = os.getenv("CHAT_BASE_URL")
if not v:
v = os.getenv("OPENROUTER_BASE_URL")
if not v:
v = os.getenv("OPENAI_BASE_URL")
if not v:
v = "https://openrouter.ai/api/v1"
return v
def get_system_prompt(self, **template_vars) -> str:
"""Load and render the system prompt from file.
Args:
**template_vars: Variables to substitute in the template
Returns:
Rendered system prompt string
"""
# Get the path relative to this module
module_dir = Path(__file__).parent
prompt_path = module_dir / self.system_prompt_path
# Check for .j2 extension first (Jinja2 template)
j2_path = Path(str(prompt_path) + ".j2")
if j2_path.exists():
try:
from jinja2 import Template
template = Template(j2_path.read_text())
return template.render(**template_vars)
except ImportError:
# Jinja2 not installed, fall back to reading as plain text
return j2_path.read_text()
# Check for markdown file
if prompt_path.exists():
content = prompt_path.read_text()
# Simple variable substitution if Jinja2 is not available
for key, value in template_vars.items():
placeholder = f"{{{key}}}"
content = content.replace(placeholder, str(value))
return content
raise FileNotFoundError(f"System prompt file not found: {prompt_path}")
class Config:
"""Pydantic config."""
env_file = ".env"
env_file_encoding = "utf-8"
extra = "ignore" # Ignore extra environment variables

View File

@@ -0,0 +1,204 @@
import logging
import uuid
from datetime import UTC, datetime
from openai.types.chat import (
ChatCompletionAssistantMessageParam,
ChatCompletionDeveloperMessageParam,
ChatCompletionFunctionMessageParam,
ChatCompletionMessageParam,
ChatCompletionSystemMessageParam,
ChatCompletionToolMessageParam,
ChatCompletionUserMessageParam,
)
from openai.types.chat.chat_completion_assistant_message_param import FunctionCall
from openai.types.chat.chat_completion_message_tool_call_param import (
ChatCompletionMessageToolCallParam,
Function,
)
from pydantic import BaseModel
from backend.data.redis_client import get_redis_async
from backend.util.exceptions import RedisError
from .config import ChatConfig
logger = logging.getLogger(__name__)
config = ChatConfig()
class ChatMessage(BaseModel):
role: str
content: str | None = None
name: str | None = None
tool_call_id: str | None = None
refusal: str | None = None
tool_calls: list[dict] | None = None
function_call: dict | None = None
class Usage(BaseModel):
prompt_tokens: int
completion_tokens: int
total_tokens: int
class ChatSession(BaseModel):
session_id: str
user_id: str | None
messages: list[ChatMessage]
usage: list[Usage]
credentials: dict[str, dict] = {} # Map of provider -> credential metadata
started_at: datetime
updated_at: datetime
successful_agent_runs: dict[str, int] = {}
successful_agent_schedules: dict[str, int] = {}
@staticmethod
def new(user_id: str | None) -> "ChatSession":
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
messages=[],
usage=[],
credentials={},
started_at=datetime.now(UTC),
updated_at=datetime.now(UTC),
)
def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
messages = []
for message in self.messages:
if message.role == "developer":
m = ChatCompletionDeveloperMessageParam(
role="developer",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "system":
m = ChatCompletionSystemMessageParam(
role="system",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "user":
m = ChatCompletionUserMessageParam(
role="user",
content=message.content or "",
)
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "assistant":
m = ChatCompletionAssistantMessageParam(
role="assistant",
content=message.content or "",
)
if message.function_call:
m["function_call"] = FunctionCall(
arguments=message.function_call["arguments"],
name=message.function_call["name"],
)
if message.refusal:
m["refusal"] = message.refusal
if message.tool_calls:
t: list[ChatCompletionMessageToolCallParam] = []
for tool_call in message.tool_calls:
# Tool calls are stored with nested structure: {id, type, function: {name, arguments}}
function_data = tool_call.get("function", {})
# Skip tool calls that are missing required fields
if "id" not in tool_call or "name" not in function_data:
logger.warning(
f"Skipping invalid tool call: missing required fields. "
f"Got: {tool_call.keys()}, function keys: {function_data.keys()}"
)
continue
# Arguments are stored as a JSON string
arguments_str = function_data.get("arguments", "{}")
t.append(
ChatCompletionMessageToolCallParam(
id=tool_call["id"],
type="function",
function=Function(
arguments=arguments_str,
name=function_data["name"],
),
)
)
m["tool_calls"] = t
if message.name:
m["name"] = message.name
messages.append(m)
elif message.role == "tool":
messages.append(
ChatCompletionToolMessageParam(
role="tool",
content=message.content or "",
tool_call_id=message.tool_call_id or "",
)
)
elif message.role == "function":
messages.append(
ChatCompletionFunctionMessageParam(
role="function",
content=message.content,
name=message.name or "",
)
)
return messages
async def get_chat_session(
session_id: str,
user_id: str | None,
) -> ChatSession | None:
"""Get a chat session by ID."""
redis_key = f"chat:session:{session_id}"
async_redis = await get_redis_async()
raw_session: bytes | None = await async_redis.get(redis_key)
if raw_session is None:
logger.warning(f"Session {session_id} not found in Redis")
return None
try:
session = ChatSession.model_validate_json(raw_session)
except Exception as e:
logger.error(f"Failed to deserialize session {session_id}: {e}", exc_info=True)
raise RedisError(f"Corrupted session data for {session_id}") from e
if session.user_id is not None and session.user_id != user_id:
logger.warning(
f"Session {session_id} user id mismatch: {session.user_id} != {user_id}"
)
return None
return session
async def upsert_chat_session(
session: ChatSession,
) -> ChatSession:
"""Update a chat session with the given messages."""
redis_key = f"chat:session:{session.session_id}"
async_redis = await get_redis_async()
resp = await async_redis.setex(
redis_key, config.session_ttl, session.model_dump_json()
)
if not resp:
raise RedisError(
f"Failed to persist chat session {session.session_id} to Redis: {resp}"
)
return session
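
A small round-trip sketch (illustrative, not part of the diff) of how a stored assistant turn with tool calls converts to OpenAI message params via `to_openai_messages()`:

```python
# Assumes this module is importable as backend.api.features.chat.model,
# matching the imports used by the tools package in this PR.
from backend.api.features.chat.model import ChatMessage, ChatSession

session = ChatSession.new(user_id=None)
session.messages = [
    ChatMessage(role="user", content="What's the weather in NYC?"),
    ChatMessage(
        role="assistant",
        content="",
        tool_calls=[{
            "id": "call_1",
            "type": "function",
            # Arguments are stored as a JSON string, matching the parser above.
            "function": {"name": "get_weather", "arguments": '{"city": "NYC"}'},
        }],
    ),
    ChatMessage(role="tool", content='{"temp": 21}', tool_call_id="call_1"),
]

params = session.to_openai_messages()
assert params[1]["tool_calls"][0]["function"]["name"] == "get_weather"
assert params[2]["role"] == "tool"
```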

View File

@@ -0,0 +1,70 @@
import pytest
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
messages = [
ChatMessage(content="Hello, how are you?", role="user"),
ChatMessage(
content="I'm fine, thank you!",
role="assistant",
tool_calls=[
{
"id": "t123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "New York"}',
},
}
],
),
ChatMessage(
content="I'm using the tool to get the weather",
role="tool",
tool_call_id="t123",
),
]
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_serialization_deserialization():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s.usage = [Usage(prompt_tokens=100, completion_tokens=200, total_tokens=300)]
serialized = s.model_dump_json()
s2 = ChatSession.model_validate_json(serialized)
assert s2.model_dump() == s.model_dump()
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage():
s = ChatSession.new(user_id=None)
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 == s
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage_user_id_mismatch():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(s.session_id, None)
assert s2 is None

View File

@@ -0,0 +1,104 @@
You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find and set up AutoGPT agents to solve their business problems.
Here are the functions available to you:
<functions>
1. **find_agent** - Search for agents that solve the user's problem
2. **run_agent** - Run or schedule an agent (automatically handles setup)
</functions>
## HOW run_agent WORKS
The `run_agent` tool automatically handles the entire setup flow:
1. **First call** (no inputs) → Returns available inputs so user can decide what values to use
2. **Credentials check** → If missing, UI automatically prompts user to add them (you don't need to mention this)
3. **Execution** → Runs when you provide `inputs` OR set `use_defaults=true`
Parameters:
- `username_agent_slug` (required): Agent identifier like "creator/agent-name"
- `inputs`: Object with input values for the agent
- `use_defaults`: Set to `true` to run with default values (only after user confirms)
- `schedule_name` + `cron`: For scheduled execution
## WORKFLOW
1. **find_agent** - Search for agents that solve the user's problem
2. **run_agent** (first call, no inputs) - Get available inputs for the agent
3. **Ask user** what values they want to use OR if they want to use defaults
4. **run_agent** (second call) - Either with `inputs={...}` or `use_defaults=true`
## YOUR APPROACH
**Step 1: Understand the Problem**
- Ask maximum 1-2 targeted questions
- Focus on: What business problem are they solving?
- Move quickly to searching for solutions
**Step 2: Find Agents**
- Use `find_agent` immediately with relevant keywords
- Suggest the best option from search results
- Explain briefly how it solves their problem
**Step 3: Get Agent Inputs**
- Call `run_agent(username_agent_slug="creator/agent-name")` without inputs
- This returns the available inputs (required and optional)
- Present these to the user and ask what values they want
**Step 4: Run with User's Choice**
- If user provides values: `run_agent(username_agent_slug="...", inputs={...})`
- If user says "use defaults": `run_agent(username_agent_slug="...", use_defaults=true)`
- On success, share the agent link with the user
**For Scheduled Execution:**
- Add `schedule_name` and `cron` parameters
- Example: `run_agent(username_agent_slug="...", inputs={...}, schedule_name="Daily Report", cron="0 9 * * *")`
## FUNCTION CALL FORMAT
To call a function, use this exact format:
`<function_call>function_name(parameter="value")</function_call>`
Examples:
- `<function_call>find_agent(query="social media automation")</function_call>`
- `<function_call>run_agent(username_agent_slug="creator/agent-name")</function_call>` (get inputs)
- `<function_call>run_agent(username_agent_slug="creator/agent-name", inputs={"topic": "AI news"})</function_call>`
- `<function_call>run_agent(username_agent_slug="creator/agent-name", use_defaults=true)</function_call>`
## KEY RULES
**What You DON'T Do:**
- Don't help with login (frontend handles this)
- Don't mention or explain credentials to the user (frontend handles this automatically)
- Don't run agents without first showing available inputs to the user
- Don't use `use_defaults=true` without user explicitly confirming
- Don't write responses longer than 3 sentences
**What You DO:**
- Always call run_agent first without inputs to see what's available
- Ask user what values they want OR if they want to use defaults
- Keep all responses to maximum 3 sentences
- Include the agent link in your response after successful execution
**Error Handling:**
- Authentication needed → "Please sign in via the interface"
- Credentials missing → The UI handles this automatically. Focus on asking the user about input values instead.
## RESPONSE STRUCTURE
Before responding, wrap your analysis in <thinking> tags to systematically plan your approach:
- Extract the key business problem or request from the user's message
- Determine what function call (if any) you need to make next
- Plan your response to stay under the 3-sentence maximum
Example interaction:
```
User: "Run the AI news agent for me"
Otto: <function_call>run_agent(username_agent_slug="autogpt/ai-news")</function_call>
[Tool returns: Agent accepts inputs - Required: topic. Optional: num_articles (default: 5)]
Otto: The AI News agent needs a topic. What topic would you like news about, or should I use the defaults?
User: "Use defaults"
Otto: <function_call>run_agent(username_agent_slug="autogpt/ai-news", use_defaults=true)</function_call>
```
KEEP ANSWERS TO 3 SENTENCES

View File

@@ -0,0 +1,101 @@
from enum import Enum
from typing import Any
from pydantic import BaseModel, Field
class ResponseType(str, Enum):
"""Types of streaming responses."""
TEXT_CHUNK = "text_chunk"
TEXT_ENDED = "text_ended"
TOOL_CALL = "tool_call"
TOOL_CALL_START = "tool_call_start"
TOOL_RESPONSE = "tool_response"
ERROR = "error"
USAGE = "usage"
STREAM_END = "stream_end"
class StreamBaseResponse(BaseModel):
"""Base response model for all streaming responses."""
type: ResponseType
timestamp: str | None = None
def to_sse(self) -> str:
"""Convert to SSE format."""
return f"data: {self.model_dump_json()}\n\n"
class StreamTextChunk(StreamBaseResponse):
"""Streaming text content from the assistant."""
type: ResponseType = ResponseType.TEXT_CHUNK
content: str = Field(..., description="Text content chunk")
class StreamToolCallStart(StreamBaseResponse):
"""Tool call started notification."""
type: ResponseType = ResponseType.TOOL_CALL_START
tool_name: str = Field(..., description="Name of the tool that was executed")
tool_id: str = Field(..., description="Unique tool call ID")
class StreamToolCall(StreamBaseResponse):
"""Tool invocation notification."""
type: ResponseType = ResponseType.TOOL_CALL
tool_id: str = Field(..., description="Unique tool call ID")
tool_name: str = Field(..., description="Name of the tool being called")
arguments: dict[str, Any] = Field(
default_factory=dict, description="Tool arguments"
)
class StreamToolExecutionResult(StreamBaseResponse):
"""Tool execution result."""
type: ResponseType = ResponseType.TOOL_RESPONSE
tool_id: str = Field(..., description="Tool call ID this responds to")
tool_name: str = Field(..., description="Name of the tool that was executed")
result: str | dict[str, Any] = Field(..., description="Tool execution result")
success: bool = Field(
default=True, description="Whether the tool execution succeeded"
)
class StreamUsage(StreamBaseResponse):
"""Token usage statistics."""
type: ResponseType = ResponseType.USAGE
prompt_tokens: int
completion_tokens: int
total_tokens: int
class StreamError(StreamBaseResponse):
"""Error response."""
type: ResponseType = ResponseType.ERROR
message: str = Field(..., description="Error message")
code: str | None = Field(default=None, description="Error code")
details: dict[str, Any] | None = Field(
default=None, description="Additional error details"
)
class StreamTextEnded(StreamBaseResponse):
"""Text streaming completed marker."""
type: ResponseType = ResponseType.TEXT_ENDED
class StreamEnd(StreamBaseResponse):
"""End of stream marker."""
type: ResponseType = ResponseType.STREAM_END
summary: dict[str, Any] | None = Field(
default=None, description="Stream summary statistics"
)
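
For reference, a quick sketch (not part of the diff) of the SSE wire format produced by `to_sse()`:

```python
from datetime import UTC, datetime

from backend.api.features.chat.response_model import StreamTextChunk  # path per routes.py imports

chunk = StreamTextChunk(content="Hello", timestamp=datetime.now(UTC).isoformat())
# to_sse() applies the standard SSE framing around the compact JSON payload:
print(chunk.to_sse())
# -> data: {"type":"text_chunk","timestamp":"...","content":"Hello"}  (plus a blank line)
```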

View File

@@ -0,0 +1,219 @@
"""Chat API routes for chat session management and streaming via SSE."""
import logging
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Query, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from backend.util.exceptions import NotFoundError
from . import service as chat_service
from .config import ChatConfig
config = ChatConfig()
logger = logging.getLogger(__name__)
router = APIRouter(
tags=["chat"],
)
# ========== Request/Response Models ==========
class CreateSessionResponse(BaseModel):
"""Response model containing information on a newly created chat session."""
id: str
created_at: str
user_id: str | None
class SessionDetailResponse(BaseModel):
"""Response model providing complete details for a chat session, including messages."""
id: str
created_at: str
updated_at: str
user_id: str | None
messages: list[dict]
# ========== Routes ==========
@router.post(
"/sessions",
)
async def create_session(
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> CreateSessionResponse:
"""
Create a new chat session.
Initiates a new chat session for either an authenticated or anonymous user.
Args:
user_id: The optional authenticated user ID parsed from the JWT. If missing, creates an anonymous session.
Returns:
CreateSessionResponse: Details of the created session.
"""
logger.info(
f"Creating session with user_id: "
f"...{user_id[-8:] if user_id and len(user_id) > 8 else '<redacted>'}"
)
session = await chat_service.create_chat_session(user_id)
return CreateSessionResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
user_id=session.user_id or None,
)
@router.get(
"/sessions/{session_id}",
)
async def get_session(
session_id: str,
user_id: Annotated[str | None, Depends(auth.get_user_id)],
) -> SessionDetailResponse:
"""
Retrieve the details of a specific chat session.
Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.
Args:
session_id: The unique identifier for the desired chat session.
user_id: The optional authenticated user ID, or None for anonymous access.
Returns:
SessionDetailResponse: Details for the requested session; raises NotFoundError if not found.
"""
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
return SessionDetailResponse(
id=session.session_id,
created_at=session.started_at.isoformat(),
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=[message.model_dump() for message in session.messages],
)
@router.get(
"/sessions/{session_id}/stream",
)
async def stream_chat(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Stream chat responses for a session.
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns:
StreamingResponse: SSE-formatted response chunks.
"""
# Validate session exists before starting the stream
# This prevents errors after the response has already started
session = await chat_service.get_session(session_id, user_id)
if not session:
raise NotFoundError(f"Session {session_id} not found")
if session.user_id is None and user_id is not None:
session = await chat_service.assign_user_to_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
async for chunk in chat_service.stream_chat_completion(
session_id,
message,
is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
yield chunk.to_sse()
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
},
)
@router.patch(
"/sessions/{session_id}/assign-user",
dependencies=[Security(auth.requires_user)],
status_code=200,
)
async def session_assign_user(
session_id: str,
user_id: Annotated[str, Security(auth.get_user_id)],
) -> dict:
"""
Assign an authenticated user to a chat session.
Used (typically post-login) to claim an existing anonymous session as the current authenticated user.
Args:
session_id: The identifier for the (previously anonymous) session.
user_id: The authenticated user's ID to associate with the session.
Returns:
dict: Status of the assignment.
"""
await chat_service.assign_user_to_session(session_id, user_id)
return {"status": "ok"}
# ========== Health Check ==========
@router.get("/health", status_code=200)
async def health_check() -> dict:
"""
Health check endpoint for the chat service.
Performs a full cycle test of session creation, assignment, and retrieval. Should always return healthy
if the service and data layer are operational.
Returns:
dict: A status dictionary indicating health, service name, and API version.
"""
session = await chat_service.create_chat_session(None)
await chat_service.assign_user_to_session(session.session_id, "test_user")
await chat_service.get_session(session.session_id, "test_user")
return {
"status": "healthy",
"service": "chat",
"version": "0.1.0",
}
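
A hedged client-side sketch of consuming the stream endpoint. The base URL and `/api/chat` prefix are assumptions (the router above is declared without a prefix), and httpx is used purely for illustration:

```python
import json

import httpx  # illustrative client; any SSE-capable client works


async def consume(session_id: str, message: str) -> None:
    # NOTE: host and "/api/chat" mount prefix are assumptions for this sketch.
    url = f"http://localhost:8000/api/chat/sessions/{session_id}/stream"
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("GET", url, params={"message": message}) as resp:
            async for line in resp.aiter_lines():
                if line.startswith("data: "):
                    event = json.loads(line[len("data: "):])
                    print(event["type"], event.get("content", ""))

# Run with: asyncio.run(consume("<session-id>", "Hello"))
```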

View File

@@ -0,0 +1,538 @@
import logging
from collections.abc import AsyncGenerator
from datetime import UTC, datetime
from typing import Any
import orjson
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.util.exceptions import NotFoundError
from .config import ChatConfig
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
from .response_model import (
StreamBaseResponse,
StreamEnd,
StreamError,
StreamTextChunk,
StreamTextEnded,
StreamToolCall,
StreamToolCallStart,
StreamToolExecutionResult,
StreamUsage,
)
from .tools import execute_tool, tools
logger = logging.getLogger(__name__)
config = ChatConfig()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
async def create_chat_session(
user_id: str | None = None,
) -> ChatSession:
"""
Create a new chat session and persist it to the database.
"""
session = ChatSession.new(user_id)
# Persist the session immediately so it can be used for streaming
return await upsert_chat_session(session)
async def get_session(
session_id: str,
user_id: str | None = None,
) -> ChatSession | None:
"""
Get a chat session by ID.
"""
return await get_chat_session(session_id, user_id)
async def assign_user_to_session(
session_id: str,
user_id: str,
) -> ChatSession:
"""
Assign a user to a chat session.
"""
session = await get_chat_session(session_id, None)
if not session:
raise NotFoundError(f"Session {session_id} not found")
session.user_id = user_id
return await upsert_chat_session(session)
async def stream_chat_completion(
session_id: str,
message: str | None = None,
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0,
session: ChatSession | None = None,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
This function handles all database operations and delegates streaming
to the internal _stream_chat_chunks function.
Args:
session_id: Chat session ID
message: New message to append to the conversation (if provided)
is_user_message: Whether the message is from the user (False for assistant)
user_id: User ID for authentication (None for anonymous)
retry_count: Internal retry counter for retryable streaming errors
session: Optional pre-loaded session object (for recursive calls to avoid Redis refetch)
Yields:
StreamBaseResponse objects formatted as SSE
Raises:
NotFoundError: If session_id is invalid
ValueError: If max_context_messages is exceeded
"""
logger.info(
f"Streaming chat completion for session {session_id} for message {message} and user id {user_id}. Message is user message: {is_user_message}"
)
# Only fetch from Redis if session not provided (initial call)
if session is None:
session = await get_chat_session(session_id, user_id)
logger.info(
f"Fetched session from Redis: {session.session_id if session else 'None'}, "
f"message_count={len(session.messages) if session else 0}"
)
else:
logger.info(
f"Using provided session object: {session.session_id}, "
f"message_count={len(session.messages)}"
)
if not session:
raise NotFoundError(
f"Session {session_id} not found. Please create a new session first."
)
if message:
session.messages.append(
ChatMessage(
role="user" if is_user_message else "assistant", content=message
)
)
logger.info(
f"Appended message (role={'user' if is_user_message else 'assistant'}), "
f"new message_count={len(session.messages)}"
)
if len(session.messages) > config.max_context_messages:
raise ValueError(f"Max messages exceeded: {config.max_context_messages}")
logger.info(
f"Upserting session: {session.session_id} with user id {session.user_id}, "
f"message_count={len(session.messages)}"
)
session = await upsert_chat_session(session)
assert session, "Session not found"
assistant_response = ChatMessage(
role="assistant",
content="",
)
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
accumulated_tool_calls: list[dict[str, Any]] = []
should_retry = False
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
):
if isinstance(chunk, StreamTextChunk):
content = chunk.content or ""
assert assistant_response.content is not None
assistant_response.content += content
has_received_text = True
yield chunk
elif isinstance(chunk, StreamToolCallStart):
# Emit text_ended before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnded()
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolCall):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.tool_id,
"type": "function",
"function": {
"name": chunk.tool_name,
"arguments": orjson.dumps(chunk.arguments).decode("utf-8"),
},
}
)
elif isinstance(chunk, StreamToolExecutionResult):
result_content = (
chunk.result
if isinstance(chunk.result, str)
else orjson.dumps(chunk.result).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.tool_id,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.tool_name} (ID: {chunk.tool_id}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamEnd):
if not has_done_tool_call:
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.prompt_tokens,
completion_tokens=chunk.completion_tokens,
total_tokens=chunk.total_tokens,
)
)
else:
logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True)
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError))
if is_retryable and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
)
should_retry = True
else:
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = (
f"Max retries ({config.max_retries}) exceeded: {error_message}"
)
error_response = StreamError(
message=error_message,
timestamp=datetime.now(UTC).isoformat(),
)
yield error_response
if not has_yielded_end:
yield StreamEnd(
timestamp=datetime.now(UTC).isoformat(),
)
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
logger.info(
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
session.messages.extend(messages_to_save)
logger.info(f"Extended session messages, new message_count={len(session.messages)}")
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
):
yield chunk
async def _stream_chat_chunks(
session: ChatSession,
tools: list[ChatCompletionToolParam],
) -> AsyncGenerator[StreamBaseResponse, None]:
"""
Pure streaming function for OpenAI chat completions with tool calling.
This function is database-agnostic and focuses only on streaming logic.
Args:
session: Chat session providing the conversation context
tools: Tool definitions exposed to the model
Yields:
StreamBaseResponse objects (converted to SSE by the caller)
"""
model = config.model
logger.info("Starting pure chat stream")
# Loop to handle tool calls and continue conversation
while True:
try:
logger.info("Creating OpenAI chat completion stream...")
# Create the stream with proper types
stream = await client.chat.completions.create(
model=model,
messages=session.to_openai_messages(),
tools=tools,
tool_choice="auto",
stream=True,
)
# Variables to accumulate tool calls
tool_calls: list[dict[str, Any]] = []
active_tool_call_idx: int | None = None
finish_reason: str | None = None
# Track which tool call indices have had their start event emitted
emitted_start_for_idx: set[int] = set()
# Process the stream
chunk: ChatCompletionChunk
async for chunk in stream:
if chunk.usage:
yield StreamUsage(
prompt_tokens=chunk.usage.prompt_tokens,
completion_tokens=chunk.usage.completion_tokens,
total_tokens=chunk.usage.total_tokens,
)
if chunk.choices:
choice = chunk.choices[0]
delta = choice.delta
# Capture finish reason
if choice.finish_reason:
finish_reason = choice.finish_reason
logger.info(f"Finish reason: {finish_reason}")
# Handle content streaming
if delta.content:
# Stream the text chunk
text_response = StreamTextChunk(
content=delta.content,
timestamp=datetime.now(UTC).isoformat(),
)
yield text_response
# Handle tool calls
if delta.tool_calls:
for tc_chunk in delta.tool_calls:
idx = tc_chunk.index
# Update active tool call index if needed
if (
active_tool_call_idx is None
or active_tool_call_idx != idx
):
active_tool_call_idx = idx
# Ensure we have a tool call object at this index
while len(tool_calls) <= idx:
tool_calls.append(
{
"id": "",
"type": "function",
"function": {
"name": "",
"arguments": "",
},
},
)
# Accumulate the tool call data
if tc_chunk.id:
tool_calls[idx]["id"] = tc_chunk.id
if tc_chunk.function:
if tc_chunk.function.name:
tool_calls[idx]["function"][
"name"
] = tc_chunk.function.name
if tc_chunk.function.arguments:
tool_calls[idx]["function"][
"arguments"
] += tc_chunk.function.arguments
# Emit StreamToolCallStart only after we have the tool call ID
if (
idx not in emitted_start_for_idx
and tool_calls[idx]["id"]
and tool_calls[idx]["function"]["name"]
):
yield StreamToolCallStart(
tool_id=tool_calls[idx]["id"],
tool_name=tool_calls[idx]["function"]["name"],
timestamp=datetime.now(UTC).isoformat(),
)
emitted_start_for_idx.add(idx)
logger.info(f"Stream complete. Finish reason: {finish_reason}")
# Yield all accumulated tool calls after the stream is complete
# This ensures all tool call arguments have been fully received
for idx, tool_call in enumerate(tool_calls):
try:
async for tc in _yield_tool_call(tool_calls, idx, session):
yield tc
except (orjson.JSONDecodeError, KeyError, TypeError) as e:
logger.error(
f"Failed to parse tool call {idx}: {e}",
exc_info=True,
extra={"tool_call": tool_call},
)
yield StreamError(
message=f"Invalid tool call arguments for tool {tool_call.get('function', {}).get('name', 'unknown')}: {e}",
timestamp=datetime.now(UTC).isoformat(),
)
# Re-raise to trigger retry logic in the parent function
raise
yield StreamEnd(
timestamp=datetime.now(UTC).isoformat(),
)
return
except Exception as e:
logger.error(f"Error in stream: {e!s}", exc_info=True)
error_response = StreamError(
message=str(e),
timestamp=datetime.now(UTC).isoformat(),
)
yield error_response
yield StreamEnd(
timestamp=datetime.now(UTC).isoformat(),
)
return
async def _yield_tool_call(
tool_calls: list[dict[str, Any]],
yield_idx: int,
session: ChatSession,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""
Yield a tool call and its execution result.
Raises:
orjson.JSONDecodeError: If tool call arguments cannot be parsed as JSON
KeyError: If expected tool call fields are missing
TypeError: If tool call structure is invalid
"""
logger.info(f"Yielding tool call: {tool_calls[yield_idx]}")
# Parse tool call arguments - exceptions will propagate to caller
arguments = orjson.loads(tool_calls[yield_idx]["function"]["arguments"])
yield StreamToolCall(
tool_id=tool_calls[yield_idx]["id"],
tool_name=tool_calls[yield_idx]["function"]["name"],
arguments=arguments,
timestamp=datetime.now(UTC).isoformat(),
)
tool_execution_response: StreamToolExecutionResult = await execute_tool(
tool_name=tool_calls[yield_idx]["function"]["name"],
parameters=arguments,
tool_call_id=tool_calls[yield_idx]["id"],
user_id=session.user_id,
session=session,
)
logger.info(f"Yielding Tool execution response: {tool_execution_response}")
yield tool_execution_response
if __name__ == "__main__":
import asyncio
async def main():
session = await create_chat_session()
async for chunk in stream_chat_completion(
session.session_id,
"Please find me an agent that can help me with my business. Call the tool twice once with the query 'money printing agent' and once with the query 'money generating agent'",
user_id=session.user_id,
):
print(chunk)
asyncio.run(main())
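
The index-keyed accumulation of streamed tool-call deltas above is the subtle part of this file. Here is a self-contained sketch of the same technique; the delta payloads are fabricated stand-ins for OpenAI stream chunks, purely for illustration:

```python
# Fabricated deltas: arguments arrive in fragments, keyed by tool-call index.
deltas = [
    {"index": 0, "id": "call_1", "function": {"name": "find_agent", "arguments": ""}},
    {"index": 0, "id": None, "function": {"name": None, "arguments": '{"query": '}},
    {"index": 0, "id": None, "function": {"name": None, "arguments": '"news"}'}},
]

tool_calls: list[dict] = []
for d in deltas:
    idx = d["index"]
    while len(tool_calls) <= idx:  # grow the list to cover this index
        tool_calls.append({"id": "", "function": {"name": "", "arguments": ""}})
    if d["id"]:
        tool_calls[idx]["id"] = d["id"]
    if d["function"]["name"]:
        tool_calls[idx]["function"]["name"] = d["function"]["name"]
    if d["function"]["arguments"]:
        tool_calls[idx]["function"]["arguments"] += d["function"]["arguments"]

assert tool_calls[0]["function"]["arguments"] == '{"query": "news"}'
```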

View File

@@ -0,0 +1,81 @@
import logging
from os import getenv
import pytest
from . import service as chat_service
from .response_model import (
StreamEnd,
StreamError,
StreamTextChunk,
StreamToolExecutionResult,
)
logger = logging.getLogger(__name__)
@pytest.mark.asyncio(loop_scope="session")
async def test_stream_chat_completion():
"""
Test the stream_chat_completion function.
"""
api_key: str | None = getenv("OPEN_ROUTER_API_KEY")
if not api_key:
pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test")
session = await chat_service.create_chat_session()
has_errors = False
has_ended = False
assistant_message = ""
async for chunk in chat_service.stream_chat_completion(
session.session_id, "Hello, how are you?", user_id=session.user_id
):
logger.info(chunk)
if isinstance(chunk, StreamError):
has_errors = True
if isinstance(chunk, StreamTextChunk):
assistant_message += chunk.content
if isinstance(chunk, StreamEnd):
has_ended = True
assert has_ended, "Chat completion did not end"
assert not has_errors, "Error occurred while streaming chat completion"
assert assistant_message, "Assistant message is empty"
@pytest.mark.asyncio(loop_scope="session")
async def test_stream_chat_completion_with_tool_calls():
"""
Test the stream_chat_completion function.
"""
api_key: str | None = getenv("OPEN_ROUTER_API_KEY")
if not api_key:
pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test")
session = await chat_service.create_chat_session()
session = await chat_service.upsert_chat_session(session)
has_errors = False
has_ended = False
had_tool_calls = False
async for chunk in chat_service.stream_chat_completion(
session.session_id,
"Please find me an agent that can help me with my business. Use the query 'moneny printing agent'",
user_id=session.user_id,
):
logger.info(chunk)
if isinstance(chunk, StreamError):
has_errors = True
if isinstance(chunk, StreamEnd):
has_ended = True
if isinstance(chunk, StreamToolExecutionResult):
had_tool_calls = True
assert has_ended, "Chat completion did not end"
assert not has_errors, "Error occurred while streaming chat completion"
assert had_tool_calls, "Tool calls did not occur"
session = await chat_service.get_session(session.session_id)
assert session, "Session not found"
assert session.usage, "Usage is empty"

View File

@@ -0,0 +1,41 @@
from typing import TYPE_CHECKING, Any
from openai.types.chat import ChatCompletionToolParam
from backend.api.features.chat.model import ChatSession
from .base import BaseTool
from .find_agent import FindAgentTool
from .run_agent import RunAgentTool
if TYPE_CHECKING:
from backend.api.features.chat.response_model import StreamToolExecutionResult
# Initialize tool instances
find_agent_tool = FindAgentTool()
run_agent_tool = RunAgentTool()
# Export tools as OpenAI format
tools: list[ChatCompletionToolParam] = [
find_agent_tool.as_openai_tool(),
run_agent_tool.as_openai_tool(),
]
async def execute_tool(
tool_name: str,
parameters: dict[str, Any],
user_id: str | None,
session: ChatSession,
tool_call_id: str,
) -> "StreamToolExecutionResult":
tool_map: dict[str, BaseTool] = {
"find_agent": find_agent_tool,
"run_agent": run_agent_tool,
}
if tool_name not in tool_map:
raise ValueError(f"Tool {tool_name} not found")
return await tool_map[tool_name].execute(
user_id, session, tool_call_id, **parameters
)
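
A hedged usage sketch of the dispatch above (the tools package path is inferred from this PR's imports, and actually running `find_agent` requires a reachable database/marketplace backend):

```python
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import execute_tool  # path inferred


async def demo() -> None:
    session = ChatSession.new(user_id=None)
    result = await execute_tool(
        tool_name="find_agent",
        parameters={"query": "news"},
        user_id=None,
        session=session,
        tool_call_id="call_demo",
    )
    print(result.tool_name, result.success)

# Run with: asyncio.run(demo())  # requires backend services to be reachable
```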

View File

@@ -0,0 +1,464 @@
import uuid
from datetime import UTC, datetime
from os import getenv
import pytest
from pydantic import SecretStr
from backend.api.features.chat.model import ChatSession
from backend.api.features.store import db as store_db
from backend.blocks.firecrawl.scrape import FirecrawlScrapeBlock
from backend.blocks.io import AgentInputBlock, AgentOutputBlock
from backend.blocks.llm import AITextGeneratorBlock
from backend.data.db import prisma
from backend.data.graph import Graph, Link, Node, create_graph
from backend.data.model import APIKeyCredentials
from backend.data.user import get_or_create_user
from backend.integrations.credentials_store import IntegrationCredentialsStore
def make_session(user_id: str | None = None):
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
messages=[],
usage=[],
started_at=datetime.now(UTC),
updated_at=datetime.now(UTC),
successful_agent_runs={},
successful_agent_schedules={},
)
@pytest.fixture(scope="session")
async def setup_test_data():
"""
Set up test data for run_agent tests:
1. Create a test user
2. Create a test graph (agent input -> agent output)
3. Create a store listing and store listing version
4. Approve the store listing version
"""
# 1. Create a test user
user_data = {
"sub": f"test-user-{uuid.uuid4()}",
"email": f"test-{uuid.uuid4()}@example.com",
}
user = await get_or_create_user(user_data)
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile",
"links": [], # Required field - empty array for test profiles
}
)
# 2. Create a test graph with agent input -> agent output
graph_id = str(uuid.uuid4())
# Create input node
input_node_id = str(uuid.uuid4())
input_block = AgentInputBlock()
input_node = Node(
id=input_node_id,
block_id=input_block.id,
input_default={
"name": "test_input",
"title": "Test Input",
"value": "",
"advanced": False,
"description": "Test input field",
"placeholder_values": [],
},
metadata={"position": {"x": 0, "y": 0}},
)
# Create output node
output_node_id = str(uuid.uuid4())
output_block = AgentOutputBlock()
output_node = Node(
id=output_node_id,
block_id=output_block.id,
input_default={
"name": "test_output",
"title": "Test Output",
"value": "",
"format": "",
"advanced": False,
"description": "Test output field",
},
metadata={"position": {"x": 200, "y": 0}},
)
# Create link from input to output
link = Link(
source_id=input_node_id,
sink_id=output_node_id,
source_name="result",
sink_name="value",
is_static=True,
)
# Create the graph
graph = Graph(
id=graph_id,
version=1,
is_active=True,
name="Test Agent",
description="A simple test agent for testing",
nodes=[input_node, output_node],
links=[link],
)
created_graph = await create_graph(graph, user.id)
# 3. Create a store listing and store listing version for the agent
# Use unique slug to avoid constraint violations
unique_slug = f"test-agent-{str(uuid.uuid4())[:8]}"
store_submission = await store_db.create_store_submission(
user_id=user.id,
agent_id=created_graph.id,
agent_version=created_graph.version,
slug=unique_slug,
name="Test Agent",
description="A simple test agent",
sub_heading="Test agent for unit tests",
categories=["testing"],
image_urls=["https://example.com/image.jpg"],
)
assert store_submission.store_listing_version_id is not None
# 4. Approve the store listing version
await store_db.review_store_submission(
store_listing_version_id=store_submission.store_listing_version_id,
is_approved=True,
external_comments="Approved for testing",
internal_comments="Test approval",
reviewer_id=user.id,
)
return {
"user": user,
"graph": created_graph,
"store_submission": store_submission,
}
@pytest.fixture(scope="session")
async def setup_llm_test_data():
"""
Set up test data for LLM agent tests:
1. Create a test user
2. Create test OpenAI credentials for the user
3. Create a test graph with input -> LLM block -> output
4. Create and approve a store listing
"""
key = getenv("OPENAI_API_KEY")
if not key:
pytest.skip("OPENAI_API_KEY is not set")
# 1. Create a test user
user_data = {
"sub": f"test-user-{uuid.uuid4()}",
"email": f"test-{uuid.uuid4()}@example.com",
}
user = await get_or_create_user(user_data)
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for LLM tests",
"links": [], # Required field - empty array for test profiles
}
)
# 2. Create test OpenAI credentials for the user
credentials = APIKeyCredentials(
id=str(uuid.uuid4()),
provider="openai",
api_key=SecretStr("test-openai-api-key"),
title="Test OpenAI API Key",
expires_at=None,
)
# Store the credentials
creds_store = IntegrationCredentialsStore()
await creds_store.add_creds(user.id, credentials)
# 3. Create a test graph with input -> LLM block -> output
graph_id = str(uuid.uuid4())
# Create input node for the prompt
input_node_id = str(uuid.uuid4())
input_block = AgentInputBlock()
input_node = Node(
id=input_node_id,
block_id=input_block.id,
input_default={
"name": "user_prompt",
"title": "User Prompt",
"value": "",
"advanced": False,
"description": "Prompt for the LLM",
"placeholder_values": [],
},
metadata={"position": {"x": 0, "y": 0}},
)
# Create LLM block node
llm_node_id = str(uuid.uuid4())
llm_block = AITextGeneratorBlock()
llm_node = Node(
id=llm_node_id,
block_id=llm_block.id,
input_default={
"model": "gpt-4o-mini",
"sys_prompt": "You are a helpful assistant.",
"retry": 3,
"prompt_values": {},
"credentials": {
"provider": "openai",
"id": credentials.id,
"type": "api_key",
"title": credentials.title,
},
},
metadata={"position": {"x": 300, "y": 0}},
)
# Create output node
output_node_id = str(uuid.uuid4())
output_block = AgentOutputBlock()
output_node = Node(
id=output_node_id,
block_id=output_block.id,
input_default={
"name": "llm_response",
"title": "LLM Response",
"value": "",
"format": "",
"advanced": False,
"description": "Response from the LLM",
},
metadata={"position": {"x": 600, "y": 0}},
)
# Create links
# Link input.result -> llm.prompt
link1 = Link(
source_id=input_node_id,
sink_id=llm_node_id,
source_name="result",
sink_name="prompt",
is_static=True,
)
# Link llm.response -> output.value
link2 = Link(
source_id=llm_node_id,
sink_id=output_node_id,
source_name="response",
sink_name="value",
is_static=False,
)
# Create the graph
graph = Graph(
id=graph_id,
version=1,
is_active=True,
name="LLM Test Agent",
description="An agent that uses an LLM to process text",
nodes=[input_node, llm_node, output_node],
links=[link1, link2],
)
created_graph = await create_graph(graph, user.id)
# 4. Create and approve a store listing
unique_slug = f"llm-test-agent-{str(uuid.uuid4())[:8]}"
store_submission = await store_db.create_store_submission(
user_id=user.id,
agent_id=created_graph.id,
agent_version=created_graph.version,
slug=unique_slug,
name="LLM Test Agent",
description="An agent with LLM capabilities",
sub_heading="Test agent with OpenAI integration",
categories=["testing", "ai"],
image_urls=["https://example.com/image.jpg"],
)
assert store_submission.store_listing_version_id is not None
await store_db.review_store_submission(
store_listing_version_id=store_submission.store_listing_version_id,
is_approved=True,
external_comments="Approved for testing",
internal_comments="Test approval for LLM agent",
reviewer_id=user.id,
)
return {
"user": user,
"graph": created_graph,
"credentials": credentials,
"store_submission": store_submission,
}
@pytest.fixture(scope="session")
async def setup_firecrawl_test_data():
"""
Set up test data for Firecrawl agent tests (missing credentials scenario):
1. Create a test user (WITHOUT Firecrawl credentials)
2. Create a test graph with input -> Firecrawl block -> output
3. Create and approve a store listing
"""
# 1. Create a test user
user_data = {
"sub": f"test-user-{uuid.uuid4()}",
"email": f"test-{uuid.uuid4()}@example.com",
}
user = await get_or_create_user(user_data)
# 1b. Create a profile with username for the user (required for store agent lookup)
username = user.email.split("@")[0]
await prisma.profile.create(
data={
"userId": user.id,
"username": username,
"name": f"Test User {username}",
"description": "Test user profile for Firecrawl tests",
"links": [], # Required field - empty array for test profiles
}
)
# NOTE: We deliberately do NOT create Firecrawl credentials for this user
# This tests the scenario where required credentials are missing
# 2. Create a test graph with input -> Firecrawl block -> output
graph_id = str(uuid.uuid4())
# Create input node for the URL
input_node_id = str(uuid.uuid4())
input_block = AgentInputBlock()
input_node = Node(
id=input_node_id,
block_id=input_block.id,
input_default={
"name": "url",
"title": "URL to Scrape",
"value": "",
"advanced": False,
"description": "URL for Firecrawl to scrape",
"placeholder_values": [],
},
metadata={"position": {"x": 0, "y": 0}},
)
# Create Firecrawl block node
firecrawl_node_id = str(uuid.uuid4())
firecrawl_block = FirecrawlScrapeBlock()
firecrawl_node = Node(
id=firecrawl_node_id,
block_id=firecrawl_block.id,
input_default={
"limit": 10,
"only_main_content": True,
"max_age": 3600000,
"wait_for": 200,
"formats": ["markdown"],
"credentials": {
"provider": "firecrawl",
"id": "test-firecrawl-id",
"type": "api_key",
"title": "Firecrawl API Key",
},
},
metadata={"position": {"x": 300, "y": 0}},
)
# Create output node
output_node_id = str(uuid.uuid4())
output_block = AgentOutputBlock()
output_node = Node(
id=output_node_id,
block_id=output_block.id,
input_default={
"name": "scraped_data",
"title": "Scraped Data",
"value": "",
"format": "",
"advanced": False,
"description": "Data scraped by Firecrawl",
},
metadata={"position": {"x": 600, "y": 0}},
)
# Create links
# Link input.result -> firecrawl.url
link1 = Link(
source_id=input_node_id,
sink_id=firecrawl_node_id,
source_name="result",
sink_name="url",
is_static=True,
)
# Link firecrawl.markdown -> output.value
link2 = Link(
source_id=firecrawl_node_id,
sink_id=output_node_id,
source_name="markdown",
sink_name="value",
is_static=False,
)
# Create the graph
graph = Graph(
id=graph_id,
version=1,
is_active=True,
name="Firecrawl Test Agent",
description="An agent that uses Firecrawl to scrape websites",
nodes=[input_node, firecrawl_node, output_node],
links=[link1, link2],
)
created_graph = await create_graph(graph, user.id)
# 3. Create and approve a store listing
unique_slug = f"firecrawl-test-agent-{str(uuid.uuid4())[:8]}"
store_submission = await store_db.create_store_submission(
user_id=user.id,
agent_id=created_graph.id,
agent_version=created_graph.version,
slug=unique_slug,
name="Firecrawl Test Agent",
description="An agent with Firecrawl integration (no credentials)",
sub_heading="Test agent requiring Firecrawl credentials",
categories=["testing", "scraping"],
image_urls=["https://example.com/image.jpg"],
)
assert store_submission.store_listing_version_id is not None
await store_db.review_store_submission(
store_listing_version_id=store_submission.store_listing_version_id,
is_approved=True,
external_comments="Approved for testing",
internal_comments="Test approval for Firecrawl agent",
reviewer_id=user.id,
)
return {
"user": user,
"graph": created_graph,
"store_submission": store_submission,
}

View File

@@ -0,0 +1,119 @@
"""Base classes and shared utilities for chat tools."""
import logging
from typing import Any
from openai.types.chat import ChatCompletionToolParam
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.response_model import StreamToolExecutionResult
from .models import ErrorResponse, NeedLoginResponse, ToolResponseBase
logger = logging.getLogger(__name__)
class BaseTool:
"""Base class for all chat tools."""
@property
def name(self) -> str:
"""Tool name for OpenAI function calling."""
raise NotImplementedError
@property
def description(self) -> str:
"""Tool description for OpenAI."""
raise NotImplementedError
@property
def parameters(self) -> dict[str, Any]:
"""Tool parameters schema for OpenAI."""
raise NotImplementedError
@property
def requires_auth(self) -> bool:
"""Whether this tool requires authentication."""
return False
def as_openai_tool(self) -> ChatCompletionToolParam:
"""Convert to OpenAI tool format."""
return ChatCompletionToolParam(
type="function",
function={
"name": self.name,
"description": self.description,
"parameters": self.parameters,
},
)
async def execute(
self,
user_id: str | None,
session: ChatSession,
tool_call_id: str,
**kwargs,
) -> StreamToolExecutionResult:
"""Execute the tool with authentication check.
Args:
user_id: User ID (may be anonymous like "anon_123")
session: Chat session the tool call belongs to
tool_call_id: ID of the originating tool call
**kwargs: Tool-specific parameters
Returns:
Pydantic response object
"""
if self.requires_auth and not user_id:
logger.error(
f"Attempted tool call for {self.name} but user not authenticated"
)
return StreamToolExecutionResult(
tool_id=tool_call_id,
tool_name=self.name,
result=NeedLoginResponse(
message=f"Please sign in to use {self.name}",
session_id=session.session_id,
).model_dump_json(),
success=False,
)
try:
result = await self._execute(user_id, session, **kwargs)
return StreamToolExecutionResult(
tool_id=tool_call_id,
tool_name=self.name,
result=result.model_dump_json(),
)
except Exception as e:
logger.error(f"Error in {self.name}: {e}", exc_info=True)
return StreamToolExecutionResult(
tool_id=tool_call_id,
tool_name=self.name,
result=ErrorResponse(
message=f"An error occurred while executing {self.name}",
error=str(e),
session_id=session.session_id,
).model_dump_json(),
success=False,
)
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Internal execution logic to be implemented by subclasses.
Args:
user_id: User ID (authenticated or anonymous)
session: Chat session the tool call belongs to
**kwargs: Tool-specific parameters
Returns:
Pydantic response object
"""
raise NotImplementedError
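
To make the subclassing contract concrete, a hypothetical minimal tool (EchoTool is invented for illustration; module paths are inferred from this PR's imports, and ToolResponseBase/ResponseType come from the tools' models module shown below):

```python
from typing import Any

from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools.base import BaseTool          # path inferred
from backend.api.features.chat.tools.models import ResponseType, ToolResponseBase


class EchoTool(BaseTool):
    """Hypothetical tool that echoes its input back; illustration only."""

    @property
    def name(self) -> str:
        return "echo"

    @property
    def description(self) -> str:
        return "Echo the provided text back to the caller."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {"text": {"type": "string", "description": "Text to echo"}},
            "required": ["text"],
        }

    async def _execute(
        self, user_id: str | None, session: ChatSession, **kwargs
    ) -> ToolResponseBase:
        # BaseTool.execute() wraps this result in a StreamToolExecutionResult.
        return ToolResponseBase(
            type=ResponseType.SUCCESS,
            message=kwargs.get("text", ""),
            session_id=session.session_id,
        )
```

Registering such a tool would presumably mean adding an instance to the `tool_map` and `tools` list in the tools package's `__init__`, mirroring `find_agent_tool` and `run_agent_tool`.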

View File

@@ -0,0 +1,129 @@
"""Tool for discovering agents from marketplace and user library."""
import logging
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.api.features.store import db as store_db
from backend.util.exceptions import DatabaseError, NotFoundError
from .base import BaseTool
from .models import (
AgentCarouselResponse,
AgentInfo,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
logger = logging.getLogger(__name__)
class FindAgentTool(BaseTool):
"""Tool for discovering agents based on user needs."""
@property
def name(self) -> str:
return "find_agent"
@property
def description(self) -> str:
return (
"Discover agents from the marketplace based on capabilities and user needs."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query describing what the user wants to accomplish. Use single keywords for best results.",
},
},
"required": ["query"],
}
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Search for agents in the marketplace.
Args:
user_id: User ID (may be anonymous)
session: Chat session the tool call belongs to
query: Search query (passed via kwargs)
Returns:
AgentCarouselResponse: List of agents found in the marketplace
NoResultsResponse: No agents found in the marketplace
ErrorResponse: Error message
"""
query = kwargs.get("query", "").strip()
session_id = session.session_id
if not query:
return ErrorResponse(
message="Please provide a search query",
session_id=session_id,
)
agents = []
try:
logger.info(f"Searching marketplace for: {query}")
store_results = await store_db.get_store_agents(
search_query=query,
page_size=5,
)
logger.info(f"Find agents tool found {len(store_results.agents)} agents")
for agent in store_results.agents:
agent_id = f"{agent.creator}/{agent.slug}"
logger.info(f"Building agent ID = {agent_id}")
agents.append(
AgentInfo(
id=agent_id,
name=agent.agent_name,
description=agent.description or "",
source="marketplace",
in_library=False,
creator=agent.creator,
category="general",
rating=agent.rating,
runs=agent.runs,
is_featured=False,
),
)
except NotFoundError:
pass
except DatabaseError as e:
logger.error(f"Error searching agents: {e}", exc_info=True)
return ErrorResponse(
message="Failed to search for agents. Please try again.",
error=str(e),
session_id=session_id,
)
if not agents:
return NoResultsResponse(
message=f"No agents found matching '{query}'. Try different keywords or browse the marketplace. If you have 3 consecutive find_agent tool calls results and found no agents. Please stop trying and ask the user if there is anything else you can help with.",
session_id=session_id,
suggestions=[
"Try more general terms",
"Browse categories in the marketplace",
"Check spelling",
],
)
# Return formatted carousel
title = (
f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} for '{query}'"
)
return AgentCarouselResponse(
message="Now you have found some options for the user to choose from. You can add a link to a recommended agent at: /marketplace/agent/agent_id Please ask the user if they would like to use any of these agents. If they do, please call the get_agent_details tool for this agent.",
title=title,
agents=agents,
count=len(agents),
session_id=session_id,
)

View File

@@ -0,0 +1,175 @@
"""Pydantic models for tool responses."""
from enum import Enum
from typing import Any
from pydantic import BaseModel, Field
from backend.data.model import CredentialsMetaInput
class ResponseType(str, Enum):
"""Types of tool responses."""
AGENT_CAROUSEL = "agent_carousel"
AGENT_DETAILS = "agent_details"
SETUP_REQUIREMENTS = "setup_requirements"
EXECUTION_STARTED = "execution_started"
NEED_LOGIN = "need_login"
ERROR = "error"
NO_RESULTS = "no_results"
SUCCESS = "success"
# Base response model
class ToolResponseBase(BaseModel):
"""Base model for all tool responses."""
type: ResponseType
message: str
session_id: str | None = None
# Agent discovery models
class AgentInfo(BaseModel):
"""Information about an agent."""
id: str
name: str
description: str
source: str = Field(description="marketplace or library")
in_library: bool = False
creator: str | None = None
category: str | None = None
rating: float | None = None
runs: int | None = None
is_featured: bool | None = None
status: str | None = None
can_access_graph: bool | None = None
has_external_trigger: bool | None = None
new_output: bool | None = None
graph_id: str | None = None
class AgentCarouselResponse(ToolResponseBase):
"""Response for find_agent tool."""
type: ResponseType = ResponseType.AGENT_CAROUSEL
title: str = "Available Agents"
agents: list[AgentInfo]
count: int
name: str = "agent_carousel"
class NoResultsResponse(ToolResponseBase):
"""Response when no agents found."""
type: ResponseType = ResponseType.NO_RESULTS
suggestions: list[str] = []
name: str = "no_results"
# Agent details models
class InputField(BaseModel):
"""Input field specification."""
name: str
type: str = "string"
description: str = ""
required: bool = False
default: Any | None = None
options: list[Any] | None = None
format: str | None = None
class ExecutionOptions(BaseModel):
"""Available execution options for an agent."""
manual: bool = True
scheduled: bool = True
webhook: bool = False
class AgentDetails(BaseModel):
"""Detailed agent information."""
id: str
name: str
description: str
in_library: bool = False
inputs: dict[str, Any] = {}
credentials: list[CredentialsMetaInput] = []
execution_options: ExecutionOptions = Field(default_factory=ExecutionOptions)
trigger_info: dict[str, Any] | None = None
class AgentDetailsResponse(ToolResponseBase):
"""Response for get_details action."""
type: ResponseType = ResponseType.AGENT_DETAILS
agent: AgentDetails
user_authenticated: bool = False
graph_id: str | None = None
graph_version: int | None = None
# Setup info models
class UserReadiness(BaseModel):
"""User readiness status."""
has_all_credentials: bool = False
missing_credentials: dict[str, Any] = {}
ready_to_run: bool = False
class SetupInfo(BaseModel):
"""Complete setup information."""
agent_id: str
agent_name: str
requirements: dict[str, list[Any]] = Field(
default_factory=lambda: {
"credentials": [],
"inputs": [],
"execution_modes": [],
},
)
user_readiness: UserReadiness = Field(default_factory=UserReadiness)
class SetupRequirementsResponse(ToolResponseBase):
"""Response for validate action."""
type: ResponseType = ResponseType.SETUP_REQUIREMENTS
setup_info: SetupInfo
graph_id: str | None = None
graph_version: int | None = None
# Execution models
class ExecutionStartedResponse(ToolResponseBase):
"""Response for run/schedule actions."""
type: ResponseType = ResponseType.EXECUTION_STARTED
execution_id: str
graph_id: str
graph_name: str
library_agent_id: str | None = None
library_agent_link: str | None = None
status: str = "QUEUED"
# Auth/error models
class NeedLoginResponse(ToolResponseBase):
"""Response when login is needed."""
type: ResponseType = ResponseType.NEED_LOGIN
agent_info: dict[str, Any] | None = None
class ErrorResponse(ToolResponseBase):
"""Response for errors."""
type: ResponseType = ResponseType.ERROR
error: str | None = None
details: dict[str, Any] | None = None
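
A quick construction-and-serialization sketch for the models above, assuming they are importable from `backend.api.features.chat.tools.models` (the path is inferred from the relative imports elsewhere in this diff); all field values are illustrative:

```python
from backend.api.features.chat.tools.models import (  # inferred path
    AgentCarouselResponse,
    AgentInfo,
)

carousel = AgentCarouselResponse(
    message="Ask the user which agent to use.",
    title="Found 1 agent for 'email'",
    agents=[
        AgentInfo(
            id="agent-123",
            name="Email Summarizer",
            description="Summarizes your inbox",
            source="marketplace",
        )
    ],
    count=1,
    session_id="session-abc",
)
# Pydantic v2 serialization; `type` defaults to ResponseType.AGENT_CAROUSEL.
print(carousel.model_dump_json(indent=2))
```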


@@ -0,0 +1,501 @@
"""Unified tool for agent operations with automatic state detection."""
import logging
from typing import Any
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
from backend.api.features.chat.model import ChatSession
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.data.user import get_user_by_id
from backend.executor import utils as execution_utils
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, NotFoundError
from backend.util.timezone_utils import (
convert_utc_time_to_user_timezone,
get_user_timezone_or_utc,
)
from .base import BaseTool
from .models import (
AgentDetails,
AgentDetailsResponse,
ErrorResponse,
ExecutionOptions,
ExecutionStartedResponse,
SetupInfo,
SetupRequirementsResponse,
ToolResponseBase,
UserReadiness,
)
from .utils import (
check_user_has_required_credentials,
extract_credentials_from_schema,
fetch_graph_from_store_slug,
get_or_create_library_agent,
match_user_credentials_to_graph,
)
logger = logging.getLogger(__name__)
config = ChatConfig()
# Constants for response messages
MSG_DO_NOT_RUN_AGAIN = "Do not run again unless explicitly requested."
MSG_DO_NOT_SCHEDULE_AGAIN = "Do not schedule again unless explicitly requested."
MSG_ASK_USER_FOR_VALUES = (
"Ask the user what values to use, or call again with use_defaults=true "
"to run with default values."
)
MSG_WHAT_VALUES_TO_USE = (
"What values would you like to use, or would you like to run with defaults?"
)
class RunAgentInput(BaseModel):
"""Input parameters for the run_agent tool."""
username_agent_slug: str = ""
inputs: dict[str, Any] = Field(default_factory=dict)
use_defaults: bool = False
schedule_name: str = ""
cron: str = ""
timezone: str = "UTC"
@field_validator(
"username_agent_slug", "schedule_name", "cron", "timezone", mode="before"
)
@classmethod
def strip_strings(cls, v: Any) -> Any:
"""Strip whitespace from string fields."""
return v.strip() if isinstance(v, str) else v
class RunAgentTool(BaseTool):
"""Unified tool for agent operations with automatic state detection.
The tool automatically determines what to do based on provided parameters:
1. Fetches agent details (always, silently)
2. Checks if required inputs are provided
3. Checks if user has required credentials
4. Runs immediately OR schedules (if cron is provided)
The response tells the caller what's missing or confirms execution.
"""
@property
def name(self) -> str:
return "run_agent"
@property
def description(self) -> str:
return """Run or schedule an agent from the marketplace.
The tool automatically handles the setup flow:
- Returns missing inputs if required fields are not provided
- Returns missing credentials if user needs to configure them
- Executes immediately if all requirements are met
- Schedules execution if cron expression is provided
For scheduled execution, provide: schedule_name, cron, and optionally timezone."""
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"username_agent_slug": {
"type": "string",
"description": "Agent identifier in format 'username/agent-name'",
},
"inputs": {
"type": "object",
"description": "Input values for the agent",
"additionalProperties": True,
},
"use_defaults": {
"type": "boolean",
"description": "Set to true to run with default values (user must confirm)",
},
"schedule_name": {
"type": "string",
"description": "Name for scheduled execution (triggers scheduling mode)",
},
"cron": {
"type": "string",
"description": "Cron expression (5 fields: min hour day month weekday)",
},
"timezone": {
"type": "string",
"description": "IANA timezone for schedule (default: UTC)",
},
},
"required": ["username_agent_slug"],
}
@property
def requires_auth(self) -> bool:
"""All operations require authentication."""
return True
async def _execute(
self,
user_id: str | None,
session: ChatSession,
**kwargs,
) -> ToolResponseBase:
"""Execute the tool with automatic state detection."""
params = RunAgentInput(**kwargs)
session_id = session.session_id
# Validate agent slug format
if not params.username_agent_slug or "/" not in params.username_agent_slug:
return ErrorResponse(
message="Please provide an agent slug in format 'username/agent-name'",
session_id=session_id,
)
# Auth is required
if not user_id:
return ErrorResponse(
message="Authentication required. Please sign in to use this tool.",
session_id=session_id,
)
# Determine if this is a schedule request
is_schedule = bool(params.schedule_name or params.cron)
try:
# Step 1: Fetch agent details (always happens first)
username, agent_name = params.username_agent_slug.split("/", 1)
graph, store_agent = await fetch_graph_from_store_slug(username, agent_name)
if not graph:
return ErrorResponse(
message=f"Agent '{params.username_agent_slug}' not found in marketplace",
session_id=session_id,
)
# Step 2: Check credentials
graph_credentials, missing_creds = await match_user_credentials_to_graph(
user_id, graph
)
if missing_creds:
# Return credentials needed response with input data info
# The UI handles credential setup automatically, so the message
# focuses on asking about input data
credentials = extract_credentials_from_schema(
graph.credentials_input_schema
)
missing_creds_check = await check_user_has_required_credentials(
user_id, credentials
)
missing_credentials_dict = {
c.id: c.model_dump() for c in missing_creds_check
}
return SetupRequirementsResponse(
message=self._build_inputs_message(graph, MSG_WHAT_VALUES_TO_USE),
session_id=session_id,
setup_info=SetupInfo(
agent_id=graph.id,
agent_name=graph.name,
user_readiness=UserReadiness(
has_all_credentials=False,
missing_credentials=missing_credentials_dict,
ready_to_run=False,
),
requirements={
"credentials": [c.model_dump() for c in credentials],
"inputs": self._get_inputs_list(graph.input_schema),
"execution_modes": self._get_execution_modes(graph),
},
),
graph_id=graph.id,
graph_version=graph.version,
)
# Step 3: Check inputs
# Get all available input fields from schema
input_properties = graph.input_schema.get("properties", {})
required_fields = set(graph.input_schema.get("required", []))
provided_inputs = set(params.inputs.keys())
# If agent has inputs but none were provided AND use_defaults is not set,
# always show what's available first so user can decide
if input_properties and not provided_inputs and not params.use_defaults:
credentials = extract_credentials_from_schema(
graph.credentials_input_schema
)
return AgentDetailsResponse(
message=self._build_inputs_message(graph, MSG_ASK_USER_FOR_VALUES),
session_id=session_id,
agent=self._build_agent_details(graph, credentials),
user_authenticated=True,
graph_id=graph.id,
graph_version=graph.version,
)
# Check if required inputs are missing (and not using defaults)
missing_inputs = required_fields - provided_inputs
if missing_inputs and not params.use_defaults:
# Return agent details with missing inputs info
credentials = extract_credentials_from_schema(
graph.credentials_input_schema
)
return AgentDetailsResponse(
message=(
f"Agent '{graph.name}' is missing required inputs: "
f"{', '.join(missing_inputs)}. "
"Please provide these values to run the agent."
),
session_id=session_id,
agent=self._build_agent_details(graph, credentials),
user_authenticated=True,
graph_id=graph.id,
graph_version=graph.version,
)
# Step 4: Execute or Schedule
if is_schedule:
return await self._schedule_agent(
user_id=user_id,
session=session,
graph=graph,
graph_credentials=graph_credentials,
inputs=params.inputs,
schedule_name=params.schedule_name,
cron=params.cron,
timezone=params.timezone,
)
else:
return await self._run_agent(
user_id=user_id,
session=session,
graph=graph,
graph_credentials=graph_credentials,
inputs=params.inputs,
)
except NotFoundError as e:
return ErrorResponse(
message=f"Agent '{params.username_agent_slug}' not found",
error=str(e) if str(e) else "not_found",
session_id=session_id,
)
except DatabaseError as e:
logger.error(f"Database error: {e}", exc_info=True)
return ErrorResponse(
message=f"Failed to process request: {e!s}",
error=str(e),
session_id=session_id,
)
except Exception as e:
logger.error(f"Error processing agent request: {e}", exc_info=True)
return ErrorResponse(
message=f"Failed to process request: {e!s}",
error=str(e),
session_id=session_id,
)
def _get_inputs_list(self, input_schema: dict[str, Any]) -> list[dict[str, Any]]:
"""Extract inputs list from schema."""
inputs_list = []
if isinstance(input_schema, dict) and "properties" in input_schema:
for field_name, field_schema in input_schema["properties"].items():
inputs_list.append(
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in input_schema.get("required", []),
}
)
return inputs_list
def _get_execution_modes(self, graph: GraphModel) -> list[str]:
"""Get available execution modes for the graph."""
trigger_info = graph.trigger_setup_info
if trigger_info is None:
return ["manual", "scheduled"]
return ["webhook"]
def _build_inputs_message(
self,
graph: GraphModel,
suffix: str,
) -> str:
"""Build a message describing available inputs for an agent."""
inputs_list = self._get_inputs_list(graph.input_schema)
required_names = [i["name"] for i in inputs_list if i["required"]]
optional_names = [i["name"] for i in inputs_list if not i["required"]]
message_parts = [f"Agent '{graph.name}' accepts the following inputs:"]
if required_names:
message_parts.append(f"Required: {', '.join(required_names)}.")
if optional_names:
message_parts.append(
f"Optional (have defaults): {', '.join(optional_names)}."
)
if not inputs_list:
message_parts = [f"Agent '{graph.name}' has no required inputs."]
message_parts.append(suffix)
return " ".join(message_parts)
def _build_agent_details(
self,
graph: GraphModel,
credentials: list[CredentialsMetaInput],
) -> AgentDetails:
"""Build AgentDetails from a graph."""
trigger_info = (
graph.trigger_setup_info.model_dump() if graph.trigger_setup_info else None
)
return AgentDetails(
id=graph.id,
name=graph.name,
description=graph.description,
inputs=graph.input_schema,
credentials=credentials,
execution_options=ExecutionOptions(
manual=trigger_info is None,
scheduled=trigger_info is None,
webhook=trigger_info is not None,
),
trigger_info=trigger_info,
)
async def _run_agent(
self,
user_id: str,
session: ChatSession,
graph: GraphModel,
graph_credentials: dict[str, CredentialsMetaInput],
inputs: dict[str, Any],
) -> ToolResponseBase:
"""Execute an agent immediately."""
session_id = session.session_id
# Check rate limits
if session.successful_agent_runs.get(graph.id, 0) >= config.max_agent_runs:
return ErrorResponse(
message="Maximum agent runs reached for this session. Please try again later.",
session_id=session_id,
)
# Get or create library agent
library_agent = await get_or_create_library_agent(graph, user_id)
# Execute
execution = await execution_utils.add_graph_execution(
graph_id=library_agent.graph_id,
user_id=user_id,
inputs=inputs,
graph_credentials_inputs=graph_credentials,
)
# Track successful run
session.successful_agent_runs[library_agent.graph_id] = (
session.successful_agent_runs.get(library_agent.graph_id, 0) + 1
)
library_agent_link = f"/library/agents/{library_agent.id}"
return ExecutionStartedResponse(
message=(
f"Agent '{library_agent.name}' execution started successfully. "
f"View at {library_agent_link}. "
f"{MSG_DO_NOT_RUN_AGAIN}"
),
session_id=session_id,
execution_id=execution.id,
graph_id=library_agent.graph_id,
graph_name=library_agent.name,
library_agent_id=library_agent.id,
library_agent_link=library_agent_link,
)
async def _schedule_agent(
self,
user_id: str,
session: ChatSession,
graph: GraphModel,
graph_credentials: dict[str, CredentialsMetaInput],
inputs: dict[str, Any],
schedule_name: str,
cron: str,
timezone: str,
) -> ToolResponseBase:
"""Set up scheduled execution for an agent."""
session_id = session.session_id
# Validate schedule params
if not schedule_name:
return ErrorResponse(
message="schedule_name is required for scheduled execution",
session_id=session_id,
)
if not cron:
return ErrorResponse(
message="cron expression is required for scheduled execution",
session_id=session_id,
)
# Check rate limits
if (
session.successful_agent_schedules.get(graph.id, 0)
>= config.max_agent_schedules
):
return ErrorResponse(
message="Maximum agent schedules reached for this session.",
session_id=session_id,
)
# Get or create library agent
library_agent = await get_or_create_library_agent(graph, user_id)
# Get user timezone
user = await get_user_by_id(user_id)
user_timezone = get_user_timezone_or_utc(user.timezone if user else timezone)
# Create schedule
result = await get_scheduler_client().add_execution_schedule(
user_id=user_id,
graph_id=library_agent.graph_id,
graph_version=library_agent.graph_version,
name=schedule_name,
cron=cron,
input_data=inputs,
input_credentials=graph_credentials,
user_timezone=user_timezone,
)
# Convert next_run_time to user timezone for display
if result.next_run_time:
result.next_run_time = convert_utc_time_to_user_timezone(
result.next_run_time, user_timezone
)
# Track successful schedule
session.successful_agent_schedules[library_agent.graph_id] = (
session.successful_agent_schedules.get(library_agent.graph_id, 0) + 1
)
library_agent_link = f"/library/agents/{library_agent.id}"
return ExecutionStartedResponse(
message=(
f"Agent '{library_agent.name}' scheduled successfully as '{schedule_name}'. "
f"View at {library_agent_link}. "
f"{MSG_DO_NOT_SCHEDULE_AGAIN}"
),
session_id=session_id,
execution_id=result.id,
graph_id=library_agent.graph_id,
graph_name=library_agent.name,
library_agent_id=library_agent.id,
library_agent_link=library_agent_link,
)
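
For illustration, here is how the two modes of `run_agent` might be parameterized per the JSON schema above; the slug, input names, and timezone are made up, and the cron string follows the documented 5-field format:

```python
# Immediate execution: only the slug is required; inputs are optional.
run_kwargs = {
    "username_agent_slug": "acme/daily-digest",
    "inputs": {"topic": "AI news"},
}

# Scheduled execution: providing schedule_name/cron flips the tool into
# scheduling mode (is_schedule = bool(schedule_name or cron) in _execute).
schedule_kwargs = {
    "username_agent_slug": "acme/daily-digest",
    "inputs": {"topic": "AI news"},
    "schedule_name": "Morning digest",
    "cron": "0 9 * * *",  # every day at 09:00 (min hour day month weekday)
    "timezone": "Europe/Berlin",
}
```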


@@ -0,0 +1,391 @@
import uuid
import orjson
import pytest
from ._test_data import (
make_session,
setup_firecrawl_test_data,
setup_llm_test_data,
setup_test_data,
)
from .run_agent import RunAgentTool
# This is so the formatter doesn't remove the fixture imports
setup_llm_test_data = setup_llm_test_data
setup_test_data = setup_test_data
setup_firecrawl_test_data = setup_firecrawl_test_data
@pytest.mark.asyncio(scope="session")
async def test_run_agent(setup_test_data):
"""Test that the run_agent tool successfully executes an approved agent"""
# Use test data from fixture
user = setup_test_data["user"]
graph = setup_test_data["graph"]
store_submission = setup_test_data["store_submission"]
# Create the tool instance
tool = RunAgentTool()
# Build the proper marketplace agent_id format: username/slug
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
# Build the session
session = make_session(user_id=user.id)
# Execute the tool
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={"test_input": "Hello World"},
session=session,
)
# Verify the response
assert response is not None
assert hasattr(response, "result")
# Parse the result JSON to verify the execution started
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
assert "execution_id" in result_data
assert "graph_id" in result_data
assert result_data["graph_id"] == graph.id
assert "graph_name" in result_data
assert result_data["graph_name"] == "Test Agent"
@pytest.mark.asyncio(scope="session")
async def test_run_agent_missing_inputs(setup_test_data):
"""Test that the run_agent tool returns error when inputs are missing"""
# Use test data from fixture
user = setup_test_data["user"]
store_submission = setup_test_data["store_submission"]
# Create the tool instance
tool = RunAgentTool()
# Build the proper marketplace agent_id format
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
# Build the session
session = make_session(user_id=user.id)
# Execute the tool without required inputs
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={}, # Missing required input
session=session,
)
# Verify that we get an error response
assert response is not None
assert hasattr(response, "result")
# The tool should return an ErrorResponse when setup info indicates not ready
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
assert "message" in result_data
@pytest.mark.asyncio(scope="session")
async def test_run_agent_invalid_agent_id(setup_test_data):
"""Test that the run_agent tool returns error for invalid agent ID"""
# Use test data from fixture
user = setup_test_data["user"]
# Create the tool instance
tool = RunAgentTool()
# Build the session
session = make_session(user_id=user.id)
# Execute the tool with invalid agent ID
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug="invalid/agent-id",
inputs={"test_input": "Hello World"},
session=session,
)
# Verify that we get an error response
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
assert "message" in result_data
# Should get an error about failed setup or not found
assert any(
phrase in result_data["message"].lower() for phrase in ["not found", "failed"]
)
@pytest.mark.asyncio(scope="session")
async def test_run_agent_with_llm_credentials(setup_llm_test_data):
"""Test that run_agent works with an agent requiring LLM credentials"""
# Use test data from fixture
user = setup_llm_test_data["user"]
graph = setup_llm_test_data["graph"]
store_submission = setup_llm_test_data["store_submission"]
# Create the tool instance
tool = RunAgentTool()
# Build the proper marketplace agent_id format
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
# Build the session
session = make_session(user_id=user.id)
# Execute the tool with a prompt for the LLM
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={"user_prompt": "What is 2+2?"},
session=session,
)
# Verify the response
assert response is not None
assert hasattr(response, "result")
# Parse the result JSON to verify the execution started
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should successfully start execution since credentials are available
assert "execution_id" in result_data
assert "graph_id" in result_data
assert result_data["graph_id"] == graph.id
assert "graph_name" in result_data
assert result_data["graph_name"] == "LLM Test Agent"
@pytest.mark.asyncio(scope="session")
async def test_run_agent_shows_available_inputs_when_none_provided(setup_test_data):
"""Test that run_agent returns available inputs when called without inputs or use_defaults."""
user = setup_test_data["user"]
store_submission = setup_test_data["store_submission"]
tool = RunAgentTool()
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
session = make_session(user_id=user.id)
# Execute without inputs and without use_defaults
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={},
use_defaults=False,
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should return agent_details type showing available inputs
assert result_data.get("type") == "agent_details"
assert "agent" in result_data
assert "message" in result_data
# Message should mention inputs
assert "inputs" in result_data["message"].lower()
@pytest.mark.asyncio(scope="session")
async def test_run_agent_with_use_defaults(setup_test_data):
"""Test that run_agent executes successfully with use_defaults=True."""
user = setup_test_data["user"]
graph = setup_test_data["graph"]
store_submission = setup_test_data["store_submission"]
tool = RunAgentTool()
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
session = make_session(user_id=user.id)
# Execute with use_defaults=True (no explicit inputs)
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={},
use_defaults=True,
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should execute successfully
assert "execution_id" in result_data
assert result_data["graph_id"] == graph.id
@pytest.mark.asyncio(scope="session")
async def test_run_agent_missing_credentials(setup_firecrawl_test_data):
"""Test that run_agent returns setup_requirements when credentials are missing."""
user = setup_firecrawl_test_data["user"]
store_submission = setup_firecrawl_test_data["store_submission"]
tool = RunAgentTool()
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
session = make_session(user_id=user.id)
# Execute - user doesn't have firecrawl credentials
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={"url": "https://example.com"},
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should return setup_requirements type with missing credentials
assert result_data.get("type") == "setup_requirements"
assert "setup_info" in result_data
setup_info = result_data["setup_info"]
assert "user_readiness" in setup_info
assert setup_info["user_readiness"]["has_all_credentials"] is False
assert len(setup_info["user_readiness"]["missing_credentials"]) > 0
@pytest.mark.asyncio(scope="session")
async def test_run_agent_invalid_slug_format(setup_test_data):
"""Test that run_agent returns error for invalid slug format (no slash)."""
user = setup_test_data["user"]
tool = RunAgentTool()
session = make_session(user_id=user.id)
# Execute with invalid slug format
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug="no-slash-here",
inputs={},
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should return error
assert result_data.get("type") == "error"
assert "username/agent-name" in result_data["message"]
@pytest.mark.asyncio(scope="session")
async def test_run_agent_unauthenticated():
"""Test that run_agent returns need_login for unauthenticated users."""
tool = RunAgentTool()
session = make_session(user_id=None)
# Execute without user_id
response = await tool.execute(
user_id=None,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug="test/test-agent",
inputs={},
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Base tool returns need_login type for unauthenticated users
assert result_data.get("type") == "need_login"
assert "sign in" in result_data["message"].lower()
@pytest.mark.asyncio(scope="session")
async def test_run_agent_schedule_without_cron(setup_test_data):
"""Test that run_agent returns error when scheduling without cron expression."""
user = setup_test_data["user"]
store_submission = setup_test_data["store_submission"]
tool = RunAgentTool()
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
session = make_session(user_id=user.id)
# Try to schedule without cron
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={"test_input": "test"},
schedule_name="My Schedule",
cron="", # Empty cron
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should return error about missing cron
assert result_data.get("type") == "error"
assert "cron" in result_data["message"].lower()
@pytest.mark.asyncio(scope="session")
async def test_run_agent_schedule_without_name(setup_test_data):
"""Test that run_agent returns error when scheduling without schedule_name."""
user = setup_test_data["user"]
store_submission = setup_test_data["store_submission"]
tool = RunAgentTool()
agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}"
session = make_session(user_id=user.id)
# Try to schedule without schedule_name
response = await tool.execute(
user_id=user.id,
session_id=str(uuid.uuid4()),
tool_call_id=str(uuid.uuid4()),
username_agent_slug=agent_marketplace_id,
inputs={"test_input": "test"},
schedule_name="", # Empty name
cron="0 9 * * *",
session=session,
)
assert response is not None
assert hasattr(response, "result")
assert isinstance(response.result, str)
result_data = orjson.loads(response.result)
# Should return error about missing schedule_name
assert result_data.get("type") == "error"
assert "schedule_name" in result_data["message"].lower()


@@ -0,0 +1,288 @@
"""Shared utilities for chat tools."""
import logging
from typing import Any
from backend.api.features.library import db as library_db
from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import NotFoundError
logger = logging.getLogger(__name__)
async def fetch_graph_from_store_slug(
username: str,
agent_name: str,
) -> tuple[GraphModel | None, Any | None]:
"""
Fetch graph from store by username/agent_name slug.
Args:
username: Creator's username
agent_name: Agent name/slug
Returns:
tuple[GraphModel | None, StoreAgentDetails | None]: The graph and store agent details,
or (None, None) if not found.
Raises:
DatabaseError: If there's a database error during lookup.
"""
try:
store_agent = await store_db.get_store_agent_details(username, agent_name)
except NotFoundError:
return None, None
# Get the graph from store listing version
graph_meta = await store_db.get_available_graph(
store_agent.store_listing_version_id
)
graph = await graph_db.get_graph(
graph_id=graph_meta.id,
version=graph_meta.version,
user_id=None, # Public access
include_subgraphs=True,
)
return graph, store_agent
def extract_credentials_from_schema(
credentials_input_schema: dict[str, Any] | None,
) -> list[CredentialsMetaInput]:
"""
Extract credential requirements from graph's credentials_input_schema.
This consolidates duplicated logic from get_agent_details.py and setup_agent.py.
Args:
credentials_input_schema: The credentials_input_schema from a Graph object
Returns:
List of CredentialsMetaInput with provider and type info
"""
credentials: list[CredentialsMetaInput] = []
if (
not isinstance(credentials_input_schema, dict)
or "properties" not in credentials_input_schema
):
return credentials
for cred_name, cred_schema in credentials_input_schema["properties"].items():
provider = _extract_provider_from_schema(cred_schema)
cred_type = _extract_credential_type_from_schema(cred_schema)
credentials.append(
CredentialsMetaInput(
id=cred_name,
title=cred_schema.get("title", cred_name),
provider=provider, # type: ignore
type=cred_type, # type: ignore
)
)
return credentials
def extract_credentials_as_dict(
credentials_input_schema: dict[str, Any] | None,
) -> dict[str, CredentialsMetaInput]:
"""
Extract credential requirements as a dict keyed by field name.
Args:
credentials_input_schema: The credentials_input_schema from a Graph object
Returns:
Dict mapping field name to CredentialsMetaInput
"""
credentials: dict[str, CredentialsMetaInput] = {}
if (
not isinstance(credentials_input_schema, dict)
or "properties" not in credentials_input_schema
):
return credentials
for cred_name, cred_schema in credentials_input_schema["properties"].items():
provider = _extract_provider_from_schema(cred_schema)
cred_type = _extract_credential_type_from_schema(cred_schema)
credentials[cred_name] = CredentialsMetaInput(
id=cred_name,
title=cred_schema.get("title", cred_name),
provider=provider, # type: ignore
type=cred_type, # type: ignore
)
return credentials
def _extract_provider_from_schema(cred_schema: dict[str, Any]) -> str:
"""Extract provider from credential schema."""
if "credentials_provider" in cred_schema and cred_schema["credentials_provider"]:
return cred_schema["credentials_provider"][0]
if "properties" in cred_schema and "provider" in cred_schema["properties"]:
return cred_schema["properties"]["provider"].get("const", "unknown")
return "unknown"
def _extract_credential_type_from_schema(cred_schema: dict[str, Any]) -> str:
"""Extract credential type from credential schema."""
if "credentials_types" in cred_schema and cred_schema["credentials_types"]:
return cred_schema["credentials_types"][0]
if "properties" in cred_schema and "type" in cred_schema["properties"]:
return cred_schema["properties"]["type"].get("const", "api_key")
return "api_key"
async def get_or_create_library_agent(
graph: GraphModel,
user_id: str,
) -> library_model.LibraryAgent:
"""
Get existing library agent or create new one.
This consolidates duplicated logic from run_agent.py and setup_agent.py.
Args:
graph: The Graph to add to library
user_id: The user's ID
Returns:
LibraryAgent instance
"""
existing = await library_db.get_library_agent_by_graph_id(
graph_id=graph.id, user_id=user_id
)
if existing:
return existing
library_agents = await library_db.create_library_agent(
graph=graph,
user_id=user_id,
create_library_agents_for_sub_graphs=False,
)
assert len(library_agents) == 1, "Expected 1 library agent to be created"
return library_agents[0]
async def match_user_credentials_to_graph(
user_id: str,
graph: GraphModel,
) -> tuple[dict[str, CredentialsMetaInput], list[str]]:
"""
Match user's available credentials against graph's required credentials.
Uses graph.aggregate_credentials_inputs() which handles credentials from
multiple nodes and uses frozensets for provider matching.
Args:
user_id: The user's ID
graph: The Graph with credential requirements
Returns:
tuple[matched_credentials dict, missing_credential_descriptions list]
"""
graph_credentials_inputs: dict[str, CredentialsMetaInput] = {}
missing_creds: list[str] = []
# Get aggregated credentials requirements from the graph
aggregated_creds = graph.aggregate_credentials_inputs()
logger.debug(
f"Matching credentials for graph {graph.id}: {len(aggregated_creds)} required"
)
if not aggregated_creds:
return graph_credentials_inputs, missing_creds
# Get all available credentials for the user
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
# For each required credential field, find a matching user credential
# field_info.provider is a frozenset because aggregate_credentials_inputs()
# combines requirements from multiple nodes. A credential matches if its
# provider is in the set of acceptable providers.
for credential_field_name, (
credential_requirements,
_node_fields,
) in aggregated_creds.items():
# Find first matching credential by provider and type
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in credential_requirements.provider
and cred.type in credential_requirements.supported_types
),
None,
)
if matching_cred:
try:
graph_credentials_inputs[credential_field_name] = CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
except Exception as e:
logger.error(
f"Failed to create CredentialsMetaInput for field '{credential_field_name}': "
f"provider={matching_cred.provider}, type={matching_cred.type}, "
f"credential_id={matching_cred.id}",
exc_info=True,
)
missing_creds.append(
f"{credential_field_name} (validation failed: {e})"
)
else:
missing_creds.append(
f"{credential_field_name} "
f"(requires provider in {list(credential_requirements.provider)}, "
f"type in {list(credential_requirements.supported_types)})"
)
logger.info(
f"Credential matching complete: {len(graph_credentials_inputs)}/{len(aggregated_creds)} matched"
)
return graph_credentials_inputs, missing_creds
async def check_user_has_required_credentials(
user_id: str,
required_credentials: list[CredentialsMetaInput],
) -> list[CredentialsMetaInput]:
"""
Check which required credentials the user is missing.
Args:
user_id: The user's ID
required_credentials: List of required credentials
Returns:
List of missing credentials (empty if user has all)
"""
if not required_credentials:
return []
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
missing: list[CredentialsMetaInput] = []
for required in required_credentials:
has_matching = any(
cred.provider == required.provider and cred.type == required.type
for cred in available_creds
)
if not has_matching:
missing.append(required)
return missing
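
A worked sketch of the two schema shapes the extractor helpers above handle: top-level `credentials_provider`/`credentials_types` hints, and the nested `const` fallbacks. The import path is inferred, and the provider values are assumed to pass `CredentialsMetaInput` validation:

```python
from backend.api.features.chat.tools.utils import (  # inferred path
    extract_credentials_from_schema,
)

schema = {
    "properties": {
        "openai_credentials": {
            "title": "OpenAI API Key",
            "credentials_provider": ["openai"],
            "credentials_types": ["api_key"],
        },
        "slack_credentials": {
            # No top-level hints, so the helpers fall back to the nested
            # properties.provider.const / properties.type.const values.
            "properties": {
                "provider": {"const": "slack"},
                "type": {"const": "oauth2"},
            },
        },
    }
}

for cred in extract_credentials_from_schema(schema):
    print(cred.id, cred.provider, cred.type)
# -> openai_credentials openai api_key
# -> slack_credentials slack oauth2
```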


@@ -0,0 +1,204 @@
import json
from datetime import datetime
from typing import TYPE_CHECKING, Any, Dict, List, Union
from prisma.enums import ReviewStatus
from pydantic import BaseModel, Field, field_validator, model_validator
if TYPE_CHECKING:
from prisma.models import PendingHumanReview
# SafeJson-compatible type alias for review data
SafeJsonData = Union[Dict[str, Any], List[Any], str, int, float, bool, None]
class PendingHumanReviewModel(BaseModel):
"""Response model for pending human review data.
Represents a human review request that is awaiting user action.
Contains all necessary information for a user to review and approve
or reject data from a Human-in-the-Loop block execution.
Attributes:
id: Unique identifier for the review record
user_id: ID of the user who must perform the review
node_exec_id: ID of the node execution that created this review
graph_exec_id: ID of the graph execution containing the node
graph_id: ID of the graph template being executed
graph_version: Version number of the graph template
payload: The actual data payload awaiting review
instructions: Instructions or message for the reviewer
editable: Whether the reviewer can edit the data
status: Current review status (WAITING, APPROVED, or REJECTED)
review_message: Optional message from the reviewer
created_at: Timestamp when review was created
updated_at: Timestamp when review was last modified
reviewed_at: Timestamp when review was completed (if applicable)
"""
node_exec_id: str = Field(description="Node execution ID (primary key)")
user_id: str = Field(description="User ID associated with the review")
graph_exec_id: str = Field(description="Graph execution ID")
graph_id: str = Field(description="Graph ID")
graph_version: int = Field(description="Graph version")
payload: SafeJsonData = Field(description="The actual data payload awaiting review")
instructions: str | None = Field(
description="Instructions or message for the reviewer", default=None
)
editable: bool = Field(description="Whether the reviewer can edit the data")
status: ReviewStatus = Field(description="Review status")
review_message: str | None = Field(
description="Optional message from the reviewer", default=None
)
was_edited: bool | None = Field(
description="Whether the data was modified during review", default=None
)
processed: bool = Field(
description="Whether the review result has been processed by the execution engine",
default=False,
)
created_at: datetime = Field(description="When the review was created")
updated_at: datetime | None = Field(
description="When the review was last updated", default=None
)
reviewed_at: datetime | None = Field(
description="When the review was completed", default=None
)
@classmethod
def from_db(cls, review: "PendingHumanReview") -> "PendingHumanReviewModel":
"""
Convert a database model to a response model.
Uses the new flat database structure with separate columns for
payload, instructions, and editable flag.
Handles invalid data gracefully by using safe defaults.
"""
return cls(
node_exec_id=review.nodeExecId,
user_id=review.userId,
graph_exec_id=review.graphExecId,
graph_id=review.graphId,
graph_version=review.graphVersion,
payload=review.payload,
instructions=review.instructions,
editable=review.editable,
status=review.status,
review_message=review.reviewMessage,
was_edited=review.wasEdited,
processed=review.processed,
created_at=review.createdAt,
updated_at=review.updatedAt,
reviewed_at=review.reviewedAt,
)
class ReviewItem(BaseModel):
"""Single review item for processing."""
node_exec_id: str = Field(description="Node execution ID to review")
approved: bool = Field(
description="Whether this review is approved (True) or rejected (False)"
)
message: str | None = Field(
None, description="Optional review message", max_length=2000
)
reviewed_data: SafeJsonData | None = Field(
None, description="Optional edited data (ignored if approved=False)"
)
@field_validator("reviewed_data")
@classmethod
def validate_reviewed_data(cls, v):
"""Validate that reviewed_data is safe and properly structured."""
if v is None:
return v
# Validate SafeJson compatibility
def validate_safejson_type(obj):
"""Ensure object only contains SafeJson compatible types."""
if obj is None:
return True
elif isinstance(obj, (str, int, float, bool)):
return True
elif isinstance(obj, dict):
return all(
isinstance(key, str) and validate_safejson_type(value)
for key, value in obj.items()
)
elif isinstance(obj, list):
return all(validate_safejson_type(item) for item in obj)
else:
return False
if not validate_safejson_type(v):
raise ValueError("reviewed_data contains non-SafeJson compatible types")
# Validate data size to prevent DoS attacks
try:
json_str = json.dumps(v)
except (TypeError, ValueError) as e:
raise ValueError(f"reviewed_data must be JSON serializable: {str(e)}")
# The size check runs outside the try block so its ValueError is not
# re-wrapped as a serialization error.
if len(json_str) > 1000000:  # 1MB limit
raise ValueError("reviewed_data is too large (max 1MB)")
# Ensure no dangerous nested structures (prevent infinite recursion)
def check_depth(obj, max_depth=10, current_depth=0):
"""Recursively check object nesting depth to prevent stack overflow attacks."""
if current_depth > max_depth:
raise ValueError("reviewed_data has excessive nesting depth")
if isinstance(obj, dict):
for value in obj.values():
check_depth(value, max_depth, current_depth + 1)
elif isinstance(obj, list):
for item in obj:
check_depth(item, max_depth, current_depth + 1)
check_depth(v)
return v
@field_validator("message")
@classmethod
def validate_message(cls, v):
"""Validate and sanitize review message."""
if v is not None and len(v.strip()) == 0:
return None
return v
class ReviewRequest(BaseModel):
"""Request model for processing ALL pending reviews for an execution.
This request must include ALL pending reviews for a graph execution.
Each review will be either approved (with optional data modifications)
or rejected (data ignored). The execution will resume only after ALL reviews are processed.
"""
reviews: List[ReviewItem] = Field(
description="All reviews with their approval status, data, and messages"
)
@model_validator(mode="after")
def validate_review_completeness(self):
"""Validate that we have at least one review to process and no duplicates."""
if not self.reviews:
raise ValueError("At least one review must be provided")
# Ensure no duplicate node_exec_ids
node_ids = [review.node_exec_id for review in self.reviews]
if len(node_ids) != len(set(node_ids)):
duplicates = [nid for nid in set(node_ids) if node_ids.count(nid) > 1]
raise ValueError(f"Duplicate review IDs found: {', '.join(duplicates)}")
return self
class ReviewResponse(BaseModel):
"""Response from review endpoint."""
approved_count: int = Field(description="Number of reviews successfully approved")
rejected_count: int = Field(description="Number of reviews successfully rejected")
failed_count: int = Field(description="Number of reviews that failed processing")
error: str | None = Field(None, description="Error message if operation failed")
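
A small sketch of the request-level validation above: a payload with duplicate `node_exec_id`s fails `model_validate` (the import path is inferred from the test file later in this diff):

```python
from pydantic import ValidationError

from backend.api.features.executions.review.model import ReviewRequest  # inferred path

payload = {
    "reviews": [
        {"node_exec_id": "node-1", "approved": True, "message": "ok"},
        {"node_exec_id": "node-1", "approved": False},  # duplicate on purpose
    ]
}

try:
    ReviewRequest.model_validate(payload)
except ValidationError as e:
    # The model_validator raises "Duplicate review IDs found: node-1",
    # which Pydantic surfaces inside the ValidationError.
    print(e)
```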

View File

@@ -0,0 +1,492 @@
import datetime
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from prisma.enums import ReviewStatus
from pytest_snapshot.plugin import Snapshot
from backend.api.rest_api import handle_internal_http_error
from .model import PendingHumanReviewModel
from .routes import router
# Using a fixed timestamp for reproducible tests
FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)
app = fastapi.FastAPI()
app.include_router(router, prefix="/api/review")
app.add_exception_handler(ValueError, handle_internal_http_error(400))
client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
@pytest.fixture
def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel:
"""Create a sample pending review for testing"""
return PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test payload", "value": 42},
instructions="Please review this data",
editable=True,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=None,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=None,
)
def test_get_pending_reviews_empty(
mocker: pytest_mock.MockerFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews when none exist"""
mock_get_reviews = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = []
response = client.get("/api/review/pending")
assert response.status_code == 200
assert response.json() == []
mock_get_reviews.assert_called_once_with(test_user_id, 1, 25)
def test_get_pending_reviews_with_data(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews with data"""
mock_get_reviews = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_user"
)
mock_get_reviews.return_value = [sample_pending_review]
response = client.get("/api/review/pending?page=2&page_size=10")
assert response.status_code == 200
data = response.json()
assert len(data) == 1
assert data[0]["node_exec_id"] == "test_node_123"
assert data[0]["status"] == "WAITING"
mock_get_reviews.assert_called_once_with(test_user_id, 2, 10)
def test_get_pending_reviews_for_execution_success(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test getting pending reviews for specific execution"""
mock_get_graph_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = {
"id": "test_graph_exec_456",
"user_id": test_user_id,
}
mock_get_reviews = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews.return_value = [sample_pending_review]
response = client.get("/api/review/execution/test_graph_exec_456")
assert response.status_code == 200
data = response.json()
assert len(data) == 1
assert data[0]["graph_exec_id"] == "test_graph_exec_456"
def test_get_pending_reviews_for_execution_not_available(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Test access denied when user doesn't own the execution"""
mock_get_graph_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_graph_execution_meta"
)
mock_get_graph_execution.return_value = None
response = client.get("/api/review/execution/test_graph_exec_456")
assert response.status_code == 404
assert "not found" in response.json()["detail"]
def test_process_review_action_approve_success(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test successful review approval"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved review for return
approved_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "modified payload", "value": 50},
instructions="Please review this data",
editable=True,
status=ReviewStatus.APPROVED,
review_message="Looks good",
was_edited=True,
processed=False,
created_at=FIXED_NOW,
updated_at=FIXED_NOW,
reviewed_at=FIXED_NOW,
)
mock_process_all_reviews.return_value = {"test_node_123": approved_review}
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
mocker.patch("backend.api.features.executions.review.routes.add_graph_execution")
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": True,
"message": "Looks good",
"reviewed_data": {"data": "modified payload", "value": 50},
}
]
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
data = response.json()
assert data["approved_count"] == 1
assert data["rejected_count"] == 0
assert data["failed_count"] == 0
assert data["error"] is None
def test_process_review_action_reject_success(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test successful review rejection"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
rejected_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "test payload"},
instructions="Please review",
editable=True,
status=ReviewStatus.REJECTED,
review_message="Rejected by user",
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=FIXED_NOW,
)
mock_process_all_reviews.return_value = {"test_node_123": rejected_review}
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": False,
"message": None,
}
]
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
data = response.json()
assert data["approved_count"] == 0
assert data["rejected_count"] == 1
assert data["failed_count"] == 0
assert data["error"] is None
def test_process_review_action_mixed_success(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test mixed approve/reject operations"""
# Create a second review
second_review = PendingHumanReviewModel(
node_exec_id="test_node_456",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "second payload"},
instructions="Second review",
editable=False,
status=ReviewStatus.WAITING,
review_message=None,
was_edited=None,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=None,
)
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review, second_review]
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# Create approved version of first review
approved_review = PendingHumanReviewModel(
node_exec_id="test_node_123",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "modified"},
instructions="Please review",
editable=True,
status=ReviewStatus.APPROVED,
review_message="Approved",
was_edited=True,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=FIXED_NOW,
)
# Create rejected version of second review
rejected_review = PendingHumanReviewModel(
node_exec_id="test_node_456",
user_id=test_user_id,
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
payload={"data": "second payload"},
instructions="Second review",
editable=False,
status=ReviewStatus.REJECTED,
review_message="Rejected by user",
was_edited=False,
processed=False,
created_at=FIXED_NOW,
updated_at=None,
reviewed_at=FIXED_NOW,
)
mock_process_all_reviews.return_value = {
"test_node_123": approved_review,
"test_node_456": rejected_review,
}
mock_has_pending = mocker.patch(
"backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec"
)
mock_has_pending.return_value = False
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": True,
"message": "Approved",
"reviewed_data": {"data": "modified"},
},
{
"node_exec_id": "test_node_456",
"approved": False,
"message": None,
},
]
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 200
data = response.json()
assert data["approved_count"] == 1
assert data["rejected_count"] == 1
assert data["failed_count"] == 0
assert data["error"] is None
def test_process_review_action_empty_request(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when no reviews provided"""
request_data = {"reviews": []}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 422
response_data = response.json()
# Pydantic validation error format
assert isinstance(response_data["detail"], list)
assert len(response_data["detail"]) > 0
assert "At least one review must be provided" in response_data["detail"][0]["msg"]
def test_process_review_action_review_not_found(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""Test error when review is not found"""
# Mock the functions that extract graph execution ID from the request
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [] # No reviews found
# Mock process_all_reviews to simulate not finding reviews
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
# This should raise a ValueError with "Reviews not found" message based on the data/human_review.py logic
mock_process_all_reviews.side_effect = ValueError(
"Reviews not found or access denied for IDs: nonexistent_node"
)
request_data = {
"reviews": [
{
"node_exec_id": "nonexistent_node",
"approved": True,
"message": "Test",
}
]
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 400
assert "Reviews not found" in response.json()["detail"]
def test_process_review_action_partial_failure(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test handling of partial failures in review processing"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock partial failure in processing
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError("Some reviews failed validation")
request_data = {
"reviews": [
{
"node_exec_id": "test_node_123",
"approved": True,
"message": "Test",
}
]
}
response = client.post("/api/review/action", json=request_data)
assert response.status_code == 400
assert "Some reviews failed validation" in response.json()["detail"]
def test_process_review_action_invalid_node_exec_id(
mocker: pytest_mock.MockerFixture,
sample_pending_review: PendingHumanReviewModel,
test_user_id: str,
) -> None:
"""Test failure when trying to process review with invalid node execution ID"""
# Mock the route functions
mock_get_reviews_for_execution = mocker.patch(
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution"
)
mock_get_reviews_for_execution.return_value = [sample_pending_review]
# Mock validation failure - this should return 400, not 500
mock_process_all_reviews = mocker.patch(
"backend.api.features.executions.review.routes.process_all_reviews_for_execution"
)
mock_process_all_reviews.side_effect = ValueError(
"Invalid node execution ID format"
)
request_data = {
"reviews": [
{
"node_exec_id": "invalid-node-format",
"approved": True,
"message": "Test",
}
]
}
response = client.post("/api/review/action", json=request_data)
# Should be a 400 Bad Request, not 500 Internal Server Error
assert response.status_code == 400
assert "Invalid node execution ID format" in response.json()["detail"]

Some files were not shown because too many files have changed in this diff.