When we run multiple instances of the executor, an executor can oversubscribe to messages and end up queuing agent execution requests locally instead of letting another executor handle the job. This change solves that problem.
### Changes 🏗️
* Reject execution requests when the executor is full (see the sketch below).
* Improve `active_graph_runs` tracking for better horizontal scaling
heuristics.
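For illustration, a hedged sketch of the rejection logic (names and the message-queue wiring are assumptions, not the PR's exact code): when the executor is already at capacity, the message is nack'ed and requeued so another executor instance can pick it up.
```python
import threading

MAX_CONCURRENT_GRAPH_RUNS = 10  # illustrative capacity limit
active_graph_runs: dict[str, threading.Thread] = {}
_runs_lock = threading.Lock()

def on_execution_request(channel, method, properties, body: bytes) -> None:
    """AMQP-style consumer callback (signature illustrative)."""
    with _runs_lock:
        if len(active_graph_runs) >= MAX_CONCURRENT_GRAPH_RUNS:
            # Executor is full: reject and requeue for another instance.
            channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
            return
        run = threading.Thread(target=execute_graph, args=(body,))
        active_graph_runs[str(method.delivery_tag)] = run
    channel.basic_ack(delivery_tag=method.delivery_tag)
    run.start()

def execute_graph(body: bytes) -> None:
    ...  # the actual agent graph execution
```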
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual graph execution & CI
### Changes 🏗️
1. JSON columns have to be JSON-serializable, but sometimes the data is not, so `SafeJson` is introduced to make sure that the data being loaded can be serialized to a string and back before being persisted to the database (a sketch of the idea follows below).
2. Locks & transactions are used in cases where they aren't needed, which reduces database & Redis performance.
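For illustration, a minimal sketch of the round-trip idea behind `SafeJson` (the real implementation may differ; stringifying non-serializable values via `default=str` is an assumption):
```python
import json
from typing import Any

def safe_json(data: Any) -> Any:
    """Force `data` through a JSON string round-trip so whatever reaches
    the database JSON column is guaranteed to serialize and load back.
    Non-serializable values (e.g. datetimes) are stringified."""
    return json.loads(json.dumps(data, default=str))
```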
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI tests
We want scrolling for the agent dialog list.
- Based on #9833
### Changes 🏗️
- adds backend support for paginating this content
- adds frontend support for scrolling pagination
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] test UI for this
---------
Co-authored-by: Venkat Sai Kedari Nath Gandham <154089422+Kedarinath1502@users.noreply.github.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10458
### Changes 🏗️
Improve logic in `useAgentGraph`:
- Correctly handle unset `flowVersion` in checks in hooks
- Prevent unnecessary WebSocket re-connects
- Remove redundant WebSocket connection management logic
- Untangle hooks for initial load and set-up
- Simplify block filtering logic
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Edit an agent in the builder
- [x] WebSocket doesn't re-connect unnecessarily
- [x] Graph doesn't reset on WebSocket re-connect
- [x] Graph doesn't reset on LaunchDarkly re-connect
Bumps [redis](https://github.com/redis/redis-py) from 5.2.1 to 6.2.0,
for both `autogpt_libs` and `backend`.
Also, additional fixes in `autogpt_libs/pyproject.toml`:
- Move `redis` from dev dependencies to prod dependencies
- Fix author info
- Sort dependencies
> [!NOTE]
> Of course dependabot wouldn't do this on its own; this PR has been
taken over and augmented by @Pwuts
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/redis/redis-py/releases">redis's
releases</a>.</em></p>
<blockquote>
<h2>6.2.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Add <code>dynamic_startup_nodes</code> parameter to async
RedisCluster (<a
href="https://redirect.github.com/redis/redis-py/issues/3646">#3646</a>)</li>
<li>Support RESP3 with <code>hiredis-py</code> parser (<a
href="https://redirect.github.com/redis/redis-py/issues/3648">#3648</a>)</li>
<li>[Async] Support for transactions in async <code>RedisCluster</code>
client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<ul>
<li>Update <code>search_json_examples.ipynb</code>: Fix the old import
<code>indexDefinition</code> -> <code>index_definition</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3652">#3652</a>)</li>
<li>Remove mandatory update of the CHANGES file for new PRs. Changes
file will be kept for history for versions < 4.0.0 (<a
href="https://redirect.github.com/redis/redis-py/issues/3645">#3645</a>)</li>
<li>Dropping <code>Python 3.8</code> support as it has reached end of
life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li>fix(doc): update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li>Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
</ul>
<p>We'd like to thank all the contributors who worked on this release!
<a href="https://github.com/JCornat"><code>@JCornat</code></a> <a
href="https://github.com/ShubhamKaudewar"><code>@ShubhamKaudewar</code></a>
<a href="https://github.com/uglide"><code>@uglide</code></a> <a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a>
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a></p>
<h2>v6.1.1</h2>
<h1>Changes</h1>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<p>We'd like to thank all the contributors who worked on this release!
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a>
<a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a></p>
<h2>6.1.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Support for transactions in <code>RedisCluster</code> client (<a
href="https://redirect.github.com/redis/redis-py/issues/3611">#3611</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Fix RedisCluster <code>ssl_check_hostname</code> not set to
connections. For SSL verification with
<code>ssl_cert_reqs="none"</code>, check_hostname is set to
<code>False</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3637">#3637</a>)
<strong>Important</strong>: The default value for the
<code>check_hostname</code> field of <code>RedisSSLContext</code> has
been changed as part of this PR - this is a breaking change and should
not be introduced in minor versions - unfortunately, it is part of the
current release.
The breaking change is reverted in the next release to fix the behavior
--> 6.2.0</li>
<li>Prevent RuntimeError while reinitializing clusters - sync and async
(<a
href="https://redirect.github.com/redis/redis-py/issues/3633">#3633</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)
- fixes integration with Django RQ</li>
<li>Fix <code>AttributeError</code> on <code>ClusterPipeline</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3634">#3634</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1a59471870"><code>1a59471</code></a>
Adding small change in code to trigger pipeline for the branch.</li>
<li><a
href="83cf781be6"><code>83cf781</code></a>
Adding small change in README to trigger pipeline for the branch.</li>
<li><a
href="f5cd264c40"><code>f5cd264</code></a>
maintenance: Preparation for release 6.2.0 - updating lib version. (<a
href="https://redirect.github.com/redis/redis-py/issues/3662">#3662</a>)</li>
<li><a
href="793cdc63ac"><code>793cdc6</code></a>
maintenance: Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
<li><a
href="34c40ff82d"><code>34c40ff</code></a>
fix(doc) : update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li><a
href="e5756daafa"><code>e5756da</code></a>
Dropping Python 3.8 support as it has reached end of life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li><a
href="bc7de608b8"><code>bc7de60</code></a>
[Async] Support for transactions in async RedisCluster client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
<li><a
href="e226ad2b4c"><code>e226ad2</code></a>
Removing connection_pool field from the CommandProtocol definition (<a
href="https://redirect.github.com/redis/redis-py/issues/3656">#3656</a>)</li>
<li><a
href="14a6fc39bd"><code>14a6fc3</code></a>
fix: Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
<li><a
href="3ebfd5b569"><code>3ebfd5b</code></a>
fix: Revert wrongly changed default value for check_hostname when
instantiati...</li>
<li>Additional commits viewable in <a
href="https://github.com/redis/redis-py/compare/v5.2.1...v6.2.0">compare
view</a></li>
</ul>
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Changes 🏗️
- Close websocket connections gracefully during logout ( _whether from
another tab or not_ )
- Also fixed an error on `HeroSection` that shows when the onboarding is
disabled locally ( `null` )
- Uncomment legit tests about connecting/saving agents
- Centralise local storage usage through a single service with typed
keys
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login in 3 tabs ( 1 builder, 1 marketplace, 1 agent run view )
- [x] Logout from the marketplace tab
- [x] The other tabs show logout state gracefully without toasts or
errors
- [x] Websocket connections are closed ( _devtools console shows that_ )
### For configuration changes:
None
This PR implements a comprehensive Ayrshare social media integration for
AutoGPT Platform, enabling users to post content across multiple social
media platforms through a unified interface. Ayrshare provides a single
API to manage posts across Facebook, Twitter/X, LinkedIn, Instagram,
YouTube, TikTok, Pinterest, Reddit, Telegram, Google My Business,
Bluesky, Snapchat, and Threads.
The integration addresses the need for social media automation and
content distribution workflows within AutoGPT agents, allowing users to:
- Connect their social media accounts via SSO
- Post content with platform-specific options and constraints
- Schedule posts across multiple platforms simultaneously
- Handle platform-specific media requirements and validation
⚠️ To simplify the review process, everything except the Twitter post block has been commented out; future PRs will uncomment the other platforms so we can test them in isolation.
### Changes 🏗️
#### Backend Integration (`backend/integrations/ayrshare.py`)
- **AyrshareClient**: Complete API client implementation with post creation, profile management, and JWT generation (see the sketch below)
- **SocialPlatform enum**: Comprehensive platform definitions for all
supported social networks
- **Response models**: PostResponse, ProfileResponse, JWTResponse for
type-safe API interactions
- **Error handling**: Custom AyrshareAPIException with proper HTTP
status code handling
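For illustration, a minimal sketch of such a client (endpoint path and payload shape follow Ayrshare's public API docs; the PR's actual implementation may differ):
```python
import requests

class AyrshareAPIException(Exception):
    """Raised for non-2xx responses from the Ayrshare API."""

class AyrshareClient:
    BASE_URL = "https://api.ayrshare.com/api"

    def __init__(self, api_key: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def create_post(self, post: str, platforms: list[str]) -> dict:
        # e.g. create_post("hello world", ["twitter"])
        resp = self.session.post(
            f"{self.BASE_URL}/post",
            json={"post": post, "platforms": platforms},
        )
        if not resp.ok:
            raise AyrshareAPIException(f"{resp.status_code}: {resp.text}")
        return resp.json()
```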
#### Social Media Posting Blocks (`backend/blocks/ayrshare/post.py`)
- **BaseAyrshareInput**: Shared input schema with common fields (post
text, media URLs, scheduling, etc.)
- **Platform-specific blocks**: 13 dedicated posting blocks, each with
platform-specific validation and options:
- PostToFacebookBlock: Carousel, Reels, Stories, targeting, alt text
- PostToXBlock: Threads, polls, long posts, premium features, subtitles
- PostToLinkedInBlock: Document support, visibility controls, audience
targeting
- PostToInstagramBlock: Stories, Reels, user tags, collaborators
- PostToYouTubeBlock: Video uploads, playlists, visibility, country
targeting
- PostToPinterestBlock: Pins, carousels, board management
- PostToTikTokBlock: Video/image posts, AI labeling, brand content
- PostToRedditBlock: Basic posting functionality
- PostToTelegramBlock: GIF handling, mentions
- PostToGMBBlock: Event/offer posts, call-to-action buttons
- PostToBlueskyBlock: Character limit validation, alt text
- PostToSnapchatBlock: Story types, video thumbnails
- PostToThreadsBlock: Hashtag restrictions, carousel support
#### Helper Models
- **CarouselItem**: Facebook carousel configuration
- **CallToAction, EventDetails, OfferDetails**: Google My Business post
types
- **InstagramUserTag**: Instagram user tagging with coordinates
- **LinkedInTargeting**: LinkedIn audience targeting options
- **PinterestCarouselOption**: Pinterest carousel image options
- **YouTubeTargeting**: YouTube country blocking/allowing
#### Authentication & SSO (`backend/server/integrations/router.py`)
- **SSO endpoint**: `/integrations/ayrshare/sso_url` for account linking
- **Profile management**: Automatic profile creation and key management
- **JWT generation**: Secure token generation for social media account
linking
- **Platform allowlist**: Configured access to all supported social
platforms
#### Frontend Integration (`frontend/src/components/CustomNode.tsx`)
- **AYRSHARE block type**: New BlockUIType.AYRSHARE for
Ayrshare-specific nodes
- **SSO button**: "Connect Social Media Accounts" with loading states
- **Handle generation**: Special handling for Ayrshare blocks with SSO
integration
#### Configuration
- **Environment variables**: Added AYRSHARE_API_KEY and AYRSHARE_JWT_KEY
to .env.example
- **Block registration**: All Ayrshare blocks registered in
AYRSHARE_NODE_IDS array
#### Type Safety & Error Handling
- **Modern typing**: Updated to use `list`, `dict`, `Any` instead of
legacy typing
- **Comprehensive validation**: Platform-specific constraints (character
limits, media counts, file types)
- **User-friendly errors**: Clear error messages for validation failures
and API errors
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
**Backend API Testing:**
- [x] Verify AyrshareClient initializes correctly with API key
- [x] Test JWT generation for SSO authentication
- [x] Test profile creation and management
- [x] Verify all 13 posting blocks are properly registered
- [x] Test platform-specific validation rules for each block
- [x] Verify error handling for missing credentials and API failures
**Frontend Integration Testing:**
- [x] Verify AYRSHARE block type renders correctly in flow editor
- [x] Test SSO button functionality and popup window behavior
- [x] Confirm loading states work properly during authentication
- [x] Verify input handles generate correctly for Ayrshare blocks
- [x] Test platform-specific input fields and validation
**End-to-End Workflow Testing:**
- [x] Create agent with Ayrshare posting blocks
- [x] Test SSO flow: click "Connect Social Media Accounts" button
- [x] Verify popup opens with Ayrshare authentication page
- [x] Test social media account linking process
- [x] Create posts with various platform-specific options:
- [x] X (Twitter) - tested basic posting with image
- [ ] Test scheduling functionality across platforms
- [x] Verify media upload constraints and validation
- [ ] Test error handling for invalid inputs and failed posts
**Error Case Testing:**
- [ ] Test behavior with missing AYRSHARE_API_KEY configuration
- [ ] Test invalid social media credentials handling
- [ ] Test network failure scenarios
- [ ] Verify platform-specific validation error messages
- [ ] Test character limit enforcement per platform
- [ ] Test media file type and size restrictions
**Security Testing:**
- [x] Verify JWT tokens are properly generated and validated
- [x] Test profile key isolation between users
- [x] Confirm sensitive credentials are not logged
- [x] Verify SSO popup prevents XSS attacks
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Configuration Changes:**
- Added `AYRSHARE_API_KEY` environment variable for Ayrshare API
authentication
- Added `AYRSHARE_JWT_KEY` environment variable for SSO token generation
- No docker-compose.yml changes required (uses existing backend
services)
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Overview
This PR adds comprehensive Airtable integration to the AutoGPT platform,
enabling users to seamlessly connect their Airtable bases with AutoGPT
workflows for powerful no-code automation capabilities.
## Why Airtable Integration?
Airtable is one of the most popular no-code databases used by teams for
project management, CRMs, inventory tracking, and countless other use
cases. This integration brings significant value:
- **Data Automation**: Automate data entry, updates, and synchronization
between Airtable and other services
- **Workflow Triggers**: React to changes in Airtable bases with
webhook-based triggers
- **Schema Management**: Programmatically create and manage Airtable
table structures
- **Bulk Operations**: Efficiently process large amounts of data with
batch create/update/delete operations
## Key Features
### 🔌 Webhook Trigger
- **AirtableWebhookTriggerBlock**: Listens for changes in Airtable bases
and triggers workflows
- Supports filtering by table, view, and specific fields
- Includes webhook signature validation for security
### 📊 Record Operations
- **AirtableCreateRecordsBlock**: Create single or multiple records (up
to 10 at once)
- **AirtableUpdateRecordsBlock**: Update existing records with upsert
support
- **AirtableDeleteRecordsBlock**: Delete single or multiple records
- **AirtableGetRecordBlock**: Retrieve specific record details
- **AirtableListRecordsBlock**: Query records with filtering, sorting,
and pagination
### 🏗️ Schema Management
- **AirtableCreateTableBlock**: Create new tables with custom field
definitions
- **AirtableUpdateTableBlock**: Modify table properties
- **AirtableAddFieldBlock**: Add new fields to existing tables
- **AirtableUpdateFieldBlock**: Update field properties
## Technical Implementation Details
### Authentication
- Supports both API Key and OAuth authentication methods
- OAuth implementation includes proper token refresh handling
- Credentials are securely managed through the platform's credential
system
### Webhook Security
- Added `credentials` parameter to WebhooksManager interface for proper
signature validation
- HMAC-SHA256 signature verification ensures webhook authenticity (see the sketch below)
- Webhook cursor tracking prevents duplicate event processing
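For illustration, a sketch of the verification step (header and field names are taken from Airtable's webhook docs and should be treated as assumptions here):
```python
import base64
import hashlib
import hmac

def verify_airtable_signature(
    mac_secret_b64: str, raw_body: bytes, header_value: str
) -> bool:
    """Check the `X-Airtable-Content-MAC` header (`hmac-sha256=<hex>`)
    against an HMAC-SHA256 of the raw request body, keyed with the
    base64-decoded `macSecretBase64` from webhook creation."""
    secret = base64.b64decode(mac_secret_b64)
    digest = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(f"hmac-sha256={digest}", header_value)
```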
### API Integration
- Comprehensive API client (`_api.py`) with full type safety
- Proper error handling and response validation
- Support for all Airtable field types and operations
## Changes 🏗️
### Added Blocks:
- AirtableWebhookTriggerBlock
- AirtableCreateRecordsBlock
- AirtableDeleteRecordsBlock
- AirtableGetRecordBlock
- AirtableListRecordsBlock
- AirtableUpdateRecordsBlock
- AirtableAddFieldBlock
- AirtableCreateTableBlock
- AirtableUpdateFieldBlock
- AirtableUpdateTableBlock
### Modified Files:
- Updated WebhooksManager interface to support credential-based
validation
- Modified all webhook handlers to support the new interface
## Test Plan 📋
### Manual Testing Performed:
1. **Authentication Testing**
- ✅ Verified API key authentication works correctly
- ✅ Tested OAuth flow including token refresh
- ✅ Confirmed credentials are properly encrypted and stored
2. **Webhook Testing**
- ✅ Created webhook subscriptions for different table events
- ✅ Verified signature validation prevents unauthorized requests
- ✅ Tested cursor tracking to ensure no duplicate events
- ✅ Confirmed webhook cleanup on block deletion
3. **Record Operations Testing**
- ✅ Created single and batch records with various field types
- ✅ Updated records with and without upsert functionality
- ✅ Listed records with filtering, sorting, and pagination
- ✅ Deleted single and multiple records
- ✅ Retrieved individual record details
4. **Schema Management Testing**
- ✅ Created tables with multiple field types
- ✅ Added fields to existing tables
- ✅ Updated table and field properties
- ✅ Verified proper error handling for invalid field types
5. **Error Handling Testing**
- ✅ Tested with invalid credentials
- ✅ Verified proper error messages for API limits
- ✅ Confirmed graceful handling of network errors
### Security Considerations 🔒
1. **API Key Management**
- API keys are stored encrypted in the credential system
- Keys are never logged or exposed in error messages
- Credentials are passed securely through the execution context
2. **Webhook Security**
- HMAC-SHA256 signature validation on all incoming webhooks
- Webhook URLs use secure ingress endpoints
- Proper cleanup of webhooks when blocks are deleted
3. **OAuth Security**
- OAuth tokens are securely stored and refreshed
- Scopes are limited to necessary permissions
- Token refresh happens automatically before expiration
## Configuration Requirements
No additional environment variables or configuration changes are
required. The integration uses the existing credential management
system.
## Checklist 📋
#### For code changes:
- [x] I have read the [contributing
instructions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/.github/CONTRIBUTING.md)
- [x] Confirmed that `make lint` passes
- [x] Confirmed that `make test` passes
- [x] Updated documentation where needed
- [x] Added/updated tests for new functionality
- [x] Manually tested all blocks with real Airtable bases
- [x] Verified backwards compatibility of webhook interface changes
#### Security:
- [x] No hard-coded secrets or sensitive information
- [x] Proper input validation on all user inputs
- [x] Secure credential handling throughout
## Changes 🏗️
- Make docker + deps cache actually work on the FE CI
- Run the E2E test data script before Playwright
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI is faster in repeated runs ( _uses cache_ )
- [x] Test data script runs successfully
### For configuration changes:
None
### Changes 🏗️
This PR contains code changes that resolve the cursor jump issue in the **Note** block (#9252). The changes affect the NodeTextBoxInput component so it keeps the text in local state and does not mutate the `value` prop directly.
It also includes a change to use the store admin, which came from a rebase issue but was approved by maintainer @ntindle.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested that the Note block allows editing text normally
https://github.com/user-attachments/assets/f2800bf1-9867-4627-ac9d-44718627b263
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
https://github.com/Significant-Gravitas/AutoGPT/pull/10437 migrated
DatabaseManager away from RestApi, but this change is not yet reflected
in Docker Compose, which causes the self-hosted Docker mode to break.
### Changes 🏗️
Add a DatabaseManager process to the Docker Compose setup.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `docker compose up`
## Changes 🏗️
Fixed WebSocket connection errors during multi-tab logout 💆🏽
<img width="1193" height="273" alt="Screenshot 2025-07-23 at 22 23 35"
src="https://github.com/user-attachments/assets/bf6f964d-bcb0-4a2a-adff-1194defe1e61"
/>
Previously, when users logged out in one browser tab, WebSocket
connections in other open tabs would continue trying to reconnect with
invalid authentication tokens, causing console errors and potential
runtime errors.
### What was fixed
- WebSocket connections are now properly disconnected when logout occurs
in any tab
- Cross-tab logout mechanism now includes WebSocket cleanup to prevent reconnection errors
- Added logic to prevent automatic reconnection after intentional disconnection
- Added E2E tests to cover this case
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual testing: Login in multiple tabs, navigate to builder
(establishes WebSocket), logout from one tab, verify no console errors
- [x] E2E test: Added automated test covering multi-tab logout with
WebSocket cleanup verification
- [x] Verified cross-tab logout still works correctly
- [x] Confirmed WebSocket connections reconnect properly after fresh
login
### For configuration changes:
None
We've been reporting agents that are stuck in the `QUEUED` status. This change also covers agents in the `RUNNING` status that have been stuck for more than 24 hours.
### Changes 🏗️
Report agents that have been in the `RUNNING` status for more than 24 hours (see the sketch below).
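For illustration, a hedged sketch of the query (model and field names are assumptions based on the schema described elsewhere in these notes):
```python
from datetime import datetime, timedelta, timezone

STUCK_THRESHOLD = timedelta(hours=24)

async def find_stuck_running_executions(prisma):
    """Fetch graph executions still marked RUNNING whose last update is
    older than 24 hours, so they can be included in the report."""
    cutoff = datetime.now(timezone.utc) - STUCK_THRESHOLD
    return await prisma.agentgraphexecution.find_many(
        where={"executionStatus": "RUNNING", "updatedAt": {"lt": cutoff}},
    )
```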
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual test
It's hard to debug two processes running in the same pod. We need to
decouple the two processes into two services.
### Changes 🏗️
Move the DatabaseManager out of the RestAPI process into a standalone service.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual test
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
https://github.com/user-attachments/assets/42e1c896-5f3b-447c-aee9-4f5963c217d9
There is now a 🔔 icon on the Navigation bar that shows previous agent
runs and displays real-time agent running status.
If you run an agent, the bell shows a badge with the number of agents running. If you hover over it, a hint appears. If you click on it, it
opens a dropdown and displays the executions with their status ( _which
should match what we have in library functionality, not design-wise_ ).
I leveraged the existing APIs for this purpose. Most of the run logic is
[encapsulated on this
hook](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...feat/agent-notifications?expand=1#diff-a9e7f2904d6283b094aca19b64c7168e8c66be1d5e0bb454be8978cb98526617)
and is also an independent `<AgentActivityDropdown />` component.
Clicking on an agent run opens that run in the library page.
This new functionality is covered by E2E tests 💆🏽✔️
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The navigation bar layout looks good when logged out
- [x] The navigation bar layout looks good when logged in
- [x] Open an agent in the library and click `Run`
- [x] See the real-time activity of the agent running on the navigation
bar bell icon
### For configuration changes:
_No configuration changes needed._
Some AgentInput blocks can store an empty string as the default value, which causes errors for non-string inputs.
Error:
```
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentToggleInputBlock.Input'>: {'name': 'Enriched info (including email), will double the search cost', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid boolean, unable to interpret input [type=bool_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/bool_parsing
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentNumberInputBlock.Input'>: {'name': 'Expected New Leads Count', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/int_parsing
```
### Changes 🏗️
Ignore invalid fields when constructing the agent input schema (see the sketch below).
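For illustration, a minimal sketch of the skip-on-invalid behavior (helper and parameter names are hypothetical):
```python
import logging

from pydantic import ValidationError

logger = logging.getLogger(__name__)

def collect_valid_inputs(input_cls, raw_fields: list[dict]) -> list:
    """Build the agent input schema from stored field dicts, logging and
    skipping any field whose value no longer validates (e.g. an empty
    string stored for a boolean or integer input)."""
    valid = []
    for field in raw_fields:
        try:
            valid.append(input_cls(**field))
        except ValidationError as e:
            logger.warning("Invalid %s: %s, %s", input_cls, field, e)
    return valid
```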
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Use AgentNumberInput and AgentToggleInput with an empty string value.
We set up LaunchDarkly, so now we can dynamically control which blocks are available in the UI.
### Changes 🏗️
Enables the Google blocks + fixes the LD `.env.example` for the local env.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Deploy to test environment and verify the blocks show correctly vs. are hidden when toggling LaunchDarkly rules
### Changes 🏗️
The previous implementation of the `LaunchDarklyProvider` had a race
condition where it would only initialize after the user's authentication
state was fully resolved. This caused two primary issues:
1. A delay in evaluating any feature flags, leading to a "flash of
un-styled/un-flagged content" until the user session was loaded.
2. An unreliable transition from an un-flagged state to a flagged state,
which could cause UI flicker or incorrect flag evaluations upon login.
This pull request refactors the provider to follow a more robust,
industry-standard pattern. It now initializes immediately with an
`anonymous` context, ensuring flags are available from the very start of
the application lifecycle. When the user logs in and their session
becomes available, the provider seamlessly transitions to an
authenticated context, guaranteeing that the correct flags are evaluated
consistently.
### Checklist
#### For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
**Test Plan:**
- [x] **Anonymous User:** Load the application in an incognito window
without logging in. Verify that feature flags are evaluated correctly
for an anonymous user. Check the browser console for the
`[LaunchDarklyProvider] Using anonymous context` message.
- [x] **Login Flow:** While on the site, log in. Verify that the UI
updates with the correct feature flags for the authenticated user. Check
the console for the `[LaunchDarklyProvider] Using authenticated context`
message and confirm the LaunchDarkly client re-initializes.
- [x] **Authenticated User (Page Refresh):** As a logged-in user,
refresh the page. Verify that the application loads directly with the
authenticated user's flags, leveraging the cached session and
bootstrapped flags from `localStorage`.
- [x] **Logout Flow:** While logged in, log out. Verify that the UI
reverts to the anonymous user's state and flags. The provider `key`
should change back to "anonymous", triggering another re-mount.
<details><summary>Summary of Code Changes</summary>
- Refactored `LaunchDarklyProvider` to handle user authentication state
changes gracefully.
- The provider now initializes immediately with an `anonymous` user
context while the Supabase user session is loading.
- Once the user is authenticated, the provider's context is updated to
reflect the logged-in user's details.
- Added a `key` prop to the `<LDProvider>` component, using the user's
ID (or "anonymous"). This forces React to re-mount the provider when the
user's identity changes, ensuring a clean re-initialization of the
LaunchDarkly SDK.
- Enabled `localStorage` bootstrapping (`options={{ bootstrap:
"localStorage" }}`) to cache flags and improve performance on subsequent
page loads.
- Added `console.debug` statements for improved observability into the
provider's state (anonymous vs. authenticated).
</details>
#### For configuration changes:
- `.env.example` is updated or already compatible with my changes
- `docker-compose.yml` is updated or already compatible with my changes
- I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration Changes</summary>
- No configuration changes were made. This PR relies on existing
environment variables (`NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID` and
`NEXT_PUBLIC_LAUNCHDARKLY_ENABLED`).
</details>
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
## Summary
- Add thread safety to NodeExecutionProgress class to prevent race
conditions between graph executor and node executor threads
- Fixes potential data corruption and lost outputs during concurrent
access to shared output lists
- Uses single global lock per node for minimal performance impact
- Instead of blocking on a node's evaluation before adding another node evaluation, we move on to the next node, in case another node completes it.
## Changes
- Added `threading.Lock` to the NodeExecutionProgress class (see the sketch below)
- Protected `add_output()` calls from node executor thread with lock
- Protected `pop_output()` calls from graph executor thread with lock
- Protected `_pop_done_task()` output checks with lock
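For illustration, a minimal sketch of the locking described above (the real class holds more state):
```python
import threading

class NodeExecutionProgress:
    def __init__(self):
        self._lock = threading.Lock()  # one lock per instance
        self.output: dict[str, list] = {}

    def add_output(self, node_exec_id: str, value) -> None:
        # Called from the node executor (asyncio) thread.
        with self._lock:
            self.output.setdefault(node_exec_id, []).append(value)

    def pop_output(self, node_exec_id: str):
        # Called from the graph executor (main) thread.
        with self._lock:
            outputs = self.output.get(node_exec_id)
            return outputs.pop(0) if outputs else None
```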
## Problem Solved
The `NodeExecutionProgress.output` dictionary was being accessed
concurrently:
- `add_output()` called from node executor thread (asyncio thread)
- `pop_output()` called from graph executor thread (main thread)
- Python lists are not thread-safe for concurrent append/pop operations
- This could cause data corruption, index errors, and lost outputs
## Test Plan
- [x] Existing executor tests pass
- [x] No performance regression (operations are microsecond-level)
- [x] Thread safety verified through code analysis
## Technical Details
- Single `threading.Lock()` per NodeExecutionProgress instance (~64
bytes)
- Lock acquisition time (~100-200ns) is minimal compared to list
operations
- Maintains order guarantees for same node_execution_id processing
- No GIL contention issues as operations are very brief
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
The Rest-Api service is a vital service, so we should minimize the amount of external resources impacting it.
The NotificationManager service is a singleton by nature (similar to the scheduler service), so it makes sense to put it in a single pod.
### Changes 🏗️
Move the NotificationManager service from rest-api pod to the scheduler
pod
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual test
## Changes 🏗️
Launch Darkly is not being initialised in production, despite all env variables on paper being set correctly 🧐
I tried doing production builds locally, and I noticed this provider
needs to be initialised on the client because Launch Darkly flags are
designed to work on the client side, where they can access user context
and browser environment.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Did production build locally and run it
- [x] See LD initialised on the console after the `use client` directive
was added
### For configuration changes:
None
Currently, we only create a library entry for the top-most graph when importing a graph from an exported file.
This can cause some complications, as there is no way to remove the
library entry of it.
### Changes 🏗️
Create the library entry for all the subgraphs during the import
process.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Export an agent with subgraphs and import it back.
### Changes 🏗️
This PR adds a new utility block to the basic blocks collection. This
block provides a simple way to reverse the order of elements in any
list.
**New Features:**
- Added the `ReverseListOrderBlock` class
- Block accepts any list as input and returns the same list with
elements in reversed order
- Preserves the original list (creates a copy before reversing)
- Works with lists containing any type of elements
**Technical Details:**
- Input: the list to reverse
- Output: the list with elements in reversed order
- Includes test input/output for validation
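For illustration, the core behavior amounts to the following (the actual block wraps this in the platform's block schema):
```python
def reverse_list_order(items: list) -> list:
    # reversed() iterates without mutating; list() materializes a new
    # copy, so the original list is left untouched.
    return list(reversed(items))

assert reverse_list_order([1, "a", 3.0]) == [3.0, "a", 1]
assert reverse_list_order([]) == []
```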
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created an agent with the ReverseListOrderBlock
- [x] Tested with various list types (numbers, strings, mixed types)
- [x] Verified the block preserves the original list
- [x] Confirmed the block correctly reverses the order of elements
- [x] Tested with empty lists and single-element lists
- [x] Verified the block integrates properly with other blocks in a
workflow
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)
**Note:** No configuration changes required - this is a pure code
addition that uses the existing block framework.
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Summary
- Introduced correct health check API for AppService
- Fixed service health check mechanism to properly handle health status
monitoring
## Changes Made
- Updated health check implementation in AppService.
- Make the REST service health check also check the health of the DatabaseManager (see the sketch below).
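For illustration, a hedged sketch of the combined check (the route path and the DatabaseManager ping are assumptions):
```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

async def database_manager_is_healthy() -> bool:
    """Hypothetical helper: ping the DatabaseManager service process and
    report whether it responds. Stubbed here for illustration."""
    return True

@app.get("/health")
async def health_check() -> JSONResponse:
    db_ok = await database_manager_is_healthy()
    return JSONResponse(
        status_code=200 if db_ok else 503,
        content={
            "rest_api": "ok",
            "database_manager": "ok" if db_ok else "unreachable",
        },
    )
```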
## Test plan
- [x] Verify health check endpoint responds correctly
- [x] Test health check mechanism under various service states
- [x] Validate monitoring and alerting integration
🤖 Generated with [Claude Code](https://claude.ai/code)
### Changes 🏗️
This PR optimizes database performance by adding missing foreign key
indexes, removing unused/redundant indexes, cleaning up all legacy
untracked indexes, and adding performance indexes for materialized views
to achieve 100% optimized database indexing.
**Foreign Key Indexes Added:**
- `AgentGraph`: `[forkedFromId, forkedFromVersion]` - For fork
relationship queries
- `AgentGraphExecution`: `[agentPresetId]` - For preset-based execution
filtering
- `AgentNodeExecution`: `[agentNodeId]` - For node execution lookups
- `AgentNodeExecutionInputOutput`: `[agentPresetId]` - For preset
input/output queries
- `AgentPreset`: `[agentGraphId, agentGraphVersion]` & `[webhookId]` -
For graph and webhook lookups
- `LibraryAgent`: `[agentGraphId, agentGraphVersion]` & `[creatorId]` -
For agent and creator queries
- `StoreListing`: `[agentGraphId, agentGraphVersion]` - For marketplace
agent lookups
- `StoreListingReview`: `[reviewByUserId]` - For user review queries
**Unused/Redundant Indexes Removed:**
- `User.email` - Unused index identified by linter
- `AnalyticsMetrics.userId` - Unused index causing write overhead
- `AnalyticsDetails.type` - Redundant (covered by composite `[userId,
type]`)
- `APIKey.key`, `APIKey.status` - Unused indexes
- Named index `"analyticsDetails"` - Converted to standard composite
index
**All Legacy Untracked Indexes Removed:**
- `idx_store_listing_version_status` - Redundant with Prisma composite
index
- `idx_slv_agent` - Redundant with Prisma-managed `[agentGraphId,
agentGraphVersion]`
- `idx_store_listing_version_approved_listing` - Redundant with unique
constraint
- `StoreListing_agentId_owningUserId_idx` - Legacy index superseded by
current strategy
- `StoreListing_isDeleted_isApproved_idx` - Replaced by optimized
composite index
- `StoreListing_isDeleted_idx` - Redundant with composite index
- `StoreListingVersion_agentId_agentVersion_isDeleted_idx` - Legacy
index replaced
- `idx_store_listing_approved` - Redundant with existing `[owningUserId,
slug]` unique constraint and `[isDeleted, hasApprovedVersion]` index
- `idx_slv_categories_gin` - Specialized array search index removed (can
be re-added if category filtering is implemented)
- `idx_profile_user` - Duplicate of Prisma-managed `Profile_userId_idx`
**Materialized View Performance Indexes Added:**
- `idx_mv_review_stats_rating` on `mv_review_stats(avg_rating DESC)` -
Optimizes sorting agents by rating
- `idx_mv_review_stats_count` on `mv_review_stats(review_count DESC)` -
Optimizes sorting agents by review count
**Result: 100% Optimized Database Indexing**
- All database indexes are now defined and managed through Prisma schema
- No more untracked indexes requiring manual SQL maintenance
- Added performance indexes for materialized views used by marketplace
views
- Improved query performance for agent sorting and filtering
- Enhanced maintainability and consistency across environments
**Schema Comments Updated:**
- Removed all references to dropped untracked indexes
- Simplified documentation to reflect Prisma-only approach for regular
tables
- Added comprehensive documentation for materialized view indexes and
their purposes
- Maintained documentation for materialized view refresh strategy
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Schema changes compile successfully with Prisma
- [x] Migration adds required FK indexes and materialized view
performance indexes
- [x] Migration drops all legacy indexes and redundant untracked indexes
- [x] All pre-commit hooks pass (linting, formatting, type checking)
- [x] No breaking changes to existing foreign key relationships
- [x] Verified existing Prisma indexes cover all query patterns
- [x] Schema comments comprehensively document all indexing strategy
- [x] Materialized view performance indexes optimize marketplace sorting
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds a database index to improve query performance, based on Supabase query performance insights and index recommendations. The index targets a frequently queried relationship to reduce query execution time.
### Changes 🏗️
- Added index on `AgentNodeExecutionInputOutput.agentPresetId`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified schema changes are valid Prisma syntax
- [x] Confirmed indexes target frequently queried columns based on
Supabase recommendations
- [x] Ensured no duplicate indexes are created
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Changes 🏗️
Only initialise Launch Darkly if we have a user and the config is set.
### Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Once merged login into app and see if Launch Darkly initialises
### For configuration changes:
None
This PR reduces the dependency of the API layer on the DatabaseManager service by avoiding the DatabaseManager for credential fetches when a direct query is possible from the API layer.
### Changes 🏗️
* If Prisma is available, use the direct query (see the sketch below).
* Otherwise, utilize the database manager.
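For illustration, a sketch of the fallback pattern (the model/field names and the DatabaseManager client helper are illustrative stand-ins):
```python
from prisma import Prisma

async def get_user_credentials(user_id: str, prisma: Prisma, db_manager):
    """Prefer a direct Prisma query when this process has a database
    connection; otherwise go through the DatabaseManager service."""
    if prisma.is_connected():
        return await prisma.user.find_unique(where={"id": user_id})
    return await db_manager.get_user_credentials(user_id)
```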
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run blocks with credentials like AiTextGeneratorBlock
### Changes 🏗️
<img width="563" height="400" alt="Screenshot 2025-07-19 at 00 50 25"
src="https://github.com/user-attachments/assets/ce9b2fe3-a244-40ed-a7a9-27b983ec90f2"
/>
I introduced an error in my previous PR:
https://github.com/Significant-Gravitas/AutoGPT/pull/10397
( `shouldRender` should be taken into account... )
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login / logout as normal completing challenge
#### For configuration changes:
None
## Changes 🏗️
Do not render the Cloudflare CAPTCHA if the user has already passed
verification.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try with Cloudflare enabled
- [x] Shows the CAPTCHA when not verified
- [x] CAPTCHA is hidden when verified
### For configuration changes
None
## Changes 🏗️
https://github.com/user-attachments/assets/dd635fa1-d8ea-4e5b-b719-2c7df8e57832
Using [LaunchDarkly](https://launchdarkly.com/), introduce the concept of "beta" blocks, which are blocks that will be disabled in production unless enabled via a feature flag. This allows us to safely hide and test certain blocks in production.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Checkout and run FE locally
- [x] With the `beta-blocks` flag `disabled` in LD
- [x] Go to the builder and see **you can't** add the blocks specified
on the flag
- [x] With the `beta-blocks` flag `enabled` in LD
- [x] Go to the builder and see **you can** add the blocks specified on
the flag
### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
🚧 We need to add the `NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID` to the dev and
prod environments.
This PR adds two new blocks for running Replicate models in AutoGPT:
- **ReplicateModelBlock**: Synchronous execution of any Replicate model with custom inputs
Key Features:
- Support for any public Replicate model via model name (e.g., "stability-ai/stable-diffusion")
- Custom input parameters via a dictionary (e.g., `{"prompt": "a beautiful landscape"}`)
- Optional model version specification for reproducible results
- Proper credentials handling using `ProviderName.REPLICATE`
- Comprehensive test suite with 12 test cases covering success, error, and edge cases
- Type-safe implementation with full pyright compliance
- Mock methods for testing and development
A rough usage sketch of what the block wraps is shown below.
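For illustration, a hedged sketch of the underlying Replicate call (the block itself resolves the API token through the platform's credential system; non-official models may need an `owner/model:version` reference):
```python
import replicate

# Illustrative only: a raw Replicate client call equivalent to what the
# block performs. The api_token placeholder stands in for the credential
# resolved via ProviderName.REPLICATE.
client = replicate.Client(api_token="r8_...")
output = client.run(
    "stability-ai/stable-diffusion",  # may require a ":version" suffix
    input={"prompt": "a beautiful landscape"},
)
print(output)
```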
# Checklist 📋
## For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Unit tests for ReplicateModelBlock (6 test cases)
- Test block initialization and configuration
- Test mock methods for development
- Test error handling and edge cases
- Verify type safety with pyright (0 errors)
- Verify code formatting with Black and isort
- Verify linting with Ruff (0 errors)
- Test credentials handling with ProviderName.REPLICATE
- Test model version specification functionality
- Test both synchronous and asynchronous execution paths
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Summary
This PR introduces a complete cloud storage infrastructure and file
upload system that agents can use instead of passing base64 data
directly in inputs, while maintaining backward compatibility for the
builder's node inputs.
### Problem Statement
Currently, when agents need to process files, they pass base64-encoded
data directly in the input, which has several limitations:
1. **Size limitations**: Base64 encoding increases file size by ~33%,
making large files impractical
2. **Memory usage**: Large base64 strings consume significant memory
during processing
3. **Network overhead**: Base64 data is sent repeatedly in API requests
4. **Performance impact**: Encoding/decoding base64 adds processing
overhead
### Solution
This PR introduces a complete cloud storage infrastructure and new file
upload workflow:
1. **New cloud storage system**: Complete `CloudStorageHandler` with
async GCS operations
2. **New upload endpoint**: Agents upload files via `/files/upload` and receive a `file_uri` (sketched below)
3. **GCS storage**: Files are stored in Google Cloud Storage with
user-scoped paths
4. **URI references**: Agents pass the `file_uri` instead of base64 data
5. **Block processing**: File blocks can retrieve actual file content
using the URI
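For illustration, a hedged sketch of the endpoint (the storage call is a stand-in for the real `CloudStorageHandler`, and the 24-hour expiry is an assumption):
```python
from fastapi import APIRouter, UploadFile
from pydantic import BaseModel

router = APIRouter()

class UploadFileResponse(BaseModel):
    file_uri: str
    file_name: str
    size: int
    content_type: str
    expires_in_hours: int

@router.post("/files/upload", response_model=UploadFileResponse)
async def upload_file(file: UploadFile, user_id: str) -> UploadFileResponse:
    content = await file.read()
    # The real handler writes `content` to GCS and returns the path URI.
    file_uri = f"gcs://bucket/users/{user_id}/{file.filename}"
    return UploadFileResponse(
        file_uri=file_uri,
        file_name=file.filename or "upload",
        size=len(content),
        content_type=file.content_type or "application/octet-stream",
        expires_in_hours=24,
    )
```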
### Changes Made
#### New Files Introduced:
- **`backend/util/cloud_storage.py`** - Complete cloud storage
infrastructure (545 lines)
- **`backend/util/cloud_storage_test.py`** - Comprehensive test suite
(471 lines)
#### Backend Changes:
- **New cloud storage infrastructure** in
`backend/util/cloud_storage.py`:
- Complete `CloudStorageHandler` class with async GCS operations
- Support for multiple cloud providers (GCS implemented, S3/Azure
prepared)
- User-scoped and execution-scoped file storage with proper
authorization
- Automatic file expiration with metadata-based cleanup
- Path traversal protection and comprehensive security validation
- Async file operations with proper error handling and logging
- **New `UploadFileResponse` model** in `backend/server/model.py`:
- Returns `file_uri` (GCS path like
`gcs://bucket/users/{user_id}/file.txt`)
- Includes `file_name`, `size`, `content_type`, `expires_in_hours`
- Proper Pydantic schema instead of dictionary response
- **New `upload_file` endpoint** in `backend/server/routers/v1.py`:
- Complete new endpoint for file upload with cloud storage integration
- Returns GCS path URI directly as `file_uri`
- Supports user-scoped file storage for proper isolation
- Maintains fallback to base64 data URI when GCS not configured
- File size validation, virus scanning, and comprehensive error handling
#### Frontend Changes:
- **Updated API client** in
`frontend/src/lib/autogpt-server-api/client.ts`:
- Modified return type to expect `file_uri` instead of `signed_url`
- Supports the new upload workflow
- **Enhanced file input component** in
`frontend/src/components/type-based-input.tsx`:
- **Builder nodes**: Still use base64 for immediate data retention
without expiration
- **Agent inputs**: Use the new upload endpoint and pass `file_uri`
references
- Maintains backward compatibility for existing workflows
#### Test Updates:
- **New comprehensive test suite** in
`backend/util/cloud_storage_test.py`:
- 27 test cases covering all cloud storage functionality
- Tests for file storage, retrieval, authorization, and cleanup
- Tests for path validation, security, and error handling
- Coverage for user-scoped, execution-scoped, and system storage
- **New upload endpoint tests** in `backend/server/routers/v1_test.py`:
- Tests for GCS path URI format (`gcs://bucket/path`)
- Tests for base64 fallback when GCS not configured
- Validates file upload, virus scanning, and size limits
- Tests user-scoped file storage and access control
### Benefits
1. **New Infrastructure**: Complete cloud storage system with
enterprise-grade features
2. **Scalability**: Supports larger files without base64 size penalties
3. **Performance**: Reduces memory usage and network overhead with async
operations
4. **Security**: User-scoped file storage with comprehensive access
control and path validation
5. **Flexibility**: Maintains base64 support for builder nodes while
providing a URI-based approach for agents
6. **Extensibility**: Designed for multiple cloud providers (GCS, S3,
Azure)
7. **Reliability**: Automatic file expiration, cleanup, and robust error
handling
8. **Backward compatibility**: Existing builder workflows continue to
work unchanged
### Usage
**For Agent Inputs:**
```typescript
// 1. Upload file
const response = await api.uploadFile(file);
// 2. Pass file_uri to agent
const agentInput = { file_input: response.file_uri };
```
**For Builder Nodes (unchanged):**
```typescript
// Still uses base64 for immediate data retention
const nodeInput = { file_input: "data:image/jpeg;base64,..." };
```
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All new cloud storage tests pass (27/27)
- [x] All upload file tests pass (7/7)
- [x] Full v1 router test suite passes (21/21)
- [x] All server tests pass (126/126)
- [x] Backend formatting and linting pass
- [x] Frontend TypeScript compilation succeeds
- [x] Verified GCS path URI format (`gcs://bucket/path`)
- [x] Tested fallback to base64 data URI when GCS not configured
- [x] Confirmed file upload functionality works in UI
- [x] Validated response schema matches Pydantic model
- [x] Tested agent workflow with file_uri references
- [x] Verified builder nodes still work with base64 data
- [x] Tested user-scoped file access control
- [x] Verified file expiration and cleanup functionality
- [x] Tested security validation and path traversal protection
#### For configuration changes:
- [x] No new configuration changes required
- [x] `.env.example` remains compatible
- [x] `docker-compose.yml` remains compatible
- [x] Uses existing GCS configuration from media storage
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
The docs fail to build due to changes made while updating the e2e
tests to remove the monitor page.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
Fixes the missing start and end tags that prevented the docs from building.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run the build and check that no errors indicate missing tags
- [x] Deploy to Netlify
This PR adds two new Gmail integration blocks—**Gmail Get Thread** and
**Gmail Reply**—to the platform, enhancing threaded email workflows. Key
changes include:
- **GmailGetThreadBlock**:
- New block that retrieves an entire Gmail thread by `threadId`, with an
option to include or exclude messages from Spam and Trash.
- Supports use cases like fetching all messages in a conversation to
check for responses.
- **GmailReplyBlock**:
- New block that sends a reply within an existing Gmail thread,
maintaining the thread context.
- Accepts detailed input fields including recipients, CC, BCC, subject,
body, and attachments.
- Ensures replies are properly associated with their parent message and
thread.
- **Enhancements to existing Gmail blocks**:
- The `Email` model and related outputs now include a `threadId` field.
- Updated test data and mock data to support threaded operations.
- Expanded OAuth scopes for actions requiring thread metadata.
- **Documentation updates**:
- Added documentation for the new Gmail blocks in both the general block
listing and the detailed Gmail block docs.
- Clarified that the `Email` output now includes the `threadId`.
These updates enable more advanced and context-aware Gmail automations,
such as fetching full conversations and replying inline, supporting
richer communication workflows with Gmail.
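At the Gmail API level, keeping a reply in its thread comes down to
setting `threadId` on the outgoing message and the standard reply
headers. Here is a rough sketch (not the block's actual code),
assuming an already-authenticated google-api-python-client `service`:
```python
import base64
from email.message import EmailMessage


def reply_in_thread(service, thread_id: str, to: str, subject: str,
                    body: str, parent_message_id: str) -> dict:
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = (
        subject if subject.lower().startswith("re:") else f"Re: {subject}"
    )
    # These headers associate the reply with its parent message...
    msg["In-Reply-To"] = parent_message_id
    msg["References"] = parent_message_id
    msg.set_content(body)
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    # ...while threadId keeps it in the same Gmail conversation.
    return service.users().messages().send(
        userId="me", body={"raw": raw, "threadId": thread_id}
    ).execute()
```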
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try all the Gmail blocks
- [x] Send an email reply based on a thread from the Get Thread block
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Changes 🏗️
This PR introduces several key improvements to message handling, block
functionality, and execution reliability:
- **Renamed CSV block to Spreadsheet block** with enhanced CSV/Excel
processing capabilities
- **Added message size limiting and payload truncation** for Redis
communication to prevent connection issues from oversized messages (see
the sketch after this list)
- **Optimized FileReadBlock** to yield content chunks instead of
duplicated outputs for better performance
- **Improved execution termination handling** with better timeout
management and event publishing
- **Enhanced continuous retry decorator** with async function support
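A minimal sketch of what size-capped truncation can look like before
publishing to Redis; the limit and the assumption that the bulky
content lives in a `data` field are illustrative, not taken from this
PR:
```python
import json

MAX_MESSAGE_BYTES = 1024 * 1024  # assumed cap; the actual limit may differ


def truncate_payload(event: dict, limit: int = MAX_MESSAGE_BYTES) -> dict:
    raw = json.dumps(event).encode()
    if len(raw) <= limit:
        return event
    # Clip only the bulky "data" field so subscribers can still parse
    # the envelope and route the event; the byte budget is approximate.
    data = str(event.get("data", ""))
    overhead = len(raw) - len(data.encode())
    clipped = data.encode()[: max(limit - overhead, 0)].decode(errors="ignore")
    return {**event, "data": clipped, "truncated": True}
```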
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified backend starts without errors
- [x] Confirmed message truncation works for large payloads
- [x] Tested spreadsheet block functionality with CSV and Excel files
- [x] Validated execution termination improvements
- [x] Checked FileReadBlock chunk processing
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- Introduced a new script to generate test data for end-to-end (E2E)
tests using API functions, ensuring compatibility with future model
changes.
- The script creates test users, agent blocks, graphs, profiles, library
agents, presets, API keys, and store submissions.
- Utilizes external services for image and video URLs, and includes
error handling for data creation processes.
- Provides a summary of created data upon completion, enhancing the
testing framework for the AutoGPT platform.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The test scripts run successfully without breaking anything, and
the created data is correctly visible in the database.
## Changes 🏗️
<img width="1843" height="321" alt="Screenshot 2025-07-17 at 15 48 01"
src="https://github.com/user-attachments/assets/63f528f7-1dc3-4587-a5af-d02b2c858191"
/>
In the recent PR
https://github.com/Significant-Gravitas/AutoGPT/pull/10394 the
navigation bar disappeared when logged out: a change was introduced so
that the navigation bar does not render when we have no profile data
(_which we won't have when logged out_). This PR fixes that and adds
tests covering the navigation bar in the logged-out state.
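The logged-out check itself is simple; here is a sketch of what such a
test can look like, written with Playwright's Python API for
illustration (the repo's actual E2E suite, selectors, and URL may
differ):
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()  # fresh context: no session, i.e. logged out
    page.goto("http://localhost:3000")  # assumed local frontend URL
    # The regression was the navbar not rendering without profile data,
    # so assert it is visible even with no logged-in user.
    assert page.locator("nav").first.is_visible()
    browser.close()
```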
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run this locally
- [x] See the navbar appear
- [x] E2E tests pass on the CI
### For configuration changes:
None