- #11273
### Changes 🏗️
- Bump `apscheduler` to v3.11.1, which contains a fix for the issue
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
- [x] CI passes
<!-- Clearly explain the need for these changes: -->
This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`
**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override error field with more specific descriptions
when needed
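For illustration, a minimal sketch of the new inheritance pattern, using plain Pydantic as a stand-in (the platform's real `BlockSchema` and `SchemaField` carry more machinery than shown here):

```python
from pydantic import BaseModel, Field


class BlockSchema(BaseModel):
    """Stand-in for the platform's BlockSchema base class."""


class BlockSchemaOutput(BlockSchema):
    # Standardized error field inherited by every block's Output schema;
    # individual blocks may still override it with a more specific description.
    error: str = Field(default="", description="Error message if the block failed")


class Output(BlockSchemaOutput):
    # Block-specific fields; `error` no longer needs to be redeclared.
    result: str = Field(default="", description="The block's primary output")
```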
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
- [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
- [x] Validated core schema inheritance chain works correctly
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes were needed for this refactoring.*
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
### Changes 🏗️
- Increased `max_field_size` in `aiohttp.ClientSession` to 16KB to
handle servers with large headers (e.g., long CSP headers).
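Roughly, the change amounts to the following (a sketch; the actual session is configured elsewhere in the codebase):

```python
import aiohttp


async def make_session() -> aiohttp.ClientSession:
    # aiohttp's default per-header-field limit is 8190 bytes; 16 KiB
    # accommodates servers that send very long headers such as
    # Content-Security-Policy.
    return aiohttp.ClientSession(max_field_size=16384)
```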
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Add unit test that checks it can now parse headers over 8k size
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
<img width="800" height="547" alt="Screenshot 2025-10-29 at 22 11 35"
src="https://github.com/user-attachments/assets/5c700ddc-d770-48ef-9847-7e652c5dedcb"
/>
<br /><br />
- Use
[`react-currency-input-field`](https://www.npmjs.com/package/react-currency-input-field)
for `<Input type="amount" />` under the hood
- so it formats numbers nicely with `,` and `.`
- Simplify form logic
- Make the popover cover the trigger button when open
- Re-organize imports
- Show a `$` prefix in front of the amount inputs
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Open the wallet with credits enabled
- [x] Play with the inputs
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
### Changes 🏗️
- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.
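The added guard looks roughly like this (a sketch; names follow the PR description):

```python
def _ensure_string(value: object, field_name: str) -> str:
    """Validate that the LLM returned a string, not a list or other type."""
    if not isinstance(value, str):
        raise ValueError(
            f"Expected {field_name} to be a string, "
            f"but the LLM returned {type(value).__name__}: {value!r}"
        )
    return value


summary = _ensure_string("A short summary.", "summary")  # OK
# _ensure_string(["part 1", "part 2"], "summary")        # raises ValueError
```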
Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue was that the LLM returned a list of strings instead of a single
string summary, causing `_combine_summaries` to fail on `join`.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Marketplace sort-by functionality was not working on the frontend; this
PR fixes it.
### Changes 🏗️
- Add type hints for sort by
- Fix marketplace sort by drop downs
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested locally
### Changes 🏗️
- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.
Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue was that the frontend error handler unconditionally called
`response.json()` on a non-JSON HTML error page starting with 'A'.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test Plan:
- [x] Created unit tests for the issue that caused the error
- [x] Created unit tests to ensure responses are parsed gracefully
### Changes 🏗️
Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.
**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via
`sorted_by` parameter
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
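As a sketch of the pattern (table and column names here are illustrative, not the actual store schema):

```python
# Whitelist prevents injection through the ORDER BY clause, which cannot
# be parameterized with positional placeholders.
ALLOWED_SORT_COLUMNS = {"updatedAt", "createdAt", "runs", "rating"}


def build_search_query(search: str, category: str, sorted_by: str):
    if sorted_by not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"Invalid sort column: {sorted_by!r}")
    query = (
        'SELECT * FROM "StoreAgent" '
        "WHERE name ILIKE $1 AND $2 = ANY(categories) "
        f'ORDER BY "{sorted_by}" DESC'
    )
    # User inputs travel as parameters ($1, $2), never via string interpolation.
    return query, [f"%{search}%", category]
```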
**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories are safely handled
- All 10 tests in db_test.py pass successfully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All existing tests pass (10/10 tests passing)
- [x] New security tests validate SQL injection prevention
- [x] Verified parameterized queries handle malicious input safely
- [x] Code formatting passes (`poetry run format`)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required for this security fix*
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.
## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`
## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.
## Solution
### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)
### 2. Enhance Documentation (backend/data/graph.py)
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility
## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify:
- Graph exists in user's library
- Graph is not deleted/archived
- User has execution permissions
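A self-contained sketch of the behavior (names are taken from the PR description; the in-memory library lookup is a stand-in for the real database check):

```python
from dataclasses import dataclass


class GraphNotInLibraryError(Exception):
    """Raised when a graph (or graph version) is missing from the user's library."""


@dataclass
class LibraryEntry:
    graph_id: str
    version: int
    is_deleted: bool = False


# Hypothetical in-memory library for the sketch.
LIBRARY = {("user-1", "graph-a"): LibraryEntry("graph-a", version=2)}


def validate_graph_execution_permissions(
    user_id: str, graph_id: str, graph_version: int | None = None
) -> None:
    entry = LIBRARY.get((user_id, graph_id))
    if entry is None or entry.is_deleted:
        raise GraphNotInLibraryError(graph_id)
    # graph_version=None (the new default) skips the version check, so a parent
    # graph can still execute a sub-graph pinned to an older version.
    if graph_version is not None and entry.version != graph_version:
        raise GraphNotInLibraryError(f"{graph_id} v{graph_version}")


validate_graph_execution_permissions("user-1", "graph-a")     # any version: OK
validate_graph_execution_permissions("user-1", "graph-a", 2)  # exact version: OK
```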
## Impact
- ✅ Parent graphs can execute even when they reference older sub-graph
versions
- ✅ Sub-graph updates don't break existing parent graphs
- ✅ Maintains security: still checks library membership and permissions
- ✅ No breaking changes: version-specific validation still available
when needed
## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)
## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks
## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior
Closes #[issue-number-if-applicable]
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
The `<Wallet />` was being rendered twice (one instance hidden with CSS
`hidden`) because of the Navbar layout, which caused logic issues within
the wallet. I changed it to render conditionally via JavaScript instead,
which is better practice than using `hidden`, especially for components
with actual logic.
I also moved the component files closer to where they are used (in the
navbar).
I have a Cursor plugin that removes unused imports but annoyingly
re-organizes them, hence the changes around that...
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] There is only 1 Wallet in the DOM
📨 Fix: Handle Oversized Notification Emails
## Summary
This PR adds logic to detect and handle oversized notification emails
exceeding Postmark's 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.
## Changes
- Added size check in `EmailSender.send_templated()`
- Sends summary email when payload > ~4.5 MB
- Prevents infinite retries and queue clogging
- Added logs for oversized detection
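A sketch of the guard (the ~4.5 MB threshold is from this PR; `EmailSender` internals are simplified away):

```python
# Stay safely under Postmark's 5 MB message limit.
MAX_EMAIL_BYTES = int(4.5 * 1024 * 1024)


def choose_email_body(full_body: str, summary_body: str) -> str:
    """Fall back to a lightweight summary when the full email is oversized."""
    if len(full_body.encode("utf-8")) > MAX_EMAIL_BYTES:
        # Sending the summary avoids the infinite retry loop that an
        # always-rejected oversized payload would otherwise cause.
        return summary_body
    return full_body
```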
Fixes #11119
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
A couple of improvements on **Onboarding Step 5**:
- Show a spinner when the page is loading ( better contrast / context
than skeleton in this case )
- Prevent the run button from being disabled if credentials failed to
load
- while disabling would normally be good/expected behavior, this will
help us debug the production issue where credentials fail to load
silently, since running the agent will throw an error we can see
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new account/signup
- [x] On Onboarding Step 5 test the above
- Resolves #11251
This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)
### Changes 🏗️
- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain the serialized name through a field alias (sketched after this list)
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints
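For example, the `json_data` rename keeps the wire format stable via a field alias (a sketch; the real model has more fields):

```python
from pydantic import BaseModel, Field


class MainCodeExecutionResult(BaseModel):
    # Renamed from `json` (which shadows BaseModel machinery) but still
    # serialized as "json" via the alias, so the wire format is unchanged.
    json_data: dict = Field(default_factory=dict, alias="json")

    model_config = {"populate_by_name": True}


result = MainCodeExecutionResult(json={"ok": True})
print(result.model_dump(by_alias=True))  # {'json': {'ok': True}}
```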
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
- Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
### Changes 🏗️
### Before
<img width="800" height="649" alt="Screenshot_2025-10-23_at_00 44 59"
src="https://github.com/user-attachments/assets/fd717d39-772a-4331-bc54-4db15a9a3107"
/>
### After
<img width="800" height="555" alt="Screenshot 2025-10-27 at 23 19 10"
src="https://github.com/user-attachments/assets/64878bd0-3a96-4b3a-8344-1a88c89de52e"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try to signup with a non-approved email
- [x] You see the modal with an updated copy
## Changes 🏗️
The mobile 📱 experience is still a mess but this helps a little.
### Before
<img width="350" height="395" alt="Screenshot 2025-10-24 at 18 26 18"
src="https://github.com/user-attachments/assets/75eab232-8c37-41e7-a51d-dbe07db336a0"
/>
### After
<img width="350" height="406" alt="Screenshot 2025-10-24 at 18 25 54"
src="https://github.com/user-attachments/assets/ecbd8bbd-8a94-4775-b990-c8b51de48cf9"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Load the app
- [x] Check the Tally popup button copy
- [x] The button still works
### Changes 🏗️
- Modifies the LLM block to extract the actual response from the
dictionary returned by the LLM, instead of yielding the entire
dictionary. This addresses
[AUTOGPT-SERVER-6EY](https://sentry.io/organizations/significant-gravitas/issues/6950850822/).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] After applying the fix, I ran the agent that triggered the Sentry
error and confirmed that it now completes successfully without errors.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Fixes
[AUTOGPT-SERVER-6KZ](https://sentry.io/organizations/significant-gravitas/issues/6976926125/).
The issue was that redirect handling stripped the URL scheme, causing
subsequent requests to fail validation and hit a 404.
- Ensures the hostname in the URL is properly IDNA-encoded after
validation.
- Reconstructs the netloc with the encoded hostname and preserves the
port if it exists.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2204774
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6976926125/?seerDrawer=true)
### Changes 🏗️
**backend/util/request.py:**
- Fixed URL validation to properly preserve port numbers when
reconstructing netloc
- Ensures IDNA-encoded hostname is combined with port (if present)
before URL reconstruction
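A simplified sketch of the netloc reconstruction (the real code in `backend/util/request.py` performs additional validation):

```python
from urllib.parse import urlparse, urlunparse


def canonicalize_host(url: str) -> str:
    """Re-encode the hostname with IDNA and preserve an explicit port."""
    parts = urlparse(url)
    if not parts.hostname:
        raise ValueError(f"URL has no hostname: {url!r}")
    encoded_host = parts.hostname.encode("idna").decode()
    # Reconstruct netloc, keeping the port if one was given.
    netloc = f"{encoded_host}:{parts.port}" if parts.port else encoded_host
    return urlunparse(parts._replace(netloc=netloc))


print(canonicalize_host("https://www.target.com:8443/path"))
```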
**Test Results:**
- ✅ Tested request to https://www.target.com/ (original failing URL from
Sentry issue)
- ✅ Status: 200, Content retrieved successfully (339,846 bytes)
- ✅ Port preservation verified for URLs with explicit ports
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested request to https://www.target.com/ (original failing URL)
- [x] Verified status code 200 and successful content retrieval
- [x] Verified port preservation in URL validation
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
- Resolves #10980
- 2nd attempt after #11075 broke some things
Fixes unnecessary graph re-saving when no changes were made after
initial save. More specifically, this PR fixes two causes of this issue:
- Frontend node IDs were being compared to backend IDs, which won't
match if the graph has been modified and saved since loading.
- `fillDefaults` was being applied to all nodes (including existing
ones) on element creation, and empty values were being stripped
*post-save* with `removeEmptyStringsAndNulls`. This invisible
auto-modification of node input data meant that in some common cases the
graph would never be in sync with the backend.
### Changes 🏗️
- Fix node ID handling
- Use `node.data.backend_id ?? node.id` instead of `node.id` in
`prepareSaveableGraph`
- Also map link source/sink IDs to their corresponding backend IDs
- Add note about `node.data.backend_id` to `_saveAgent`
- Use `node.data.backend_id || node.id` as display ID in `CustomNode`
- Prevent auto-modification of node input data on existing nodes
- Prune empty values (`undefined`, `null`, `""`) from node input data
*pre-save* instead of post-save
- Related: improve typing and functionality of
`fillObjectDefaultsFromSchema` (moved and renamed from `fillDefaults`)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Node display ID updates on save
- [x] Clicking save a second time (without making more changes) doesn't
cause re-save
- [x] Updating nodes with dynamic input links (e.g. Create Dictionary
Block) doesn't make the links disappear
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Prevented unintended auto-modification of existing nodes during
editing
* Improved consistency of node and connection identifiers in saved
graphs
* **Improvements**
* Enhanced node title display logic for clearer node identification
* Optimized data cleanup utilities for more robust input processing in
the builder
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Removes CLAUDE_3_5_SONNET and CLAUDE_3_5_HAIKU from LlmModel enum, model
metadata, and cost configuration since they are deprecated
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify the models are gone from the llm blocks
## Changes 🏗️
We weren't fetching all library agents, just the first 15, to compute
the agent map on the Agent Activity dropdown. We suspect that is causing
some agent executions to show up as `Unknown agent`.
With these changes, I'm fetching all the library agents upfront (_without
blocking page load_) and caching them in the browser, so we have all
the details to render the agent runs. This is re-used in the library as
well for a fast initial load on the agents list page.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First request populates cache; subsequent identical requests hit
cache
- [x] Editing an agent invalidates relevant cache keys and serves fresh
data
- [x] Different query params generate distinct cache entries
- [x] Cache layer gracefully falls back to live data on errors
- [x] 404 behavior for unknown agents unchanged
### For configuration changes:
None
## Changes 🏗️
The onboarding `Run` button is sometimes disabled when an agent
requiring credentials is selected. We think this is because the
credentials are loaded _async_ by a sub-component (`<CredentialsInputs />`),
and there wasn't a way for the parent component to know whether they had
loaded.
- Refactored **Step 5** of onboarding to adhere to our code conventions
- split concerns and colocated state
- used generated API hooks
- the UI will only render once API calls succeed
- Created a system where `<CredentialsInputs />` notifies the parent
component when it loads
- Did minor adjustments here and there
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I will know once I find an agent with credentials that I can
run....
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added visual agent selection card displaying agent details during
onboarding
* Introduced credentials input management for agent configuration
* Added onboarding guidance for initiating agent runs
* **Improvements**
* Enhanced onboarding flow with improved state management
* Refined login state handling
* Adjusted spacing in agent rating display
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Categories and Creators were not sanitized in the full text search
### Changes 🏗️
- apply sanitization to categories and creators
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] run tests to check it still works
Added error output pins to all Firecrawl blocks as standard on the
AutoGPT platform. The base block execution code already handles error
yielding, so no try-catch logic was needed.
- FirecrawlScrapeBlock: Added error output pin for scrape failures
- FirecrawlCrawlBlock: Added error output pin for crawl failures
- FirecrawlExtractBlock: Added error output pin for extraction failures
- FirecrawlMapBlock: Added error output pin for map failures
- FirecrawlSearchBlock: Added error output pin for search failures
Resolves #11253
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
We're currently seeing errors in the `DatabaseManager` while it's
shutting down, like:
```
WARNING [DatabaseManager] Termination request: SystemExit; 0 executing cleanup.
INFO [DatabaseManager] ⏳ Disconnecting Database...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection started...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection completed successfully.
INFO [DatabaseManager] Terminated.
ERROR POST /create_or_add_to_user_notification_batch failed: Failed to create or add to notification batch for user {user_id} and type AGENT_RUN: NoneType: None
```
This indicates two issues:
- The service doesn't wait for pending RPC calls to finish before
terminating
- We're using `logger.exception` outside an error handling context,
causing the confusing and not much useful `NoneType: None` to be printed
instead of error info
### Changes 🏗️
- Implement graceful shutdown in `AppService` so in-flight RPC calls can
finish
- Add tests for graceful shutdown
- Prevent `AppService` accepting new requests during shutdown
- Rework `AppService` lifecycle management; add support for async
`lifespan`
- Fix `AppService` endpoint error logging
- Improve logging in `AppProcess` and `AppService`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Deploy to Dev cluster, then `kubectl rollout restart` the different
services a few times
- [x] -> `DatabaseManager` doesn't break on re-deployment
- [x] -> `Scheduler` doesn't break on re-deployment
- [x] -> `NotificationManager` doesn't break on re-deployment
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify AIShortformVideoCreatorBlock costs 307 credits when
executed
- [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
- [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
This PR implements real-time agent execution functionality in the new
flow editor, enabling users to run, monitor, and view results of their
agent workflows directly within the builder interface.
https://github.com/user-attachments/assets/8a730e08-f88d-49d4-be31-980e2c7a2f83
#### Key Features Added:
##### 1. **Agent Execution Controls**
- Added "Run Agent" / "Stop Agent" button with gradient styling in the
builder interface
- Implemented execution state management through a new `graphStore` for
tracking running status
- Save graph automatically before execution to ensure latest changes are
persisted
##### 2. **Real-time Execution Monitoring**
- Implemented WebSocket-based real-time updates for node execution
status via `useFlowRealtime` hook
- Subscribe to graph execution events and node execution events for live
status tracking
- Visual execution status badges on nodes showing states: `QUEUED`,
`RUNNING`, `COMPLETED`, `FAILED`, etc.
- Animated gradient border effect when agent is actively running
##### 3. **Node Execution Results Display**
- New `NodeDataRenderer` component to display input/output data for each
executed node
- Collapsible result sections with formatted JSON display
- Prepared UI for future functionality: copy, info, and expand actions
for node data
#### Technical Implementation:
- **State Management**: Extended `nodeStore` with execution status and
result tracking methods
- **WebSocket Integration**: Real-time communication for execution
updates without polling
- **Component Architecture**: Modular components for execution controls,
status display, and result rendering
- **Visual Feedback**: Color-coded status badges and animated borders
for clear execution state indication
#### TODO Items for Future PRs:
- Complete implementation of node result action buttons (copy, info,
expand)
- Add agent output display component
- Implement schedule run functionality
- Handle credential and input parameters for graph execution
- Add tooltips for better UX
### Checklist
- [x] Create a new agent with at least 3 blocks and verify execution
starts correctly
- [x] Verify real-time status updates appear on nodes during execution
- [x] Confirm execution results display in the node output sections
- [x] Verify the animated border appears when agent is running
- [x] Check that node status badges show correct states (QUEUED,
RUNNING, COMPLETED, etc.)
- [x] Test WebSocket reconnection after connection loss
- [x] Verify graph is saved before execution begins
Improves the "not on waitlist" error display based on feedback.
- Follow-up to #11198
- Follow-up to #11196
### Changes 🏗️
- Use standard `ErrorCard`
- Improve text strings
- Merge `isWaitlistError` and `isWaitlistErrorFromParams`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] We need to test in dev because we don't have a waitlist locally,
and will revert if it doesn't work
- deploy to dev environment and sign up with a non approved account and
see if error appears
Part of our effort to eliminate preventable warnings and errors.
- Resolves #11237
### Changes 🏗️
- Exclude `undefined` query params in API requests
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Open the Builder without a `flowVersion` URL parameter
- [x] -> `GET /api/library/agents/by-graph/{graph_id}` succeeds
- Open the builder with a `flowVersion` URL parameter
- [x] -> version is correctly included in request URL parameters
## Changes 🏗️
- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
- track `add_to_library`
- track `download_agent`
- track `run_agent`
- track `schedule_agent`
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Analytics load correctly locally
- [x] Events fire in production
### For configuration changes:
Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)
To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.
**File/Location:**
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.
**Methods/Imports/Definitions:**
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.
### Changes 🏗️
- Limit Sentry console capture to warnings and worse
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed
<!-- Clearly explain the need for these changes: -->
This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError
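The conversion looks roughly like this (a sketch; the actual `TextFormatter` configures its own Jinja2 environment):

```python
import logging

from jinja2 import Environment
from jinja2.exceptions import TemplateError

logger = logging.getLogger(__name__)
env = Environment()


def format_string(template: str, **values) -> str:
    try:
        return env.from_string(template).render(**values)
    except TemplateError as e:
        # Re-raise as ValueError so the API layer maps it to a 400
        # (client error) instead of a 500.
        logger.warning("Template rendering failed: %s", e)
        raise ValueError(f"Template rendering failed: {e}") from e
```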
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan: -->
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
- [x] Confirmed that valid templates continue to work correctly
- [x] Checked that warning logs are generated for template errors
- [x] Validated that the exception chain is preserved with `from e`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Resolves#11226
### Changes 🏗️
- Drop use of `CloudLoggingHandler`, which the docs state isn't for use
in GKE
- For cloud logging, output only structured log entries to `stdout`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test deploy to dev and check logs
Reverts the change that made provider blocks store in the DB.
### Changes 🏗️
- revert the change
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have reverted the merge
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups
## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall
back to database manager client
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
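Sketched, the fallback pattern is (method names per the PR; the client objects are assumed):

```python
# In the real change this is wrapped with @cached(maxsize=1000, ttl_seconds=3600).
async def get_user_context(user_id: str, prisma, db_manager):
    if prisma.is_connected():
        return await prisma.user.find_unique(where={"id": user_id})
    # Local client isn't connected (e.g. during pod startup/shutdown): route
    # the lookup through the database manager service instead of failing.
    return await db_manager.get_user_by_id(user_id)
```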
## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Standardize all runtime environment checks on the Front-end, and their
associated conditions, to run against a single environment service where
all the environment config is centralized and hence easier to manage.
This helps prevent the typos and bugs that come with manually asserting
against environment variables (which are typed as `string`); the helper
functions are easier to read and re-use across the codebase.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app and click around
- [x] Everything is smooth
- [x] Test on the CI and types are green
### For configuration changes:
None 🙏🏽
## Changes 🏗️
Document how to contribute on the Front-end so it is easier for
non-regular contributors.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack
### For configuration changes:
None
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This adds 5 seconds of extra latency to each flag check when LD is down, as it is now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.
- Follow-up to #11221
### Changes 🏗️
- Only try to init LaunchDarkly once
- Improve surrounding log statements in the `feature_flag` module
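The guard is conceptually simple (a sketch using the LaunchDarkly server SDK; the module layout is assumed):

```python
import ldclient
from ldclient.config import Config

_ld_init_attempted = False


def get_ld_client():
    """Attempt LaunchDarkly initialization at most once per process."""
    global _ld_init_attempted
    if not _ld_init_attempted:
        _ld_init_attempted = True
        # If LD is down this may yield a non-initialized client, but we
        # won't pay the multi-second init timeout again on every flag check.
        ldclient.set_config(Config("sdk-key-placeholder"))
    return ldclient.get()
```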
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- This is a critical hotfix; we'll see its effect once deployed
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.
### Changes 🏗️
- Wrap `LaunchDarklyIntegration` init in a try/except
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- We'll see if it works once it deploys
## Problem
The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.
**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)
**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ!
No transcripts were found for any of the requested language codes: ('en',)
For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```
## Solution
Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:
1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
- Manually created transcripts (any language)
- Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all
**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ❌ Raises NoTranscriptFound
# After: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ✅ Returns Hungarian transcript
```
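The fallback itself has roughly this shape (assuming the pre-1.0 `youtube_transcript_api` interface that the quoted error message suggests):

```python
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi


def get_transcript(video_id: str):
    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        # Backward compatible: English still wins when it exists.
        return transcript_list.find_transcript(["en"]).fetch()
    except NoTranscriptFound:
        # Prefer manually created transcripts in any language,
        # then fall back to auto-generated ones.
        manual = [t for t in transcript_list if not t.is_generated]
        generated = [t for t in transcript_list if t.is_generated]
        candidates = manual + generated
        if not candidates:
            raise
        return candidates[0].fetch()
```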
## Changes
- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order
## Testing
- ✅ All 7 new unit tests pass
- ✅ Block integration test passes
- ✅ Full test suite: 621 passed, 0 failed (no regressions)
- ✅ Code formatting and linting pass
## Impact
This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:
- ✅ Videos in any language can now be transcribed
- ✅ English is still preferred when available
- ✅ No breaking changes to existing functionality
- ✅ Graceful degradation to available languages
Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
### Need 💡
This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.
### Changes 🏗️
- **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests.
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
- Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
- **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
- **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.
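In outline, the refresh flow described above (endpoint and parameter names as summarized in this PR; credentials are placeholders):

```python
import base64
import time


def build_refresh_request(client_id: str, client_secret: str, refresh_token: str) -> dict:
    # Linear recommends HTTP Basic Authentication for token refresh requests.
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": "https://api.linear.app/oauth/token",
        "headers": {
            "Authorization": f"Basic {basic}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "data": {"grant_type": "refresh_token", "refresh_token": refresh_token},
    }


def needs_refresh(access_token_expires_at: float, buffer_seconds: int = 300) -> bool:
    # The PR's expiration logic uses a 5-minute buffer.
    return time.time() >= access_token_expires_at - buffer_seconds
```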
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
- [x] Tested token expiration logic with a 5-minute buffer.
- [x] Validated migration detection for old vs. new token types.
- [x] Checked code syntax and import compatibility.
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Changes 🏗️
Following https://datafa.st/docs/nextjs-app-router
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform
### For configuration changes:
None
Fix issue with identifying errors :(
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] We have to test in dev due to the waitlist integration, so we are
merging; will revert if it fails
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Summary
This PR improves the user experience for users who are not on the
waitlist during sign-up. When a user attempts to sign up or log in with
an email that's not on the allowlist, they now see a clear, helpful
modal with a direct call-to-action to join the waitlist.
Fixes
[OPEN-2794](https://linear.app/autogpt/issue/OPEN-2794/display-waitlist-error-for-users-not-on-waitlist-during-sign-up)
## Changes
- ✨ Updated `EmailNotAllowedModal` with improved messaging and a "Join
Waitlist" button
- 🔧 Fixed OAuth provider signup/login to properly display the waitlist
modal
- 📝 Enhanced auth-code-error page to detect and display
waitlist-specific errors
- 💬 Added helpful guidance about checking email address and Discord
support link
- 🎯 Consistent waitlist error handling across all auth flows (regular
signup, OAuth, error pages)
## Test Plan
Tested locally by:
1. Attempting signup with non-allowlisted email - modal appears ✅
2. Attempting Google SSO with non-allowlisted account - modal appears ✅
3. Modal shows "Join Waitlist" button that opens
https://agpt.co/waitlist ✅
4. Help text about checking email and Discord support is visible ✅
## Screenshots
The new waitlist modal includes:
- Clear "Join the Waitlist" title
- Explanation that platform is in closed beta
- "Join Waitlist" button (opens in new tab)
- Help text about checking email address
- Discord support link for users who need help
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Summary
Fix critical UserBalance migration and spending issues affecting users
with credits from transaction history but no UserBalance records.
## Root Issues Fixed
### Issue 1: UserBalance Migration Complexity
- **Problem**: Complex data migration with timestamp logic issues and
potential race conditions
- **Solution**: Simplified to idempotent table creation only,
application handles auto-population
### Issue 2: Credit Spending Bug
- **Problem**: Users with $10.0 from transaction history couldn't spend
$0.16
- **Root Cause**: `_add_transaction` and `_enable_transaction` only
checked UserBalance table, returning 0 balance for users without records
- **Solution**: Enhanced both methods with transaction history fallback
logic
### Issue 3: Exception Handling Inconsistency
- **Problem**: Raw SQL unique violations raised different exception
types than Prisma ORM
- **Solution**: Convert raw SQL unique violations to
`UniqueViolationError` at source
## Changes Made
### Migration Cleanup
- **Idempotent operations**: Use `CREATE TABLE IF NOT EXISTS`, `CREATE
INDEX IF NOT EXISTS`
- **Inline foreign key**: Define constraint within `CREATE TABLE`
instead of separate `ALTER TABLE`
- **Removed data migration**: Application creates UserBalance records
on-demand
- **Safe to re-run**: No errors if table/index/constraint already exists
### Credit Logic Fixes
- **Enhanced `_add_transaction`**: Added transaction history fallback in
`user_balance_lock` CTE
- **Enhanced `_enable_transaction`**: Added same fallback logic for
payment fulfillment
- **Exception normalization**: Convert raw SQL unique violations to
`UniqueViolationError`
- **Simplified `onboarding_reward`**: Use standardized
`UniqueViolationError` catching
### SQL Fallback Pattern
```sql
COALESCE(
(SELECT balance FROM UserBalance WHERE userId = ? FOR UPDATE),
-- Fallback: compute from transaction history if UserBalance doesn't exist
(SELECT COALESCE(ct.runningBalance, 0)
FROM CreditTransaction ct
WHERE ct.userId = ? AND ct.isActive = true AND ct.runningBalance IS NOT NULL
ORDER BY ct.createdAt DESC LIMIT 1),
0
) as balance
```
## Impact
### Before
- ❌ Users with transaction history but no UserBalance couldn't spend
credits
- ❌ Migration had complex timestamp logic with potential bugs
- ❌ Raw SQL and Prisma exceptions handled differently
- ❌ Error: "Insufficient balance of $10.0, where this will cost $0.16"
### After
- ✅ Seamless spending for all users regardless of UserBalance record
existence
- ✅ Simple, idempotent migration that's safe to re-run
- ✅ Consistent exception handling across all credit operations
- ✅ Automatic UserBalance record creation during first transaction
- ✅ Backward compatible - existing users unaffected
## Business Value
- **Eliminates user frustration**: Users can spend their credits
immediately
- **Smooth migration path**: From old User.balance to new UserBalance
table
- **Better reliability**: Atomic operations with proper error handling
- **Maintainable code**: Consistent patterns across credit operations
## Test Plan
- [ ] Manual testing with users who have transaction history but no
UserBalance records
- [ ] Verify migration can be run multiple times safely
- [ ] Test spending credits works for all user scenarios
- [ ] Verify payment fulfillment (`_enable_transaction`) works correctly
- [ ] Add comprehensive test coverage for this scenario
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Problem
High-QPS failures on `spend_credits` operations due to lock contention
from `pg_advisory_xact_lock`, which serialized transactions and caused
multi-second wait times.
## Solution
Replace PostgreSQL advisory locks with atomic database operations using
CTEs (Common Table Expressions).
### Key Changes
- **Add persistent balance column** to User table for O(1) balance
lookups
- **Atomic CTE-based operations** for all credit transactions using
UPDATE...RETURNING pattern
- **Comprehensive concurrency tests** with 7 test scenarios including
stress testing
- **Remove all advisory lock usage** from the credit system
### Implementation Details
1. **Migration**: Adds balance column with backfill from transaction
history
2. **Atomic Operations**: All credit operations now use single atomic
CTEs that update balance and create transaction in one query
3. **Race Condition Prevention**: WHERE clauses in UPDATE statements
ensure balance never goes negative
4. **BetaUserCredit Compatibility**: Preserved monthly refill logic with
updated `_add_transaction` signature
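For illustration, a minimal sketch of this pattern, assuming a
Prisma-style `query_raw` client, quoted camelCase columns, and an
illustrative `amount` column (an approximation, not the actual
implementation):
```python
SPEND_SQL = """
WITH spend AS (
    UPDATE "User"
    SET balance = balance - $2
    WHERE id = $1 AND balance >= $2  -- guard: balance can never go negative
    RETURNING balance
)
INSERT INTO "CreditTransaction" ("userId", amount, "runningBalance")
SELECT $1, -$2, balance FROM spend
RETURNING "runningBalance";
"""

async def spend_credits(db, user_id: str, cost: int) -> int:
    # One round trip: the balance update and the transaction row commit
    # atomically, so no advisory lock is needed.
    rows = await db.query_raw(SPEND_SQL, user_id, cost)
    if not rows:  # the WHERE guard matched nothing -> insufficient balance
        raise ValueError(f"Insufficient balance to spend {cost}")
    return rows[0]["runningBalance"]
```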
### Performance Impact
- ✅ Eliminated lock contention bottlenecks
- ✅ O(1) balance lookups instead of O(n) transaction aggregation
- ✅ Atomic operations prevent race conditions without locks
- ✅ Supports high QPS without serialization delays
### Testing
- All existing tests pass
- New concurrency test suite (`credit_concurrency_test.py`) with:
- Concurrent spends from same user
- Insufficient balance handling
- Mixed operations (spends, top-ups, balance checks)
- Race condition prevention
- Integer overflow protection
- Stress testing with 100 concurrent operations
### Breaking Changes
None - all existing APIs maintain compatibility
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Enhanced top‑up flows with top‑up types, clearer credit→dollar
formatting, and idempotent onboarding rewards.
* **Bug Fixes**
* Fixed race conditions for concurrent spends/top‑ups, added
integer‑overflow and underflow protection, stronger input validation,
and improved refund/dispute handling.
* **Refactor**
* Persisted per‑user balance with atomic updates for reliable balances;
admin history now prefetches balances.
* **Tests**
* Added extensive concurrency, refund, ceiling/underflow and migration
test suites.
* **Chores**
* Database migration to add persisted user balance; APIKey status
extended (SUSPENDED).
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
## Summary
Fixes a critical serialization bug introduced in PR #11187 where
`SafeJson` failed to serialize dictionaries containing Pydantic models,
causing 500 Internal Server Errors in the executor service.
## Problem
The error manifested as:
```
CRITICAL: Operation Approaching Failure Threshold: Service communication: '_call_method_async'
Current attempt: 50/50
Error: HTTPServerError: HTTP 500: Server error '500 Internal Server Error'
for url 'http://autogpt-database-manager.prod-agpt.svc.cluster.local:8005/create_graph_execution'
```
Root cause in `create_graph_execution`
(backend/data/execution.py:656-657):
```python
"credentialInputs": SafeJson(credential_inputs) if credential_inputs else Json({})
```
Where `credential_inputs: Mapping[str, CredentialsMetaInput]` is a dict
containing Pydantic models.
After PR #11187's refactor, `_sanitize_value()` only converted top-level
BaseModel instances to dicts, but didn't handle BaseModel instances
nested inside dicts/lists/tuples. This caused Prisma's JSON serializer
to fail with:
```
TypeError: Type <class 'backend.data.model.CredentialsMetaInput'> not serializable
```
## Solution
Added BaseModel handling to `_sanitize_value()` to recursively convert
Pydantic models to dicts before sanitizing:
```python
elif isinstance(value, BaseModel):
    # Convert Pydantic models to dict and recursively sanitize
    return _sanitize_value(value.model_dump(exclude_none=True))
```
This ensures all nested Pydantic models are properly serialized
regardless of nesting depth.
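Sketched in full, the sanitizer is roughly the following (the character
class and helper shape are approximations of the real `_sanitize_value`,
not a verbatim copy):
```python
import re

from pydantic import BaseModel

# Control characters PostgreSQL rejects; \t (\x09), \n (\x0a), \r (\x0d) stay.
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def _sanitize_value(value):
    if isinstance(value, str):
        return _CONTROL_CHARS.sub("", value)
    elif isinstance(value, dict):
        return {k: _sanitize_value(v) for k, v in value.items()}
    elif isinstance(value, (list, tuple)):
        return [_sanitize_value(v) for v in value]
    elif isinstance(value, BaseModel):
        # The fix: convert nested Pydantic models before recursing.
        return _sanitize_value(value.model_dump(exclude_none=True))
    return value
```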
## Changes
- **backend/util/json.py**: Added BaseModel check to `_sanitize_value()`
function
- **backend/util/test_json.py**: Added 6 comprehensive tests covering:
- Dict containing Pydantic models
- Deeply nested Pydantic models
- Lists of Pydantic models in dicts
- The exact CredentialsMetaInput scenario
- Complex mixed structures
- Models with control characters
## Testing
✅ All new tests pass
✅ Verified fix resolves the production 500 error
✅ Code formatted with `poetry run format`
## Related
- Fixes issues introduced in PR #11187
- Related to executor service 500 errors in production
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Problem
When running multiple backend pods in production, requests can be routed
to different pods causing inconsistent cache states. Additionally, the
current cache implementation in `autogpt_libs` doesn't support shared
caching across processes, leading to data inconsistency and redundant
cache misses.
### Changes 🏗️
- **Moved cache implementation from autogpt_libs to backend**
(`/backend/backend/util/cache.py`)
- Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
- Centralized cache utilities within the backend module
- Updated all import statements across the codebase
- **Implemented Redis-based shared caching**
- Added `shared_cache` parameter to `@cached` decorator for
cross-process caching
- Implemented Redis connection pooling for efficient cache operations
- Added support for cache key pattern matching and bulk deletion
- Added TTL refresh on cache access with `refresh_ttl_on_get` option
- **Enhanced cache functionality**
- Added thundering herd protection with double-checked locking
- Implemented thread-local caching with `@thread_cached` decorator
- Added cache management methods: `cache_clear()`, `cache_info()`,
`cache_delete()`
- Added support for both sync and async functions
- **Updated store caching** (`/backend/server/v2/store/cache.py`)
- Enabled shared caching for all store-related cache functions
- Set appropriate TTL values (5-15 minutes) for different cache types
- Added `clear_all_caches()` function for cache invalidation
- **Added Redis configuration**
- Added Redis connection settings to backend settings
- Configured dedicated connection pool for cache operations
- Set up binary mode for pickle serialization
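For reference, hypothetical usage of the decorator as described (the
`ttl` parameter name and exact call shapes are assumptions, not verified
against the final API):
```python
from backend.util.cache import cached

@cached(shared_cache=True, ttl=300, refresh_ttl_on_get=True)
async def get_store_listings(page: int) -> list[dict]:
    # Expensive DB work would happen here; the result is shared across
    # pods via Redis instead of being recomputed per process.
    return [{"page": page}]

# Management helpers mentioned above:
#   get_store_listings.cache_delete(1)  # drop a single entry
#   get_store_listings.cache_clear()    # drop everything
#   get_store_listings.cache_info()     # inspect cache stats
```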
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify Redis connection and cache operations work correctly
- [x] Test shared cache across multiple backend instances
- [x] Verify cache invalidation with `clear_all_caches()`
- [x] Run cache tests: `poetry run pytest
backend/backend/util/cache_test.py`
- [x] Test thundering herd protection under concurrent load
- [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
- [x] Test thread-local caching for request-scoped data
- [x] Ensure no performance regression vs in-memory cache
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Redis cache configuration uses existing Redis service settings
(REDIS_HOST, REDIS_PORT, REDIS_PASSWORD)
- No new environment variables required
## Summary
Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.
- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function
## Key Changes
### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now
takes a `user_id` parameter and uses LaunchDarkly to determine which
credit model to return:
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system
- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
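A rough sketch of the selection logic (the flag-evaluation helper is a
stand-in for the real LaunchDarkly call, and the class stubs are
placeholders):
```python
class UserCredit: ...        # payment system enabled, no monthly refills
class BetaUserCredit: ...    # current behavior with monthly refills

async def is_flag_enabled(flag: str, user_id: str) -> bool:
    return False  # stand-in for the per-user LaunchDarkly evaluation

async def get_user_credit_model(user_id: str):
    # The credit model is now chosen per user at call time.
    if await is_flag_enabled("ENABLE_PLATFORM_PAYMENT", user_id):
        return UserCredit()
    return BetaUserCredit()
```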
### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation
### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function
## Deployment Strategy
This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated
## Test Plan
- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes the `Invalid \escape` error occurring in
`/upsert_execution_output` endpoint by completely rewriting the SafeJson
implementation.
## Problem
- Error: `POST /upsert_execution_output failed: Invalid \escape: line 1
column 36404 (char 36403)`
- Caused by data containing literal backslash-u sequences (e.g.,
`\u0000` as text, not actual null characters)
- Previous implementation tried to remove problematic escape sequences
from JSON strings
- This created invalid JSON when it removed `\\u0000` and left invalid
sequences like `\w`
## Solution
Completely rewrote SafeJson to work on Python data structures instead of
JSON strings:
1. **Direct data sanitization**: Recursively walks through dicts, lists,
and tuples to remove control characters directly from strings
2. **No JSON string manipulation**: Avoids all escape sequence parsing
issues
3. **More efficient**: Eliminates the serialize → sanitize → deserialize
cycle
4. **Preserves valid content**: Backslashes, paths, and literal text are
correctly preserved
## Changes
- Removed `POSTGRES_JSON_ESCAPES` regex (no longer needed)
- Added `_sanitize_value()` helper function for recursive sanitization
- Simplified `SafeJson()` to convert Pydantic models and sanitize data
structures
- Added `import json # noqa: F401` for backwards compatibility
## Testing
- ✅ Verified fix resolves the `Invalid \escape` error
- ✅ All existing SafeJson unit tests pass
- ✅ Problematic data with literal escape sequences no longer causes
errors
- ✅ Code formatted with `poetry run format`
## Technical Details
**Before (JSON string approach):**
```python
# Serialize to JSON string
json_string = json.dumps(data)
# Remove escape sequences from the string (BREAKS!)
sanitized = POSTGRES_JSON_ESCAPES.sub("", json_string)
# Parse back (FAILS with Invalid \escape)
return Json(json.loads(sanitized))
```
**After (data structure approach):**
```python
# Convert Pydantic to dict
data = data.model_dump() if isinstance(data, BaseModel) else data
# Recursively sanitize strings in data structure
sanitized = _sanitize_value(data)
# Return as Json (no parsing needed)
return Json(sanitized)
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Currently, we don’t add category and cost information to custom nodes in
the new builder. This PR adds that information, so nodes render
correctly and costs are displayed accurately based on the selected
discriminator value.
<img width="441" height="781" alt="Screenshot 2025-10-15 at 2 43 33 PM"
src="https://github.com/user-attachments/assets/8199cfa7-4353-4de2-8c15-b68aa86e458c"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All information is displayed correctly.
- [x] I’ve tried changing the discriminator value and we’re getting the
correct cost for the selected value.
### Changes 🏗️
- **Added Claude Haiku 4.5 model support** (`claude-haiku-4-5-20251001`)
- Added model to `LlmModel` enum in
`autogpt_platform/backend/backend/blocks/llm.py`
- Configured model metadata with 200k context window and 64k max output
tokens
- Set pricing to 4 credits per million tokens in
`backend/data/block_cost_config.py`
- **Classic Forge Integration**
- Added `CLAUDE4_5_HAIKU_v1` to Anthropic provider in
`classic/forge/forge/llm/providers/anthropic.py`
- Configured with $1/1M prompt tokens and $5/1M completion tokens
pricing
- Enabled function call API support
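Roughly what the registration looks like (illustrative only; the real
enum, metadata, and cost structures in `llm.py` and
`block_cost_config.py` differ in detail):
```python
from enum import Enum

class LlmModel(str, Enum):
    CLAUDE_HAIKU_4_5 = "claude-haiku-4-5-20251001"

# 200k context window, 64k max output tokens
MODEL_METADATA = {
    LlmModel.CLAUDE_HAIKU_4_5: {"max_context": 200_000, "max_output": 64_000},
}

# Platform pricing: 4 credits per million tokens
MODEL_COST = {LlmModel.CLAUDE_HAIKU_4_5: 4}
```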
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
- [x] Verify Claude Haiku 4.5 model appears in the LLM block model
selection dropdown
- [x] Test basic text generation using Claude Haiku 4.5 in an agent
workflow
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration changes</summary>
- No environment variable changes required
- No docker-compose changes needed
- Model configuration is handled through existing Anthropic API
integration
</details>
https://github.com/user-attachments/assets/bbc42c47-0e7c-4772-852e-55aa91f4d253
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
## Summary
Move DatabaseError from store-specific exceptions to generic backend
exceptions for proper layer separation, while also fixing store
exception inheritance to ensure proper HTTP status codes.
## Problem
1. **Poor Layer Separation**: DatabaseError was defined in
store-specific exceptions but represents infrastructure concerns that
affect the entire backend
2. **Incorrect HTTP Status Codes**: Store exceptions inherited from
Exception instead of ValueError, causing 500 responses for client errors
3. **Reusability Issues**: Other backend modules couldn't use
DatabaseError for DB operations
4. **Blanket Catch Issues**: Store-specific catches were affecting
generic database operations
## Solution
### Move DatabaseError to Generic Location
- Move from backend.server.v2.store.exceptions to
backend.util.exceptions
- Update all 23 references in backend/server/v2/store/db.py to use new
location
- Remove from StoreError inheritance hierarchy
### Fix Complete Store Exception Hierarchy
- MediaUploadError: Changed from Exception to ValueError inheritance
(client errors → 400)
- StoreError: Changed from Exception to ValueError inheritance (business
logic errors → 400)
- Store NotFound exceptions: Changed to inherit from NotFoundError (→
404)
- DatabaseError: Now properly inherits from Exception (infrastructure
errors → 500)
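The resulting hierarchy, sketched (class bodies elided; the real
definitions live in `backend.util.exceptions` and
`backend/server/v2/store/exceptions.py`):
```python
class DatabaseError(Exception): ...       # infrastructure -> 500

class NotFoundError(ValueError): ...      # handled globally -> 404

class StoreError(ValueError): ...         # business logic -> 400
class MediaUploadError(ValueError): ...   # client validation -> 400
class AgentNotFoundError(NotFoundError): ...  # store "not found" -> 404
```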
## Benefits
### ✅ Proper Layer Separation
- Database errors are infrastructure concerns, not store-specific
business logic
- Store exceptions focus on business validation and client errors
- Clean separation between infrastructure and business logic layers
### ✅ Correct HTTP Status Codes
- DatabaseError: 500 (server infrastructure errors)
- Store NotFound errors: 404 (via existing NotFoundError handler)
- Store validation errors: 400 (via existing ValueError handler)
- Media upload errors: 400 (client validation errors)
### ✅ Architectural Improvements
- DatabaseError now reusable across entire backend
- Eliminates blanket catch issues affecting generic DB operations
- All store exceptions use global exception handlers properly
- Future store exceptions automatically get proper status codes
## Files Changed
- **backend/util/exceptions.py**: Add DatabaseError class
- **backend/server/v2/store/exceptions.py**: Remove DatabaseError, fix
inheritance hierarchy
- **backend/server/v2/store/db.py**: Update all DatabaseError references
to new location
## Result
- ✅ **No more stack trace spam**: Expected business logic errors handled
properly
- ✅ **Proper HTTP semantics**: 500 for infrastructure, 400/404 for
client errors
- ✅ **Better architecture**: Clean layer separation and reusable
components
- ✅ **Fixes original issue**: AgentNotFoundError now returns 404 instead
of 500
This addresses the logging issue mentioned by @zamilmajdy while also
implementing the architectural improvements suggested by @Pwuts.
This PR introduces saving functionality to the new builder interface,
allowing users to save and update agent flows. The implementation
includes both UI components and backend integration for persistent
storage of agent configurations.
https://github.com/user-attachments/assets/95ee46de-2373-4484-9f34-5f09aa071c5e
### Key Features Added:
#### 1. **Save Control Component** (`NewSaveControl`)
- Added a new save control popover in the control panel with form inputs
for agent name, description, and version display
- Integrated with the new control panel as a primary action button with
a floppy disk icon
- Supports keyboard shortcuts (Ctrl+S / Cmd+S) for quick saving
#### 2. **Graph Persistence Logic**
- Implemented `useNewSaveControl` hook to handle:
- Creating new graphs via `usePostV1CreateNewGraph`
- Updating existing graphs via `usePutV1UpdateGraphVersion`
- Intelligent comparison to prevent unnecessary saves when no changes
are made
- URL parameter management for flowID and flowVersion tracking
#### 3. **Loading State Management**
- Added `GraphLoadingBox` component to display a loading indicator while
graphs are being fetched
- Enhanced `useFlow` hook with loading state tracking
(`isFlowContentLoading`)
- Improved UX with clear visual feedback during graph operations
#### 4. **Component Reorganization**
- Refactored components from `NewBlockMenu` to `NewControlPanel`
directory structure for better organization:
- Moved all block menu related components under
`NewControlPanel/NewBlockMenu/`
- Separated save control into its own module
(`NewControlPanel/NewSaveControl/`)
- Improved modularity and separation of concerns
#### 5. **State Management Enhancements**
- Added `controlPanelStore` for managing control panel states (e.g.,
save popover visibility)
- Enhanced `nodeStore` with `getBackendNodes()` method for retrieving
nodes in backend format
- Added `getBackendLinks()` to `edgeStore` for consistent link
formatting
### Technical Improvements:
- **Graph Comparison Logic**: Implemented `graphsEquivalent()` helper to
deeply compare saved and current graph states, preventing redundant
saves
- **Form Validation**: Used Zod schema validation for save form inputs
with proper constraints
- **Error Handling**: Comprehensive error handling with user-friendly
toast notifications
- **Query Invalidation**: Proper cache invalidation after successful
saves to ensure data consistency
### UI/UX Enhancements:
- Clean, modern save dialog with clear labeling and placeholder text
- Real-time version display showing the current graph version
- Disabled state for save button during operations to prevent double
submissions
- Toast notifications for success and error states
- Higher z-index for GraphLoadingBox to ensure visibility over other
elements
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving is working perfectly. All nodes, links, their positions,
and hardcoded data are saved correctly.
- [x] If there are no changes, the user cannot save the graph.
## Summary
Fix store exception hierarchy to prevent ERROR level stack trace spam
for expected business logic errors and ensure proper HTTP status codes.
## Problem
The original error from production logs showed AgentNotFoundError for
non-existent agents like autogpt/domain-drop-catcher was:
- Returning 500 status codes instead of 404
- Generating ERROR level stack traces in logs for expected not found
scenarios
- Bypassing global exception handlers due to improper inheritance
## Root Cause
Store exceptions inherited from Exception instead of ValueError, causing
them to bypass the global ValueError handler (400) and fall through to
the generic Exception handler (500) with full stack traces.
## Solution
Create proper exception hierarchy for ALL store-related errors by
making:
- MediaUploadError inherit from ValueError instead of Exception
- StoreError inherit from ValueError instead of Exception
- Store NotFound exceptions inherit from NotFoundError (which inherits
from ValueError)
## Changes Made
1. MediaUploadError: Changed from Exception to ValueError inheritance
2. StoreError: Changed from Exception to ValueError inheritance
3. Store NotFound exceptions: Changed to inherit from NotFoundError
## Benefits
- Correct HTTP status codes: Not found errors return 404, validation
errors return 400
- No more 500 stack trace spam for expected business logic errors
- Clean consistent error handling using existing global handlers
- Future-proof: Any new store exceptions automatically get proper status
codes
## Result
- AgentNotFoundError for autogpt/domain-drop-catcher now returns 404
instead of 500
- InvalidFileTypeError, VirusDetectedError, etc. now return 400 instead
of 500
- No ERROR level stack traces for expected client errors
- Proper HTTP semantics throughout the store API
## Summary
Fix critical SafeJson function to properly sanitize JSON-encoded Unicode
escape sequences that were causing PostgreSQL 22P05 errors when null
characters in web scraped content were stored as "\u0000" in the
database.
## Root Cause Analysis
The existing SafeJson function in backend/util/json.py:
1. Only removed raw control characters (\x00-\x08, etc.) using
POSTGRES_CONTROL_CHARS regex
2. Failed to handle JSON-encoded Unicode escape sequences (\u0000,
\u0001, etc.)
3. When web scraping returned content with null bytes, these were
JSON-encoded as "\u0000" strings
4. PostgreSQL rejected these Unicode escape sequences, causing 22P05
errors
## Changes Made
### Enhanced SafeJson Function (backend/util/json.py:147-153)
- **Add POSTGRES_JSON_ESCAPES regex**: Comprehensive pattern targeting
all PostgreSQL-incompatible Unicode and single-char JSON escape
sequences
- **Unicode escapes**: \u0000-\u0008, \u000B-\u000C, \u000E-\u001F,
\u007F (preserves \u0009=tab, \u000A=newline, \u000D=carriage return)
- **Single-char escapes**: \b (backspace), \f (form feed) with negative
lookbehind/lookahead to preserve file paths like "C:\\file.txt"
- **Two-pass sanitization**: Remove JSON escape sequences first, then
raw characters as fallback
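An approximation of the escape-sequence pattern described above (the
exact lookarounds in the original regex may differ):
```python
import re

POSTGRES_JSON_ESCAPES = re.compile(
    r"\\u000[0-8]"                  # \u0000-\u0008
    r"|\\u000[BbCc]"                # \u000B-\u000C
    r"|\\u000[EeFf]"                # \u000E-\u000F
    r"|\\u001[0-9A-Fa-f]"           # \u0010-\u001F
    r"|\\u007[Ff]"                  # \u007F (DEL)
    r"|(?<!\\)\\[bf](?![A-Za-z])"   # bare \b / \f, sparing paths like C:\\file.txt
)
```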
### Comprehensive Test Coverage (backend/util/test_json.py:219-414)
Added 11 new test methods covering:
- **Control character sanitization**: Verify dangerous characters (\x00,
\x07, \x0C, etc.) are removed while preserving safe whitespace (\t, \n,
\r)
- **Web scraping content**: Simulate SearchTheWebBlock scenarios with
null bytes and control characters
- **Code preservation**: Ensure legitimate file paths, JSON strings,
regex patterns, and programming code are preserved
- **Unicode escape handling**: Test both \u0000-style and \b/\f-style
escape sequences
- **Edge case protection**: Prevent over-matching of legitimate
sequences like "mybfile.txt" or "\\u0040"
- **Mixed content scenarios**: Verify only problematic sequences are
removed while preserving legitimate content
## Validation Results
- ✅ All 24 SafeJson tests pass including 11 new comprehensive
sanitization tests
- ✅ Control characters properly removed: \x00, \x01, \x08, \x0C, \x1F,
\x7F
- ✅ Safe characters preserved: \t (tab), \n (newline), \r (carriage
return)
- ✅ File paths preserved: "C:\\Users\\file.txt", "\\\\server\\share"
- ✅ Programming code preserved: regex patterns, JSON strings, SQL
escapes
- ✅ Unicode escapes sanitized: \u0000 → removed, \u0048 ("H") →
preserved if valid
- ✅ No false positives: Legitimate sequences not accidentally removed
- ✅ poetry run format succeeds without errors
## Impact
- **Prevents PostgreSQL 22P05 errors**: No more null character database
rejections from web scraping
- **Maintains data integrity**: Legitimate content preserved while
dangerous characters removed
- **Comprehensive protection**: Handles both raw bytes and JSON-encoded
escape sequences
- **Web scraping reliability**: SearchTheWebBlock and similar blocks now
store content safely
- **Backward compatibility**: Existing SafeJson behavior unchanged for
legitimate content
## Test Plan
- [x] All existing SafeJson tests pass (24/24)
- [x] New comprehensive sanitization tests pass (11/11)
- [x] Control character removal verified
- [x] Legitimate content preservation verified
- [x] Web scraping scenarios tested
- [x] Code formatting and type checking passes
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
The `dictionary` input on the Add to Dictionary block is hidden, even
though it is the main input.
### Changes 🏗️
- Mark `dictionary` explicitly as not advanced (so not hidden by
default)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed
Integrates Sentry SDK to set user and contextual tags during node
execution for improved error tracking and user count analytics. Ensures
Sentry context is properly set and restored, and exceptions are captured
with relevant context before scope restoration.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Adds Sentry tracking to block failures
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test to make sure the userid and block details show up in Sentry
- [x] make sure other errors aren't contaminated
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- Added conditional support for feature flags when configured, enabling
targeted rollouts and experiments without impacting unconfigured
environments.
- Chores
- Enhanced error monitoring with richer contextual data during node
execution to improve stability and diagnostics.
- Updated metrics initialization to dynamically include feature flag
integrations when available, without altering behavior for unconfigured
setups.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Since #10323, one-time graph inputs are no longer stored on the input
nodes (#9139), so we can reasonably assume that the default value set by
the graph creator will be safe to export.
### Changes 🏗️
- Don't strip the default value from input nodes in
`NodeModel.stripped_for_export(..)`, except for inputs marked as
`secret`
- Update and expand tests for graph export secrets stripping mechanism
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Expanded tests pass
- Relatively simple change with good test coverage, no manual test
needed
## Problem
The `SendDiscordMessageBlock` only accepted channel names, while other
Discord blocks like `SendDiscordFileBlock` and `SendDiscordEmbedBlock`
accept both channel IDs and channel names. This inconsistency made it
difficult to use channel IDs with the message sending block, which is
often more reliable and direct than name-based lookup.
## Solution
Updated `SendDiscordMessageBlock` to accept both channel IDs and channel
names through the `channel_name` field, matching the implementation
pattern used in other Discord blocks.
### Changes Made
1. **Enhanced channel resolution logic** to try parsing the input as a
channel ID first, then fall back to name-based search:
```python
# Try to parse the input as a channel ID first
try:
    channel_id = int(channel_name)
    channel = client.get_channel(channel_id)
except ValueError:
    # Not an ID, treat as a channel name
    ...  # search guilds for a matching channel name
```
2. **Updated field descriptions** to clarify the dual functionality:
- `channel_name`: Now describes that it accepts "Channel ID or channel
name"
- `server_name`: Clarified as "only needed if using channel name"
3. **Added type checking** to ensure the resolved channel can send
messages before attempting to send
4. **Updated documentation** to reflect the new capability
## Backward Compatibility
✅ **Fully backward compatible**: The field name remains `channel_name`
(not renamed), and all existing workflows using channel names will
continue to work exactly as before.
✅ **New capability**: Users can now also provide channel IDs (e.g.,
`"123456789012345678"`) for more direct channel targeting.
## Testing
- All existing tests pass, including `SendDiscordMessageBlock` and all
other Discord block tests
- Implementation verified to match the pattern used in
`SendDiscordFileBlock` and `SendDiscordEmbedBlock`
- Code passes all linting, formatting, and type checking
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10909
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
> Issue Title: SendDiscordMessage needs to take a channel id as an
option under channelname the same as the other discord blocks
> Issue Description: with how we can process the other discord blocks we
should do the same here with the identifiers being allowed to be a
channel name or id. we can't rename the field though or that will break
backwards compatibility
> Fixes
https://linear.app/autogpt/issue/OPEN-2701/senddiscordmessage-needs-to-take-a-channel-id-as-an-option-under
>
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User 055a3053-5ab6-449a-bcfa-990768594185:
> the ones with boxes around them need confirmed for lables but yeah its
related but not dupe
>
> Comment by User 264d7bf4-db2a-46fa-a880-7d67b58679e6:
> this might be a duplicate since there is a related ticket but not sure
>
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10909).
All replies are displayed in both locations.
>
>
</details>
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* New Features
* Send Discord Message block now accepts a channel ID in addition to
channel name.
* Server name is only required when using a channel name.
* Improved channel detection and validation with clearer errors if the
channel isn’t found.
* Documentation
* Updated block documentation to reflect support for channel ID or name
and clarify when server name is needed.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bently <Github@bentlybro.com>
Closes #11163
## Summary
Expanded the Fact Checker block to yield its references list from the
Jina AI API response.
## Changes 🏗️
- Added `Reference` TypedDict to define the structure of reference
objects
- Added `references` field to the Output schema
- Modified the `run` method to extract and yield references from the API
response
- Added fallback to empty list if references are not present
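A sketch of the new pieces (the field names mirror the URL / key quote /
support-indicator shape of the references, but are assumptions about the
Jina AI response, not verified):
```python
from typing import TypedDict

class Reference(TypedDict):
    url: str
    keyQuote: str
    isSupportive: bool

def extract_references(api_response: dict) -> list[Reference]:
    # Fall back to an empty list when no references are present.
    return api_response.get("references", [])
```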
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the Fact Checker block schema includes the new
references field
- [x] Confirmed that references are properly extracted from the API
response when present
- [x] Tested the fallback behavior when references are not in the API
response
- [x] Ensured backward compatibility - existing functionality remains
unchanged
- [x] Verified the Reference TypedDict matches the expected API response
structure
Generated with [Claude Code](https://claude.ai/code)
## Summary by CodeRabbit
* **New Features**
* Fact-checking results now include a references list to support
verification.
* Each reference provides a URL, a key quote, and an indicator showing
whether it supports the claim.
* References are presented alongside factuality, result, and reasoning
when available; otherwise, an empty list is returned.
* Enhances transparency and traceability of fact-check outcomes without
altering existing result fields.
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
### Changes 🏗️
<img width="672" height="761" alt="Screenshot 2025-10-14 at 16 12 50"
src="https://github.com/user-attachments/assets/9e664ade-10fe-4c09-af10-b26d10dca360"
/>
Fixes
[BUILDER-3YG](https://sentry.io/organizations/significant-gravitas/issues/6942679655/).
The issue was that a user uploaded an incompatible external agent
persona file lacking the required flow graph keys (`nodes`, `links`).
- Improves error handling when an invalid agent file is uploaded.
- Provides a more user-friendly error message indicating the file must
be a valid agent.json file exported from the AutoGPT platform.
- Clears the invalid file from the form and resets the agent object to
null.
This fix was generated by Seer in Sentry, triggered by Toran Bruce
Richards. 👁️ Run ID: 1943626
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6942679655/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test that uploading an invalid agent file (e.g., missing `nodes`
or `links`) triggers the improved error handling and displays the
user-friendly error message.
- [x] Verify that the invalid file is cleared from the form after the
error, and the agent object is reset to null.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
We’re currently facing two problems with credentials:
1. When we change the discriminator input value, the form data
credential field should be cleaned up completely.
2. When a different discriminator is selected and that provider already
has a credential, the latest one should be auto-selected.
So, in this PR, I’ve addressed both issues.
### Changes 🏗️
- Updated CredentialField to utilize a new setCredential function for
managing selected credentials.
- Implemented logic to auto-select the latest credential when none is
selected and clear the credential if the provider changes.
- Improved SelectCredential component with a defaultValue prop and
adjusted styling for better UI consistency.
- Removed unnecessary console logs from helper functions to clean up the
code.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Credential selection works perfectly with both the discriminator
and normal addition.
- [x] Auto-select credential is also working.
Fixes #11162
## Summary
Implements a new Perplexity block that allows users to query
Perplexity's sonar models via OpenRouter with support for extracting URL
citations and annotations.
## Changes
- Add new block for Perplexity sonar models (sonar, sonar-pro,
sonar-deep-research)
- Support model selection for all three Perplexity models
- Implement annotations output pin for URL citations and source
references
- Integrate with OpenRouter API for accessing Perplexity models
- Follow existing block patterns from AI text generator block
## Test Plan
✅ Block successfully instantiates
✅ Block is properly loaded by the dynamic loading system
✅ Output fields include response and annotations as required
Generated with [Claude Code](https://claude.ai/code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- Added a Perplexity integration block to query Sonar models via
OpenRouter.
- Supports multiple model options, optional system prompt, and
adjustable max tokens.
- Returns concise responses with citation-style annotations extracted
from the model output.
- Provides clear error messages when requests fail.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Summary
- Changed max_concurrent_graph_executions_per_user from 50 to 25
concurrent executions
- Updated the limit to be per user per graph instead of globally per
user
- Users can now run different graphs concurrently without being limited
by executions of other graphs
- Enhanced database query to filter by both user_id and graph_id
## Changes Made
- **Settings**: Reduced default limit from 50 to 25 and updated
description to clarify per-graph scope
- **Database Layer**: Modified `get_graph_executions_count` to accept
optional `graph_id` parameter
- **Executor Manager**: Updated rate limiting logic to check
per-user-per-graph instead of per-user globally
- **Logging**: Enhanced warning messages to include graph_id context
## Test plan
- [ ] Verify that users can run up to 25 concurrent executions of the
same graph
- [ ] Verify that users can run different graphs concurrently without
interference
- [ ] Test rate limiting behavior when limit is exceeded for a specific
graph
- [ ] Confirm logging shows correct graph_id context in rate limit
messages
## Impact
This change improves the user experience by allowing concurrent
execution of different graphs while still preventing resource exhaustion
from running too many instances of the same graph.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
<!-- Clearly explain the need for these changes: -->
This PR prevents users from creating multiple store submissions with the
same slug, which could lead to confusion and potential conflicts in the
marketplace. Each user's submissions should have unique slugs to ensure
proper identification and navigation.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Backend**: Added validation to check for existing slugs before
creating new store submissions in `backend/server/v2/store/db.py`
- **Backend**: Introduced new `SlugAlreadyInUseError` exception in
`backend/server/v2/store/exceptions.py` for clearer error handling
- **Backend**: Updated store routes to return HTTP 409 Conflict when
slug is already in use in `backend/server/v2/store/routes.py`
- **Backend**: Cleaned up test file in
`backend/server/v2/store/db_test.py`
- **Frontend**: Enhanced error handling in the publish agent modal to
display specific error messages to users in
`frontend/src/components/contextual/PublishAgentModal/components/AgentInfoStep/useAgentInfoStep.ts`
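A simplified sketch of the backend validation described above (the query
shape and field names are assumptions; the real check lives in
`backend/server/v2/store/db.py`):
```python
class SlugAlreadyInUseError(ValueError):
    """Raised when the user already has a submission with this slug."""

async def assert_slug_available(db, user_id: str, slug: str) -> None:
    existing = await db.storelisting.find_first(
        where={"owningUserId": user_id, "slug": slug}
    )
    if existing is not None:
        # The route layer translates this into an HTTP 409 Conflict.
        raise SlugAlreadyInUseError(f"Slug '{slug}' is already in use")
```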
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Add a store submission with a specific slug
- [x] Attempt to add another store submission with the same slug for the
same user - Verify a 409 conflict error is returned with appropriate
error message
- [x] Add a store submission with the same slug, but for a different
user - Verify the submission is successful
- [x] Verify frontend displays the specific error message when slug
conflict occurs
- [x] Existing tests pass without modification
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11159
Currently, we’re not throwing errors for client-side requests in the
custom mutator; client-side request errors are swallowed and returned as
normal response objects. That’s why the `onError` callback on React
Query mutations and `hasError` in queries aren’t working. To fix this,
this PR throws the client-side error.
### Changes 🏗️
- We’re throwing errors for both server-side and client-side requests.
- Why server-side? So the client cache isn’t hydrated with the error.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All end-to-end functionality is working properly.
- [x] I’ve manually checked all the pages and they are all functioning
correctly.
When a user clicks the “Become a Creator” button on the marketplace
page, we send an unauthorised request to the server to get a list of
agents. In this PR, I’ve fixed this by checking if the user is logged
in. If they’re not, I’ll show them a UI to log in or create an account.
<img width="967" height="605" alt="Screenshot 2025-10-14 at 12 04 52 PM"
src="https://github.com/user-attachments/assets/95079d9c-e6ef-4d75-9422-ce4fb138e584"
/>
### Changes
- Modify the publish agent test to detect the correct text even when the
user is logged out.
- Use Supabase helpers to check if the user is logged in. If not,
display the appropriate UI.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The login UI is correctly displayed when the user is logged out.
- [x] The login UI is not shown when the user is logged in.
- [x] The login and signup buttons are working perfectly.
## Changes 🏗️
<img width="800" height="664" alt="Screenshot 2025-10-14 at 14 09 54"
src="https://github.com/user-attachments/assets/73f6277a-6bef-40f9-b208-31aba0cfc69f"
/>
<img width="600" height="773" alt="Screenshot 2025-10-14 at 14 10 05"
src="https://github.com/user-attachments/assets/c88cb22f-1597-4216-9688-09c19030df89"
/>
Allow to manage on the fly which search terms appear on the Marketplace
page via Launch Darkly dashboard. There is a new flag configured there:
`marketplace-search-terms`:
- **enabled** → `["Foo", "Bar"]` → the terms that will appear
- **disabled** → `[ "Marketing", "SEO", "Content Creation",
"Automation", "Fun"]` → the default ones show
### Small fix
Fix the following browser console warning about `onLoadingComplete`
being deprecated...
<img width="600" height="231" alt="Screenshot 2025-10-14 at 13 55 45"
src="https://github.com/user-attachments/assets/1b26e228-0902-4554-9f8c-4839f8d4ed83"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran the flag locally and verified it shows the terms set on Launch
Darkly
### For configuration changes:
Launch Darkly new flag needs to be configured on all environments.
Some agents aren't suitable for onboarding. This adds a per-store-agent
setting that marks agents as allowed for onboarding. If no agent is
allowed, we fall back to the former search.
### Changes 🏗️
- Add `useForOnboarding` to `StoreListing` model and `StoreAgent` view
(with migration)
- Remove filtering of agents with empty input schema or credentials
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Only allowed agents are displayed
- [x] Fallback to the old system in case there aren't enough allowed
agents
There is concern that the write load on the database may derail the
performance optimisations.
This hotfix comments out the code that adds the search terms to the db,
so we can discuss how best to do this in a way that won't bring down the
db.
### Changes 🏗️
- commented out the code to log store terms to the db
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] check search still works in dev
## Changes 🏗️
<img width="800" height="852" alt="Screenshot_2025-10-13_at_19 20 47"
src="https://github.com/user-attachments/assets/2fc150b9-1053-4e25-9018-24bcc2d93b43"
/>
<img width="800" height="669" alt="Screenshot 2025-10-13 at 19 23 41"
src="https://github.com/user-attachments/assets/9078b04e-0f65-42f3-ac4a-d2f3daa91215"
/>
- Onboarding “Run” step now renders required credentials (e.g., Google
OAuth) and includes them in execution.
- Run button remains disabled until required inputs and credentials are
provided.
- Logic extracted and strongly typed; removed `any` usage.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan ( _once merged
in dev..._ )
- [ ] Select an onboarding agent that requires Google OAuth:
- [ ] Credentials selector appears.
- [ ] After selecting/signing in, “Run agent” enables.
- [ ] Run succeeds and navigates to the next step.
### For configuration changes:
None
## Changes 🏗️
I found that if I logged out while an agent was running, sometimes
WebSockets would keep open connections but fail to connect (given there
is no token anymore) and cause strange behavior down the line on the
login screen.
Two root causes emerged after inspecting the browser logs 🧐
- WebSocket connections were attempted with an empty token right after
logout, yielding `wss://.../ws?token=` and repeated `1006/connection
refused` loops.
- During logout, sockets in `CONNECTING` state weren’t being closed, so
the browser kept trying to finish the handshake, and connections were
reattempted shortly after failing.
The fix:
- Guard `connectWebSocket()` to no-op if a logout/disconnect intent is
set, and to skip connecting when no token is available.
- Treat `CONNECTING` sockets as closeable in `disconnectWebSocket()` and
clear `wsConnecting` to avoid stale pending Promises
- Left existing heartbeat/reconnect logic intact, but it now won’t run
when we’re logging out or when we can’t get a token.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Log in and run an agent that takes a long time to run
- [x] Logout
- [x] Check the browser console and you don't see any socket errors
- [x] The login screen behaves ok
### For configuration changes:
Noop
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
and https://github.com/Significant-Gravitas/AutoGPT/pull/11122
In this PR, I’ve added support for discriminated credential inputs: the
credential type is now chosen based on other input values.
https://github.com/user-attachments/assets/6cedc59b-ec84-4ae2-bb06-59d891916847
### Changes 🏗️
- Updated CredentialsField to utilize credentialProvider from schema.
- Refactored helper functions to filter credentials based on the
selected provider.
- Modified APIKeyCredentialsModal and PasswordCredentialsModal to accept
provider as a prop.
- Improved FieldTemplate to dynamically display the correct credential
provider.
- Added getCredentialProviderFromSchema function to manage
multi-provider scenarios.
### Checklist 📋
#### For code changes:
- [x] Credential input is correctly updating based on other input
values.
- [x] Credential can be added correctly.
### Problem
Limits caching to just the main marketplace routes
### Changes 🏗️
- **Simplified store cache implementation** in
`backend/server/v2/store/cache.py`
- Streamlined caching logic for better maintainability
- Reduced complexity while maintaining performance
- **Added cache invalidation on store updates**
- Implemented cache clearing when new agents are added to the store
- Added invalidation logic in admin store routes
(`admin_store_routes.py`)
- Ensures all pods reflect the latest store state after modifications
- **Updated store database operations** in
`backend/server/v2/store/db.py`
- Modified to work with the new cache structure
- **Added cache deletion tests** (`test_cache_delete.py`)
- Validates cache invalidation works correctly
- Ensures cache consistency across operations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify store listings are cached correctly
- [x] Upload a new agent to the store and confirm cache is invalidated
<!-- Clearly explain the need for these changes: -->
Fixes
[AUTOGPT-SERVER-5K6](https://sentry.io/organizations/significant-gravitas/issues/6887660207/).
The issue was that batch sending failed due to malformed data (422) and
inactive recipients (406), with the 406 error misclassified as a
size-limit failure.
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 1675950
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6887660207/?seerDrawer=true)
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
- Also disables this in prod to prevent scaling issues until we work out
some of the more critical issues
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test sending notifications with invalid email addresses to ensure
406 errors are handled correctly.
- [x] Test sending notifications with malformed data to ensure 422
errors are handled correctly.
- [x] Test sending oversized notifications to ensure they are skipped
and logged correctly.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- None
- Bug Fixes
- Individual email failures no longer abort a batch; processing
continues after per-recipient errors.
- Specific handling for inactive recipients and malformed messages to
prevent repeated delivery attempts.
- Chores
- Improved error logging and diagnostics for email delivery scenarios.
- Tests
- Added tests covering email-sending error cases, user-deactivation on
inactive addresses, and batch-continuation behavior.
- Documentation
- None
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
In this PR, I’ve added functionality to fetch a graph based on the
flowID and flowVersion provided in the URL. Once the graph is fetched,
we add the nodes and links using the graph data in a new builder.
<img width="1512" height="982" alt="Screenshot 2025-10-11 at 10 26
07 AM"
src="https://github.com/user-attachments/assets/2f66eb52-77b2-424c-86db-559ea201b44d"
/>
### Changes
- Added `get_specific_blocks` route in `routes.py`.
- Created `get_block_by_id` function in `db.py`.
- Added a new hook `useFlow.ts` to load the graph and populate it in the
flow editor.
- Updated frontend components to reflect changes in block handling.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to load the graph correctly.
- [x] Able to populate it on the builder.
- Resolves #10980
Fixes unnecessary graph re-saving when no changes were made after
initial save. The issue occurred because frontend node IDs weren't
synced with backend IDs after save operations.
### Changes 🏗️
- Update actual node.id to match backend node ID after save
- Update edge references with new node IDs
- Properly sync visual editor state with backend
### Test Plan 📋
- [x] TypeScript compilation passes
- [x] Pre-commit hooks pass
- [x] Manual test: Save graph, verify no re-save needed on subsequent
save/run
## Summary
Add configurable rate limiting to prevent users from exceeding the
maximum number of concurrent graph executions, defaulting to 50 per
user.
## Changes Made
### Configuration (`backend/util/settings.py`)
- Add `max_concurrent_graph_executions_per_user` setting (default: 50,
range: 1-1000)
- Configurable via environment variables or settings file
### Database Query Function (`backend/data/execution.py`)
- Add `get_graph_executions_count()` function for efficient count
queries
- Supports filtering by user_id, statuses, and time ranges
- Used to check current RUNNING/QUEUED executions per user
### Database Manager Integration (`backend/executor/database.py`)
- Expose `get_graph_executions_count` through DatabaseManager RPC
interface
- Follows existing patterns for database operations
- Enables proper service-to-service communication
### Rate Limiting Logic (`backend/executor/manager.py`)
- Inline rate limit check in `_handle_run_message()` before cluster lock
- Use existing `db_client` pattern for consistency
- Reject and requeue executions when limit exceeded
- Graceful error handling - proceed if rate limit check fails
- Enhanced logging with user_id and current/max execution counts
## Technical Implementation
- **Database approach**: Query actual execution statuses for accuracy
- **RPC pattern**: Use DatabaseManager client following existing
codebase patterns
- **Fail-safe design**: Proceed with execution if rate limit check fails
- **Requeue on limit**: Rejected executions are requeued for later
processing
- **Early rejection**: Check rate limit before expensive cluster lock
operations
## Rate Limiting Flow
1. Parse incoming graph execution request
2. Query database via RPC for user's current RUNNING/QUEUED execution
count
3. Compare against configured limit (default: 50)
4. If limit exceeded: reject and requeue message
5. If within limit: proceed with normal execution flow
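A hedged sketch of steps 2–4 (the function signature and client shape are assumptions based on the description above, not the exact code):
```python
async def may_start_execution(db_client, user_id: str, limit: int = 50) -> bool:
    """Return True if the user is under the concurrent-execution limit."""
    try:
        current = await db_client.get_graph_executions_count(
            user_id=user_id, statuses=["RUNNING", "QUEUED"]
        )
    except Exception:
        # Fail-safe design: if the rate-limit check itself fails,
        # proceed with the execution rather than blocking it.
        return True
    return current < limit
```
When this returns `False`, the message is rejected and requeued instead of proceeding to the cluster lock.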
## Configuration Example
```env
MAX_CONCURRENT_GRAPH_EXECUTIONS_PER_USER=25 # Reduce to 25 for stricter limits
```
## Test plan
- [x] Basic functionality tested - settings load correctly, database
function works
- [x] ExecutionManager imports and initializes without errors
- [x] Database manager exposes the new function through RPC
- [x] Code follows existing patterns and conventions
- [ ] Integration testing with actual rate limiting scenarios
- [ ] Performance testing to ensure minimal impact on execution pipeline
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes duplicate agent listings on the marketplace home page by
updating the StoreAgent view to return only the latest approved version
of each agent.
### Changes 🏗️
- Updated `StoreAgent` database view to filter for only the latest
approved version per listing
- Added CTE (Common Table Expression) `latest_versions` to efficiently
determine the maximum version for each store listing
- Modified the join logic to only include the latest version instead of
all approved versions
- Updated `versions` array field to contain only the single latest
version
**Technical details:**
- The view now uses a `latest_versions` CTE that groups by
`storeListingId` and finds `MAX(version)` for approved submissions
- Join condition ensures only the latest version is included:
`slv.version = lv.latest_version`
- This prevents duplicate entries for agents with multiple approved
versions
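A sketch of what the reworked view can look like in raw SQL (identifiers are assumptions based on the description, not the actual migration), kept here as a Python constant:
```python
# Hypothetical recreation of the view; only the CTE + join logic is from the PR.
STORE_AGENT_VIEW_SQL = """
CREATE OR REPLACE VIEW "StoreAgent" AS
WITH latest_versions AS (
    SELECT "storeListingId", MAX(version) AS latest_version
    FROM "StoreListingVersion"
    WHERE "submissionStatus" = 'APPROVED'
    GROUP BY "storeListingId"
)
SELECT slv.*
FROM "StoreListingVersion" slv
JOIN latest_versions lv
  ON slv."storeListingId" = lv."storeListingId"
 AND slv.version = lv.latest_version;
"""
```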
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified marketplace home page shows no duplicate agents
- [x] Confirmed only latest version is displayed for agents with
multiple approved versions
- [x] Checked that agent details page still functions correctly
- [x] Validated that run counts and ratings are still accurate
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Changes 🏗️
The Agent Activity Dropdown is now stable: it has been fully exposed to
users on production for a few weeks, so there is no need to keep it
behind a flag anymore.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login to AutoGPT
- [x] The bell on the navbar is always present even if the flag on
LaunchDarkly is turned off
### For configuration changes:
None
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
In this PR, I’ve added a way to add a username and password as
credentials in the new builder.
https://github.com/user-attachments/assets/b896ea62-6a6d-487c-99a3-727cef4ad9a5
### Changes 🏗️
- Introduced PasswordCredentialsModal to handle user password
credentials.
- Updated useCredentialField to support user password type.
- Refactored APIKeyCredentialsModal to remove unnecessary onSuccess
prop.
- Enhanced the CredentialsField component to conditionally render the
new password modal based on supported credential types.
### Checklist 📋
#### For code changes:
- [x] Ability to add username and password correctly.
- [x] The username and password are visible in the credentials list
after adding it.
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11120
In this PR, I’ve added search functionality to the new block menu, with
pagination.
https://github.com/user-attachments/assets/4c199997-4b5a-43c7-83b6-66abb1feb915
### Changes 🏗️
- Added the frontend for the search list, with pagination.
- Updated the search route to use the GET method.
- Removed the SearchRequest model and replaced it with individual query
parameters.
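A minimal FastAPI sketch of the reshaped route (path, parameter names, and defaults are assumptions):
```python
from fastapi import APIRouter, Query

router = APIRouter()

@router.get("/blocks/search")
async def search_blocks(
    query: str = Query(..., min_length=1),  # previously a field on SearchRequest
    page: int = Query(1, ge=1),             # pagination moved to query params
    page_size: int = Query(20, ge=1, le=100),
):
    # Placeholder body; the real route delegates to the block search logic.
    return {"query": query, "page": page, "page_size": page_size, "results": []}
```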
### Checklist 📋
#### For code changes:
- [x] The search functionality is working perfectly.
- [x] If a search query has no matches, it correctly displays a “No
Result” UI.
Fixes an issue where sub-agent executions triggered by one user were
visible in the original agent author's execution library.
## Solution
Fixed the user_id attribution in
`autogpt_platform/backend/backend/executor/manager.py` by ensuring that
sub-agent executions always use the actual executor's user_id rather
than the agent author's user_id stored in node defaults.
### Changes
- Added user_id override in `execute_node()` function when preparing
AgentExecutorBlock input (line 194)
- Ensures sub-agent executions are correctly attributed to the user
running them, not the agent author
- Maintains proper privacy isolation between users in marketplace agent
scenarios
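A self-contained sketch of the attribution rule (the helper name and dict shapes are hypothetical; the real change lives inside `execute_node()`):
```python
def prepare_sub_agent_input(
    node_default_input: dict, runtime_input: dict, executor_user_id: str
) -> dict:
    """Merge node defaults with runtime input for an AgentExecutorBlock run,
    always attributing the execution to the user actually running it."""
    merged = {**node_default_input, **runtime_input}
    # Override any user_id the agent author left in the node defaults.
    merged["user_id"] = executor_user_id
    return merged
```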
### Security Impact
- **Before**: When User B downloaded and ran a marketplace agent
containing sub-agents owned by User A, the sub-agent executions appeared
in User A's library
- **After**: Sub-agent executions now only appear in the library of the
user who actually ran them
- Prevents unauthorized access to execution data and user privacy
violation
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Test plan: -->
- [x] Create an agent with sub-agents as User A
- [x] Publish agent to marketplace
- [x] Run the agent as User B
- [x] Verify User A cannot see User B's sub-agent executions in their
library
- [x] Verify User B can see their own sub-agent executions
- [x] Verify primary agent executions remain correctly filtered
Currently, we use the Context API for the block menu provider. To
access some of its state outside the BlockMenuProvider wrapper (for
instance, in the tutorial), we would need to move the wrapper higher up
the tree, perhaps to the top of the builder tree, which would cause
unnecessary re-renders. Therefore, we should create a block menu Zustand
store so that the state can easily be accessed in other parts of the builder.
### Changes 🏗️
- Deleted `block-menu-provider.tsx` file.
- Updated BlockMenu, BlockMenuContent, BlockMenuDefaultContent, and
other components to utilize blockMenuStore instead of
BlockMenuStateProvider.
- Adjusted imports and context usage accordingly.
### Checklist 📋
- [x] Changes have been clearly listed.
- [x] Code has been tested and verified.
- [x] I’ve checked every part of the block menu where we used the
context API and it’s working perfectly.
- [x] Ability to use block menu state in other parts of the builder.
Currently, the new builder doesn’t support sticky notes; we’re rendering
them as normal nodes with an input, which is why I’ve added a dedicated
UI for them.
<img width="1512" height="982" alt="Screenshot 2025-10-08 at 4 12 58 PM"
src="https://github.com/user-attachments/assets/be716e45-71c6-4cc4-81ba-97313426222f"
/>
To add sticky notes, go to the search menu of the block menu and search
for “Note block”. Then, add them from there.
### Changes 🏗️
- Updated CustomNodeData to include uiType.
- Conditional rendering in CustomNode based on uiType.
- Added a custom sticky note UI component called `StickyNoteBlock.tsx`.
- Adjusted FormCreator and FieldTemplate to pass and utilize uiType.
- Enhanced TextInputWidget to render differently based on uiType.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to attach sticky notes to the builder.
- [x] Able to accurately capture data while writing on sticky notes, and
the data is also persisted
In this PR, I have added support for OAuth2 in the new builder.
https://github.com/user-attachments/assets/89472ebb-8ec2-467a-9824-79a80a71af8a
### Changes 🏗️
- Updated the FlowEditor to support OAuth2 credential selection.
- Improved the UI for API key and OAuth2 modals, enhancing user
experience.
- Refactored credential field components for better modularity and
maintainability.
- Updated OpenAPI documentation to reflect changes in OAuth flow
endpoints.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to create OAuth credentials
- [x] OAuth2 is correctly selected using the Credential Selector.
## Summary
Fix the critical issue where retry failure alerts were being spammed
when service communication failed repeatedly.
## Problem
The service communication retry mechanism was sending a critical Discord
alert every time it hit the 50-attempt threshold, with no rate limiting.
This caused alert spam when the same operation (like `spend_credits`) kept
failing repeatedly with the same error.
## Solution
### Rate Limiting Implementation
- Add `ALERT_RATE_LIMIT_SECONDS = 300` (5 minutes) between identical
alerts
- Create a `_should_send_alert()` function with thread-safe rate limiting
using an `_alert_rate_limiter` dict
- Generate unique signatures based on
`context:func_name:exception_type:exception_message`
- Only send an alert if sufficient time has passed since the last identical
alert
### Enhanced Logging
- Rate-limited alerts log as warnings instead of being silently dropped
- Add full exception tracebacks for final failures and every 10th retry
attempt
- Improve alert message clarity and add note about rate limiting
- Better structured logging with exception types and details
### Error Context Preservation
- Maintain all original retry behavior and exception handling
- Preserve critical alerting for genuinely new issues
- Clean up alert message (removed accidental paste from error logs)
## Technical Details
- Thread-safe implementation using `threading.Lock()` for rate-limiter
access
- Signature includes the first 100 chars of the exception message for
granularity
- Memory-efficient: only stores the last alert timestamp per unique error
type
- No breaking changes to existing retry functionality
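Putting those pieces together, a minimal sketch of the limiter (module-level names follow the description above; the exact code may differ):
```python
import threading
import time

ALERT_RATE_LIMIT_SECONDS = 300  # 5 minutes between identical alerts

_alert_rate_limiter: dict[str, float] = {}  # signature -> last alert timestamp
_alert_lock = threading.Lock()

def _should_send_alert(context: str, func_name: str, exc: Exception) -> bool:
    """Return True if no identical alert was sent in the last 5 minutes."""
    # Signature: context:func_name:exception_type:first-100-chars-of-message
    signature = f"{context}:{func_name}:{type(exc).__name__}:{str(exc)[:100]}"
    now = time.monotonic()
    with _alert_lock:  # thread-safe access to the rate-limiter dict
        last_sent = _alert_rate_limiter.get(signature)
        if last_sent is not None and now - last_sent < ALERT_RATE_LIMIT_SECONDS:
            return False  # rate-limited; the caller logs a warning instead
        _alert_rate_limiter[signature] = now
        return True
```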
## Impact
- **Eliminates alert spam**: Same failing operation only alerts once per
5 minutes
- **Preserves critical alerts**: New/different failures still trigger
immediate alerts
- **Better debugging**: Enhanced logging provides full exception context
- **Maintains reliability**: All retry logic works exactly as before
## Testing
- ✅ Rate limiting tested with multiple scenarios
- ✅ Import compatibility verified
- ✅ No regressions in retry functionality
- ✅ Alert generation works for new vs repeated errors
## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
## How Has This Been Tested?
- Manual testing of rate limiting functionality with different error
scenarios
- Import verification to ensure no regressions
- Code formatting and linting compliance
## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A -
internal utility)
- [x] My changes generate no new warnings
- [x] Any dependent changes have been merged and published in downstream
modules (N/A)
## Changes 🏗️
We weren't awaiting the onboarding-enabled check, and we were also
redirecting to the wrong onboarding URL.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new user
- [x] Re-directs well to onboarding
- [x] Complete up to Step 5 and logout
- [x] Login again
- [x] You are on Step 5
#### For configuration changes:
None
## Changes 🏗️
### Fix re-direct bugs
Sometimes the app will re-direct to a strange URL after login:
```
http://localhost:3000/marketplace,%20/marketplace
```
It looks like a race condition, because the re-direct to `/marketplace`
was done in a [server
action](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
rather than in the browser.
**✅ Fixed by**
Moving the login / signup server actions to Next.js API endpoints. This
way the login/signup pages just call an API endpoint and handle its
response without having to hassle with serverless 💆🏽
### Wallet layout flash
<img width="800" height="744" alt="Screenshot 2025-10-08 at 22 52 03"
src="https://github.com/user-attachments/assets/7cb85fd5-7dc4-4870-b4e1-173cc8148e51"
/>
The wallet popover would sometimes flash after login, because it was
re-rendering once onboarding and credits data loaded.
**✅ Fixed by**
Only rendering the popover once we have onboarding and credits data;
without that data it is useless and causes flashes.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login / Signup to the app with email and Google
- [x] Works fine
- [x] Onboarding popover does not flash
- [x] Onboarding and marketplace re-directs work
### For configuration changes:
None
Changed the type of the `content` field in the Project model to accept
`None`, making it optional instead of required. Linear doesn't always
return data here if it's not set by the user.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
- Makes the content optional
<!-- Concisely describe all of the changes made in this pull request:
-->
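A minimal pydantic sketch of the change (other fields are illustrative only):
```python
from typing import Optional

from pydantic import BaseModel

class Project(BaseModel):  # simplified stand-in for the real model
    id: str
    name: str
    content: Optional[str] = None  # Linear omits this when the user never set it
```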
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manually test it works with our data
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- **Bug Fixes**
  - Improved handling of projects with no content by making content
optional.
  - Prevents validation errors during project creation, import, or sync
when content is missing.
  - Enhances compatibility with integrations that may omit content fields.
  - No impact on existing projects with content; behavior remains
unchanged.
  - No user action required.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### Changes 🏗️
- Changed the type of the `progress` field in the `LinearTask` model
from `int` to `float` to fix
[BUILDER-3V5](https://sentry.io/organizations/significant-gravitas/issues/6929150079/).
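The corresponding model change, sketched (surrounding fields are illustrative):
```python
from pydantic import BaseModel

class LinearTask(BaseModel):  # simplified stand-in for the real model
    id: str
    title: str
    progress: float  # previously `int`; Linear can report values like 42.5
```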
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Root-cause analysis confirms the fix; testing will need to occur in
the dev environment before release to prod, but this should merge now
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
  - Progress indicators now support decimal values, allowing more precise
tracking (e.g., 42.5% instead of 42%). This enables finer-grained
updates in the interface and any integrations consuming progress data.
  - Users may notice smoother progress changes during long-running tasks,
with improved accuracy in percentage displays across relevant views and
APIs.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
We need to be able to measure user impact by user count, which means we
need to track which user triggered each action.
### Changes 🏗️
- Attaches the user ID to frontend actions (which hopefully propagates to
the backend in some places)
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test login -> shows on sentry
- [x] Test logout -> no longer shows on sentry
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Instrument internal services with Prometheus
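As a rough illustration, exposing metrics from a Python service with `prometheus_client` looks like this (the metric name and port are invented; the PR's actual instrumentation is not shown here):
```python
from prometheus_client import Counter, start_http_server

# Hypothetical metric for demonstration purposes only.
REQUESTS_TOTAL = Counter("internal_requests_total", "Requests handled", ["service"])

if __name__ == "__main__":
    start_http_server(9090)  # serve /metrics for Prometheus to scrape
    REQUESTS_TOTAL.labels(service="executor").inc()
```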
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Existing tests
## Changes 🏗️
### Performance (Onboarding) 🐎
- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
**Result:** faster initial render and smoother onboarding flows under
load.
### Layout and overflow fixes 📐
- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.
**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.
### New Generic Error pages
- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
- [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently
### For configuration changes:
None
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include:
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
- Shell environment variables have the highest precedence
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
7. Consider how inputs/outputs connect with other blocks in graph editor
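A skeletal example of those steps (the block name, ID, and fields are invented; exact import paths and signatures may differ):
```python
from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import SchemaField

class EchoBlock(Block):
    class Input(BlockSchemaInput):
        text: str = SchemaField(description="Text to echo back.")

    class Output(BlockSchemaOutput):
        text: str = SchemaField(description="The same text, unchanged.")

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Echoes its input; the error output is inherited.",
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "text", input_data.text
```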
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
5. Run `poetry run test` to verify changes
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
**📖 Complete Frontend Guide**: See `autogpt_platform/frontend/CONTRIBUTING.md` and `autogpt_platform/frontend/.cursorrules` for comprehensive patterns and conventions.
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
@@ -69,7 +75,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
@@ -109,7 +115,7 @@ class SearchPeopleBlock(Block):
description="The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces.",
placeholder="Enter your prompt here...",
@@ -1249,12 +1250,11 @@ class AITextGeneratorBlock(AIBlockBase):
description="The maximum number of tokens to generate in the chat completion.",
)
classOutput(BlockSchema):
classOutput(BlockSchemaOutput):
response:str=SchemaField(
description="The response generated by the language model."
)
prompt:list=SchemaField(description="The prompt sent to the language model.")
error:str=SchemaField(description="Error message if the API call failed.")
def__init__(self):
super().__init__(
@@ -1308,7 +1308,7 @@ class SummaryStyle(Enum):
classAITextSummarizerBlock(AIBlockBase):
classInput(BlockSchema):
classInput(BlockSchemaInput):
text:str=SchemaField(
description="The text to summarize.",
placeholder="Enter the text to summarize here...",
@@ -1348,10 +1348,9 @@ class AITextSummarizerBlock(AIBlockBase):
description="Ollama host for local models",
)
classOutput(BlockSchema):
classOutput(BlockSchemaOutput):
summary:str=SchemaField(description="The final summary of the text.")
prompt:list=SchemaField(description="The prompt sent to the language model.")
error:str=SchemaField(description="Error message if the API call failed.")
def__init__(self):
super().__init__(
@@ -1455,7 +1454,20 @@ class AITextSummarizerBlock(AIBlockBase):
credentials=credentials,
)
returnllm_response["summary"]
summary=llm_response["summary"]
# Validate that the LLM returned a string and not a list or other type
ifnotisinstance(summary,str):
frombackend.util.truncateimporttruncate
truncated_summary=truncate(summary,500)
raiseValueError(
f"LLM generation failed: Expected a string summary, but received {type(summary).__name__}. "
f"The language model incorrectly formatted its response. "
description="Media input (URL, data URI, or local path)."
)
@@ -22,13 +28,10 @@ class MediaDurationBlock(Block):
default=True,
)
classOutput(BlockSchema):
classOutput(BlockSchemaOutput):
duration:float=SchemaField(
description="Duration of the media file (in seconds)."
)
error:str=SchemaField(
description="Error message if something fails.",default=""
)
def__init__(self):
super().__init__(
@@ -70,7 +73,7 @@ class LoopVideoBlock(Block):
Block for looping (repeating) a video clip until a given duration or number of loops.
"""
classInput(BlockSchema):
classInput(BlockSchemaInput):
video_in:MediaFileType=SchemaField(
description="The input video (can be a URL, data URI, or local path)."
)
@@ -90,13 +93,10 @@ class LoopVideoBlock(Block):
default="file_path",
)
classOutput(BlockSchema):
classOutput(BlockSchemaOutput):
video_out:str=SchemaField(
description="Looped video returned either as a relative path or a data URI."
)
error:str=SchemaField(
description="Error message if something fails.",default=""
)
def__init__(self):
super().__init__(
@@ -166,7 +166,7 @@ class AddAudioToVideoBlock(Block):
Optionally scale the volume of the new track.
"""
classInput(BlockSchema):
classInput(BlockSchemaInput):
video_in:MediaFileType=SchemaField(
description="Video input (URL, data URI, or local path)."
)
@@ -182,13 +182,10 @@ class AddAudioToVideoBlock(Block):
default="file_path",
)
classOutput(BlockSchema):
classOutput(BlockSchemaOutput):
video_out:MediaFileType=SchemaField(
description="Final video (with attached audio), as a path or data URI."
)
error:str=SchemaField(
description="Error message if something fails.",default=""
)
def__init__(self):
super().__init__(