## Summary
- Fixed auto-frame timing in new builder - now calls `fitView` after
nodes are rendered instead of on mount
- Replaced manual viewport calculation in legacy builder with React
Flow's `fitView` for consistency
- Both builders now properly center and frame all blocks when opening an
agent
## Test plan
- [x] Open an existing agent with multiple blocks in the new builder -
verify all blocks are visible and centered
- [x] Open an existing agent in the legacy builder - verify all blocks
are visible and centered
- [x] Verify the manual "Frame" button still works correctly
## Changes 🏗️
Adds the following improvements:
### Prevent credential row overflowing on mobile 📱
**Before**
<img width="300" height="469" alt="Screenshot 2025-12-15 at 16 42 05"
src="https://github.com/user-attachments/assets/0d27394c-cec9-45a4-be82-804827343212"
/>
**After**
<img width="300" height="446" alt="Screenshot 2025-12-15 at 16 44 22"
src="https://github.com/user-attachments/assets/0f19e220-500d-4488-955e-612d38704727"
/>
_Just hide the ****** on mobile..._
### Make touch targets bigger on 📱 on the mobile menu
**Before**
<img width="300" height="607" alt="Screenshot 2025-12-15 at 16 58 28"
src="https://github.com/user-attachments/assets/762b7d4e-5269-41a4-88d2-ea745c50324e"
/>
Touch targets were quite small on mobile, especially for people with big
fingers...
**After**
<img width="300" height="589" alt="Screenshot 2025-12-15 at 16 54 02"
src="https://github.com/user-attachments/assets/beede0c4-5439-47a9-8bec-143b44306c6b"
/>
### New `<OverflowText />` component
<img width="600" height="551" alt="Screenshot 2025-12-15 at 16 48 20"
src="https://github.com/user-attachments/assets/a969223f-dd6a-497a-857e-18483aea28d7"
/>
A component that will render text like `<Text />`, but automatically
displays `...` and the full text content on a tooltip if it detects
there is no space for the full text length. Pretty useful for the type
of dashboard we are building, where sometimes titles or user-generated
content can be quite long, making the UI look whack.
### Google Drive Picker
Only allow the removal of files if it is not in read-only mode.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Checkout branch locally
- [x] Test the above
- Updated Next.js configuration to set body size limits for server
actions and API routes.
- Enhanced error handling in the API client to provide user-friendly
messages for file size errors.
- Added user-friendly error messages for 413 Payload Too Large responses
in API error parsing.
These changes ensure that file uploads are consistent with backend
limits and improve user experience during uploads.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Upload a file bigger than 10MB and it works
- [X] Upload a file bigger than 256MB and you see a official error
stating the max file size is 256MB
<!-- Clearly explain the need for these changes: -->
Agent blocks require different handling compared to standard blocks,
particularly for:
- Handle ID generation (using direct keys instead of generated IDs)
- Form data storage structure (nested under `inputs` key)
- Field ID parsing (filtering out schema path prefixes)
This PR implements special handling for `BlockUIType.AGENT` throughout
the form rendering and output handling components to ensure agents work
correctly in the flow editor.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **CustomNode.tsx**: Pass `uiType` prop to `OutputHandler` component
- **FormCreator.tsx**:
- Store agent form data in `hardcodedValues.inputs` instead of directly
in `hardcodedValues`
- Extract initial values from `hardcodedValues.inputs` for agent blocks
- **OutputHandler.tsx**:
- Accept `uiType` prop
- Use direct key as handle ID for agents instead of
`generateHandleId(key)`
- **useMarketplaceAgentsContent.ts**:
- Fetch full agent details using `getV2GetLibraryAgent` before adding to
builder
- Ensures agent schemas are properly populated (fixes issue where
marketplace endpoint returns empty schemas)
- **AnyOfField.tsx**: Generate handle IDs for agents by filtering out
"root" and "properties" from schema path
- **FieldTemplate.tsx**: Apply same handle ID generation logic for agent
fields
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add an agent block from marketplace and verify it renders
correctly
- [x] Connect inputs/outputs to/from an agent block and verify
connections work
- [x] Fill in form fields for an agent block and verify data persists
correctly
- [x] Verify agent blocks work in both new and existing flows
- [x] Test that non-agent blocks still work as before (regression test)
## Summary
This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.
## Changes
### Backend Refactoring
#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match new schema
#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases
#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity
## Why These Changes?
- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs
## Testing
- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios
## Related
This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
## Changes 🏗️https://github.com/user-attachments/assets/7e49ed5b-c818-4aa3-b5d6-4fa86fada7ee
When the content of Summary + Outputs + Inputs is long enough, it will
show in this new `<ScrollableTabs />` component, which auto-scrolls the
content as you click on a tab.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] Check the new page with scrollable tabs
## Changes 🏗️
The main goal of this PR is to improve how we display inputs used for a
given task.
Agent inputs can be of many types (text, long text, date, select, file,
etc.). Until now, we have tried to display them as text, which has not
always worked. Given we already have `<RunAgentInputs />`, which uses
form elements to display the inputs ( _prefilled with data_ ), most of
the time it will look better and less buggy than text.
### Before
<img width="800" height="614" alt="Screenshot 2025-12-14 at 17 45 44"
src="https://github.com/user-attachments/assets/3d851adf-9638-46c1-adfa-b5e68dc78bb0"
/>
### After
<img width="800" height="708" alt="Screenshot 2025-12-14 at 17 45 21"
src="https://github.com/user-attachments/assets/367f32b4-2c30-4368-8d63-4cad06e32437"
/>
### Other improvements
- 🗑️ Removed `<EditInputsModal />`
- it is not used given the API does not support editing inputs for a
schedule yt
- Made `<InformationTooltip />` icon size customisable
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] Check the new view tasks use the form elements instead of text to
display inputs
## Changes 🏗️
We have the setup for light/dark mode support ( Tailwind + `next-themes`
), but not the capacity yet from contributions to make the app dark-mode
ready. First, we need to make it look good in light mode 😆
This disables `dark:` mode classes on the code, to prevent the app
looking oopsie when the user is seeing it with a browser with dark mode
preference:
### Before these changes
<img width="800" height="739" alt="Screenshot 2025-12-14 at 17 09 25"
src="https://github.com/user-attachments/assets/76333e03-930a-40b6-b91e-47ee01bf2c00"
/>
### After
<img width="800" height="722" alt="Screenshot 2025-12-14 at 16 55 46"
src="https://github.com/user-attachments/assets/34d85359-c68f-474c-8c66-2bebf28f923e"
/>
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app on a browser with dark mode preference
- [x] It still looks in light mode without broken styles
- Resolves#11599
### Changes 🏗️
- Manually update item counts when initiating a task from `EmptyTasks`
view
- Other improvements made while debugging
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `NewAgentLibraryView` transitions to full layout when a first task
is created
- [x] `NewAgentLibraryView` transitions to full layout when a first
trigger is set up
Preserve user searches in the new builder and cache search results for
more efficiency.
Search is saved, so the user can see their previous searches.
### Changes 🏗️
- Add `BuilderSearch` column&migration to save user search (with all
filters)
- Builder `db.py` now caches all search results using `@cached` and
returns paginated results, so following pages are returned much quicker
- Score and sort results
- Update models&routes
- Update frontend, so it works properly with modified endpoints
- Frontend: store `serachId` and use it for subsequent searches, so we
don't save partial searches (e.g. "b", "bl", ..., "block"). Search id is
reset when user clears the search field.
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Search works and is cached
- [x] Search sorts results
- [x] Searches are preserved properly
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Single dollar signs ($10, $variable) are commonly used in content and
were being incorrectly interpreted as inline LaTeX math delimiters. This
change disables that behavior while keeping double dollar sign ($$...$$)
math blocks working.
## Changes 🏗️
• Configure remarkMath plugin with singleDollarTextMath: false in
MarkdownRenderer.tsx
• Double dollar sign display math ($$...$$) continues to work as
expected
• Single dollar signs are no longer interpreted as inline math
delimiters
## Checklist 📋
For code changes:
-[x] I have clearly listed my changes in the PR description
-[x] I have made a test plan
-[x] I have tested my changes according to the test plan:
-[x] Verify content with dollar amounts (e.g., “$100”) renders as plain
text
-[x] Verify double dollar sign math blocks ($$x^2$$) still render as
LaTeX
-[x] Verify other markdown features (code blocks, tables, links) still
work correctly
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves#11586
- Follow-up to #11580
### Changes 🏗️
- Fix logic to include manual triggers as a possibility
- Fix input render logic to use trigger setup schema if applicable
- Fix rendering payload input for externally triggered runs
- Amend `RunAgentModal` to load preset inputs+credentials if selected
- Amend `SelectedTemplateView` to use modified input for run (if
applicable)
- Hide non-applicable buttons in `SelectedRunView` for externally
triggered runs
- Implement auto-navigation to `SelectedTriggerView` on trigger setup
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Can set up manual triggers
- [x] Navigates to trigger view after setup
- [x] Can set up automatic triggers
- [x] Can create templates from runs
- [x] Can run templates
- [x] Can run templates with modified input
I want to be able to automate some actions on social media or our
sevrver in response to actions from discord
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test Stars
- [x] Test Discussions
- [x] Test Issues
- [x] Test Release
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
<!-- Clearly explain the need for these changes: -->
We have lots we want to do with google sheets and we don't want a lack
of blocks to be a limiter so I pre-ddi a lot of blocks!
### Changes 🏗️
Adds 24 new blocks for google sheets (tested and working)
```
|-----|-------------------------------------------|----------------------------------------|
| 1 | GoogleSheetsFilterRowsBlock | Filter rows based on column conditions | ✅ |
| 2 | GoogleSheetsLookupRowBlock | VLOOKUP-style row lookup | ✅ |
| 3 | GoogleSheetsDeleteRowsBlock | Delete rows from a sheet | ✅ |
| 4 | GoogleSheetsGetColumnBlock | Get data from a specific column | ✅ |
| 5 | GoogleSheetsSortBlock | Sort sheet data | ✅ |
| 6 | GoogleSheetsGetUniqueValuesBlock | Get unique values from a column | ✅ |
| 7 | GoogleSheetsInsertRowBlock | Insert rows into a sheet | ✅ |
| 8 | GoogleSheetsAddColumnBlock | Add a new column | ✅ |
| 9 | GoogleSheetsGetRowCountBlock | Get the number of rows | ✅ |
| 10 | GoogleSheetsRemoveDuplicatesBlock | Remove duplicate rows | ✅ |
| 11 | GoogleSheetsUpdateRowBlock | Update an existing row | ✅ |
| 12 | GoogleSheetsGetRowBlock | Get a specific row by index | ✅ |
| 13 | GoogleSheetsDeleteColumnBlock | Delete a column | ✅ |
| 14 | GoogleSheetsCreateNamedRangeBlock | Create a named range | ✅ |
| 15 | GoogleSheetsListNamedRangesBlock | List all named ranges | ✅ |
| 16 | GoogleSheetsAddDropdownBlock | Add dropdown validation to cells | ✅ |
| 17 | GoogleSheetsCopyToSpreadsheetBlock | Copy sheet to another spreadsheet | ✅ |
| 18 | GoogleSheetsProtectRangeBlock | Protect a range from editing | ✅ |
| 19 | GoogleSheetsExportCsvBlock | Export sheet as CSV | ✅ |
| 20 | GoogleSheetsImportCsvBlock | Import CSV data | ✅ |
| 21 | GoogleSheetsAddNoteBlock | Add notes to cells | ✅ |
| 22 | GoogleSheetsGetNotesBlock | Get notes from cells | ✅ |
| 23 | GoogleSheetsShareSpreadsheetBlock | Share spreadsheet with users | ✅ |
| 24 | GoogleSheetsSetPublicAccessBlock | Set public access permissions | ✅ |
```
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested using the attached agent
[super test for
spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
>
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
<!-- Clearly explain the need for these changes: -->
In the File Widget, the upload button was incorrectly behaving like a
submit button. When users clicked it, the rjsf library immediately
triggered form validation and displayed validation errors, even though
the user was only trying to upload a file.
This happened because HTML buttons inside a form default to
`type="submit"`, which triggers form submission on click. By explicitly
setting `type="button"` on all file-related buttons, we prevent them
from submitting the form while still allowing them to trigger the file
input dialog.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Added `type="button"` attribute to the clear button in the compact
variant
- Added `type="button"` attribute to the upload button in the compact
variant
- Added `type="button"` attribute to the "Browse File" button in the
default variant
This ensures that clicking any of these buttons only triggers the
intended file selection/upload action without causing unwanted form
validation or submission.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested clicking the upload button in a form with File Widget -
form should not submit or show validation errors
- [x] Tested clicking the clear button - should clear the file without
triggering form validation
- [x] Tested clicking the "Browse File" button - should open file dialog
without triggering form validation
- [x] Verified file upload functionality still works correctly after
selecting a file
<!-- Clearly explain the need for these changes: -->
When opening a graph from the Library, the `flowVersion` query parameter
was not being set in the URL. This caused issues when the graph data
didn't contain an internal `graphVersion`, resulting in the builder
failing and graphs breaking when running.
The `useGetV1GetSpecificGraph` hook relies on the `flowVersion` query
parameter to fetch the correct graph version. Without it being properly
set in the URL, the graph loading logic fails when the version
information is missing from the graph data itself.
### Changes 🏗️
- Added `setQueryStates` to the `useQueryStates` hook return value in
`useFlow.ts`
- Added logic to sync `flowVersion` to the URL query parameters when a
graph is loaded
- When `graph.version` is available, it now updates the `flowVersion`
query parameter in the URL (defaults to `1` if version is undefined)
This ensures the URL stays in sync with the loaded graph's version,
preventing builder failures and execution issues.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a graph from the Library that doesn't have a version in the
URL
- [x] Verify that `flowVersion` is automatically added to the URL query
parameters
- [x] Verify that the graph loads correctly and can be executed
- [x] Verify that graphs with existing `flowVersion` in URL continue to
work correctly
- [x] Verify that graphs opened from Library with version information
sync correctly
## Changes 🏗️
### Adjust layout and styles on mobile 📱
<img width="448" height="843" alt="Screenshot 2025-12-09 at 22 53 14"
src="https://github.com/user-attachments/assets/159bdf4f-e6b2-42f5-8fdf-25f8a62c62d1"
/>
### Make the sidebar cards have contextual actions
<img width="486" height="243" alt="Screenshot 2025-12-09 at 22 53 27"
src="https://github.com/user-attachments/assets/2f530168-3217-47c4-b08d-feccbb9e9152"
/>
Depending on the card type, different type of actions are shown...
### Make buttons in "About agent" card do something
<img width="344" height="346" alt="Screenshot 2025-12-09 at 22 54 01"
src="https://github.com/user-attachments/assets/47181f80-1f68-4ef1-aecc-bbadc7cc9c44"
/>
### Other
- Hide `Schedule` button for agents with trigger run type
- Adjust secondary button background colour...
- Make drawer content scrollable on mobile
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run locally and test the above
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
<!-- Clearly explain the need for these changes: -->
When the content inside the credential select dropdown becomes too long,
the adjacent link buttons lose their rounded shape and appear squarish.
This happens when the text stretches the container or affects the layout
of the buttons.
The issue occurs because the button's width can shrink below its
intended size when the flex container is stretched by long credential
names. By adding an explicit minimum width constraint with `!min-w-8`,
we ensure the button maintains its proper dimensions and rounded
appearance regardless of the select dropdown's content length.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Added `!min-w-8` to the external link button's className in
`SelectCredential` component to enforce a minimum width of 2rem (8 *
0.25rem)
- This ensures the button maintains its rounded shape even when the
adjacent select dropdown contains long credential names
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested credential select with short credential names - button
should maintain rounded shape
- [x] Tested credential select with very long credential names (e.g.,
long provider names, usernames, and hosts) - button should maintain
rounded shape
### Changes 🏗️
This PR migrates the copy/paste functionality for graph nodes and edges
from local storage to the Clipboard API. This change addresses storage
limitations and enables cross-tab copying.
https://github.com/user-attachments/assets/6ef55713-ca5b-4562-bb54-4c12db241d30
**Key changes:**
- Replaced `localStorage` with `navigator.clipboard` API for copy/paste
operations
- Added `CLIPBOARD_PREFIX` constant (`"autogpt-flow-data:"`) to identify
our clipboard data and prevent conflicts with other clipboard content
- Added toast notifications to provide user feedback when copying nodes
- Added error handling for clipboard read/write operations with console
error logging
- Removed dependency on `@/services/storage/local-storage` for copied
flow data
- Updated `useCopyPaste` hook to use async clipboard operations with
proper promise handling
**Benefits:**
- ✅ Removes local storage size limitations (5-10MB)
- ✅ Enables copying nodes between browser tabs/windows
- ✅ Provides better user feedback through toast notifications
- ✅ More standard approach using native browser Clipboard API
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select one or more nodes in the flow editor
- [x] Press `Ctrl+C` (or `Cmd+C` on Mac) to copy nodes
- [x] Verify toast notification appears showing "Copied successfully"
with node count
- [x] Press `Ctrl+V` (or `Cmd+V` on Mac) to paste nodes
- [x] Verify nodes are pasted at viewport center with new unique IDs
- [x] Verify edges between copied nodes are also pasted correctly
- [x] Test copying nodes in one browser tab and pasting in another tab
(should work)
- [x] Test copying non-flow data (e.g., text) and verify paste doesn't
interfere with flow editor
## Changes 🏗️
Add Templates to the new Agent Library page:
<img width="800" height="889" alt="Screenshot 2025-12-09 at 14 10 01"
src="https://github.com/user-attachments/assets/85f0d478-f3f9-4ccf-81df-b9a7f4ae8849"
/>
- You can create a template from a run ( new action button )
- Templates are listed and can be selected on the sidebar
- When viewing a template, you can edit it, create a task or delete it
Add Triggers to the new Agent Library page:
<img width="800" height="836" alt="Screenshot 2025-12-09 at 14 10 43"
src="https://github.com/user-attachments/assets/c722f807-c72f-4a4d-8778-e36bea203f6e"
/>
- When an agent contains a trigger block, on the modal it will create a
trigger
- When there are triggers, they are listed on the sidebar
- A trigger can be viewed and edited
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the new page locally
## Summary
<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>
- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models
## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries
## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
<img width="500" height="927" alt="Screenshot 2025-12-08 at 16 30 04"
src="https://github.com/user-attachments/assets/97170302-50ae-4750-99e1-275861ee9e08"
/>
<img width="500" height="897" alt="Screenshot 2025-12-08 at 16 30 10"
src="https://github.com/user-attachments/assets/0a7ac828-ebea-40de-a19e-fbf77b0c2eb7"
/>
Update the new **Run Agent Modal** as a new view. From now on, sections
( inputs, credentials, etc... ) are enclosed. Refactor all the existing
code to be more readable and [adhere to code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md).
### Other improvements
- Refactor `<CredentialInputs />`:
- to adhere to code conventions
- to show all existing credentials for a given provider
- to allow deletion of a given credential
- to allow to select/switch credentials for a given task
- Run Graph Errors:
- when the run fails, show the errors nicely on the toast and link to
the builder for further debugging
- Design System:
- secondary button colour has been updated
- labels size for form fields has been updated
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] Test the above updates running agents from the new library page
## Changes 🏗️
<img width="795" height="275" alt="Screenshot 2025-12-04 at 21 05 55"
src="https://github.com/user-attachments/assets/e7572635-3976-4822-89ea-3e5e26196ab8"
/>
Allow to close toasts 🍞
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Toast have a close button top-right
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.
### Changes 🏗️
- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
- [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
- [x] Verify OAuth flow works with simplified scopes
- [x] Verify file picker works with merged credentials field
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
>
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.
### Changes 🏗️
- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution
**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
- [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required - this only adds test data and a
loading script.
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication
## Changes Made
### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
1. YouTube/Overview video (if exists)
2. First image (hero)
3. Agent Output Demo (if exists)
4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`
### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field
## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.
**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types`
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)
**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified providers with SDK-defined auth types return correct
flags
- [x] Verified legacy providers still work with fallback behavior
- [x] Tested the `/integrations/providers` endpoint returns expected
data
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required for this PR.
### Changes 🏗️
Mark ValueError as known block errors
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test it still works
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
>
> - **Backend**:
> - **Airtable (`backend/blocks/airtable/_api.py`)**:
> - Replace `print(params)` with `logger.debug` in `create_base`.
> - **Reddit (`backend/blocks/reddit.py`)**:
> - Add `logging` import and `logger` initialization.
> - Replace login `print` with `logger.info` in `get_praw`.
> - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
> - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Make onboarding task completion backend-authoritative which prevents
cheating (previously users could mark all tasks as completed instantly
and get rewards) and makes task completion more reliable. Completion of
tasks is moved backend with exception of introductory onboarding tasks
and visit-page type tasks.
### Changes 🏗️
- Move incrementing run counter backend and make webhook-triggered and
scheduled task execution count as well
- Use user timezone for calculating run streak
- Frontend task completion is moved from update onboarding state to
separate endpoint and guarded so only frontend tasks can be completed
- Graph creation, execution and add marketplace agent to library accept
`source`, so appropriate tasks can be completed
- Replace `client.ts` api calls with orval generated and remove no
longer used functions from `client.ts`
- Add `resolveResponse` helper function that unwraps orval generated
call result to 2xx response
Small changes&bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding can be finished
- [x] All tasks can be finished and work properly
- [x] Confetti works properly
## Changes 🏗️
- Make it less verbose and adapt the summary card to the new design
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run locally and see summary displaying well
- Added conditional rendering in `NodeArrayInput` component to only
render `NodeHandle` when `schema.items` is defined
- This prevents the error when array schemas don't have an `items`
property defined
- The input field rendering logic already had proper null checks, so
only the handle rendering needed to be fixed
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open the legacy builder and create a new agent
- [x] Add a Smart Decision block to the agent
- [x] Verify that the `conversation_history` field renders without
errors
- [x] Verify that array fields with defined `items` still render
correctly (e.g., test with other blocks that use arrays)
- [x] Verify that connecting/disconnecting handles to the
conversation_history field works correctly
- [x] Verify that the field can accept input values when not connected
## Changes 🏗️
<img width="800" height="767" alt="Screenshot 2025-12-04 at 17 40 10"
src="https://github.com/user-attachments/assets/37036246-bcdb-46eb-832c-f91fddfd9014"
/>
<img width="800" height="492" alt="Screenshot 2025-12-04 at 17 40 16"
src="https://github.com/user-attachments/assets/ba547e54-016a-403c-9ab6-99465d01af6b"
/>
On the new Agent Library page:
- Implement the new actions sidebar ( main change... )
- Refactor the layout/components to accommodate that
- Implement the missing "Summary" functionality
- Update icon buttons in Design system with new designs
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally and test it
### Changes 🏗️
This PR adds support for `host_scoped` credential type in the new
builder's `CredentialField` component. This enables blocks that require
sensitive headers for custom API endpoints to configure host-scoped
credentials directly from the credential field.
<img width="745" height="843" alt="Screenshot 2025-12-04 at 4 31 09 PM"
src="https://github.com/user-attachments/assets/d076b797-64c4-4c31-9c88-47a064814055"
/>
<img width="418" height="180" alt="Screenshot 2025-12-04 at 4 36 02 PM"
src="https://github.com/user-attachments/assets/b4fa6d8d-d8f4-41ff-ab11-7c708017f8fd"
/>
**Key changes:**
- **Added `HostScopedCredentialsModal` component**
(`models/HostScopedCredentialsModal/`)
- Modal dialog for creating host-scoped credentials with host pattern,
optional title, and dynamic header pairs (key-value)
- Auto-populates host from discriminator value (URL field) when
available
- Supports adding/removing multiple header pairs with validation
- **Enhanced credential filtering logic** (`helpers.ts`)
- Updated `filterCredentialsByProvider` to accept `schema` and
`discriminatorValue` parameters
- Added intelligent filtering for:
- Credential types supported by the block
- OAuth credentials with sufficient scopes
- Host-scoped credentials matched by host from discriminator value
- Extracted `getDiscriminatorValue` helper function for reusability
- **Updated `CredentialField` component**
- Added `supportsHostScoped` check in `useCredentialField` hook
- Conditionally renders `HostScopedCredentialsModal` when
`supportsHostScoped && discriminatorValue` is true
- Exports `discriminatorValue` for use in child components
- **Updated `useCredentialField` hook**
- Calculates `discriminatorValue` using new `getDiscriminatorValue`
helper
- Passes `schema` and `discriminatorValue` to enhanced
`filterCredentialsByProvider` function
- Returns `supportsHostScoped` and `discriminatorValue` for component
consumption
**Technical details:**
- Host extraction uses `getHostFromUrl` utility to parse host from
discriminator value (URL)
- Header pairs are managed as state with add/remove functionality
- Form validation uses `react-hook-form` with `zod` schema
- Credential creation integrates with existing API endpoints and query
invalidation
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify `HostScopedCredentialsModal` appears when block supports
`host_scoped` credentials and discriminator value is present
- [x] Test host auto-population from discriminator value (URL field)
- [x] Test manual host entry when discriminator value is not available
- [x] Test adding/removing multiple header pairs
- [x] Test form validation (host required, empty header pairs filtered
out)
- [x] Test credential creation and successful toast notification
- [x] Verify credentials list refreshes after creation
- [x] Test host-scoped credential filtering matches credentials by host
from URL
- [x] Verify existing credential types (api_key, oauth2, user_password)
still work correctly
- [x] Test OAuth scope filtering still works as expected
- [x] Verify modal only shows when `supportsHostScoped &&
discriminatorValue` conditions are met
<!-- Clearly explain the need for these changes: -->
Fixes an issue where copying a node via the context menu dialog fails on
the first attempt in a new session. The problem occurs because the node
selection state update and the copy operation happen in quick
succession, causing a race condition where `copySelectedNodes()` reads
the store before the selection state is properly updated.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Start a new browser session (or clear storage)
- [x] Open the flow editor
- [x] Right-click on a node and select "Copy Node" from the context menu
- [x] Verify the node is successfully copied on the first attempt
<!-- Clearly explain the need for these changes: -->
This PR enhances the FileInput component to support multiple variants
and modes, and integrates it into the form renderer as a file widget.
The changes enable a more flexible file input experience with both
server upload and local base64 conversion capabilities.
### Compact one
<img width="354" height="91" alt="Screenshot 2025-12-03 at 8 05 51 PM"
src="https://github.com/user-attachments/assets/1295a34f-4d9f-4a65-89f2-4b6516f176ff"
/>
<img width="386" height="96" alt="Screenshot 2025-12-03 at 8 06 11 PM"
src="https://github.com/user-attachments/assets/3c10e350-8ddc-43ff-bf0a-68b23f8db394"
/>
## Default one
<img width="671" height="165" alt="Screenshot 2025-12-03 at 8 05 08 PM"
src="https://github.com/user-attachments/assets/bd17c5f1-8cdb-4818-9850-a9e61252d8c1"
/>
<img width="656" height="141" alt="Screenshot 2025-12-03 at 8 05 21 PM"
src="https://github.com/user-attachments/assets/1edf260f-6245-44b9-bd3e-dd962f497413"
/>
### Changes 🏗️
#### FileInput Component Enhancements
- **Added variant support**: Introduced `default` and `compact` variants
- `default`: Full-featured UI with drag & drop, progress bar, and
storage note
- `compact`: Minimal inline design suitable for tight spaces like node
inputs
- **Added mode support**: Introduced `upload` and `base64` modes
- `upload`: Uploads files to server (requires `onUploadFile` and
`uploadProgress`)
- `base64`: Converts files to base64 locally without server upload
- **Improved type safety**: Refactored props using discriminated unions
(`UploadModeProps | Base64ModeProps`) to ensure type-safe usage
- **Enhanced file handling**: Added `getFileLabelFromValue` helper to
extract file labels from base64 data URIs or file paths
- **Better UX**: Added `showStorageNote` prop to control visibility of
storage disclaimer
#### FileWidget Integration
- **Replaced legacy Input**: Migrated from legacy `Input` component to
new `FileInput` component
- **Smart variant selection**: Automatically selects `default` or
`compact` variant based on form context size
- **Base64 mode**: Uses base64 mode for form inputs, eliminating need
for server uploads in builder context
- **Improved accessibility**: Better disabled/readonly state handling
with visual feedback
#### Form Renderer Updates
- **Disabled validation**: Added `noValidate={true}` and
`liveValidate={false}` to prevent premature form validation
#### Storybook Updates
- **Expanded stories**: Added comprehensive stories for all variant/mode
combinations:
- `Default`: Default variant with upload mode
- `Compact`: Compact variant with base64 mode
- `CompactWithUpload`: Compact variant with upload mode
- `DefaultWithBase64`: Default variant with base64 mode
- **Improved documentation**: Updated component descriptions to clearly
explain variants and modes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test FileInput component in Storybook with all variant/mode
combinations
- [x] Test file upload flow in default variant with upload mode
- [x] Test base64 conversion in compact variant with base64 mode
- [x] Test file widget in form renderer (node inputs)
- [x] Test file type validation (accept prop)
- [x] Test file size validation (maxFileSize prop)
- [x] Test error handling for invalid files
- [x] Test disabled and readonly states
- [x] Test file clearing/removal functionality
- [x] Verify compact variant renders correctly in tight spaces
- [x] Verify default variant shows storage note only in upload mode
- [x] Test drag & drop functionality in default variant
When running a graph in the new builder, validation errors were only
displayed in toast notifications, making it difficult for users to
identify which specific fields had errors. Users needed to see
validation errors directly next to the problematic fields within each
node for better UX and faster debugging.
<img width="1319" height="953" alt="Screenshot 2025-12-03 at 12 48
15 PM"
src="https://github.com/user-attachments/assets/d444bc71-9bee-4fa7-8b7f-33339bd0cb24"
/>
### Changes 🏗️
- **Error handling in graph execution** (`useRunGraph.ts`):
- Added detection for graph validation errors using
`ApiError.isGraphValidationError()`
- Parse and store node-level errors from backend validation response
- Clear all node errors on successful graph execution
- Enhanced toast messages to guide users to fix validation errors on
highlighted nodes
- **Node store error management** (`nodeStore.ts`):
- Added `errors` field to node data structure
- Implemented `updateNodeErrors()` to set errors for a specific node
- Implemented `clearNodeErrors()` to remove errors from a specific node
- Implemented `getNodeErrors()` to retrieve errors for a specific node
- Implemented `setNodeErrorsForBackendId()` to set errors by backend ID
(supports matching by `metadata.backend_id` or node `id`)
- Implemented `clearAllNodeErrors()` to clear all node errors across the
graph
- **Visual error indication** (`CustomNode.tsx`, `NodeContainer.tsx`):
- Added error detection logic to identify both configuration errors and
output errors
- Applied error styling to nodes with validation errors (using `FAILED`
status styling)
- Nodes with errors now display with red border/ring to visually
indicate issues
- **Field-level error display** (`FieldTemplate.tsx`):
- Fetch node errors from store for the current node
- Match field IDs with error keys (handles both underscore and dot
notation)
- Display field-specific error messages below each field in red text
- Added helper function `getFieldErrorKey()` to normalize field IDs for
error matching
- **Utility helpers** (`helpers.ts`):
- Created `getFieldErrorKey()` function to extract field key from field
ID (removes `root_` prefix)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a graph with multiple nodes and intentionally leave
required fields empty
- [x] Run the graph and verify that validation errors appear in toast
notification
- [x] Verify that nodes with errors are highlighted with red border/ring
styling
- [x] Verify that field-specific error messages appear below each
problematic field in red text
- [x] Verify that error messages handle both underscore and dot notation
in field keys
- [x] Fix validation errors and run graph again - verify errors are
cleared
- [x] Verify that successful graph execution clears all node errors
- [x] Test with nodes that have `backend_id` in metadata vs nodes
without
- [x] Verify that nodes without errors don't show error styling
- [x] Test with nested fields and array fields to ensure error matching
works correctly
Text inputs in the form builder can be difficult to edit when dealing
with longer content. Users need a way to expand text inputs into a
larger, more comfortable editing interface, especially for multi-line
text, passwords, and longer string values.
https://github.com/user-attachments/assets/443bf4eb-c77c-4bf6-b34c-77091e005c6d
### Changes 🏗️
- **Added `InputExpanderModal` component**: A new modal component that
provides a larger textarea (300px min-height) for editing text inputs
with the following features:
- Copy-to-clipboard functionality with visual feedback (checkmark icon)
- Toast notification on successful copy
- Auto-focus on open for better UX
- Proper state management to reset values when modal opens/closes
- **Enhanced `TextInputWidget`**:
- Added expand button (ArrowsOutIcon) with tooltip for text, password,
and textarea input types
- Button appears inline next to the input field
- Integrated the new `InputExpanderModal` component
- Improved layout with flexbox to accommodate the expand button
- Added padding-right to input when expand button is visible to prevent
text overlap
- **Refactored file structure**:
- Moved `TextInputWidget.tsx` into `TextInputWidget/` directory
- Updated import path in `widgets/index.ts`
- **UX improvements**:
- Expand button only shows for applicable input types (text, password,
textarea)
- Number and integer inputs don't show expand button (not needed)
- Modal preserves schema title, description, and placeholder for context
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test expand button appears for text input fields
- [x] Test expand button appears for password input fields
- [x] Test expand button appears for textarea fields
- [x] Test expand button does NOT appear for number/integer inputs
- [x] Test clicking expand button opens modal with current value
- [x] Test editing text in modal and saving updates the input field
- [x] Test cancel button closes modal without saving changes
- [x] Test copy-to-clipboard button copies text and shows success state
- [x] Test toast notification appears on successful copy
- [x] Test modal resets to original value when reopened
- [x] Test modal auto-focuses textarea on open
- [x] Test expand button tooltip displays correctly
- [x] Test input field layout with expand button (no text overlap)
When users drag and drop nodes in the new flow editor, nodes can overlap
with each other, making the graph difficult to read and interact with.
This PR adds an automatic collision resolution algorithm that runs when
a node is dropped, ensuring nodes are automatically separated to prevent
overlaps and maintain a clean, readable graph layout.
### Changes 🏗️
- **Added collision resolution algorithm** (`resolve-collision.ts`):
- Implements an iterative collision detection and resolution system
using Flatbush for efficient spatial indexing
- Automatically resolves overlaps by moving nodes apart along the axis
with the smallest overlap
- Configurable options: `maxIterations`, `overlapThreshold`, and
`margin`
- Uses actual node dimensions (`width`, `height`, or `measured` values)
when available
- **Integrated collision resolution into Flow component**:
- Added `onNodeDragStop` callback that triggers collision resolution
after a node is dropped
- Configured with `maxIterations: Infinity`, `overlapThreshold: 0.5`,
and `margin: 15px`
- **Enhanced node dimension handling**:
- Updated `nodeStore.ts` to prioritize actual node dimensions
(`node.width`, `node.measured.width`) over hardcoded defaults when
calculating positions
- Ensures collision detection uses accurate node sizes
- **Added dependency**:
- Added `flatbush@4.5.0` for efficient spatial indexing and collision
detection
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node and drop it on top of another node - verify nodes
automatically separate
- [x] Drag multiple nodes to create overlapping clusters - verify all
overlaps are resolved
- [x] Drag nodes with different sizes (NOTE blocks vs regular blocks) -
verify collision detection uses correct dimensions
- [x] Drag nodes near the edge of the canvas - verify nodes don't get
pushed off-screen
- [x] Test with a graph containing many nodes (20+) - verify performance
is acceptable
- [x] Verify nodes maintain their positions when no collisions occur
- [x] Test with nodes that have custom measured dimensions - verify
accurate collision detection
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.
### Changes 🏗️
- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
- [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
- [x] Verify OAuth flow works with simplified scopes
- [x] Verify file picker works with merged credentials field
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
>
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
## Changes 🏗️
Fix concurrency grouping on Front-end workflows.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once merged
The Next.js API proxy was stripping the X-API-Key header when forwarding
requests to the backend, causing API key authentication to fail in
environments where requests go through the proxy (e.g., dev
environment).
### Changes 🏗️
- Updated `createRequestHeaders()` in
`frontend/src/lib/autogpt-server-api/helpers.ts` to forward the
`X-API-Key` header from the original request to the backend
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify API key authentication works when requests go through the
Next.js proxy
- [x] Verify existing authentication (Authorization header) still works
- [x] Verify admin impersonation header forwarding still works
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Added the same concurrency optimisation to the Front-end and Fullstack
CI workflows. It will:
- Cancel in-progress runs when a new workflow starts for the same
branch/PR
- Reduce CI costs by avoiding redundant runs
- Ensure only the latest workflow runs
- Both workflows now use the same concurrency strategy to optimise CI
billing.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see as we push commits...
When agent execution fails, the error toast was only showing
`error.message`, which often lacks detail. The API returns more specific
error messages in `error.response.detail.message`, but these weren't
being displayed to users, making debugging harder.
<img width="1018" height="796" alt="Screenshot 2025-12-03 at 2 14 17 PM"
src="https://github.com/user-attachments/assets/6a93aed9-9e18-450a-9995-8760d7d5ca35"
/>
### Changes 🏗️
- Updated error message extraction in `useAgentRunModal` to check
`error.response.detail.message` first, then fall back to
`error.message`, then to the default message
- This ensures users see the most specific error message available from
the API response
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Triggered an agent execution error (e.g., invalid inputs) and
verified the toast shows the detailed error message from
`error.response.detail.message`
- [x] Verified fallback to `error.message` when
`error.response.detail.message` is not available
- [x] Verified fallback to default message when neither is available
## Changes 🏗️
<img width="800" height="614" alt="Screenshot 2025-12-03 at 14 52 46"
src="https://github.com/user-attachments/assets/c7012d7a-96d4-4268-a53b-27f2f7322a39"
/>
- Create a new `useExecutionEvents()` hook that can be used to subscribe
to agent executions
- now both the **Activity Dropdown** and the new **Library Agent Page**
use that
- so subscribing to executions is centralised on a single place
- Apply a couple of design fixes
- Fix not being able to select the new templates tab
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally and verify the above
## Changes 🏗️
### Design updates
Design updates on the new Library page. New empty views with
illustration and overall changes on the sidebar and selected run
sections...
<img width="800" height="871" alt="Screenshot 2025-12-03 at 14 03 45"
src="https://github.com/user-attachments/assets/6970af99-dda8-4cd8-a9b5-97fe5ee2032e"
/>
<img width="800" height="844" alt="Screenshot 2025-12-03 at 14 03 52"
src="https://github.com/user-attachments/assets/92e6af79-db9f-4098-961f-9ae3b3300ffa"
/>
<img width="800" height="816" alt="Screenshot 2025-12-03 at 14 03 57"
src="https://github.com/user-attachments/assets/7e23e612-b446-4c53-8ea2-f0e2b1574ec3"
/>
<img width="800" height="862" alt="Screenshot 2025-12-03 at 14 04 07"
src="https://github.com/user-attachments/assets/e3398232-e74b-4a06-8702-30a96862dc00"
/>
### Architecture
- Make selected tabs/items synced with the URL via `?activeTab=` and
`?activeItem=`, so it is easy and predictable to change their state...
- Some minor updates on the design system I missed on previous PRs ...
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally and check the new page ( still wip )
- [OPEN-2871: TypeError: URL constructor: www.agpt.co is not a valid
URL.](https://linear.app/autogpt/issue/OPEN-2871/typeerror-url-constructor-wwwagptco-is-not-a-valid-url)
- [Sentry Issue BUILDER-56D: TypeError: URL constructor: www.agpt.co is
not a valid
URL.](https://significant-gravitas.sentry.io/issues/7081476631/)
### Changes 🏗️
- Amend URL handling in `CreatorLinks` to correctly handle URLs with
implicit schema
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, CI is sufficient
### Changes 🏗️
- Modify the HTTP block to handle HTTP errors (4xx, 5xx) by returning
response objects instead of raising exceptions.
- This allows proper handling of client_error and server_error outputs.
Fixes
[AUTOGPT-SERVER-6VP](https://sentry.io/organizations/significant-gravitas/issues/7023985892/).
The issue was that: HTTP errors are raised as exceptions by `Requests`
default behavior, bypassing the block's intended error output handling,
resulting in `BlockUnknownError`.
This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 4902617
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/7023985892/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested with a service that will return 4XX and 5XX errors to make
sure the correct paths are followed
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> HTTP block now returns 4xx/5xx responses instead of raising, and
Requests gains retry_max_attempts with last-result handling.
>
> - **Backend**
> - **HTTP block (`backend/blocks/http.py`)**:
> - Use `Requests(raise_for_status=False, retry_max_attempts=1)` so
4xx/5xx return response objects and route to
`client_error`/`server_error` outputs.
> - **HTTP client util (`backend/util/request.py`)**:
> - Add `retry_max_attempts` option with `stop_after_attempt` and
`_return_last_result` to return the final response when retries stop.
> - Build `tenacity` retry config dynamically in `Requests.request()`;
validate `retry_max_attempts >= 1` when provided.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
fccae61c26. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: nicholas.tindle <nicholas.tindle@agpt.co>
Allow the external api to manage credentials
### Changes 🏗️
- add ability to external api to manage credentials
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested it works
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces external API endpoints to manage integrations (OAuth
initiation/completion and credential CRUD), adds external OAuth state
fields, and new API key permissions/config.
>
> - **External API – Integrations**:
> - Add router `backend/server/external/routes/integrations.py` with
endpoints to:
> - `GET /v1/integrations/providers` list providers (incl. default
scopes)
> - `POST /v1/integrations/{provider}/oauth/initiate` and `POST
/oauth/complete` for external OAuth (custom callback, state)
> - `GET /v1/integrations/credentials` and `GET /{provider}/credentials`
to list credentials
> - `POST /{provider}/credentials` to create `api_key`, `user_password`,
`host_scoped` creds; `DELETE /{provider}/credentials/{cred_id}` to
delete
> - Wire router in `backend/server/external/api.py`.
> - **Auth/Permissions**:
> - Add `APIKeyPermission` values: `MANAGE_INTEGRATIONS`,
`READ_INTEGRATIONS`, `DELETE_INTEGRATIONS` (schema + migration +
OpenAPI).
> - **Data model / Store**:
> - Extend `OAuthState` with external-flow fields: `callback_url`,
`state_metadata`, `api_key_id`, `is_external`.
> - Update `IntegrationCredentialsStore.store_state_token(...)` to
accept/store external OAuth metadata.
> - **OAuth providers**:
> - Set GitHub handler `DEFAULT_SCOPES = ["repo"]` in
`integrations/oauth/github.py`.
> - **Config**:
> - Add `config.external_oauth_callback_origins` in
`backend/util/settings.py` to validate allowed OAuth callback origins.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
249bba9e59. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Summary
- Clamp `start_time` to at least 10 seconds before request time (Twitter
API requirement)
- Update input description to document this automatic adjustment
- Fix `serialize_list` to handle `None` data gracefully (exposed by the
fix)
## Background
Twitter API returns `400 Bad Request` when `start_time` is less than 10
seconds before the request time. Users providing current/future times
would hit this error.
**Sentry Issue:**
[BUILDER-3PG](https://significant-gravitas.sentry.io/issues/6919685270/)
## Affected Blocks
- `TwitterSearchRecentTweetsBlock`
- `TwitterGetUserMentionsBlock`
- `TwitterGetHomeTimelineBlock`
- `TwitterGetUserTweetsBlock`
## Test plan
- [x] Tested `TwitterSearchRecentTweetsBlock` with current time as
`start_time`
- [x] Verified clamping works and API call succeeds
- [x] Verified "No tweets found" is returned correctly when search
window has no results
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Summary
Fix broken `update_agent_version_in_library` functionality by eagerly
loading `AgentGraph` while loading the library, also consolidating
duplicate code that updates agent version in library and configures HITL
safe mode settings.
## Problem
The `update_agent_version_in_library` is currently failed with this
error:
```
File "/Users/abhi/Documents/AutoGPT/autogpt_platform/backend/backend/server/v2/library/model.py", line 110, in from_db
raise ValueError("Associated Agent record is required.")
ValueError: Associated Agent record is required.
```
also logic was duplicated across two router endpoints with identical
implementations, creating maintenance burden and potential for
inconsistencies.
## Changes Made
### Created Helper Method
- Add `_update_library_agent_version_and_settings()` helper function
- Fixes broken `update_agent_version_in_library` by centralizing the
logic
- Uses proper error handling and settings merging with `model_copy()`
### Replaced Duplicate Code
- **In `update_graph` function** (v1.py:863) - replaced 13 lines with
single helper call
- **In `set_graph_active_version` function** (v1.py:920) - replaced 13
lines with single helper call
### Benefits
- **Fixes broken functionality**: Centralizes
`update_agent_version_in_library` logic
- **DRY Principle**: Eliminates code duplication across two router
endpoints
- **Maintainability**: Single place to modify the library agent update
logic
- **Consistency**: Ensures both endpoints use identical logic for HITL
safe mode configuration
- **Readability**: Cleaner, more focused endpoint implementations
## Technical Details
The helper method fixes broken `update_agent_version_in_library` by
handling:
1. Updating agent version in library via
`update_agent_version_in_library()`
2. Conditionally setting `human_in_the_loop_safe_mode: true` if graph
has HITL blocks and setting is not already configured
3. Proper settings merging to preserve existing configuration
## Testing
- [x] Code compiles and passes type checking
- [x] Pre-commit hooks pass (linting, formatting, type checking)
- [x] Both affected endpoints maintain same functionality with cleaner
implementation
Fixes broken duplicate code identified in v1.py router endpoints for
`update_agent_version_in_library`.
Co-authored-by: Claude <noreply@anthropic.com>
This PR addresses two issues:
1. **Empty values being sent to backend**: Node inputs were including
empty strings, null, and undefined values when converting to backend
format, causing unnecessary data to be sent.
2. **Incorrect connection detection in ObjectEditor**: The component was
checking if the parent field was connected instead of checking
individual key-value pairs, causing incorrect UI state for dynamic
properties.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **nodeStore.ts**: Added `pruneEmptyValues` utility to remove
empty/null/undefined values from `hardcodedValues` before converting
nodes to backend format. This ensures only meaningful input values are
sent to the backend API.
- **ObjectEditorWidget.tsx**:
- Removed unused `fieldKey` prop from component interface
- Fixed connection detection logic to check individual key-value handle
IDs instead of the parent field key, ensuring correct connection state
for each dynamic property in object editors
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a node with object input fields and verify empty values are
not sent to backend
- [x] Add multiple key-value pairs to an object editor widget
- [x] Connect individual key-value pairs via handles and verify
connection state is correctly displayed
- [x] Disconnect a key-value pair and verify the input field becomes
editable
- [x] Save and reload an agent with object inputs to verify empty values
are pruned correctly
- [x] Verify that non-empty values are still correctly preserved and
sent to backend
This PR fixes an issue where the `isGraphRunning` state wasn't being
updated when users manually executed graphs. This caused the UI to not
show proper feedback that a graph was running, such as the running
background animation. The fix adds a `setIsGraphRunning` method to the
graph store and ensures it's called both when starting execution (for
immediate UI feedback) and when errors occur (to reset the state).
### Changes 🏗️
- **Added `setIsGraphRunning` method to graphStore**: Provides a way to
manually control the graph running state
- **Set running state on manual execution**: Updates `isGraphRunning` to
`true` immediately when users click the Run button, providing instant UI
feedback
- **Reset state on execution errors**: Ensures `isGraphRunning` is set
back to `false` if the graph execution fails, preventing the UI from
showing incorrect running state
- **Improved UI responsiveness**: Users now see immediate visual
feedback (like the running background) when they start a graph execution
- **Cleaned up unused import**: Removed unnecessary `flowID` from
useQueryStates in Flow component
- I’ve also changed the colour of the running background to a lighter
shade of purple.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Click Run button and verify the running background appears
immediately
- [x] Test with graphs that require input values via the input dialog
- [x] Verify running state is cleared when execution completes
- [x] Test error scenarios and confirm running state is reset on failure
- [x] Confirm all UI elements that depend on `isGraphRunning` update
correctly
- [x] Test multiple consecutive runs to ensure state management is
consistent
## Summary
Fix critical validation errors in GraphExecutionEntry and
NodeExecutionEntry models that were preventing HITL block execution, and
re-enable HITL test suite.
## Root Cause
After introducing ExecutionContext as a mandatory field, existing
messages in the execution queue lacked this field, causing validation
failures:
```
[GraphExecutor] [ExecutionManager] Could not parse run message: 1 validation error for GraphExecutionEntry
execution_context
Field required [type=missing, input_value={'user_id': '26db15cb-a29...
```
## Changes Made
### 🔧 Execution Model Fixes (`backend/data/execution.py`)
- **Add backward compatibility**: `execution_context: ExecutionContext =
Field(default_factory=ExecutionContext)`
- **Prevent shared mutable defaults**: Use `default_factory` instead of
direct instantiation to avoid mutation issues
- **Ignore unknown fields**: Add `model_config = {"extra": "ignore"}`
for future compatibility
- **Applied to both models**: GraphExecutionEntry and NodeExecutionEntry
### 🧪 Re-enable HITL Test Suite
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`human_review_test.py`
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`review_routes_test.py`
- **Restore test coverage**: HITL functionality now properly tested in
CI
## Default ExecutionContext Behavior
When execution_context is missing from old messages, provides safe
defaults:
- `safe_mode: bool = True` (HITL blocks require approval by default)
- `user_timezone: str = "UTC"` (safe timezone default)
- `root_execution_id: Optional[str] = None`
- `parent_execution_id: Optional[str] = None`
## Impact
- ✅ **Fixes deployment validation errors** in dev environment
- ✅ **Maintains backward compatibility** with existing queue messages
- ✅ **Restores proper HITL test coverage** in CI
- ✅ **Ensures isolation**: Each execution gets its own ExecutionContext
instance
- ✅ **Future-proofs**: Protects against message format changes
## Testing
- [x] HITL test suite re-enabled and should pass in CI
- [x] Existing executions continue to work with sensible defaults
- [x] New executions receive proper ExecutionContext from caller
- [x] Verified `default_factory` prevents shared mutable instances
## Files Changed
- `backend/data/execution.py` - Add backward-compatible ExecutionContext
defaults
- `backend/data/human_review_test.py` - Re-enable test suite
- `backend/server/v2/executions/review/review_routes_test.py` -
Re-enable test suite
Resolves the "Field required" validation error preventing HITL block
execution in dev environment.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add `jlumbroso/free-disk-space@v1.3.1` action to claude.yml workflow
- Frees up disk space on Ubuntu runner before heavy Docker and
dependency setup
- Skips `large-packages` (slow) and `docker-images` (needed for
workflow)
## Test plan
- [x] Verify claude.yml workflow runs successfully without disk space
errors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
Adds a new Discord block that allows users to create threads in Discord
channels. This addresses issue OPEN-2666 which requested the ability to
create Discord threads from workflows.
## Solution
Implemented `CreateDiscordThreadBlock` in
`autogpt_platform/backend/backend/blocks/discord/bot_blocks.py` with the
following features:
- Create public or private threads in Discord channels via bot token
- Support for both channel ID and channel name lookup (with optional
server name)
- Configurable thread type (public/private toggle)
- Configurable auto-archive duration (60, 1440, 4320, or 10080 minutes)
- Optional initial message to send in the newly created thread
- Outputs: status, thread_id, and thread_name for workflow chaining
The block follows the existing Discord block patterns and includes
proper error handling for permissions, channel not found, and login
failures.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verify block appears in the workflow builder UI
- [x] Test creating a public thread with valid bot token and channel ID
- [x] Test creating a private thread with valid bot token and channel
name
- [x] Test with invalid channel ID/name to verify error handling
- [x] Test with bot lacking thread creation permissions
- [x] Verify thread_id output can be chained to subsequent blocks
- [x] Test auto-archive duration options (60, 1440, 4320, 10080 minutes)
- [x] Test sending initial message in newly created thread
Video to show the blocks working!
https://github.com/user-attachments/assets/f248f315-05b3-47e2-bd6b-4c64d53c35fc
Simplifying the chat tool system to use only 2 tools
### Changes 🏗️
- remove old tools
- expand run_agent tool to include all stages
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested adding credentials work
- [x] tested running an agent works
- [x] tested scheduling an agent works
Sometime block errors are raised with message set as None, we now handle
this case
### Changes 🏗️
- handle case of None message
- Add tests
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] write unit tests
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Ensure `BlockExecutionError` and `BlockUnknownError` provide default
messages when given None/empty input, with new unit tests covering
formatting and inheritance.
>
> - **Backend**:
> - `backend/util/exceptions.py`:
> - `BlockExecutionError`: default `None` message to `"Output error was
None"`.
> - `BlockUnknownError`: default empty/`None` message to `"Unknown error
occurred"`.
> - **Tests**:
> - `backend/util/exceptions_test.py`: add tests for message formatting,
`None`/empty handling, and exception inheritance.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9b31ae47. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Youtube can blocks cloud ips causing the youtube transcribe blocks to
not work. This PR adds webshare proxy to get around this issue
### Changes 🏗️
- add webshare proxy to youtube transcribe block
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have tested this works locally using the proxy
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
>
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
> - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d060898488. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Eliminates explicit Google Sheets API scopes from credentials fields in
all Google Sheets-related blocks. This change may be intended to
centralize or dynamically manage API scopes elsewhere, simplifying block
configuration.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
- removes the scopes we aren't approved to use
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Bently tested it on his fresh account and it worked!
## Changes 🏗️
Update the new library agent page, empty view to look like:
<img width="900" height="1060" alt="Screenshot 2025-12-01 at 14 12 10"
src="https://github.com/user-attachments/assets/e6a22a4f-35f4-434e-bbb1-593390034b9a"
/>
Now we display an **About this agent** card on the left when the agent
is published on the marketplace. I expanded the:
```
/api/library/agents/{id}
```
endpoint to return as well the following:
```js
{
// ...
created_at: "timestamp",
marketplace_listing: {
creator: { name: "string", "slug": string, id: "string" },
name: "string",
slug: "string",
id: "string"
}
}
```
To be able to display this extra information on the card and link to the
creator and marketplace pages.
Also:
- design system updates regarding navbar and colors
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run locally and see the new page for an agent with no runs
We want to allow external tools to explore the marketplace and use the
chat agent tools
### Changes 🏗️
- add store api routes
- add tool api routes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested all endpoints work
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
## Changes 🏗️
Update tokens of the design system with new values 🖌️🎨
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Storybook build passes, no type errors
This PR improves the user experience by defaulting the node output
accordion to an expanded state. Previously, users had to manually expand
the accordion to view execution results, which added an unnecessary
click to the workflow. With this change, output data is immediately
visible when available, allowing users to quickly see the results of
their node executions. The ability to collapse the accordion is
preserved for users who prefer a more compact interface.
### Changes 🏗️
- **Changed default state of node output accordion**: The node output
section now defaults to expanded (`isExpanded = true`) instead of
collapsed
- **Improved user experience**: Users can now immediately see node
execution results without needing to manually expand the accordion
- **Maintains collapse functionality**: Users can still manually
collapse the accordion if they prefer a more compact view
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a node and verify the output accordion is expanded by
default
- [x] Verify output data is immediately visible after node execution
- [x] Test that the accordion can still be manually collapsed
- [x] Confirm the accordion state resets to expanded when switching
between nodes
- [x] Test with different types of output data (simple values, objects,
arrays)
This PR implements a minimum movement threshold of 50 pixels for node
position changes before they are logged to the history system. This
prevents the undo/redo history from being cluttered with minor,
unintentional movements that occur when users interact with block inputs
or accidentally nudge nodes.
### Changes 🏗️
- **Added movement threshold for history tracking**: Implemented a 50px
minimum movement requirement before logging node position changes to
history
- **Prevents history spam**: Stops small, unintentional movements (like
clicking on inputs inside blocks) from cluttering the undo/redo history
- **Tracks drag start positions**: Maintains initial positions when
dragging begins to accurately calculate total movement distance
- **Improved history management**: Only significant node movements are
now recorded, matching the behavior of the old builder
- **Memory efficient**: Cleans up tracked positions after drag
operations complete
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node less than 50px and verify no history entry is created
- [x] Drag a node more than 50px and verify history entry is created
- [x] Click on inputs inside blocks and verify no history entries are
created
- [x] Test undo/redo functionality works correctly with the threshold
- [x] Verify adding/removing nodes still creates history entries
- [x] Test multiple nodes being dragged simultaneously
This PR addresses the need for better user awareness when building
trigger-based agents. When a user adds webhook/trigger nodes to their
flow, a prominent banner now appears at the bottom of the builder
informing them they're creating a "Trigger Agent" and providing a direct
link to monitor its activity in the Agent Library.
### Changes 🏗️
- **Added TriggerAgentBanner component**: New banner that displays at
the bottom of the builder when the flow contains webhook/trigger nodes
- **Implemented webhook node detection**: Added `hasWebhookNodes` method
to nodeStore that checks if any nodes are of type WEBHOOK or
WEBHOOK_MANUAL
- **Conditional banner display**: Builder now shows TriggerAgentBanner
instead of BuilderActions when webhook nodes are present
- **Dynamic library link**: Banner includes a link to the specific agent
in the library (if found) or defaults to the general library page
- **Integrated with existing flow context**: Uses the current flowID to
fetch the corresponding library agent for proper linking
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add a webhook node to a flow and verify the banner appears
- [x] Remove all webhook nodes and verify the banner disappears
- [x] Test with both WEBHOOK and WEBHOOK_MANUAL node types
- [x] Click the library link and verify it navigates to the correct
agent page
- [x] Test library link fallback when agent is not found in library
- [x] Verify banner styling and positioning at bottom center of builder
This PR fixes issue where edges were not being copied when selecting
multiple nodes with Shift+Select.
### Changes 🏗️
- **Fixed edge copying logic**: Removed the requirement for edges to be
explicitly selected - now automatically includes all edges between
selected nodes when copying
- **Migrated to custom stores**: Refactored copy-paste functionality to
use `useNodeStore` and `useEdgeStore` instead of ReactFlow's built-in
state management
- **Improved type safety**: Replaced generic `Node` and `Edge` types
with `CustomNode` and `CustomEdge` for better type checking
- **Enhanced paste behavior**:
- Deselects existing nodes before pasting to ensure only pasted nodes
are selected
- Uses store methods for adding nodes and edges, ensuring proper state
management
- **Simplified node data handling**: Removed manual clearing of
backend_id, status, and nodeExecutionResult - now handled by the store's
addNode method
- **Added debugging support**: Added console logging for copied data to
aid in troubleshooting
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select multiple nodes using Shift+Select and copy with Ctrl/Cmd+C
- [x] Verify edges between selected nodes are included in the copy
- [x] Paste with Ctrl/Cmd+V and confirm both nodes and edges appear
- [x] Verify pasted elements maintain correct connections
- [x] Confirm only pasted nodes are selected after paste
- [x] Test that pasted nodes have unique IDs
- [x] Verify pasted nodes appear centered in the current viewport
This PR fixes an issue where undefined `customized_name` values were
being included in node metadata, which could cause issues with data
persistence and agent exports. The fix ensures that the
`customized_name` property is only included when it has been explicitly
set by the user.
### Changes 🏗️
- **Fixed metadata serialization**: Modified `getNodeAsGraphNode` to
only include `customized_name` in the metadata when it has been
explicitly set (not undefined)
- **Improved data structure**: Uses object spread syntax to
conditionally include the `customized_name` property, preventing
undefined values from being stored
- **Cleaned up code**: Removed unnecessary TODO comment and improved
code readability
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new agent and verify metadata doesn't include undefined
customized_name
- [x] Customize a block name and verify it's properly saved in metadata
- [x] Export an agent and verify the JSON doesn't contain undefined
customized_name fields
- [x] Import an agent with customized names and verify they are
preserved
- [x] Save and reload an agent to ensure metadata persistence works
correctly
This PR enables users to add agents directly to the builder from search
results and marketplace views. Previously, users had to navigate to
different sections to add agents - now they can do it with a single
click from wherever they find the agent. The change includes proper
loading states, error handling, and success notifications to provide a
smooth user experience.
### Changes 🏗️
- **Added direct agent-to-builder functionality**: Users can now add
agents directly to the builder from search results and marketplace views
- **Created reusable hook `useAddAgentToBuilder`**: Centralized logic
for adding both library and marketplace agents to the builder
- **Enhanced search results interaction**: Added click handlers and
loading states to agent cards in search results
- **Improved marketplace agent addition**: Marketplace agents are now
added to both library and builder with proper feedback
- **Added loading states**: Visual feedback when agents are being added
(loading spinners on cards)
- **Improved error handling**: Added toast notifications for success and
failure cases with descriptive error messages
- **Added Sentry error tracking**: Captures exceptions for better
debugging in production
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Search for agents and add them to builder from search results
- [x] Add marketplace agents which should appear in both library and
builder
- [x] Verify loading states appear during agent addition
- [x] Test error scenarios (network failure, invalid agent)
- [x] Confirm toast notifications appear for both success and error
cases
- [x] Verify builder viewport centers on newly added agent
## Changes 🏗️
Re-arrange the folder structure of the new Library page sub-components
so they are grouped by location:
### Before
<img width="238" height="506" alt="Screenshot 2025-11-27 at 23 45 27"
src="https://github.com/user-attachments/assets/429fda6e-bf74-4d80-9306-028365789ca1"
/>
All components where on a single level, which works fine for simpler
pages without that many sub-components, but on this one which has so
much functionality it ends up messier...
### After
<img width="226" height="517" alt="Screenshot 2025-11-27 at 23 45 46"
src="https://github.com/user-attachments/assets/99c098ea-ff11-4779-bad8-7d524bf91605"
/>
### Imports order
I edited some files, and the linter/formatter automatically sorted
import order as per the lint plugin.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the new library agent page locally and click around
## Summary
This PR implements a comprehensive Human In The Loop (HITL) block that
allows agents to pause execution and wait for human
approval/modification of data before continuing.
https://github.com/user-attachments/assets/c027d731-17d3-494c-85ca-97c3bf33329c
## Key Features
- Added WAITING_FOR_REVIEW status to AgentExecutionStatus enum
- Created PendingHumanReview database table for storing review requests
- Implemented HumanInTheLoopBlock that extracts input data and creates
review entries
- Added API endpoints at /api/executions/review for fetching and
reviewing pending data
- Updated execution manager to properly handle waiting status and resume
after approval
## Frontend Components
- PendingReviewCard for individual review handling
- PendingReviewsList for multiple reviews
- FloatingReviewsPanel for graph builder integration
- Integrated review UI into 3 locations: legacy library, new library,
and graph builder
## Technical Implementation
- Added proper type safety throughout with SafeJson handling
- Optimized database queries using count functions instead of full data
fetching
- Fixed imports to be top-level instead of local
- All formatters and linters pass
## Test plan
- [ ] Test Human In The Loop block creation in graph builder
- [ ] Test block execution pauses and creates pending review
- [ ] Test review UI appears in all 3 locations
- [ ] Test data modification and approval workflow
- [ ] Test rejection workflow
- [ ] Test execution resumes after approval
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added Human-In-The-Loop review workflows to pause executions for human
validation.
* Users can approve or reject pending tasks, optionally editing
submitted data and adding a message.
* New "Waiting for Review" execution status with UI indicators across
run lists, badges, and activity views.
* Review management UI: pending review cards, list view, and a floating
reviews panel for quick access.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Complete migration of all non-test `query_raw` calls to use
`query_raw_with_schema` for proper PostgreSQL schema context handling.
This resolves the marketplace API failures where queries were looking
for unqualified table names.
## Root Cause
Prisma's `query_raw()` doesn't respect the `schema` parameter in
`DATABASE_URL` (`?schema=platform`) for raw SQL queries, causing queries
to fail when looking for unqualified table names in multi-schema
environments.
## Changes Made
### Files Updated
- ✅ **backend/server/v2/store/db.py**: Already updated in previous
commit
- ✅ **backend/server/v2/builder/db.py**: Updated `get_suggested_blocks`
query at line 343
- ✅ **backend/check_store_data.py**: Updated all 4 `query_raw` calls to
use schema-aware queries
- ✅ **backend/check_db.py**: Updated all `query_raw` calls (import
already existed)
### Technical Implementation
- Add import: `from backend.data.db import query_raw_with_schema`
- Replace `prisma.get_client().query_raw()` with
`query_raw_with_schema()`
- Add `{schema_prefix}` placeholder to table references in SQL queries
- Fix f-string template conflicts by using double braces
`{{schema_prefix}}`
### Query Examples
**Before:**
```sql
FROM "StoreAgent"
FROM "AgentNodeExecution" execution
```
**After:**
```sql
FROM {schema_prefix}"StoreAgent"
FROM {schema_prefix}"AgentNodeExecution" execution
```
## Impact
- ✅ All raw SQL queries now properly respect platform schema context
- ✅ Fixes "relation does not exist" errors in multi-schema environments
- ✅ Maintains backward compatibility with public schema deployments
- ✅ Code formatting passes with `poetry run format`
## Testing
- All `query_raw` usages in non-test code successfully migrated
- `query_raw_with_schema` automatically handles schema prefix injection
- Existing query logic unchanged, only schema awareness added
## Before/After
**Before:** GET /api/store/agents → "relation 'StoreAgent' does not
exist"
**After:** GET /api/store/agents → ✅ Returns store agents correctly
Resolves the marketplace API failures and ensures consistent schema
handling across all raw SQL operations.
Co-authored-by: Claude <noreply@anthropic.com>
### 🏗️ Changes
This PR adds a Google Drive Picker field type to enhance the user
experience of existing Google blocks, replacing manual file ID entry
with a visual file picker.
#### Backend Changes
- **Added and types** in :
- Configurable picker field with OAuth scope management
- Support for multiselect, folder selection, and MIME type filtering
- Proper access token handling for file downloads
- **Enhanced Gmail blocks**: Updated attachment fields to use Google
Drive Picker for better UX
- **Enhanced Google Sheets blocks**: Updated spreadsheet selection to
use picker instead of manual ID entry
- **Added utility**: Async file download with virus scanning and 100MB
size limit
#### Frontend Changes
- **Enhanced GoogleDrivePicker component**: Improved UI with folder icon
and multiselect messaging
- **Integrated picker in form renderers**: Auto-renders for fields with
format
- **Added shared GoogleDrivePickerInput component**: Eliminates code
duplication between NodeInputs and RunAgentInputs
- **Added type definitions**: Complete TypeScript support for picker
schemas and responses
#### Key Features
- 🎯 **Visual file selection**: Replace manual Google Drive file ID entry
with intuitive picker
- 📁 **Flexible configuration**: Support for documents, spreadsheets,
folders, and custom MIME types
- 🔒 **Minimal OAuth scopes**: Uses scope for security (only access to
user-selected files)
- ⚡ **Enhanced UX**: Seamless integration in both block configuration
and agent run modals
- 🛡️ **Security**: Virus scanning and file size limits for downloaded
attachments
#### Migration Impact
- **Backward compatible**: Existing blocks continue to work with manual
ID entry
- **Progressive enhancement**: New picker fields provide better UX for
the same functionality
- **No breaking changes**: all existing blocks should be unaffected
This enhancement improves the user experience of Google blocks without
introducing new systems or breaking existing functionality.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x]Test multiple of the new blocks [of note is that the create
spreadsheet block should be not used for now as it uses api not drive
picker]
- [x] chain the blocks together and pass values between them
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
- Added `store_media_file` utility to convert local file paths to Data
URIs for image processing.
- Updated `AIImageCustomizerBlock` to utilize processed images in model
execution, improving compatibility with Replicate API.
- Added optional Aspect ratio input to AIImageCustomizerBlock
This change enhances the image handling capabilities of the AI image
customizer, ensuring that images are properly formatted for external
processing.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Created agent using AI Image Customizer block attached to agent
file input
- [x] Run agent, confirmed block is working
- [x] Confirm block is still working in original direct file upload
setup.
### Testing Results
#### Before (dev cloud):
<img width="836" height="592" alt="image"
src="https://github.com/user-attachments/assets/88c75668-c5c9-44bb-bec5-6554088a0cb7"
/>
#### After (local):
<img width="827" height="587" alt="image"
src="https://github.com/user-attachments/assets/04fea431-70a5-4173-bc84-d354c03d7174"
/>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Preprocesses input images to data URIs and adds an `aspect_ratio`
option, wiring both through to Replicate in `AIImageCustomizerBlock`.
>
> - **Backend**
> - **`backend/blocks/ai_image_customizer.py`**:
> - Preprocesses input images via `store_media_file(...,
return_content=True)` to Data URIs before invoking Replicate.
> - Adds `AspectRatio` enum and `aspect_ratio` input; passed through
`run_model` and included in Replicate input.
> - Updates block test input accordingly.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4116cf80d7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Changes 🏗️
Addresses this code scanning alert
[security/code-scanning/156](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/156)
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] No prototype pollution
This unbreaks the Claude Code and Copilot workflows in our repo.
- Follow-up to #11288
### Changes 🏗️
- Update `node-version` on `actions/setup-node@v4` from v21 to v22
Bumps [cross-env](https://github.com/kentcdodds/cross-env) from 7.0.3 to
10.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kentcdodds/cross-env/releases">cross-env's
releases</a>.</em></p>
<blockquote>
<h2>v10.1.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v10.0.0...v10.1.0">10.1.0</a>
(2025-09-29)</h1>
<h3>Features</h3>
<ul>
<li>add support for default value syntax (<a
href="152ae6a85b">152ae6a</a>)</li>
</ul>
<p>For example:</p>
<pre lang="json"><code>"dev:server": "cross-env wrangler
dev --port ${PORT:-8787}",
</code></pre>
<p>If <code>PORT</code> is already set, use that value, otherwise
fallback to <code>8787</code>.</p>
<p>Learn more about <a
href="https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html">Shell
Parameter Expansion</a></p>
<h2>v10.0.0</h2>
<h1><a
href="https://github.com/kentcdodds/cross-env/compare/v9.0.0...v10.0.0">10.0.0</a>
(2025-07-25)</h1>
<p>TL;DR: You should probably not have to change anything if:</p>
<ul>
<li>You're using a modern maintained version of Node.js (v20+ is
tested)</li>
<li>You're only using the CLI (most of you are as that's the intended
purpose)</li>
</ul>
<p>In this release (which should have been v8 except I had some issues
with automated releases 🙈), I've updated all the things and modernized
the package. This happened in <a
href="https://redirect.github.com/kentcdodds/cross-env/issues/261">#261</a></p>
<p>Was this needed? Not really, but I just thought it'd be fun to
modernize this package.</p>
<p>Here's the highlights of what was done.</p>
<ul>
<li>Replace Jest with Vitest for testing</li>
<li>Convert all source files from .js to .ts with proper TypeScript
types</li>
<li>Use zshy for ESM-only builds (removes CJS support)</li>
<li>Adopt <code>@epic-web/config</code> for TypeScript, ESLint, and
Prettier</li>
<li>Update to Node.js >=20 requirement</li>
<li>Remove kcd-scripts dependency</li>
<li>Add comprehensive e2e tests with GitHub Actions matrix testing</li>
<li>Update GitHub workflow with caching and cross-platform testing</li>
<li>Modernize documentation and remove outdated sections</li>
<li>Update all dependencies to latest versions</li>
<li>Add proper TypeScript declarations and exports</li>
</ul>
<p>The tool maintains its original functionality while being completely
modernized with the latest tooling and best practices</p>
<h3>BREAKING CHANGES</h3>
<ul>
<li>This is a major rewrite that changes the module format from CommonJS
to ESM-only. The package now requires Node.js >=20 and only exports
ESM modules (not relevant in most cases).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="152ae6a85b"><code>152ae6a</code></a>
feat: add support ofr default value syntax</li>
<li><a
href="bd70d1ab25"><code>bd70d1a</code></a>
chore: upgrade zshy</li>
<li><a
href="8e0b190df9"><code>8e0b190</code></a>
chore(ci): get coverage</li>
<li><a
href="8635e80e81"><code>8635e80</code></a>
fix(release): manually release a major version</li>
<li><a
href="3a58f22360"><code>3a58f22</code></a>
chore: fix npmrc registry</li>
<li><a
href="b70bfff5ec"><code>b70bfff</code></a>
chore(ci): add names to steps and workflows</li>
<li><a
href="cc5759dc36"><code>cc5759d</code></a>
fix(release): manually release a major version</li>
<li><a
href="080a859190"><code>080a859</code></a>
chore: remove publish script</li>
<li><a
href="31e5bc70e7"><code>31e5bc7</code></a>
chore(ci): restore built files</li>
<li><a
href="81e9c34f55"><code>81e9c34</code></a>
chore(ci): add back semantic-release</li>
<li>Additional commits viewable in <a
href="https://github.com/kentcdodds/cross-env/compare/v7.0.3...v10.1.0">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
Show spinners on the login/logout buttons while they are being
processed.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login with password: there is a spinner on the button while
logging in
- [x] Logout: there is a spinner on the button while logging out
### Changes 🏗️
<img width="1490" height="432" alt="Screenshot 2025-11-24 at 23 26 12"
src="https://github.com/user-attachments/assets/e5a149f2-7751-4276-9b76-707db7afdd46"
/>
The agent output buttons on the new run page weren't always clickable
due to a missing `z-index`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Open run in new runs page
- [x] Ouput buttons are clickable without issues
## Changes 🏗️
Change the favicon colour in the tab depending on the environment, so
when you have multiple tabs open (production, staging, local... ) it is
clear which map to what.
### Local ( orange )
<img width="257" height="40" alt="Screenshot 2025-11-24 at 22 38 27"
src="https://github.com/user-attachments/assets/705ddf6b-cc4a-498a-ad15-ed2c60f6b397"
/>
### Dev ( green )
<img width="263" height="40" alt="Screenshot 2025-11-24 at 22 45 20"
src="https://github.com/user-attachments/assets/eda3ba16-8646-4032-ad3c-7a8fc4db778c"
/>
### Example
<img width="513" height="41" alt="Screenshot 2025-11-24 at 22 45 09"
src="https://github.com/user-attachments/assets/1a43f860-536a-465e-9c10-a68c5218a58c"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Load the app and the favicon colour matches the env
This PR adds the latest [claude opus
4.5](https://www.anthropic.com/news/claude-opus-4-5) model to the
platform
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test and use the llm to make sure it works
## Changes 🏗️
Add some missing page titles, most importantly the missing one the new
runs page.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app
- [x] Page titles are there
## Changes 🏗️
When the user is logged in and tries to navigate to `/login` or
`/signup` manually, redirect them away to `/marketplace`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Go to `/login` or `/signup`
- [x] You are redirected back into `/marketplace`
This PR adds some of the latest grok models to the platform
``x-ai/grok-4-fast``, ``x-ai/grok-4.1-fast`` and ``ai/grok-code-fast-1``
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test all of the latest grok models to make sure they work and they
do!
<img width="1089" height="714" alt="image"
src="https://github.com/user-attachments/assets/0d1e3984-69e8-432b-982a-b04c16bc4f41"
/>
This PR adds the latest google banana pro image generator and editor to
the platform and fixes up some of the prices for the image generation
models
I asked for ``Generate a image of a dog on a skateboard`` and this is
what i got:
<img width="2048" height="2048" alt="image"
src="https://github.com/user-attachments/assets/9b6c16d8-df8f-4fb6-a009-d6d342f9beb7"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the image generator and image editor block using the latest
google banana pro model and it works
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Added the ability to get all issues for a given project.
### Changes 🏗️
- added api query
- added new models
- added new block that gets all issues for a given project
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have ensured the new block works in dev
- [x] I have ensured the other linear blocks still work
Currently if the smtp server is not configured currently it results in a
platform error. This PR simplifies the error handling
### Changes 🏗️
- removed default value for smtp server host.
- capture common errors and yield them as error
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Checked all tests still pass
This PR introduces several improvements to the new builder experience:
**1. Agent Outputs Feature** ✨
- Implemented a new `AgentOutputs` component that displays execution
outputs from OUTPUT blocks
- Added a slide-out sheet UI to view agent outputs with proper
formatting
- Integrated with existing output renderers from the library view
- Shows output block names, descriptions, and rendered values
- Added beta badge to indicate feature is still experimental
**2. UI/UX Improvements** 🎨
- Fixed graph loading spinner color from violet to neutral zinc for
better consistency
- Adjusted node shadow styling for better visual hierarchy (reduced
shadow when not selected)
- Fixed credential field button spacing to prevent layout overflow
- Improved array editor widget delete button positioning
- Added proper link handling for integration redirects (opens in new
tab)
- Fixed object editor to handle null values gracefully
**3. Performance & State Management** 🚀
- Fixed race condition in run input dialog by awaiting execution before
closing
- Added proper history initialization after graph loads
- Added `outputSchema` to graph store for tracking output blocks
- Fixed search bar to maintain query state properly
- Added automatic fit view on graph load for better initial viewport
**4. Build Actions Bar** 🔧
- Reduced padding for more compact appearance
- Enabled/disabled Agent Outputs button based on presence of output
blocks
- Removed loading icon from manual run button when not executing
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and executed an agent with OUTPUT blocks to verify outputs
display correctly
- [x] Tested output viewer with different data types (text, JSON,
images, etc.)
- [x] Verified credential field layouts don't overflow in constrained
spaces
- [x] Tested array editor delete functionality and button positioning
- [x] Confirmed graph loads with proper fit view and history
initialization
- [x] Tested run input dialog closes only after execution starts
- [x] Verified integration links open in new tabs
- [x] Tested object editor with null values
## Changes 🏗️
<img width="900" height="757" alt="Screenshot 2025-11-19 at 12 18 38"
src="https://github.com/user-attachments/assets/e2c2a4cf-a05e-431e-853d-fb0a68729e54"
/>
When the dev environment is used for a PR preview, show a banner at the
top of the page to indicate this.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a PR preview against Dev
- [x] Check it shows the banner once this is merged
- [x] Or try locally with the env var set
### For configuration changes:
`NEXT_PUBLIC_PREVIEW_STEALING_DEV` is set programmatically via our Infra
CI.
This adds gemini-3-pro-preview from openrouter
https://openrouter.ai/google/gemini-3-pro-preview
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the gemini 3 model in the llm blocks and it works
This PR introduces several performance and user experience improvements
to the new builder, focusing on node positioning, state management
optimizations, and visual enhancements.
The new builder had several issues that impacted developer experience
and runtime performance:
- Inefficient store subscriptions causing unnecessary re-renders
- No intelligent node positioning when adding blocks via clicking
- useEffect dependencies causing potential stale closures
- Width constraints missing on form fields affecting layout consistency
### Changes 🏗️
#### Performance Optimizations
- **Store subscription optimization**: Added `useShallow` from zustand
to prevent unnecessary re-renders in
[NodeContainer](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx)
and
[NodeExecutionBadge](file:///app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeExecutionBadge.tsx)
- **useEffect cleanup**: Split combined useEffects in
[useFlow](file:///app/(platform)/build/hooks/useFlow.ts) for clearer
dependencies and better performance
- **Memoization**: Added `memo` to
[NewControlPanel](file:///app/(platform)/build/components/NewControlPanel/NewControlPanel.tsx)
to prevent unnecessary re-renders
- **Callback optimization**: Wrapped `onDrop` handler in `useCallback`
to prevent recreation on every render
#### UX Improvements
- **Smart node positioning**: Implemented `findFreePosition` algorithm
in [helper.ts](file:///app/(platform)/build/components/helper.ts) that:
- Automatically finds non-overlapping positions for new nodes
- Tries right, left, then below existing nodes
- Falls back to far-right position if no space available
- **Click-to-add blocks**: Added click handlers to blocks that:
- Add the block at an intelligent position
- Automatically pan viewport to center the new node with smooth
animation
- **Visual feedback**: Added loading state with spinner icon for agent
blocks during fetch
- **Form field width**: Added `max-w-[340px]` constraint to prevent
overflow in
[FieldTemplate](file:///components/renderers/input-renderer/templates/FieldTemplate.tsx)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create from scratch and execute an agent with at least 3 blocks
- [x] Test adding blocks via drag-and-drop ensures no overlapping
- [x] Test adding blocks via click positions them intelligently
- [x] Test viewport animation when adding blocks via click
- [x] Import an agent from file upload, and confirm it executes
correctly
- [x] Test loading spinner appears when adding agents from "My Agents"
- [x] Verify performance improvements by checking React DevTools for
reduced re-renders
## Summary
Implement comprehensive parameterization of the activity status
generation system to enable custom prompts for admin analytics
dashboard.
## Changes Made
### Core Function Enhancement (`activity_status_generator.py`)
- **Extract hardcoded prompts to constants**: `DEFAULT_SYSTEM_PROMPT`
and `DEFAULT_USER_PROMPT`
- **Add prompt parameters**: `system_prompt`, `user_prompt` with
defaults to maintain backward compatibility
- **Template substitution system**: User prompt supports
`{{GRAPH_NAME}}` and `{{EXECUTION_DATA}}` placeholders
- **Skip existing flag**: `skip_existing` parameter allows admin to
force regeneration of existing data
- **Maintain manager compatibility**: All existing calls continue to
work with default parameters
### Admin API Enhancement (`execution_analytics_routes.py`)
- **Custom prompt fields**: `system_prompt` and `user_prompt` optional
fields in `ExecutionAnalyticsRequest`
- **Skip existing control**: `skip_existing` boolean flag for admin
regeneration option
- **Template documentation**: Clear documentation of placeholder system
in field descriptions
- **Backward compatibility**: All existing API calls work unchanged
### Template System Design
- **Simple placeholder replacement**: `{{GRAPH_NAME}}` → actual graph
name, `{{EXECUTION_DATA}}` → JSON execution data
- **No dependencies**: Uses simple `string.replace()` for maximum
compatibility
- **JSON safety**: Execution data properly serialized as indented JSON
- **Validation tested**: Template substitution verified to work
correctly
## Key Features
### For Regular Users (Manager Integration)
- **No changes required**: Existing manager.py calls work unchanged
- **Default behavior preserved**: Same prompts and logic as before
- **Feature flag compatibility**: LaunchDarkly integration unchanged
### For Admin Analytics Dashboard
- **Custom system prompts**: Admins can override the AI evaluation
criteria
- **Custom user prompts**: Admins can modify the analysis instructions
with execution data templates
- **Force regeneration**: `skip_existing=False` allows reprocessing
existing executions with new prompts
- **Complete model list**: Access to all LLM models from `llm.py` (70+
models including GPT, Claude, Gemini, etc.)
## Technical Validation
- ✅ Template substitution tested and working
- ✅ Default behavior preserved for existing code
- ✅ Admin API parameter validation working
- ✅ All imports and function signatures correct
- ✅ Backward compatibility maintained
## Use Cases Enabled
- **A/B testing**: Compare different prompt strategies on same execution
data
- **Custom evaluation**: Tailor success criteria for specific graph
types
- **Prompt optimization**: Iterate on prompt design based on admin
feedback
- **Bulk reprocessing**: Regenerate activity status with improved
prompts
## Testing
- Template substitution functionality verified
- Function signatures and imports validated
- Code formatting and linting passed
- Backward compatibility confirmed
## Breaking Changes
None - all existing functionality preserved with default parameters.
## Related Issues
Resolves the requirement to expose prompt customization on the frontend
execution analytics dashboard.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Currently we are capturing block errors via the scope only, this change
captures the error directly.
### Changes 🏗️
- capture the error as well as the scope in the executor manager
- Update the block error message to include additional details
- remove the __str__ function from blockerror as it is no longer needed
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Checked that errors are still captured in dev
The rfjs library was throwing validation errors for our custom format
types `short-text` and `long-text` because these are not standard JSON
Schema formats. This was causing form validation to fail even though
these formats are valid in our application context.
<img width="792" height="85" alt="Screenshot 2025-11-18 at 9 39 08 AM"
src="https://github.com/user-attachments/assets/c75c584f-b991-483c-8779-fc93877028e0"
/>
### Changes 🏗️
- Created a custom validator using `@rjsf/validator-ajv8`'s
`customizeValidator` function
- Added support for `short-text` and `long-text` custom formats that
accept any string value
- Replaced the default validator with our custom validator in the
FormRenderer component
- Disabled strict mode and format validation in AJV options to prevent
validation errors for non-standard formats
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create an agent with input blocks that use short-text format
- [x] Create an agent with input blocks that use long-text format
- [x] Execute the agent and verify no validation errors appear
- [x] Verify that form submission works correctly with both formats
- [x] Test that other standard formats (email, URL, etc.) still work as
expected
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11368
This PR adds the ability to rename nodes directly in the flow editor by
double-clicking on their titles.
https://github.com/user-attachments/assets/1de3fc5c-f859-425e-b4cf-dfb21c3efe3d
### Changes 🏗️
- **Added inline node title editing functionality:**
- Users can now double-click on any node title to enter edit mode
- Custom titles are saved on Enter key or blur, canceled on Escape key
- Custom node names are persisted in the node's metadata as
`customized_name`
- Added tooltip to display full title when text is truncated
- **Modified node data handling:**
- Updated `nodeStore` to include `customized_name` in metadata when
converting nodes
- Modified `helper.ts` to pass metadata (including custom titles) to
custom nodes
- Added metadata property to `CustomNodeData` type
- **UI improvements:**
- Added hover cursor indication for editable titles
- Implemented proper focus management during editing
- Maintained consistent styling between display and edit modes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Double-click on various node types to enter edit mode
- [x] Type new names and press Enter to save
- [x] Press Escape to cancel editing and revert to original name
- [x] Click outside the input field to save changes
- [x] Verify custom names persist after page refresh
- [x] Test with long node names to ensure tooltip appears
- [x] Verify custom names are saved with the graph
- [x] Test editing on all node types (standard, input, output, webhook,
etc.)
## Summary
Fixes critical issue where `GET
/graphs/{graph_id}/executions/{graph_exec_id}` failed for marketplace
agents with "Graph not found" errors due to incorrect version access
checking.
## Root Cause
The endpoint was checking access to the **latest version** of a graph
instead of the **specific version used in the execution**. This broke
marketplace agents when:
1. User executes a marketplace agent (e.g., v3)
2. Graph owner later publishes a new version (e.g., v4)
3. User tries to view execution details
4. **BUG**: Code checked access to latest version (v4) instead of
execution version (v3)
5. If v4 wasn't published to marketplace → access denied → "Graph not
found"
## Original Problematic Code
```python
# routers/v1.py - get_graph_execution (WRONG ORDER)
graph = await graph_db.get_graph(graph_id=graph_id, user_id=user_id) # ❌ Uses LATEST version
if not graph:
raise HTTPException(404, f"Graph #{graph_id} not found")
result = await execution_db.get_graph_execution(...) # Gets execution data
```
## Solution
**Reordered operations** to check access against the **execution's
specific version**:
```python
# NEW CODE (CORRECT ORDER)
result = await execution_db.get_graph_execution(...) # ✅ Get execution FIRST
if not await graph_db.get_graph(
graph_id=result.graph_id,
version=result.graph_version, # ✅ Use execution's version, not latest!
user_id=user_id,
):
raise HTTPException(404, f"Graph #{graph_id} not found")
```
### Key Changes Made
1. **Fixed version access logic** (routers/v1.py:1075-1095):
- Reordered operations to get execution data first
- Check access using `result.graph_version` instead of latest version
- Applied same fix to external API routes
2. **Enhanced `get_graph()` marketplace fallback**
(data/graph.py:919-935):
- Added proper marketplace lookup when user doesn't own the graph
- Supports version-specific marketplace access checking
- Maintains security by only allowing approved, non-deleted listings
3. **Activity status generator fix**
(activity_status_generator.py:139-144):
- Use `skip_access_check=True` for internal system operations
4. **Missing block handling** (data/graph.py:94-103):
- Added `_UnknownBlockBase` placeholder for graceful handling of deleted
blocks
## Example Scenario Fixed
1. **User**: Installs marketplace agent "Blog Writer" v3
2. **Owner**: Later publishes v4 (not to marketplace yet)
3. **User**: Runs the agent (executes v3)
4. **Before**: Viewing execution details fails because code checked v4
access
5. **After**: ✅ Viewing execution details works because code checks v3
access
## Impact
- ✅ **Marketplace agents work correctly**: Users can view execution
details for any marketplace agent version they've used
- ✅ **Backward compatibility**: Existing owned graphs continue working
- ✅ **Security maintained**: Only allows access to versions user
legitimately executed
- ✅ **Version-aware access control**: Proper access checking for
specific versions, not just latest
## Testing
- [x] Marketplace agents: Execution details now accessible for all
executed versions
- [x] Owned graphs: Continue working as before
- [x] Version scenarios: Access control works correctly for specific
versions
- [x] Missing blocks: Graceful handling without errors
**Root issue resolved**: Version mismatch between execution version and
access check version that was breaking marketplace agent execution
viewing.
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves#11390
This unbreaks the last step of the Builder tutorial :)
### Changes 🏗️
- Give `isSaving` time to propagate before calling dependent callback
`saveAndRun`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run through Builder tutorial; Run (with implicit save) should work
at once
## Changes 🏗️
Fixed the logout errors by removing duplicate redirects. `serverLogout`
was calling `redirect("/login")` (which throws `NEXT_REDIRECT`), and
then `useSupabaseStore` was also calling `router.refresh()`, causing
conflicts.
Updated `serverLogout` to return a result object instead of redirecting,
and moved the redirect to the client using `router.push("/login")` after
logout completes. This removes the `NEXT_REDIRECT` error and ensures a
single redirect.
<img width="800" height="706" alt="Screenshot 2025-11-18 at 16 14 54"
src="https://github.com/user-attachments/assets/38e0e55c-f48d-4b25-a07b-d4729e229c70"
/>
Also addressed 401 errors during logout. Hooks like `useCredits` were
still making API calls after logout, causing "Authorization header is
missing" errors. Added a check in `_makeClientRequest` to detect
logout-in-progress and suppress authentication errors during that
window. This prevents console noise and avoids unnecessary error
handling.
<img width="800" height="742" alt="Screenshot 2025-11-18 at 16 14 45"
src="https://github.com/user-attachments/assets/6fb2270a-97a0-4411-9e5a-9b4b52117af3"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Log out of your account
- [x] There are no errors showing up on the browser devtools
This refactor improves developer experience (DX) by creating a more
maintainable and extensible architecture.
The previous `CustomNode` implementation had several issues:
- Code was duplicated across different node types (StandardNodeBlock,
OutputBlock, etc.)
- Poor separation of concerns with all logic in a single component
- Limited flexibility for handling different block types
- Inconsistent handle display logic across different node types
<img width="2133" height="831" alt="Screenshot 2025-11-12 at 9 25 10 PM"
src="https://github.com/user-attachments/assets/02864bba-9ffe-4629-98ab-1c43fa644844"
/>
## Changes 🏗️
- **Refactored CustomNode structure**:
- Extracted reusable components:
[`NodeContainer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeContainer.tsx),
[`NodeHeader`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeHeader.tsx),
[`NodeAdvancedToggle`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeAdvancedToggle.tsx),
[`WebhookDisclaimer`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx)
- Removed `StandardNodeBlock.tsx` and consolidated logic into
[`CustomNode.tsx`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/CustomNode.tsx)
- Moved
[`StickyNoteBlock`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/StickyNoteBlock.tsx)
to components folder for better organization
- **Added BlockUIType-specific logic**:
- Implemented conditional handle display based on block type (INPUT,
WEBHOOK, WEBHOOK_MANUAL blocks don't show handles)
- Added special handling for AGENT blocks with dynamic input/output
schemas
- Added webhook-specific disclaimer component with library agent
integration
- Fixed OUTPUT block's name field to not show input handle
- **Enhanced FormCreator**:
- Added `showHandles` prop for granular control
- Added `className` prop for styling flexibility (used for webhook
opacity)
- **Improved nodeStore**:
- Added `getNodeBlockUIType` method for retrieving node UI types
- **UI/UX improvements**:
- Fixed duplicate gap classes in
[`BuilderActions`](file:///Users/abhi/Documents/AutoGPT/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderActions/BuilderActions.tsx)
- Added proper styling for webhook blocks (disabled state with reduced
opacity)
- Improved field template spacing for specific block types
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create and test a standard node block with input/output handles
- [x] Create and test INPUT block (verify no input handles)
- [x] Create and test OUTPUT block (verify name field has no handle)
- [x] Create and test WEBHOOK block (verify disclaimer appears and form
is disabled)
- [x] Create and test AGENT block with custom schemas
- [x] Create and test sticky note block
- [x] Verify advanced toggle works for all node types
- [x] Test node execution badges display correctly
- [x] Verify node selection highlighting works
Currently when an agent fails validation during a scheduled run, we
raise an error then try again, regardless of why.
This change removed the agent schedule and notifies the user
### Changes 🏗️
- add schedule_id to the GraphExecutionJobArgs
- add agent_name to the GraphExecutionJobArgs
- Delete schedule on GraphValidationError
- Notify the user with a message that include the agent name
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have ensured the scheduler tests work with these changes
This PR removes turnstile from the platform.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test to make sure that turnstile is gone, it will be.
- [x] Test logging in with out turnstile to make sure it still works
- [x] Test registering a new account with out turnstile and it works
## Summary
- Replaced the question mark icon with explicit "Give Feedback" text in
the feedback button
- Applied consistent styling to match the "Tutorial" button
- Removed QuestionMarkCircledIcon dependency from TallyPopup component
## Motivation
Users reported not knowing what the question mark icon was for, which
prevented them from discovering the feedback feature. Making the button
text-based and explicit removes this confusion.
## Changes
- Removed `QuestionMarkCircledIcon` import and icon element
- Changed button to display only "Give Feedback" text
- Added consistent styling (height, rounded corners, background color)
to match Tutorial button
- Button text can wrap to two lines if needed for better readability
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Check the UI to see that the question mark on the tally button has
been replaced with "Give Feedback"
Before
<img width="618" height="198" alt="image"
src="https://github.com/user-attachments/assets/0d4803eb-9a05-4a43-aaff-cc43b6d0cda4"
/>
After
<img width="298" height="126" alt="image"
src="https://github.com/user-attachments/assets/c1e1c3b5-94b4-4ad9-87e9-a0feca1143e3"
/>
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Summary
Adds a non-blocking warning banner to Login and Sign Up pages that
alerts mobile users about potential limitations in the mobile
experience.
## Changes
- Created `MobileWarningBanner` component in `src/components/auth/`
- Integrated banner into Login page (`/login`)
- Integrated banner into Sign Up page (`/signup`)
- Banner displays only on mobile devices (viewports < 768px)
- Uses existing `useBreakpoint` hook for responsive detection
## Design Details
- **Position**: Appears below the login/signup card (after the bottom
"Sign up"/"Log in" links)
- **Style**: Amber-themed warning banner with DeviceMobile icon
- **Message**:
- Title: "Heads up: AutoGPT works best on desktop"
- Description: "Some features may be limited on mobile. For the best
experience, consider switching to a desktop."
- **Behavior**: Non-blocking, no user interaction required
<img width="342" height="81" alt="image"
src="https://github.com/user-attachments/assets/b6584299-b388-4d8d-b951-02bd95915566"
/>
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified banner appears on mobile viewports (< 768px)
- [x] Verified banner is hidden on desktop viewports (≥ 768px)
- [x] Tested on Login page
- [x] Tested on Sign Up page
<img width="342" height="758" alt="image"
src="https://github.com/user-attachments/assets/077b3e0a-ab9c-41c7-83b7-7ee80a3396fd"
/>
<img width="342" height="759" alt="image"
src="https://github.com/user-attachments/assets/77a64b28-748b-4d97-bd7c-67c55e5e9f22"
/>
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Changes 🏗️
### Issue 1: login/signup redirect conflict
There are 2 hooks, both on the login and signup pages, that attempt to
call `router.push` once a user logs in or is created.
The main offender seems to be this hook:
```tsx
useEffect(() => {
if (user) router.push("/");
}, [user]);
```
Which is in place on both pages to prevent logged-in users from
accessing `/login` or `/signup`. What happens is when a user signs up or
logs in, if they need onboarding, there is a `router.push` down the line
to redirect them there, which conflicts with the one done in this hook.
**Solution**
I moved the logic from that hook to the `middleware.ts`, which is a
better place for it... It won't conflict anymore with onboarding
redirects done in those pages
### Issue 2: onboarding server redirects
Potential race condition: both the server component and the client
`<OnboardingProvider />` perform redirects. The server component
redirects happen first, but if onboarding state changes after mount, the
provider can redirect again, causing rapid mount/unmount cycles.
**Solution**
Make all onboarding redirects central in `/onboarding` which is now a
client component do in client redirects only and displaying a spinner
while it does so.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested locally login/logout/signup and trying to access `/login`
and `/signup` being logged in
Source maps aren't being uploaded to Sentry, so debugging errors in
production is really hard.
### Changes 🏗️
- Fix config so source maps are found and uploaded to Sentry
- Disable deleting source maps after upload (so they are available in
the browser)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested locally
<!-- Clearly explain the need for these changes: -->
This PR enhances the visual feedback in the flow editor by adding
animated "beads" that travel along edges during execution. This provides
users with clear, real-time visualization of data flow and execution
progress through the graph, making it easier to understand which
connections are active and track execution state.
https://github.com/user-attachments/assets/df4a4650-8192-403f-a200-15f6af95e384
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Added new edge data types and structure:**
- Added `CustomEdgeData` type with `isStatic`, `beadUp`, `beadDown`, and
`beadData` properties
- Created `CustomEdge` type extending XYEdge with custom data
- **Implemented bead animation components:**
- Added `JSBeads.tsx` - JavaScript-based animation component with
real-time updates
- Added `SVGBeads.tsx` - SVG-based animation component (for future
consideration)
- Added helper functions for path calculations and bead positioning
- **Updated edge rendering:**
- Modified `CustomEdge` component to display beads during execution
- Added static edge styling with dashed lines (`stroke-dasharray: 6`)
- Improved visual hierarchy with different stroke styles for
selected/unselected states
- **Refactored edge management:**
- Converted `edgeStore` from using `Connection` type to `CustomEdge`
type
- Added `updateEdgeBeads` and `resetEdgeBeads` methods for bead state
management
- Updated `copyPasteStore` to work with new edge structure
- **Added support for static outputs:**
- Added `staticOutput` property to `CustomNodeData`
- Static edges show continuous bead animation while regular edges show
one-time animation
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a flow with multiple blocks and verify beads animate along
edges during execution
- [x] Test that beads increment when execution starts (`beadUp`) and
decrement when completed (`beadDown`)
- [x] Verify static edges display with dashed lines and continuous
animation
- [x] Confirm copy/paste operations preserve edge data and bead states
- [x] Test edge animations performance with complex graphs (10+ nodes)
- [x] Verify bead animations complete properly before disappearing
- [x] Test that multiple beads can animate on the same edge for
concurrent executions
- [x] Verify edge selection/deletion still works with new visualization
- [x] Test that bead state resets properly when starting new executions
## Changes 🏗️
- Clear backend_id when pasting blocks to prevent duplicate ID errors
- Add copy/paste functionality to new FlowEditor
- Ensure pasted blocks use newly generated UUIDs when saving
Fixes issue where copying and pasting blocks would fail with 'Unique
constraint failed' error because the old backend_id was being reused
instead of the new node ID.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add any block to the builder, save and run the graph and wait for
it to finish, then copy and paste the first block and paste it, try to
use it and it should now work and not have any issues/errors
https://github.com/user-attachments/assets/c24f9a9a-8e4f-4988-8731-cddc34a0da13
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Resolves#11305
### Changes 🏗️
Make `AIListGeneratorBlock` more reliable:
- Leverage `AIStructuredResponseGenerator`'s robust
prompt/retry/validate logic
- Use JSON format instead of Python list format
- Add `force_json_output` toggle
- Fix output instructions in prompt (only string values allowed)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Works without `force_json_output`
- [x] Works with `force_json_output`
- [x] Retry mechanism works as intended
Use WebSocket notifications from the backend to display confetti.
### Changes 🏗️
- Send WebSocket notifications to the browser when new onboarding steps
are completed
- Handle WebSocket notifications events in the Wallet and use them
instead of frontend-based logic to play confetti (fixes confetti
appearing on every refresh)
- Scroll to newly completed tasks when wallet opens just before confetti
plays
- Fix: make `Run again` button complete `RE_RUN_AGENT` task
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Confetti are displayed when previously uncompleted tasks are
completed
- [x] Confetti do not appear on page refresh
- [x] Wallet scrolls on open before confetti is displayed
- [x] `Run again` button completes `RE_RUN_AGENT` task
This PR fixes a flaky test issue in the signup flow where Playwright's
strict mode was failing due to duplicate heading elements on the
marketplace page.
### Problem
The test was failing intermittently with the following error:
```
Error: strict mode violation: getByText('Bringing you AI agents designed by thinkers from around the world') resolved to 2 elements
```
This occurred because the marketplace page contains two identical `<h3>`
elements with the same text, causing Playwright's strict mode to throw
an error when trying to select a single element.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run E2E tests locally multiple times to ensure no flakiness
- [x] Check CI pipeline runs successfully
Fixes two related bugs in the agent scheduling UI that caused confusion
for users setting up recurring schedules:
1. **"on day nan of every month" display bug**: When scheduling an agent
to repeat every N days (e.g., "every 2 days"), the schedule info panel
incorrectly displayed "on day nan of every month" instead of the correct
"Every N days at HH:MM" format.
2. **Confusing time picker for hourly intervals**: When setting up a
schedule with "every N hours", the UI displayed a time picker labeled
"at 9 o'clock" which was confusing because the time setting is ignored
for hourly intervals. Users were unclear about what this setting meant
or if it had any effect.
### Changes 🏗️
**Fixed `humanizeCronExpression` function**
(`autogpt_platform/frontend/src/lib/cron-expression-utils.ts`):
- Reordered cron expression parsing logic to handle day intervals
(`*/N`) before monthly checks
- Added `!dayOfMonth.startsWith("*/")` guard to monthly and yearly
checks to prevent misinterpreting day intervals as monthly day lists
- This ensures expressions like `0 9 */2 * *` (every 2 days at 9:00) are
correctly displayed as "Every 2 days at 09:00" instead of "on day nan of
every month"
**Updated `CronScheduler` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/CronScheduler.tsx`):
- Hide `TimeAt` component for custom intervals with unit "hours" (time
is ignored for hourly intervals)
- Pass context-aware label to `TimeAt`: "Starting at" for custom day
intervals, "At" for other frequencies
- This clarifies that the time setting is the starting time for day
intervals and removes confusion for hourly intervals
**Enhanced `TimeAt` component**
(`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/ScheduleAgentModal/components/CronScheduler/TimeAt.tsx`):
- Added optional `label` prop (defaults to "At") to allow context-aware
labeling
- Component now displays "Starting at" when used with custom day
intervals for better clarity
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Schedule an agent with "Custom" frequency, "Every 2 days" interval
- verify it displays as "Every 2 days at HH:MM" in the schedule info
panel (not "on day nan of every month")
- [x] Schedule an agent with "Monthly" frequency - verify it displays
correctly (e.g., "On day 1, 15 of every month at HH:MM")
<img width="845" height="388" alt="image"
src="https://github.com/user-attachments/assets/02ed0b73-bf5e-48fd-a7b0-6f4d4687eb13"
/>
<img width="839" height="374" alt="image"
src="https://github.com/user-attachments/assets/be62eee2-3fdd-4b20-aecf-669c3c6c6fb2"
/>
- Resolves#11345
### Changes 🏗️
- Move tool use routing logic from frontend to backend: routing info was
being baked into graph links by the frontend, inconsistently, causing
issues
- Rework tool use routing to use target node ID instead of target block
name
- Add a bit of magic to `NodeOutputs` component to show tool node title
instead of ID
DX:
- Removed `build` from `.prettierignore` -> re-enable formatting for
builder components
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Use SDM block in a graph; verify it works
- [x] Use SDM block with agent executor block as tool; verify it works
- Tests for `parse_execution_output` pass (checked by CI)
## Summary
This PR fixes an issue where multiple keyboard save event listeners were
being registered when the same save hook was used in multiple
components, causing the graph to be saved multiple times (3x) when using
Ctrl/Cmd+S.
## Changes
- **Created a centralized `useSaveGraph` hook** in
`/hooks/useSaveGraph.ts` that encapsulates all graph saving logic
- **Refactored `useNewSaveControl`** to use the new centralized hook
instead of duplicating save logic
- **Updated `useRunGraph` and `useScheduleGraph`** to use the
centralized `useSaveGraph` hook directly
- **Simplified the save control component** by removing redundant logic
and using cleaner naming conventions
## Problem
The previous implementation had the save logic duplicated in
`useNewSaveControl`, and when this hook was used in multiple places
(NewSaveControl component, RunGraph, ScheduleGraph), each instance would
register its own keyboard event listener for Ctrl/Cmd+S. This caused:
- Multiple save requests being sent simultaneously
- "Unique constraint failed on the fields: ('id', 'version')" errors
from the backend
- Poor performance due to unnecessary re-renders
## Solution
By centralizing the save logic in a dedicated `useSaveGraph` hook:
- Save logic is now in one place, making it easier to maintain
- Components can use the save functionality without registering
duplicate event listeners
- The keyboard shortcut listener is only registered once in the
`useNewSaveControl` hook
- Other components (RunGraph, ScheduleGraph) can call `saveGraph`
directly without side effects
## Testing
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified Ctrl/Cmd+S saves the graph only once
- [x] Tested save functionality from Save Control popup
- [x] Confirmed Run Graph and Schedule Graph still save before execution
- [x] Verified no duplicate save requests in network tab
- [x] Checked that save toast notifications appear correctly
We need a way to differentiate between serious errors that cause on call
alerts and block errors.
This PR address this need by ensuring all errors that occur during
execution of a block are of Subtype BlockError
### Changes 🏗️
- Introduced BlockErrors and its subtypes
- Updated current errors that are emitted by blocks to use BlockError
- Update executor manager, to errors emitted when running a block that
are not of type BlockError to BlockUnknownError
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] checked tests still work
- [x] Ensured block error message is readable and useful
In this PR, I’ve added drag-and-drop functionality to the new builder
using built-in HTML drag-and-drop.
https://github.com/user-attachments/assets/b27c281e-6216-4131-9a89-e10b0dd56a8f
### Changes
- Added ReactFlowProvider to manage flow state in BuilderPage and Flow
components.
- Implemented drag-and-drop support for blocks in the NewControlPanel,
allowing users to drag blocks from the menu and drop them onto the
canvas.
- Enhanced the Block component to handle drag events and provide visual
feedback during dragging.
- Updated useFlow hook to include onDragOver and onDrop handlers for
managing block placement.
- Adjusted nodeStore to accept position parameters for added blocks,
improving placement accuracy.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I’ve tried dragging and dropping multiple blocks, and it works
perfectly as shown in the video.
- depends one https://github.com/Significant-Gravitas/AutoGPT/pull/11339
In this PR, I’ve added a dropdown menu to the custom node. This allows
you to delete a node, copy a node, and if the node is an agent node, you
can also navigate to that specific agent.
<img width="633" height="403" alt="Screenshot 2025-11-08 at 7 24 38 PM"
src="https://github.com/user-attachments/assets/89dd2906-95f5-40a5-82d1-de05075e4f30"
/>
###Changes
- Added context menu to custom nodes with copy, delete, and open agent
options
- Added performance optimization with memo for custom edge
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All three buttons are working perfectly.
- [x] The “Go to graph” option is only visible in the sub-graph node.
In the new builder, I’ve added a copy-paste functionality using the
keyboard.
https://github.com/user-attachments/assets/3106ae86-3f47-4807-a598-9c0b166eaae9
### Changes 🏗️
- Added useCopyPasteKeyboard hook for handling keyboard shortcuts
- Created new copyPasteStore for state management
- Implemented performance optimizations (memo on CustomEdge)
- Updated nodeStore and edgeStore to support the functionality
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The copy-paste functionality is working correctly as you can see
in the video.
## Summary
Fix admin impersonation not working for graph execution requests that
are server-side rendered.
## Problem
- Build page uses SSR, so API calls go through _makeServerRequest
instead of _makeClientRequest
- Server-side requests cannot access sessionStorage where impersonation
ID is stored
- Graph execution requests were missing X-Act-As-User-Id header
## Simple Solution
1. **Store impersonation in cookie** (useAdminImpersonation.ts):
- Set/clear cookie alongside sessionStorage for server access
2. **Read cookie on server** (_makeServerRequest in client.ts):
- Check for impersonation cookie using Next.js cookies() API
- Create fake Request with X-Act-As-User-Id header
- Pass to existing makeAuthenticatedRequest flow
## Changes Made
- useAdminImpersonation.ts: 2 lines to set/clear cookie
- client.ts: 1 method to read cookie and create header
- No changes to existing proxy/header/helpers logic
## Result
- ✅ Graph execution requests now include impersonation header
- ✅ Works for both client-side and server-side rendered requests
- ✅ Minimal changes, leverages existing header forwarding logic
- ✅ Backward compatible with all existing functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
This change uses [Zustand](https://github.com/pmndrs/zustand) (a
lightweight state management library) to centralize authentication state
across the app. Previously, each component mounting `useSupabase()`
would create its own local state, causing duplicate API calls and
inconsistent user data. Now, user state is cached globally with Zustand
- when multiple components need auth data, they share the same cached
state instead of each fetching separately. This reduces server load and
improves app responsiveness.
**File structure:**
```
src/lib/supabase/hooks/
├── useSupabase.ts # React hook interface (modified)
├── useSupabaseStore.ts # Zustand state management (new)
└── helpers.ts # Pure business logic (new)
```
**What was extracted to helpers:**
- `ensureSupabaseClient()` - Singleton client initialization
- `fetchUser()` - User fetching with error handling
- `validateSession()` - Session validation logic
- `refreshSession()` - Session refresh logic
- `handleStorageEvent()` - Cross-tab logout handling
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified no TypeScript errors in modified files
- [x] Tested login flow works correctly
- [x] Tested logout flow works correctly
- [x] Verified session validation on tab focus/visibility
- [x] Tested cross-tab logout synchronization
- [x] Confirmed WebSocket disconnection on logout
This PR adds a comprehensive execution analytics admin endpoint that
generates AI-powered activity summaries and correctness scores for graph
executions, with proper feature flag bypass for admin use.
### Changes 🏗️
**Backend Changes:**
- Added admin endpoint: `/api/executions/admin/execution_analytics`
- Implemented feature flag bypass with `skip_feature_flag=True`
parameter for admin operations
- Fixed async database client usage (`get_db_async_client`) to resolve
async/await errors
- Added batch processing with configurable size limits to handle large
datasets
- Comprehensive error handling and logging for troubleshooting
- Renamed entire feature from "Activity Backfill" to "Execution
Analytics" for clarity
**Frontend Changes:**
- Created clean admin UI for execution analytics generation at
`/admin/execution-analytics`
- Built form with graph ID input, model selection dropdown, and optional
filters
- Implemented results table with status badges and detailed execution
information
- Added CSV export functionality for analytics results
- Integrated with generated TypeScript API client for proper
authentication
- Added proper error handling with toast notifications and loading
states
**Database & API:**
- Fixed critical async/await issue by switching from sync to async
database client
- Updated router configuration and endpoint naming for consistency
- Generated proper TypeScript types and API client integration
- Applied feature flag filtering at API level while bypassing for admin
operations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
- [x] Admin can access execution analytics page at
`/admin/execution-analytics`
- [x] Form validation works correctly (requires graph ID, validates
inputs)
- [x] API endpoint `/api/executions/admin/execution_analytics` responds
correctly
- [x] Authentication works properly through generated API client
- [x] Analytics generation works with different LLM models (gpt-4o-mini,
gpt-4o, etc.)
- [x] Results display correctly with appropriate status badges
(success/failed/skipped)
- [x] CSV export functionality downloads correct data
- [x] Error handling displays appropriate toast messages
- [x] Feature flag bypass works for admin users (generates analytics
regardless of user flags)
- [x] Batch processing handles multiple executions correctly
- [x] Loading states show proper feedback during processing
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] No configuration changes required for this feature
**Related to:** PR #11325 (base correctness score functionality)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
This enables real time notifications from backend to browser via
WebSocket using Redis bus for moving notifications from REST process to
WebSocket process.
This is needed for (follow-up) backend-completion of onboarding tasks
with instant notifications.
### Changes 🏗️
- Add new `AsyncRedisNotificationEventBus` to enable publishing
notifications to the Redis event bus
- Consume notifications in `ws_api.py` similarly to execution events and
send them via WebSocket
- Store WebSocket user connections in `ConnectionManager`
- Add relevant tests and types
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Notifications are sent to the frontend
## Changes 🏗️
Allow dynamic URLs in the CORS config, to match them via regex. This
helps because currently we have Front-end preview deployments which are
isolated ( _nice they don't pollute or overrride other domains_ ) like:
```
https://autogpt-git-{branch_name}-{commit}-significant-gravitas.vercel.app
```
The Front-end builds and works there, but as soon as you login, any API
requests to endpoints that need auth will fail due to CORS, given our
current CORS config does not support dynamically generated domains.
### Changes
After these changes we can specify dynamic domains to be allowed under
CORS. I also made `localhost` disabled if the API is in production for
safety...
### Before
```yml
cors:
allowOrigin: "https://dev-builder.agpt.co" # could only specify full URL strings, not dyamic ones
```
### After
```yml
cors:
allowOrigins:
- "https://dev-builder.agpt.co"
- "regex:https://autogpt-git-[a-z0-9-]+\\.vercel\\.app" # dynamic domains supported via regex
```
### Files
- add `build_cors_params` utility to parse literal/regex origins and
block localhost in production (`backend/server/utils/cors.py`)
- apply the helper in both `AgentServer` and `WebsocketServer` so CORS
logic and validations remain consistent
- add reusable `override_config` testing helper and update existing
WebSocket tests to cover the shared CORS behavior
- introduce targeted unit tests for the new CORS helper
(`backend/server/utils/cors_test.py`)
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will know once we made the origin config changes on infra and
test with this...
## Changes 🏗️
Make sure we can login on preview deployments generated by Vercel to
test Front-end changes. As of now, the Cloudflare CAPTCHA verification
fails, we don't need to have it active there.
### Minor improvements
<img width="1599" height="755" alt="Screenshot 2025-11-06 at 16 18 10"
src="https://github.com/user-attachments/assets/0a3fb1f3-2d4d-49fe-885f-10f141dc0ce4"
/>
Prevent the following build error:
```
15:58:01.507
at j (.next/server/app/(no-navbar)/onboarding/reset/page.js:1:5125)
15:58:01.507
at <unknown> (.next/server/chunks/5826.js:2:14221)
15:58:01.507
at b.handleCallbackErrors (.next/server/chunks/5826.js:43:43068)
15:58:01.507
at <unknown> (.next/server/chunks/5826.js:2:14194) {
15:58:01.507
description: "Route /onboarding/reset couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error",
15:58:01.507
digest: 'DYNAMIC_SERVER_USAGE'
15:58:01.507
}
```
by making the reset onboarding route a client one. I made a new component, `<LoadingSpinner />`, and that page will show it while onboarding it's being reset.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] You can login/signup on the app and use it in the preview URL generated by Vercel
These models have become deprecated
- deepseek-r1-distill-llama-70b
- gemma2-9b-it
- llama3-70b-8192
- llama3-8b-8192
- google/gemini-flash-1.5
I have removed them and setup a migration, the migration is to convert
all the old versions of the model to new versions, the model changes
will happen like so
- llama3-70b-8192 → llama-3.3-70b-versatile
- llama3-8b-8192 → llama-3.1-8b-instant
- google/gemini-flash-1.5 → google/gemini-2.5-flash
- deepseek-r1-distill-llama-70b → gpt-5-chat-latest
- gemma2-9b-it → gpt-5-chat-latest
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Check to see if old models where removed
- [x] Check to see if migration worked and converted old models to new
one in graph
### Changes 🏗️
- Replaces `isSupersetOf` and `difference` Set operations with
backwards-compatible implementations using `Array.from` and
`every`/`filter` methods.
- This ensures compatibility with older JavaScript environments that may
not fully support modern Set operations.
Fixes
[BUILDER-451](https://sentry.io/organizations/significant-gravitas/issues/6952591149/).
The issue was that: ES2024 Set methods `isSupersetOf` and `difference`
are unsupported in iOS Safari 16.7, causing a TypeError during component
render.
This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 2032240
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6952591149/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] Test on iOS Safari 16.7 to ensure no TypeError occurs during
component render.
- [ ] Verify that the replaced `isSupersetOf` and `difference`
implementations function correctly in other supported browsers.
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Implements a cookie consent banner and settings modal for GDPR
compliance, allowing users to manage preferences for analytics and
monitoring cookies. Integrates consent checks with Sentry, Vercel
Analytics, and Google Analytics, ensuring tracking is only enabled with
user permission. Refactors dialog components for improved layout and
adds consent management utilities and hooks.
#### For code changes:
- [x] Banner appears at bottom of page on first visit with rounded
corners and proper spacing (40px margins)
- [x] Banner shows three buttons: "Reject All", "Settings", and "Accept
All"
- [x] Clicking "Accept All" hides banner and enables
analytics/monitoring
- [x] Clicking "Reject All" hides banner and keeps analytics/monitoring
disabled
- [x] Banner does not reappear after consent is given (check
localStorage: `autogpt_cookie_consent`)
**Cookie Settings Modal:**
- [x] Clicking "Settings" button opens the Cookie Settings modal
- [x] Modal displays three categories: Essential Cookies (always
active), Analytics & Performance (toggle), Error Monitoring & Session
Replay (toggle)
- [x] Clicking "Save Preferences" saves custom settings and closes modal
- [x] Clicking "Accept All" enables all cookies and closes modal
- [x] Clicking "Reject All" disables optional cookies and closes modal
- [x] Modal can be closed with X button or clicking outside
**Consent Persistence:**
- [x] Refresh page after giving consent - banner should not reappear
- [x] Clear localStorage and refresh - banner should reappear
- [x] Consent choices persist across browser sessions
<img width="1123" height="126" alt="image"
src="https://github.com/user-attachments/assets/7425efab-b5cc-4449-802d-0e12bd65053b"
/>
<img width="1124" height="372" alt="image"
src="https://github.com/user-attachments/assets/2f28919a-97e8-44f5-9021-70d3836bb996"
/>
<!-- Clearly explain the need for these changes: -->
Fixes
[BUILDER-48G](https://sentry.io/organizations/significant-gravitas/issues/6960009111/).
The issue was that: Asynchronous API update scheduled via
`setTimeout(0)` in `OnboardingProvider` creates a race condition,
causing React-DOM's portal cleanup (`removeChild`) to fail during
concurrent component unmounting.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Prevents state updates and API calls after the `OnboardingProvider`
component has been unmounted.
- Introduces a `isMounted` ref to track the component's mount status.
- Uses a `pendingUpdatesRef` to manage and cancel pending API update
promises on unmount, preventing memory leaks and errors.
- Ensures that API update errors are only logged if the component is
still mounted.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2058387
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6960009111/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding works and does not throw errors when unmounted
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
- Resolves#11314
### Changes 🏗️
- Change "Download agent" CTA button to action link at bottom of summary
agent info
- Move agent ratings above CTA buttons to prevent it from jumping on
page load
- Update vertical spacings to more closely match designs

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Designer approves of new look
- [x] Tests pass
### Changes 🏗️
Fixes
[BUILDER-4HJ](https://sentry.io/organizations/significant-gravitas/issues/6979388537/).
The issue was that: Server-side rendering failed to retrieve the
Supabase access token, causing authenticated API calls to omit the
Authorization header.
- Ensures that the agent version is fetched only when
`creator_agent.active_version_id` exists and the status code is 200.
- Enables the `prefetchGetV2GetAgentByStoreIdQuery` query when
`creator_agent.active_version_id` exists.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2234004
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6979388537/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Loading marketplace works...
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
<img width="800" height="800" alt="Screenshot 2025-11-04 at 23 05 22"
src="https://github.com/user-attachments/assets/ecb3f442-8f1b-4a80-a6c9-0c4b6d5e0427"
/>
New `<Button variant="link" />` for when you need to render an HTML
`<button>` but with our link styles.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Looks good
## Summary
Add AI-generated correctness score field to execution activity status
generation to provide quantitative assessment of how well executions
achieved their intended purpose.
New page:
<img width="1000" height="229" alt="image"
src="https://github.com/user-attachments/assets/5cb907cf-5bc7-4b96-8128-8eecccde9960"
/>
Old page:
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/ece0dfab-1e50-4121-9985-d585f7fcd4d2"
/>
## What Changed
- Added `correctness_score` field (float 0.0-1.0) to
`GraphExecutionStats` model
- **REFACTORED**: Removed duplicate `llm_utils.py` and reused existing
`AIStructuredResponseGeneratorBlock` logic
- Updated activity status generator to use structured responses instead
of plain text
- Modified prompts to include correctness assessment with 5-tier scoring
system:
- 0.0-0.2: Failure
- 0.2-0.4: Poor
- 0.4-0.6: Partial Success
- 0.6-0.8: Mostly Successful
- 0.8-1.0: Success
- Updated manager.py to extract and set both activity_status and
correctness_score
- Fixed tests to work with existing structured response interface
## Technical Details
- **Code Reuse**: Eliminated duplication by using existing
`AIStructuredResponseGeneratorBlock` instead of creating new LLM
utilities
- Added JSON validation with retry logic for malformed responses
- Maintained backward compatibility for existing activity status
functionality
- Score is clamped to valid 0.0-1.0 range and validated
- All type errors resolved and linting passes
## Test Plan
- [x] All existing tests pass with refactored structure
- [x] Structured LLM call functionality tested with success and error
cases
- [x] Activity status generation tested with various execution scenarios
- [x] Integration tests verify both fields are properly set in execution
stats
- [x] No code duplication - reuses existing block logic
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
BREAKING CHANGE: Removed deprecated use_auto_prompt field from Input
schema. Existing workflows using this field will need to be updated to
use the type field set to "auto" instead.
## Summary of Changes 📝
This PR comprehensively updates all Exa search blocks to match the
latest Exa API specification and adds significant new functionality
through the Websets API integration.
### Core API Updates 🔄
- **Migration to Exa SDK**: Replaced manual API calls with the official
`exa_py` AsyncExa SDK across all blocks for better reliability and
maintainability
- **Removed deprecated fields**: Eliminated
`use_auto_prompt`/`useAutoprompt` field (breaking change)
- **Fixed incomplete field definitions**: Corrected `user_location`
field definition
- **Added new input fields**: Added `moderation` and `context` fields
for enhanced content filtering
### Enhanced Content Settings 🛠️
- **Text field improvements**: Support both boolean and advanced object
configurations
- **New content options**:
- Added `livecrawl` settings (never, fallback, always, preferred)
- Added `subpages` support for deeper content retrieval
- Added `extras` settings for links and images
- Added `context` settings for additional contextual information
- **Updated settings**: Enhanced `highlight` and `summary`
configurations with new query and schema options
### Comprehensive Cost Tracking 💰
- Added detailed cost tracking models:
- `CostDollars` for monetary costs
- `CostCredits` for API credit tracking
- `CostDuration` for time-based costs
- New output fields: `request_id`, `resolved_search_type`,
`cost_dollars`
- Improved response handling to conditionally yield fields based on
availability
### New Websets API Integration 🚀
Added eight new specialized blocks for Exa's Websets API:
- **`websets.py`**: Core webset management (create, get, list, delete)
- **`websets_search.py`**: Search operations within websets
- **`websets_items.py`**: Individual item management (add, get, update,
delete)
- **`websets_enrichment.py`**: Data enrichment operations
- **`websets_import_export.py`**: Bulk import/export functionality
- **`websets_monitor.py`**: Monitor and track webset changes
- **`websets_polling.py`**: Poll for updates and changes
### New Special-Purpose Blocks 🎯
- **`code_context.py`**: Code search capabilities for finding relevant
code snippets from open source repositories, documentation, and Stack
Overflow
- **`research.py`**: Asynchronous research capabilities that explore the
web, gather sources, synthesize findings, and return structured results
with citations
### Code Organization Improvements 📁
- **Removed legacy code**: Deleted `model.py` file containing deprecated
API models
- **Centralized helpers**: Consolidated shared models and utilities in
`helpers.py`
- **Improved modularity**: Each webset operation is now in its own
dedicated file
### Other Changes 🔧
- Updated `.gitignore` for better development workflow
- Updated `CLAUDE.md` with project-specific instructions
- Updated documentation in `docs/content/platform/new_blocks.md` with
error handling, data models, and file input guidelines
- Improved webhook block implementations with SDK integration
### Files Changed 📂
- **Modified (11 files)**:
- `.gitignore`
- `autogpt_platform/CLAUDE.md`
- `autogpt_platform/backend/backend/blocks/exa/answers.py`
- `autogpt_platform/backend/backend/blocks/exa/contents.py`
- `autogpt_platform/backend/backend/blocks/exa/helpers.py`
- `autogpt_platform/backend/backend/blocks/exa/search.py`
- `autogpt_platform/backend/backend/blocks/exa/similar.py`
- `autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py`
- `autogpt_platform/backend/backend/blocks/exa/websets.py`
- `docs/content/platform/new_blocks.md`
- **Added (8 files)**:
- `autogpt_platform/backend/backend/blocks/exa/code_context.py`
- `autogpt_platform/backend/backend/blocks/exa/research.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_import_export.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_items.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_monitor.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_polling.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_search.py`
- **Deleted (1 file)**:
- `autogpt_platform/backend/backend/blocks/exa/model.py`
### Migration Guide 🚦
For users with existing workflows using the deprecated `use_auto_prompt`
field:
1. Remove the `use_auto_prompt` field from your input configuration
2. Set the `type` field to `ExaSearchTypes.AUTO` (or "auto" in JSON) to
achieve the same behavior
3. Review any custom content settings as the structure has been enhanced
### Testing Recommendations ✅
- Test existing workflows to ensure they handle the breaking change
- Verify cost tracking fields are properly returned
- Test new content settings options (livecrawl, subpages, extras,
context)
- Validate websets functionality if using the new Websets API blocks
🤖 Generated with [Claude Code](https://claude.com/claude-code)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] made + ran a test agent for the blocks and flows between them
[Exa
Tests_v44.json](https://github.com/user-attachments/files/23226143/Exa.Tests_v44.json)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Migrates Exa blocks to AsyncExa SDK, adds comprehensive
Websets/research/code-context blocks, updates existing
search/content/answers/similar, deletes legacy models, adjusts
tests/docs; breaking: remove `use_auto_prompt` in favor of
`type="auto"`.
>
> - **Backend — Exa integration (SDK migration & BREAKING)**:
> - Replace manual HTTP calls with `exa_py.AsyncExa` across `search`,
`similar`, `contents`, `answers`, and webhooks; richer outputs
(citations, context, costs, resolved search type).
> - BREAKING: remove `Input.use_auto_prompt`; use `type = "auto"`.
> - Centralize models/utilities in `exa/helpers.py` (content settings,
cost models, result mappers).
> - **New Blocks**:
> - **Websets**: management (`websets.py`), searches, items,
enrichments, imports/exports, monitors, polling (new files under
`exa/websets_*`).
> - **Research**: async research task create/get/wait/list
(`exa/research.py`).
> - **Code Context**: code snippet/context retrieval
(`exa/code_context.py`).
> - **Removals**:
> - Delete deprecated `exa/model.py`.
> - **Docs & DX**:
> - Update `docs/new_blocks.md` (error handling, models, file input) and
`CLAUDE.md`; ignore backend logs in `.gitignore`.
> - **Frontend Tests**:
> - Split/extend “e” block tests and improve block add robustness in
Playwright (`build.spec.ts`, `build.page.ts`).
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e5e572322. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added multiple Exa research and webset management blocks for task
creation, monitoring, and completion tracking.
* Introduced new search capabilities including code context retrieval,
content search, and enhanced filtering options.
* Added webset enrichment, import/export, and item management
functionality.
* Expanded search with location-based and category filters.
* **Documentation**
* Updated guidance on error handling, data models, and file input
handling.
* **Refactor**
* Modernized backend API integration with improved response structure
and error reporting.
* Simplified configuration options for search operations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves#11316
- Durable fix to replace #11318
### Changes 🏗️
- Expand graph execution permissions check
- Don't require library membership for execution as sub-graph
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Can run sub-agent with non-latest graph version
- [x] Can run sub-agent that is available in Marketplace but not added
to Library
## Summary
Fix critical queue blocking issue where rate-limited user messages
prevent other users' executions from being processed, causing the 135
late executions reported in production.
## Root Cause Analysis
When a user exceeds `max_concurrent_graph_executions_per_user` (25), the
executor uses `basic_nack(requeue=True)` which sends the message to the
**FRONT** of the RabbitMQ queue. This creates an infinite blocking loop
where:
1. Rate-limited message goes to front of queue
2. Gets processed, hits rate limit again
3. Goes back to front of queue
4. Blocks all other users' messages indefinitely
## Solution Implementation
### 🔧 Core Changes
- **New setting**: `requeue_by_republishing` (default: `True`) in
`backend/util/settings.py`
- **Smart `_ack_message`**: Automatically uses republishing when
`requeue=True` and setting enabled
- **Efficient implementation**: Uses existing `self.run_client`
connection instead of creating new ones
- **Integration test**: Real RabbitMQ test validates queue ordering
behavior
### 🔄 Technical Implementation
**Before (blocking):**
```python
basic_nack(delivery_tag, requeue=True) # Goes to FRONT of queue ❌
```
**After (non-blocking):**
```python
if requeue and self.config.requeue_by_republishing:
# First: Republish to BACK of queue
self.run_client.publish_message(...)
# Then: Reject without requeue
basic_nack(delivery_tag, requeue=False)
```
### 📊 Impact
- ✅ **Other users' executions no longer blocked** by rate-limited users
- ✅ **Fair queue processing** - FIFO behavior maintained for all users
- ✅ **Rate limiting still works** - just doesn't block others
- ✅ **Configurable** - can revert to old behavior with
`requeue_by_republishing=False`
- ✅ **Zero performance impact** - uses existing connections
## Test Plan
- **Integration test**: `test_requeue_integration.py` validates real
RabbitMQ queue ordering
- **Scenario testing**: Confirms rate-limited messages go to back of
queue
- **Cross-user validation**: Verifies other users' messages process
correctly
- **Setting test**: Confirms configuration loads with correct defaults
## Deployment Strategy
This is a **hotfix** that can be deployed immediately:
- **Backward compatible**: Old behavior available via config
- **Safe default**: New behavior is safer than current state
- **No breaking changes**: All existing functionality preserved
- **Immediate relief**: Resolves production queue blocking
## Files Modified
- `backend/executor/manager.py`: Enhanced `_ack_message` logic and
`_requeue_message_to_back` method
- `backend/util/settings.py`: Added `requeue_by_republishing`
configuration field
- `test_requeue_integration.py`: Integration test for queue ordering
validation
## Related Issues
Fixes the 135 late executions issue where messages were stuck in QUEUED
state despite available executor capacity (583m/600m utilization).
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- Unmask for Sentry:
- Agent name&creator on onboarding cards
- Edge paths
- Block I/O names
- Prevent firing `onClick` when onboarding agents are loading
- Prevent confetti on null elements and top-left corner
- Fix tooltip on Wallet hover
- Fix `0` appearing in place of notification dot on the Wallet button
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding works and can be completed
- [x] Wallet confetti works properly
- [x] Tooltip works
Implements foundational backend infrastructure for chat-based agent
interaction system. Users will be able to discover, configure, and run
marketplace agents through conversational AI.
**Note:** Chat routes are behind a feature flag
### Changes 🏗️
**Core Chat System:**
- Chat service with LLM orchestration (Claude 3.5 Sonnet, Haiku, GPT-4)
- REST API routes for sessions and messages
- Database layer for chat persistence
- System prompts and configuration
**5 Conversational Tools:**
1. `find_agent` - Search marketplace by keywords
2. `get_agent_details` - Fetch agent info, inputs, credentials
3. `get_required_setup_info` - Check user readiness, missing credentials
4. `run_agent` - Execute agents immediately
5. `setup_agent` - Configure scheduled execution with cron
**Testing:**
- 28 tests across chat tools (23 passing, 5 skipped for scheduler)
- Test fixtures for simple, LLM, and Firecrawl agents
- Service and data layer tests
**Bug Fixes:**
- Fixed `setup_agent.py` to create schedules instead of immediate
execution
- Fixed graph lookup to use UUID instead of username/slug
- Fixed credential matching by provider/type instead of ID
- Fixed internal tool calls to use `._execute()` instead of `.execute()`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All 28 chat tool tests pass (23 pass, 5 skip - require scheduler)
- [x] Code formatting and linting pass
- [x] Tool execution flow validated through unit tests
- [x] Agent discovery, details, and execution tested
- [x] Credential parsing and matching tested
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required - all existing settings compatible.
### Changes 🏗️
Change copywriting for execution task summary
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual review
Currently, we are rendering text for all types of outputs, even if it’s
a video, image, or other type. So, In current we fixed it by rendering
them correctly. Also, some output actions weren’t working, so fixed them
also.
<img width="1486" height="1080" alt="Screenshot 2025-10-27 at 4 36
33 PM"
src="https://github.com/user-attachments/assets/4e4ee43f-5400-477e-8fa9-2914acf11466"
/>
<img width="463" height="683" alt="Screenshot 2025-10-27 at 4 39 00 PM"
src="https://github.com/user-attachments/assets/bfc09c00-58dd-4a0d-96a2-aa51cc282797"
/>
<img width="1455" height="753" alt="Screenshot 2025-10-27 at 4 36 56 PM"
src="https://github.com/user-attachments/assets/52870ffe-3e47-4b0f-bfa3-8d8bbe38cbbd"
/>
<img width="1131" height="1062" alt="Screenshot 2025-10-27 at 4 37
17 PM"
src="https://github.com/user-attachments/assets/e55040e9-33e6-45a8-8397-bf912e93840f"
/>
### Changes 🏗️
- Add a new design for the node output.
- Render the correct HTML tag for each type.
- Make all the output actions below the data section workable, such as
viewing the complete data or copying it.
- Add a “View more” button. We’re only seeing two pins of output. If we
have more pins, we can view all the output in a dialogue box.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] able to render different types of output data correctly.
- [x] All output actions are working perfectly.
---------
Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
### Changes 🏗️
add_store_agent_to_library does not add subagents to the user library,
this check can cause issues.
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
## Changes 🏗️
Fixing a ✍🏽 typo found by @Pwuts
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app
- [x] No typos on the breadcrumbs
This PR introduces scheduling functionality to the new builder, allowing
users to create cron-based schedules for automated graph execution with
configurable inputs and credentials.
https://github.com/user-attachments/assets/20c1359f-a3d6-47bf-a881-4f22c657906c
## What's New
### 🚀 Features
#### Scheduling Infrastructure
- **CronSchedulerDialog Component**: Interactive dialog for creating
scheduled runs with:
- Schedule name configuration
- Cron expression builder with visual UI
- Timezone support (displays user timezone or defaults to UTC)
- Integration with backend scheduling API
- **ScheduleGraph Component**: New action button in builder actions
toolbar
- Clock icon button to initiate scheduling workflow
- Handles conditional flow based on input/credential requirements
#### Enhanced Input Management
- **Unified RunInputDialog**: Refactored to support both manual runs and
scheduled runs
- Dynamic "purpose" prop (`"run"` | `"schedule"`) for contextual
behavior
- Seamless credential and input collection flow
- Transitions to cron scheduler when scheduling
#### Builder Actions Improvements
- **New Action Buttons Layout**: Three primary actions in the builder
toolbar:
1. Agent Outputs (placeholder for future implementation)
2. Run Graph (play/stop button with gradient styling)
3. Schedule Graph (clock icon for scheduling)
## Technical Details
### New Components
- `CronSchedulerDialog` - Main scheduling dialog component
- `useCronSchedulerDialog` - Hook managing scheduling logic and API
calls
- `ScheduleGraph` - Schedule button component
- `useScheduleGraph` - Hook for scheduling flow control
- `AgentOutputs` - Placeholder component for future outputs feature
### Modified Components
- `BuilderActions` - Added new action buttons
- `RunGraph` - Enhanced with tooltip support
- `RunInputDialog` - Made multi-purpose for run/schedule
- `useRunInputDialog` - Added scheduling dialog state management
### API Integration
- Uses `usePostV1CreateExecutionSchedule` for schedule creation
- Fetches user timezone with `useGetV1GetUserTimezone`
- Validates and passes graph ID, version, inputs, and credentials
## User Experience
1. **Without Inputs/Credentials**:
- Click schedule button → Opens cron scheduler directly
2. **With Inputs/Credentials**:
- Click schedule button → Opens input dialog
- Fill required fields → Click "Schedule Run"
- Configure cron expression → Create schedule
3. **Timezone Awareness**:
- Shows user's configured timezone
- Warns if no timezone is set (defaults to UTC)
- Provides link to timezone settings
## Testing Checklist
- [x] Create a schedule without inputs/credentials
- [x] Create a schedule with required inputs
- [x] Create a schedule with credentials
- [x] Verify timezone display (with and without user timezone)
This PR introduces comprehensive undo/redo functionality to the flow
builder, allowing users to revert and restore changes to their
workflows. The implementation includes keyboard shortcuts (Ctrl/Cmd+Z
for undo, Ctrl/Cmd+Y for redo) and visual controls in the UI.
https://github.com/user-attachments/assets/514253a6-4e86-4ac5-96b4-992180fb3b00
### What's New 🚀
- **Undo/Redo State Management**: Implemented a dedicated Zustand store
(`historyStore`) that tracks up to 50 historical states of nodes and
connections
- **Keyboard Shortcuts**: Added cross-platform keyboard shortcuts:
- `Ctrl/Cmd + Z` for undo
- `Ctrl/Cmd + Y` for redo
- **UI Controls**: Added dedicated undo/redo buttons to the control
panel with:
- Visual feedback when actions are available/disabled
- Tooltips for better user guidance
- Proper accessibility attributes
- **Automatic History Tracking**: Integrated history tracking into node
operations (add, remove, position changes, data updates)
### Technical Details 🔧
#### Architecture
- **History Store** (`historyStore.ts`): Manages past and future states
using a stack-based approach
- Stores snapshots of nodes and connections
- Implements state deduplication to prevent duplicate history entries
- Limits history to 50 states to manage memory usage
- **Integration Points**:
- `nodeStore.ts`: Modified to push state changes to history on relevant
operations
- `Flow.tsx`: Added the new `useFlowRealtime` hook for real-time updates
- `NewControlPanel.tsx`: Integrated the new `UndoRedoButtons` component
#### UI Improvements
- **Enhanced Control Panel Button**: Updated to support different HTML
elements (button/div) with proper role attributes for accessibility
- **Block Menu Tooltips**: Added tooltips to improve user guidance
- **Responsive UI**: Adjusted tooltip delays for better responsiveness
(100ms delay)
### Testing Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description ✅
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new flow with multiple nodes and verify undo/redo works
for node additions
- [x] Move nodes and verify position changes can be undone/redone
- [x] Delete nodes and verify deletions can be undone
- [x] Test keyboard shortcuts (Ctrl/Cmd+Z and Ctrl/Cmd+Y) on different
platforms
- [x] Verify undo/redo buttons are disabled when no history is available
- [x] Test with complex flows (10+ nodes) to ensure performance remains
good
## Summary
Implement comprehensive admin user impersonation functionality to enable
admins to act on behalf of any user for debugging and support purposes.
## 🔐 Security Features
- **Admin Role Validation**: Only users with 'admin' role can
impersonate others
- **Header-Based Authentication**: Uses `X-Act-As-User-Id` header for
impersonation requests
- **Comprehensive Audit Logging**: All impersonation attempts logged
with admin details
- **Secure Error Handling**: Proper HTTP 403/401 responses for
unauthorized access
- **SSR Safety**: Client-side environment checks prevent server-side
rendering issues
## 🏗️ Architecture
### Backend Implementation (`autogpt_libs/auth/dependencies.py`)
- Enhanced `get_user_id` FastAPI dependency to process impersonation
headers
- Admin role verification using existing `verify_user()` function
- Audit trail logging with admin email, user ID, and target user
- Seamless integration with all existing routes using `get_user_id`
dependency
### Frontend Implementation
- **React Hook**: `useAdminImpersonation` for state management and API
calls
- **Security Banner**: Prominent warning when impersonation is active
- **Admin Panel**: Control interface for starting/stopping impersonation
- **Session Persistence**: Maintains impersonation state across page
refreshes
- **Full Page Refresh**: Ensures all data updates correctly on state
changes
### API Integration
- **Header Forwarding**: All API requests include impersonation header
when active
- **Proxy Support**: Next.js API proxy forwards headers to backend
- **Generated Hooks**: Compatible with existing React Query API hooks
- **Error Handling**: Graceful fallback for storage/authentication
failures
## 🎯 User Experience
### For Admins
1. Navigate to `/admin/impersonation`
2. Enter target user ID (UUID format with validation)
3. System displays security banner during active impersonation
4. All API calls automatically use impersonated user context
5. Click "Stop Impersonation" to return to admin context
### Security Notice
- **Audit Trail**: All impersonation logged with `logger.info()`
including admin email
- **Session Isolation**: Impersonation state stored in sessionStorage
(not persistent)
- **No Token Manipulation**: Uses header-based approach, preserving
admin's JWT
- **Role Enforcement**: Backend validates admin role on every
impersonated request
## 🔧 Technical Details
### Constants & Configuration
- `IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id"`
- `IMPERSONATION_STORAGE_KEY = "admin-impersonate-user-id"`
- Centralized in `frontend/src/lib/constants.ts` and
`autogpt_libs/auth/dependencies.py`
### Code Quality Improvements
- **DRY Principle**: Eliminated duplicate header forwarding logic
- **Icon Compliance**: Uses Phosphor Icons per coding guidelines
- **Type Safety**: Proper TypeScript interfaces and error handling
- **SSR Compatibility**: Environment checks for client-side only
operations
- **Error Consistency**: Uniform silent failure with logging approach
### Testing
- Updated backend auth dependency tests for new function signatures
- Added Mock Request objects for comprehensive test coverage
- Maintained existing test functionality while extending capabilities
## 🚀 CodeRabbit Review Responses
All CodeRabbit feedback has been addressed:
1. ✅ **DRY Principle**: Refactored duplicate header forwarding logic
2. ✅ **Icon Library**: Replaced lucide-react with Phosphor Icons
3. ✅ **SSR Safety**: Added environment checks for sessionStorage
4. ✅ **UI Improvements**: Synchronous initialization prevents flicker
5. ✅ **Error Handling**: Consistent silent failure with logging
6. ✅ **Backend Validation**: Confirmed comprehensive security
implementation
7. ✅ **Type Safety**: Addressed TypeScript concerns
8. ✅ **Code Standards**: Followed all coding guidelines and best
practices
## 🧪 Testing Instructions
1. **Login as Admin**: Ensure user has admin role
2. **Navigate to Panel**: Go to `/admin/impersonation`
3. **Test Impersonation**: Enter valid user UUID and start impersonation
4. **Verify Banner**: Security banner should appear at top of all pages
5. **Test API Calls**: Verify credits/graphs/etc show impersonated
user's data
6. **Check Logging**: Backend logs should show impersonation audit trail
7. **Stop Impersonation**: Verify return to admin context works
correctly
## 📝 Files Modified
### Backend
- `autogpt_libs/auth/dependencies.py` - Core impersonation logic
- `autogpt_libs/auth/dependencies_test.py` - Updated test signatures
### Frontend
- `src/hooks/useAdminImpersonation.ts` - State management hook
- `src/components/admin/AdminImpersonationBanner.tsx` - Security warning
banner
- `src/components/admin/AdminImpersonationPanel.tsx` - Admin control
interface
- `src/app/(platform)/admin/impersonation/page.tsx` - Admin page
- `src/app/(platform)/admin/layout.tsx` - Navigation integration
- `src/app/(platform)/layout.tsx` - Banner integration
- `src/lib/autogpt-server-api/client.ts` - Header injection for API
calls
- `src/lib/autogpt-server-api/helpers.ts` - Header forwarding logic
- `src/app/api/proxy/[...path]/route.ts` - Proxy header forwarding
- `src/app/api/mutators/custom-mutator.ts` - Enhanced error handling
- `src/lib/constants.ts` - Shared constants
## 🔒 Security Compliance
- **Authorization**: Admin role required for impersonation access
- **Authentication**: Uses existing JWT validation with additional role
checks
- **Audit Logging**: Comprehensive logging of all impersonation
activities
- **Error Handling**: Secure error responses without information leakage
- **Session Management**: Temporary sessionStorage without persistent
data
- **Header Validation**: Proper sanitization and validation of
impersonation headers
This implementation provides a secure, auditable, and user-friendly
admin impersonation system that integrates seamlessly with the existing
AutoGPT Platform architecture.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Admin user impersonation to view the app as another user.
* New "User Impersonation" admin page for entering target user IDs and
managing sessions.
* Sidebar link for quick access to the impersonation page.
* Persistent impersonation state that updates app data (e.g., credits)
and survives page reloads.
* Top warning banner when impersonation is active with a Stop
Impersonation control.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
<img width="800" height="876" alt="Screenshot_2025-10-29_at_22 56 43"
src="https://github.com/user-attachments/assets/e1d9cf62-0a81-4658-82c2-6e673d636479"
/>
New `<GoogleDrivePicker />` component that, when rendered:
- re-uses existing Google credentials OR asks the user to SSO
- uses the Google Drive Picker script to launch a modal for the user to
select files
We will need this 3 new environment variables on the Front-end for it to
work:
```
# Google Drive Picker
NEXT_PUBLIC_GOOGLE_CLIENT_ID=
NEXT_PUBLIC_GOOGLE_API_KEY=
NEXT_PUBLIC_GOOGLE_APP_ID=
```
Updated `.env.default` with them.
### Next
We need to figure out how to map this to an agent input type and update
the Back-end to accept the files as input.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I tried the whole flow
### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
### Changes 🏗️
This PR enhances the agent execution functionality by introducing a
dynamic input dialog that collects both regular inputs and credentials
before running agents.
<img width="1309" height="826" alt="Screenshot 2025-11-03 at 10 16
38 AM"
src="https://github.com/user-attachments/assets/2015da5d-055d-49c5-8e7e-31bd0fe369f4"
/>
#### ✨ New Features
- **Dynamic Input Dialog**: Added a new `RunInputDialog` component that
automatically detects when agents require inputs or credentials and
prompts users before execution
- **Credential Management**: Integrated credential input handling
directly into the execution flow, supporting various credential types
(API keys, OAuth, passwords)
- **Enhanced Run Controls**: Improved the `RunGraph` component with
better state management and visual feedback for running/stopping agents
- **Form Renderer**: Created a new unified `FormRenderer` component for
consistent input rendering across the application
#### 🔧 Refactoring
- **Input Renderer Migration**: Moved input renderer components from
FlowEditor-specific location to a shared components directory for better
reusability:
- Migrated fields (AnyOfField, CredentialField, ObjectField)
- Migrated widgets (ArrayEditor, DateInput, SelectWidget, TextInput,
etc.)
- Migrated templates (FieldTemplate, ArrayFieldTemplate)
- **State Management**: Enhanced `graphStore` with schemas for inputs
and credentials, including helper methods to check for their presence
- **Component Organization**: Restructured BuilderActions components for
better modularity
#### 🗑️ Cleanup
- Removed outdated FlowEditor documentation files (FORM_CREATOR.md,
README.md)
- Removed deprecated `RunGraph` and `useRunGraph` implementations from
FlowEditor
- Consolidated duplicate functionality into new shared components
#### 🎨 UI/UX Improvements
- Added gradient styling to Run/Stop button for better visual appeal
- Improved dialog layout with clear sections for Credentials and Inputs
- Enhanced form fields with size variants (small, medium, large) for
better responsiveness
- Added loading states and proper error handling during execution
### Technical Details
- The new system automatically detects input requirements from the graph
schema
- Credentials are handled separately with special UI treatment based on
credential type
- The dialog only appears when inputs or credentials are actually
required
- Execution flow: Save graph → Check for inputs/credentials → Show
dialog if needed → Execute with provided values
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create an agent without inputs and verify it runs directly without
dialog
- [x] Create an agent with input blocks and verify the dialog appears
with correct fields
- [x] Create an agent requiring credentials and verify credential
selection/creation works
- [x] Test agent execution with both inputs and credentials
- [x] Verify Stop Agent functionality during execution
- [x] Test error handling for invalid inputs or missing credentials
- [x] Verify that the dialog closes properly after submission
- [x] Test that execution state is properly reflected in the UI
Beads are reset when saving but not on run which can result in beads
from previous runs accumulating on the opened graph.
### Changes 🏗️
- Move bead reset code to function and call it before run
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Beads reset on every run
### Changes 🏗️
- Prevent removing progress of user onboarding tasks by merging arrays
on the backend instead of replacing them
- New endpoint for onboarding reset
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tasks are not being reset
- [x] `/onboarding/reset` works
- #11273
- Bump `apscheduler` to v3.11.1 which contains a fix for the issue
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
- [x] CI passes
- #11273
### Changes 🏗️
- Bump `apscheduler` to v3.11.1 which contains a fix for the issue
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
- [x] CI passes
<!-- Clearly explain the need for these changes: -->
This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`
**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override error field with more specific descriptions
when needed
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
- [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
- [x] Validated core schema inheritance chain works correctly
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes were needed for this refactoring.*
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
### Changes 🏗️
- Increased `max_field_size` in `aiohttp.ClientSession` to 16KB to
handle servers with large headers (e.g., long CSP headers).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Add unit test that checks it can now parse headers over 8k size
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
<img width="800" height="547" alt="Screenshot 2025-10-29 at 22 11 35"
src="https://github.com/user-attachments/assets/5c700ddc-d770-48ef-9847-7e652c5dedcb"
/>
<br /><br />
- Use
[`react-currency-input-field`](https://www.npmjs.com/package/react-currency-input-field)
for `<Input type="amount" />` under the hood
- so it formats numbers nicely with `,` and `.`
- Simplify form logic
- Make the popover cover the trigger button when open
- Re-organize imports
- Show a `$` prefix in front of the amount inputs
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Open the wallet with credits enabled
- [x] Play with the inputs
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
### Changes 🏗️
- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.
Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue was that: LLM returned list of strings instead of single
string summary, causing `_combine_summaries` to fail on `join`.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Marketplace sort by functionality was not working on the frontend. This
PR fixes it
### Changes 🏗️
- Add type hints for sort by
- Fix marketplace sort by drop downs
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested locally
### Changes 🏗️
- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.
Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue was that: Frontend error handler unconditionally calls
`response.json()` on a non-JSON HTML error page starting with 'A'.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test Plan:
- [x] Created unit tests for the issue that caused the error
- [x] Created unit tests to ensure responses are parsed gracefully
### Changes 🏗️
Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.
**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via
`sorted_by` parameter
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories are safely handled
- All 10 tests in db_test.py pass successfully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All existing tests pass (10/10 tests passing)
- [x] New security tests validate SQL injection prevention
- [x] Verified parameterized queries handle malicious input safely
- [x] Code formatting passes (`poetry run format`)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required for this security fix*
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.
## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`
## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.
## Solution
### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)
### 2. Enhance Documentation (backend/data/graph.py)
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility
## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify:
- Graph exists in user's library
- Graph is not deleted/archived
- User has execution permissions
## Impact
- ✅ Parent graphs can execute even when they reference older sub-graph
versions
- ✅ Sub-graph updates don't break existing parent graphs
- ✅ Maintains security: still checks library membership and permissions
- ✅ No breaking changes: version-specific validation still available
when needed
## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)
## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks
## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior
Closes #[issue-number-if-applicable]
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
The `<Wallet />` was being rendered twice ( one hidden with CSS `hidden`
) because of the Navbar layout, which caused logic issues within the
wallet. I changed to render it conditionally via Javascript instance,
which is always better practice than use `hidden` specially for
components with actual logic.
I also moved the component files closer to where it is used ( in the
navbar ).
I have a Cursor plugin that removes imports when unused, but annoyingly
re-organizes them, hence the changes around that...
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] There is only 1 Wallet in the DOM
📨 Fix: Handle Oversized Notification Emails
Summary
This PR adds logic to detect and handle oversized notification emails
exceeding Postmark’s 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.
Changes
Added size check in EmailSender.send_templated()
Sends summary email when payload > ~4.5 MB
Prevents infinite retries and queue clogging
Added logs for oversized detection
Fixes#11119
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
A couple of improvements on **Onboarding Step 5**:
- Show a spinner when the page is loading ( better contrast / context
than skeleton in this case )
- Prevent the run button being disabled if credentials failed to load
- while this is good/expected behavior, it will help us debug the issue
in production where credentials failed to load silently, given running
the agent it'll throw an error we can see
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new account/signup
- [x] On Onboarding Step 5 test the above
- Resolves#11251
This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)
### Changes 🏗️
- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain serialized name through a field alias
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
- Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
### Changes 🏗️
### Before
<img width="800" height="649" alt="Screenshot_2025-10-23_at_00 44 59"
src="https://github.com/user-attachments/assets/fd717d39-772a-4331-bc54-4db15a9a3107"
/>
### After
<img width="800" height="555" alt="Screenshot 2025-10-27 at 23 19 10"
src="https://github.com/user-attachments/assets/64878bd0-3a96-4b3a-8344-1a88c89de52e"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try to signup with a non-approved email
- [x] You see the modal with an updated copy
## Changes 🏗️
The mobile 📱 experience is still a mess but this helps a little.
### Before
<img width="350" height="395" alt="Screenshot 2025-10-24 at 18 26 18"
src="https://github.com/user-attachments/assets/75eab232-8c37-41e7-a51d-dbe07db336a0"
/>
### After
<img width="350" height="406" alt="Screenshot 2025-10-24 at 18 25 54"
src="https://github.com/user-attachments/assets/ecbd8bbd-8a94-4775-b990-c8b51de48cf9"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Load the app
- [x] Check the Tally popup button copy
- [x] The button still works
### Changes 🏗️
- Modifies the LLM block to extract the actual response from the
dictionary returned by the LLM, instead of yielding the entire
dictionary. This addresses
[AUTOGPT-SERVER-6EY](https://sentry.io/organizations/significant-gravitas/issues/6950850822/).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] After applying the fix, I ran the agent that triggered the Sentry
error and confirmed that it now completes successfully without errors.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Fixes
[AUTOGPT-SERVER-6KZ](https://sentry.io/organizations/significant-gravitas/issues/6976926125/).
The issue was that: Redirect handling strips the URL scheme, causing
subsequent requests to fail validation and hit a 404.
- Ensures the hostname in the URL is properly IDNA-encoded after
validation.
- Reconstructs the netloc with the encoded hostname and preserves the
port if it exists.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2204774
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6976926125/?seerDrawer=true)
### Changes 🏗️
**backend/util/request.py:**
- Fixed URL validation to properly preserve port numbers when
reconstructing netloc
- Ensures IDNA-encoded hostname is combined with port (if present)
before URL reconstruction
**Test Results:**
- ✅ Tested request to https://www.target.com/ (original failing URL from
Sentry issue)
- ✅ Status: 200, Content retrieved successfully (339,846 bytes)
- ✅ Port preservation verified for URLs with explicit ports
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested request to https://www.target.com/ (original failing URL)
- [x] Verified status code 200 and successful content retrieval
- [x] Verified port preservation in URL validation
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
- Resolves#10980
- 2nd attempt after #11075 broke some things
Fixes unnecessary graph re-saving when no changes were made after
initial save. More specifically, this PR fixes two causes of this issue:
- Frontend node IDs were being compared to backend IDs, which won't
match if the graph has been modified and saved since loading.
- `fillDefaults` was being applied to all nodes (including existing
ones) on element creation, and empty values were being stripped
*post-save* with `removeEmptyStringsAndNulls`. This invisible
auto-modification of node input data meant that in some common cases the
graph would never be in sync with the backend.
### Changes 🏗️
- Fix node ID handling
- Use `node.data.backend_id ?? node.id` instead of `node.id` in
`prepareSaveableGraph`
- Also map link source/sink IDs to their corresponding backend IDs
- Add note about `node.data.backend_id` to `_saveAgent`
- Use `node.data.backend_id || node.id` as display ID in `CustomNode`
- Prevent auto-modification of node input data on existing nodes
- Prune empty values (`undefined`, `null`, `""`) from node input data
*pre-save* instead of post-save
- Related: improve typing and functionality of
`fillObjectDefaultsFromSchema` (moved and renamed from `fillDefaults`)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Node display ID updates on save
- [x] Clicking save a second time (without making more changes) doesn't
cause re-save
- [x] Updating nodes with dynamic input links (e.g. Create Dictionary
Block) doesn't make the links disappear
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Prevented unintended auto-modification of existing nodes during
editing
* Improved consistency of node and connection identifiers in saved
graphs
* **Improvements**
* Enhanced node title display logic for clearer node identification
* Optimized data cleanup utilities for more robust input processing in
the builder
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Removes CLAUDE_3_5_SONNET and CLAUDE_3_5_HAIKU from LlmModel enum, model
metadata, and cost configuration since they are deprecated
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify the models are gone from the llm blocks
## Changes 🏗️
We weren't fetching all library agents, just the first 15... to compute
the agent map on the Agent Activity dropdown. We suspect that is causing
some agent executions coming as `Unknown agent`.
In this changes, I'm fetching all the library agents upfront ( _without
blocking page load_ ) and caching them on the browser, so we have all
the details to render the agent runs. This is re-used in the library as
well for fast initial load on the agents list page.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First request populates cache; subsequent identical requests hit
cache
- [x] Editing an agent invalidates relevant cache keys and serves fresh
data
- [x] Different query params generate distinct cache entries
- [x] Cache layer gracefully falls back to live data on errors
- [x] 404 behavior for unknown agents unchanged
### For configuration changes:
None
## Changes 🏗️
The onboarding `Run` button is disabled sometimes when an agent
requiring credentials is selected. We think this can be because the
credentials load _async_ by a sub-component ( `<CredentialsInputs />` ),
and there wasn't a way for the parent component to know whether they
loaded or not.
- Refactored **Step 5** of onboarding to adhere to our code conventions
- split concerns and colocated state
- used generated API hooks
- the UI will only render once API calls succeed
- Created a system where ``<CredentialsInputs />` notify the parent
component when they load
- Did minor adjustments here and there
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I will know once I find an agent with credentials that I can
run....
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added visual agent selection card displaying agent details during
onboarding
* Introduced credentials input management for agent configuration
* Added onboarding guidance for initiating agent runs
* **Improvements**
* Enhanced onboarding flow with improved state management
* Refined login state handling
* Adjusted spacing in agent rating display
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Categories and Creators where not sanitized in the full text search
- apply sanitization to categories and creators
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] run tests to check it still works
Categories and Creators where not sanitized in the full text search
### Changes 🏗️
- apply sanitization to categories and creators
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] run tests to check it still works
Added error output pins to all Firecrawl blocks as standard on the
AutoGPT platform. The base block execution code already handles error
yielding, so no try-catch logic was needed.
- FirecrawlScrapeBlock: Added error output pin for scrape failures
- FirecrawlCrawlBlock: Added error output pin for crawl failures
- FirecrawlExtractBlock: Added error output pin for extraction failures
- FirecrawlMapBlock: Added error output pin for map failures
- FirecrawlSearchBlock: Added error output pin for search failures
Resolves#11253
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
We're currently seeing errors in the `DatabaseManager` while it's
shutting down, like:
```
WARNING [DatabaseManager] Termination request: SystemExit; 0 executing cleanup.
INFO [DatabaseManager] ⏳ Disconnecting Database...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection started...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection completed successfully.
INFO [DatabaseManager] Terminated.
ERROR POST /create_or_add_to_user_notification_batch failed: Failed to create or add to notification batch for user {user_id} and type AGENT_RUN: NoneType: None
```
This indicates two issues:
- The service doesn't wait for pending RPC calls to finish before
terminating
- We're using `logger.exception` outside an error handling context,
causing the confusing and not much useful `NoneType: None` to be printed
instead of error info
### Changes 🏗️
- Implement graceful shutdown in `AppService` so in-flight RPC calls can
finish
- Add tests for graceful shutdown
- Prevent `AppService` accepting new requests during shutdown
- Rework `AppService` lifecycle management; add support for async
`lifespan`
- Fix `AppService` endpoint error logging
- Improve logging in `AppProcess` and `AppService`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Deploy to Dev cluster, then `kubectl rollout restart` the different
services a few times
- [x] -> `DatabaseManager` doesn't break on re-deployment
- [x] -> `Scheduler` doesn't break on re-deployment
- [x] -> `NotificationManager` doesn't break on re-deployment
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify AIShortformVideoCreatorBlock costs 307 credits when
executed
- [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
- [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
This PR implements real-time agent execution functionality in the new
flow editor, enabling users to run, monitor, and view results of their
agent workflows directly within the builder interface.
https://github.com/user-attachments/assets/8a730e08-f88d-49d4-be31-980e2c7a2f83
#### Key Features Added:
##### 1. **Agent Execution Controls**
- Added "Run Agent" / "Stop Agent" button with gradient styling in the
builder interface
- Implemented execution state management through a new `graphStore` for
tracking running status
- Save graph automatically before execution to ensure latest changes are
persisted
##### 2. **Real-time Execution Monitoring**
- Implemented WebSocket-based real-time updates for node execution
status via `useFlowRealtime` hook
- Subscribe to graph execution events and node execution events for live
status tracking
- Visual execution status badges on nodes showing states: `QUEUED`,
`RUNNING`, `COMPLETED`, `FAILED`, etc.
- Animated gradient border effect when agent is actively running
##### 3. **Node Execution Results Display**
- New `NodeDataRenderer` component to display input/output data for each
executed node
- Collapsible result sections with formatted JSON display
- Prepared UI for future functionality: copy, info, and expand actions
for node data
#### Technical Implementation:
- **State Management**: Extended `nodeStore` with execution status and
result tracking methods
- **WebSocket Integration**: Real-time communication for execution
updates without polling
- **Component Architecture**: Modular components for execution controls,
status display, and result rendering
- **Visual Feedback**: Color-coded status badges and animated borders
for clear execution state indication
#### TODO Items for Future PRs:
- Complete implementation of node result action buttons (copy, info,
expand)
- Add agent output display component
- Implement schedule run functionality
- Handle credential and input parameters for graph execution
- Add tooltips for better UX
### Checklist
- [x] Create a new agent with at least 3 blocks and verify execution
starts correctly
- [x] Verify real-time status updates appear on nodes during execution
- [x] Confirm execution results display in the node output sections
- [x] Verify the animated border appears when agent is running
- [x] Check that node status badges show correct states (QUEUED,
RUNNING, COMPLETED, etc.)
- [x] Test WebSocket reconnection after connection loss
- [x] Verify graph is saved before execution begins
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify AIShortformVideoCreatorBlock costs 307 credits when
executed
- [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
- [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
Improves the "not on waitlist" error display based on feedback.
- Follow-up to #11198
- Follow-up to #11196
### Changes 🏗️
- Use standard `ErrorCard`
- Improve text strings
- Merge `isWaitlistError` and `isWaitlistErrorFromParams`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] We need to test in dev becasue we don't have a waitlist locally
and will revert if it doesnt work
- deploy to dev environment and sign up with a non approved account and
see if error appears
Part of our effort to eliminate preventable warnings and errors.
- Resolves#11237
### Changes 🏗️
- Exclude `undefined` query params in API requests
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Open the Builder without a `flowVersion` URL parameter
- [x] -> `GET /api/library/agents/by-graph/{graph_id}` succeeds
- Open the builder with a `flowVersion` URL parameter
- [x] -> version is correctly included in request URL parameters
## Changes 🏗️
- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
- track `add_to_library`
- track `download_agent`
- track `run_agent`
- track `schedule_agent`
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Analytics load correctly locally
- [x] Events fire in production
### For configuration changes:
Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)
To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.
**File/Location:**
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.
**Methods/Imports/Definitions:**
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.
### Changes 🏗️
- Limit Sentry console capture to warnings and worse
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed
<!-- Clearly explain the need for these changes: -->
This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan: -->
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
- [x] Confirmed that valid templates continue to work correctly
- [x] Checked that warning logs are generated for template errors
- [x] Validated that the exception chain is preserved with `from e`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Resolves#11226
### Changes 🏗️
- Drop use of `CloudLoggingHandler` which docs state isn't for use in
GKE
- For cloud logging, output only structured log entries to `stdout`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test deploy to dev and check logs
Changes to providers blocks to store in db
### Changes 🏗️
- revet change
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have reverted the merge
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups
## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall
back to database manager client
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Standardize all the runtime environment checks on the Front-end and
associated conditions to run against a single environment service where
all the environment config is centralized and hence easier to manage.
This helps prevent typos and bug when manually asserting against
environment variables ( which are typed as `string` ), the helper
functions are easier to read and re-use across the codebase.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app and click around
- [x] Everything is smooth
- [x] Test on the CI and types are green
### For configuration changes:
None 🙏🏽
## Changes 🏗️
Document how to contribute on the Front-end so it is easier for
non-regular contributors.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack
### For configuration changes:
None
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This causes 5 second extra latency on the flag check when LD is down, such as now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.
- Follow-up to #11221
### Changes 🏗️
- Only try to init LaunchDarkly once
- Improve surrounding log statements in the `feature_flag` module
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- This is a critical hotfix; we'll see its effect once deployed
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.
### Changes 🏗️
- Wrap `LaunchDarklyIntegration` init in a try/except
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- We'll see if it works once it deploys
## Problem
The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.
**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)
**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ!
No transcripts were found for any of the requested language codes: ('en',)
For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```
## Solution
Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:
1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
- Manually created transcripts (any language)
- Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all
**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ❌ Raises NoTranscriptFound
# After: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ✅ Returns Hungarian transcript
```
## Changes
- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order
## Testing
- ✅ All 7 new unit tests pass
- ✅ Block integration test passes
- ✅ Full test suite: 621 passed, 0 failed (no regressions)
- ✅ Code formatting and linting pass
## Impact
This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:
- ✅ Videos in any language can now be transcribed
- ✅ English is still preferred when available
- ✅ No breaking changes to existing functionality
- ✅ Graceful degradation to available languages
Fixes#10637
Fixes https://linear.app/autogpt/issue/OPEN-2626
> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `www.youtube.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python3`
(dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
> Issue Title: if theres only one lanague available for transcribe
youtube return that langage not an error
> Issue Description: `Could not retrieve a transcript for the video
https://www.youtube.com/watch?v=3AMl5d2NKpQ! This is most likely caused
by: No transcripts were found for any of the requested language codes:
('en',) For this video (3AMl5d2NKpQ) transcripts are available in the
following languages: (MANUALLY CREATED) None (GENERATED) - hu
("Hungarian (auto-generated)") (TRANSLATION LANGUAGES) None If you are
sure that the described cause is not responsible for this error and that
a transcript should be retrievable, please create an issue at
https://github.com/jdepoix/youtube-transcript-api/issues. Please add
which version of youtube_transcript_api you are using and provide the
information needed to replicate the error. Also make sure that there are
no open issues which already describe your problem!` you can use this
video to test:
[https://www.youtube.com/watch?v=3AMl5d2NKpQ\`](https://www.youtube.com/watch?v=3AMl5d2NKpQ%60)
> Fixes
https://linear.app/autogpt/issue/OPEN-2626/if-theres-only-one-lanague-available-for-transcribe-youtube-return
>
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10637).
All replies are displayed in both locations.
>
>
</details>
<!-- START COPILOT CODING AGENT TIPS -->
---
✨ Let Copilot coding agent [set things up for
you](https://github.com/Significant-Gravitas/AutoGPT/issues/new?title=✨+Set+up+Copilot+instructions&body=Configure%20instructions%20for%20this%20repository%20as%20documented%20in%20%5BBest%20practices%20for%20Copilot%20coding%20agent%20in%20your%20repository%5D%28https://gh.io/copilot-coding-agent-tips%29%2E%0A%0A%3COnboard%20this%20repo%3E&assignees=copilot)
— coding agent works faster and does higher quality work when set up for
your repo.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
### Need 💡
This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.
### Changes 🏗️
- **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests.
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
- Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
- **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
- **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
- [x] Tested token expiration logic with a 5-minute buffer.
- [x] Validated migration detection for old vs. new token types.
- [x] Checked code syntax and import compatibility.
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)
<a
href="https://cursor.com/background-agent?bcId=bc-95f4c668-f7fa-4057-87e5-622ac81c0783"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg"><img alt="Open in
Cursor"
src="https://cursor.com/open-in-cursor.svg"></picture></a> <a
href="https://cursor.com/agents?id=bc-95f4c668-f7fa-4057-87e5-622ac81c0783"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg"><img alt="Open in Web"
src="https://cursor.com/open-in-web.svg"></picture></a>
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Changes 🏗️
Following https://datafa.st/docs/nextjs-app-router
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform
### For configuration changes:
None
fix issue with identifying errors :(
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] we have to test in dev due to waitlist integration, so we are
merging. will revert if fails
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Summary
This PR improves the user experience for users who are not on the
waitlist during sign-up. When a user attempts to sign up or log in with
an email that's not on the allowlist, they now see a clear, helpful
modal with a direct call-to-action to join the waitlist.
Fixes
[OPEN-2794](https://linear.app/autogpt/issue/OPEN-2794/display-waitlist-error-for-users-not-on-waitlist-during-sign-up)
## Changes
- ✨ Updated `EmailNotAllowedModal` with improved messaging and a "Join
Waitlist" button
- 🔧 Fixed OAuth provider signup/login to properly display the waitlist
modal
- 📝 Enhanced auth-code-error page to detect and display
waitlist-specific errors
- 💬 Added helpful guidance about checking email address and Discord
support link
- 🎯 Consistent waitlist error handling across all auth flows (regular
signup, OAuth, error pages)
## Test Plan
Tested locally by:
1. Attempting signup with non-allowlisted email - modal appears ✅
2. Attempting Google SSO with non-allowlisted account - modal appears ✅
3. Modal shows "Join Waitlist" button that opens
https://agpt.co/waitlist✅
4. Help text about checking email and Discord support is visible ✅
## Screenshots
The new waitlist modal includes:
- Clear "Join the Waitlist" title
- Explanation that platform is in closed beta
- "Join Waitlist" button (opens in new tab)
- Help text about checking email address
- Discord support link for users who need help
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Summary
Fix critical UserBalance migration and spending issues affecting users
with credits from transaction history but no UserBalance records.
## Root Issues Fixed
### Issue 1: UserBalance Migration Complexity
- **Problem**: Complex data migration with timestamp logic issues and
potential race conditions
- **Solution**: Simplified to idempotent table creation only,
application handles auto-population
### Issue 2: Credit Spending Bug
- **Problem**: Users with $10.0 from transaction history couldn't spend
$0.16
- **Root Cause**: `_add_transaction` and `_enable_transaction` only
checked UserBalance table, returning 0 balance for users without records
- **Solution**: Enhanced both methods with transaction history fallback
logic
### Issue 3: Exception Handling Inconsistency
- **Problem**: Raw SQL unique violations raised different exception
types than Prisma ORM
- **Solution**: Convert raw SQL unique violations to
`UniqueViolationError` at source
## Changes Made
### Migration Cleanup
- **Idempotent operations**: Use `CREATE TABLE IF NOT EXISTS`, `CREATE
INDEX IF NOT EXISTS`
- **Inline foreign key**: Define constraint within `CREATE TABLE`
instead of separate `ALTER TABLE`
- **Removed data migration**: Application creates UserBalance records
on-demand
- **Safe to re-run**: No errors if table/index/constraint already exists
### Credit Logic Fixes
- **Enhanced `_add_transaction`**: Added transaction history fallback in
`user_balance_lock` CTE
- **Enhanced `_enable_transaction`**: Added same fallback logic for
payment fulfillment
- **Exception normalization**: Convert raw SQL unique violations to
`UniqueViolationError`
- **Simplified `onboarding_reward`**: Use standardized
`UniqueViolationError` catching
### SQL Fallback Pattern
```sql
COALESCE(
(SELECT balance FROM UserBalance WHERE userId = ? FOR UPDATE),
-- Fallback: compute from transaction history if UserBalance doesn't exist
(SELECT COALESCE(ct.runningBalance, 0)
FROM CreditTransaction ct
WHERE ct.userId = ? AND ct.isActive = true AND ct.runningBalance IS NOT NULL
ORDER BY ct.createdAt DESC LIMIT 1),
0
) as balance
```
## Impact
### Before
- ❌ Users with transaction history but no UserBalance couldn't spend
credits
- ❌ Migration had complex timestamp logic with potential bugs
- ❌ Raw SQL and Prisma exceptions handled differently
- ❌ Error: "Insufficient balance of $10.0, where this will cost $0.16"
### After
- ✅ Seamless spending for all users regardless of UserBalance record
existence
- ✅ Simple, idempotent migration that's safe to re-run
- ✅ Consistent exception handling across all credit operations
- ✅ Automatic UserBalance record creation during first transaction
- ✅ Backward compatible - existing users unaffected
## Business Value
- **Eliminates user frustration**: Users can spend their credits
immediately
- **Smooth migration path**: From old User.balance to new UserBalance
table
- **Better reliability**: Atomic operations with proper error handling
- **Maintainable code**: Consistent patterns across credit operations
## Test Plan
- [ ] Manual testing with users who have transaction history but no
UserBalance records
- [ ] Verify migration can be run multiple times safely
- [ ] Test spending credits works for all user scenarios
- [ ] Verify payment fulfillment (`_enable_transaction`) works correctly
- [ ] Add comprehensive test coverage for this scenario
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Problem
High QPS failures on `spend_credits` operations due to lock contention
from `pg_advisory_xact_lock` causing serialization and seconds of wait
time.
## Solution
Replace PostgreSQL advisory locks with atomic database operations using
CTEs (Common Table Expressions).
### Key Changes
- **Add persistent balance column** to User table for O(1) balance
lookups
- **Atomic CTE-based operations** for all credit transactions using
UPDATE...RETURNING pattern
- **Comprehensive concurrency tests** with 7 test scenarios including
stress testing
- **Remove all advisory lock usage** from the credit system
### Implementation Details
1. **Migration**: Adds balance column with backfill from transaction
history
2. **Atomic Operations**: All credit operations now use single atomic
CTEs that update balance and create transaction in one query
3. **Race Condition Prevention**: WHERE clauses in UPDATE statements
ensure balance never goes negative
4. **BetaUserCredit Compatibility**: Preserved monthly refill logic with
updated `_add_transaction` signature
### Performance Impact
- ✅ Eliminated lock contention bottlenecks
- ✅ O(1) balance lookups instead of O(n) transaction aggregation
- ✅ Atomic operations prevent race conditions without locks
- ✅ Supports high QPS without serialization delays
### Testing
- All existing tests pass
- New concurrency test suite (`credit_concurrency_test.py`) with:
- Concurrent spends from same user
- Insufficient balance handling
- Mixed operations (spends, top-ups, balance checks)
- Race condition prevention
- Integer overflow protection
- Stress testing with 100 concurrent operations
### Breaking Changes
None - all existing APIs maintain compatibility
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Enhanced top‑up flows with top‑up types, clearer credit→dollar
formatting, and idempotent onboarding rewards.
* **Bug Fixes**
* Fixed race conditions for concurrent spends/top‑ups, added
integer‑overflow and underflow protection, stronger input validation,
and improved refund/dispute handling.
* **Refactor**
* Persisted per‑user balance with atomic updates for reliable balances;
admin history now prefetches balances.
* **Tests**
* Added extensive concurrency, refund, ceiling/underflow and migration
test suites.
* **Chores**
* Database migration to add persisted user balance; APIKey status
extended (SUSPENDED).
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
## Summary
Fixes a critical serialization bug introduced in PR #11187 where
`SafeJson` failed to serialize dictionaries containing Pydantic models,
causing 500 Internal Server Errors in the executor service.
## Problem
The error manifested as:
```
CRITICAL: Operation Approaching Failure Threshold: Service communication: '_call_method_async'
Current attempt: 50/50
Error: HTTPServerError: HTTP 500: Server error '500 Internal Server Error'
for url 'http://autogpt-database-manager.prod-agpt.svc.cluster.local:8005/create_graph_execution'
```
Root cause in `create_graph_execution`
(backend/data/execution.py:656-657):
```python
"credentialInputs": SafeJson(credential_inputs) if credential_inputs else Json({})
```
Where `credential_inputs: Mapping[str, CredentialsMetaInput]` is a dict
containing Pydantic models.
After PR #11187's refactor, `_sanitize_value()` only converted top-level
BaseModel instances to dicts, but didn't handle BaseModel instances
nested inside dicts/lists/tuples. This caused Prisma's JSON serializer
to fail with:
```
TypeError: Type <class 'backend.data.model.CredentialsMetaInput'> not serializable
```
## Solution
Added BaseModel handling to `_sanitize_value()` to recursively convert
Pydantic models to dicts before sanitizing:
```python
elif isinstance(value, BaseModel):
# Convert Pydantic models to dict and recursively sanitize
return _sanitize_value(value.model_dump(exclude_none=True))
```
This ensures all nested Pydantic models are properly serialized
regardless of nesting depth.
## Changes
- **backend/util/json.py**: Added BaseModel check to `_sanitize_value()`
function
- **backend/util/test_json.py**: Added 6 comprehensive tests covering:
- Dict containing Pydantic models
- Deeply nested Pydantic models
- Lists of Pydantic models in dicts
- The exact CredentialsMetaInput scenario
- Complex mixed structures
- Models with control characters
## Testing
✅ All new tests pass
✅ Verified fix resolves the production 500 error
✅ Code formatted with `poetry run format`
## Related
- Fixes issues introduced in PR #11187
- Related to executor service 500 errors in production
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Problem
When running multiple backend pods in production, requests can be routed
to different pods causing inconsistent cache states. Additionally, the
current cache implementation in `autogpt_libs` doesn't support shared
caching across processes, leading to data inconsistency and redundant
cache misses.
### Changes 🏗️
- **Moved cache implementation from autogpt_libs to backend**
(`/backend/backend/util/cache.py`)
- Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
- Centralized cache utilities within the backend module
- Updated all import statements across the codebase
- **Implemented Redis-based shared caching**
- Added `shared_cache` parameter to `@cached` decorator for
cross-process caching
- Implemented Redis connection pooling for efficient cache operations
- Added support for cache key pattern matching and bulk deletion
- Added TTL refresh on cache access with `refresh_ttl_on_get` option
- **Enhanced cache functionality**
- Added thundering herd protection with double-checked locking
- Implemented thread-local caching with `@thread_cached` decorator
- Added cache management methods: `cache_clear()`, `cache_info()`,
`cache_delete()`
- Added support for both sync and async functions
- **Updated store caching** (`/backend/server/v2/store/cache.py`)
- Enabled shared caching for all store-related cache functions
- Set appropriate TTL values (5-15 minutes) for different cache types
- Added `clear_all_caches()` function for cache invalidation
- **Added Redis configuration**
- Added Redis connection settings to backend settings
- Configured dedicated connection pool for cache operations
- Set up binary mode for pickle serialization
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify Redis connection and cache operations work correctly
- [x] Test shared cache across multiple backend instances
- [x] Verify cache invalidation with `clear_all_caches()`
- [x] Run cache tests: `poetry run pytest
backend/backend/util/cache_test.py`
- [x] Test thundering herd protection under concurrent load
- [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
- [x] Test thread-local caching for request-scoped data
- [x] Ensure no performance regression vs in-memory cache
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Redis cache configuration uses existing Redis service settings
(REDIS_HOST, REDIS_PORT, REDIS_PASSWORD)
- No new environment variables required
## Summary
Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.
- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function
## Key Changes
### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now
takes a `user_id` parameter and uses LaunchDarkly to determine which
credit model to return:
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system
- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation
### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function
## Deployment Strategy
This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated
## Test Plan
- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes the `Invalid \escape` error occurring in
`/upsert_execution_output` endpoint by completely rewriting the SafeJson
implementation.
## Problem
- Error: `POST /upsert_execution_output failed: Invalid \escape: line 1
column 36404 (char 36403)`
- Caused by data containing literal backslash-u sequences (e.g.,
`\u0000` as text, not actual null characters)
- Previous implementation tried to remove problematic escape sequences
from JSON strings
- This created invalid JSON when it removed `\\u0000` and left invalid
sequences like `\w`
## Solution
Completely rewrote SafeJson to work on Python data structures instead of
JSON strings:
1. **Direct data sanitization**: Recursively walks through dicts, lists,
and tuples to remove control characters directly from strings
2. **No JSON string manipulation**: Avoids all escape sequence parsing
issues
3. **More efficient**: Eliminates the serialize → sanitize → deserialize
cycle
4. **Preserves valid content**: Backslashes, paths, and literal text are
correctly preserved
## Changes
- Removed `POSTGRES_JSON_ESCAPES` regex (no longer needed)
- Added `_sanitize_value()` helper function for recursive sanitization
- Simplified `SafeJson()` to convert Pydantic models and sanitize data
structures
- Added `import json # noqa: F401` for backwards compatibility
## Testing
- ✅ Verified fix resolves the `Invalid \escape` error
- ✅ All existing SafeJson unit tests pass
- ✅ Problematic data with literal escape sequences no longer causes
errors
- ✅ Code formatted with `poetry run format`
## Technical Details
**Before (JSON string approach):**
```python
# Serialize to JSON string
json_string = dumps(data)
# Remove escape sequences from string (BREAKS!)
sanitized = regex.sub("", json_string)
# Parse back (FAILS with Invalid \escape)
return Json(json.loads(sanitized))
```
**After (data structure approach):**
```python
# Convert Pydantic to dict
data = model.model_dump() if isinstance(data, BaseModel) else data
# Recursively sanitize strings in data structure
sanitized = _sanitize_value(data)
# Return as Json (no parsing needed)
return Json(sanitized)
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Currently, we don’t add category and cost information to custom nodes in
the new builder. This means we’re rendering with the correct information
and costs are displayed accurately based on the selected discriminator
value.
<img width="441" height="781" alt="Screenshot 2025-10-15 at 2 43 33 PM"
src="https://github.com/user-attachments/assets/8199cfa7-4353-4de2-8c15-b68aa86e458c"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All information is displayed correctly.
- [x] I’ve tried changing the discrimination value and we’re getting the
correct cost for the selected value.
### Changes 🏗️
- **Added Claude Haiku 4.5 model support** (`claude-haiku-4-5-20251001`)
- Added model to `LlmModel` enum in
`autogpt_platform/backend/backend/blocks/llm.py`
- Configured model metadata with 200k context window and 64k max output
tokens
- Set pricing to 4 credits per million tokens in
`backend/data/block_cost_config.py`
- **Classic Forge Integration**
- Added `CLAUDE4_5_HAIKU_v1` to Anthropic provider in
`classic/forge/forge/llm/providers/anthropic.py`
- Configured with $1/1M prompt tokens and $5/1M completion tokens
pricing
- Enabled function call API support
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
- [x] Verify Claude Haiku 4.5 model appears in the LLM block model
selection dropdown
- [x] Test basic text generation using Claude Haiku 4.5 in an agent
workflow
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration changes</summary>
- No environment variable changes required
- No docker-compose changes needed
- Model configuration is handled through existing Anthropic API
integration
</details>
https://github.com/user-attachments/assets/bbc42c47-0e7c-4772-852e-55aa91f4d253
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
## Summary
Move DatabaseError from store-specific exceptions to generic backend
exceptions for proper layer separation, while also fixing store
exception inheritance to ensure proper HTTP status codes.
## Problem
1. **Poor Layer Separation**: DatabaseError was defined in
store-specific exceptions but represents infrastructure concerns that
affect the entire backend
2. **Incorrect HTTP Status Codes**: Store exceptions inherited from
Exception instead of ValueError, causing 500 responses for client errors
3. **Reusability Issues**: Other backend modules couldn't use
DatabaseError for DB operations
4. **Blanket Catch Issues**: Store-specific catches were affecting
generic database operations
## Solution
### Move DatabaseError to Generic Location
- Move from backend.server.v2.store.exceptions to
backend.util.exceptions
- Update all 23 references in backend/server/v2/store/db.py to use new
location
- Remove from StoreError inheritance hierarchy
### Fix Complete Store Exception Hierarchy
- MediaUploadError: Changed from Exception to ValueError inheritance
(client errors → 400)
- StoreError: Changed from Exception to ValueError inheritance (business
logic errors → 400)
- Store NotFound exceptions: Changed to inherit from NotFoundError (→
404)
- DatabaseError: Now properly inherits from Exception (infrastructure
errors → 500)
## Benefits
### ✅ Proper Layer Separation
- Database errors are infrastructure concerns, not store-specific
business logic
- Store exceptions focus on business validation and client errors
- Clean separation between infrastructure and business logic layers
### ✅ Correct HTTP Status Codes
- DatabaseError: 500 (server infrastructure errors)
- Store NotFound errors: 404 (via existing NotFoundError handler)
- Store validation errors: 400 (via existing ValueError handler)
- Media upload errors: 400 (client validation errors)
### ✅ Architectural Improvements
- DatabaseError now reusable across entire backend
- Eliminates blanket catch issues affecting generic DB operations
- All store exceptions use global exception handlers properly
- Future store exceptions automatically get proper status codes
## Files Changed
- **backend/util/exceptions.py**: Add DatabaseError class
- **backend/server/v2/store/exceptions.py**: Remove DatabaseError, fix
inheritance hierarchy
- **backend/server/v2/store/db.py**: Update all DatabaseError references
to new location
## Result
- ✅ **No more stack trace spam**: Expected business logic errors handled
properly
- ✅ **Proper HTTP semantics**: 500 for infrastructure, 400/404 for
client errors
- ✅ **Better architecture**: Clean layer separation and reusable
components
- ✅ **Fixes original issue**: AgentNotFoundError now returns 404 instead
of 500
This addresses the logging issue mentioned by @zamilmajdy while also
implementing the architectural improvements suggested by @Pwuts.
This PR introduces saving functionality to the new builder interface,
allowing users to save and update agent flows. The implementation
includes both UI components and backend integration for persistent
storage of agent configurations.
https://github.com/user-attachments/assets/95ee46de-2373-4484-9f34-5f09aa071c5e
### Key Features Added:
#### 1. **Save Control Component** (`NewSaveControl`)
- Added a new save control popover in the control panel with form inputs
for agent name, description, and version display
- Integrated with the new control panel as a primary action button with
a floppy disk icon
- Supports keyboard shortcuts (Ctrl+S / Cmd+S) for quick saving
#### 2. **Graph Persistence Logic**
- Implemented `useNewSaveControl` hook to handle:
- Creating new graphs via `usePostV1CreateNewGraph`
- Updating existing graphs via `usePutV1UpdateGraphVersion`
- Intelligent comparison to prevent unnecessary saves when no changes
are made
- URL parameter management for flowID and flowVersion tracking
#### 3. **Loading State Management**
- Added `GraphLoadingBox` component to display a loading indicator while
graphs are being fetched
- Enhanced `useFlow` hook with loading state tracking
(`isFlowContentLoading`)
- Improved UX with clear visual feedback during graph operations
#### 4. **Component Reorganization**
- Refactored components from `NewBlockMenu` to `NewControlPanel`
directory structure for better organization:
- Moved all block menu related components under
`NewControlPanel/NewBlockMenu/`
- Separated save control into its own module
(`NewControlPanel/NewSaveControl/`)
- Improved modularity and separation of concerns
#### 5. **State Management Enhancements**
- Added `controlPanelStore` for managing control panel states (e.g.,
save popover visibility)
- Enhanced `nodeStore` with `getBackendNodes()` method for retrieving
nodes in backend format
- Added `getBackendLinks()` to `edgeStore` for consistent link
formatting
### Technical Improvements:
- **Graph Comparison Logic**: Implemented `graphsEquivalent()` helper to
deeply compare saved and current graph states, preventing redundant
saves
- **Form Validation**: Used Zod schema validation for save form inputs
with proper constraints
- **Error Handling**: Comprehensive error handling with user-friendly
toast notifications
- **Query Invalidation**: Proper cache invalidation after successful
saves to ensure data consistency
### UI/UX Enhancements:
- Clean, modern save dialog with clear labeling and placeholder text
- Real-time version display showing the current graph version
- Disabled state for save button during operations to prevent double
submissions
- Toast notifications for success and error states
- Higher z-index for GraphLoadingBox to ensure visibility over other
elements
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving is working perfectly. All nodes, links, their positions,
and hardcoded data are saved correctly.
- [x] If there are no changes, the user cannot save the graph.
## Summary
Fix store exception hierarchy to prevent ERROR level stack trace spam
for expected business logic errors and ensure proper HTTP status codes.
## Problem
The original error from production logs showed AgentNotFoundError for
non-existent agents like autogpt/domain-drop-catcher was:
- Returning 500 status codes instead of 404
- Generating ERROR level stack traces in logs for expected not found
scenarios
- Bypassing global exception handlers due to improper inheritance
## Root Cause
Store exceptions inherited from Exception instead of ValueError, causing
them to bypass the global ValueError handler (400) and fall through to
the generic Exception handler (500) with full stack traces.
## Solution
Create proper exception hierarchy for ALL store-related errors by
making:
- MediaUploadError inherit from ValueError instead of Exception
- StoreError inherit from ValueError instead of Exception
- Store NotFound exceptions inherit from NotFoundError (which inherits
from ValueError)
## Changes Made
1. MediaUploadError: Changed from Exception to ValueError inheritance
2. StoreError: Changed from Exception to ValueError inheritance
3. Store NotFound exceptions: Changed to inherit from NotFoundError
## Benefits
- Correct HTTP status codes: Not found errors return 404, validation
errors return 400
- No more 500 stack trace spam for expected business logic errors
- Clean consistent error handling using existing global handlers
- Future-proof: Any new store exceptions automatically get proper status
codes
## Result
- AgentNotFoundError for autogpt/domain-drop-catcher now returns 404
instead of 500
- InvalidFileTypeError, VirusDetectedError, etc. now return 400 instead
of 500
- No ERROR level stack traces for expected client errors
- Proper HTTP semantics throughout the store API
## Summary
Fix critical SafeJson function to properly sanitize JSON-encoded Unicode
escape sequences that were causing PostgreSQL 22P05 errors when null
characters in web scraped content were stored as "\u0000" in the
database.
## Root Cause Analysis
The existing SafeJson function in backend/util/json.py:
1. Only removed raw control characters (\x00-\x08, etc.) using
POSTGRES_CONTROL_CHARS regex
2. Failed to handle JSON-encoded Unicode escape sequences (\u0000,
\u0001, etc.)
3. When web scraping returned content with null bytes, these were
JSON-encoded as "\u0000" strings
4. PostgreSQL rejected these Unicode escape sequences, causing 22P05
errors
## Changes Made
### Enhanced SafeJson Function (backend/util/json.py:147-153)
- **Add POSTGRES_JSON_ESCAPES regex**: Comprehensive pattern targeting
all PostgreSQL-incompatible Unicode and single-char JSON escape
sequences
- **Unicode escapes**: \u0000-\u0008, \u000B-\u000C, \u000E-\u001F,
\u007F (preserves \u0009=tab, \u000A=newline, \u000D=carriage return)
- **Single-char escapes**: \b (backspace), \f (form feed) with negative
lookbehind/lookahead to preserve file paths like "C:\\file.txt"
- **Two-pass sanitization**: Remove JSON escape sequences first, then
raw characters as fallback
### Comprehensive Test Coverage (backend/util/test_json.py:219-414)
Added 11 new test methods covering:
- **Control character sanitization**: Verify dangerous characters (\x00,
\x07, \x0C, etc.) are removed while preserving safe whitespace (\t, \n,
\r)
- **Web scraping content**: Simulate SearchTheWebBlock scenarios with
null bytes and control characters
- **Code preservation**: Ensure legitimate file paths, JSON strings,
regex patterns, and programming code are preserved
- **Unicode escape handling**: Test both \u0000-style and \b/\f-style
escape sequences
- **Edge case protection**: Prevent over-matching of legitimate
sequences like "mybfile.txt" or "\\u0040"
- **Mixed content scenarios**: Verify only problematic sequences are
removed while preserving legitimate content
## Validation Results
- ✅ All 24 SafeJson tests pass including 11 new comprehensive
sanitization tests
- ✅ Control characters properly removed: \x00, \x01, \x08, \x0C, \x1F,
\x7F
- ✅ Safe characters preserved: \t (tab), \n (newline), \r (carriage
return)
- ✅ File paths preserved: "C:\\Users\\file.txt", "\\\\server\\share"
- ✅ Programming code preserved: regex patterns, JSON strings, SQL
escapes
- ✅ Unicode escapes sanitized: \u0000 → removed, \u0048 ("H") →
preserved if valid
- ✅ No false positives: Legitimate sequences not accidentally removed
- ✅ poetry run format succeeds without errors
## Impact
- **Prevents PostgreSQL 22P05 errors**: No more null character database
rejections from web scraping
- **Maintains data integrity**: Legitimate content preserved while
dangerous characters removed
- **Comprehensive protection**: Handles both raw bytes and JSON-encoded
escape sequences
- **Web scraping reliability**: SearchTheWebBlock and similar blocks now
store content safely
- **Backward compatibility**: Existing SafeJson behavior unchanged for
legitimate content
## Test Plan
- [x] All existing SafeJson tests pass (24/24)
- [x] New comprehensive sanitization tests pass (11/11)
- [x] Control character removal verified
- [x] Legitimate content preservation verified
- [x] Web scraping scenarios tested
- [x] Code formatting and type checking passes
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
The `dictionary` input on the Add to Dictionary block is hidden, even
though it is the main input.
### Changes 🏗️
- Mark `dictionary` explicitly as not advanced (so not hidden by
default)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed
Integrates Sentry SDK to set user and contextual tags during node
execution for improved error tracking and user count analytics. Ensures
Sentry context is properly set and restored, and exceptions are captured
with relevant context before scope restoration.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Adds sentry tracking to block failures
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test to make sure the userid and block details show up in Sentry
- [x] make sure other errors aren't contaminated
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- Added conditional support for feature flags when configured, enabling
targeted rollouts and experiments without impacting unconfigured
environments.
- Chores
- Enhanced error monitoring with richer contextual data during node
execution to improve stability and diagnostics.
- Updated metrics initialization to dynamically include feature flag
integrations when available, without altering behavior for unconfigured
setups.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Since #10323, one-time graph inputs are no longer stored on the input
nodes (#9139), so we can reasonably assume that the default value set by
the graph creator will be safe to export.
### Changes 🏗️
- Don't strip the default value from input nodes in
`NodeModel.stripped_for_export(..)`, except for inputs marked as
`secret`
- Update and expand tests for graph export secrets stripping mechanism
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Expanded tests pass
- Relatively simple change with good test coverage, no manual test
needed
## Problem
The `SendDiscordMessageBlock` only accepted channel names, while other
Discord blocks like `SendDiscordFileBlock` and `SendDiscordEmbedBlock`
accept both channel IDs and channel names. This inconsistency made it
difficult to use channel IDs with the message sending block, which is
often more reliable and direct than name-based lookup.
## Solution
Updated `SendDiscordMessageBlock` to accept both channel IDs and channel
names through the `channel_name` field, matching the implementation
pattern used in other Discord blocks.
### Changes Made
1. **Enhanced channel resolution logic** to try parsing the input as a
channel ID first, then fall back to name-based search:
```python
# Try to parse as channel ID first
try:
channel_id = int(channel_name)
channel = client.get_channel(channel_id)
except ValueError:
# Not an ID, treat as channel name
# ... search guilds for matching channel name
```
2. **Updated field descriptions** to clarify the dual functionality:
- `channel_name`: Now describes that it accepts "Channel ID or channel
name"
- `server_name`: Clarified as "only needed if using channel name"
3. **Added type checking** to ensure the resolved channel can send
messages before attempting to send
4. **Updated documentation** to reflect the new capability
## Backward Compatibility
✅ **Fully backward compatible**: The field name remains `channel_name`
(not renamed), and all existing workflows using channel names will
continue to work exactly as before.
✅ **New capability**: Users can now also provide channel IDs (e.g.,
`"123456789012345678"`) for more direct channel targeting.
## Testing
- All existing tests pass, including `SendDiscordMessageBlock` and all
other Discord block tests
- Implementation verified to match the pattern used in
`SendDiscordFileBlock` and `SendDiscordEmbedBlock`
- Code passes all linting, formatting, and type checking
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10909
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
> Issue Title: SendDiscordMessage needs to take a channel id as an
option under channelname the same as the other discord blocks
> Issue Description: with how we can process the other discord blocks we
should do the same here with the identifiers being allowed to be a
channel name or id. we can't rename the field though or that will break
backwards compatibility
> Fixes
https://linear.app/autogpt/issue/OPEN-2701/senddiscordmessage-needs-to-take-a-channel-id-as-an-option-under
>
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User 055a3053-5ab6-449a-bcfa-990768594185:
> the ones with boxes around them need confirmed for lables but yeah its
related but not dupe
>
> Comment by User 264d7bf4-db2a-46fa-a880-7d67b58679e6:
> this might be a duplicate since there is a related ticket but not sure
>
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10909).
All replies are displayed in both locations.
>
>
</details>
<!-- START COPILOT CODING AGENT TIPS -->
---
💬 Share your feedback on Copilot coding agent for the chance to win a
$200 gift card! Click
[here](https://survey3.medallia.com/?EAHeSx-AP01bZqG0Ld9QLQ) to start
the survey.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* New Features
* Send Discord Message block now accepts a channel ID in addition to
channel name.
* Server name is only required when using a channel name.
* Improved channel detection and validation with clearer errors if the
channel isn’t found.
* Documentation
* Updated block documentation to reflect support for channel ID or name
and clarify when server name is needed.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bently <Github@bentlybro.com>
Closes#11163
## Summary
Expanded the Fact Checker block to yield its references list from the
Jina AI API response.
## Changes 🏗️
- Added `Reference` TypedDict to define the structure of reference
objects
- Added `references` field to the Output schema
- Modified the `run` method to extract and yield references from the API
response
- Added fallback to empty list if references are not present
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the Fact Checker block schema includes the new
references field
- [x] Confirmed that references are properly extracted from the API
response when present
- [x] Tested the fallback behavior when references are not in the API
response
- [x] Ensured backward compatibility - existing functionality remains
unchanged
- [x] Verified the Reference TypedDict matches the expected API response
structure
Generated with [Claude Code](https://claude.ai/code)
## Summary by CodeRabbit
* **New Features**
* Fact-checking results now include a references list to support
verification.
* Each reference provides a URL, a key quote, and an indicator showing
whether it supports the claim.
* References are presented alongside factuality, result, and reasoning
when available; otherwise, an empty list is returned.
* Enhances transparency and traceability of fact-check outcomes without
altering existing result fields.
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
### Changes 🏗️
<img width="672" height="761" alt="Screenshot 2025-10-14 at 16 12 50"
src="https://github.com/user-attachments/assets/9e664ade-10fe-4c09-af10-b26d10dca360"
/>
Fixes
[BUILDER-3YG](https://sentry.io/organizations/significant-gravitas/issues/6942679655/).
The issue was that: User uploaded an incompatible external agent persona
file lacking required flow graph keys (`nodes`, `links`).
- Improves error handling when an invalid agent file is uploaded.
- Provides a more user-friendly error message indicating the file must
be a valid agent.json file exported from the AutoGPT platform.
- Clears the invalid file from the form and resets the agent object to
null.
This fix was generated by Seer in Sentry, triggered by Toran Bruce
Richards. 👁️ Run ID: 1943626
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6942679655/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test that uploading an invalid agent file (e.g., missing `nodes`
or `links`) triggers the improved error handling and displays the
user-friendly error message.
- [x] Verify that the invalid file is cleared from the form after the
error, and the agent object is reset to null.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
We’re currently facing two problems with credentials:
1. When we change the discriminator input value, the form data
credential field should be cleaned up completely.
2. When I select a different discriminator and if that provider has a
value, it should select the latest one.
So, in this PR, I’ve encountered both issues.
### Changes 🏗️
- Updated CredentialField to utilize a new setCredential function for
managing selected credentials.
- Implemented logic to auto-select the latest credential when none is
selected and clear the credential if the provider changes.
- Improved SelectCredential component with a defaultValue prop and
adjusted styling for better UI consistency.
- Removed unnecessary console logs from helper functions to clean up the
code.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Credential selection works perfectly with both the discriminator
and normal addition.
- [x] Auto-select credential is also working.
Fixes#11162
## Summary
Implements a new Perplexity block that allows users to query
Perplexity's sonar models via OpenRouter with support for extracting URL
citations and annotations.
## Changes
- Add new block for Perplexity sonar models (sonar, sonar-pro,
sonar-deep-research)
- Support model selection for all three Perplexity models
- Implement annotations output pin for URL citations and source
references
- Integrate with OpenRouter API for accessing Perplexity models
- Follow existing block patterns from AI text generator block
## Test Plan
✅ Block successfully instantiates
✅ Block is properly loaded by the dynamic loading system
✅ Output fields include response and annotations as required
Generated with [Claude Code](https://claude.ai/code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- Added a Perplexity integration block to query Sonar models via
OpenRouter.
- Supports multiple model options, optional system prompt, and
adjustable max tokens.
- Returns concise responses with citation-style annotations extracted
from the model output.
- Provides clear error messages when requests fail.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Summary
- Changed max_concurrent_graph_executions_per_user from 50 to 25
concurrent executions
- Updated the limit to be per user per graph instead of globally per
user
- Users can now run different graphs concurrently without being limited
by executions of other graphs
- Enhanced database query to filter by both user_id and graph_id
## Changes Made
- **Settings**: Reduced default limit from 50 to 25 and updated
description to clarify per-graph scope
- **Database Layer**: Modified `get_graph_executions_count` to accept
optional `graph_id` parameter
- **Executor Manager**: Updated rate limiting logic to check
per-user-per-graph instead of per-user globally
- **Logging**: Enhanced warning messages to include graph_id context
## Test plan
- [ ] Verify that users can run up to 25 concurrent executions of the
same graph
- [ ] Verify that users can run different graphs concurrently without
interference
- [ ] Test rate limiting behavior when limit is exceeded for a specific
graph
- [ ] Confirm logging shows correct graph_id context in rate limit
messages
## Impact
This change improves the user experience by allowing concurrent
execution of different graphs while still preventing resource exhaustion
from running too many instances of the same graph.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
<!-- Clearly explain the need for these changes: -->
This PR prevents users from creating multiple store submissions with the
same slug, which could lead to confusion and potential conflicts in the
marketplace. Each user's submissions should have unique slugs to ensure
proper identification and navigation.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Backend**: Added validation to check for existing slugs before
creating new store submissions in `backend/server/v2/store/db.py`
- **Backend**: Introduced new `SlugAlreadyInUseError` exception in
`backend/server/v2/store/exceptions.py` for clearer error handling
- **Backend**: Updated store routes to return HTTP 409 Conflict when
slug is already in use in `backend/server/v2/store/routes.py`
- **Backend**: Cleaned up test file in
`backend/server/v2/store/db_test.py`
- **Frontend**: Enhanced error handling in the publish agent modal to
display specific error messages to users in
`frontend/src/components/contextual/PublishAgentModal/components/AgentInfoStep/useAgentInfoStep.ts`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Add a store submission with a specific slug
- [x] Attempt to add another store submission with the same slug for the
same user - Verify a 409 conflict error is returned with appropriate
error message
- [x] Add a store submission with the same slug, but for a different
user - Verify the submission is successful
- [x] Verify frontend displays the specific error message when slug
conflict occurs
- [x] Existing tests pass without modification
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11159
Currently, we’re not throwing errors for client-side requests in the
custom mutator. This way, we’re ignoring the client-side request error.
If we do encounter an error, we send it as a normal response object.
That’s why our onError callback on React Query mutation and hasError
isn’t working in the query. To fix this, in this PR, we’re throwing the
client-side error.
### Changes 🏗️
- We’re throwing errors for both server-side and client-side requests.
- Why server-side? So the client cache isn’t hydrated with the error.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All end-to-end functionality is working properly.
- [x] I’ve manually checked all the pages and they are all functioning
correctly.
When a user clicks the “Become a Creator” button on the marketplace
page, we send an unauthorised request to the server to get a list of
agents. In this PR, I’ve fixed this by checking if the user is logged
in. If they’re not, I’ll show them a UI to log in or create an account.
<img width="967" height="605" alt="Screenshot 2025-10-14 at 12 04 52 PM"
src="https://github.com/user-attachments/assets/95079d9c-e6ef-4d75-9422-ce4fb138e584"
/>
### Changes
- Modify the publish agent test to detect the correct text even when the
user is logged out.
- Use Supabase helpers to check if the user is logged in. If not,
display the appropriate UI.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The login UI is correctly displayed when the user is logged out.
- [x] The login UI is also correctly displayed when the user is logged
in.
- [x] The login and signup buttons are working perfectly.
## Changes 🏗️
<img width="800" height="664" alt="Screenshot 2025-10-14 at 14 09 54"
src="https://github.com/user-attachments/assets/73f6277a-6bef-40f9-b208-31aba0cfc69f"
/>
<img width="600" height="773" alt="Screenshot 2025-10-14 at 14 10 05"
src="https://github.com/user-attachments/assets/c88cb22f-1597-4216-9688-09c19030df89"
/>
Allow to manage on the fly which search terms appear on the Marketplace
page via Launch Darkly dashboard. There is a new flag configured there:
`marketplace-search-terms`:
- **enabled** → `["Foo", "Bar"]` → the terms that will appear
- **disabled** → `[ "Marketing", "SEO", "Content Creation",
"Automation", "Fun"]` → the default ones show
### Small fix
Fix the following browser console warning about `onLoadingComplete`
being deprecated...
<img width="600" height="231" alt="Screenshot 2025-10-14 at 13 55 45"
src="https://github.com/user-attachments/assets/1b26e228-0902-4554-9f8c-4839f8d4ed83"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran the flag locally and verified it shows the terms set on Launch
Darkly
### For configuration changes:
Launch Darkly new flag needs to be configured on all environments.
Some agents aren't suitable for onboarding. This adds per-store agent
setting to allow them for onboarding. In case no agent is allowed
fallback to the former search.
### Changes 🏗️
- Add `useForOnboarding` to `StoreListing` model and `StoreAgent` view
(with migration)
- Remove filtering of agents with empty input schema or credentials
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Only allowed agents are displayed
- [x] Fallback to the old system in case there aren't enough allowed
agents
There is concern that the write load on the database may derail the
performance optimisations.
This hotfix comments out the code that adds the search terms to the db,
so we can discuss how best to do this in a way that won't bring down the
db.
### Changes 🏗️
- commented out the code to log store terms to the db
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] check search still works in dev
## Changes 🏗️
<img width="800" height="852" alt="Screenshot_2025-10-13_at_19 20 47"
src="https://github.com/user-attachments/assets/2fc150b9-1053-4e25-9018-24bcc2d93b43"
/>
<img width="800" height="669" alt="Screenshot 2025-10-13 at 19 23 41"
src="https://github.com/user-attachments/assets/9078b04e-0f65-42f3-ac4a-d2f3daa91215"
/>
- Onboarding “Run” step now renders required credentials (e.g., Google
OAuth) and includes them in execution.
- Run button remains disabled until required inputs and credentials are
provided.
- Logic extracted and strongly typed; removed any usage.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan ( _once merged
in dev..._ )
- [ ] Select an onboarding agent that requires Google OAuth:
- [ ] Credentials selector appears.
- [ ] After selecting/signing in, “Run agent” enables.
- [ ]Run succeeds and navigates to the next step.
### For configuration changes:
None
## Changes 🏗️
I found that if I logged out while an agent was running, sometimes
Webscokets would keep open connections but fail to connect ( given there
is no token anymore ) and cause strange behavior down the line on the
login screen.
Two root causes behind after inspecting the browser logs 🧐
- WebSocket connections were attempted with an empty token right after
logout, yielding `wss://.../ws?token=` and repeated `1006/connection`
refused loops.
- During logout, sockets in `CONNECTING` state weren’t being closed, so
the browser kept trying to finish the handshake and were reattempted
shortly after failing
Trying to fix this like:
- Guard `connectWebSocket()` to no-op if a logout/disconnect intent is
set, and to skip connecting when no token is available.
- Treat `CONNECTING` sockets as closeable in `disconnectWebSocket()` and
clear `wsConnecting` to avoid stale pending Promises
- Left existing heartbeat/reconnect logic intact, but it now won’t run
when we’re logging out or when we can’t get a token.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login and run an agent that takes long to run
- [x] Logout
- [x] Check the browser console and you don't see any socket errors
- [x] The login screen behaves ok
### For configuration changes:
Noop
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
and https://github.com/Significant-Gravitas/AutoGPT/pull/11122
In this PR, I’ve added support for discrimination. Now, users can choose
a credential type based on other input values.
https://github.com/user-attachments/assets/6cedc59b-ec84-4ae2-bb06-59d891916847
### Changes 🏗️
- Updated CredentialsField to utilize credentialProvider from schema.
- Refactored helper functions to filter credentials based on the
selected provider.
- Modified APIKeyCredentialsModal and PasswordCredentialsModal to accept
provider as a prop.
- Improved FieldTemplate to dynamically display the correct credential
provider.
- Added getCredentialProviderFromSchema function to manage
multi-provider scenarios.
### Checklist 📋
#### For code changes:
- [x] Credential input is correctly updating based on other input
values.
- [x] Credential can be added correctly.
### Problem
Limits caching to just the main marketplace routes
### Changes 🏗️
- **Simplified store cache implementation** in
`backend/server/v2/store/cache.py`
- Streamlined caching logic for better maintainability
- Reduced complexity while maintaining performance
- **Added cache invalidation on store updates**
- Implemented cache clearing when new agents are added to the store
- Added invalidation logic in admin store routes
(`admin_store_routes.py`)
- Ensures all pods reflect the latest store state after modifications
- **Updated store database operations** in
`backend/server/v2/store/db.py`
- Modified to work with the new cache structure
- **Added cache deletion tests** (`test_cache_delete.py`)
- Validates cache invalidation works correctly
- Ensures cache consistency across operations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify store listings are cached correctly
- [x] Upload a new agent to the store and confirm cache is invalidated
<!-- Clearly explain the need for these changes: -->
Fixes
[AUTOGPT-SERVER-5K6](https://sentry.io/organizations/significant-gravitas/issues/6887660207/).
The issue was that: Batch sending fails due to malformed data (422) and
inactive recipients (406); the 406 error is misclassified as a size
limit failure.
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 1675950
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6887660207/?seerDrawer=true)
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
- Also disables this in prod to prevent scaling issues until we work out
some of the more critical issues
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test sending notifications with invalid email addresses to ensure
406 errors are handled correctly.
- [x] Test sending notifications with malformed data to ensure 422
errors are handled correctly.
- [x] Test sending oversized notifications to ensure they are skipped
and logged correctly.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- None
- Bug Fixes
- Individual email failures no longer abort a batch; processing
continues after per-recipient errors.
- Specific handling for inactive recipients and malformed messages to
prevent repeated delivery attempts.
- Chores
- Improved error logging and diagnostics for email delivery scenarios.
- Tests
- Added tests covering email-sending error cases, user-deactivation on
inactive addresses, and batch-continuation behavior.
- Documentation
- None
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
In this PR, I’ve added functionality to fetch a graph based on the
flowID and flowVersion provided in the URL. Once the graph is fetched,
we add the nodes and links using the graph data in a new builder.
<img width="1512" height="982" alt="Screenshot 2025-10-11 at 10 26
07 AM"
src="https://github.com/user-attachments/assets/2f66eb52-77b2-424c-86db-559ea201b44d"
/>
### Changes
- Added `get_specific_blocks` route in `routes.py`.
- Created `get_block_by_id` function in `db.py`.
- Add a new hook `useFlow.ts` to load the graph and populate it in the
flow editor.
- Updated frontend components to reflect changes in block handling.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to load the graph correctly.
- [x] Able to populate it on the builder.
- Resolves#10980
Fixes unnecessary graph re-saving when no changes were made after
initial save. The issue occurred because frontend node IDs weren't
synced with backend IDs after save operations.
### Changes 🏗️
- Update actual node.id to match backend node ID after save
- Update edge references with new node IDs
- Properly sync visual editor state with backend
### Test Plan 📋
- [x] TypeScript compilation passes
- [x] Pre-commit hooks pass
- [x] Manual test: Save graph, verify no re-save needed on subsequent
save/run
## Summary
Add configurable rate limiting to prevent users from exceeding the
maximum number of concurrent graph executions, defaulting to 50 per
user.
## Changes Made
### Configuration (`backend/util/settings.py`)
- Add `max_concurrent_graph_executions_per_user` setting (default: 50,
range: 1-1000)
- Configurable via environment variables or settings file
### Database Query Function (`backend/data/execution.py`)
- Add `get_graph_executions_count()` function for efficient count
queries
- Supports filtering by user_id, statuses, and time ranges
- Used to check current RUNNING/QUEUED executions per user
### Database Manager Integration (`backend/executor/database.py`)
- Expose `get_graph_executions_count` through DatabaseManager RPC
interface
- Follows existing patterns for database operations
- Enables proper service-to-service communication
### Rate Limiting Logic (`backend/executor/manager.py`)
- Inline rate limit check in `_handle_run_message()` before cluster lock
- Use existing `db_client` pattern for consistency
- Reject and requeue executions when limit exceeded
- Graceful error handling - proceed if rate limit check fails
- Enhanced logging with user_id and current/max execution counts
## Technical Implementation
- **Database approach**: Query actual execution statuses for accuracy
- **RPC pattern**: Use DatabaseManager client following existing
codebase patterns
- **Fail-safe design**: Proceed with execution if rate limit check fails
- **Requeue on limit**: Rejected executions are requeued for later
processing
- **Early rejection**: Check rate limit before expensive cluster lock
operations
## Rate Limiting Flow
1. Parse incoming graph execution request
2. Query database via RPC for user's current RUNNING/QUEUED execution
count
3. Compare against configured limit (default: 50)
4. If limit exceeded: reject and requeue message
5. If within limit: proceed with normal execution flow
## Configuration Example
```env
MAX_CONCURRENT_GRAPH_EXECUTIONS_PER_USER=25 # Reduce to 25 for stricter limits
```
## Test plan
- [x] Basic functionality tested - settings load correctly, database
function works
- [x] ExecutionManager imports and initializes without errors
- [x] Database manager exposes the new function through RPC
- [x] Code follows existing patterns and conventions
- [ ] Integration testing with actual rate limiting scenarios
- [ ] Performance testing to ensure minimal impact on execution pipeline
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes duplicate agent listings on the marketplace home page by
updating the StoreAgent view to return only the latest approved version
of each agent.
### Changes 🏗️
- Updated `StoreAgent` database view to filter for only the latest
approved version per listing
- Added CTE (Common Table Expression) `latest_versions` to efficiently
determine the maximum version for each store listing
- Modified the join logic to only include the latest version instead of
all approved versions
- Updated `versions` array field to contain only the single latest
version
**Technical details:**
- The view now uses a `latest_versions` CTE that groups by
`storeListingId` and finds `MAX(version)` for approved submissions
- Join condition ensures only the latest version is included:
`slv.version = lv.latest_version`
- This prevents duplicate entries for agents with multiple approved
versions
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified marketplace home page shows no duplicate agents
- [x] Confirmed only latest version is displayed for agents with
multiple approved versions
- [x] Checked that agent details page still functions correctly
- [x] Validated that run counts and ratings are still accurate
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Changes 🏗️
The Agent Activity Dropdown is now stable, it has been 100% exposed to
users on production for a few weeks, no need to have it behind a flag
anymore.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login to AutoGPT
- [x] The bell on the navbar is always present even if the flag on
Launch Darkly is turned off
### For configuration changes:
None
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
In this PR, I’ve added a way to add a username and password as
credentials on new builder.
https://github.com/user-attachments/assets/b896ea62-6a6d-487c-99a3-727cef4ad9a5
### Changes 🏗️
- Introduced PasswordCredentialsModal to handle user password
credentials.
- Updated useCredentialField to support user password type.
- Refactored APIKeyCredentialsModal to remove unnecessary onSuccess
prop.
- Enhanced the CredentialsField component to conditionally render the
new password modal based on supported credential types.
### Checklist 📋
#### For code changes:
- [x] Ability to add username and password correctly.
- [x] The username and password are visible in the credentials list
after adding it.
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11120
In this PR, I’ve added a search functionality to the new block menu with
pagination.
https://github.com/user-attachments/assets/4c199997-4b5a-43c7-83b6-66abb1feb915
### Changes 🏗️
- Add a frontend for the search list with pagination functionality.
- Updated the search route to use GET method.
- Removed the SearchRequest model and replaced it with individual query
parameters.
### Checklist 📋
#### For code changes:
- [x] The search functionality is working perfectly.
- [x] If the search query doesn’t exist, it correctly displays a “No
Result” UI.
Fixes a issue where sub-agent executions triggered by one user were
visible in the original agent author's execution library.
## Solution
Fixed the user_id attribution in
`autogpt_platform/backend/backend/executor/manager.py` by ensuring that
sub-agent executions always use the actual executor's user_id rather
than the agent author's user_id stored in node defaults.
### Changes
- Added user_id override in `execute_node()` function when preparing
AgentExecutorBlock input (line 194)
- Ensures sub-agent executions are correctly attributed to the user
running them, not the agent author
- Maintains proper privacy isolation between users in marketplace agent
scenarios
### Security Impact
- **Before**: When User B downloaded and ran a marketplace agent
containing sub-agents owned by User A, the sub-agent executions appeared
in User A's library
- **After**: Sub-agent executions now only appear in the library of the
user who actually ran them
- Prevents unauthorized access to execution data and user privacy
violation
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Test plan: -->
- [x] Create an agent with sub-agents as User A
- [x] Publish agent to marketplace
- [x] Run the agent as User B
- [x] Verify User A cannot see User B's sub-agent executions in their
library
- [x] Verify User B can see their own sub-agent executions
- [x] Verify primary agent executions remain correctly filtered
Currently, we use the context API for the block menu provider and to
access some of its state outside the blockMenuProvider wrapper. For
instance, in the tutorial, we need to move this wrapper higher up in the
tree, perhaps at the top of the builder tree. This will cause
unnecessary re-renders. Therefore, we should create a block menu zustand
store so that we can easily access it in other parts of the builder.
### Changes 🏗️
- Deleted `block-menu-provider.tsx` file.
- Updated BlockMenu, BlockMenuContent, BlockMenuDefaultContent, and
other components to utilize blockMenuStore instead of
BlockMenuStateProvider.
- Adjusted imports and context usage accordingly.
### Checklist 📋
- [x] Changes have been clearly listed.
- [x] Code has been tested and verified.
- [x] I’ve checked every part of the block menu where we used the
context API and it’s working perfectly.
- [x] Ability to use block menu state in other parts of the builder.
Currently, the new builder doesn’t support sticky notes. We’re rendering
them as normal nodes with an input, which is why I’ve added a UI for
this.
<img width="1512" height="982" alt="Screenshot 2025-10-08 at 4 12 58 PM"
src="https://github.com/user-attachments/assets/be716e45-71c6-4cc4-81ba-97313426222f"
/>
To add sticky notes, go to the search menu of the block menu and search
for “Note block”. Then, add them from there.
### Changes 🏗️
- Updated CustomNodeData to include uiType.
- Conditional rendering in CustomNode based on uiType.
- Added a custom sticky note UI component called `StickyNoteBlock.tsx`.
- Adjusted FormCreator and FieldTemplate to pass and utilize uiType.
- Enhanced TextInputWidget to render differently based on uiType.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to attach sticky notes to the builder.
- [x] Able to accurately capture data while writing on sticky notes and
data is persistent also
In this PR, I have added support of oAuth2 in new builder.
https://github.com/user-attachments/assets/89472ebb-8ec2-467a-9824-79a80a71af8a
### Changes 🏗️
- Updated the FlowEditor to support OAuth2 credential selection.
- Improved the UI for API key and OAuth2 modals, enhancing user
experience.
- Refactored credential field components for better modularity and
maintainability.
- Updated OpenAPI documentation to reflect changes in OAuth flow
endpoints.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to create OAuth credentials
- [x] OAuth2 is correctly selected using the Credential Selector.
## Summary
Fix the critical issue where retry failure alerts were being spammed
when service communication failed repeatedly.
## Problem
The service communication retry mechanism was sending a critical Discord
alert every time it hit the 50-attempt threshold, with no rate limiting.
This caused alert spam when the same operation (like spend_credits) kept
failing repeatedly with the same error.
## Solution
### Rate Limiting Implementation
- Add ALERT_RATE_LIMIT_SECONDS = 300 (5 minutes) between identical
alerts
- Create _should_send_alert() function with thread-safe rate limiting
using _alert_rate_limiter dict
- Generate unique signatures based on
context:func_name:exception_type:exception_message
- Only send alert if sufficient time has passed since last identical
alert
### Enhanced Logging
- Rate-limited alerts log as warnings instead of being silently dropped
- Add full exception tracebacks for final failures and every 10th retry
attempt
- Improve alert message clarity and add note about rate limiting
- Better structured logging with exception types and details
### Error Context Preservation
- Maintain all original retry behavior and exception handling
- Preserve critical alerting for genuinely new issues
- Clean up alert message (removed accidental paste from error logs)
## Technical Details
- Thread-safe implementation using threading.Lock() for rate limiter
access
- Signature includes first 100 chars of exception message for
granularity
- Memory efficient - only stores last alert timestamp per unique error
type
- No breaking changes to existing retry functionality
## Impact
- **Eliminates alert spam**: Same failing operation only alerts once per
5 minutes
- **Preserves critical alerts**: New/different failures still trigger
immediate alerts
- **Better debugging**: Enhanced logging provides full exception context
- **Maintains reliability**: All retry logic works exactly as before
## Testing
- ✅ Rate limiting tested with multiple scenarios
- ✅ Import compatibility verified
- ✅ No regressions in retry functionality
- ✅ Alert generation works for new vs repeated errors
## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
## How Has This Been Tested?
- Manual testing of rate limiting functionality with different error
scenarios
- Import verification to ensure no regressions
- Code formatting and linting compliance
## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A -
internal utility)
- [x] My changes generate no new warnings
- [x] Any dependent changes have been merged and published in downstream
modules (N/A)
## Changes 🏗️
We weren't awaiting the onboarding enabled check and also we were
re-directing to a wrong onboarding URL.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new user
- [x] Re-directs well to onboarding
- [x] Complete up to Step 5 and logout
- [x] Login again
- [x] You are on Step 5
#### For configuration changes:
None
## Changes 🏗️
### Fix re-direct bugs
Sometimes the app will re-direct to a strange URL after login:
```
http://localhost:3000/marketplace,%20/marketplace
```
It looks like a race-condition because the re-direct to `/marketplace`
was done on a [server
action](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
rather than in the browser.
**✅ Fixed by**
Moving the login / signup server actions to Next.js API endpoints. In
this way the login/signup pages just call an API endpoint and handle its
response without having to hassle with serverless 💆🏽
### Wallet layout flash
<img width="800" height="744" alt="Screenshot 2025-10-08 at 22 52 03"
src="https://github.com/user-attachments/assets/7cb85fd5-7dc4-4870-b4e1-173cc8148e51"
/>
The wallet popover would sometimes flash after login, because it was
re-rendering once onboarding and credits data loaded.
**✅ Fixed by**
Only rendering once we have onboarding and credits data, without the
popover is useless and causes flashes.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login / Signup to the app with email and Google
- [x] Works fine
- [x] Onboarding popover does not flash
- [x] Onboarding and marketplace re-directs work
### For configuration changes:
None
Changed the type of the 'content' field in the Project model to accept
None, making it optional instead of required. Linear doesn't always
return data here if its not set by the user.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
- Makes the content optional
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manually test it works with our data
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- **Bug Fixes**
- Improved handling of projects with no content by making content
optional.
- Prevents validation errors during project creation, import, or sync
when content is missing.
- Enhances compatibility with integrations that may omit content fields.
- No impact on existing projects with content; behavior remains
unchanged.
- No user action required.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### Changes 🏗️
- Changed the type of the `progress` field in the `LinearTask` model
from `int` to `float` to fix
[BUILDER-3V5](https://sentry.io/organizations/significant-gravitas/issues/6929150079/).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Root cause analysis confirms fix -- testing will need to occur in
dev environment before release to prod but this should merge now
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- New Features
- Progress indicators now support decimal values, allowing more precise
tracking (e.g., 42.5% instead of 42%). This enables finer-grained
updates in the interface and any integrations consuming progress data.
- Users may notice smoother progress changes during long-running tasks,
with improved accuracy in percentage displays across relevant views and
APIs.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
We need to be able to count user impact by user count which means we
need to track that
### Changes 🏗️
- Attaches user id to frontend actions (which hopefully propagate to the
backend in some places)
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test login -> shows on sentry
- [x] Test logout -> no longer shows on sentry
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Instrument Prometheus for internal services
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Existing tests
In this PR, I’ve added an API Key modal to the new builder so users can
add API key credentials.
https://github.com/user-attachments/assets/68da226c-3787-4950-abb0-7a715910355e
### Changes
- Updated the credential field to support API key.
- Added a modal for creating new API keys and improved the selection UI
for credentials.
- Refactored components for better modularity and maintainability.
- Enhanced styling and user experience in the FlowEditor components.
- Updated OpenAPI documentation for better clarity on credential
operations.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Able to create API key perfectly.
- [x] can select the correct credentials.
<!-- Clearly explain the need for these changes: -->
We struggle to identify where issues are coming from feature flags and
which are from normal use. This adds that split on the frontend.
### Changes 🏗️
Include sentry in the LD initialization
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test that launch darkly flags get attached to the frontend
(browser only)
## Changes 🏗️
We are seeing login and authentication issues in production and staging.
Locally though, the app behaves fine. We also had issues related to the
CAPTCHA in the past.
Our CAPTCHA code is less than ideal, with some heavy `useEffect` that
will load the Turnstile script into the DOM. I have the impression that
is loading the script multiple times ( due to dependencies on the
effects array not being well set ), or the like causing associated login
issues.
Created a new Turnstile component using
[`react-turnstile`](https://docs.page/marsidev/react-turnstile) that is
way simpler and should hopefully be more stable.
I also fixed an issue with the Credits popover layout rendering cropped
on the window.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login/logout on the app multiple times with Turnstile ON,
everything is stable
- [x] Credits popover appears on the right place
### For configuration changes:
None
React Flow has built-in functionality to select multiple nodes by using
`cmd` + click. You can also select using rectangle selection by holding
the shift key. However, we need to design a custom node after it’s
selected.
<img width="845" height="510" alt="Screenshot 2025-10-06 at 12 41 16 PM"
src="https://github.com/user-attachments/assets/c91f22e3-2211-46b6-b3d3-fbc89148e99a"
/>
### Tests
- [x] Selecting Ui is visible after selecting a node, using cmd + click,
and after rectangle selection.
This PR refactors the marketplace search page to improve code
maintainability, readability, and follows modern React patterns by
extracting complex logic into a custom hook and creating dedicated
components.
### 🔄 Changes
#### **Architecture Improvements**
- **Component Extraction**: Replaced the monolithic `SearchResults`
component with a cleaner `MainSearchResultPage` component that focuses
solely on presentation
- **Custom Hook Pattern**: Extracted all business logic and state
management into `useMainSearchResultPage` hook for better separation of
concerns
- **Loading State Component**: Added dedicated
`MainSearchResultPageLoading` component for consistent loading UI
#### **Code Simplification**
- **Reduced search page to 19 lines** (from 175 lines) by removing
inline logic and state management
- **Centralized data fetching** using auto-generated API endpoints
(`useGetV2ListStoreAgents`, `useGetV2ListStoreCreators`)
- **Improved error handling** with dedicated error states and loading
states
#### **Feature Updates**
- **Sort Options**: Commented out "Most Recent" and "Highest Rated" sort
options due to backend limitations (no date/rating data available)
- **Client-side Sorting**: Implemented client-side sorting for "runs"
and "rating" as a temporary solution
- **Search Filters**: Maintained filter functionality for
agents/creators with improved state management
### 📊 Impact
- **Better Developer Experience**: Code is now more modular and easier
to understand
- **Improved Maintainability**: Business logic separated from
presentation layer
- **Future-Ready**: Structure prepared for backend improvements when
date/rating data becomes available
- **Type Safety**: Leveraging TypeScript with auto-generated API types
### 🧪 Testing Checklist
- [x] Search functionality works correctly with various search terms
- [x] Filter chips correctly toggle between "All", "Agents", and
"Creators"
- [x] Sort dropdown displays only "Most Runs" option
- [x] Client-side sorting correctly sorts agents and creators by runs
- [x] Loading state displays while fetching data
- [x] Error state displays when API calls fail
- [x] "No results found" message appears for empty searches
- [x] Search bar in results page is functional
- [x] Results display correctly with proper layout and styling
In this PR, I’ve added a feature to select a credential from a list and
also provided a UI to create a new credential if desired.
<img width="443" height="157" alt="Screenshot 2025-10-06 at 9 28 07 AM"
src="https://github.com/user-attachments/assets/d9e72a14-255d-45b6-aa61-b55c2465dd7e"
/>
#### Frontend Changes:
- **Refactored credential field** from a single component to a modular
architecture:
- Created `CredentialField/` directory with separated concerns
- Added `SelectCredential.tsx` component for credential selection UI
with provider details display
- Implemented `useCredentialField.ts` custom hook for credential data
fetching with 10-minute caching
- Added `helpers.ts` with credential filtering and provider name
formatting utilities
- Added loading states with skeleton UI while fetching credentials
- **Enhanced UI/UX features**:
- Dropdown selector showing credentials with provider, title, username,
and host details
- Visual key icon for each credential option
- Placeholder "Add API Key" button (implementation pending)
- Loading skeleton UI for better perceived performance
- Smart filtering of credentials based on provider requirements
- **Template improvements**:
- Updated `FieldTemplate.tsx` to properly handle credential field
display
- Special handling for credential field labels showing provider-specific
names
- Removed input handle for credential fields in the node editor
#### Backend Changes:
- **API Documentation improvements**:
- Added OpenAPI summaries to `/credentials` endpoint ("List
Credentials")
- Added summary to `/{provider}/credentials/{cred_id}` endpoint ("Get
Specific Credential By ID")
### Test Plan 📋
- [x] Navigate to the flow builder
- [x] Add a block that requires credentials (e.g., API block)
- [x] Verify the credential dropdown loads and displays available
credentials
- [x] Check that only credentials matching the provider requirements are
shown
## Summary
- Centralize dynamic field delimiters and helpers in
backend/data/dynamic_fields.py.
- Refactor SmartDecisionMaker: build function signatures with
dynamic-field mapping and re-map tool outputs back to original dynamic
names.
- Deterministic retry loop with retry-only feedback to avoid polluting
final conversation history.
- Update executor/utils.py and data/graph.py to use centralized
utilities.
- Update and extend tests: dynamic-field E2E flow, mapping verification,
output yielding, and retry validation; switch mocked llm_call to
AsyncMock; align tool-name expectations.
- Add a single-tool fallback in schema lookup to support mocked
scenarios.
## Validation
- Full backend test suite: 1125 passed, 88 skipped, 53 warnings (local).
- Backend lint/format pass.
## Scope
- Minimal and localized to SmartDecisionMaker and dynamic-field
utilities; unrelated pyright warnings remain unchanged.
## Risks/Mitigations
- Behavior is backward-compatible; dynamic-field constants are
centralized and reused.
- Output re-mapping only affects SmartDecisionMaker tool outputs and
matches existing link naming conventions.
## Checklist
- [x] Formatted and linted
- [x] All updated tests pass locally
- [x] No secrets introduced
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- Added a description to the Upload Agent dialog to provide more context
for users. Fixes
[BUILDER-3N1](https://sentry.io/organizations/significant-gravitas/issues/6915512912/).
The issue was that: DialogContent in LibraryUploadAgentDialog lacks an
accessible description, violating WAI-ARIA standards.
<img width="2066" height="1740" alt="image"
src="https://github.com/user-attachments/assets/c876fb33-4375-4a66-a6a2-6b13c00ef8d3"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test it works
- [x] Get design approval
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Changes 🏗️
### Performance (Onboarding) 🐎
- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
**Result:** faster initial render and smoother onboarding flows under
load.
### Layout and overflow fixes 📐
- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.
**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.
### New Generic Error pages
- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
- [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently
### For configuration changes:
None
## Summary
Fix two critical production issues affecting SmartDecisionMaker
functionality and prompt compression accuracy.
### 🔧 Changes Made
#### Issue 1: SmartDecisionMaker ChatCompletionMessage Error
**Problem**: PR #11015 introduced code that appended
`response.raw_response` (ChatCompletionMessage object) directly to
conversation history, causing `'ChatCompletionMessage' object has no
attribute 'get'` errors.
**Root Cause**: ChatCompletionMessage objects don't have `.get()` method
but conversation history processing expects dictionary objects with
`.get()` capability.
**Solution**: Created `_convert_raw_response_to_dict()` helper function
for type-safe conversion:
- ✅ **Helper function**: Safely converts raw_response to dictionary
format for conversation history
- ✅ **Type safety**: Handles OpenAI (ChatCompletionMessage), Anthropic
(Message), and Ollama (string) responses
- ✅ **Preserves context**: Maintains conversation flow for multi-turn
tool calling scenarios
- ✅ **DRY principle**: Single helper used in both validation error path
(line 624) and success path (line 681)
- ✅ **No breaking changes**: Tool call continuity preserved for complex
workflows
#### Issue 2: Tool Call Token Counting in Prompt Compression
**Problem**: `_msg_tokens()` function only counted tokens in 'content'
field, severely undercounting tool calls which store data in different
fields (tool_calls, function.arguments, etc.).
**Root Cause**: Tool calls have no 'content' to calculate length of,
causing massive token undercounting during prompt compression that could
lead to context overflow.
**Solution**: Enhanced `_msg_tokens()` to handle both OpenAI and
Anthropic tool call formats:
- ✅ **OpenAI format**: Count tokens in `tool_calls[].id`, `type`,
`function.name`, `function.arguments`
- ✅ **Anthropic format**: Count tokens in `content[].tool_use` (`id`,
`name`, `input`) and `content[].tool_result`
- ✅ **Backward compatibility**: Regular string content counted exactly
as before
- ✅ **Comprehensive testing**: Added 11 unit tests in `prompt_test.py`
### 📊 Validation Results
- ✅ **SmartDecisionMaker errors resolved**: No more
ChatCompletionMessage.get() failures
- ✅ **Token counting accuracy**: OpenAI tool calls 9+ tokens vs previous
3-4 wrapper-only tokens
- ✅ **Token counting accuracy**: Anthropic tool calls 13+ tokens vs
previous 3-4 wrapper-only tokens
- ✅ **Backward compatibility**: Regular messages maintain exact same
token count
- ✅ **Type safety**: 0 type errors in both modified files
- ✅ **Test coverage**: All 11 new unit tests pass + existing
SmartDecisionMaker tests pass
- ✅ **Multi-turn conversations**: Tool call workflows continue working
correctly
### 🎯 Impact
- **Resolves Sentry issue OPEN-2750**: ChatCompletionMessage errors
eliminated
- **Prevents context overflow**: Accurate token counting during prompt
compression for long tool call conversations
- **Production stability**: SmartDecisionMaker retry mechanism works
correctly with proper conversation flow
- **Resource efficiency**: Better memory management through accurate
token accounting
- **Zero breaking changes**: Full backward compatibility maintained
### 🧪 Test Plan
- [x] Verified SmartDecisionMaker no longer crashes with
ChatCompletionMessage errors
- [x] Validated tool call token counting accuracy with comprehensive
unit tests (11 tests all pass)
- [x] Confirmed backward compatibility for regular message token
counting
- [x] Tested both OpenAI and Anthropic tool call formats
- [x] Verified type safety with pyright checks
- [x] Ensured conversation history flows correctly with helper function
- [x] Confirmed multi-turn tool calling scenarios work with preserved
context
### 📝 Files Modified
- `backend/blocks/smart_decision_maker.py` - Added
`_convert_raw_response_to_dict()` helper for safe conversion
- `backend/util/prompt.py` - Enhanced tool call token counting for
accurate prompt compression
- `backend/util/prompt_test.py` - Comprehensive unit tests for token
counting (11 tests)
### ⚡ Ready for Review
Both fixes are critical for production stability and have been
thoroughly tested with zero breaking changes. The helper function
approach ensures type safety while preserving essential conversation
context for complex tool calling workflows.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
The code execution blocks' implementations are heavily duplicated and
their names aren't very clear.
E.g. the "InstantiationBlock" just shows up as "Instantiation" in the
block list.
I would've done this in #11017 but kept the refactoring separate for
easier reviewing.
### Changes 🏗️
- Rename "Code Execution" block to "Execute Code"
- Rename "Instantiation" block to "Instantiate Code Sandbox"
- Rename "Step Execution" block to "Execute Code Step"
- Deduplicate implementation of the three code execution blocks
- Add `dispose_sandbox` toggle to "Execute Code" and "Execute Code Step"
blocks
- Note: it's default `True` on the Execute Code block, default `False`
on the Execute Code Step block
- Update block and input descriptions to clarify behavior
- Fix all linting issues
<details>
<summary>Screenshots</summary>




</details>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test all code execution blocks manually
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/backend directory:
[faker](https://github.com/joke2k/faker),
[pyright](https://github.com/RobertCraigie/pyright-python),
[pytest-mock](https://github.com/pytest-dev/pytest-mock) and
[ruff](https://github.com/astral-sh/ruff).
Updates `faker` from 37.6.0 to 37.8.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v37.8.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.8.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.7.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.7.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.7.0...v37.8.0">v37.8.0
- 2025-09-15</a></h3>
<ul>
<li>Add Automotive providers for <code>ja_JP</code> locale. Thanks <a
href="https://github.com/ItoRino424"><code>@ItoRino424</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.7.0">v37.7.0
- 2025-09-15</a></h3>
<ul>
<li>Add Nigerian name locales (<code>yo_NG</code>, <code>ha_NG</code>,
<code>ig_NG</code>, <code>en_NG</code>). Thanks <a
href="https://github.com/ifeoluwaoladeji"><code>@ifeoluwaoladeji</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4bde8f57ad"><code>4bde8f5</code></a>
Bump version: 37.7.0 → 37.8.0</li>
<li><a
href="f542f364cb"><code>f542f36</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="e28d7cb909"><code>e28d7cb</code></a>
fix test</li>
<li><a
href="e4305b0e29"><code>e4305b0</code></a>
fix padding</li>
<li><a
href="a359441a81"><code>a359441</code></a>
💄 format code</li>
<li><a
href="0e3f0bdf81"><code>0e3f0bd</code></a>
Add Automotive providers for <code>ja_JP</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2251">#2251</a>)</li>
<li><a
href="d4fa69dfc7"><code>d4fa69d</code></a>
Bump version: 37.6.0 → 37.7.0</li>
<li><a
href="f636f06a37"><code>f636f06</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="9a482dd25b"><code>9a482dd</code></a>
💄 Format code</li>
<li><a
href="2493b2d51a"><code>2493b2d</code></a>
fix: fix minor grammar typo (<a
href="https://redirect.github.com/joke2k/faker/issues/2259">#2259</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.8.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `pyright` from 1.1.404 to 1.1.405
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e211ec8df8"><code>e211ec8</code></a>
Pyright NPM Package update to 1.1.405 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/353">#353</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.404...v1.1.405">compare
view</a></li>
</ul>
</details>
<br />
Updates `pytest-mock` from 3.14.1 to 3.15.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/releases">pytest-mock's
releases</a>.</em></p>
<blockquote>
<h2>v3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/529">#529</a>:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>v3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/pull/524">#524</a>:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/blob/main/CHANGELOG.rst">pytest-mock's
changelog</a>.</em></p>
<blockquote>
<h2>3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><code>[#529](https://github.com/pytest-dev/pytest-mock/issues/529)
<https://github.com/pytest-dev/pytest-mock/issues/529></code>_:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><code>[#524](https://github.com/pytest-dev/pytest-mock/issues/524)
<https://github.com/pytest-dev/pytest-mock/pull/524></code>_:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e1b5c62a38"><code>e1b5c62</code></a>
Release 3.15.1</li>
<li><a
href="184eb190d6"><code>184eb19</code></a>
Set <code>spy_return_iter</code> only when explicitly requested (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/537">#537</a>)</li>
<li><a
href="4fa0088a0a"><code>4fa0088</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/536">#536</a>)</li>
<li><a
href="f5aff33ce7"><code>f5aff33</code></a>
Fix test failure with pytest 8+ and verbose mode (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/535">#535</a>)</li>
<li><a
href="adc41873c9"><code>adc4187</code></a>
Bump actions/setup-python from 5 to 6 in the github-actions group (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/533">#533</a>)</li>
<li><a
href="95ad570060"><code>95ad570</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/532">#532</a>)</li>
<li><a
href="e696bf02c1"><code>e696bf0</code></a>
Fix standalone mock support (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/531">#531</a>)</li>
<li><a
href="5b29b03ce9"><code>5b29b03</code></a>
Fix gen-release-notes script</li>
<li><a
href="7d22ef4e56"><code>7d22ef4</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/528">#528</a>
from pytest-dev/release-3.15.0</li>
<li><a
href="90b29f89e2"><code>90b29f8</code></a>
Update CHANGELOG for 3.15.0</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-mock/compare/v3.14.1...v3.15.1">compare
view</a></li>
</ul>
</details>
<br />
Updates `ruff` from 0.12.11 to 0.13.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<h2>Release Notes</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move more imports into <code>TYPE_CHECKING</code> blocks
(<code>TC001</code>, <code>TC002</code>, and <code>TC003</code>), use
PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and unquote more annotations
(<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local directories with the same name as a third-party dependency, for
example. See the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name or prefix. As noted below, the two remaining deprecated rules were
also removed in this release, so this won't affect any current rules,
but it will still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a> (usually resolving to
<code>~/.config/ruff/ruff.toml</code>), like on Linux. The fallback and
accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow-dag-no-schedule-argument"><code>airflow-dag-no-schedule-argument</code></a>
(<code>AIR002</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-removal"><code>airflow3-removal</code></a>
(<code>AIR301</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-moved-to-provider"><code>airflow3-moved-to-provider</code></a>
(<code>AIR302</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-update"><code>airflow3-suggested-update</code></a>
(<code>AIR311</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-to-move-to-provider"><code>airflow3-suggested-to-move-to-provider</code></a>
(<code>AIR312</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/long-sleep-not-forever"><code>long-sleep-not-forever</code></a>
(<code>ASYNC116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/f-string-number-format"><code>f-string-number-format</code></a>
(<code>FURB116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/os-symlink"><code>os-symlink</code></a>
(<code>PTH211</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/generic-not-last-base-class"><code>generic-not-last-base-class</code></a>
(<code>PYI059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/redundant-none-literal"><code>redundant-none-literal</code></a>
(<code>PYI061</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/pytest-raises-ambiguous-pattern"><code>pytest-raises-ambiguous-pattern</code></a>
(<code>RUF043</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unused-unpacked-variable"><code>unused-unpacked-variable</code></a>
(<code>RUF059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/useless-class-metaclass-type"><code>useless-class-metaclass-type</code></a>
(<code>UP050</code>)</li>
</ul>
<p>The following behaviors have been stabilized:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration
guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move
more imports into <code>TYPE_CHECKING</code> blocks (<code>TC001</code>,
<code>TC002</code>, and <code>TC003</code>),
use PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and
unquote more annotations (<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local
directories with the same name as a third-party dependency, for example.
See
the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name
or prefix. As noted below, the two remaining deprecated rules were also
removed in this release, so this won't affect any current rules, but it
will
still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was
deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a>
(usually resolving to <code>~/.config/ruff/ruff.toml</code>), like on
Linux. The
fallback and accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a1fdd66f10"><code>a1fdd66</code></a>
Bump 0.13.0 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20336">#20336</a>)</li>
<li><a
href="8770b95509"><code>8770b95</code></a>
[ty] introduce <code>DivergentType</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20312">#20312</a>)</li>
<li><a
href="65982a1e14"><code>65982a1</code></a>
[ty] Use 'unknown' specialization for upper bound on Self (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20325">#20325</a>)</li>
<li><a
href="57d1f7132d"><code>57d1f71</code></a>
[ty] Simplify unions of enum literals and subtypes thereof (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20324">#20324</a>)</li>
<li><a
href="7a75702237"><code>7a75702</code></a>
Ignore deprecated rules unless selected by exact code (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20167">#20167</a>)</li>
<li><a
href="9ca632c84f"><code>9ca632c</code></a>
Stabilize adding future import via config option (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20277">#20277</a>)</li>
<li><a
href="64fe7d30a3"><code>64fe7d3</code></a>
[<code>flake8-errmsg</code>] Stabilize extending
<code>raw-string-in-exception</code> (<code>EM101</code>) to ...</li>
<li><a
href="beeeb8d5c5"><code>beeeb8d</code></a>
Stabilize the remaining Airflow rules (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20250">#20250</a>)</li>
<li><a
href="b6fca52855"><code>b6fca52</code></a>
[<code>flake8-bugbear</code>] Stabilize support for non-context-manager
calls in `assert...</li>
<li><a
href="ac7f882c78"><code>ac7f882</code></a>
[<code>flake8-commas</code>] Stabilize support for trailing comma checks
in type paramet...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.11...0.13.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Bumps [firecrawl-py](https://github.com/firecrawl/firecrawl) from 2.16.3
to 4.3.1.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/firecrawl/firecrawl/commits">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Upgrade firecrawl-py to v4.3.6 and refactor firecrawl blocks to new v4
API, formats handling, method names, and response fields.
>
> - **Dependencies**
> - Bump `firecrawl-py` from `2.16.3` to `4.3.6` (adds `httpx`, updates
`pydantic>=2`).
> - **Firecrawl API migration**
> - Centralize `ScrapeFormat` in `backend/blocks/firecrawl/_api.py`.
> - Add `_format_utils.convert_to_format_options` to map `ScrapeFormat`
(incl. `screenshot@fullPage`) to v4 `FormatOption`/`ScreenshotFormat`.
> - Switch to v4 types (`firecrawl.v2.types.ScrapeOptions`); adopt
snake_case fields (`only_main_content`, `max_age`, `wait_for`).
> - Rename methods: `crawl_url` → `crawl`, `scrape_url` → `scrape`,
`map_url` → `map`.
> - Normalize response attributes: `rawHtml` → `raw_html`,
`changeTracking` → `change_tracking`.
> - **Blocks**
> - `crawl.py`, `scrape.py`, `search.py`: use new formats conversion and
updated options/fields; adjust iteration over results (`search`: iterate
`web` when present).
> - `map.py`: return both `links` and detailed `results`
(url/title/description) and update output schema accordingly.
> - **Project files**
> - Update `pyproject.toml` and `poetry.lock` for new dependency
versions.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d872f2e82b. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Summary
Fix critical issues where activity status generator incorrectly reported
failed executions as successful, and enhance AI evaluation logic to be
more accurate about actual task accomplishment.
## Changes Made
### 1. Missing Block Handling (`backend/data/graph.py`)
- **Replace ValueError with graceful degradation**: When blocks are
deleted/missing, return `_UnknownBlock` placeholder instead of crashing
- **Comprehensive interface implementation**: `_UnknownBlock` implements
all expected Block methods to prevent type errors
- **Warning logging**: Log missing blocks for debugging without breaking
execution flow
- **Removed unnecessary caching**: Direct constructor calls instead of
cached wrapper functions
### 2. Enhanced Activity Status AI Evaluation
(`backend/executor/activity_status_generator.py`)
#### Intention-Based Success Evaluation
- **Graph description analysis**: AI now reads graph description FIRST
to understand intended purpose
- **Purpose-driven evaluation**: Success is measured against what the
graph was designed to accomplish
- **Critical output analysis**: Enhanced detection of missing outputs
from key blocks (Output, Post, Create, Send, Publish, Generate)
- **Sub-agent failure detection**: Better identification when
AgentExecutorBlock produces no outputs
#### Improved Prompting
- **Intent-specific examples**: 'blog writing' → check for blog content,
'email automation' → check for sent emails
- **Primary evaluation criteria**: 'Did this execution accomplish what
the graph was designed to do?'
- **Enhanced checklist**: 7-point analysis including graph description
matching
- **Technical vs. goal completion**: Distinguish between workflow steps
completing vs. actual user goals achieved
#### Removed Database Error Handling
- **Eliminated try-catch blocks**: No longer needed around
`get_graph_metadata` and `get_graph` calls
- **Direct database calls**: Simplified error handling after fixing
missing block root cause
- **Cleaner code flow**: More predictable execution path without
redundant error handling
## Problem Solved
- **False success reports**: AI previously marked executions as
'successful' when critical output blocks produced no results
- **Missing block crashes**: System would fail when trying to analyze
executions with deleted/missing blocks
- **Intent-blind evaluation**: AI evaluated technical completion instead
of actual goal achievement
- **Database service errors**: 500 errors when missing blocks caused
graph loading failures
## Business Impact
- **More accurate user feedback**: Users get honest assessment of
whether their automations actually worked
- **Better task completion detection**: Clear distinction between
'workflow completed' vs. 'goal achieved'
- **Improved reliability**: System handles edge cases gracefully without
crashing
- **Enhanced user trust**: Truthful reporting builds confidence in the
platform
## Testing
- ✅ Tested with problematic executions that previously showed false
successes
- ✅ Confirmed missing block handling works without warnings
- ✅ Verified enhanced prompt correctly identifies failures
- ✅ Database calls work without try-catch protection
## Example Before/After
**Before (False Success):**
```
Graph: "Automated SEO Blog Writer"
Status: "✅ I successfully completed your blog writing task!"
Reality: No blog content was actually created (critical output blocks had no outputs)
```
**After (Accurate Failure Detection):**
```
Graph: "Automated SEO Blog Writer"
Status: "❌ The task failed because the blog post creation step didn't produce any output."
Reality: Correctly identifies that the intended blog writing goal was not achieved
```
## Files Modified
- `backend/data/graph.py`: Missing block graceful handling with complete
interface
- `backend/executor/activity_status_generator.py`: Enhanced AI
evaluation with intention-based analysis
## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update
## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream
modules
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
### Performance (Onboarding) 🐎
- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
**Result:** faster initial render and smoother onboarding flows under
load.
### Layout and overflow fixes 📐
- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.
**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.
### New Generic Error pages
- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
- [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently
### For configuration changes:
None
Fixed costs being displayed as raw cent values instead of properly
formatted dollar amounts in the frontend monitoring and agent run detail
pages.
## Problem
The platform was showing costs incorrectly in two key areas:
- **Monitoring page**: Total cost displayed as raw cents with incorrect
"seconds" unit (e.g., "Total cost: 150 seconds")
- **Agent run details**: Individual run costs displayed as raw cents
(e.g., "Cost: $150" for what should be $1.50)
## Solution
Updated the affected components to properly convert cents to dollars
with consistent formatting:
**FlowRunsStatus.tsx** - Fixed total cost calculation and display:
```tsx
// Before
{filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0)} seconds
// After
${(filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0) / 100).toFixed(2)}
```
**RunDetailHeader.tsx** - Fixed individual run cost display:
```tsx
// Before
Cost: ${run.stats.cost}
// After
Cost: ${(run.stats.cost / 100).toFixed(2)}
```
## Validation
- Backend correctly stores costs in cents (verified in models and
database schemas)
- Email notification templates already handle the conversion properly
using `(credits_used|float)/100`
- Other components use the existing `formatCredits()` utility which
correctly converts cents to dollars
- No security vulnerabilities introduced (CodeQL verification passed)
- All linting and formatting checks pass
The fix ensures users now see accurate dollar amounts (e.g., $1.50
instead of $150 or "150 seconds") across the platform's cost reporting
interfaces.

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `checkpoint.prisma.io`
> - Triggering command: `/usr/bin/node
/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{"product":"prisma","version":"5.17.0","cli_install_type":"local","information":"","local_timestamp":"2025-09-25T21:41:17Z","project_hash":"a5170f80","cli_path":"/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js","cli_path_hash":"40bbdaf9","endpoint":"REDACTED","disable":false,"arch":"x64","os":"linux","node_version":"v20.19.5","ci":false,"ci_name":"","command":"generate","schema_providers":["postgresql"],"schema_preview_features":[],"schema_generators_providers":["prisma-client-py"],"cache_file":"/root/.cache/checkpoint-nodejs/prisma-40bbdaf9","cache_duration":43200000,"remind_duration":172800000,"force":false,"timeout":5000,"unref":true,"child_path":"/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child","client_event_id":"","previous_client_event_id":"","check_if_update_available":false}`
(dns block)
> - Triggering command: `/usr/bin/node
/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{"product":"prisma","version":"5.17.0","cli_install_type":"local","information":"","local_timestamp":"2025-09-25T21:41:19Z","project_hash":"a5170f80","cli_path":"/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js","cli_path_hash":"40bbdaf9","endpoint":"REDACTED","disable":false,"arch":"x64","os":"linux","node_version":"v20.19.5","ci":false,"ci_name":"","command":"migrate
deploy","schema_providers":["postgresql"],"schema_preview_features":[],"schema_generators_providers":["prisma-client-py"],"cache_file":"/root/.cache/checkpoint-nodejs/prisma-40bbdaf9","cache_duration":43200000,"remind_duration":172800000,"force":false,"timeout":5000,"unref":true,"child_path":"/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child","client_event_id":"","previous_client_event_id":"","check_if_update_available":false}`
(dns block)
> - Triggering command: `/opt/hostedtoolcache/node/21.7.3/x64/bin/node
/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{"product":"prisma","version":"5.17.0","cli_install_type":"local","information":"","local_timestamp":"2025-09-25T21:44:58Z","project_hash":"c6190a20","cli_path":"/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js","cli_path_hash":"8d85b642","endpoint":"REDACTED","disable":false,"arch":"x64","os":"linux","node_version":"v21.7.3","ci":true,"ci_name":"GitHub
Actions","command":"generate","schema_providers":["postgresql"],"schema_preview_features":[],"schema_generators_providers":["prisma-client-py"],"cache_file":"/home/REDACTED/.cache/checkpoint-nodejs/prisma-8d85b642","cache_duration":43200000,"remind_duration":172800000,"force":false,"timeout":5000,"unref":true,"child_path":"/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child","client_event_id":"","previous_client_event_id":"","check_if_update_available":false}`
(dns block)
> - `fonts.googleapis.com`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
> - `o1.ingest.sentry.io`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
>
> ----
>
> *This section details on the original issue you should resolve*
>
> <issue_title>Costs are being shown as dollars rather than cents based
on the new runs page</issue_title>
> <issue_description></issue_description>
>
> ## Comments on the Issue (you are @copilot in this section)
>
> <comments>
> </comments>
>
</details>
FixesSignificant-Gravitas/AutoGPT#10886
<!-- START COPILOT CODING AGENT TIPS -->
---
💡 You can make Copilot smarter by setting up custom instructions,
customizing its development environment and configuring Model Context
Protocol (MCP) servers. Learn more [Copilot coding agent
tips](https://gh.io/copilot-coding-agent-tips) in the docs.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Changes 🏗️
- Fix not being able to complete `MARKETPLACE_RUN_AGENT` task
- Fix confetti shooting on every refresh
- Fix confetti shooting from top-left corner
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Bugs eradicated
- Resolves#11016
### Changes 🏗️
- Add more extensive outputs to Code Execution Block
- Rename "Response" output to "Main Text Output"
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Object outputs can be accessed now
This PR implements the AI Condition Block as requested in issue
AUTOMAT-60. The new block enables users to define conditional logic
using natural language descriptions instead of traditional comparison
operators, while maintaining the same yes/no data pass-through
functionality as the existing ConditionBlock.
## Overview
The AI Condition Block uses Large Language Models to evaluate conditions
written in plain English, such as:
- "the input is the body of an email"
- "the input is a City in the USA"
- "the input is an error or a refusal"
## Key Features
**Natural Language Processing**: Users can express complex conditions in
everyday English rather than programming logic, making agent workflows
more intuitive and accessible.
**Consistent Interface**: Maintains the same input/output schema as the
standard ConditionBlock:
- Boolean `result` output indicating condition evaluation
- `yes_output` and `no_output` for conditional data flow
- Optional custom values for yes/no cases
**Robust Error Handling**: Defaults to `false` on AI evaluation failures
to ensure safe operation and prevent workflow interruption.
**Performance Optimized**: Uses minimal token limits (10 tokens) for
true/false responses to reduce latency and API costs.
## Implementation Details
The block is implemented as `AIConditionBlock` in
`backend/blocks/ai_condition.py` and inherits from `AIBlockBase`
following established platform patterns. It includes:
- Proper LLM integration with credential management
- Token usage tracking and statistics
- Comprehensive test mocking for reliable CI/CD
- Full documentation with examples and use cases
## Use Cases
This block enables more sophisticated conditional logic for:
- **Content Classification**: Automatically categorize text, emails, or
documents
- **Data Validation**: Validate inputs using natural language rules
- **Smart Routing**: Route data based on AI-evaluated conditions
- **Error Detection**: Identify and handle error messages or problematic
inputs
- **Quality Control**: Check content against flexible quality standards
## Testing
The implementation includes comprehensive testing that integrates with
the existing platform test suite. All tests pass, including:
- Unit tests with proper LLM response mocking
- Code quality checks (linting, formatting, type checking)
- Security analysis via CodeQL
- Integration testing to ensure proper block discovery and loading
The block is automatically discovered by the platform's block loading
system and is immediately available for use in agent workflows.
## PR Checklist
- [x] **Have you listed your changes in the description?**
- Added new `AIConditionBlock` in `backend/blocks/ai_condition.py`
- Added comprehensive documentation in
`docs/content/platform/blocks/ai_condition.md`
- Implemented natural language condition evaluation using LLMs
- [x] **Have you included a test plan?**
- Unit tests with mocked LLM responses
- Integration tests for block discovery and loading
- Error handling validation
- Token usage tracking verification
- [x] **Have you tested your changes according to the test plan?**
- All existing tests pass
- Linting and formatting checks pass
- Type checking passes
- Security analysis via CodeQL passes
- Fixed `json_format` parameter to `force_json_output` per recent API
changes
> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `api.openai.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python
/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/pytest
backend/blocks/test/test_block.py::test_available_blocks -k
AIConditionBlock -v` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
> Issue Title: AI Condition Block
> Issue Description: A version of the condition/if block that uses an AI
powered condition.
>
> It should have the same yes/no data pass throughs, as well as
outputting a result Boolean.
>
> The condition is plaintext English, provided by the user, and could be
anything.
>
> e.g
> If `[the input] is the body of an email`
> If `[the input] is a City in the USA`
> If `[the input] is an error or a refusal`
> Fixes https://linear.app/autogpt/issue/AUTOMAT-60/ai-condition-block
>
>
> Comment by User 4bcbb358-1758-43e4-abef-a0a42b63442f:
> 📋 I need a **repo** label on this issue to determine which GitHub
repository to work in.
>
> Please add a repo label to this issue with the format
`owner/repository-name` (e.g., `github/copilot`), then I'll
automatically start working on it!
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
>
</details>
<!-- START COPILOT CODING AGENT TIPS -->
---
✨ Let Copilot coding agent [set things up for
you](https://github.com/Significant-Gravitas/AutoGPT/issues/new?title=✨+Set+up+Copilot+instructions&body=Configure%20instructions%20for%20this%20repository%20as%20documented%20in%20%5BBest%20practices%20for%20Copilot%20coding%20agent%20in%20your%20repository%5D%28https://gh.io/copilot-coding-agent-tips%29%2E%0A%0A%3COnboard%20this%20repo%3E&assignees=copilot)
— coding agent works faster and does higher quality work when set up for
your repo.
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces `AIConditionBlock` that uses an LLM to evaluate
natural-language conditions and outputs boolean result with yes/no
pass-through, plus accompanying documentation.
>
> - **Backend**:
> - **New block**: `backend/blocks/ai_condition.py`
> - Evaluates natural-language conditions via `llm_call` using
selectable `LlmModel` and credentials.
> - Parses strict true/false responses (with fallback token matching),
yields `result`, `yes_output`/`no_output`, and `error` on
ambiguity/failure.
> - Tracks token usage via `NodeExecutionStats`; includes test
inputs/mocks and `force_json_output=False`.
> - **Docs**:
> - Adds `docs/content/platform/blocks/ai_condition.md` with usage,
inputs/outputs, examples, and considerations.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
06e9586bd3. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Wallet update removed `BUILDER_OPEN` and `BUILDER_RUN_AGENT`.
### Changes 🏗️
- Restore completion codepaths for `BUILDER_OPEN` and
`BUILDER_RUN_AGENT` for analytical purposes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tasks are completed silently
Remove pr_reviewer section from configuration
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
removes the out of config status section
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] validated by global config
Added Sentry captureConsoleIntegration and extraErrorDataIntegration to
client, edge, and server configs. Improved replay integration with
unmasking support. Updated TallyPopup to collect and expose Sentry
replay data, user agent, and page URL for enhanced telemetry and
debugging. Improved event handling and error logging for Tally events.
Marked CustomNode title for Sentry unmasking.<!-- Clearly explain the
need for these changes: -->
### Changes 🏗️
Reconfigure sentry
Pass the id with sentry replay to tally alongside prefilling email, and
passing non user identifying attributes like platform url, full url, and
is authenticated.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the results show up in sentry
- [x] Test the url works in tally
### Changes 🏗️
- Rename wallet and update design
- Update tasks and add Hidden Tasks section
- Update onboarding backend code and related db migration
- Add progress bar for some tasks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tasks can be finished
- [x] Finished tasks add correct amount of credits
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
<!-- Clearly explain the need for these changes: -->
https://github.com/user-attachments/assets/909a6ecf-5731-424c-8dee-fe25db907365
### Need 💡
This PR introduces a new "Table Input" block and corresponding UI
component, allowing users to easily input structured, tabular data
directly within the agent builder. This addresses the need for a
user-friendly way to define custom column headers and populate rows of
data, which is then output as a list of dictionaries.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
* **New `TableInputBlock` (backend):** A new block
(`backend/backend/blocks/table_input.py`) has been added. It defines an
`Input` schema with `headers` (a list of strings for column names) and
`value` (a list of dictionaries representing table rows). The block
outputs the `value` data in the specified dictionary format.
* **New `NodeTableInput` Component (frontend):** A new React component
(`frontend/src/components/node-table-input.tsx`) was created to render
an editable table UI, supporting dynamic row addition/removal and cell
editing.
* **Frontend Integration:**
* `NodeGenericInputField` and `NodeObjectInputTree` were updated to pass
`parentContext` down the component hierarchy.
* `NodeArrayInput` was modified to conditionally render the new
`NodeTableInput` component. It now detects when an array field
(`selfKey` is "value") is part of a parent context that defines
`headers`, indicating it should be rendered as a table.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add a "Table Input" block to the builder.
- [x] Define custom headers (e.g., "Name", "Email").
- [x] Add several rows of data using the table UI.
- [x] Verify that adding, editing, and removing rows works as expected.
- [x] Connect the output of the "Table Input" block to another block
(e.g., a "Print" block) and confirm the output format is a list of
dictionaries with the defined headers as keys.
- [x] Test with an empty table (no rows).
- [x] Test with no headers defined (should default).
- [x] Test that an empty row returns empty data (is this a good
behavior?
example output of the block
```
{
"advanced": false,
"column_headers": [
"Col 1",
"Col 2",
"Col 3"
],
"name": "table_input",
"value": [
{
"Col 1": "row 1",
"Col 2": "row 1",
"Col 3": "row 1"
},
{
"Col 1": "val 1",
"Col 2": "val 2",
"Col 3": "val 3"
}
]
}
```
---
<a
href="https://cursor.com/background-agent?bcId=bc-b8d31867-1034-4374-852c-b92ca69cc399">
<picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg">
<img alt="Open in Cursor" src="https://cursor.com/open-in-cursor.svg">
</picture>
</a>
<a
href="https://cursor.com/agents?id=bc-b8d31867-1034-4374-852c-b92ca69cc399">
<picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg">
<img alt="Open in Web" src="https://cursor.com/open-in-web.svg">
</picture>
</a>
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
## Summary
Adds claude-sonnet-4.5 model to the platform and sets the price to 9
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test the new claude-sonnet-4.5 model on the platform to make sure
it works
Bumps [@sentry/nextjs](https://github.com/getsentry/sentry-javascript)
from 9.42.0 to 10.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@sentry/browser</code></td>
<td>23.59 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> - with treeshaking flags</td>
<td>22.2 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing)</td>
<td>38.94 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay)</td>
<td>76.4 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>66.43 KB</td>
</tr>
</tbody>
</table>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>10.7.0</h2>
<h3>Important Changes</h3>
<ul>
<li><strong>feat(cloudflare): Add
<code>instrumentPrototypeMethods</code> option to instrument RPC methods
for DurableObjects (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17424">#17424</a>)</strong></li>
</ul>
<p>By default, <code>Sentry.instrumentDurableObjectWithSentry</code>
will not wrap any RPC methods on the prototype. To enable wrapping for
RPC methods, set <code>instrumentPrototypeMethods</code> to
<code>true</code> or, if performance is a concern, a list of only the
methods you want to instrument:</p>
<pre lang="js"><code></tr></table>
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd8458e659"><code>bd8458e</code></a>
release: 10.8.0</li>
<li><a
href="dbdddc896f"><code>dbdddc8</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17481">#17481</a>
from getsentry/prepare-release/10.8.0</li>
<li><a
href="f5d4bd616e"><code>f5d4bd6</code></a>
meta(changelog): Update changelog for 10.8.0</li>
<li><a
href="dfdc3b0ab9"><code>dfdc3b0</code></a>
test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17470">#17470</a>)</li>
<li><a
href="895b38590c"><code>895b385</code></a>
fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17438">#17438</a>)</li>
<li><a
href="e6e20d847c"><code>e6e20d8</code></a>
feat(sveltekit): Add Compatibility for builtin SvelteKit Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17423">#17423</a>)</li>
<li><a
href="7e24422327"><code>7e24422</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17472">#17472</a>
from getsentry/master</li>
<li><a
href="27e97b0cec"><code>27e97b0</code></a>
Merge branch 'release/10.7.0'</li>
<li><a
href="b7e4816824"><code>b7e4816</code></a>
release: 10.7.0</li>
<li><a
href="0bc8417d50"><code>0bc8417</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17471">#17471</a>
from getsentry/prepare-release/10.7.0</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.42.0...10.8.0">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Upgrades `@sentry/nextjs` to 10.15.0, updating numerous related
`@sentry/*`, OpenTelemetry (v2), and build/dev dependencies via the
lockfile.
>
> - **Dependencies (frontend)**:
> - Upgrade `@sentry/nextjs` from `9.42.0` to `10.15.0`.
> - Cascading updates in `pnpm-lock.yaml`:
> - `@sentry/*` packages (`browser`, `core`, `node`, `opentelemetry`,
`react`, `vercel-edge`, `webpack-plugin`, `bundler-plugin-core`, `cli`,
etc.).
> - OpenTelemetry stack to newer major versions
(`@opentelemetry/core`/`resources`/`sdk-trace-base` 2.x; multiple
`instrumentation-*` packages).
> - Build tooling: `rollup` 4.52.x and platform binaries;
`@rollup/plugin-*`.
> - Misc dev typings and utilities (e.g., `@types/mysql`, `@types/pg`,
`debug`, `@prisma/instrumentation`).
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
5b4b37e551. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
This PR fixes a critical production issue where SmartDecisionMakerBlock
was silently accepting tool calls with typo'd parameter names (e.g.,
'maximum_keyword_difficulty' instead of 'max_keyword_difficulty'),
causing downstream blocks to receive null values and execution failures.
The solution implements comprehensive parameter validation with
automatic retry when the LLM provides malformed tool calls, giving the
LLM specific feedback to correct the errors.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
**Core Validation & Retry Logic
(`backend/blocks/smart_decision_maker.py`)**
- Add tool call parameter validation against function schema
- Implement retry mechanism using existing `create_retry_decorator` from
`backend.util.retry`
- Validate provided parameters against expected schema properties and
required fields
- Generate specific error messages for unknown parameters (typos) and
missing required parameters
- Add error feedback to conversation history for LLM learning on retry
attempts
- Use `input_data.retry` field to configure number of retry attempts
**Comprehensive Test Coverage
(`backend/blocks/test/test_smart_decision_maker.py`)**
- Add `test_smart_decision_maker_parameter_validation` with 4
comprehensive test scenarios:
1. Tool call with typo'd parameter (should retry and eventually fail
with clear error)
2. Tool call missing required parameter (should fail immediately with
clear error)
3. Valid tool call with optional parameter missing (should succeed)
4. Valid tool call with all parameters provided (should succeed)
- Verify retry mechanism works correctly and respects retry count
- Mock LLM responses for controlled testing of validation logic
**Load Tests Documentation Update (`load-tests/README.md`)**
- Update documentation to reflect current orchestrator-based
architecture
- Remove references to deprecated `run-tests.js` and
`comprehensive-orchestrator.js`
- Streamline documentation to focus on working
`orchestrator/orchestrator.js`
- Update NPM scripts and command examples for current workflow
- Clean up outdated file references to match actual infrastructure
**Production Impact**
- **Prevents silent failures**: Tool call parameter typos now cause
retries instead of null downstream values
- **Maintains compatibility**: No breaking changes to existing
SmartDecisionMaker functionality
- **Improves reliability**: LLM receives feedback to correct parameter
errors
- **Configurable retries**: Uses existing `retry` field for user control
- **Accurate documentation**: Load-tests docs now match actual working
infrastructure
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run existing SmartDecisionMaker tests to ensure no regressions:
`poetry run pytest backend/blocks/test/test_smart_decision_maker.py
-xvs` ✅ All 4 tests passed
- [x] Run new parameter validation test specifically: `poetry run pytest
backend/blocks/test/test_smart_decision_maker.py::test_smart_decision_maker_parameter_validation
-xvs` ✅ Passed with retry behavior confirmed
- [x] Verify retry mechanism works by checking log output for retry
attempts ✅ Confirmed in test logs
- [x] Test tool call validation with different scenarios (typos, missing
params, valid calls) ✅ All scenarios covered and working
- [x] Run code formatting and linting: `poetry run format` ✅ All
formatters passed
- [x] Verify no breaking changes to existing SmartDecisionMaker
functionality ✅ All existing tests pass
- [x] Verify load-tests documentation accuracy ✅ README now matches
actual orchestrator infrastructure
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note**: No configuration changes were needed as this uses existing
retry infrastructure and block schema validation.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Problem
Multiple executor pods could simultaneously execute the same graph,
leading to:
- Duplicate executions and wasted resources
- Inconsistent execution states and results
- Race conditions in graph execution management
- Inefficient resource utilization in cluster environments
## Solution
Implement distributed locking using ClusterLock to ensure only one
executor pod can process a specific graph execution at a time.
## Key Changes
### Core Fix: Distributed Execution Coordination
- **ClusterLock implementation**: Redis-based distributed locking
prevents duplicate executions
- **Atomic lock acquisition**: Only one executor can hold the lock for a
specific graph execution
- **Automatic lock expiry**: Prevents deadlocks if executor pods crash
or become unresponsive
- **Graceful degradation**: System continues operating even if Redis
becomes temporarily unavailable
### Technical Implementation
- Move ClusterLock to `backend/executor/` alongside ExecutionManager
(its primary consumer)
- Comprehensive integration tests (27 test scenarios) ensure reliability
under all conditions
- Redis client compatibility for different deployment configurations
- Rate-limited lock refresh to minimize Redis load
### Reliability Improvements
- **Context manager support**: Automatic lock cleanup prevents resource
leaks
- **Ownership verification**: Locks can only be refreshed/released by
the owner
- **Concurrency testing**: Thread-safe operations verified under high
contention
- **Error handling**: Robust failure scenarios including network
partitions
## Test Coverage
- ✅ Concurrent executor coordination (prevents duplicate executions)
- ✅ Lock expiry and refresh mechanisms (prevents deadlocks)
- ✅ Redis connection failures (graceful degradation)
- ✅ Thread safety under high load (production scenarios)
- ✅ Long-running executions with periodic refresh
## Impact
- **No more duplicate executions**: Eliminates wasted compute resources
and inconsistent results
- **Improved reliability**: Robust distributed coordination across
executor pods
- **Better resource utilization**: Only one pod processes each execution
- **Scalable architecture**: Supports multiple executor pods without
conflicts
## Validation
- All integration tests pass ✅
- Existing ExecutionManager functionality preserved ✅
- No breaking changes to APIs ✅
- Production-ready distributed locking ✅🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
This PR introduces a new high-performance builder interface for the
AutoGPT platform, implementing a React Flow-based visual editor with
optimized state management and rendering.
#### Key Changes:
1. **New Flow Editor Implementation**
- Built on React Flow for efficient graph rendering and interaction
- Implements a node-based visual workflow builder with custom nodes and
edges
- Dynamic form generation using React JSON Schema Form (RJSF) for block
inputs
- Intelligent connection handling with visual feedback
2. **State Management Optimization**
- Added Zustand for lightweight, performant state management
- Separated node and edge stores for better data isolation
- Reduced unnecessary re-renders through granular state updates
3. **Dual Builder View (Temporary)**
- Added toggle between old and new builder implementations
- Allows A/B testing and gradual migration
- Feature flagged for controlled rollout
4. **Enhanced UI Components**
- Custom form widgets for various input types (date, time, file, etc.)
- Array and object editors with improved UX
- Connection handles with visual state indicators
- Advanced mode toggle for complex configurations
5. **Architecture Improvements**
- Modular component structure for better code organization
- Comprehensive documentation for the new system
- Type-safe implementation with TypeScript
#### Dependencies Added:
- `zustand` (v5.0.2) - State management
- `@rjsf/core` (v5.22.8) - JSON Schema Form core
- `@rjsf/utils` (v5.22.8) - RJSF utilities
- `@rjsf/validator-ajv8` (v5.22.8) - Schema validation
### Performance Improvements 🚀
- **Reduced Re-renders**: Zustand's shallow comparison and selective
subscriptions minimize unnecessary component updates
- **Optimized Graph Rendering**: React Flow provides efficient
canvas-based rendering for large workflows
- **Lazy Loading**: Components are loaded on-demand reducing initial
bundle size
- **Memoized Computations**: Heavy calculations are cached to avoid
redundant processing
### Test Plan 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
#### Test Checklist:
- [x] Create a new agent from scratch with at least 5 blocks
- [x] Connect blocks and verify connections render correctly
- [x] Switch between old and new builder views
- [x] Test all form input types (text, number, boolean, array, object)
- [x] Verify data persistence when switching views
- [x] Test advanced mode toggle functionality
- [x] Performance test with 50+ blocks to verify smooth interaction
### Migration Strategy
The implementation includes a temporary toggle to switch between the old
and new builder. This allows for:
- Gradual user migration
- A/B testing to measure performance improvements
- Fallback option if issues are discovered
- Collecting user feedback before full rollout
### Documentation
Comprehensive documentation has been added:
- `/components/FlowEditor/docs/README.md` - Architecture overview and
store management
- `/components/FlowEditor/docs/FORM_CREATOR.md` - Detailed form system
documentation
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.
### Changes 🏗️
1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
- Prevents TypeError when API returns None for items field
- Ensures robust handling of unexpected API responses
2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
- Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
- [x] Tested both Related Keywords and Keyword Suggestions blocks
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Fixes#10990Fixes#10981
Generated with [Claude Code](https://claude.ai/code)
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.
### Changes 🏗️
1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
- Prevents TypeError when API returns None for items field
- Ensures robust handling of unexpected API responses
2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
- Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
- [x] Tested both Related Keywords and Keyword Suggestions blocks
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Fixes#10990Fixes#10981
Generated with [Claude Code](https://claude.ai/code)
2025-09-25 20:22:10 +00:00
1025 changed files with 118384 additions and 16693 deletions
- `frontend/src/lib/supabase/` - Authentication and database client
@@ -160,6 +175,7 @@ pnpm storybook # Start component development server
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include:
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
@@ -167,6 +183,7 @@ Agents are built using a visual block-based system where each block performs a s
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
@@ -174,13 +191,15 @@ Agents are built using a visual block-based system where each block performs a s
5. Shell environment variables have highest precedence
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
@@ -189,6 +208,7 @@ Agents are built using a visual block-based system where each block performs a s
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
@@ -198,6 +218,7 @@ Agents are built using a visual block-based system where each block performs a s
7. Consider how inputs/outputs connect with other blocks in graph editor
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
@@ -205,21 +226,76 @@ Agents are built using a visual block-based system where each block performs a s
5. Run `poetry run test` to verify changes
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
**📖 Complete Frontend Guide**: See `autogpt_platform/frontend/CONTRIBUTING.md` and `autogpt_platform/frontend/.cursorrules` for comprehensive patterns and conventions.
- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions
- **Testing**: Playwright for E2E, Storybook for component development
### Key Concepts
@@ -153,6 +172,7 @@ Key models (defined in `/backend/schema.prisma`):
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
@@ -160,6 +180,7 @@ Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
@@ -171,6 +192,8 @@ Quick steps:
Note: when making many new blocks analyze the interfaces for each of these blocks and picture if they would go well together in a graph based editor or would they struggle to connect productively?
ex: do the inputs and outputs tie well together?
If you get any pushback or hit complex block conditions check the new_blocks guide in the docs.
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
@@ -180,10 +203,20 @@ ex: do the inputs and outputs tie well together?
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3.Add Storybook stories for new components
4. Test with Playwright if user-facing
See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
1.**Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
- Add `usePageName.ts` hook for logic
- Put sub-components in local `components/` folder
2.**Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
3.**Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Regenerate with `pnpm generate:api`
- Pattern: `use{Method}{Version}{OperationName}`
4.**Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5.**Testing**: Add Storybook stories for new components, Playwright for E2E
6.**Code conventions**: Function declarations (not arrow functions) for components/handlers
6e60a900-9d7d-490e-9af2-a194827ed632,d85882b8-633f-44ce-a315-c20a8c123d19,flux-ai-image-generator,Flux AI Image Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg""]",false,Transform ideas into breathtaking images,"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!","[""creative""]",false,true
f11fc6e9-6166-4676-ac5d-f07127b270c1,c775f60d-b99f-418b-8fe0-53172258c3ce,youtube-transcription-scraper,YouTube Transcription Scraper,https://youtu.be/H8S3pU68lGE,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg""]",false,Fetch the transcriptions from the most popular YouTube videos in your chosen topic,"Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.","[""writing""]",false,true
17908889-b599-4010-8e4f-bed19b8f3446,6e16e65a-ad34-4108-b4fd-4a23fced5ea2,business-ownerceo-finder,Decision Maker Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg""]",false,Contact CEOs today,"Find the key decision-makers you need, fast.
This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you’re looking for and where, and it will:
* Search the area and gather public information
* Return names, roles, and contact details when available
* Provide smart Google search suggestions if details aren’t found
Perfect for:
* B2B sales teams seeking verified leads
* Recruiters sourcing local talent
* Researchers looking to connect with business leaders
Save hours of manual searching and get straight to the people who matter most.","[""business""]",true,true
72beca1d-45ea-4403-a7ce-e2af168ee428,415b7352-0dc6-4214-9d87-0ad3751b711d,smart-meeting-brief,Smart Meeting Prep,https://youtu.be/9ydZR2hkxaY,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png""]",true,Business meeting briefings delivered daily,"Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.
How It Works
1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.
2. It reviews recent email threads with each participant to surface key relationship history and communication context.
3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.
4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.","[""personal""]",true,true
9fa5697a-617b-4fae-aea0-7dbbed279976,b8ceb480-a7a2-4c90-8513-181a49f7071f,automated-support-ai,Automated Support Agent,https://youtu.be/nBMfu_5sgDA,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png""]",true,Automate up to 80 percent of inbound support emails,"Overview:
Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution.
How it Works:
New support emails are routed to the agent.
The agent checks internal documentation for answers.
It measures confidence in the answer found and either replies directly or escalates to a human.
Business Value:
Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.","[""business""]",false,true
2bdac92b-a12c-4131-bb46-0e3b89f61413,31daf49d-31d3-476b-aa4c-099abc59b458,unspirational-poster-maker,Unspirational Poster Maker,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg""]",false,Because adulting is hard,"This witty AI agent generates hilariously relatable ""motivational"" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to ""get it together."" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.","[""creative""]",false,true
9adf005e-2854-4cc7-98cf-f7103b92a7b7,a03b0d8c-4751-43d6-a54e-c3b7856ba4e3,ai-shortform-video-generator-create-viral-ready-content,AI Video Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png""]",false,Create Viral-Ready Shorts Content in Seconds,"OVERVIEW
Transform any trending headline or broad topic into a polished, vertical short-form video in a single run.
The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags.
HOW IT WORKS
1. Input a topic or an exact news headline.
2. The agent fetches live search results and selects the most engaging related story.
3. Key facts are summarised into concise research notes.
4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action.
5. GPT-4o generates an eye-catching title and one or two discoverability hashtags.
6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”).
– All voice, style and resolution settings can be adjusted in the Builder before you press ""Run"".
7. Output delivered: Title, Script, Hashtags, Video URL.
KEY USE CASES
- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”).
- Real-time newsjacking with a specific breaking headline.
- Product-launch spotlights and quick event recaps while interest is high.
BUSINESS VALUE
- One-click speed: from idea to finished video in minutes.
- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec.
- No-code workflow: marketers create social video without design or development queues.
- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys.
- The agent outputs exactly one video per execution. Run it again for additional shorts.
- Video rendering time varies; AI-generated footage may take several minutes.","[""writing""]",false,true
864e48ef-fee5-42c1-b6a4-2ae139db9fc1,55d40473-0f31-4ada-9e40-d3a7139fcbd4,automated-blog-writer,Automated SEO Blog Writer,https://youtu.be/nKcDCbDVobs,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg""]",true,"Automate research, writing, and publishing for high-ranking blog posts","Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.
How it works:
1. Share your pitch, website, and values.
2. The agent studies your site and uncovers proven SEO opportunities.
3. It spends two hours researching and drafting each post.
4. You set the cadence—publishing runs on autopilot.
Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.
Use cases:
• Founders: Keep your blog active with no time drain.
• Agencies: Deliver scalable SEO content for clients.
• Strategists: Automate execution, focus on strategy.
• Marketers: Drive steady organic growth.
• Local businesses: Capture nearby search traffic.","[""writing""]",false,true
6046f42e-eb84-406f-bae0-8e052064a4fa,a548e507-09a7-4b30-909c-f63fcda10fff,lead-finder-local-businesses,Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp""]",false,Auto-Prospect Like a Pro,"Turbo-charge your local lead generation with the AutoGPT Marketplace’s top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching.
**WHAT IT DOES**
• Searches Google Maps via the official API (no scraping)
• Prompts like “dentists in Chicago” or “coffee shops near me”
• Exports instantly to your CRM, sheet, or outreach workflow
**WHY YOU’LL LOVE IT**
✓ Hyper-targeted leads in minutes
✓ Unlimited searches & locations
✓ Zero CAPTCHAs or IP blocks
✓ Works on AutoGPT Cloud or self-hosted (with your API key)
✓ Cut prospecting time by 90%
**PERFECT FOR**
— Marketers & PPC agencies
— SEO consultants & designers
— SaaS founders & sales teams
Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit.
→ Click *Add to Library* and own your market today.","[""business""]",true,true
f623c862-24e9-44fc-8ce8-d8282bb51ad2,eafa21d3-bf14-4f63-a97f-a5ee41df83b3,linkedin-post-generator,LinkedIn Post Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png""]",false,Auto‑craft LinkedIn gold,"Create research‑driven, high‑impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed.
FEATURES
• Automated YouTube research – discovers and analyses top‑ranked videos so you don’t have to
• AI‑curated synthesis – combines multiple transcripts into one authoritative narrative
• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos
• LinkedIn‑optimised output – hook, 2‑3 key points, CTA, strategic line breaks, 3‑5 hashtags, no markdown
• One‑click publish – returns a ready‑to‑post text block (≤1300 characters)
HOW IT WORKS
1. Enter a topic and your preferred writing parameters.
2. The agent builds a YouTube search, fetches the page, and extracts the topN video URLs.
3. It pulls each transcript, then feeds them—plus your settings—into Claude3.5Sonnet.
4. The model writes a concise, engaging post designed for maximum LinkedIn engagement.
USE CASES
• Thought‑leadership updates backed by fresh video research
• Rapid industry summaries after major events, webinars, or conferences
• Consistent LinkedIn content for busy founders, marketers, and creators
WHY YOU’LL LOVE IT
Save hours of manual research, avoid surface‑level hot‑takes, and publish posts that showcase real expertise—without the heavy lift.","[""writing""]",true,true
7d4120ad-b6b3-4419-8bdb-7dd7d350ef32,e7bb29a1-23c7-4fee-aa3b-5426174b8c52,youtube-to-linkedin-post-converter,YouTube to LinkedIn Post Converter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png""]",false,Transform Your YouTube Videos into Engaging LinkedIn Posts with AI,"WHAT IT DOES:
This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn.
HOW IT WORKS:
- You provide the URL to the YouTube video (required)
- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.)
- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.)
- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model
- The models extract key insights, memorable quotes, and the main points from the video
- You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement
INPUTS:
- Source YouTube Video – Provide the URL to the YouTube video
- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.)
- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.)
- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.)
OUTPUT:
- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences
Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.","[""writing""]",false,true
c61d6a83-ea48-4df8-b447-3da2d9fe5814,00fdd42c-a14c-4d19-a567-65374ea0e87f,personalized-morning-coffee-newsletter,Personal Newsletter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png""]",false,Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.,"This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.
How It Works
1. Enter your favorite topics, industries, or areas of interest.
2. Choose your tone—professional, casual, or humorous.
3. Set your preferred delivery cadence: daily or weekly.
4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter.
Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.
Use Cases
• Executives: Get a daily digest of market updates and leadership insights.
• Marketers: Receive curated creative trends and campaign inspiration.
• Entrepreneurs: Stay updated on your industry without information overload.","[""research""]",true,true
e2e49cfc-4a39-4d62-a6b3-c095f6d025ff,fc2c9976-0962-4625-a27b-d316573a9e7f,email-address-finder,Email Scout - Contact Finder Assistant,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg""]",false,Find contact details from name and location using AI search,"Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information.
Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like ""Tim Cook, USA"" or ""Sarah Smith, London"" and let the AI assistant do the work of finding potential contact details.
Key Features:
- Quick search from just name and location
- Scans multiple public sources
- Automated AI-powered search process
- Easy to use with simple inputs
Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact.
Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible.","[""""]",false,true
81bcc372-0922-4a36-bc35-f7b1e51d6939,e437cc95-e671-489d-b915-76561fba8c7f,ai-youtube-to-blog-converter,YouTube Video to SEO Blog Writer,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png""]",false,One link. One click. One powerful blog post.,"Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts.
Your videos deserve a second life—in writing.
Make your content work twice as hard by repurposing it into engaging, searchable articles.
Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest.
FEATURES
• CONTENT ANALYSIS
Extracts key points from the video while preserving your message and intent.
• CUSTOMIZABLE OUTPUT
Select a tone that fits your audience: casual, professional, educational, or formal.
• SEO OPTIMIZATION
Automatically creates engaging titles and structured subheadings for better search visibility.
• USER-FRIENDLY
Repurpose your videos into written content to expand your reach and improve accessibility.
Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless.
","[""writing""]",true,true
5c3510d2-fc8b-4053-8e19-67f53c86eb1a,f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9,ai-webpage-copy-improver,AI Webpage Copy Improver,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg""]",false,Boost Your Website's Search Engine Performance,"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.","[""marketing""]",true,true
94d03bd3-7d44-4d47-b60c-edb2f89508d6,b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0,domain-name-finder,Domain Name Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg""]",false,Instantly generate brand-ready domain names that are actually available,"Overview:
Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable.
How It Works
1. Input your product pitch, company name, or core keywords.
2. The agent analyzes brand tone, audience, and industry context.
3. It generates a list of unique, memorable domains that match your criteria.
4. All names are pre-filtered for real-time availability, so you can register immediately.
Business Value
Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.
Key Use Cases
• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.
• Marketers: Test name options across campaigns with instant availability data.
• Entrepreneurs: Validate ideas faster with instant domain options.","[""business""]",false,true
7a831906-daab-426f-9d66-bcf98d869426,516d813b-d1bc-470f-add7-c63a4b2c2bad,ai-function,AI Function,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png""]",false,Never Code Again,"AI FUNCTION MAGIC
Your AI‑powered assistant for turning plain‑English descriptions into working Python functions.
HOW IT WORKS
1. Describe what the function should do.
2. Specify the inputs it needs.
3. Receive the generated Python code.
FEATURES
- Effortless Function Generation: convert natural‑language specs into complete functions.
- Customizable Inputs: define the parameters that matter to you.
- Versatile Use Cases: simulate data, automate tasks, prototype ideas.
- Seamless Integration: add the generated function directly to your codebase.
EXAMPLE
Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.”
Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!
Fetch the transcriptions from the most popular YouTube videos in your chosen topic
Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.
Find the key decision-makers you need, fast.
This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you’re looking for and where, and it will:
* Search the area and gather public information
* Return names, roles, and contact details when available
* Provide smart Google search suggestions if details aren’t found
Perfect for:
* B2B sales teams seeking verified leads
* Recruiters sourcing local talent
* Researchers looking to connect with business leaders
Save hours of manual searching and get straight to the people who matter most.
Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.
How It Works
1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.
2. It reviews recent email threads with each participant to surface key relationship history and communication context.
3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.
4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.
Automate up to 80 percent of inbound support emails
Overview:
Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution.
How it Works:
New support emails are routed to the agent.
The agent checks internal documentation for answers.
It measures confidence in the answer found and either replies directly or escalates to a human.
Business Value:
Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.
This witty AI agent generates hilariously relatable "motivational" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to "get it together." Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.
OVERVIEW
Transform any trending headline or broad topic into a polished, vertical short-form video in a single run.
The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags.
HOW IT WORKS
1. Input a topic or an exact news headline.
2. The agent fetches live search results and selects the most engaging related story.
3. Key facts are summarised into concise research notes.
4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action.
5. GPT-4o generates an eye-catching title and one or two discoverability hashtags.
6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”).
– All voice, style and resolution settings can be adjusted in the Builder before you press "Run".
7. Output delivered: Title, Script, Hashtags, Video URL.
KEY USE CASES
- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”).
- Real-time newsjacking with a specific breaking headline.
- Product-launch spotlights and quick event recaps while interest is high.
BUSINESS VALUE
- One-click speed: from idea to finished video in minutes.
- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec.
- No-code workflow: marketers create social video without design or development queues.
- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys.
Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once.
IMPORTANT NOTES
- The agent outputs exactly one video per execution. Run it again for additional shorts.
- Video rendering time varies; AI-generated footage may take several minutes.
Automate research, writing, and publishing for high-ranking blog posts
Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.
How it works:
1. Share your pitch, website, and values.
2. The agent studies your site and uncovers proven SEO opportunities.
3. It spends two hours researching and drafting each post.
4. You set the cadence—publishing runs on autopilot.
Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.
Use cases:
• Founders: Keep your blog active with no time drain.
• Agencies: Deliver scalable SEO content for clients.
• Strategists: Automate execution, focus on strategy.
• Marketers: Drive steady organic growth.
• Local businesses: Capture nearby search traffic.
Turbo-charge your local lead generation with the AutoGPT Marketplace’s top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching.
**WHAT IT DOES**
• Searches Google Maps via the official API (no scraping)
• Prompts like “dentists in Chicago” or “coffee shops near me”
• Returns: Name, Website, Rating, Reviews, **Phone & Address**
• Exports instantly to your CRM, sheet, or outreach workflow
**WHY YOU’LL LOVE IT**
✓ Hyper-targeted leads in minutes
✓ Unlimited searches & locations
✓ Zero CAPTCHAs or IP blocks
✓ Works on AutoGPT Cloud or self-hosted (with your API key)
✓ Cut prospecting time by 90%
**PERFECT FOR**
— Marketers & PPC agencies
— SEO consultants & designers
— SaaS founders & sales teams
Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit.
→ Click *Add to Library* and own your market today.
Create research‑driven, high‑impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed.
FEATURES
• Automated YouTube research – discovers and analyses top‑ranked videos so you don’t have to
• AI‑curated synthesis – combines multiple transcripts into one authoritative narrative
• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos
• LinkedIn‑optimised output – hook, 2‑3 key points, CTA, strategic line breaks, 3‑5 hashtags, no markdown
• One‑click publish – returns a ready‑to‑post text block (≤1 300 characters)
HOW IT WORKS
1. Enter a topic and your preferred writing parameters.
2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs.
3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet.
4. The model writes a concise, engaging post designed for maximum LinkedIn engagement.
USE CASES
• Thought‑leadership updates backed by fresh video research
• Rapid industry summaries after major events, webinars, or conferences
• Consistent LinkedIn content for busy founders, marketers, and creators
WHY YOU’LL LOVE IT
Save hours of manual research, avoid surface‑level hot‑takes, and publish posts that showcase real expertise—without the heavy lift.
Transform Your YouTube Videos into Engaging LinkedIn Posts with AI
WHAT IT DOES:
This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn.
HOW IT WORKS:
- You provide the URL to the YouTube video (required)
- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.)
- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.)
- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model
- The models extract key insights, memorable quotes, and the main points from the video
- You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement
INPUTS:
- Source YouTube Video – Provide the URL to the YouTube video
- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.)
- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.)
- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.)
OUTPUT:
- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences
Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.
Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.
This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.
How It Works
1. Enter your favorite topics, industries, or areas of interest.
2. Choose your tone—professional, casual, or humorous.
3. Set your preferred delivery cadence: daily or weekly.
4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter.
Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.
Use Cases
• Executives: Get a daily digest of market updates and leadership insights.
• Marketers: Receive curated creative trends and campaign inspiration.
• Entrepreneurs: Stay updated on your industry without information overload.
Find contact details from name and location using AI search
Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information.
Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like "Tim Cook, USA" or "Sarah Smith, London" and let the AI assistant do the work of finding potential contact details.
Key Features:
- Quick search from just name and location
- Scans multiple public sources
- Automated AI-powered search process
- Easy to use with simple inputs
Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact.
Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible.
Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts.
Your videos deserve a second life—in writing.
Make your content work twice as hard by repurposing it into engaging, searchable articles.
Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest.
FEATURES
• CONTENT ANALYSIS
Extracts key points from the video while preserving your message and intent.
• CUSTOMIZABLE OUTPUT
Select a tone that fits your audience: casual, professional, educational, or formal.
• SEO OPTIMIZATION
Automatically creates engaging titles and structured subheadings for better search visibility.
• USER-FRIENDLY
Repurpose your videos into written content to expand your reach and improve accessibility.
Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless.
Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.
Instantly generate brand-ready domain names that are actually available
Overview:
Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable.
How It Works
1. Input your product pitch, company name, or core keywords.
2. The agent analyzes brand tone, audience, and industry context.
3. It generates a list of unique, memorable domains that match your criteria.
4. All names are pre-filtered for real-time availability, so you can register immediately.
Business Value
Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.
Key Use Cases
• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.
• Marketers: Test name options across campaigns with instant availability data.
• Entrepreneurs: Validate ideas faster with instant domain options.
AI FUNCTION MAGIC
Your AI‑powered assistant for turning plain‑English descriptions into working Python functions.
HOW IT WORKS
1. Describe what the function should do.
2. Specify the inputs it needs.
3. Receive the generated Python code.
FEATURES
- Effortless Function Generation: convert natural‑language specs into complete functions.
- Customizable Inputs: define the parameters that matter to you.
- Versatile Use Cases: simulate data, automate tasks, prototype ideas.
- Seamless Integration: add the generated function directly to your codebase.
EXAMPLE
Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.”
Input parameter: number_of_people (default 20)
Result: a list of dictionaries such as
[
{ "name": "Emma Martinez", "date_of_birth": "1992‑11‑03", "job_title": "Data Analyst", "age": 32 },
{ "name": "Liam O’Connor", "date_of_birth": "1985‑07‑19", "job_title": "Marketing Manager", "age": 39 },
…18 more entries…
]
"description":"This witty AI agent generates hilariously relatable \"motivational\" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clich\u00e9s and embrace our collective struggles to \"get it together.\" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.",
"prompt":"A cat sprawled dramatically across an important-looking document during a work-from-home meeting, making direct eye contact with the camera while knocking over a coffee mug in slow motion. Text Overlay: \"Chaos is a career path. Be the obstacle everyone has to work around.\"",
"prompt":"<example_output>\nA photo of a sloth lounging on a desk, with its head resting on a keyboard. The keyboard is on top of a laptop with a blank spreadsheet open. A to-do list is placed beside the laptop, with the top item written as \"Do literally anything\". There is a text overlay that says \"If you can't outwork them, outnap them.\".\n</example_output>\n\nCreate a relatable satirical, snarky, user-deprecating motivational style image based on the theme: \"{{THEME}}\".\n\nOutput only the image description and caption, without any additional commentary or formatting.",
"description":"## AI-Powered Function Magic: Never code again!\nProvide a description of a python function and your inputs and AI will provide the results.",
"sys_prompt":"You are now the following python function:\n\n```\n# {{DESCRIPTION}}\n{{FUNCTION}}\n```\n\nThe user will provide your input arguments.\nOnly respond with your `return` value.\nDo not include any commentary or additional text in your response. \nDo not include ``` backticks or any other decorators.",
"description":"The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"value":"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.",
"secret":false,
"advanced":false,
"description":"Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"description":"The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"description":"Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"default":"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age."
}
},
"required":[]
},
"output_schema":{
"type":"object",
"properties":{
"return":{
"advanced":false,
"secret":false,
"title":"return",
"description":"The value returned by the function"
"description":"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!",
"prompt":"Generate an incredibly detailed, photorealistic image prompt about {{TOPIC}}, describing the camera it's taken with and prompting the diffusion model to use all the best quality techniques.\n\nOutput only the prompt with no additional commentary.",
"description":"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.",
"prompt":"Current Webpage Content:\n```\n{{CONTENT}}\n```\n\nBased on the following analysis of the webpage content:\n\n```\n{{ANALYSIS}}\n```\n\nRewrite and improve the content to address the identified issues. Focus on:\n1. Enhancing clarity and readability\n2. Optimizing for SEO (suggest and incorporate relevant keywords)\n3. Improving calls-to-action for better conversion rates\n4. Refining the structure and organization\n5. Maintaining brand consistency while improving the overall tone\n\nProvide the improved content in HTML format inside a code-block with \"```\" backticks, preserving the original structure where appropriate. Also, include a brief summary of the changes made and their potential impact.",
"prompt":"Analyze the following webpage content and provide a detailed report on its current state, including strengths and weaknesses in terms of clarity, SEO optimization, and potential for conversion:\n\n{{CONTENT}}\n\nInclude observations on:\n1. Overall readability and clarity\n2. Use of keywords and SEO-friendly language\n3. Effectiveness of calls-to-action\n4. Structure and organization of content\n5. Tone and brand consistency",
"prompt":"<business_website>\n{{WEBSITE_CONTENT}}\n</business_website>\n\nExtract the Contact Email of {{BUSINESS_NAME}}.\n\nIf no email that can be used to contact {{BUSINESS_NAME}} is present, output `N/A`.\nDo not share any emails other than the email for this specific entity.\n\nIf multiple present pick the likely best one.\n\nRespond with the email (or N/A) inside <email></email> tags.\n\nExample Response:\n\n<thoughts_or_comments>\nThere were many emails present, but luckily one was for {{BUSINESS_NAME}} which I have included below.\n</thoughts_or_comments>\n<email>\nexample@email.com\n</email>",
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
@@ -69,7 +75,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
@@ -109,7 +115,7 @@ class SearchPeopleBlock(Block):
@@ -36,14 +44,135 @@ class ProgrammingLanguage(Enum):
JAVA="java"
classCodeExecutionBlock(Block):
classMainCodeExecutionResult(BaseModel):
"""
*Pydantic model mirroring `e2b_code_interpreter.Result`*
Represents the data to be displayed as a result of executing a cell in a Jupyter notebook.
The result is similar to the structure returned by ipython kernel: https://ipython.readthedocs.io/en/stable/development/execution.html#execution-semantics
The result can contain multiple types of data, such as text, images, plots, etc. Each type of data is represented
as a string, and the result can contain multiple types of data. The display calls don't have to have text representation,
for the actual result the representation is always present for the result, the other representations are always optional.
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.