- Resolves #10300
- Follow-up fix to #10167
### Changes 🏗️
- Include sub-graphs in `get_library_agent` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Executing agent with sub-graphs that require credentials works
During data manipulation refactoring, the CreateListBlock lost its
important batching functionality with max_size and max_tokens parameters.
This restores the original implementation that can yield lists in chunks
based on size or token limits.
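A minimal sketch of the restored chunking behavior, assuming a `count_tokens()` helper (illustrative, not the exact implementation):

```python
def yield_batches(items, max_size=None, max_tokens=None):
    """Yield the list in chunks, flushing when either limit would be exceeded."""
    batch, batch_tokens = [], 0
    for item in items:
        tokens = count_tokens(str(item))  # count_tokens is an assumed helper
        if batch and (
            (max_size and len(batch) >= max_size)
            or (max_tokens and batch_tokens + tokens > max_tokens)
        ):
            yield batch
            batch, batch_tokens = [], 0
        batch.append(item)
        batch_tokens += tokens
    if batch:
        yield batch
```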
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
## Summary
This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).
## Changes
### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`
### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results
## Pattern Applied
The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items
# Then individual items for iteration
for item in all_items:
yield "singular_output", item
```
## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` - ✅ Passed
- `poetry run test` - ✅ Blocks validated
## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
#### New List Operation Blocks
- Implement `GetListItemBlock` for retrieving an element at a specific
index, with negative index support
- Introduce `RemoveFromListBlock` to remove or pop items and optionally
return the removed value
- Add `ReplaceListItemBlock` to overwrite an item at a given index and
return the old value
- Provide `ListIsEmptyBlock` for quickly checking if a list has no
elements
#### New Dictionary Operation Blocks (for consistency with list operations)
- Add `RemoveFromDictionaryBlock` to remove key-value pairs and
optionally return the removed value
- Implement `ReplaceDictionaryValueBlock` to replace values for a
specified key and return the old value
- Provide `DictionaryIsEmptyBlock` for checking if a dictionary has no
elements
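For illustration, the plain-Python semantics these blocks wrap (the real blocks add typed input/output schemas around these operations):

```python
items = ["a", "b", "c"]

items[-1]                      # GetListItemBlock: negative indices supported -> "c"
removed = items.pop(1)         # RemoveFromListBlock: optionally return the value -> "b"
old, items[0] = items[0], "z"  # ReplaceListItemBlock: overwrite, return old value
is_empty = not items           # ListIsEmptyBlock -> False

data = {"key": "old"}
old_value, data["key"] = data["key"], "new"  # ReplaceDictionaryValueBlock
removed = data.pop("key")                    # RemoveFromDictionaryBlock -> "new"
is_empty = not data                          # DictionaryIsEmptyBlock -> True
```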
#### Code Organization & Refactoring
- **Created `data_manipulation.py`**: Moved all dictionary and list
manipulation blocks to a dedicated file to prevent `basic.py` from
becoming too large
- **Refactored `basic.py`**: Now contains only core utility blocks
(FileStore, StoreValue, PrintToConsole, Note, UniversalTypeConverter)
- **Ensured consistency**: Dictionary and list blocks now have
equivalent functionality and follow the same patterns
- **Removed redundancy**: Eliminated duplicate `GetDictionaryValueBlock`
since `FindInDictionaryBlock` already provides comprehensive lookup
functionality
- **Preserved UUIDs**: All existing block UUIDs maintained to ensure no
breaking changes
#### Block Organization Summary
**`basic.py` (core utilities):**
- `FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`,
`NoteBlock`, `UniversalTypeConverterBlock`
**`data_manipulation.py` (dictionary & list operations):**
- **Dictionary blocks:** Create, Add, Find, Remove, Replace, IsEmpty
- **List blocks:** Create, Add, Find, Get, Remove, Replace, IsEmpty
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run test`
- [x] `pnpm format`
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
The block library was missing two key capabilities that keep coming up
in real-world agent flows:
1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.
This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.
---
### Changes 🏗️
#### 📚 Mem0 block enhancements
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.
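A sketch of how this scoping could map onto the Mem0 client; `agent_id`/`run_id` are part of Mem0's public API, while the block wiring and filter mapping shown here are assumptions:

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="...")

# AddMemoryBlock with the new scoping inputs enabled: tie the memory to the
# current run/agent so parallel agents for the same user don't cross-talk.
client.add(
    [{"role": "user", "content": "Prefers weekly summaries"}],
    user_id="user-123",
    agent_id="agent-abc",  # when limit_memory_to_agent is set
    run_id="run-xyz",      # when limit_memory_to_run is set
)

# SearchMemoryBlock with the same scoping plus the new filters:
results = client.search(
    "summary preferences",
    user_id="user-123",
    agent_id="agent-abc",
    run_id="run-xyz",
    categories=["preferences"],   # categories_filter (assumed mapping)
    metadata={"source": "chat"},  # metadata_filter (assumed mapping)
)
```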
#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold, italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |
* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.
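For context, roughly what a read block does under the hood, sketched with the official `googleapiclient` (the blocks themselves add schemas, test mocks, and the platform's OAuth credential handling):

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials(token="ya29.placeholder")  # obtained via the platform's OAuth flow
service = build("sheets", "v4", credentials=creds)

# Read a range, as GoogleSheetsReadBlock would:
result = (
    service.spreadsheets()
    .values()
    .get(spreadsheetId="SPREADSHEET_ID", range="Sheet1!A1:C10")
    .execute()
)
rows = result.get("values", [])
```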
---
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
This PR introduces key-value storage blocks.
### Changes 🏗️
- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use a `#` delimiter for storage keys: `agent#{graph_id}#{key}` and `global#{key}`
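A minimal sketch of the key construction for the two scopes (function name assumed):

```python
def storage_key(scope: str, key: str, graph_id: str | None = None) -> str:
    # within_agent: keys are namespaced per agent graph;
    # across_agents: keys are shared across all of the user's agents.
    if scope == "within_agent":
        return f"agent#{graph_id}#{key}"
    return f"global#{key}"
```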
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run all 244 block tests - all passing ✅
- [x] Test PersistInformationBlock with mock data storage
- [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
- [x] Verify database function integration through all manager classes
- [x] Run lint and type checking - all passing ✅
- [x] Verify database migration is included and valid
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10298
- Follow-up to #10270
### Changes 🏗️
Amend two changes from #10270:
- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
- Run the platform locally
- [x] -> `/library` loads normally
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph executions by graph_id or user_id alone
- Refactored logging metadata structure for better error tracking
## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
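Conceptually, the timeout-based stop waiting looks like the sketch below; all helper names are assumptions, not the actual implementation:

```python
import asyncio

async def stop_graph_execution(graph_exec_id: str, timeout: float = 15.0):
    await request_cancellation(graph_exec_id)  # assumed helper
    try:
        # Wait for the execution to reach a terminal state...
        await asyncio.wait_for(wait_until_terminated(graph_exec_id), timeout)
    except asyncio.TimeoutError:
        # ...and force the status transition if it never gets there.
        await set_execution_status(graph_exec_id, "TERMINATED")  # assumed helper
```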
### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling
## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284
### Changes 🏗️
- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.
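A minimal sketch of the conversion (the actual call site in `make_request` may differ):

```python
import aiohttp

def normalize_auth(auth):
    # aiohttp's ClientSession.request rejects plain tuples for `auth`,
    # so wrap (login, password) tuples in aiohttp.BasicAuth.
    if isinstance(auth, tuple):
        return aiohttp.BasicAuth(*auth)
    return auth  # already a BasicAuth (or None), pass through
```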
Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue: aiohttp's `ClientSession.request` received a plain tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing the OAuth2 token exchange to fail.
This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
A graph execution that fails due to interruption or an unknown error should be re-enqueued. However, swallowing the error means the execution is never marked as a failure.
### Changes 🏗️
* Avoid repeatedly updating the graph execution status on each node execution step.
* Added a guard rail to avoid completing graph execution on
non-completed execution status.
* Avoid acknowledging messages from the queue if the graph execution is
not yet completed.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run graph execution, kill the process, re-run the process
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Currently, we are using PyClamd to run a file anti-virus scan for all
the files uploaded into the platform. We split the file into small
chunks and serially check the chunks for the virus scan. The socket is
not thread-safe, and we need to create multiple sockets across many
threads to leverage concurrency. To make this step concurrent and keep
it fully async, we need to migrate PyClamd to aioclamd.
### Changes 🏗️
Convert pyclamd to aioclamd and scan chunks in parallel, with a semaphore capping concurrency.
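A sketch of the concurrent chunk scan using aioclamd and an `asyncio.Semaphore`; chunk size, concurrency limit, and result handling are illustrative:

```python
import asyncio
from io import BytesIO

from aioclamd import ClamdAsyncClient

MAX_CONCURRENT_SCANS = 10  # illustrative limit

async def scan_file(data: bytes, chunk_size: int = 25 * 1024 * 1024) -> bool:
    clamd = ClamdAsyncClient("localhost", 3310)
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_SCANS)
    chunks = [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)]

    async def scan_chunk(chunk: bytes):
        async with semaphore:
            return await clamd.instream(BytesIO(chunk))

    results = await asyncio.gather(*(scan_chunk(c) for c in chunks))
    # INSTREAM replies with {'stream': ('FOUND', <sig>)} on detection (assumed
    # to mirror python-clamd's reply format), ('OK', None) otherwise.
    return all(r.get("stream", (None,))[0] != "FOUND" for r in results)
```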
#### Side Note
Shout-out to @tedyu for raising this improvement idea.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute file upload into the platform
## Changes 🏗️
We need to rename `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL`, since it is needed by the new API client on the frontend to make requests. The `NEXT_PUBLIC_` prefix is important so that the variable is available on the client.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] The library and other pages work
- Resolves #10217

https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853
### Changes 🏗️
Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
- Add `accent` variant to `ui/button:Button`
API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to Builder
- Add a trigger block
- [x] -> action buttons disappear; message shows in their place
- Save the graph
- Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
CreateListBlock can only batch lists based on the size limit, but
sometimes we need the size to be dynamically adjusted based on the token
count.
### Changes 🏗️
Improve CreateListBlock to support batching based on token count
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test CreateListBlock
Complete the implementation of the Agent Run Scheduling UX in the
Library.
Demo:
https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2
### Changes 🏗️
Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
- Update corresponding logic on `AgentRunsPage`
- Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
- Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
- Add schedule description to info box
- Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
- Switch tabs automatically when a run or schedule is selected
- Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`
Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
- `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
- Add `GET /graphs/{graph_id}/schedules`
- Add not found error handling to `DELETE /schedules/{schedule_id}`
- Minor refactoring
Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
- [x] -> newly created schedule appears in the list
- [x] -> scheduled runs are successful
- Click "Delete schedule"
- [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
- [x] -> no new runs occur from the deleted schedule
Calling an LLM through the current block can sometimes break when the prompt exceeds the model's context window.
A prompt compaction algorithm (enabled by default) is applied to make sure the prompt sent stays within the context window limit.
### Changes 🏗️
#### Heuristics
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough, then reduce the conversation list.
* The rest of the implementation is adjusted to minimize LLM call breakage.

#### Strategy
1. **Token-aware truncation** – progressively halve a per-message cap (`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the *content* of every message except the first and last. Tool shells are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete whole messages working outward from the centre, **skipping** any message that contains `tool_calls` or has `role == "tool"`.
3. **Last-chance trim** – if still too big, truncate the *first* and *last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large:
   * raise `ValueError` when `lossy_ok == False` (default)
   * return the partially-trimmed prompt when `lossy_ok == True`
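A minimal sketch of this strategy, assuming `count_tokens()` and `truncate_to_tokens()` helpers (not the actual implementation):

```python
def compact_prompt(messages, limit, start_cap=4096, floor_cap=128, lossy_ok=False):
    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    # 1. Token-aware truncation: halve a per-message cap and shrink bodies,
    #    leaving the first and last messages untouched.
    cap = start_cap
    while total(messages) > limit and cap >= floor_cap:
        for msg in messages[1:-1]:
            msg["content"] = truncate_to_tokens(msg["content"], cap)
        cap //= 2

    # 2. Middle-out deletion: drop whole messages from the centre outward,
    #    skipping tool calls and tool results to keep the conversation valid.
    mid = len(messages) // 2
    for i in sorted(range(1, len(messages) - 1), key=lambda i: abs(i - mid)):
        if total([m for m in messages if m]) <= limit:
            break
        if messages[i].get("tool_calls") or messages[i].get("role") == "tool":
            continue
        messages[i] = None
    messages = [m for m in messages if m]

    # 3. Last-chance trim: truncate the first and last bodies to floor_cap.
    if total(messages) > limit:
        for msg in (messages[0], messages[-1]):
            msg["content"] = truncate_to_tokens(msg["content"], floor_cap)

    # 4. Still too large: fail loudly unless a lossy prompt is acceptable.
    if total(messages) > limit and not lossy_ok:
        raise ValueError("Prompt exceeds context window even after compaction")
    return messages
```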
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run an SDM block in a loop until it hits 200,000 tokens using the OpenAI o3 model.
- Follow-up fix to #10138
AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.
### Changes 🏗️
- Include full diff in `changes` output of `GithubReadPullRequestBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
- [ ] -> block runs successfully
- [ ] -> full diff included in `changes` output
### Why these changes are needed 🧐
Revid.ai offers several specialised, undocumented rendering flows beyond the basic "text-to-video" endpoint our platform already supported. These flows make it possible to:
1. **Generate ads** from copy plus product images (30-second vertical spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal for product-led demos.

Without first-class blocks for these flows, users were forced to drop to raw HTTP nodes, losing schema validation, test mocks, and credential management.
### Changes 🏗️
Added a new category to `BlockCategory` in `block.py`: `MARKETING = "Block that helps with marketing"`.

| Area | Change | Notes |
|------|--------|-------|
| `ai_shortform_video_block.py` | Refactored out a shared `_RevidMixin` (webhook + polling helpers). | Keeps DRY across new blocks. |
| | Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members. | Required by Revid sample payloads. |
| `AIAdMakerVideoCreatorBlock` | Implements ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media`. | |
| `AIPromptToVideoCreatorBlock` | Implements prompt-to-video flow with `prompt_target_duration`. | |
| `AIScreenshotToVideoAdBlock` | Implements screenshot-to-video-ad flow (avatar narration, BG removal). | |
| | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in the UI. |

No existing functionality was removed; the current `AIShortformVideoCreatorBlock` is untouched apart from enum imports.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use the `AI Shortform Video Creator` block to generate a video
- [x] Test the `AI Ad Maker Video Creator` block the same way
- [x] Test the `AI Screenshot to Video Ad` block
---------
Co-authored-by: Bently <Github@bentlybro.com>
### Changes 🏗️
* Add an enriching email feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute SearchPeopleBlock & GetPersonDetailBlock
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.
### Changes 🏗️
Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>
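A hypothetical illustration of host-scoped credential lookup: a stored credential is only attached when the request's host matches the credential's scope. All names here are assumptions:

```python
from urllib.parse import urlparse

def credentials_for(url: str, host_credentials: dict[str, str]) -> str | None:
    # Only return a credential whose scope matches the request's host exactly.
    return host_credentials.get(urlparse(url).hostname or "")

headers = {}
token = credentials_for(
    "https://api.openai.com/v1/images/edits",
    {"api.openai.com": "sk-..."},  # host-scoped credential store (assumed shape)
)
if token:
    headers["Authorization"] = f"Bearer {token}"
```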
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Uses `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock by passing the api-key through host-scoped
credentials.
An anti-virus file scan step is added to each file upload step on the
platform before the file is sent to cloud storage or local machine
storage.
### Changes 🏗️
* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Made the step mandatory even in local development
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tried using FileUploadBlock & AgentFileInputBlock
## Changes 🏗️
The test data script was not working locally; this should fix it 🤞🏽
- Fixed `agentId` → `agentGraphId` field references in preset matching
logic
- Fixed `agentId` → `agentGraphId` field references in store listing
graph lookup
- Added graph uniqueness logic to prevent duplicate library agents per
user
- Improved data consistency by ensuring proper foreign key relationships
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified script runs without database schema errors
- [x] Confirmed foreign key relationships are properly maintained
- [x] Tested that library agents use unique graphs per user
- [x] Validated preset matching uses correct field references
This PR makes several improvements to the `update_library_agent`
endpoint.
- Resolves #10216
### Changes 🏗️
- Add `DELETE /library/agents/{id}` endpoint
- Fix `PUT /library/agents/{id}` endpoint
- Return updated library agent
- Remove `is_deleted` parameter
- Change method from `PUT` to `PATCH`
Also, a small DX improvement:
- Expose `BackendAPI` globally through `window.api` for local dev
purposes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Deleting library agents works
- Follow-up fix to #10167
- Resolves #10228
### Changes 🏗️
- Don't assume `block.input_schema.jsonschema()["required"]` exists
- Unbreak handling of `webhook_type` in
`BaseWebhooksManager.get_manual_webhook(..)`
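For illustration, the defensive form of the first fix (exact call site assumed):

```python
# Fall back to an empty list instead of assuming "required" is present.
required_fields = block.input_schema.jsonschema().get("required", [])
```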
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with a Generic Webhook Trigger block; go to it in the
Library
- [x] -> `/library/agents/[id]` loads normally
AIImageEditorBlock was not able to accept an image from AgentFileInput
or FileStore block.
### Changes 🏗️
* Add support for image loading for the image editor block:
<img width="1081" alt="Screenshot 2025-06-23 at 10 28 45 AM"
src="https://github.com/user-attachments/assets/ac3fea91-9503-4894-bbe3-2dc3c5649a39"
/>
* Avoid rendering a relative path image file.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run the AIImageEditorBlock with input from an AgentFileInput or FileStore block.
This pull request adds support for setting up (webhook-)triggered agents
in the Library. It contains changes throughout the entire stack to make
everything work in the various phases of a triggered agent's lifecycle:
setup, execution, updates, deletion.
Setting up agents with webhook triggers was previously only possible in the Builder, limiting their use to the agent's creator. To make this work in the Library, this change stores the trigger information on the previously introduced `AgentPreset`, instead of on the graph's nodes, to which only the graph's creator has access.
- Initial ticket: #10111
- Builds on #9786


### Changes 🏗️
Frontend:
- Amend the Library's `AgentRunDraftView` to handle creating and editing
Presets
- Add `hideIfSingleCredentialAvailable` parameter to `CredentialsInput`
- Add multi-select support to `TypeBasedInput`
- Add Presets section to `AgentRunsSelectorList`
- Amend `AgentRunSummaryCard` for use for Presets
- Add `AgentStatusChip` to display general agent status (for now: Active
/ Inactive / Error)
- Add Preset loading logic and create/update/delete handlers logic to
`AgentRunsPage`
- Rename `IconClose` to `IconCross`
API:
- Add `LibraryAgent` properties `has_external_trigger`,
`trigger_setup_info`, `credentials_input_schema`
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Remove redundant parameters from `POST
/library/presets/{preset_id}/execute` endpoint
Backend:
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Extract non-node-related logic from `on_node_activate` into
`setup_webhook_for_block`
- Add webhook-related logic to `update_preset` and `delete_preset`
endpoints
- Amend webhook infrastructure to work with AgentPresets
- Add preset trigger support to webhook ingress endpoint
- Amend executor stack to work with passed-in node input
(`nodes_input_masks`, generalized from `node_credentials_input_map`)
- Amend graph validation to work with passed-in node input
- Add `AgentPreset`->`IntegrationWebhook` relation
- Add `WebhookWithRelations` model
- Change behavior of `BaseWebhooksManager.get_manual_webhook(..)` to
avoid unnecessary changes of the webhook URL: ignore `events` to find
matching webhook, and update `events` if necessary.
- Fix & improve `AgentPreset` API, models, and DB logic
- Add `isDeleted` filter to get/list queries
- Add `user_id` attribute to `LibraryAgentPreset` model
- Add separate `credentials` property to `LibraryAgentPreset` model
- Fix `library_db.update_preset(..)` replacement of existing
`InputPresets`
- Make `library_db.update_preset(..)` more usage-friendly with separate
parameters for updateable properties
- Add `user_id` checks to various DB functions
- Fix error handling in various endpoints
- Fix cache race condition on `load_webhook_managers()`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test existing functionality
- [x] Auto-setup and -teardown of webhooks on save in the builder still
works
- [x] Running an agent normally from the Library still works
- Test new functionality
- [x] Setting up a trigger in the Library
- [x] Updating a trigger in the Library
- [x] Disabling and re-enabling a trigger in the Library
- [x] Deleting a trigger in the Library
- [x] Triggers set up in the Library result in a new run when the
webhook receives a payload
This pull request sets up and configures Orval for API client
generation. It automates the process of creating TypeScript clients from
the backend's OpenAPI specification, improving development efficiency
and reducing manual code maintenance.
### Changes 🏗️
- Configures Orval with a new configuration file (`orval.config.ts`).
- Adds scripts to `package.json` for fetching the OpenAPI spec and
generating the API client.
- Implements a custom mutator for handling authentication.
- Adds API client generation as a step in the CI workflow.
- Adds `.gitignore` entry for generated API client files.
- Adds a security middleware to prevent caching of sensitive data.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the API client is generated correctly.
- [x] Confirmed that the custom mutator is functioning as expected for
authentication.
- [x] Ensured that the new CI workflow step for API client generation is
successful.
- [x] Tested generated API calls
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Since auto type conversion is applied before nested input is merged in the block, the auto conversion breaks for nested inputs.
### Changes 🏗️
* Enabling auto-type conversion on block input schema mismatch for
nested input
* Add batching feature for `CreateListBlock`
* Increase default max_token size for LLM call
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `AIStructuredResponseGeneratorBlock` with non-string prompt
value (should be auto-converted).
### Changes 🏗️
Add cost calculation for Apollo integration.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run the Apollo Search People & Search Organizations blocks.
### Why? 🤔
<!-- Clearly explain the need for these changes: -->
We need to prevent sensitive data (authentication tokens, API
keys, user credentials, personal information) from being cached by
browsers and proxies. Following the principle of "secure by
default", we're switching from a deny list to an allow list
approach for cache control.
### Changes 🛠️
<!-- Concisely describe all of the changes made in this pull
request: -->
- **Refactored cache control middleware from deny list to allow
list approach**
- By default, ALL endpoints now have `Cache-Control: no-store,
no-cache, must-revalidate, private` headers
- Only explicitly allowed paths (static assets, health checks,
public store pages) can be cached
- This ensures new endpoints are automatically protected without
developers having to remember to add them to a list
- **Updated `SecurityHeadersMiddleware` in
`/backend/backend/server/middleware/security.py`**
- Renamed `SENSITIVE_PATHS` to `CACHEABLE_PATHS`
- Inverted the logic in `is_cacheable_path()` method
- Cache control headers are now applied to all paths NOT in the
allow list
- **Updated test suite to match new behavior**
- Tests now verify that most endpoints have cache control
headers by default
- Tests verify that only allowed paths (static assets, health
endpoints, etc.) can be cached
- **Updated documentation in `CLAUDE.md`**
- Documented the new allow list approach
- Added instructions for developers on how to allow caching for
new endpoints
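A minimal sketch of the allow-list middleware, using the names from this PR (`CACHEABLE_PATHS`, `is_cacheable_path`); paths and other details are illustrative:

```python
from starlette.middleware.base import BaseHTTPMiddleware

CACHEABLE_PATHS = ("/static/", "/health", "/store")  # illustrative allow list

class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        # Secure by default: any path NOT explicitly allowed gets no-store.
        if not self.is_cacheable_path(request.url.path):
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response

    @staticmethod
    def is_cacheable_path(path: str) -> bool:
        return any(path.startswith(prefix) for prefix in CACHEABLE_PATHS)
```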
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test modified endpoints work still
- [x] Test modified endpoints correctly have no cache rules
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Main issues:
* `AIStructuredResponseGeneratorBlock` is not able to produce a list of
objects.
* `SmartDecisionBlock` is not able to call tools with some optional
inputs.
### Changes 🏗️
* Allow persisting `null` / `None` value as execution output.
* Provide `multiple_tool_calls` option for `SmartDecisionBlock`.
* Provide `list_result` option for `AIStructuredResponseGeneratorBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `SmartDecisionBlock` & `AIStructuredResponseGeneratorBlock`
This PR introduces a custom function for generating unique operation IDs
for OpenAPI specifications to improve auto-generated client code
quality.
## Why This Change?
**Better Auto-Generated Clients**: Default FastAPI operation IDs create
unclear method names in generated clients. Our custom generator produces
clean, readable operation IDs that translate to intuitive method names.
- **Before**: `get_items_api_v1_items_get` → unclear generated methods
- **After**: `get_users_list` → clean, descriptive method names
## Changes
- ✨ **Added**: `custom_generate_unique_id` utility function
- Generates IDs using pattern: `{method}_{tag}_{summary}`
- Ensures uniqueness and readability
- 🔧 **Updated**: FastAPI app configuration to use custom generator
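A sketch of such a generator under the `{method}_{tag}_{summary}` pattern; FastAPI's `generate_unique_id_function` hook is real, while the body here is an assumption:

```python
from fastapi import FastAPI
from fastapi.routing import APIRoute

def custom_generate_unique_id(route: APIRoute) -> str:
    # Build `{method}_{tag}_{summary}`, falling back to the route name.
    method = next(iter(route.methods)).lower()
    tag = str(route.tags[0]).lower().replace(" ", "_") if route.tags else "default"
    summary = (route.summary or route.name).lower().replace(" ", "_")
    return f"{method}_{tag}_{summary}"

app = FastAPI(generate_unique_id_function=custom_generate_unique_id)
```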
## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] OpenAPI docs reflect new operation ID format
- [x] Tested various HTTP methods, tags, and summaries
- [x] Verified app startup functionality
- [x] Validated improved client generation output
Current Apollo blocks only work with keywords; the many list filter fields don't work because they're passed as the wrong GET parameter (missing `[]`).
### Changes 🏗️
Change the GET request to a POST request for Apollo.
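A hypothetical before/after illustration (endpoint and auth header assumed from Apollo's public docs):

```python
import requests

# GET with list filters requires the `[]` suffix on each repeated param:
#   ?person_titles[]=CEO&person_titles[]=CTO
# A POST with a JSON body sends the lists natively instead:
response = requests.post(
    "https://api.apollo.io/v1/mixed_people/search",  # endpoint assumed
    headers={"X-Api-Key": "APOLLO_API_KEY"},
    json={"person_titles": ["CEO", "CTO"], "page": 1},
)
response.raise_for_status()
people = response.json().get("people", [])
```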
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run SearchPeopleBlock with title filter
## Description
Added the `graph_id` parameter to the stop execution endpoint path (`/graphs/{graph_id}/executions/{graph_exec_id}/stop`) to fix an OpenAPI-spec client generation error.
## Problem
The client generation was failing due to missing path parameter
definition for `graph_id` in the stop execution endpoint.
<img width="1412" alt="Screenshot 2025-06-19 at 9 20 17 AM"
src="https://github.com/user-attachments/assets/aa1667d3-05be-48c6-975b-84473830ac03"
/>
## Solution
Added `graph_id` as a path parameter while maintaining the existing
functionality.
## Testing
- [x] Verified OpenAPI client generation works without errors
- [x] Confirmed endpoint functionality remains unchanged
- [x] Tested API calls maintain backward compatibility
Requests during block execution can be throttled, and requests between services can sometimes break. The scope of this PR is to add appropriate retries for both.
### Changes 🏗️
* Block request retry: Retry on throttled status code only (504, 429,
etc).
* RPC request retry: Retry connection issues (ConnectError, Timeout,
etc).
* Truncate logging on executor/utils.
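An illustrative retry policy using `tenacity` (assumed; the actual implementation may use different helpers):

```python
import aiohttp
from tenacity import retry, retry_if_exception, stop_after_attempt, wait_exponential

THROTTLE_CODES = {429, 502, 503, 504}  # illustrative; the PR names 504, 429, etc.

def _is_throttled(exc: BaseException) -> bool:
    return isinstance(exc, aiohttp.ClientResponseError) and exc.status in THROTTLE_CODES

@retry(
    retry=retry_if_exception(_is_throttled),
    wait=wait_exponential(multiplier=1, max=30),
    stop=stop_after_attempt(5),
)
async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url, raise_for_status=True) as resp:
        return await resp.text()
```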
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph execution