## Summary
This PR enhances the node execution stats tracking system to properly
handle nested graph executions and additional cost/step metrics:
- **Add extra_cost and extra_steps fields** to `NodeExecutionStats`
model for tracking additional metrics from sub-graphs
- **Update AgentExecutorBlock** to merge nested execution stats from
sub-graphs into the parent execution
- **Fix stats update mechanism** in `execute_node` to use in-place
updates instead of `model_copy` for better performance
- **Add proper tracking** of extra costs and steps in graph execution
stats aggregation
## Changes Made
- Modified `backend/backend/data/model.py` to add `extra_cost` and
`extra_steps` fields
- Updated `backend/backend/blocks/agent.py` to merge stats from nested
graph executions
- Fixed `backend/backend/executor/manager.py` to properly update
execution stats and aggregate extra metrics
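In rough terms, the merge works like this (a hedged sketch; only `extra_cost`/`extra_steps` come from this PR, the other field names are illustrative):
```python
from pydantic import BaseModel

class NodeExecutionStats(BaseModel):
    cost: float = 0          # cost incurred by this node itself (illustrative)
    extra_cost: float = 0    # additional cost accrued by nested sub-graph runs
    extra_steps: int = 0     # additional steps executed inside sub-graphs

def merge_nested_stats(parent: NodeExecutionStats,
                       nested: NodeExecutionStats) -> None:
    """Fold a sub-graph's stats into the parent node's stats, in place."""
    # In-place update, mirroring the switch away from model_copy().
    parent.extra_cost += nested.cost + nested.extra_cost
    parent.extra_steps += nested.extra_steps
```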
## Test Plan
- [x] Verify that nested graph executions properly propagate their stats
to parent graphs
- [x] Test that extra costs and steps are correctly tracked and
aggregated
- [x] Ensure debug logging provides useful information for monitoring
- [x] Run existing tests to ensure no regressions
- [x] Test with multi-level nested agent graphs
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10313
- Resolves #10333
Before:
https://github.com/user-attachments/assets/a105b2b0-a90b-4bc6-89da-bef3f5a5fa1f
- No credentials input
- Stuttery experience when panning or zooming the viewport
After:
https://github.com/user-attachments/assets/f58d7864-055f-4e1c-a221-57154467c3aa
- Pretty much the same UX as in the Library, with fully-fledged
credentials input support
- Much smoother when moving around the canvas
### Changes 🏗️
Frontend:
- Add credentials input support to Run UX in Builder
- Pass run inputs instead of storing them on the input nodes
- Re-implement `RunnerInputUI` using `AgentRunDetailsView`; rename to
`RunnerInputDialog`
- Make `AgentRunDraftView` more flexible
- Remove `RunnerInputList`, `RunnerInputBlock`
- Make moving around in the Builder *smooooth* by reducing unnecessary
re-renders
- Clean up and partially re-write bead management logic
- Replace `request*` fire-and-forget methods in `useAgentGraph` with
direct action async callbacks
- Clean up run input UI components
- Simplify `RunnerUIWrapper`
- Add `isEmpty` utility function in `@/lib/utils` (expanding on
`_.isEmpty`)
- Fix default value handling in `TypeBasedInput` (**Note:** after all
the changes I've made I'm not sure this is still necessary)
- Improve & clean up Builder test implementations
Backend + API:
- Fix front-end `Node`, `GraphMeta`, and `Block` types
- Small refactor of `Graph` to match naming of some `LibraryAgent`
attributes
- Fix typing of `list_graphs`,
`get_graph_meta_by_store_listing_version_id` endpoints
- Add `GraphMeta` model and `GraphModel.meta()` shortcut
- Move `POST /library/agents/{library_agent_id}/setup-trigger` to `POST
/library/presets/setup-trigger`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test the new functionality in the Builder:
- [x] Running an agent with (credentials) inputs from the builder
- [x] Beads behave correctly
- [x] Running an agent without any inputs from the builder
- [x] Scheduling an agent from the builder
- [x] Adding and searching blocks in the block menu
- [x] Test that all existing `AgentRunDraftView` functionality in the
Library still works the same
- [x] Run an agent
- [x] Schedule an agent
- [x] View past runs
- [x] Run an agent with inputs, then edit the agent's inputs and view
the agent in the Library (should be fine)
## Summary
This PR adds a simple block error rate monitoring system that runs every
24 hours (configurable) and sends Discord alerts when blocks exceed the
error rate threshold.
## Changes Made
**Modified Files:**
- `backend/executor/scheduler.py` - Added `report_block_error_rates`
function and scheduled job
- `backend/util/settings.py` - Added configuration options
- `backend/.env.example` - Added environment variable examples
- Refactored scheduled job logic in `scheduler.py` into separate files
## Configuration
```bash
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5 # 50% error rate threshold
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400 # 24 hours
```
## How It Works
1. **Scheduled Job**: Runs every 24 hours (configurable via
`BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS`)
2. **Error Rate Calculation**: Queries last 24 hours of node executions
and calculates error rates per block
3. **Threshold Check**: Alerts on blocks with ≥50% error rate
(configurable via `BLOCK_ERROR_RATE_THRESHOLD`)
4. **Discord Alert**: Sends alert to Discord using existing
`discord_system_alert` function
5. **Manual Execution**: Available via
`execute_report_block_error_rates()` scheduler client method
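The calculation in steps 2-3 amounts to roughly the following (a sketch; the execution-record shape and helper names are illustrative, not the actual backend API):
```python
from collections import Counter

BLOCK_ERROR_RATE_THRESHOLD = 0.5  # 50%, mirrors the env var above
MIN_EXECUTIONS = 10               # skip low-traffic blocks to avoid noise

def block_error_rate_alerts(executions) -> list[str]:
    """Build alert lines from the last 24h of node executions (hypothetical shape)."""
    totals: Counter = Counter()
    failures: Counter = Counter()
    for ex in executions:
        totals[ex.block_name] += 1
        if ex.status == "FAILED":
            failures[ex.block_name] += 1
    alerts = []
    for block, total in totals.items():
        if total < MIN_EXECUTIONS:
            continue
        rate = failures[block] / total
        if rate >= BLOCK_ERROR_RATE_THRESHOLD:
            alerts.append(
                f"🚨 Block '{block}' has {rate:.1%} error rate "
                f"({failures[block]}/{total}) in the last 24 hours"
            )
    return alerts
```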
## Alert Format
```
Block Error Rate Alert:
🚨 Block 'DeprecatedGPT3Block' has 75.0% error rate (75/100) in the last 24 hours
🚨 Block 'BrokenImageBlock' has 60.0% error rate (30/50) in the last 24 hours
```
## Testing
Can be tested manually via:
```python
from backend.executor.scheduler import SchedulerClient
client = SchedulerClient()
result = client.execute_report_block_error_rates()
```
## Implementation Notes
- Follows the same pattern as `report_late_executions` function
- Only checks blocks with ≥10 executions to avoid noise
- Uses existing Discord notification infrastructure
- Configurable threshold and check interval
- Proper error handling and logging
## Test plan
- [x] Verify configuration loads correctly
- [x] Test error rate calculation with existing database
- [x] Confirm Discord integration works
- [x] Test manual execution via scheduler client
- [x] Verify scheduled job runs correctly
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Summary
Performance optimization for the platform's store and creator
functionality by adding targeted database indexes and implementing
materialized views to reduce query execution time.
### Changes 🏗️
**Database Performance Optimizations:**
- Added strategic database indexes for `StoreListing`,
`StoreListingVersion`, `StoreListingReview`, `AgentGraphExecution`, and
`Profile` tables
- Implemented materialized views (`mv_agent_run_counts`,
`mv_review_stats`) to cache expensive aggregation queries
- Optimized `StoreAgent` and `Creator` views to use materialized views
and improved query patterns
- Added automated refresh function with 15-minute scheduling for
materialized views (when pg_cron extension is available)
**Key Performance Improvements:**
- Filtered indexes on approved store listings to speed up marketplace
queries
- GIN index on categories for faster category-based searches
- Composite indexes for common query patterns (e.g., listing + version
lookups)
- Pre-computed agent run counts and review statistics to eliminate
expensive aggregations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified migration runs successfully without errors
- [x] Confirmed materialized views are created and populated correctly
- [x] Tested StoreAgent and Creator view queries return expected results
- [x] Validated automatic refresh function works properly
- [x] Confirmed rollback migration successfully removes all changes
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note:** No configuration changes were required as this is purely a
database schema optimization.
## Block Development SDK - Simplifying Block Creation
### Problem
Currently, creating a new block requires manual updates to **5+ files**
scattered across the codebase:
- `backend/data/block_cost_config.py` - Manually add block costs
- `backend/integrations/credentials_store.py` - Add default credentials
- `backend/integrations/providers.py` - Register new providers
- `backend/integrations/oauth/__init__.py` - Register OAuth handlers
- `backend/integrations/webhooks/__init__.py` - Register webhook
managers
This creates significant friction for developers, increases the chance
of configuration errors, and makes the platform difficult to scale.
### Solution
This PR introduces a **Block Development SDK** that provides:
- Single import for all block development needs: `from backend.sdk
import *`
- Automatic registration of all block configurations
- Zero external file modifications required
- Provider-based configuration with inheritance
### Changes 🏗️
#### 1. **New SDK Module** (`backend/sdk/`)
- **`__init__.py`**: Unified exports of 68+ block development components
- **`registry.py`**: Central auto-registration system for all block
configurations
- **`builder.py`**: `ProviderBuilder` class for fluent provider
configuration
- **`provider.py`**: Provider configuration management
- **`cost_integration.py`**: Automatic cost application system
#### 2. **Provider Builder Pattern**
```python
# Configure once, use everywhere
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_SERVICE_API_KEY", "My Service API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)
```
#### 3. **Automatic Cost System**
- Provider base costs automatically applied to all blocks using that
provider
- Override with `@cost` decorator for block-specific pricing
- Tiered pricing support with cost filters
#### 4. **Dynamic Provider Support**
- Modified `ProviderName` enum to accept any string via `_missing_`
method
- No more manual enum updates for new providers
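For illustration, the standard `_missing_` pseudo-member pattern on a string enum looks roughly like this (a sketch, not the actual `ProviderName` code):
```python
from enum import Enum

class ProviderName(str, Enum):
    OPENAI = "openai"
    GITHUB = "github"

    @classmethod
    def _missing_(cls, value):
        # Instead of raising ValueError for unknown values, mint a
        # pseudo-member on the fly so any provider string is accepted.
        member = str.__new__(cls, value)
        member._name_ = str(value).upper().replace("-", "_")
        member._value_ = str(value)
        return member

print(ProviderName("my-new-service").value)  # "my-new-service", no enum edit needed
```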
#### 5. **Application Integration**
- Added `sync_all_provider_costs()` to `initialize_blocks()` for
automatic cost registration
- Maintains full backward compatibility with existing blocks
#### 6. **Comprehensive Examples** (`backend/blocks/examples/`)
- `simple_example_block.py` - Basic block structure
- `example_sdk_block.py` - Provider with credentials
- `cost_example_block.py` - Various cost patterns
- `advanced_provider_example.py` - Custom API clients
- `example_webhook_sdk_block.py` - Webhook configuration
#### 7. **Extensive Testing**
- 6 new test modules with 30+ test cases
- Integration tests for all SDK features
- Cost calculation verification
- Provider registration tests
### Before vs After
**Before SDK:**
```python
# 1. Multiple complex imports
from backend.data.block import Block, BlockCategory, BlockOutput
from backend.data.model import SchemaField, CredentialsField
# ... many more imports
# 2. Update block_cost_config.py
BLOCK_COSTS[MyBlock] = [BlockCost(...)]
# 3. Update credentials_store.py
DEFAULT_CREDENTIALS.append(...)
# 4. Update providers.py enum
# 5. Update oauth/__init__.py
# 6. Update webhooks/__init__.py
```
**After SDK:**
```python
from backend.sdk import *
# Everything configured in one place
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_API_KEY", "My API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)

class MyBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = my_provider.credentials_field()
        data: String = SchemaField(description="Input data")

    class Output(BlockSchema):
        result: String = SchemaField(description="Result")

# That's it! No external files to modify
```
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created new blocks using SDK pattern with provider configuration
- [x] Verified automatic cost registration for provider-based blocks
- [x] Tested cost override with @cost decorator
- [x] Confirmed custom providers work without enum modifications
- [x] Verified all example blocks execute correctly
- [x] Tested backward compatibility with existing blocks
- [x] Ran all SDK tests (30+ tests, all passing)
- [x] Created blocks with credentials and verified authentication
- [x] Tested webhook block configuration
- [x] Verified application startup with auto-registration
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
(no changes needed)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes needed)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
### Impact
- **Developer Experience**: Block creation time reduced from hours to
minutes
- **Maintainability**: All block configuration in one place
- **Scalability**: Support hundreds of blocks without enum updates
- **Type Safety**: Full IDE support with proper type hints
- **Testing**: Easier to test blocks in isolation
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
- Follow-up fix for #10301
The condition that determines whether
`LibraryAgent.credentials_input_schema` is set handled empty lists of
sub-graphs incorrectly.
### Changes 🏗️
- Check if `sub_graphs is not None` rather than using the boolean
interpretation of `sub_graphs`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed.
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
We flew too close to the sun.
### Changes 🏗️
Adds Perplexity back. It needs to be removed *after* it has already been
migrated, not before; otherwise the system will automatically migrate it
to a different model so that it is one that exists.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] tested locally; no impact since we are simply re-enabling it
Adds the latest Perplexity Sonar models from OpenRouter and removes the
decommissioned Sonar Large model.
### Changes 🏗️
- Added constants for `perplexity/sonar`, `perplexity/sonar-pro`, and
`perplexity/sonar-deep-research` in the `LlmModel` enum
- Included metadata entries for the new models
- Mapped the new models in the cost configuration with their respective
pricing tiers
- Removed the outdated Sonar Large model
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run test`
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
Auto type conversion doesn't work on optional types.
To reproduce:
<img width="981" alt="image"
src="https://github.com/user-attachments/assets/92198d32-bce9-44fd-a9b0-b7b431aec3ba"
/>
Use the AgentNumberInput block and try to pass a string value to the
sub-agent that uses it.
### Changes 🏗️
Added auto type conversion support for optional types.
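A minimal sketch of the idea, assuming a typing-based converter (not the exact implementation in the codebase):
```python
from typing import Optional, Union, get_args, get_origin

def auto_convert(value, target_type):
    # Optional[int] is Union[int, None]; unwrap it before converting.
    if get_origin(target_type) is Union:
        inner = [t for t in get_args(target_type) if t is not type(None)]
        if value is None:
            return None
        if len(inner) == 1:
            return auto_convert(value, inner[0])
    return target_type(value)

assert auto_convert("42", Optional[int]) == 42  # previously failed on Optional
```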
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try to convert a string to `Optional[int]`
- Resolves #10300
- Follow-up fix to #10167
### Changes 🏗️
- Include sub-graphs in `get_library_agent` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Executing agent with sub-graphs that require credentials works
During data manipulation refactoring, the CreateListBlock lost its
important batching functionality with max_size and max_tokens parameters.
This restores the original implementation that can yield lists in chunks
based on size or token limits.
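A rough sketch of the restored batching behaviour, with a stand-in token counter (the real block's token counting differs):
```python
def yield_list_batches(items, max_size=None, max_tokens=None,
                       count_tokens=lambda x: len(str(x)) // 4):
    """Yield chunks of `items` bounded by item count and/or token budget."""
    batch, batch_tokens = [], 0
    for item in items:
        item_tokens = count_tokens(item)
        over_size = max_size is not None and len(batch) >= max_size
        over_tokens = max_tokens is not None and batch_tokens + item_tokens > max_tokens
        if batch and (over_size or over_tokens):
            yield batch
            batch, batch_tokens = [], 0
        batch.append(item)
        batch_tokens += item_tokens
    if batch:
        yield batch

list(yield_list_batches(range(10), max_size=4))  # [[0..3], [4..7], [8, 9]]
```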
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
## Summary
This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).
## Changes
### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`
### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results
## Pattern Applied
The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items

# Then individual items for iteration
for item in all_items:
    yield "singular_output", item
```
## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` - ✅ Passed
- `poetry run test` - ✅ Blocks validated
## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
#### New List Operation Blocks
- Implement `GetListItemBlock` for retrieving an element at a specific
index, with negative index support
- Introduce `RemoveFromListBlock` to remove or pop items and optionally
return the removed value
- Add `ReplaceListItemBlock` to overwrite an item at a given index and
return the old value
- Provide `ListIsEmptyBlock` for quickly checking if a list has no
elements
#### New Dictionary Operation Blocks (for consistency with list
operations)
- Add `RemoveFromDictionaryBlock` to remove key-value pairs and
optionally return the removed value
- Implement `ReplaceDictionaryValueBlock` to replace values for a
specified key and return the old value
- Provide `DictionaryIsEmptyBlock` for checking if a dictionary has no
elements
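In plain Python, the operations these list and dictionary blocks wrap look roughly like this (illustrative; the blocks add typed schemas, validation, and error handling):
```python
items = ["a", "b", "c"]

items[-1]                      # GetListItemBlock: negative indices count from the end -> "c"
removed = items.pop(1)         # RemoveFromListBlock: remove and optionally return -> "b"
old, items[0] = items[0], "z"  # ReplaceListItemBlock: overwrite, return old value
not items                      # ListIsEmptyBlock -> False here

d = {"k": 1}
removed_val = d.pop("k")       # RemoveFromDictionaryBlock -> 1
not d                          # DictionaryIsEmptyBlock -> True now
```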
#### Code Organization & Refactoring
- **Created `data_manipulation.py`**: Moved all dictionary and list
manipulation blocks to a dedicated file to prevent `basic.py` from
becoming too large
- **Refactored `basic.py`**: Now contains only core utility blocks
(FileStore, StoreValue, PrintToConsole, Note, UniversalTypeConverter)
- **Ensured consistency**: Dictionary and list blocks now have
equivalent functionality and follow the same patterns
- **Removed redundancy**: Eliminated duplicate `GetDictionaryValueBlock`
since `FindInDictionaryBlock` already provides comprehensive lookup
functionality
- **Preserved UUIDs**: All existing block UUIDs maintained to ensure no
breaking changes
#### Block Organization Summary
**`basic.py` (core utilities):**
- `FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`,
`NoteBlock`, `UniversalTypeConverterBlock`
**`data_manipulation.py` (dictionary & list operations):**
- **Dictionary blocks:** Create, Add, Find, Remove, Replace, IsEmpty
- **List blocks:** Create, Add, Find, Get, Remove, Replace, IsEmpty
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run test`
- [x] `pnpm format`
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
The block library was missing two key capabilities that keep coming up
in real-world agent flows:
1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.
This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.
---
### Changes 🏗️
#### 📚 Mem0 block enhancements
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.
#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold,
italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |
* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.
---
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
This PR introduces key-value storage blocks.
### Changes 🏗️
- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use `#` as the delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (see the sketch below)
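A minimal sketch of the key scheme, assuming a hypothetical helper:
```python
def make_storage_key(scope: str, key: str, graph_id: str | None = None) -> str:
    """Build a storage key; `scope` is 'within_agent' or 'across_agents'."""
    if scope == "within_agent":
        if graph_id is None:
            raise ValueError("agent-scoped keys require a graph_id")
        return f"agent#{graph_id}#{key}"
    return f"global#{key}"  # across_agents: shared by all the user's agents

make_storage_key("within_agent", "visit_count", graph_id="g-123")  # 'agent#g-123#visit_count'
make_storage_key("across_agents", "visit_count")                   # 'global#visit_count'
```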
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run all 244 block tests - all passing ✅
- [x] Test PersistInformationBlock with mock data storage
- [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
- [x] Verify database function integration through all manager classes
- [x] Run lint and type checking - all passing ✅
- [x] Verify database migration is included and valid
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10298
- Follow-up to #10270
### Changes 🏗️
Amend two changes from #10270:
- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
- Run the platform locally
- [x] -> `/library` loads normally
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph executions based on only `graph_id`
or `user_id`
- Refactored logging metadata structure for better error tracking
## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
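A hypothetical sketch of the timeout-based stop flow described above; all helper names are invented for illustration:
```python
import asyncio

# Hypothetical helpers standing in for the real DB/executor integration.
async def send_cancellation_signal(graph_exec_id: str) -> None: ...
async def wait_for_terminal_status(graph_exec_id: str) -> None: ...
async def set_execution_status(graph_exec_id: str, status: str) -> None: ...

async def stop_graph_execution(graph_exec_id: str, wait_timeout: float = 15.0):
    """Request cancellation, then wait (bounded) for a terminal state."""
    await send_cancellation_signal(graph_exec_id)
    try:
        await asyncio.wait_for(wait_for_terminal_status(graph_exec_id), wait_timeout)
    except asyncio.TimeoutError:
        # Stuck execution: force a terminal status rather than leave it RUNNING.
        await set_execution_status(graph_exec_id, "TERMINATED")
```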
### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling
## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284
### Changes 🏗️
- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.
Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue: aiohttp's `ClientSession.request` received a plain tuple for
`auth` instead of an `aiohttp.BasicAuth` object, causing an OAuth2 token
exchange failure.
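The essence of the fix, sketched below (the platform's actual `make_request` wrapper may differ in signature):
```python
import aiohttp

async def make_request(method: str, url: str, auth=None, **kwargs):
    # aiohttp only accepts aiohttp.BasicAuth for `auth`; convert a
    # (login, password) tuple before passing it through.
    if isinstance(auth, tuple):
        auth = aiohttp.BasicAuth(*auth)
    async with aiohttp.ClientSession() as session:
        async with session.request(method, url, auth=auth, **kwargs) as response:
            return response.status, await response.read()
```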
This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
A graph execution that fails due to interruption or an unknown error should
be enqueued back onto the queue. However, swallowing the error means the
execution never gets marked as a failure.
### Changes 🏗️
* Stop updating the graph execution status on every node execution step.
* Added a guardrail to avoid completing a graph execution whose execution
status is not yet completed.
* Avoid acknowledging messages from the queue while the graph execution is
not yet completed.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run graph execution, kill the process, re-run the process
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Currently, we use PyClamd to run an anti-virus scan on every file uploaded
to the platform. We split each file into small chunks and scan them
serially. The socket is not thread-safe, so leveraging concurrency would
require creating multiple sockets across many threads. To make this step
concurrent while keeping it fully async, we migrate from PyClamd to
aioclamd.
### Changes 🏗️
Convert PyClamd to aioclamd and scan chunks in parallel, with a semaphore
capping the concurrency (see the sketch below).
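A sketch of the concurrent scan under a semaphore; it assumes aioclamd's `ClamdAsyncClient.instream` and its PyClamd-style reply format, and the platform's actual integration differs:
```python
import asyncio
import io
from aioclamd import ClamdAsyncClient

CHUNK_SIZE = 1024 * 1024     # 1 MiB chunks
MAX_CONCURRENT_SCANS = 5     # semaphore cap on in-flight scans

async def scan_file_content(data: bytes, host: str = "127.0.0.1",
                            port: int = 3310) -> bool:
    """Return True if ClamAV reports every chunk clean."""
    client = ClamdAsyncClient(host, port)
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def scan_chunk(chunk: bytes):
        async with semaphore:
            return await client.instream(io.BytesIO(chunk))

    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    results = await asyncio.gather(*(scan_chunk(c) for c in chunks))
    # Reply format assumed: {'stream': ('OK', None)} when clean.
    return all(r.get("stream", (None,))[0] == "OK" for r in results)
```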
#### Side Note
Shout-out to @tedyu for raising this improvement idea.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute file upload into the platform
## Changes 🏗️
We need to rename `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL`
because the new API client on the front end needs it to make requests.
The `NEXT_PUBLIC` prefix is important so that the variable is available on
the client.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] The library and other pages work
- Resolves #10217

https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853
### Changes 🏗️
Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
- Add `accent` variant to `ui/button:Button`
API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to Builder
- Add a trigger block
- [x] -> action buttons disappear; message shows in their place
- Save the graph
- Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
CreateListBlock can only batch lists based on the size limit, but
sometimes we need the size to be dynamically adjusted based on the token
count.
### Changes 🏗️
Improve CreateListBlock to support batching based on token count
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test CreateListBlock
Complete the implementation of the Agent Run Scheduling UX in the
Library.
Demo:
https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2
### Changes 🏗️
Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
- Update corresponding logic on `AgentRunsPage`
- Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
- Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
- Add schedule description to info box
- Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
- Switch tabs automatically when a run or schedule is selected
- Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`
Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
- `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
- Add `GET /graphs/{graph_id}/schedules`
- Add not found error handling to `DELETE /schedules/{schedule_id}`
- Minor refactoring
Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
- [x] -> newly created schedule appears in the list
- [x] -> scheduled runs are successful
- Click "Delete schedule"
- [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
- [x] -> no new runs occur from the deleted schedule
Calling an LLM through the current block can sometimes break due to a
large context window.
A prompt compaction algorithm is now applied (enabled by default) to make
sure the prompt sent stays within the context window limit.
### Changes 🏗️
**Heuristics**
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough, then
reduce the conversation list.
* The rest of the implementation is adjusted to minimize the LLM call
breaking.

**Strategy**
1. **Token-aware truncation** – progressively halve a per-message cap
(`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the
*content* of every message except the first and last. Tool shells
are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete whole
messages working outward from the centre, **skipping** any message
that contains `tool_calls` or has `role == "tool"`.
3. **Last-chance trim** – if still too big, truncate the *first* and
*last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large: raise `ValueError` when
`lossy_ok == False` (default), or return the partially-trimmed prompt
when `lossy_ok == True`.
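A rough sketch of stage 1 (token-aware truncation); token counting is stubbed here with a characters/4 approximation, and the real implementation differs:
```python
def truncate_message_contents(messages: list[dict], max_tokens: int,
                              start_cap: int = 2048, floor_cap: int = 128,
                              count_tokens=lambda s: len(str(s)) // 4) -> list[dict]:
    """Progressively halve a per-message content cap until the prompt fits."""
    cap = start_cap
    while cap >= floor_cap:
        for msg in messages[1:-1]:            # first and last stay intact here
            content = msg.get("content") or ""
            if count_tokens(content) > cap:
                # Keep the message envelope (role, tool metadata); shrink payload.
                msg["content"] = content[: cap * 4]
        if sum(count_tokens(m.get("content") or "") for m in messages) <= max_tokens:
            break
        cap //= 2
    return messages  # stages 2-4 handle prompts that still don't fit
```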
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run an SDM block in a loop until it hits 200,000 tokens using the
OpenAI o3 model.
- Follow-up fix to #10138
AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.
### Changes 🏗️
- Include full diff in `changes` output of `GithubReadPullRequestBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
- [ ] -> block runs successfully
- [ ] -> full diff included in `changes` output
### Why these changes are needed 🧐
Revid.ai offers several specialised, undocumented rendering flows beyond
the basic "text-to-video" endpoint our platform already supported, to:
1. **Generate ads** from copy plus product images (30-second vertical
spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no
multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal
for product-led demos.

Without first-class blocks for these flows, users were forced to drop
to raw HTTP nodes, losing schema validation, test mocks and credential
management.
### Changes 🏗️
Added a new `MARKETING = "Block that helps with marketing"` category to
`BlockCategory` in `block.py`.

| Area | Change | Notes |
| -- | -- | -- |
| `ai_shortform_video_block.py` | Refactored out a shared `_RevidMixin` (webhook + polling helpers). | Keeps DRY across new blocks. |
| | Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members. | Required by Revid sample payloads. |
| `AIAdMakerVideoCreatorBlock` | Implements ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media`. | |
| `AIPromptToVideoCreatorBlock` | Implements prompt-to-video flow with `prompt_target_duration`. | |
| `AIScreenshotToVideoAdBlock` | Implements screenshot-to-video-ad flow (avatar narration, BG removal). | |
| | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in UI. |

No existing functionality was removed; the current
`AIShortformVideoCreatorBlock` is untouched apart from enum imports.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Use the `AI Shortform Video Creator` block to generate a video; it
should work
- [x] Likewise, test the `AI Ad Maker Video Creator` block; it should work
- [x] Test the `AI Screenshot to Video Ad` block; it should work
---------
Co-authored-by: Bently <Github@bentlybro.com>
### Changes 🏗️
* Add an email-enrichment feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute SearchPeopleBlock & GetPersonDetailBlock
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.
### Changes 🏗️
Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Use `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock, passing the API key through host-scoped credentials.
An anti-virus scan step is added to every file upload on the platform
before the file is sent to cloud storage or local machine storage.
### Changes 🏗️
* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Made the step mandatory even in local development
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tried using FileUploadBlock & AgentFileInputBlock