Mirror of https://github.com/Significant-Gravitas/AutoGPT.git, synced 2026-02-11 15:25:16 -05:00
69c420e5747bd4151c5a28d9be293aebc8bb801a
604 Commits
69c420e574
chore(libs/deps): Bump the production-dependencies group across 1 directory with 7 updates (#10371)
Bumps the production-dependencies group with 7 updates in the /autogpt_platform/autogpt_libs directory:

| Package | From | To |
| --- | --- | --- |
| [pydantic](https://github.com/pydantic/pydantic) | 2.11.4 | 2.11.7 |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | 2.9.1 | 2.10.1 |
| [pytest-mock](https://github.com/pytest-dev/pytest-mock) | 3.14.0 | 3.14.1 |
| [supabase](https://github.com/supabase/supabase-py) | 2.15.1 | 2.16.0 |
| [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) | 9.11.1 | 9.12.0 |
| [fastapi](https://github.com/fastapi/fastapi) | 0.115.12 | 0.116.1 |
| [uvicorn](https://github.com/encode/uvicorn) | 0.34.3 | 0.35.0 |

Notable fixes from the pydantic 2.11.5 to 2.11.7 releases: copy `FieldInfo` instances if necessary during `FieldInfo` build (#11980), rebuild dataclass fields before schema generation (#11949), always store the original field assignment on `FieldInfo` (#11946), check if `FieldInfo` is complete after applying the type variable map (#11855), and avoid deleting mock validators/serializers (#11890) or duplicating metadata (#11902) on `model_rebuild()`.
2682ed7439
chore(backend/deps-dev): Bump faker from 33.3.1 to 37.4.0 in /autogpt_platform/backend (#10386)
1502f28481
chore(libs/deps): Bump pytest-asyncio from 0.26.0 to 1.0.0 in /autogpt_platform/autogpt_libs (#10175)
Bumps [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) from 0.26.0 to 1.0.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pytest-dev/pytest-asyncio/releases">pytest-asyncio's releases</a>.</em></p> <blockquote> <h2>pytest-asyncio 1.0.0</h2> <h1><a href="https://github.com/pytest-dev/pytest-asyncio/tree/1.0.0">1.0.0</a> - 2025-05-26</h1> <h2>Removed</h2> <ul> <li>The deprecated <em>event_loop</em> fixture. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1106">#1106</a>)</li> </ul> <h2>Added</h2> <ul> <li>Prelimiary support for Python 3.14 (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1025">#1025</a>)</li> </ul> <h2>Changed</h2> <ul> <li>Scoped event loops (e.g. module-scoped loops) are created once rather than per scope (e.g. per module). This reduces the number of fixtures and speeds up collection time, especially for large test suites. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1107">#1107</a>)</li> <li>The <em>loop_scope</em> argument to <code>pytest.mark.asyncio</code> no longer forces that a pytest Collector exists at the level of the specified scope. For example, a test function marked with <code>pytest.mark.asyncio(loop_scope="class")</code> no longer requires a class surrounding the test. This is consistent with the behavior of the <em>scope</em> argument to <code>pytest_asyncio.fixture</code>. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1112">#1112</a>)</li> </ul> <h2>Fixed</h2> <ul> <li>An error caused when using pytest's [--setup-plan]{.title-ref} option. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/630">#630</a>)</li> <li>Unsuppressed import errors with pytest option <code>--doctest-ignore-import-errors</code> (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/797">#797</a>)</li> <li>A "fixture not found" error in connection with package-scoped loops (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1052">#1052</a>)</li> </ul> <h2>Notes for Downstream Packagers</h2> <ul> <li>Removed a test that had an ordering dependency on other tests. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1114">#1114</a>)</li> </ul> <h2>pytest-asyncio 1.0.0a1</h2> <h1><a href="https://github.com/pytest-dev/pytest-asyncio/tree/1.0.0a1">1.0.0a1</a> - 2025-05-09</h1> <h2>Removed</h2> <ul> <li>The deprecated <em>event_loop</em> fixture. (<a href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1106">#1106</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
c0eae266d8
chore(backend/deps-dev): Bump the development-dependencies group in /autogpt_platform/backend with 2 updates (#10373)
a15bb16ce2
chore(backend/deps): Bump the production-dependencies group across 1 directory with 4 updates (#10389)
Bumps the production-dependencies group with 4 updates in the /autogpt_platform/backend directory: [groq](https://github.com/groq/groq-python), [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk), [openai](https://github.com/openai/openai-python) and [sentry-sdk](https://github.com/getsentry/sentry-python). Updates `groq` from 0.29.0 to 0.30.0.
ff7157fbbe
chore(backend/deps): Bump pinecone from 5.4.2 to 7.3.0 in /autogpt_platform/backend (#10378)
423b22214a
feat(blocks): Add Excel support to ReadSpreadsheetBlock and introduce FileReadBlock (#10393)
This PR adds Excel file support to CSV processing and enhances text file reading capabilities.

### Changes 🏗️

**ReadSpreadsheetBlock (formerly ReadCsvBlock):**
- Renamed `ReadCsvBlock` to `ReadSpreadsheetBlock` for clarity
- Added Excel file support (.xlsx, .xls) with automatic conversion to CSV using pandas; Excel files are detected by extension
- Renamed the `file_in` parameter to `file_input` for consistency
- Maintains all existing CSV processing functionality (delimiters, headers, etc.)
- Degrades gracefully when the pandas library is not available

**FileReadBlock:**
- Text file reading with chunking via new parameters: `skip_size`, `skip_rows`, `row_limit`, `size_limit`, `delimiter`
- Supports both character-based and row-based processing, with chunked output for large files based on size limits
- UTF-8 decoding with latin-1 fallback
- Uses `store_media_file` for secure file processing (URLs, data URIs, local paths)
- Fixed test input to use a data URI instead of a non-existent file

Tested: both blocks instantiate and process data correctly, all block tests pass (457 passed, 83 skipped), no linting errors, and Excel support gracefully handles a missing pandas dependency. No configuration changes required.
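The Excel-to-CSV conversion path described above can be pictured with a minimal sketch. The function name, input form, and error handling below are illustrative assumptions, not the block's actual implementation:

```python
import io


def to_csv_text(file_bytes: bytes, filename: str) -> str:
    """Return spreadsheet content as CSV text, converting Excel via pandas."""
    if filename.lower().endswith((".xlsx", ".xls")):
        try:
            import pandas as pd  # optional dependency; .xlsx also needs openpyxl
        except ImportError as e:
            raise RuntimeError("Excel support requires the pandas library") from e
        df = pd.read_excel(io.BytesIO(file_bytes))
        return df.to_csv(index=False)
    # Plain CSV/text input: decode as UTF-8 (the PR describes a latin-1 fallback)
    return file_bytes.decode("utf-8", errors="replace")
```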
db1f034544
Fix Gmail body parsing for multipart messages (#9863) (#10071)
The `GmailReadBlock._get_email_body()` method only inspected the top-level payload and a single `text/plain` part, causing it to return the fallback string "This email does not contain a text body." for most Gmail messages. Gmail messages are typically wrapped in `multipart/alternative` or other multipart containers, which the original implementation couldn't handle; since virtually every real Gmail message uses multipart MIME structures, this made the Gmail integration unusable for reading email body content.

### Changes

**Core implementation:**
- Replaced the simple `_get_email_body()` with a recursive multipart parser that walks nested MIME structures
- Added a `_walk_for_body()` method for recursive traversal of email parts with depth limiting (max 10 levels)
- Implemented safe base64 decoding with automatic padding correction in `_decode_base64()`
- Added attachment body support via `_download_attachment_body()` for emails whose body content is stored as attachments

**Email format support:**
- HTML-to-text conversion using the `html2text` library for HTML-only emails
- `multipart/alternative` handling with preference for `text/plain` over `text/html`
- Nested multipart structures (e.g. `multipart/mixed` containing `multipart/alternative`)
- Single-part emails (maintains backward compatibility)

**Dependencies & testing:**
- Added `html2text = "^2024.2.26"` to `pyproject.toml` for HTML conversion
- Created comprehensive unit tests in `test/blocks/test_gmail.py` covering all email types and edge cases
- Added error handling and graceful fallbacks for malformed data and missing dependencies

**Security & performance:**
- Recursion depth limiting prevents infinite loops on malformed email structures
- Exception handling ensures graceful degradation when API calls fail
- Efficient tree traversal with early returns

Test plan covered single-part, multipart/alternative, HTML-only, deeply nested, and attachment-based emails, base64 padding edge cases, recursion depth limits, error fallbacks, and backward compatibility, with a standalone verification script passing 100%. Configuration: only the `html2text` dependency was added; no environment or infrastructure changes, and the change is fully backward compatible with existing Gmail API configuration.

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
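A short sketch of the recursive traversal this commit describes. It borrows the method names the PR mentions (`_walk_for_body`, `_decode_base64`), but the payload handling is a simplified assumption and omits the HTML fallback via `html2text`:

```python
import base64

MAX_DEPTH = 10  # guard against malformed, deeply nested MIME trees


def decode_base64(data: str) -> str:
    # Gmail returns URL-safe base64 and may strip padding; re-add it.
    data += "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(data).decode("utf-8", errors="replace")


def walk_for_body(part: dict, depth: int = 0) -> str | None:
    if depth > MAX_DEPTH:
        return None
    body_data = part.get("body", {}).get("data")
    if part.get("mimeType") == "text/plain" and body_data:
        return decode_base64(body_data)
    # Recurse into nested parts (multipart/alternative, multipart/mixed, ...)
    for sub_part in part.get("parts", []):
        text = walk_for_body(sub_part, depth + 1)
        if text:
            return text
    return None
```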
c2eea593c0
fix(backend): Include node execution steps and cost of sub-graph execution (#10328)
## Summary

This PR enhances the node execution stats tracking system to properly handle nested graph executions and additional cost/step metrics:

- Add `extra_cost` and `extra_steps` fields to the `NodeExecutionStats` model for tracking additional metrics from sub-graphs
- Update `AgentExecutorBlock` to merge nested execution stats from sub-graphs into the parent execution
- Fix the stats update mechanism in `execute_node` to use in-place updates instead of `model_copy` for better performance
- Add proper tracking of extra costs and steps in graph execution stats aggregation

## Changes Made

- Modified `backend/backend/data/model.py` to add the `extra_cost` and `extra_steps` fields
- Updated `backend/backend/blocks/agent.py` to merge stats from nested graph executions
- Fixed `backend/backend/executor/manager.py` to properly update execution stats and aggregate extra metrics

Test plan: nested graph executions propagate their stats to parent graphs, extra costs and steps are correctly tracked and aggregated, debug logging is useful for monitoring, existing tests pass, and multi-level nested agent graphs work.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
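A rough illustration of the merge described above, using the field names from the PR; the model shape and the update path are assumptions for illustration only:

```python
from pydantic import BaseModel


class NodeExecutionStats(BaseModel):
    cost: int = 0
    extra_cost: int = 0   # cost accrued by nested sub-graph executions
    extra_steps: int = 0  # node steps executed inside sub-graphs


def merge_sub_graph_stats(parent: NodeExecutionStats, sub_cost: int, sub_steps: int) -> None:
    # Update in place, mirroring the PR's move away from model_copy.
    parent.extra_cost += sub_cost
    parent.extra_steps += sub_steps
```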
36f5f24333
feat(platform/builder): Builder credentials support + UX improvements (#10323)
- Resolves #10313
- Resolves #10333

Before: https://github.com/user-attachments/assets/a105b2b0-a90b-4bc6-89da-bef3f5a5fa1f (no credentials input; stuttery experience when panning or zooming the viewport)

After: https://github.com/user-attachments/assets/f58d7864-055f-4e1c-a221-57154467c3aa (much the same UX as in the Library, with fully-fledged credentials input support, and much smoother movement around the canvas)

### Changes 🏗️

Frontend:
- Add credentials input support to the Run UX in the Builder
- Pass run inputs instead of storing them on the input nodes
- Re-implement `RunnerInputUI` using `AgentRunDetailsView`; rename to `RunnerInputDialog`
- Make `AgentRunDraftView` more flexible
- Remove `RunnerInputList`, `RunnerInputBlock`
- Make moving around in the Builder smooth by reducing unnecessary re-renders
- Clean up and partially rewrite bead management logic
- Replace `request*` fire-and-forget methods in `useAgentGraph` with direct async action callbacks
- Clean up run input UI components; simplify `RunnerUIWrapper`
- Add an `isEmpty` utility function in `@/lib/utils` (expanding on `_.isEmpty`)
- Fix default value handling in `TypeBasedInput` (note: after all these changes, this may no longer be necessary)
- Improve and clean up Builder test implementations

Backend + API:
- Fix front-end `Node`, `GraphMeta`, and `Block` types
- Small refactor of `Graph` to match the naming of some `LibraryAgent` attributes
- Fix typing of the `list_graphs` and `get_graph_meta_by_store_listing_version_id` endpoints
- Add a `GraphMeta` model and `GraphModel.meta()` shortcut
- Move `POST /library/agents/{library_agent_id}/setup-trigger` to `POST /library/presets/setup-trigger`

Tested: running an agent with credentials inputs from the Builder (beads behave correctly), running without inputs, scheduling from the Builder, adding and searching blocks in the block menu, and all existing `AgentRunDraftView` functionality in the Library (run, schedule, view past runs, and edit an agent's inputs).
309114a727
Merge commit from fork
4ffb99bfb0
feat(backend): Add block error rate monitoring and Discord alerts (#10332)
## Summary

This PR adds a simple block error rate monitoring system that runs every 24 hours (configurable) and sends Discord alerts when blocks exceed the error rate threshold.

## Changes Made

Modified files:
- `backend/executor/scheduler.py`: added a `report_block_error_rates` function and scheduled job
- `backend/util/settings.py`: added configuration options
- `backend/.env.example`: added environment variable examples
- Refactored scheduled job logic in scheduler.py into separate files

## Configuration

```bash
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5              # 50% error rate threshold
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400  # 24 hours
```

## How It Works

1. **Scheduled job**: runs every 24 hours (configurable via `BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS`)
2. **Error rate calculation**: queries the last 24 hours of node executions and calculates error rates per block
3. **Threshold check**: alerts on blocks with ≥50% error rate (configurable via `BLOCK_ERROR_RATE_THRESHOLD`)
4. **Discord alert**: sends the alert using the existing `discord_system_alert` function
5. **Manual execution**: available via the `execute_report_block_error_rates()` scheduler client method

## Alert Format

```
Block Error Rate Alert:
🚨 Block 'DeprecatedGPT3Block' has 75.0% error rate (75/100) in the last 24 hours
🚨 Block 'BrokenImageBlock' has 60.0% error rate (30/50) in the last 24 hours
```

## Testing

Can be tested manually via:

```python
from backend.executor.scheduler import SchedulerClient

client = SchedulerClient()
result = client.execute_report_block_error_rates()
```

## Implementation Notes

- Follows the same pattern as the `report_late_executions` function
- Only checks blocks with ≥10 executions to avoid noise
- Uses existing Discord notification infrastructure
- Configurable threshold and check interval, with proper error handling and logging

Test plan: configuration loads correctly, error rate calculation verified against the existing database, Discord integration works, manual execution via the scheduler client works, and the scheduled job runs correctly.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
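The per-block calculation in step 2 amounts to grouping executions by block and dividing failures by totals. A sketch under the PR's stated ≥10-execution noise floor; the data access (a list of precomputed tuples rather than the real database query) is an assumption:

```python
from collections import Counter


def blocks_over_threshold(
    executions: list[tuple[str, bool]],  # (block_name, failed) for the last 24h
    threshold: float = 0.5,
    min_samples: int = 10,
) -> dict[str, str]:
    totals: Counter = Counter()
    failures: Counter = Counter()
    for block_name, failed in executions:
        totals[block_name] += 1
        if failed:
            failures[block_name] += 1
    # Keep only blocks with enough samples and an error rate at/above threshold.
    return {
        name: f"{failures[name] / total:.1%} error rate ({failures[name]}/{total})"
        for name, total in totals.items()
        if total >= min_samples and failures[name] / total >= threshold
    }
```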
7688a9701e
perf(backend/db): Optimize StoreAgent and Creator views with database indexes and materialized views (#10084)
### Summary
Performance optimization for the platform's store and creator functionality, adding targeted database indexes and materialized views to reduce query execution time.

### Changes 🏗️

**Database performance optimizations:**
- Added strategic database indexes for the `StoreListing`, `StoreListingVersion`, `StoreListingReview`, `AgentGraphExecution`, and `Profile` tables
- Implemented materialized views (`mv_agent_run_counts`, `mv_review_stats`) to cache expensive aggregation queries
- Optimized the `StoreAgent` and `Creator` views to use the materialized views and improved query patterns
- Added an automated refresh function with 15-minute scheduling for the materialized views (when the pg_cron extension is available)

**Key performance improvements:**
- Filtered indexes on approved store listings to speed up marketplace queries
- GIN index on categories for faster category-based searches
- Composite indexes for common query patterns (e.g., listing + version lookups)
- Pre-computed agent run counts and review statistics to eliminate expensive aggregations

Tested: the migration runs without errors, the materialized views are created and populated correctly, the `StoreAgent` and `Creator` view queries return expected results, the automatic refresh function works, and the rollback migration removes all changes. No configuration changes were required, as this is purely a database schema optimization.
243400e128
feat(platform): Add Block Development SDK with auto-registration system (#10074)
## Block Development SDK - Simplifying Block Creation

### Problem
Creating a new block currently requires manual updates to 5+ files scattered across the codebase:
- `backend/data/block_cost_config.py` - manually add block costs
- `backend/integrations/credentials_store.py` - add default credentials
- `backend/integrations/providers.py` - register new providers
- `backend/integrations/oauth/__init__.py` - register OAuth handlers
- `backend/integrations/webhooks/__init__.py` - register webhook managers

This creates significant friction for developers, increases the chance of configuration errors, and makes the platform difficult to scale.

### Solution
This PR introduces a **Block Development SDK** that provides:
- A single import for all block development needs: `from backend.sdk import *`
- Automatic registration of all block configurations
- Zero external file modifications required
- Provider-based configuration with inheritance

### Changes 🏗️

#### 1. New SDK module (`backend/sdk/`)
- `__init__.py`: unified exports of 68+ block development components
- `registry.py`: central auto-registration system for all block configurations
- `builder.py`: `ProviderBuilder` class for fluent provider configuration
- `provider.py`: provider configuration management
- `cost_integration.py`: automatic cost application system

#### 2. Provider builder pattern

```python
# Configure once, use everywhere
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_SERVICE_API_KEY", "My Service API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)
```

#### 3. Automatic cost system
- Provider base costs are automatically applied to all blocks using that provider
- Override with the `@cost` decorator for block-specific pricing
- Tiered pricing support with cost filters

#### 4. Dynamic provider support
- Modified the `ProviderName` enum to accept any string via the `_missing_` method
- No more manual enum updates for new providers

#### 5. Application integration
- Added `sync_all_provider_costs()` to `initialize_blocks()` for automatic cost registration
- Maintains full backward compatibility with existing blocks

#### 6. Comprehensive examples (`backend/blocks/examples/`)
- `simple_example_block.py` - basic block structure
- `example_sdk_block.py` - provider with credentials
- `cost_example_block.py` - various cost patterns
- `advanced_provider_example.py` - custom API clients
- `example_webhook_sdk_block.py` - webhook configuration

#### 7. Extensive testing
- 6 new test modules with 30+ test cases, including integration tests for all SDK features, cost calculation verification, and provider registration tests

### Before vs After

**Before SDK:** multiple complex imports from `backend.data.block` and `backend.data.model`, plus manual edits to `block_cost_config.py`, `credentials_store.py`, the `providers.py` enum, `oauth/__init__.py`, and `webhooks/__init__.py`.

**After SDK:**

```python
from backend.sdk import *

# Everything configured in one place
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_API_KEY", "My API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)

class MyBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = my_provider.credentials_field()
        data: String = SchemaField(description="Input data")

    class Output(BlockSchema):
        result: String = SchemaField(description="Result")

# That's it! No external files to modify
```

### Impact
- **Developer experience**: block creation time reduced from hours to minutes
- **Maintainability**: all block configuration in one place
- **Scalability**: support hundreds of blocks without enum updates
- **Type safety**: full IDE support with proper type hints
- **Testing**: easier to test blocks in isolation

Tested: blocks created with the SDK provider pattern, automatic and overridden cost registration, custom providers without enum modifications, all example blocks, backward compatibility with existing blocks, credentials and webhook configuration, and application startup with auto-registration (30+ SDK tests, all passing). No `.env.example` or `docker-compose.yml` changes needed.

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
c77cb1fcfb
fix(backend/library): Fix sub_graphs check in LibraryAgent.from_db(..) (#10316)
- Follow-up fix for #10301

The condition that determines whether `LibraryAgent.credentials_input_schema` is set incorrectly handled empty lists of sub-graphs.

### Changes 🏗️
- Check `sub_graphs is not None` rather than relying on the boolean interpretation of `sub_graphs`

Trivial change, no test needed.
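The distinction matters because an empty list is falsy in Python. A two-line illustration (the function name is hypothetical):

```python
def has_sub_graph_data(sub_graphs: list | None) -> bool:
    # `if sub_graphs:` treats a loaded-but-empty list ([]) the same as
    # "not loaded" (None); only None should count as missing data.
    return sub_graphs is not None
```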
ab4eb10c3d
chore(backend/deps-dev): Bump the development-dependencies group across 1 directory with 4 updates (#10173)
Bumps the development-dependencies group with 4 updates in the /autogpt_platform/backend directory: [poethepoet](https://github.com/nat-n/poethepoet) (0.34.0 to 0.35.0), [pyright](https://github.com/RobertCraigie/pyright-python) (1.1.401 to 1.1.402), [requests](https://github.com/psf/requests), and [ruff](https://github.com/astral-sh/ruff).

poethepoet 0.35.0 highlights: support for script tasks that run packages with a `__main__` module (#300); virtualenv locations can reference special git-related env vars (#302); simplified CLI help page header (#291); and fixes around hidden tasks in the poetry plugin (#292), symlink resolution in PoetryExecutor (#293), invalid help options (#294), task arg validation (#295), switch case value coercion (#296), and help output (#299, #301).
42e141012f
chore(backend/deps): Bump the production-dependencies group across 1 directory with 20 updates (#10242)
b7f9dcf419
fix(backend): add back perplexity_llama (#10327)
We flew too close to the sun.

### Changes 🏗️

Adds back the `perplexity_llama` model: it must be removed only after existing references have been migrated, not before, otherwise the system automatically migrates it to a different model (so that it is one that exists).

Tested locally; no impact, since this simply re-enables the model.
a4ff8402f1
feat(backend): add Perplexity Sonar models (#10326)
Adds the latest Perplexity Sonar models from OpenRouter and removes the decommissioned Sonar Large model.

### Changes 🏗️
- Added constants for `perplexity/sonar`, `perplexity/sonar-pro`, and `perplexity/sonar-deep-research` in the `LlmModel` enum
- Included metadata entries for the new models
- Mapped the new models in the cost configuration with their respective pricing tiers
- Removed the outdated Sonar Large model

Tested with `poetry run format` and `poetry run test`.
5ff6d2ca56
fix(backend): Fix stop graph response on already stopped graph
02d3b42745
fix(backend;frontend): Add auto-type conversion support for optional types (#10325)
Auto type conversion doesn't work on optional types. To reproduce: use the AgentNumberInput block and try to pass a string value to a sub-agent that uses it.

<img width="981" alt="image" src="https://github.com/user-attachments/assets/92198d32-bce9-44fd-a9b0-b7b431aec3ba" />

### Changes 🏗️
Added auto-conversion support for optional types.

Tested by converting a string to `Optional[int]`.
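The core of such a fix is unwrapping `Optional[T]` before attempting the conversion. A minimal sketch of the idea, not the platform's actual converter:

```python
from typing import Optional, Union, get_args, get_origin


def convert(value, target_type):
    # Optional[T] is Union[T, None]; unwrap it before converting.
    if get_origin(target_type) is Union:
        if value is None:
            return None
        non_none = [a for a in get_args(target_type) if a is not type(None)]
        if len(non_none) == 1:
            target_type = non_none[0]
    return target_type(value)


assert convert("42", Optional[int]) == 42
assert convert(None, Optional[int]) is None
```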
171deea806
feat(block): Added best-effort support of multiple/parallel tool calls for SmartDecisionMaker Block
149bbd910a
feat(block): Introduce GoogleSheetsFindBlock
c6741e7c14
fix(block): Fix broken SmartDecisionMaker block using Anthropic
358ce1d258
fix(backend/library): Include subgraphs in get_library_agent (#10301)
- Resolves #10300
- Follow-up fix to #10167

### Changes 🏗️
- Include sub-graphs in the `get_library_agent` endpoint

Tested: executing an agent with sub-graphs that require credentials works.
a5691c0e89
feat(block): Add dict append capability for GoogleSheetsAppendBlock
0b35dff1e6
fix(block): Fix failing GoogleSheetsAppendBlock on undefined append range
6cf9136cdd
feat(block): Support URL format input instead of ID for Google Sheet blocks
5d91a9c9b9
feat(block): Make RetrieveInformationBlock output static
e3d84d87f8
fix(blocks): restore batching logic in CreateListBlock
During the data manipulation refactoring, CreateListBlock lost its batching functionality (the `max_size` and `max_tokens` parameters). This restores the original implementation, which can yield lists in chunks based on size or token limits.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
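A sketch of that style of batching, using the two limits named in the commit message; the token counter (plain string length here) and the exact flush semantics are assumptions:

```python
from typing import Iterable, Iterator


def batch_items(
    items: Iterable,
    max_size: int | None = None,
    max_tokens: int | None = None,
    count_tokens=lambda item: len(str(item)),  # stand-in for a real tokenizer
) -> Iterator[list]:
    """Yield lists of items, flushing when either limit would be exceeded."""
    batch: list = []
    tokens = 0
    for item in items:
        item_tokens = count_tokens(item)
        if batch and (
            (max_size is not None and len(batch) >= max_size)
            or (max_tokens is not None and tokens + item_tokens > max_tokens)
        ):
            yield batch
            batch, tokens = [], 0
        batch.append(item)
        tokens += item_tokens
    if batch:
        yield batch
```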
9fecbe2a31
feat(blocks): add plural outputs where blocks yield singular values in loops (#10304)
## Summary
This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).
## Changes
### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`
### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results
## Pattern Applied
The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items
# Then individual items for iteration
for item in all_items:
    yield "singular_output", item
```
## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` - ✅ Passed
- `poetry run test` - ✅ Blocks validated
## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
4744e0f6b1
feat(blocks): add data manipulation blocks and refactor basic.py (#10261)
### Changes 🏗️

#### New List Operation Blocks
- `GetListItemBlock` retrieves an element at a specific index, with negative index support
- `RemoveFromListBlock` removes or pops items and optionally returns the removed value
- `ReplaceListItemBlock` overwrites an item at a given index and returns the old value
- `ListIsEmptyBlock` checks whether a list has no elements

#### New Dictionary Operation Blocks (for consistency with list operations)
- `RemoveFromDictionaryBlock` removes key-value pairs and optionally returns the removed value
- `ReplaceDictionaryValueBlock` replaces the value for a specified key and returns the old value
- `DictionaryIsEmptyBlock` checks whether a dictionary has no elements

#### Code Organization & Refactoring
- Created `data_manipulation.py`: moved all dictionary and list manipulation blocks to a dedicated file to keep `basic.py` from growing too large
- Refactored `basic.py`: now contains only core utility blocks (`FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`, `NoteBlock`, `UniversalTypeConverterBlock`)
- Ensured consistency: dictionary and list blocks now have equivalent functionality (Create, Add, Find, Remove, Replace, IsEmpty, plus Get for lists) and follow the same patterns
- Removed the redundant `GetDictionaryValueBlock`, since `FindInDictionaryBlock` already provides comprehensive lookup functionality
- Preserved all existing block UUIDs to ensure no breaking changes

Tested with `poetry run format`, `poetry run test`, and `pnpm format`. No configuration changes.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
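The list blocks map closely onto plain Python list semantics. An illustrative sketch of the get/remove behaviors described above, not the blocks' real schemas:

```python
def get_list_item(items: list, index: int):
    # Negative indices count from the end, as in Python itself.
    if not -len(items) <= index < len(items):
        raise IndexError(f"index {index} out of range for list of length {len(items)}")
    return items[index]


def remove_from_list(items: list, index: int) -> tuple[list, object]:
    # Pop the item and also return it, mirroring the "optionally return
    # the removed value" behavior described for RemoveFromListBlock.
    removed = items.pop(index)
    return items, removed
```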
24b4ab9864
feat(block): Enhance Mem0 blocks filtering & add more GoogleSheets blocks (#10287)
The block library was missing two key capabilities that keep coming up
in real-world agent flows:
1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.
This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.
---
### Changes 🏗️
#### 📚 Mem0 block enhancements
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.
#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold, italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |
* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.
---
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
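The new scoping inputs compose into a single filter. The filter shape below is an assumption for illustration, not Mem0's exact filter schema:

```python
def build_memory_filters(
    user_id: str,
    categories: list[str] | None = None,
    run_id: str | None = None,
    agent_id: str | None = None,
) -> dict:
    # AND together every requested scope so agents working side-by-side
    # for the same user don't read each other's memories.
    clauses: list[dict] = [{"user_id": user_id}]
    if categories:  # categories_filter
        clauses.append({"categories": {"in": categories}})
    if run_id:  # limit_memory_to_run
        clauses.append({"run_id": run_id})
    if agent_id:  # limit_memory_to_agent
        clauses.append({"agent_id": agent_id})
    return {"AND": clauses}
```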
d4646c249d
feat(backend): implement KV data storage blocks
095199bfa6
feat(backend): implement KV data storage blocks (#10294)
This PR introduces key-value storage blocks.

### Changes 🏗️
- **Database schema**: add an `AgentNodeExecutionKeyValueData` table with a composite primary key (userId, key)
- **Persistence blocks**: create `PersistInformationBlock` and `RetrieveInformationBlock` in `persistence.py`
- **Scope-based storage**: support `within_agent` (per agent) vs `across_agents` (global per user) persistence
- **Key structure**: use a formal `#` delimiter for storage keys: `agent#{graph_id}#{key}` and `global#{key}`

Tested: all 244 block tests pass; both blocks verified with mock data storage and retrieval; scope-based key generation, database function integration through all manager classes, lint and type checks, and the included database migration all verified. No environment or docker-compose changes required, as this uses existing database infrastructure.

Co-authored-by: Claude <noreply@anthropic.com>
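The key structure is concrete enough to sketch. The helper name and signature below are assumptions, but the key layout is the one the PR specifies:

```python
def storage_key(scope: str, key: str, graph_id: str | None = None) -> str:
    # Key layout from the PR: agent#{graph_id}#{key} or global#{key}
    if scope == "within_agent":
        if not graph_id:
            raise ValueError("within_agent scope requires a graph_id")
        return f"agent#{graph_id}#{key}"
    if scope == "across_agents":
        return f"global#{key}"
    raise ValueError(f"unknown scope: {scope}")
```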
b1f3122243
fix(frontend): Add fallback for NEXT_PUBLIC_FRONTEND_BASE_URL to API proxy (#10299)
- Resolves #10298
- Follow-up to #10270

### Changes 🏗️
Amend two changes from #10270:
- Add a fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert the rename of `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`

Tested by running the platform locally without `NEXT_PUBLIC_FRONTEND_BASE_URL` set: `/library` loads normally.
f1cc2afbda
feat(backend): improve stop graph execution reliability (#10293)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for the graph execution lifecycle
- Added timeout handling for graph termination with proper status updates
- Exposed a new API for stopping graphs based on only graph_id or user_id
- Refactored logging metadata structure for better error tracking

## Key Changes

### Backend
- **Graph execution management**: enhanced `stop_graph_execution` with timeout-based waiting and proper status transitions
- **Execution cleanup**: added proper cancellation waiting with timeout handling in the executor manager
- **Logging**: centralized the `LogMetadata` class and improved error logging consistency
- **API**: added bulk graph execution stopping functionality
- **Error handling**: better exception handling and status management for failed/cancelled executions

### Frontend
- **Status safety**: added null safety checks for status chips to prevent runtime errors
- **Execution control**: simplified stop execution request handling

Test plan: graph executions can be stopped and reach a terminal state; timeout scenarios for stuck executions; running node executions are cleaned up when a graph is cancelled; status chips handle undefined statuses gracefully; bulk execution stopping works.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
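Timeout-based waiting for termination typically looks like the sketch below. The execution handle and its methods are hypothetical stand-ins, not the PR's actual interfaces:

```python
import asyncio


async def stop_graph_execution(execution, timeout: float = 15.0) -> None:
    execution.request_cancel()  # hypothetical cancellation signal
    try:
        # Wait for the run to reach a terminal state on its own...
        await asyncio.wait_for(execution.wait_until_finished(), timeout)
    except asyncio.TimeoutError:
        # ...and force a terminal status if it is stuck.
        await execution.set_status("TERMINATED")
```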
||
|
|
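A minimal, self-contained sketch of the timeout-based stop flow described above; the status names and polling approach are assumptions, not the PR's actual implementation.

```python
import asyncio

_status: dict[str, str] = {}  # toy in-memory execution status store
TERMINAL = {"COMPLETED", "FAILED", "TERMINATED"}

async def wait_until_terminal(exec_id: str) -> None:
    # Poll until the execution reaches a terminal state.
    while _status.get(exec_id) not in TERMINAL:
        await asyncio.sleep(0.1)

async def stop_graph_execution(exec_id: str, timeout: float = 5.0) -> None:
    _status.setdefault(exec_id, "RUNNING")
    # ... signal cancellation to the running executor here ...
    try:
        await asyncio.wait_for(wait_until_terminal(exec_id), timeout)
    except asyncio.TimeoutError:
        # Stuck execution: force a terminal status so it cannot linger forever.
        _status[exec_id] = "TERMINATED"
```

The key property is that the stop call always leaves the execution in a terminal state, even when cleanup hangs.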
47f503f223 |
feat(backend): Support aiohttp.BasicAuth in make_request (#10283)
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284 ### Changes 🏗️ - Allows passing an `aiohttp.BasicAuth` object directly to the `auth` parameter of the `make_request` function. - Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects before making the request (see the sketch after this entry). Fixes [AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/). The issue was that aiohttp's `ClientSession.request` received a plain tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing an OAuth2 token exchange failure. This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run ID: 185767 Not quite right? [Click here to continue debugging with Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true) ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) <details> <summary>Examples of configuration changes</summary> - Changing ports - Adding new services that need to communicate with each other - Secrets or environment variable changes - New or infrastructure changes such as databases </details> Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com> |
||
|
|
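The fix boils down to normalizing tuple credentials before they reach `ClientSession.request`. A sketch under the assumption that `make_request` accepts either form; `normalize_auth` is an illustrative name, not the codebase's actual helper.

```python
import aiohttp

def normalize_auth(
    auth: "aiohttp.BasicAuth | tuple[str, str] | None",
) -> "aiohttp.BasicAuth | None":
    if isinstance(auth, tuple):
        # ClientSession.request rejects plain tuples for `auth`;
        # convert (login, password) into a proper BasicAuth object.
        return aiohttp.BasicAuth(*auth)
    return auth  # already a BasicAuth instance, or None
```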
198b3d9f45 |
fix(backend): Avoid swallowing exception on graph execution failure (#10260)
A graph execution that fails due to an interruption or unknown error should be enqueued back into the queue. However, swallowing the error means the execution is never marked as a failure. ### Changes 🏗️ * Stop updating the graph execution status on every node execution step. * Added a guard rail that prevents completing a graph execution while it is in a non-completed execution status. * Avoid acknowledging messages from the queue until the graph execution has actually completed (see the sketch after this entry). ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Run graph execution, kill the process, re-run the process --------- Co-authored-by: Swifty <craigswift13@gmail.com> |
||
|
|
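The "don't acknowledge until completed" guard rail can be sketched as below; `ack`/`nack` follow common message-broker conventions and the status names are assumptions, not the PR's actual code.

```python
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "TERMINATED"}

def on_execution_update(message, execution_status: str) -> None:
    if execution_status in TERMINAL_STATUSES:
        message.ack()   # safe to drop: the execution truly finished or failed
    else:
        message.nack()  # requeue, so an interrupted execution is retried
```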
9a6ae90d12 |
fix(backend): Convert pyclamd to aioclamd for anti-virus scan concurrency improvement (#10258)
Currently, we use PyClamd to run an anti-virus scan on every file uploaded to the platform. We split the file into small chunks and check the chunks serially. The socket is not thread-safe, so leveraging concurrency would require creating multiple sockets across many threads. To make this step concurrent while keeping it fully async, we migrate PyClamd to aioclamd. ### Changes 🏗️ Convert pyclamd to aioclamd and scan chunks in parallel, with a semaphore capping the concurrency (see the sketch after this entry). #### Side Note Shout-out to @tedyu for raising this improvement idea. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Execute file upload into the platform |
||
|
|
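The semaphore-capped chunk scan can be sketched as follows. `scan_chunk` is a placeholder for the async ClamAV INSTREAM call (the exact aioclamd API is not reproduced here), so treat this as an illustration of the concurrency pattern rather than the PR's code.

```python
import asyncio

MAX_CONCURRENT_SCANS = 5  # illustrative concurrency cap

async def scan_chunk(chunk: bytes) -> bool:
    await asyncio.sleep(0)  # placeholder for the async ClamAV INSTREAM scan
    return False            # False = chunk is clean

async def scan_file(data: bytes, chunk_size: int = 1024 * 1024) -> bool:
    sem = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def scan_with_limit(chunk: bytes) -> bool:
        async with sem:  # only N chunks are in flight at any moment
            return await scan_chunk(chunk)

    chunks = [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = await asyncio.gather(*(scan_with_limit(c) for c in chunks))
    return any(results)  # True if any chunk was flagged as infected
```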
b32ac898db |
fix(frontend): migrate to NEXT_PUBLIC_FRONTEND_BASE_URL (#10270)
## Changes 🏗️ We need to rename `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL` because it is needed by the new API client on the frontend to make requests. The `NEXT_PUBLIC` prefix is important so that the variable is available on the client. ## Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally - [x] The library and other pages work |
||
|
|
f3202fa776 |
feat(platform/builder): Hide action buttons on triggered graphs (#10218)
- Resolves #10217 https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853 ### Changes 🏗️ Frontend: - Show message instead of action buttons ("Run" etc) when graph has webhook node(s) - Fix check for webhook nodes used in `BlocksControl` and `FlowEditor` - Clean up `PrimaryActionBar` implementation - Add `accent` variant to `ui/button:Button` API: - Add `GET /library/agents/by-graph/{graph_id}` endpoint ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - Go to Builder - Add a trigger block - [x] -> action buttons disappear; message shows in their place - Save the graph - Click the "Agent Library" link in the message - [x] -> app navigates to `/library/agents/[id]` for the newly created agent |
||
|
|
4d0db27d5e |
feat(block): Improve CreateListBlock to support batching based on token count (#10257)
CreateListBlock can currently only batch lists based on a fixed size limit, but sometimes the batch size needs to be adjusted dynamically based on token count. ### Changes 🏗️ Improve CreateListBlock to support batching based on token count (see the sketch after this entry). ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Test CreateListBlock |
||
|
|
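Token-based batching amounts to closing a batch just before it would exceed the token budget. A sketch with an assumed ~4-characters-per-token heuristic standing in for a real tokenizer:

```python
def count_tokens(item: str) -> int:
    # Crude stand-in for a real tokenizer such as tiktoken.
    return max(1, len(item) // 4)

def batch_by_tokens(items: list[str], max_tokens: int) -> list[list[str]]:
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for item in items:
        cost = count_tokens(item)
        if current and used + cost > max_tokens:
            batches.append(current)  # close the batch before it overflows
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        batches.append(current)
    return batches
```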
5421ccf86a |
feat(platform/library): Scheduling UX (#10246)
Complete the implementation of the Agent Run Scheduling UX in the Library. Demo: https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2 ### Changes 🏗️ Frontend: - Add "Schedule" button + dialog + logic to `AgentRunDraftView` - Update corresponding logic on `AgentRunsPage` - Add schedule name field to `CronSchedulerDialog` - Amend Builder components `useAgentGraph`, `FlowEditor`, `RunnerUIWrapper` to also handle schedule name input - Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog` - Make `AgentScheduleDetailsView` more fully functional - Add schedule description to info box - Add "Delete schedule" button - Update schedule create/select/delete logic in `AgentRunsPage` - Improve schedule UX in `AgentRunsSelectorList` - Switch tabs automatically when a run or schedule is selected - Remove now-redundant schedule filters - Refactor `@/lib/monitor/cronExpressionManager` into `@/lib/cron-expression-utils` Backend + API: - Add name and credentials to graph execution schedule job params - Update schedule API - `POST /schedules` -> `POST /graphs/{graph_id}/schedules` - Add `GET /graphs/{graph_id}/schedules` - Add not found error handling to `DELETE /schedules/{schedule_id}` - Minor refactoring Backend: - Fix "`GraphModel`->`NodeModel` is not fully defined" error in scheduler - Add support for all exceptions defined in `backend.util.exceptions` to RPC logic in `backend.util.service` - Fix inconsistent log prefixing in `backend.executor.scheduler` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - Create a simple agent with inputs and blocks that require credentials; go to this agent in the Library - Fill out the inputs and click "Schedule"; make it run every minute (for testing purposes) - [x] -> newly created schedule appears in the list - [x] -> scheduled runs are successful - Click "Delete schedule" - [x] -> schedule no longer in list - [x] -> on deleting the last schedule, view switches back to the Runs list - [x] -> no new runs occur from the deleted schedule |
||
|
|
c4056cbae9 |
feat(block): Introduce context-window aware prompt compaction for LLM & SmartDecision blocks (#10252)
Calling an LLM through the current block can sometimes break when the prompt exceeds the model's context window. A prompt compaction algorithm (enabled by default) is now applied to make sure the prompt sent stays within the context window limit (see the sketch after this entry). ### Changes 🏗️ ```` Heuristics -------- * Prefer shrinking the content rather than truncating the conversation. * If compacting the conversation content is still not enough, reduce the conversation list. * The rest of the implementation is adjusted to minimize LLM call breakage. Strategy -------- 1. **Token-aware truncation** – progressively halve a per-message cap (`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the *content* of every message except the first and last. Tool shells are included: we keep the envelope but shorten huge payloads. 2. **Middle-out deletion** – if still over the limit, delete whole messages working outward from the centre, **skipping** any message that contains ``tool_calls`` or has ``role == "tool"``. 3. **Last-chance trim** – if still too big, truncate the *first* and *last* message bodies down to `floor_cap` tokens. 4. If the prompt is *still* too large: • raise ``ValueError`` when ``lossy_ok == False`` (default) • return the partially-trimmed prompt when ``lossy_ok == True`` ```` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Run an SDM block in a loop until it hits 200,000 tokens using the OpenAI o3 model. |
||
|
|
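A compact sketch of the first two heuristics above (token-aware truncation with a halving cap, then middle-out deletion that skips tool messages); the token counter and message shape are assumptions, not the block's real implementation.

```python
def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude ~4 chars/token stand-in

def prompt_tokens(msgs: list[dict]) -> int:
    return sum(count_tokens(m.get("content") or "") for m in msgs)

def compact(msgs: list[dict], limit: int,
            start_cap: int = 2048, floor_cap: int = 64) -> list[dict]:
    msgs = [dict(m) for m in msgs]
    # 1. Token-aware truncation: halve a per-message cap and shorten every
    #    message body except the first and last until the prompt fits.
    cap = start_cap
    while prompt_tokens(msgs) > limit and cap >= floor_cap:
        for m in msgs[1:-1]:
            if m.get("content"):
                m["content"] = m["content"][: cap * 4]  # chars ≈ tokens * 4
        cap //= 2

    def deletable(m: dict) -> bool:
        # Never delete tool calls or tool results.
        return m.get("role") != "tool" and not m.get("tool_calls")

    # 2. Middle-out deletion: drop whole messages outward from the centre.
    while prompt_tokens(msgs) > limit and len(msgs) > 2:
        victims = [m for m in msgs[1:-1] if deletable(m)]
        if not victims:
            break  # only protected messages remain between first and last
        centre = len(msgs) // 2
        msgs.remove(min(victims, key=lambda m: abs(msgs.index(m) - centre)))
    return msgs
```

Step 3 (trimming the first and last bodies down to `floor_cap`) and the `lossy_ok` escape hatch would layer on top of this in the same fashion.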
c01beaf003 |
fix(blocks): Restore GithubReadPullRequestBlock diff output (#10256)
- Follow-up fix to #10138 AI erased a bit of functionality from the `GithubReadPullRequestBlock` in #10138. This PR puts it back and improves on the old format. ### Changes 🏗️ - Include full diff in `changes` output of `GithubReadPullRequestBlock` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [ ] I have tested my changes according to the test plan: - Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled - [ ] -> block runs successfully - [ ] -> full diff included in `changes` output |
||
|
|
77e99e9739 |
feat(blocks): Add more Revid.ai media generation blocks (#9931)
### Why these changes are needed 🧐 Revid.ai offers several specialised, undocumented rendering flows beyond the basic "text-to-video" endpoint our platform already supported. These flows let users: 1. **Generate ads** from copy plus product images (30-second vertical spots). 2. **Turn a single creative prompt** into a fully AI-generated video (no multi-line script). 3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal for product-led demos. Without first-class blocks for these flows, users were forced to drop down to raw HTTP nodes, losing schema validation, test mocks, and credential management. ### Changes 🏗️ - Added a new category to `BlockCategory` in `block.py`: `MARKETING = "Block that helps with marketing"` - ai_shortform_video_block.py: refactored out a shared `_RevidMixin` (webhook + polling helpers) to keep the new blocks DRY - Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members (required by Revid sample payloads) - `AIAdMakerVideoCreatorBlock`: implements the ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media` - `AIPromptToVideoCreatorBlock`: implements the prompt-to-video flow with `prompt_target_duration` - `AIScreenshotToVideoAdBlock`: implements the screenshot-to-video-ad flow (avatar narration, background removal) - Added full pydantic schemas, test stubs & mock hooks for each new block, so unit tests pass and the blocks appear in the UI No existing functionality was removed; the current `AIShortformVideoCreatorBlock` is untouched apart from enum imports. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Use the `AI Shortform Video Creator` block to generate a video - [x] Test the `AI Ad Maker Video Creator` block - [x] Test the `AI Screenshot to Video Ad` block --------- Co-authored-by: Bently <Github@bentlybro.com> |
||
|
|
7f7c387156 | fix(block): Fix broken SearchPeople block | ||
|
|
21cf263eea | fix(block): Fix typo in Apollo block | ||
|
|
500952a15f | fix(block): Fix typo in Apollo block |