Mirror of https://github.com/Significant-Gravitas/AutoGPT.git
Synced 2026-02-09 14:25:25 -05:00 at commit c474c51cd0b6a1bf571e41c37168622d4efbade9 (7891 commits).
c474c51cd0 | refactor(frontend): Update styles and components in HorizontalScroll and ChatSidebar

- Changed gradient background colors in HorizontalScroll to use 'background' instead of 'white'.
- Replaced SpinnerGapIcon with LoadingSpinner in ChatSidebar for improved loading indication.
- Introduced BlockCard component in FindBlocks for better block representation.
- Integrated HorizontalScroll in FindBlocksTool to enhance block navigation.

These changes improve UI consistency and enhance the user experience in the application.
5df3422f8f | fix backend format
8546bc9a3d | Merge branch 'dev' into abhi/check-ai-sdk-ui
a329831b0b | feat(backend): Add ClamAV scanning for local file paths (#11988)

## Context

From PR #11796 review discussion. Files processed by the video blocks (downloads, uploads, generated videos) should be scanned through ClamAV for malware detection.

## Problem

`store_media_file()` in `backend/util/file.py` already scans:

- `workspace://` references
- Cloud storage paths
- Data URIs (`data:...`)
- HTTP/HTTPS URLs

**But local file paths were NOT scanned.** The `else` branch only verified that the file exists. This gap affected video processing blocks (e.g., `LoopVideoBlock`, `AddAudioToVideoBlock`) that:

1. Download/receive input media
2. Process it locally (loop, add audio, etc.)
3. Write output to a temp directory
4. Call `store_media_file(output_filename, ...)` with a local path → **skipped virus scanning**

## Solution

Added virus scanning to the local file path branch:

```python
# Virus scan the local file before any further processing
local_content = target_path.read_bytes()
if len(local_content) > MAX_FILE_SIZE_BYTES:
    raise ValueError(...)
await scan_content_safe(local_content, filename=sanitized_file)
```

## Changes

- `backend/util/file.py` - Added ~7 lines to scan local files (consistent with other input types)
- `backend/util/file_test.py` - Added 2 test cases for local file scanning

## Risk Assessment

- **Low risk:** Single point of change, follows existing pattern
- **Backwards compatible:** No API changes
- **Fail-safe:** If scanning fails, the file is rejected (existing behavior)

Closes SECRT-1904

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
98dd1a9480 | chore(libs/deps): Bump cryptography from 45.0.6 to 46.0.1 in /autogpt_platform/autogpt_libs (#10968)

Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.6 to 46.0.1. From [cryptography's changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst):

**46.0.1 - 2025-09-16**

- Fixed an issue where users installing via `pip` on Python 3.14 development versions would not properly install a dependency.
- Fixed an issue building the free-threaded macOS 3.14 wheels.

**46.0.0 - 2025-09-16**

- **BACKWARDS INCOMPATIBLE:** Support for Python 3.7 has been removed.
- Support for OpenSSL < 3.0 is deprecated and will be removed in the next release.
- Support for `x86_64` macOS (including publishing wheels) is deprecated and will be removed in two releases. We will switch to publishing an `arm64`-only wheel for macOS.
- Support for 32-bit Windows (including publishing wheels) is deprecated and will be removed in two releases. Users should move to a 64-bit Python installation.
- Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.5.3.
- We now build `ppc64le` `manylinux` wheels and publish them to PyPI.
- We now build `win_arm64` (Windows on Arm) wheels and publish them to PyPI.
- Added support for free-threaded Python 3.14.
- Removed the deprecated `get_attribute_for_oid` method on `cryptography.x509.CertificateSigningRequest`. Users should use `cryptography.x509.Attributes.get_attribute_for_oid` instead.
- Removed the deprecated `CAST5`, `SEED`, `IDEA`, and `Blowfish` classes from the cipher module. These are still available in `/hazmat/decrepit/index`.
- In X.509, when performing a PSS signature with a SHA-3 hash, it is now encoded with the official NIST SHA3 OID.

**45.0.7 - 2025-09-01**

- Added a function to support an upcoming `pyOpenSSL` release.
9c7c598c7d | chore(deps): bump peter-evans/create-pull-request from 7 to 8 (#11663)

Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7 to 8. From [the release notes](https://github.com/peter-evans/create-pull-request/releases):

**Create Pull Request v8.0.0**

What's new in v8:

- Requires [Actions Runner v2.327.1](https://github.com/actions/runner/releases/tag/v2.327.1) or later if you are using a self-hosted runner, for Node 24 support.

What's Changed:

- chore: Update checkout action version to v6 by @yonas in peter-evans/create-pull-request#4258
- Update actions/checkout references to @v6 in docs by @Copilot in peter-evans/create-pull-request#4259
- feat: v8 by @peter-evans in peter-evans/create-pull-request#4260

New Contributors: @yonas (peter-evans/create-pull-request#4258), @Copilot (peter-evans/create-pull-request#4259)

[Full Changelog](https://github.com/peter-evans/create-pull-request/compare/v7.0.11...v8.0.0)

**Create Pull Request v7.0.11**

- fix: restrict remote prune to self-hosted runners by @peter-evans in peter-evans/create-pull-request#4250

[Full Changelog](https://github.com/peter-evans/create-pull-request/compare/v7.0.10...v7.0.11)

**Create Pull Request v7.0.10**

⚙️ Fixes an issue where updating a pull request failed when targeting a forked repository with the same owner as its parent.

- build(deps): bump the github-actions group with 2 updates by @dependabot[bot] in peter-evans/create-pull-request#4235
- build(deps-dev): bump prettier from 3.6.2 to 3.7.3 in the npm group by @dependabot[bot] in peter-evans/create-pull-request#4240
- fix: provider list pulls fallback for multi fork same owner by @peter-evans in peter-evans/create-pull-request#4245

New Contributors: @obnyis (peter-evans/create-pull-request#4064)

[Full Changelog](https://github.com/peter-evans/create-pull-request/compare/v7.0.9...v7.0.10)

**Create Pull Request v7.0.9**

⚙️ Fixes an [incompatibility](https://github.com/peter-evans/create-pull-request/issues/4228) with the recently released `actions/checkout@v6`.

- ~70 dependency updates by @dependabot
- docs: fix workaround description about `ready_for_review` by @ybiquitous in peter-evans/create-pull-request#3939
- Docs: `add-paths` default behavior by @joeflack4 in peter-evans/create-pull-request#3928
- docs: update to create-github-app-token v2 by @Goooler in peter-evans/create-pull-request#4063
- Fix compatibility with actions/checkout@v6 by @ericsciple in peter-evans/create-pull-request#4230

New Contributors: @joeflack4 (peter-evans/create-pull-request#3928), @Goooler (peter-evans/create-pull-request#4063), @ericsciple (peter-evans/create-pull-request#4230)

... (truncated)
728c40def5 | fix(backend): replace multiprocessing queue with thread safe queue in ExecutionQueue (#11618)

The `ExecutionQueue` class was using `multiprocessing.Manager().Queue()`, which spawns a subprocess for inter-process communication. However, analysis showed that `ExecutionQueue` is only accessed from threads within the same process, not across processes. This caused:

- Unnecessary subprocess spawning per graph execution
- IPC overhead for every queue operation
- Potential resource leaks if Manager processes weren't properly cleaned up
- Limited scalability when many graphs execute concurrently

### Changes

- Replaced `multiprocessing.Manager().Queue()` with `queue.Queue()` in the `ExecutionQueue` class
- Updated imports: removed `from multiprocessing import Manager` and `from queue import Empty`, added `import queue`
- Updated exception handling from `except Empty:` to `except queue.Empty:`
- Added a comprehensive docstring explaining the bug and fix

**File changed:** `autogpt_platform/backend/backend/data/execution.py`

### Checklist

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified `ExecutionQueue` uses `queue.Queue` (not `multiprocessing.Manager().Queue()`)
  - [x] Tested all queue operations: `add()`, `get()`, `empty()`, `get_or_none()`
  - [x] Verified thread-safety with concurrent producer/consumer threads (100 items)
  - [x] Verified a multi-producer/consumer scenario (3 producers, 2 consumers, 150 items)
  - [x] Confirmed no subprocess spawning when creating multiple queues
  - [x] Code passes Black formatting check

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

> No configuration changes required - this is a code-only fix with no external API changes.

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
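The change above amounts to a small, purely in-process queue. A minimal sketch of such a class is below, using the four operations named in the test plan (`add()`, `get()`, `empty()`, `get_or_none()`); this is an illustration of the described fix, not the actual code in `backend/data/execution.py`:

```python
import queue
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


class ExecutionQueue(Generic[T]):
    """Queue of executions shared between threads of one process (sketch).

    queue.Queue is already thread-safe, so no Manager subprocess or IPC
    is needed when producers and consumers live in the same process.
    """

    def __init__(self) -> None:
        self._queue: "queue.Queue[T]" = queue.Queue()

    def add(self, execution: T) -> T:
        self._queue.put(execution)
        return execution

    def get(self) -> T:
        # Blocks until an item is available.
        return self._queue.get()

    def empty(self) -> bool:
        return self._queue.empty()

    def get_or_none(self) -> Optional[T]:
        # Non-blocking variant: None when the queue is empty.
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            return None
```

Because `queue.Queue` lives entirely in the current process, creating one spawns no subprocess, which is the resource win the PR describes.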
20ed8749d6 | Merge branch 'dev' into abhi/check-ai-sdk-ui
cd64562e1b | chore(libs/deps): bump the production-dependencies group across 1 directory with 8 updates (#11934)

Bumps the production-dependencies group with 8 updates in the /autogpt_platform/autogpt_libs directory:

| Package | From | To |
| --- | --- | --- |
| [fastapi](https://github.com/fastapi/fastapi) | `0.116.1` | `0.128.0` |
| [google-cloud-logging](https://github.com/googleapis/python-logging) | `3.12.1` | `3.13.0` |
| [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) | `9.12.0` | `9.14.1` |
| [pydantic](https://github.com/pydantic/pydantic) | `2.11.7` | `2.12.5` |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | `2.10.1` | `2.12.0` |
| [pyjwt](https://github.com/jpadilla/pyjwt) | `2.10.1` | `2.11.0` |
| [supabase](https://github.com/supabase/supabase-py) | `2.16.0` | `2.27.2` |
| [uvicorn](https://github.com/Kludex/uvicorn) | `0.35.0` | `0.40.0` |

Updates `fastapi` from 0.116.1 to 0.128.0. From [fastapi's releases](https://github.com/fastapi/fastapi/releases):

**0.128.0**

Breaking Changes:

- Drop support for `pydantic.v1`. PR fastapi/fastapi#14609 by @tiangolo.

Internal:

- Run performance tests only on Pydantic v2. PR fastapi/fastapi#14608 by @tiangolo.

**0.127.1**

Refactors:

- Add a custom `FastAPIDeprecationWarning`. PR fastapi/fastapi#14605 by @tiangolo.

Docs:

- Add documentary to website. PR fastapi/fastapi#14600 by @tiangolo.

Translations:

- Update translations for de (update-outdated). PR fastapi/fastapi#14602 by @nilslindemann.
- Update translations for de (update-outdated). PR fastapi/fastapi#14581 by @nilslindemann.

Internal:

- Update pre-commit to use local Ruff instead of hook. PR fastapi/fastapi#14604 by @tiangolo.
- Add missing tests for code examples. PR fastapi/fastapi#14569 by @YuriiMotov.
- Remove `lint` job from `test` CI workflow. PR fastapi/fastapi#14593 by @YuriiMotov.
- Update secrets check. PR fastapi/fastapi#14592 by @tiangolo.
- Run CodSpeed tests in parallel to other tests to speed up CI. PR fastapi/fastapi#14586 by @tiangolo.
- Update scripts and pre-commit to autofix files. PR fastapi/fastapi#14585 by @tiangolo.

**0.127.0**

Breaking Changes:

- Add deprecation warnings when using `pydantic.v1`. PR fastapi/fastapi#14583 by @tiangolo.

Translations:

- Add LLM prompt file for Korean, generated from the existing translations. PR fastapi/fastapi#14546 by @tiangolo.
- Add LLM prompt file for Japanese, generated from the existing translations. PR fastapi/fastapi#14545 by @tiangolo.

Internal:

- Upgrade OpenAI model for translations to gpt-5.2. PR fastapi/fastapi#14579 by @tiangolo.

**0.126.0**

Upgrades:

- Drop support for Pydantic v1, keeping short temporary support for Pydantic v2's `pydantic.v1`. PR fastapi/fastapi#14575 by @tiangolo.

... (truncated)
8fddc9d71f | fix(backend): Reduce GET /api/graphs expense + latency (#11986)

[SECRT-1896: Fix crazy `GET /api/graphs` latency (P95 = 107s)](https://linear.app/autogpt/issue/SECRT-1896)

These changes should decrease the latency of this endpoint by ~~60-65%~~ a lot.

### Changes 🏗️

- Make `Graph.credentials_input_schema` cheaper by avoiding constructing a new `BlockSchema` subclass
- Strip down `GraphMeta` - drop all computed fields
  - Replace with either `GraphModel` or `GraphModelWithoutNodes` wherever those computed fields are used
  - Simplify usage in `list_graphs_paginated` and `fetch_graph_from_store_slug`
- Refactor and clarify relationships between the different graph models
  - Split `BaseGraph` into `GraphBaseMeta` + `BaseGraph`
  - Strip down `Graph` - move `credentials_input_schema` and `aggregate_credentials_inputs` to `GraphModel`
  - Refactor to eliminate the double `aggregate_credentials_inputs()` call in the `credentials_input_schema` call tree
  - Add `GraphModelWithoutNodes` (similar to the current `GraphMeta`)

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `GET /api/graphs` works as it should
  - [x] Running a graph succeeds
  - [x] Adding a sub-agent in the Builder works as it should
3d1cd03fc8 | ci(frontend): disable chromatic for this month (#11994)

### Changes 🏗️

- We reached the max snapshot quota and don't want to upgrade
- Make it run (when re-enabled) on `src/components` changes only, to reduce snapshots

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI - hope for the best
e7ebe42306 | fix(frontend): Revert ThinkingMessage progress bar delay to original values (#11993)
e0fab7e34e | fix(frontend): Improve clarification answer message formatting (#11985)

## Summary

Improves the auto-generated message format when users submit clarification answers in the agent generator.

## Before

```
I have the answers to your questions:
keyword_1: User answer 1
keyword_2: User answer 2
Please proceed with creating the agent.
```

<img width="748" height="153" alt="image" src="https://github.com/user-attachments/assets/7231aaab-8ea4-406b-ba31-fa2b6055b82d" />

## After

```
**Here are my answers:**

> What is the primary purpose?

User answer 1

> What is the target audience?

User answer 2

Please proceed with creating the agent.
```

<img width="619" height="352" alt="image" src="https://github.com/user-attachments/assets/ef8c1fbf-fb60-4488-b51f-407c1b9e3e44" />

## Changes

- Use human-readable question text instead of machine-readable keywords
- Use blockquote format for questions (a natural "quote and reply" pattern)
- Use double newlines for proper Markdown paragraph breaks
- Iterate over the `message.questions` array to preserve the original question order
- Move the handler inside the conditional block for proper TypeScript type narrowing

## Why

- The old format was ugly and hard to read (raw keywords, no line breaks)
- The new format uses a natural "quoting and replying" pattern
- Better readability for both users and the LLM (verified: the backend does NOT parse keywords)

## Linear Ticket

Fixes [SECRT-1822](https://linear.app/autogpt/issue/SECRT-1822)

## Testing

- [ ] Trigger agent creation that requires clarifying questions
- [ ] Fill out the form and submit
- [ ] Verify the message appears with the new blockquote format
- [ ] Verify questions appear in the original order
- [ ] Verify agent generation proceeds correctly

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
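The message-building logic this PR describes (iterate over the questions in order, quote each question, append the answer, join with double newlines) can be sketched as follows. The actual change is TypeScript frontend code; this is a hypothetical Python illustration, and the field names (`keyword`, `question`) are assumptions based on the PR text:

```python
def format_clarification_answers(questions: list[dict], answers: dict[str, str]) -> str:
    """Build the auto-generated Markdown reply (illustrative sketch).

    questions: ordered list of {"keyword": ..., "question": ...} entries;
    iterating this list preserves the original question order.
    answers: mapping from keyword to the user's answer.
    """
    parts = ["**Here are my answers:**"]
    for q in questions:
        parts.append(f"> {q['question']}")       # blockquote the question
        parts.append(answers[q["keyword"]])      # then the user's answer
    parts.append("Please proceed with creating the agent.")
    # Double newlines produce proper Markdown paragraph breaks.
    return "\n\n".join(parts)
```

Joining with `"\n\n"` is what turns the old single-line blob into distinct rendered paragraphs.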
1c9680b6f2 | feat(chat): implement session stream resumption endpoint

- Refactored the existing GET endpoint to allow resuming an active chat session stream without requiring a new message.
- Updated the backend logic to check for an active task and return the appropriate SSE stream, or a 204 No Content response if no task is running.
- Modified the frontend to support the new resume functionality, enhancing the user experience by allowing seamless continuation of chat sessions.
- Updated the OpenAPI documentation to reflect the changes in endpoint behavior and parameters.
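The branch described above (attach to the running task's SSE stream, or 204 if nothing is running) can be sketched without any web framework. All names here are hypothetical stand-ins, not the actual endpoint code:

```python
import json
from typing import Iterable, Iterator, Optional, Tuple


def sse_event(data: dict) -> str:
    """Serialize one Server-Sent Events frame (data line + blank line)."""
    return f"data: {json.dumps(data)}\n\n"


def resume_session(active_task: Optional[Iterable[dict]]) -> Tuple[int, Optional[Iterator[str]]]:
    """Decide the response for a resume request (illustrative sketch).

    Returns (status_code, body_stream): 200 with the live SSE stream when
    a task is still running, or 204 No Content so the client knows there
    is nothing to resume.
    """
    if active_task is None:
        return 204, None
    return 200, (sse_event(chunk) for chunk in active_task)
```

The 204 branch matters for the client: it distinguishes "session exists but is idle" from an error, so the frontend can simply render the stored messages instead of retrying.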
251d26a643 | feat(chat): introduce step lifecycle events for LLM API calls

- Added `StreamStartStep` and `StreamFinishStep` classes to manage the lifecycle of individual LLM API calls within a message.
- Updated `stream_chat_completion` to yield step events, enhancing the ability to visually separate multiple LLM calls.
- Refactored the handling of start and finish events to accommodate the new step lifecycle, improving state management during streaming.
- Adjusted the `stream_registry` to recognize and process the new step events.
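The step lifecycle above can be modeled as bracketing each LLM call's chunks with a start and a finish event. This is a simplified hypothetical model (the real event classes will carry more fields than an index):

```python
from dataclasses import dataclass
from typing import Iterator, List, Union


@dataclass
class StreamStartStep:
    step_index: int


@dataclass
class StreamFinishStep:
    step_index: int


@dataclass
class TextDelta:
    text: str


StreamEvent = Union[StreamStartStep, StreamFinishStep, TextDelta]


def stream_chat_completion(llm_calls: List[List[str]]) -> Iterator[StreamEvent]:
    """Wrap each LLM call's chunks in start/finish step events (sketch).

    The consumer can then render a visual boundary between consecutive
    LLM calls inside one assistant message.
    """
    for i, chunks in enumerate(llm_calls):
        yield StreamStartStep(step_index=i)
        for chunk in chunks:
            yield TextDelta(text=chunk)
        yield StreamFinishStep(step_index=i)
```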
090c576b3e | fix lint on backend
29ee85c86f | fix: add virus scanning to WorkspaceManager.write_file() (#11990)

## Summary

Adds virus scanning at the `WorkspaceManager.write_file()` layer for defense in depth.

## Problem

Previously, virus scanning was only performed at entry points:

- `store_media_file()` in `backend/util/file.py`
- `WriteWorkspaceFileTool` in `backend/api/features/chat/tools/workspace_files.py`

This created a trust boundary where any new caller of `WorkspaceManager.write_file()` would need to remember to scan first.

## Solution

Add a `scan_content_safe()` call directly in `WorkspaceManager.write_file()`, before persisting to storage. This ensures all content is scanned regardless of the caller.

## Changes

- Added an import for `scan_content_safe` from `backend.util.virus_scanner`
- Added the virus scan call after file size validation, before storage

## Testing

Existing tests should pass. The scan is a no-op in test environments where ClamAV isn't running.

Closes https://linear.app/autogpt/issue/OPEN-2993

---

> [!NOTE]
> **Medium Risk**
> Introduces a new required async scan step in the workspace write path, which can add latency or cause new failures if the scanner/ClamAV is misconfigured or unavailable.
>
> **Overview**
> Adds a **defense-in-depth** virus scan to `WorkspaceManager.write_file()` by invoking `scan_content_safe()` after file-size validation and before any storage/database persistence.
>
> This centralizes scanning so any caller writing workspace files gets the same malware check without relying on upstream entry points to remember to scan.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit</sup>
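The ordering the PR describes (size check first, then scan, then persist) can be sketched as below. `scan_content_safe` is stubbed here with a toy detection rule, and every other name is illustrative rather than the platform's actual code:

```python
import asyncio

MAX_FILE_SIZE_BYTES = 100 * 1024 * 1024  # assumed limit for illustration


class VirusDetectedError(Exception):
    pass


async def scan_content_safe(content: bytes, filename: str = "") -> None:
    """Stand-in for backend.util.virus_scanner.scan_content_safe (sketch).

    Raises when the content looks malicious; a real scanner would talk
    to ClamAV and be a no-op when the scanner is unavailable in tests.
    """
    if b"EICAR" in content:  # toy rule, illustration only
        raise VirusDetectedError(filename)


class WorkspaceManager:
    def __init__(self) -> None:
        self.storage: dict[str, bytes] = {}

    async def write_file(self, path: str, content: bytes) -> None:
        # 1. File size validation
        if len(content) > MAX_FILE_SIZE_BYTES:
            raise ValueError(f"{path}: file too large")
        # 2. Defense in depth: scan before any persistence, so every
        #    caller gets the same malware check automatically.
        await scan_content_safe(content, filename=path)
        # 3. Only then persist
        self.storage[path] = content
```

Placing the scan inside `write_file()` rather than at each call site is what removes the "new caller forgot to scan" failure mode.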
4b036bfe22 | feat(copilot): add loading state to chat components

- Introduced an `isLoadingSession` prop to manage loading states in `ChatContainer` and `ChatMessagesContainer`.
- Updated `useCopilotPage` to handle the session loading state and improve the user experience during session creation.
- Refactored the session management logic to streamline message hydration and session handling.
- Enhanced UI feedback with loading indicators while messages are being fetched or sessions are being created.
85b6520710 | feat(blocks): Add video editing blocks (#11796)

This PR adds general-purpose video editing blocks for the AutoGPT Platform, enabling automated video production workflows like documentary creation, marketing videos, tutorial assembly, and content repurposing.

### Changes 🏗️

**New blocks added in `backend/blocks/video/`:**

- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end time validation
- `VideoConcatBlock` - Merge multiple video clips with optional transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix it with the video audio (replace, mix, or ducking modes)

**Dependencies required:**

- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations

**Implementation details:**

- All blocks follow the SDK pattern with proper error handling and exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Blocks follow the SDK pattern with `BlockSchemaInput`/`BlockSchemaOutput`
  - [x] Resource cleanup is implemented in `finally` blocks
  - [x] Exception chaining is properly implemented
  - [x] Input validation is in place
  - [x] Test mocks are provided for CI environments

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

N/A - No configuration changes required.

---

> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces new external dependencies (plus container packages), which can impact runtime stability and resource usage; the download/overlay blocks are present but disabled due to sandbox/policy concerns.
>
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video workflow blocks (download, clip, concat w/ transitions, loop, add-audio, text overlay, and ElevenLabs-powered narration), including shared utilities for codec selection, filename cleanup, and an ffmpeg-based chapter-strip workaround for MoviePy.
>
> Extends credentials/config to support ElevenLabs (`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker runtime packages (`ffmpeg`, `imagemagick`).
>
> Improves file/reference handling end-to-end by embedding MIME types in `workspace://...#mime` outputs and updating frontend rendering to detect video vs image from MIME fragments (and broaden supported audio/video extensions), with optional enhanced output rendering behind a feature flag in the legacy builder UI.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit</sup>
bfa942e032 | feat(platform): Add Claude Opus 4.6 model support (#11983)

## Summary

Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes

- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input, $25/MTok output)
- Updated the chat config default model to Claude Opus 4.6

## Model Details

From [Anthropic's docs](https://docs.anthropic.com/en/docs/about-claude/models):

- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
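The three registration points the PR lists (enum member, metadata, cost config) can be sketched as below. This is a heavily trimmed illustration using the PR's figures; the real enum and config tables contain many more models and fields:

```python
from enum import Enum


class LlmModel(str, Enum):
    """Trimmed sketch; the platform's enum has many more members."""
    CLAUDE_OPUS_4_5 = "claude-opus-4-5"
    CLAUDE_OPUS_4_6 = "claude-opus-4-6"


# Cost per million tokens (USD), mirroring the PR: same tier as Opus 4.5.
MODEL_COST = {
    LlmModel.CLAUDE_OPUS_4_5: {"input": 5, "output": 25},
    LlmModel.CLAUDE_OPUS_4_6: {"input": 5, "output": 25},
}

# Metadata from the PR's model details.
MODEL_METADATA = {
    LlmModel.CLAUDE_OPUS_4_6: {
        "context_window": 200_000,      # 1M available in beta
        "max_output_tokens": 128_000,   # up from 64K on Opus 4.5
    },
}
```

Making the enum a `str` subclass lets the raw API ID (`"claude-opus-4-6"`) round-trip through JSON configs and look up the enum member directly.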
11256076d8 | fix(frontend): Rename "Tasks" tab to "Agents" in navbar (#11982)

## Summary

Renames the "Tasks" tab in the navbar to "Agents" per the Figma design.

## Changes

- `Navbar.tsx`: Changed the label from "Tasks" to "Agents"

<img width="1069" height="153" alt="image" src="https://github.com/user-attachments/assets/3869d2a2-9bd9-4346-b650-15dabbdb46c4" />

## Why

- "Tasks" was incorrectly named and confusing for users trying to find their agent builds
- Matches the Figma design

## Linear Ticket

Fixes [SECRT-1894](https://linear.app/autogpt/issue/SECRT-1894)

## Related

- [SECRT-1865](https://linear.app/autogpt/issue/SECRT-1865) - Find and Manage Existing/Unpublished or Recent Agent Builds Is Unintuitive
3ca2387631 | feat(blocks): Implement Text Encode block (#11857)

## Summary

Implements a `TextEncoderBlock` that encodes plain text into escape sequences (the reverse of `TextDecoderBlock`).

## Changes

### Block Implementation

- Added `encoder_block.py` with `TextEncoderBlock` in `autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for encoding
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`

### Documentation

- Added a Text Encoder section to `docs/integrations/block-integrations/text.md` (the auto-generated docs file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method, validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in `docs/integrations/README.md`
- Removed the standalone `encoder_block.md` (TEXT category blocks belong in `text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)

### Documentation Formatting (CodeRabbit feedback)

- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured the use case section with bold headings per coding guidelines

## How Docs Were Synced

The `check-docs-sync` CI job runs `poetry run python scripts/generate_block_docs.py --check`, which expects blocks to be documented in category-grouped files. Since `TextEncoderBlock` uses `BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md`, not a standalone file. The block entry was added to `text.md` following the exact format used by the generator (with `<!-- MANUAL -->` markers for hand-written sections).

## Related Issue

Fixes #11111

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
||
|
|
62edd73020 | chore: further fixes | ||
|
|
5a878e0af0 | chore: update styles + add mobile drawer | ||
|
|
ed07f02738 |
fix(copilot): edit_agent updates existing agent instead of creating duplicate (#11981)
## Summary
When editing an agent via CoPilot's `edit_agent` tool, the code was always creating a new `LibraryAgent` entry instead of updating the existing one to point to the new graph version. This caused duplicate agents to appear in the user's library.
## Changes
In `save_agent_to_library()`:
- When `is_update=True`, now checks if there's an existing library agent for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found (e.g., if editing a graph that wasn't added to library yet)
## Testing
- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent
## Related
Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
|
||
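The update-or-create flow described in the commit above can be sketched as follows. The function and argument names mirror those in the commit message, but the in-memory `library` dict is a stand-in for the real database helpers, so this is illustrative only:

```python
# Minimal sketch of the update-or-create fix: an edit points the existing
# library entry at the new graph version instead of inserting a duplicate.
# The in-memory dict stands in for the real LibraryAgent table.
library: dict[str, dict] = {}  # graph_id -> library agent record


def save_agent_to_library(graph_id: str, new_version: int, is_update: bool) -> dict:
    if is_update and graph_id in library:
        # Existing library agent found: update it to point at the new version
        library[graph_id]["version"] = new_version
        return library[graph_id]
    # Fall back to creating a new entry (e.g. graph not in the library yet)
    library[graph_id] = {"graph_id": graph_id, "version": new_version}
    return library[graph_id]
```

With this shape, editing the same graph twice leaves exactly one library entry, which is the behavior the fix restores.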
|
|
321733360f | chore: refactor hook | ||
|
|
1f2fc1ba6f | Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui | ||
|
|
b121030c94 |
feat(frontend): Add progress indicator during agent generation [SECRT-1883] (#11974)
## Summary
- Add asymptotic progress bar that appears during long-running chat tasks
- Progress bar shows after 10 seconds with "Working on it..." label and percentage
- Uses half-life formula: ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc.
- Creates the classic "game loading bar" effect that never reaches 100%
https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f
## Test plan
- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes
---------
Co-authored-by: Otto <otto@agpt.co>
|
||
|
|
c22c18374d |
feat(frontend): Add ready-to-test prompt after agent creation [SECRT-1882] (#11975)
## Summary
- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
  - **Run with example values**: Sends chat message asking AI to run with placeholders
  - **Run with my inputs**: Opens RunAgentModal for custom input configuration
- After run/schedule, automatically send chat message with execution details for AI monitoring
https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161
## Test plan
- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is sent
- [x] Fill inputs and schedule - verify chat message with schedule details is sent
---------
Co-authored-by: Otto <otto@agpt.co>
|
||
|
|
e40233a3ac |
fix(backend/chat): Guide find_agent users toward action with CTAs (#11976)
When users search for agents, guide them toward creating custom agents if no results are found or after showing results. This improves user engagement by offering a clear next step.
### Changes 🏗️
- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on their needs
- Applied to both "no results found" and "agents found" scenarios
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents in marketplace with matching results
  - [x] Search for agents in marketplace with no results
  - [x] Search for agents in library with matching results
  - [x] Search for agents in library with no results
  - [x] Verify CTA message appears in all cases
---------
Co-authored-by: Otto <otto@agpt.co>
|
||
|
|
3ae5eabf9d |
fix(backend/chat): Use latest prompt label in non-production environments (#11977)
In non-production environments, the chat service now fetches prompts with the `latest` label instead of the default production-labeled prompt. This makes it easier to test and iterate on prompt changes in dev/staging without needing to promote them to production first.
### Changes 🏗️
- Updated `_get_system_prompt_template()` in chat service to pass `label="latest"` when `app_env` is not `PRODUCTION`
- Production environments continue using the default behavior (production-labeled prompts)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that in non-production environments, prompts with `latest` label are fetched
  - [x] Verified that production environments still use the default (production) labeled prompts
Co-authored-by: Otto <otto@agpt.co>
|
||
|
|
a077ba9f03 |
fix(platform): YouTube block yields only error on failure (#11980)
## Summary
Fixes [SECRT-1889](https://linear.app/autogpt/issue/SECRT-1889): The YouTube transcription block was yielding both `video_id` and `error` when the transcript fetch failed.
## Problem
The block yielded `video_id` immediately upon extracting it from the URL, before attempting to fetch the transcript. If the transcript fetch failed, both outputs were present.
```python
# Before
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id  # ← Yielded before transcript attempt
transcript = self.get_transcript(video_id, credentials)  # ← Could fail here
```
## Solution
Wrap the entire operation in try/except and only yield outputs after all operations succeed:
```python
# After
try:
    video_id = self.extract_video_id(input_data.youtube_url)
    transcript = self.get_transcript(video_id, credentials)
    transcript_text = self.format_transcript(transcript=transcript)
    # Only yield after all operations succeed
    yield "video_id", video_id
    yield "transcript", transcript_text
except Exception as e:
    yield "error", str(e)
```
This follows the established pattern in other blocks (e.g., `ai_image_generator_block.py`).
## Testing
- All 10 unit tests pass (`test/blocks/test_youtube.py`)
- Lint/format checks pass
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
|
||
|
|
5401d54eaa |
fix(backend): Handle StreamHeartbeat in CoPilot stream handler (#11928)
### Changes 🏗️
Fixes **AUTOGPT-SERVER-7JA** (123 events since Jan 27, 2026).
#### Problem
`StreamHeartbeat` was added to keep SSE connections alive during long-running tool executions (yielded every 15s while waiting). However, the main `stream_chat_completion` handler's `elif` chain didn't have a case for it:
```
StreamTextStart → ✅ handled
StreamTextDelta → ✅ handled
StreamTextEnd → ✅ handled
StreamToolInputStart → ✅ handled
StreamToolInputAvailable → ✅ handled
StreamToolOutputAvailable → ✅ handled
StreamFinish → ✅ handled
StreamError → ✅ handled
StreamUsage → ✅ handled
StreamHeartbeat → ❌ fell through to 'Unknown chunk type' error
```
This meant every heartbeat during tool execution generated a Sentry error instead of keeping the connection alive.
#### Fix
Add `StreamHeartbeat` to the `elif` chain and yield it through. The route handler already calls `to_sse()` on all yielded chunks, and `StreamHeartbeat.to_sse()` correctly returns `: heartbeat\n\n` (SSE comment format, ignored by clients but keeps proxies/load balancers happy).
**1 file changed, 3 insertions.**
|
||
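The fix described above amounts to one extra branch in the dispatch chain. A simplified, self-contained sketch (the chunk class names follow the commit, but the real handler is an async generator with many more cases, so this is illustrative only):

```python
# Simplified sketch of the heartbeat fix: pass StreamHeartbeat through
# instead of letting it fall into the "Unknown chunk type" branch.
class StreamTextDelta: ...


class StreamHeartbeat:
    def to_sse(self) -> str:
        # SSE comment line: ignored by clients, but keeps the connection warm
        return ": heartbeat\n\n"


def handle(chunk):
    if isinstance(chunk, StreamTextDelta):
        return "text"
    elif isinstance(chunk, StreamHeartbeat):
        return chunk.to_sse()  # the added case: yield the heartbeat onward
    else:
        raise ValueError("Unknown chunk type")
```

A line starting with `:` is a comment in the SSE wire format, which is why forwarding it is safe for every client.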
|
|
5ac89d7c0b |
fix(test): fix timing bug in test_block_credit_reset (#11978)
## Summary
Fixes the flaky `test_block_credit_reset` test that was failing on multiple PRs with `assert 0 == 1000`.
## Root Cause
The test calls `disable_test_user_transactions()` which sets `updatedAt` to 35 days ago from the **actual current time**. It then mocks `time_now` to January 1st.
**The bug**: If the test runs in early February, 35 days ago is January — the **same month** as the mocked `time_now`. The credit refill logic only triggers when the balance snapshot is from a *different* month, so no refill happens and the balance stays at 0.
## Fix
After calling `disable_test_user_transactions()`, explicitly set `updatedAt` to December of the previous year. This ensures it's always in a different month than the mocked `month1` (January), regardless of when the test runs.
## Testing
CI will verify the fix.
|
||
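The date arithmetic behind the fix above is small but easy to get wrong. The real test uses Prisma and a mocked clock; this standalone sketch shows only the month-boundary guarantee:

```python
from datetime import datetime


# Sketch of the fix: anchor the balance snapshot in December of the year
# *before* the mocked "now" (January 1st), so the snapshot month always
# differs from the mocked month, no matter what the real calendar date is.
def snapshot_updated_at(mocked_now: datetime) -> datetime:
    return datetime(mocked_now.year - 1, 12, 1)


mocked_now = datetime(2026, 1, 1)
snapshot = snapshot_updated_at(mocked_now)
```

This removes the dependency on the real wall-clock date that made the original "35 days ago" computation flaky.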
|
|
3805995b09 | Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui | ||
|
|
e317a9c18a |
feat(chat): Add tool response schema endpoint for OpenAPI code generation
- Introduced a new endpoint `/api/chat/schema/tool-responses` to expose tool response models for frontend code generation.
- Defined a `ToolResponseUnion` type that aggregates various response models, enhancing type safety and clarity in API responses.
- Updated OpenAPI schema to include detailed descriptions and response structures for the new endpoint.
- Added `AgentDetailsResponse` and other related schemas to improve agent information handling.
|
||
|
|
4f908d5cb3 |
fix(platform): Improve Linear Search Block [SECRT-1880] (#11967)
## Summary
Implements [SECRT-1880](https://linear.app/autogpt/issue/SECRT-1880) - Improve Linear Search Block
## Changes
### Models (`models.py`)
- Added `State` model with `id`, `name`, and `type` fields for workflow state information
- Added `state: State | None` field to `Issue` model
### API Client (`_api.py`)
- Updated `try_search_issues()` to:
  - Add `max_results` parameter (default 10, was ~50) to reduce token usage
  - Add `team_id` parameter for team filtering
  - Return `createdAt`, `state`, `project`, and `assignee` fields in results
- Fixed `try_get_team_by_name()` to return descriptive error message when team not found instead of crashing with `IndexError`
### Block (`issues.py`)
- Added `max_results` input parameter (1-100, default 10)
- Added `team_name` input parameter for optional team filtering
- Added `error` output field for graceful error handling
- Added categories (`PRODUCTIVITY`, `ISSUE_TRACKING`)
- Updated test fixtures to include new fields
## Breaking Changes
| Change | Before | After | Mitigation |
|--------|--------|-------|------------|
| Default result count | ~50 | 10 | Users can set `max_results` up to 100 if needed |
## Non-Breaking Changes
- `state` field added to `Issue` (optional, defaults to `None`)
- `max_results` param added (has default value)
- `team_name` param added (optional, defaults to `None`)
- `error` output added (follows established pattern from GitHub blocks)
## Testing
- [x] Format/lint checks pass
- [x] Unit test fixtures updated
Resolves SECRT-1880
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
|
||
|
|
c1aa684743 |
fix(platform/chat): Filter host-scoped credentials for run_agent tool (#11905)
- Fixes [SECRT-1851: \[Copilot\] `run_agent` tool doesn't filter host-scoped credentials](https://linear.app/autogpt/issue/SECRT-1851)
- Follow-up to #11881
### Changes 🏗️
- Filter host-scoped credentials for `run_agent` tool
- Tighten validation on host input field in `HostScopedCredentialsModal`
- Use netloc (w/ port) rather than just hostname (w/o port) as host scope
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Create graph that requires host-scoped credentials to work
  - Create host-scoped credentials with a *different* host
  - Try to have Copilot run the graph
    - [x] -> no matching credentials available
  - Create new credentials
    - [x] -> works
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
|
||
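The netloc-vs-hostname distinction called out above is exactly what `urllib.parse` exposes: `netloc` keeps the port, `hostname` drops it. The real fix lives in the platform's credential-matching code; this sketch just demonstrates why the switch matters:

```python
from urllib.parse import urlparse

# netloc includes the port, hostname does not. Scoping credentials by
# netloc means a secret bound to "example.com:8080" is not offered to a
# request targeting "example.com:9000" on the same host.
u = urlparse("https://example.com:8080/api/v1")
```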
|
|
b45e1bc79c |
feat(chat): Add SSE format conversion method to StreamStart response model
- Implemented `to_sse` method in `StreamStart` class to convert response data into SSE format, excluding non-protocol fields.
- Removed redundant inputId declaration in ChatInput component for cleaner code.
|
||
|
|
7e5b84cc5c |
fix(copilot): update homepage copy to focus on problem discovery (#11956)
## Summary
Update the CoPilot homepage to shift from "what do you want to automate?" to "tell me about your problems." This lowers the barrier to engagement by letting users describe their work frustrations instead of requiring them to identify automations themselves.
## Changes
| Element | Before | After |
|---------|--------|-------|
| Headline | "What do you want to automate?" | "Tell me about your work — I'll find what to automate." |
| Placeholder | "You can search or just ask - e.g. 'create a blog post outline'" | "What's your role and what eats up most of your day? e.g. 'I'm a real estate agent and I hate...'" |
| Button 1 | "Show me what I can automate" | "I don't know where to start, just ask me stuff" |
| Button 2 | "Design a custom workflow" | "I do the same thing every week and it's killing me" |
| Button 3 | "Help me with content creation" | "Help me find where I'm wasting my time" |
| Container | max-w-2xl | max-w-3xl |
> **Note on container width:** The `max-w-2xl` → `max-w-3xl` change is just to keep the longer headline on one line. This works but may not be the ideal solution — @lluis-xai should advise on the proper approach.
## Why This Matters
The current UX assumes users know what they want to automate. In reality, most users know what frustrates them but can't identify automations. The current screen blocks Otto from starting the discovery conversation that leads to useful recommendations.
## Files Changed
- `autogpt_platform/frontend/src/app/(platform)/copilot/page.tsx` — headline, placeholder, container width
- `autogpt_platform/frontend/src/app/(platform)/copilot/helpers.ts` — quick action button text
Resolves: [SECRT-1876](https://linear.app/autogpt/issue/SECRT-1876)
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
autogpt-platform-beta-v0.6.46
|
||
|
|
09cb313211 |
fix(frontend): Prevent reflected XSS in OAuth callback route (#11963)
## Summary
Fixes a reflected cross-site scripting (XSS) vulnerability in the OAuth callback route.
**Security Issue:** https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/202
### Vulnerability
The OAuth callback route at `frontend/src/app/(platform)/auth/integrations/oauth_callback/route.ts` was writing user-controlled data directly into an HTML response without proper sanitization. This allowed potential attackers to inject malicious scripts via OAuth callback parameters.
### Fix
Added a `safeJsonStringify()` function that escapes characters that could break out of the script context:
- `<` → `\u003c`
- `>` → `\u003e`
- `&` → `\u0026`
This prevents any user-provided values from being interpreted as HTML/script content when embedded in the response.
### References
- [OWASP XSS Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
- [CWE-79: Improper Neutralization of Input During Web Page Generation](https://cwe.mitre.org/data/definitions/79.html)
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the OAuth callback still functions correctly
  - [x] Confirmed special characters in OAuth responses are properly escaped
|
||
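The actual `safeJsonStringify()` is TypeScript; an illustrative Python port of the same escaping idea (this is an assumption-labeled sketch, not the frontend code) shows why the three characters above are enough to keep serialized JSON inert inside a `<script>` context:

```python
import json


# Illustrative port of the commit's escaping: after serializing, replace
# the characters that could terminate the surrounding <script> element.
# The \u00XX escapes are valid JSON string escapes, so parsing round-trips.
def safe_json_stringify(value) -> str:
    return (
        json.dumps(value)
        .replace("&", "\\u0026")  # & first, so later escapes aren't re-escaped
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
    )


out = safe_json_stringify({"redirect": "</script><script>alert(1)</script>"})
```

Because the escapes are ordinary JSON string escapes, any standard JSON parser recovers the original value unchanged.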
|
|
6fce1f6084 | Enhance chat session management in copilot-2 by implementing session creation and hydration logic. Refactor ChatContainer and EmptySession components to streamline user interactions and improve UI responsiveness. Update ChatInput to handle message sending with loading states, ensuring a smoother user experience. | ||
|
|
c026485023 |
feat(frontend): Disable auto-opening wallet (#11961)
### Changes 🏗️
- Disable auto-opening Wallet for first time user and on credit increase
- Remove no longer needed `lastSeenCredits` state and storage
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Wallet doesn't open automatically
|
||
|
|
df21b96fed | Merge branch 'dev' into abhi/check-ai-sdk-ui | ||
|
|
2502fd6391 | Refactor tools in copilot-2 to utilize generated response types for improved type safety and clarity. Updated FindBlocks, FindAgents, CreateAgent, EditAgent, and RunAgent tools to leverage new API response models, enhancing maintainability and reducing redundancy in output handling. | ||
|
|
1eabc60484 |
Merge commit from fork
Fixes GHSA-rc89-6g7g-v5v7 / CVE-2026-22038.
The `logger.info()` calls were explicitly logging API keys via `get_secret_value()`, exposing credentials in plaintext logs.
Changes:
- Replace info-level credential logging with debug-level provider logging
- Remove all explicit secret value logging from observe/act/extract blocks
Co-authored-by: Otto <otto@agpt.co>
|
||
|
|
f4bf492f24 |
feat(platform): Add Redis-based SSE reconnection for long-running CoPilot operations (#11877)
## Changes 🏗️
Adds Redis-based SSE reconnection support for long-running CoPilot
operations (like Agent Generator), enabling clients to reconnect and
resume receiving updates after disconnection.
### What this does:
- **Stream Registry** - Redis-backed task tracking with message
persistence via Redis Streams
- **SSE Reconnection** - Clients can reconnect to active tasks using
`task_id` and `last_message_id`
- **Duplicate Message Fix** - Filters out in-progress assistant messages
from session response when active stream exists
- **Completion Consumer** - Handles background task completion
notifications via Redis Streams
### Architecture:
```
1. User sends message → Backend creates task in Redis
2. SSE chunks written to Redis Stream for persistence
3. Client receives chunks via SSE subscription
4. If client disconnects → Task continues in background
5. Client reconnects → GET /sessions/{id} returns active_stream info
6. Client subscribes to /tasks/{task_id}/stream with last_message_id
7. Missed messages replayed from Redis Stream
```
### Key endpoints:
- `GET /sessions/{session_id}` - Returns `active_stream` info if task is
running
- `GET /tasks/{task_id}/stream?last_message_id=X` - SSE endpoint for
reconnection
- `GET /tasks/{task_id}` - Get task status
- `POST /operations/{op_id}/complete` - Webhook for external service
completion
### Duplicate message fix:
When `GET /sessions/{id}` detects an active stream:
1. Filters out the in-progress assistant message from response
2. Returns `last_message_id="0-0"` so client replays stream from
beginning
3. Client receives complete response only through SSE (single source of
truth)
### Frontend changes:
- Task persistence in localStorage for cross-tab reconnection
- Stream event dispatcher handles reconnection flow
- Deduplication logic prevents duplicate messages
### Testing:
- Manual testing of reconnection scenarios
- Verified duplicate message fix works correctly
## Related
- Resolves SSE timeout issues for Agent Generator
- Fixes duplicate message bug on reconnection
|
||
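The replay step in the reconnection flow above (steps 6-7) hinges on Redis Stream entry IDs, which have the form `<ms>-<seq>` and order numerically per component; this is why `last_message_id="0-0"` replays from the beginning. A pure-Python stand-in for the replay filter, with an in-memory list in place of the actual Redis Stream:

```python
# Stand-in for the Redis Stream replay on reconnect: return every
# persisted chunk whose stream ID is strictly *after* last_message_id.
def _parse(stream_id: str) -> tuple[int, int]:
    ms, seq = stream_id.split("-")
    return int(ms), int(seq)  # tuples compare lexicographically, like Redis IDs


def replay(entries: list[tuple[str, str]], last_message_id: str) -> list[str]:
    cutoff = _parse(last_message_id)
    return [chunk for sid, chunk in entries if _parse(sid) > cutoff]


# In-memory stand-in for the persisted SSE chunks of one task
stream = [("100-0", "hello"), ("100-1", " wor"), ("200-0", "ld")]
```

In the real service the same comparison is done by Redis itself (`XREAD` with a starting ID); the sketch just makes the ordering semantics explicit.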
|
|
81e48c00a4 |
feat(copilot): add customize_agent tool for marketplace templates (#11943)
## Summary
Adds a new copilot tool that allows users to customize
marketplace/template agents using natural language before adding them to
their library.
This exposes the Agent Generator's `/api/template-modification` endpoint
to the copilot, which was previously not available.
## Changes
- **service.py**: Add `customize_template_external` to call Agent
Generator's template modification endpoint
- **core.py**:
- Add `customize_template` wrapper function
- Extract `graph_to_json` as a reusable function (was previously inline
in `get_agent_as_json`)
- **customize_agent.py**: New tool that:
- Takes marketplace agent ID (format: `creator/slug`)
- Fetches template from store via `store_db.get_agent()`
- Calls Agent Generator for customization
- Handles clarifying questions from the generator
- Saves customized agent to user's library
- **__init__.py**: Register the tool in `TOOL_REGISTRY` for
auto-discovery
## Usage Flow
1. User searches marketplace: *"Find me a newsletter agent"*
2. Copilot calls `find_agent` → returns `autogpt/newsletter-writer`
3. User: *"Customize that agent to post to Discord instead of email"*
4. Copilot calls:
```
customize_agent(
agent_id="autogpt/newsletter-writer",
modifications="Post to Discord instead of sending email"
)
```
5. Agent Generator may ask clarifying questions (e.g., "What Discord
channel?")
6. Customized agent is saved to user's library
## Test plan
- [x] Verified tool imports correctly
- [x] Verified tool is registered in `TOOL_REGISTRY`
- [x] Verified OpenAI function schema is valid
- [x] Ran existing tests (`pytest backend/api/features/chat/tools/`) -
all pass
- [x] Type checker (`pyright`) passes with 0 errors
- [ ] Manual testing with copilot (requires Agent Generator service)
|
||
|
|
7dc53071e8 |
fix(backend): Add retry and error handling to block initialization (#11946)
## Summary
Adds retry logic and graceful error handling to `initialize_blocks()` to prevent transient DB errors from crashing server startup.
## Problem
When a transient database error occurs during block initialization (e.g., Prisma P1017 "Server has closed the connection"), the entire server fails to start. This is overly aggressive since:
1. Blocks are already registered in memory
2. The DB sync is primarily for tracking/schema storage
3. One flaky connection shouldn't prevent the server from starting
**Triggered by:** [Sentry AUTOGPT-SERVER-7PW](https://significant-gravitas.sentry.io/issues/7238733543/)
## Solution
- Add retry decorator (3 attempts with exponential backoff) for DB operations
- On failure after retries, log a warning and continue to the next block
- Blocks remain available in memory even if DB sync fails
- Log summary of any failed blocks at the end
## Changes
- `autogpt_platform/backend/backend/data/block.py`: Wrap block DB sync in retry logic with graceful fallback
## Testing
- Existing block initialization behavior unchanged on success
- On transient DB errors: retries up to 3 times, then continues with warning
|
||
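The retry-then-continue behavior described above can be sketched in a few lines. All names here are illustrative (the real code wraps Prisma calls with a decorator); the point is the shape: exponential backoff per block, and a failed block is recorded rather than allowed to abort startup:

```python
import time


# Sketch: retry a flaky per-block DB sync up to `attempts` times with
# exponential backoff, then record the failure and move on so that one
# bad block cannot take down server startup.
def sync_blocks(blocks, sync_one, attempts: int = 3, base_delay: float = 0.01):
    failed = []
    for block in blocks:
        for attempt in range(attempts):
            try:
                sync_one(block)
                break  # success: next block
            except Exception:
                if attempt == attempts - 1:
                    failed.append(block)  # give up: block stays in-memory only
                else:
                    time.sleep(base_delay * 2**attempt)  # 10ms, 20ms, ...
    return failed  # logged as a warning summary in the real implementation
```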
|
|
4878665c66 | Merge branch 'master' into dev |