Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-02-07 05:15:09 -05:00)
dfd34a7ad1fd966286f45daf75fb26d5e65705e6
1869 Commits

dfd34a7ad1
chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 3 updates
Bumps the development-dependencies group with 3 updates in the /autogpt_platform/backend directory: [poethepoet](https://github.com/nat-n/poethepoet), [pytest-watcher](https://github.com/olzhasar/pytest-watcher) and [ruff](https://github.com/astral-sh/ruff).

- Updates `poethepoet` from 0.37.0 to 0.40.0 ([release notes](https://github.com/nat-n/poethepoet/releases), [commits](https://github.com/nat-n/poethepoet/compare/v0.37.0...v0.40.0))
- Updates `pytest-watcher` from 0.4.3 to 0.6.3 ([release notes](https://github.com/olzhasar/pytest-watcher/releases), [changelog](https://github.com/olzhasar/pytest-watcher/blob/master/CHANGELOG.md), [commits](https://github.com/olzhasar/pytest-watcher/compare/v0.4.3...v0.6.3))
- Updates `ruff` from 0.14.14 to 0.15.0 ([release notes](https://github.com/astral-sh/ruff/releases), [changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md), [commits](https://github.com/astral-sh/ruff/compare/0.14.14...0.15.0))

All three are direct development dependencies in the development-dependencies group; each bump is a semver-minor version update.

Signed-off-by: dependabot[bot] <support@github.com>

cd64562e1b
chore(libs/deps): bump the production-dependencies group across 1 directory with 8 updates (#11934)
Bumps the production-dependencies group with 8 updates in the /autogpt_platform/autogpt_libs directory:

| Package | From | To |
| --- | --- | --- |
| [fastapi](https://github.com/fastapi/fastapi) | `0.116.1` | `0.128.0` |
| [google-cloud-logging](https://github.com/googleapis/python-logging) | `3.12.1` | `3.13.0` |
| [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) | `9.12.0` | `9.14.1` |
| [pydantic](https://github.com/pydantic/pydantic) | `2.11.7` | `2.12.5` |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | `2.10.1` | `2.12.0` |
| [pyjwt](https://github.com/jpadilla/pyjwt) | `2.10.1` | `2.11.0` |
| [supabase](https://github.com/supabase/supabase-py) | `2.16.0` | `2.27.2` |
| [uvicorn](https://github.com/Kludex/uvicorn) | `0.35.0` | `0.40.0` |

Updates `fastapi` from 0.116.1 to 0.128.0. Notable changes from [fastapi's releases](https://github.com/fastapi/fastapi/releases): 0.126.0 drops support for Pydantic v1 (keeping short temporary support for Pydantic v2's `pydantic.v1`), 0.127.0 adds deprecation warnings when using `pydantic.v1`, and 0.128.0 drops support for `pydantic.v1` entirely.

8fddc9d71f
fix(backend): Reduce GET /api/graphs expense + latency (#11986)
[SECRT-1896: Fix crazy `GET /api/graphs` latency (P95 = 107s)](https://linear.app/autogpt/issue/SECRT-1896)

These changes should decrease the latency of this endpoint by ~~60-65%~~ a lot.

### Changes 🏗️

- Make `Graph.credentials_input_schema` cheaper by avoiding constructing a new `BlockSchema` subclass
- Strip down `GraphMeta`: drop all computed fields
- Replace with either `GraphModel` or `GraphModelWithoutNodes` wherever those computed fields are used
- Simplify usage in `list_graphs_paginated` and `fetch_graph_from_store_slug`
- Refactor and clarify relationships between the different graph models
- Split `BaseGraph` into `GraphBaseMeta` + `BaseGraph`
- Strip down `Graph`: move `credentials_input_schema` and `aggregate_credentials_inputs` to `GraphModel`
- Refactor to eliminate the double `aggregate_credentials_inputs()` call in the `credentials_input_schema` call tree
- Add `GraphModelWithoutNodes` (similar to the current `GraphMeta`)

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `GET /api/graphs` works as it should
  - [x] Running a graph succeeds
  - [x] Adding a sub-agent in the Builder works as it should

e7ebe42306
fix(frontend): Revert ThinkingMessage progress bar delay to original values (#11993)

e0fab7e34e
fix(frontend): Improve clarification answer message formatting (#11985)
## Summary

Improves the auto-generated message format when users submit clarification answers in the agent generator.

## Before

```
I have the answers to your questions:
keyword_1: User answer 1
keyword_2: User answer 2
Please proceed with creating the agent.
```

<img width="748" height="153" alt="image" src="https://github.com/user-attachments/assets/7231aaab-8ea4-406b-ba31-fa2b6055b82d" />

## After

```
**Here are my answers:**

> What is the primary purpose?

User answer 1

> What is the target audience?

User answer 2

Please proceed with creating the agent.
```

<img width="619" height="352" alt="image" src="https://github.com/user-attachments/assets/ef8c1fbf-fb60-4488-b51f-407c1b9e3e44" />

## Changes

- Use human-readable question text instead of machine-readable keywords
- Use blockquote format for questions (natural "quote and reply" pattern)
- Use double newlines for proper Markdown paragraph breaks
- Iterate over `message.questions` array to preserve original question order
- Move handler inside conditional block for proper TypeScript type narrowing

## Why

- The old format was ugly and hard to read (raw keywords, no line breaks)
- The new format uses a natural "quoting and replying" pattern
- Better readability for both users and the LLM (verified: backend does NOT parse keywords)

## Linear Ticket

Fixes [SECRT-1822](https://linear.app/autogpt/issue/SECRT-1822)

## Testing

- [ ] Trigger agent creation that requires clarifying questions
- [ ] Fill out the form and submit
- [ ] Verify message appears with new blockquote format
- [ ] Verify questions appear in original order
- [ ] Verify agent generation proceeds correctly

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>

29ee85c86f
fix: add virus scanning to WorkspaceManager.write_file() (#11990)
## Summary
Adds virus scanning at the `WorkspaceManager.write_file()` layer for
defense in depth.
## Problem
Previously, virus scanning was only performed at entry points:
- `store_media_file()` in `backend/util/file.py`
- `WriteWorkspaceFileTool` in
`backend/api/features/chat/tools/workspace_files.py`
This created a trust boundary where any new caller of
`WorkspaceManager.write_file()` would need to remember to scan first.
## Solution
Add a `scan_content_safe()` call directly in
`WorkspaceManager.write_file()` before persisting to storage. This
ensures all content is scanned regardless of the caller.
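
A minimal sketch of this scan-before-persist ordering, with the scanner and storage calls stubbed out (the real `scan_content_safe` lives in `backend.util.virus_scanner`; the signatures, size limit, and detection rule below are placeholders):

```python
class VirusDetectedError(Exception):
    pass

async def scan_content_safe(content: bytes) -> None:
    # Stand-in for backend.util.virus_scanner.scan_content_safe; the real
    # implementation calls ClamAV and is effectively a no-op when the
    # scanner isn't running (as in test environments).
    if b"EICAR" in content:  # placeholder detection rule
        raise VirusDetectedError("malicious content rejected")

class WorkspaceManager:
    MAX_FILE_SIZE = 10 * 1024 * 1024  # assumed limit, for illustration

    async def write_file(self, path: str, content: bytes) -> None:
        if len(content) > self.MAX_FILE_SIZE:   # existing size validation
            raise ValueError("file exceeds size limit")
        await scan_content_safe(content)         # new: scan before any persistence
        await self._persist(path, content)

    async def _persist(self, path: str, content: bytes) -> None:
        ...  # storage/database write
```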
## Changes
- Added import for `scan_content_safe` from `backend.util.virus_scanner`
- Added virus scan call after file size validation, before storage
## Testing
Existing tests should pass. The scan is a no-op in test environments
where ClamAV isn't running.
Closes https://linear.app/autogpt/issue/OPEN-2993
---
> [!NOTE]
> **Medium Risk**
> Introduces a new required async scan step in the workspace write path,
which can add latency or cause new failures if the scanner/ClamAV is
misconfigured or unavailable.
>
> **Overview**
> Adds a **defense-in-depth** virus scan to
`WorkspaceManager.write_file()` by invoking `scan_content_safe()` after
file-size validation and before any storage/database persistence.
>
> This centralizes scanning so any caller writing workspace files gets
the same malware check without relying on upstream entry points to
remember to scan.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit</sup>

85b6520710
feat(blocks): Add video editing blocks (#11796)
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.
### Changes 🏗️
**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)
**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations
**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI
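
The clip-extraction step is the simplest to picture; a sketch assuming the MoviePy 1.x API (`VideoFileClip.subclip`), with the PR's input validation and `finally`-based cleanup:

```python
from moviepy.editor import VideoFileClip

def extract_clip(src: str, dst: str, start_time: float, end_time: float) -> str:
    # Input validation noted above: the segment must have positive length
    if end_time <= start_time:
        raise ValueError("end_time must be greater than start_time")
    clip = VideoFileClip(src)
    try:
        clip.subclip(start_time, end_time).write_videofile(dst, logger=None)
    finally:
        clip.close()  # release file handles/subprocesses even on failure
    return dst
```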
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
- [x] Resource cleanup is implemented in `finally` blocks
- [x] Exception chaining is properly implemented
- [x] Input validation is in place
- [x] Test mocks are provided for CI environments
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
N/A - No configuration changes required.
---
> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
>
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
>
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
>
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit</sup>

bfa942e032
feat(platform): Add Claude Opus 4.6 model support (#11983)
## Summary

Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes

- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input, $25/MTok output)
- Updated chat config default model to Claude Opus 4.6

## Model Details

From [Anthropic's docs](https://docs.anthropic.com/en/docs/about-claude/models):

- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>

11256076d8
fix(frontend): Rename "Tasks" tab to "Agents" in navbar (#11982)
## Summary

Renames the "Tasks" tab in the navbar to "Agents" per the Figma design.

## Changes

- `Navbar.tsx`: Changed label from "Tasks" to "Agents"

<img width="1069" height="153" alt="image" src="https://github.com/user-attachments/assets/3869d2a2-9bd9-4346-b650-15dabbdb46c4" />

## Why

- "Tasks" was incorrectly named and confusing for users trying to find their agent builds
- Matches the Figma design

## Linear Ticket

Fixes [SECRT-1894](https://linear.app/autogpt/issue/SECRT-1894)

## Related

- [SECRT-1865](https://linear.app/autogpt/issue/SECRT-1865) - Find and Manage Existing/Unpublished or Recent Agent Builds Is Unintuitive

3ca2387631
feat(blocks): Implement Text Encode block (#11857)
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).
## Changes
### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for
encoding (see the example below)
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
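
The encoding method is plain standard-library behavior; a quick round-trip shows what the block produces and how the decoder reverses it:

```python
import codecs

text = 'Line one\nTab:\there'
encoded = codecs.encode(text, "unicode_escape").decode("utf-8")
print(encoded)  # Line one\nTab:\there  (the escapes become literal text)

# The decoder block's operation is the inverse:
assert codecs.decode(encoded, "unicode_escape") == text
```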
### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)
### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines
## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python
scripts/generate_block_docs.py --check` which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).
## Related Issue
Fixes #11111
---------
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>

ed07f02738
fix(copilot): edit_agent updates existing agent instead of creating duplicate (#11981)
## Summary

When editing an agent via CoPilot's `edit_agent` tool, the code was always creating a new `LibraryAgent` entry instead of updating the existing one to point to the new graph version. This caused duplicate agents to appear in the user's library.

## Changes

In `save_agent_to_library()`:

- When `is_update=True`, now checks if there's an existing library agent for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found (e.g., if editing a graph that wasn't added to library yet)

## Testing

- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent

## Related

Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>

b121030c94
feat(frontend): Add progress indicator during agent generation [SECRT-1883] (#11974)
## Summary

- Add asymptotic progress bar that appears during long-running chat tasks
- Progress bar shows after 10 seconds with "Working on it..." label and percentage
- Uses half-life formula: ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc. (see the sketch below)
- Creates the classic "game loading bar" effect that never reaches 100%

https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f

## Test plan

- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes

---------

Co-authored-by: Otto <otto@agpt.co>
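
The half-life curve is simple to state precisely; a sketch in Python (the actual implementation is frontend code) showing that displayed progress halves its remaining distance to 100% every 30 seconds:

```python
def progress(elapsed_s: float, half_life_s: float = 30.0) -> float:
    # The remaining distance to 100% halves every half_life_s seconds,
    # so the bar approaches but never reaches completion.
    return 1.0 - 0.5 ** (elapsed_s / half_life_s)

for t in (30, 60, 90):
    print(f"{t}s -> {progress(t):.1%}")  # 50.0%, 75.0%, 87.5%
```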

c22c18374d
feat(frontend): Add ready-to-test prompt after agent creation [SECRT-1882] (#11975)
## Summary

- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
  - **Run with example values**: Sends chat message asking AI to run with placeholders
  - **Run with my inputs**: Opens RunAgentModal for custom input configuration
- After run/schedule, automatically send chat message with execution details for AI monitoring

https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161

## Test plan

- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is sent
- [x] Fill inputs and schedule - verify chat message with schedule details is sent

---------

Co-authored-by: Otto <otto@agpt.co>

e40233a3ac
fix(backend/chat): Guide find_agent users toward action with CTAs (#11976)
When users search for agents, guide them toward creating custom agents if no results are found or after showing results. This improves user engagement by offering a clear next step.

### Changes 🏗️

- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on their needs
- Applied to both "no results found" and "agents found" scenarios

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents in marketplace with matching results
  - [x] Search for agents in marketplace with no results
  - [x] Search for agents in library with matching results
  - [x] Search for agents in library with no results
  - [x] Verify CTA message appears in all cases

---------

Co-authored-by: Otto <otto@agpt.co>

3ae5eabf9d
fix(backend/chat): Use latest prompt label in non-production environments (#11977)
In non-production environments, the chat service now fetches prompts with the `latest` label instead of the default production-labeled prompt. This makes it easier to test and iterate on prompt changes in dev/staging without needing to promote them to production first.

### Changes 🏗️

- Updated `_get_system_prompt_template()` in chat service to pass `label="latest"` when `app_env` is not `PRODUCTION`
- Production environments continue using the default behavior (production-labeled prompts)

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that in non-production environments, prompts with `latest` label are fetched
  - [x] Verified that production environments still use the default (production) labeled prompts

Co-authored-by: Otto <otto@agpt.co>

a077ba9f03
fix(platform): YouTube block yields only error on failure (#11980)
## Summary

Fixes [SECRT-1889](https://linear.app/autogpt/issue/SECRT-1889): The YouTube transcription block was yielding both `video_id` and `error` when the transcript fetch failed.

## Problem

The block yielded `video_id` immediately upon extracting it from the URL, before attempting to fetch the transcript. If the transcript fetch failed, both outputs were present.

```python
# Before
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id  # ← Yielded before transcript attempt
transcript = self.get_transcript(video_id, credentials)  # ← Could fail here
```

## Solution

Wrap the entire operation in try/except and only yield outputs after all operations succeed:

```python
# After
try:
    video_id = self.extract_video_id(input_data.youtube_url)
    transcript = self.get_transcript(video_id, credentials)
    transcript_text = self.format_transcript(transcript=transcript)
    # Only yield after all operations succeed
    yield "video_id", video_id
    yield "transcript", transcript_text
except Exception as e:
    yield "error", str(e)
```

This follows the established pattern in other blocks (e.g., `ai_image_generator_block.py`).

## Testing

- All 10 unit tests pass (`test/blocks/test_youtube.py`)
- Lint/format checks pass

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>

5401d54eaa
fix(backend): Handle StreamHeartbeat in CoPilot stream handler (#11928)
### Changes 🏗️

Fixes **AUTOGPT-SERVER-7JA** (123 events since Jan 27, 2026).

#### Problem

`StreamHeartbeat` was added to keep SSE connections alive during long-running tool executions (yielded every 15s while waiting). However, the main `stream_chat_completion` handler's `elif` chain didn't have a case for it:

```
StreamTextStart           → ✅ handled
StreamTextDelta           → ✅ handled
StreamTextEnd             → ✅ handled
StreamToolInputStart      → ✅ handled
StreamToolInputAvailable  → ✅ handled
StreamToolOutputAvailable → ✅ handled
StreamFinish              → ✅ handled
StreamError               → ✅ handled
StreamUsage               → ✅ handled
StreamHeartbeat           → ❌ fell through to 'Unknown chunk type' error
```

This meant every heartbeat during tool execution generated a Sentry error instead of keeping the connection alive.

#### Fix

Add `StreamHeartbeat` to the `elif` chain and yield it through (see the sketch below). The route handler already calls `to_sse()` on all yielded chunks, and `StreamHeartbeat.to_sse()` correctly returns `: heartbeat\n\n` (SSE comment format, ignored by clients but keeps proxies/load balancers happy).

**1 file changed, 3 insertions.**
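
Why an SSE comment works as a keep-alive: lines beginning with a colon are ignored by `EventSource` clients but still count as traffic for idle-timeout purposes. A sketch of the pass-through, with `StreamHeartbeat` stubbed as a stand-in for the real chunk type:

```python
from dataclasses import dataclass

@dataclass
class StreamHeartbeat:
    def to_sse(self) -> str:
        # A line starting with ":" is an SSE comment: clients skip it, but
        # proxies and load balancers see activity and keep the socket open.
        return ": heartbeat\n\n"

async def to_sse_stream(chunks):
    async for chunk in chunks:
        if isinstance(chunk, StreamHeartbeat):
            yield chunk.to_sse()  # pass it through instead of erroring
        else:
            yield f"data: {chunk}\n\n"
```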

5ac89d7c0b
fix(test): fix timing bug in test_block_credit_reset (#11978)
## Summary

Fixes the flaky `test_block_credit_reset` test that was failing on multiple PRs with `assert 0 == 1000`.

## Root Cause

The test calls `disable_test_user_transactions()` which sets `updatedAt` to 35 days ago from the **actual current time**. It then mocks `time_now` to January 1st.

**The bug**: If the test runs in early February, 35 days ago is January: the **same month** as the mocked `time_now`. The credit refill logic only triggers when the balance snapshot is from a *different* month, so no refill happens and the balance stays at 0. (See the sketch below.)

## Fix

After calling `disable_test_user_transactions()`, explicitly set `updatedAt` to December of the previous year. This ensures it's always in a different month than the mocked `month1` (January), regardless of when the test runs.

## Testing

CI will verify the fix.
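
The month-boundary interaction is easy to reproduce with `datetime` arithmetic (the dates below are illustrative):

```python
from datetime import datetime, timedelta

mocked_now = datetime(2026, 1, 1)          # test mocks time_now to January 1st
real_now = datetime(2026, 2, 10)           # but the test actually runs in February
snapshot = real_now - timedelta(days=35)   # updatedAt -> 2026-01-06, i.e. January

same_month = (snapshot.year, snapshot.month) == (mocked_now.year, mocked_now.month)
print(same_month)  # True -> refill logic sees the same month and skips the refill
```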

4f908d5cb3
fix(platform): Improve Linear Search Block [SECRT-1880] (#11967)
## Summary

Implements [SECRT-1880](https://linear.app/autogpt/issue/SECRT-1880) - Improve Linear Search Block

## Changes

### Models (`models.py`)

- Added `State` model with `id`, `name`, and `type` fields for workflow state information
- Added `state: State | None` field to `Issue` model

### API Client (`_api.py`)

- Updated `try_search_issues()` to:
  - Add `max_results` parameter (default 10, was ~50) to reduce token usage
  - Add `team_id` parameter for team filtering
  - Return `createdAt`, `state`, `project`, and `assignee` fields in results
- Fixed `try_get_team_by_name()` to return a descriptive error message when the team is not found instead of crashing with `IndexError`

### Block (`issues.py`)

- Added `max_results` input parameter (1-100, default 10)
- Added `team_name` input parameter for optional team filtering
- Added `error` output field for graceful error handling
- Added categories (`PRODUCTIVITY`, `ISSUE_TRACKING`)
- Updated test fixtures to include new fields

## Breaking Changes

| Change | Before | After | Mitigation |
|--------|--------|-------|------------|
| Default result count | ~50 | 10 | Users can set `max_results` up to 100 if needed |

## Non-Breaking Changes

- `state` field added to `Issue` (optional, defaults to `None`)
- `max_results` param added (has default value)
- `team_name` param added (optional, defaults to `None`)
- `error` output added (follows established pattern from GitHub blocks)

## Testing

- [x] Format/lint checks pass
- [x] Unit test fixtures updated

Resolves SECRT-1880

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>

c1aa684743
fix(platform/chat): Filter host-scoped credentials for run_agent tool (#11905)
- Fixes [SECRT-1851: \[Copilot\] `run_agent` tool doesn't filter host-scoped credentials](https://linear.app/autogpt/issue/SECRT-1851)
- Follow-up to #11881

### Changes 🏗️

- Filter host-scoped credentials for `run_agent` tool
- Tighten validation on host input field in `HostScopedCredentialsModal`
- Use netloc (w/ port) rather than just hostname (w/o port) as host scope (see the example below)

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Create graph that requires host-scoped credentials to work
  - Create host-scoped credentials with a *different* host
  - Try to have Copilot run the graph
    - [x] -> no matching credentials available
  - Create new credentials
    - [x] -> works

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
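
The netloc-vs-hostname distinction, illustrated with the standard library's `urllib.parse` (assuming that is what does the parsing here): `hostname` drops the port, while `netloc` preserves it, so two services on the same host but different ports get distinct scopes:

```python
from urllib.parse import urlparse

u = urlparse("https://api.example.com:8443/v1/items")
print(u.hostname)  # api.example.com        (port dropped, lowercased)
print(u.netloc)    # api.example.com:8443   (port preserved)
```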

7e5b84cc5c
fix(copilot): update homepage copy to focus on problem discovery (#11956)
## Summary

Update the CoPilot homepage to shift from "what do you want to automate?" to "tell me about your problems." This lowers the barrier to engagement by letting users describe their work frustrations instead of requiring them to identify automations themselves.

## Changes

| Element | Before | After |
|---------|--------|-------|
| Headline | "What do you want to automate?" | "Tell me about your work — I'll find what to automate." |
| Placeholder | "You can search or just ask - e.g. 'create a blog post outline'" | "What's your role and what eats up most of your day? e.g. 'I'm a real estate agent and I hate...'" |
| Button 1 | "Show me what I can automate" | "I don't know where to start, just ask me stuff" |
| Button 2 | "Design a custom workflow" | "I do the same thing every week and it's killing me" |
| Button 3 | "Help me with content creation" | "Help me find where I'm wasting my time" |
| Container | max-w-2xl | max-w-3xl |

> **Note on container width:** The `max-w-2xl` → `max-w-3xl` change is just to keep the longer headline on one line. This works but may not be the ideal solution — @lluis-xai should advise on the proper approach.

## Why This Matters

The current UX assumes users know what they want to automate. In reality, most users know what frustrates them but can't identify automations. The current screen blocks Otto from starting the discovery conversation that leads to useful recommendations.

## Files Changed

- `autogpt_platform/frontend/src/app/(platform)/copilot/page.tsx` — headline, placeholder, container width
- `autogpt_platform/frontend/src/app/(platform)/copilot/helpers.ts` — quick action button text

Resolves: [SECRT-1876](https://linear.app/autogpt/issue/SECRT-1876)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>

09cb313211
fix(frontend): Prevent reflected XSS in OAuth callback route (#11963)
## Summary

Fixes a reflected cross-site scripting (XSS) vulnerability in the OAuth callback route.

**Security Issue:** https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/202

### Vulnerability

The OAuth callback route at `frontend/src/app/(platform)/auth/integrations/oauth_callback/route.ts` was writing user-controlled data directly into an HTML response without proper sanitization. This allowed potential attackers to inject malicious scripts via OAuth callback parameters.

### Fix

Added a `safeJsonStringify()` function that escapes characters that could break out of the script context (see the sketch below):

- `<` → `\u003c`
- `>` → `\u003e`
- `&` → `\u0026`

This prevents any user-provided values from being interpreted as HTML/script content when embedded in the response.

### References

- [OWASP XSS Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
- [CWE-79: Improper Neutralization of Input During Web Page Generation](https://cwe.mitre.org/data/definitions/79.html)

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the OAuth callback still functions correctly
  - [x] Confirmed special characters in OAuth responses are properly escaped
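
The same escaping idea, sketched in Python for illustration (the actual fix is in the TypeScript route): serialize to JSON, then replace the three characters that could terminate the surrounding `<script>` context with Unicode escapes, which JSON parsers treat identically:

```python
import json

def safe_json_stringify(value) -> str:
    return (
        json.dumps(value)
        .replace("&", "\\u0026")
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
    )

payload = {"state": "</script><script>alert(1)</script>"}
print(safe_json_stringify(payload))
# {"state": "\u003c/script\u003e\u003cscript\u003ealert(1)\u003c/script\u003e"}
# No literal "</script>" survives, so the value can't close the script tag.
```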

c026485023
feat(frontend): Disable auto-opening wallet (#11961)
### Changes 🏗️

- Disable auto-opening Wallet for first time user and on credit increase
- Remove no longer needed `lastSeenCredits` state and storage

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Wallet doesn't open automatically

1eabc60484
Merge commit from fork
Fixes GHSA-rc89-6g7g-v5v7 / CVE-2026-22038

The `logger.info()` calls were explicitly logging API keys via `get_secret_value()`, exposing credentials in plaintext logs.

Changes:

- Replace info-level credential logging with debug-level provider logging
- Remove all explicit secret value logging from observe/act/extract blocks

Co-authored-by: Otto <otto@agpt.co>

f4bf492f24
feat(platform): Add Redis-based SSE reconnection for long-running CoPilot operations (#11877)
## Changes 🏗️
Adds Redis-based SSE reconnection support for long-running CoPilot
operations (like Agent Generator), enabling clients to reconnect and
resume receiving updates after disconnection.
### What this does:
- **Stream Registry** - Redis-backed task tracking with message
persistence via Redis Streams
- **SSE Reconnection** - Clients can reconnect to active tasks using
`task_id` and `last_message_id`
- **Duplicate Message Fix** - Filters out in-progress assistant messages
from session response when active stream exists
- **Completion Consumer** - Handles background task completion
notifications via Redis Streams
### Architecture:
```
1. User sends message → Backend creates task in Redis
2. SSE chunks written to Redis Stream for persistence
3. Client receives chunks via SSE subscription
4. If client disconnects → Task continues in background
5. Client reconnects → GET /sessions/{id} returns active_stream info
6. Client subscribes to /tasks/{task_id}/stream with last_message_id
7. Missed messages replayed from Redis Stream
```
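
A sketch of the replay step (6-7 above), assuming redis-py's asyncio client; the stream key layout and field encoding are illustrative, not the real ones:

```python
import redis.asyncio as redis

async def replay_task_stream(task_id: str, last_message_id: str = "0-0"):
    r = redis.Redis()
    stream_key = f"chat:task:{task_id}:stream"  # assumed key layout
    try:
        while True:
            # Block up to 15s waiting for entries newer than last_message_id
            batches = await r.xread({stream_key: last_message_id}, block=15_000)
            if not batches:
                yield ": heartbeat\n\n"  # keep the SSE connection alive
                continue
            for _key, entries in batches:
                for message_id, fields in entries:
                    last_message_id = message_id  # resume point after disconnect
                    yield fields  # forward the persisted chunk to the client
    finally:
        await r.aclose()  # redis-py 5.x; older versions use close()
```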
### Key endpoints:
- `GET /sessions/{session_id}` - Returns `active_stream` info if task is
running
- `GET /tasks/{task_id}/stream?last_message_id=X` - SSE endpoint for
reconnection
- `GET /tasks/{task_id}` - Get task status
- `POST /operations/{op_id}/complete` - Webhook for external service
completion
### Duplicate message fix:
When `GET /sessions/{id}` detects an active stream:
1. Filters out the in-progress assistant message from response
2. Returns `last_message_id="0-0"` so client replays stream from
beginning
3. Client receives complete response only through SSE (single source of
truth)
### Frontend changes:
- Task persistence in localStorage for cross-tab reconnection
- Stream event dispatcher handles reconnection flow
- Deduplication logic prevents duplicate messages
### Testing:
- Manual testing of reconnection scenarios
- Verified duplicate message fix works correctly
## Related
- Resolves SSE timeout issues for Agent Generator
- Fixes duplicate message bug on reconnection

81e48c00a4
feat(copilot): add customize_agent tool for marketplace templates (#11943)
## Summary
Adds a new copilot tool that allows users to customize
marketplace/template agents using natural language before adding them to
their library.
This exposes the Agent Generator's `/api/template-modification` endpoint
to the copilot, which was previously not available.
## Changes
- **service.py**: Add `customize_template_external` to call Agent
Generator's template modification endpoint
- **core.py**:
- Add `customize_template` wrapper function
- Extract `graph_to_json` as a reusable function (was previously inline
in `get_agent_as_json`)
- **customize_agent.py**: New tool that:
- Takes marketplace agent ID (format: `creator/slug`)
- Fetches template from store via `store_db.get_agent()`
- Calls Agent Generator for customization
- Handles clarifying questions from the generator
- Saves customized agent to user's library
- **__init__.py**: Register the tool in `TOOL_REGISTRY` for
auto-discovery
## Usage Flow
1. User searches marketplace: *"Find me a newsletter agent"*
2. Copilot calls `find_agent` → returns `autogpt/newsletter-writer`
3. User: *"Customize that agent to post to Discord instead of email"*
4. Copilot calls:
```
customize_agent(
agent_id="autogpt/newsletter-writer",
modifications="Post to Discord instead of sending email"
)
```
5. Agent Generator may ask clarifying questions (e.g., "What Discord
channel?")
6. Customized agent is saved to user's library
## Test plan
- [x] Verified tool imports correctly
- [x] Verified tool is registered in `TOOL_REGISTRY`
- [x] Verified OpenAI function schema is valid
- [x] Ran existing tests (`pytest backend/api/features/chat/tools/`) -
all pass
- [x] Type checker (`pyright`) passes with 0 errors
- [ ] Manual testing with copilot (requires Agent Generator service)

7dc53071e8
fix(backend): Add retry and error handling to block initialization (#11946)
## Summary

Adds retry logic and graceful error handling to `initialize_blocks()` to prevent transient DB errors from crashing server startup.

## Problem

When a transient database error occurs during block initialization (e.g., Prisma P1017 "Server has closed the connection"), the entire server fails to start. This is overly aggressive since:

1. Blocks are already registered in memory
2. The DB sync is primarily for tracking/schema storage
3. One flaky connection shouldn't prevent the server from starting

**Triggered by:** [Sentry AUTOGPT-SERVER-7PW](https://significant-gravitas.sentry.io/issues/7238733543/)

## Solution

- Add retry decorator (3 attempts with exponential backoff) for DB operations (see the sketch below)
- On failure after retries, log a warning and continue to the next block
- Blocks remain available in memory even if DB sync fails
- Log summary of any failed blocks at the end

## Changes

- `autogpt_platform/backend/backend/data/block.py`: Wrap block DB sync in retry logic with graceful fallback

## Testing

- Existing block initialization behavior unchanged on success
- On transient DB errors: retries up to 3 times, then continues with warning
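
A minimal sketch of the retry-then-continue behavior (the real decorator and block-sync internals differ; `sync_to_db` is a stand-in):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def sync_block_with_retry(block, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(1, attempts + 1):
        try:
            return await block.sync_to_db()  # stand-in for the real DB sync
        except Exception as exc:
            if attempt == attempts:
                # Give up on this block but let startup continue: the block
                # is already registered in memory, so only DB tracking is lost.
                logger.warning("DB sync failed for block %s: %s", block.id, exc)
                return None
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s
```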

4878665c66
Merge branch 'master' into dev

678ddde751
refactor(backend): unify context compression into compress_context() (#11937)
## Background
This PR consolidates and unifies context window management for the
CoPilot backend.
### Problem
The CoPilot backend had **two separate implementations** of context
window management:
1. **`service.py` → `_manage_context_window()`** - Chat service
streaming/continuation
2. **`prompt.py` → `compress_prompt()`** - Sync LLM blocks
This duplication led to inconsistent behavior, maintenance burden, and
duplicate code.
---
## Solution: Unified `compress_context()`
A single async function that handles both use cases:
| Caller | Usage | Behavior |
|--------|-------|----------|
| **Chat service** | `compress_context(msgs, client=openai_client)` |
Summarization → Truncation |
| **LLM blocks** | `compress_context(msgs, client=None)` | Truncation
only (no API call) |
---
## Strategy Order
| Step | Description | Runs When |
|------|-------------|-----------|
| **1. LLM Summarization** | Summarize old messages into single context
message, keep recent 15 | Only if `client` provided |
| **2. Content Truncation** | Progressively truncate message content
(8192→4096→...→128 tokens) | If still over limit |
| **3. Middle-out Deletion** | Delete messages one at a time from center
outward | If still over limit |
| **4. First/Last Trim** | Truncate system prompt and last message
content | Last resort |
### Why This Order?
1. **Summarization first** (if available) - Preserves semantic meaning
of old messages
2. **Content truncation before deletion** - Keeps all conversation
turns, just shorter
3. **Middle-out deletion** - More granular than dropping all old
messages at once
4. **First/last trim** - Only touch system prompt as last resort
---
## Key Fixes
| Issue | Before | After |
|-------|--------|-------|
| **Socket leak** | `AsyncOpenAI` client never closed | `async with`
context manager |
| **Timeout ignored** | `timeout=30` passed to `create()` (invalid) |
`client.with_options(timeout=30)` |
| **OpenAI tool messages** | Not truncated | Properly truncated |
| **Tool pair integrity** | OpenAI format only | Both OpenAI + Anthropic
formats |
---
## Tool Format Support
`_ensure_tool_pairs_intact()` now supports both formats:
### OpenAI Format
```python
# Assistant with tool_calls
{"role": "assistant", "tool_calls": [{"id": "call_1", ...}]}
# Tool response
{"role": "tool", "tool_call_id": "call_1", "content": "result"}
```
### Anthropic Format
```python
# Assistant with tool_use
{"role": "assistant", "content": [{"type": "tool_use", "id": "toolu_1", ...}]}
# Tool result
{"role": "user", "content": [{"type": "tool_result", "tool_use_id": "toolu_1", ...}]}
```
---
## Files Changed
| File | Change |
|------|--------|
| `backend/util/prompt.py` | +450 lines: Add `CompressResult`,
`compress_context()`, helpers |
| `backend/api/features/chat/service.py` | -380 lines: Remove duplicate,
use thin wrapper |
| `backend/blocks/llm.py` | Migrate `llm_call()` to use
`compress_context(client=None)` |
| `backend/util/prompt_test.py` | +400 lines: Comprehensive tests
(OpenAI + Anthropic) |
### Removed
- `compress_prompt()` - Replaced by `compress_context(client=None)`
- `_manage_context_window()` - Replaced by
`compress_context(client=openai_client)`
---
## API
```python
async def compress_context(
messages: list[dict],
target_tokens: int = 120_000,
*,
model: str = "gpt-4o",
client: AsyncOpenAI | None = None, # None = truncation only
keep_recent: int = 15,
reserve: int = 2_048,
start_cap: int = 8_192,
floor_cap: int = 128,
) -> CompressResult:
...
@dataclass
class CompressResult:
messages: list[dict]
token_count: int
was_compacted: bool
error: str | None = None
original_token_count: int = 0
messages_summarized: int = 0
messages_dropped: int = 0
```
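
Hypothetical usage of the two modes, matching the API above (`openai_client` is assumed to be an `AsyncOpenAI` instance, and the snippet runs inside an async function):

```python
# LLM blocks: truncation only, no API calls
result = await compress_context(messages, client=None)

# Chat service: summarization first, then truncation if still over budget
result = await compress_context(messages, client=openai_client)

if result.was_compacted:
    messages = result.messages  # continue with the compacted history
```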
---
## Tests Added
| Test Class | Coverage |
|------------|----------|
| `TestMsgTokens` | Token counting for regular messages, OpenAI tool
calls, Anthropic tool_use |
| `TestTruncateToolMessageContent` | OpenAI + Anthropic tool message
truncation |
| `TestEnsureToolPairsIntact` | OpenAI format (3 tests), Anthropic
format (3 tests), edge cases (3 tests) |
| `TestCompressContext` | No compression, truncation-only, tool pair
preservation, error handling |
---
## Checklist
- [x] Code follows project conventions
- [x] Linting passes (`poetry run format`)
- [x] Type checking passes (`pyright`)
- [x] Tests added for all new functions
- [x] Both OpenAI and Anthropic tool formats supported
- [x] Backward compatible behavior preserved
- [x] All review comments addressed

aef6f57cfd
fix(scheduler): route db calls through DatabaseManager (#11941)
## Summary

Routes `increment_onboarding_runs` and `cleanup_expired_oauth_tokens` through the DatabaseManager RPC client instead of calling Prisma directly.

## Problem

The Scheduler service never connects its Prisma client. While `add_graph_execution()` in `utils.py` has a fallback that routes through DatabaseManager when Prisma isn't connected, subsequent calls in the scheduler were hitting Prisma directly:

- `increment_onboarding_runs()` after successful graph execution
- `cleanup_expired_oauth_tokens()` in the scheduled job

These threw `ClientNotConnectedError`, caught by generic exception handlers but spamming Sentry (~696K events since December per the original analysis in #11926).

## Solution

Follow the same pattern as `utils.py`:

1. Add `cleanup_expired_oauth_tokens` to `DatabaseManager` and `DatabaseManagerAsyncClient`
2. Update scheduler to use `get_database_manager_async_client()` for both calls

## Changes

- **database.py**: Import and expose `cleanup_expired_oauth_tokens` in both manager classes
- **scheduler.py**: Use `db.increment_onboarding_runs()` and `db.cleanup_expired_oauth_tokens()` via the async client

## Impact

- Eliminates Sentry error spam from scheduler
- Onboarding run counters now actually increment for scheduled executions
- OAuth token cleanup now actually runs

## Testing

Deploy to staging with scheduled graphs and verify:

1. No more `ClientNotConnectedError` in scheduler logs
2. `UserOnboarding.agentRuns` increments on scheduled runs
3. Expired OAuth tokens get cleaned up

Refs: #11926 (original fix that was closed)

14cee1670a
fix(backend): Prevent leaking Redis connections in ws_api (#11869)
Fixing https://github.com/Significant-Gravitas/AutoGPT/pull/11297#discussion_r2496833421

### Changes 🏗️

1. `event_bus.py` - Added close method to `AsyncRedisEventBus`:
   - Added `__init__` method to track the `_pubsub` instance attribute
   - Added `async def close()` method that closes the PubSub connection safely
   - Modified `listen_events()` to store the pubsub reference in `self._pubsub` (see the sketch below)
2. `ws_api.py` - Added cleanup in `event_broadcaster`:
   - Wrapped the worker coroutines in try/finally block
   - The finally block calls `close()` on both event buses to ensure cleanup happens on any exit (including exceptions before retry)
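
A sketch of the pattern, assuming redis-py's asyncio client (attribute and method names follow the PR text; the real class does more):

```python
import redis.asyncio as redis

class AsyncRedisEventBus:
    def __init__(self) -> None:
        self._pubsub = None  # track the PubSub so it can be closed later

    async def listen_events(self, channel: str):
        r = redis.Redis()
        self._pubsub = r.pubsub()
        await self._pubsub.subscribe(channel)
        async for message in self._pubsub.listen():
            if message["type"] == "message":
                yield message["data"]

    async def close(self) -> None:
        # Safe to call from a finally block: releases the PubSub connection
        # even if the listener exited via an exception.
        if self._pubsub is not None:
            await self._pubsub.aclose()  # redis-py 5.x; older versions use close()
            self._pubsub = None
```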

d81d1ce024
refactor(backend): extract context window management and fix LLM continuation (#11936)
## Summary

Fixes CoPilot becoming unresponsive after long-running tools complete, and refactors context window management into a reusable function.

## Problem

After `create_agent` completes, `_generate_llm_continuation()` was sending ALL messages to OpenRouter without any context compaction. When conversations exceeded ~50 messages, OpenRouter rejected requests with `provider_name: 'unknown'` (no provider would accept).

**Evidence:** Langfuse session [44fbb803-092e-4ebd-b288-852959f4faf5](https://cloud.langfuse.com/project/cmk5qhf210003ad079sd8utjt/sessions/44fbb803-092e-4ebd-b288-852959f4faf5) showed:

- Successful calls: 32-50 messages, known providers
- Failed calls: 52+ messages, `provider: unknown`, `completion: null`

## Changes

### Refactor: Extract reusable `_manage_context_window()`

- Counts tokens and checks against 120k threshold
- Summarizes old messages while keeping recent 15
- Ensures tool_call/tool_response pairs stay intact
- Progressive truncation if still over limit
- Returns `ContextWindowResult` dataclass with messages, token count, compaction status, and errors
- Helper `_messages_to_dicts()` reduces code duplication

### Fix: Update `_generate_llm_continuation()`

- Now calls `_manage_context_window()` before making LLM calls
- Adds retry logic with exponential backoff (matching `_stream_chat_chunks` behavior)

### Cleanup: Update `_stream_chat_chunks()`

- Replaced inline context management with call to `_manage_context_window()`
- Eliminates code duplication between the two functions

## Testing

- Syntax check: ✅
- Ruff lint: ✅
- Import verification: ✅

## Checklist

- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] My changes generate no new warnings
- [x] I have checked that my changes do not break existing functionality

---------

Co-authored-by: Otto <otto@agpt.co>

2dd341c369
refactor: enrich description with context before calling Agent Generator (#11932)
## Summary

Updates the Agent Generator client to enrich the description with context before calling, instead of sending `user_instruction` as a separate parameter.

## Context

Companion PR to Significant-Gravitas/AutoGPT-Agent-Generator#105, which removes unused parameters from the decompose API.

## Changes

- Enrich `description` with `context` (e.g., clarifying question answers) before sending
- Remove `user_instruction` from request payload

## How it works

Both input boxes and chat box work the same way - the frontend constructs a formatted message with answers and sends it as a user message. The backend then enriches the description with this context before calling the external Agent Generator service.

f7350c797a
fix(copilot): use messages_dict in fallback context compaction (#11922)
## Summary

Fixes a bug where the fallback path in context compaction passes `recent_messages` (already sliced) instead of `messages_dict` (full conversation) to `_ensure_tool_pairs_intact`. This caused the function to fail to find assistant messages that exist in the original conversation but were outside the sliced window, resulting in orphan tool_results being sent to Anthropic and rejected with:

```
messages.66.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_019bi1PDvEn7o5ByAxcS3VdA
```

## Changes

- Pass `messages_dict` and `slice_start` (relative to full conversation) instead of `recent_messages` and `reduced_slice_start` (relative to already-sliced list)

## Testing

This is a targeted fix for the fallback path. The bug only manifests when:

1. Token count > 120k (triggers compaction)
2. Initial compaction + summary still exceeds limit (triggers fallback)
3. A tool_result's corresponding assistant is in `messages_dict` but not in `recent_messages`

## Related

- Fixes SECRT-1861
- Related: SECRT-1839 (original fix that missed this code path)

1081590384
feat(backend): cover webhook ingress URL route (#11747)
### Changes 🏗️

- Add a unit test to verify webhook ingress URL generation matches the FastAPI route.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] poetry run pytest backend/integrations/webhooks/utils_test.py --confcutdir=backend/integrations/webhooks

#### For configuration changes:

- [x] .env.default is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under Changes)

## Summary by CodeRabbit

- **Tests**
  - Added a unit test that validates webhook ingress URL generation matches the application's resolved route (scheme, host, and path) for provider-specific webhook endpoints, improving confidence in routing behavior and helping prevent regressions.

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

7e37de8e30
fix: Include graph schemas for marketplace agents in Agent Generator (#11920)
## Problem

When marketplace agents are included in the `library_agents` payload sent to the Agent Generator service, they were missing required fields (`graph_id`, `graph_version`, `input_schema`, `output_schema`). This caused Pydantic validation to fail with HTTP 422 Unprocessable Entity.

**Root cause:** The `MarketplaceAgentSummary` TypedDict had a different shape than the `LibraryAgentInfo` expected by the Agent Generator:

- Agent Generator expects: `graph_id`, `graph_version`, `name`, `description`, `input_schema`, `output_schema`
- MarketplaceAgentSummary had: `name`, `description`, `sub_heading`, `creator`, `is_marketplace_agent`

## Solution

1. **Add `agent_graph_id` to `StoreAgent` model** - The field was already in the database view but not exposed
2. **Include `agentGraphId` in hybrid search SQL query** - Carry the field through the search CTEs
3. **Update `search_marketplace_agents_for_generation()`** - Now fetches full graph schemas using `get_graph()` and returns `LibraryAgentSummary` (same type as library agents)
4. **Update deduplication logic** - Use `graph_id` instead of name for more accurate deduplication

## Changes

- `backend/api/features/store/model.py`: Add optional `agent_graph_id` field to `StoreAgent`
- `backend/api/features/store/hybrid_search.py`: Include `agentGraphId` in SQL query columns
- `backend/api/features/store/db.py`: Map `agentGraphId` when creating `StoreAgent` objects
- `backend/api/features/chat/tools/agent_generator/core.py`: Update `search_marketplace_agents_for_generation()` to fetch and include full graph schemas

## Testing

- [ ] Agent creation on dev with marketplace agents in context
- [ ] Verify no 422 errors from Agent Generator
- [ ] Verify marketplace agents can be used as sub-agents

Fixes: SECRT-1817

---------

Co-authored-by: majdyz <majdyz@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>

2abbb7fbc8
hotfix(backend): use discriminator for credential matching in run_block (#11908)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

05b60db554
fix(backend/chat): Include input schema in discovery and validate unknown fields (#11916)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

18a1661fa3
feat: add library agent fetching with two-phase search for sub-agent support (#11889)
## Context
When users ask the chat to create agents, they may want to compose
workflows that reuse their existing agents as sub-agents. For this to
work, the Agent Generator service needs to know what agents the user has
available.
**Challenge:** Users can have large libraries with many agents. Fetching
all of them would be slow and provide too much context to the LLM.
## Solution
This PR implements **search-based library agent fetching** with a
**two-phase search** strategy:
1. **Phase 1 (Initial Search):** When the user describes their goal, we
search for relevant library agents using the goal as the search query
2. **Phase 2 (Step-Based Enrichment):** After the goal is decomposed
into steps, we extract keywords from those steps and search for
additional relevant agents
This ensures we find agents that are relevant to both the high-level
goal AND the specific steps identified.
### Example Flow
```
User goal: "Create an agent that fetches weather and sends a summary email"
Phase 1: Search for "weather email summary" → finds "Weather Fetcher" agent
Phase 2: After decomposition identifies steps like "send email notification"
→ searches "send email notification" → finds "Gmail Sender" agent
```
### Changes
**Library Agent Fetching:**
- `get_library_agents_for_generation()` - Search-based fetching from
user's library
- `search_marketplace_agents_for_generation()` - Search public
marketplace
- `get_all_relevant_agents_for_generation()` - Combines both with
deduplication
**Two-Phase Search:**
- `extract_search_terms_from_steps()` - Extracts keywords from
decomposed steps
- `enrich_library_agents_from_steps()` - Searches for additional agents
based on steps
- Integrated into `create_agent.py` as "Step 1.5" after goal
decomposition
**Type Safety:**
- Added `TypedDict` definitions: `LibraryAgentSummary`,
`MarketplaceAgentSummary`, `DecompositionStep`, `DecompositionResult`
### Design Decisions
- **Search-based, not fetch-all:** Scalable for large libraries
- **Library agents prioritized:** They have full schemas; marketplace
agents have basic info only
- **Deduplication by name and graph_id:** Prevents duplicates across
searches (see the sketch after this list)
- **Graceful degradation:** Failures don't block agent generation
- **Limited to 3 search terms:** Avoids excessive API calls during
enrichment
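
The combine-and-dedupe behaviour described above might look roughly like this; `get_all_relevant_agents_for_generation()` is named in this PR, but the dict keys and body below are assumptions.

```
from typing import Any


def get_all_relevant_agents_for_generation(
    library: list[dict[str, Any]],
    marketplace: list[dict[str, Any]],
) -> list[dict[str, Any]]:
    # Library agents go first (they carry full schemas); a marketplace
    # agent is skipped if its graph_id or name was already seen.
    seen_ids = {a["graph_id"] for a in library}
    seen_names = {a["name"] for a in library}
    merged = list(library)
    for agent in marketplace:
        if agent["graph_id"] in seen_ids or agent["name"] in seen_names:
            continue
        seen_ids.add(agent["graph_id"])
        seen_names.add(agent["name"])
        merged.append(agent)
    return merged
```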
## Related PR
- Agent Generator:
https://github.com/Significant-Gravitas/AutoGPT-Agent-Generator/pull/103
## Test plan
- [x] `test_library_agents.py` - 19 tests covering all new functions
- [x] `test_service.py` - 4 tests for library_agents passthrough
- [ ] Integration test: Create agent with library sub-agent composition
|
||
|
|
cc4839bedb |
hotfix(frontend): fix home redirect (3) (#11904)
### Changes 🏗️
Further improvements to LaunchDarkly initialisation and homepage redirect...

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the flag disabled/enabled, and the redirects work

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <0ubbe@users.noreply.github.com> |
||
|
|
dbbff04616 |
hotfix(frontend): LD remount (#11903)
## Changes 🏗️
Removes the `key` prop from `LDProvider` that was causing full remounts when user context changed.

### The Problem
The `key={context.key}` prop was forcing React to unmount and remount the entire LDProvider when switching from anonymous → logged in user:
```
1. Page loads, user loading → key="anonymous" → LD mounts → flags available ✅
2. User finishes loading → key="user-123" → React sees key changed
3. LDProvider UNMOUNTS → flags become undefined ❌
4. New LDProvider MOUNTS → initializes again → flags available ✅
```
This caused the flag values to cycle: `undefined → value → undefined → value`

### The Fix
Remove the `key` prop. The LDProvider handles context changes internally via the `context` prop, which triggers `identify()` without remounting the provider.

## Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Flag values don't flicker on page load
  - [ ] Flag values update correctly when logging in/out
  - [ ] No redirect race conditions

Related: SECRT-1845 |
||
|
|
350ad3591b |
fix(backend/chat): Filter credentials for graph execution by scopes (#11881)
[SECRT-1842: run_agent tool does not correctly use credentials - agents fail with insufficient auth scopes](https://linear.app/autogpt/issue/SECRT-1842)

### Changes 🏗️
- Include scopes in the credentials filter in `backend.api.features.chat.tools.utils.match_user_credentials_to_graph`

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI must pass
  - It's broken now and a simple change, so we'll test in the dev deployment |
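A minimal sketch of scope-aware credential matching, assuming simple set semantics; the real logic in `match_user_credentials_to_graph` is more involved.

```
def credential_matches(
    provider: str,
    scopes: set[str],
    required_provider: str,
    required_scopes: set[str],
) -> bool:
    # A credential only qualifies if it is for the right provider AND
    # already carries every scope the graph's blocks need; otherwise
    # execution fails later with insufficient auth scopes.
    return provider == required_provider and required_scopes <= scopes


# Example: a Gmail credential with read-only scope must not be matched
# to a graph that also needs send permission.
print(credential_matches(
    "google", {"gmail.readonly"}, "google", {"gmail.readonly", "gmail.send"}
))  # False
```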
||
|
|
e6438b9a76 |
hotfix(frontend): use server redirect (#11900)
### Changes 🏗️
The page used a client-side redirect (`useEffect` + `router.replace`), which only works after JavaScript loads and hydrates. On deployed sites, if there's any delay or failure in JS execution, users see an empty/black page because the component returns null.

**Fix:** Converted to a server-side redirect using `redirect()` from `next/navigation`; the page is now a server component.

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested locally but will see it fully working once deployed |
||
|
|
de0ec3d388 |
chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of Anthropic's API retirement, with comprehensive migration and defensive error handling.

## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`) on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from the platform and migrates existing users to prevent service interruptions.

## Changes
### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to `CLAUDE_4_5_SONNET` (staying in the Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()` for deprecated model values

### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove the deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to remove the deprecated model and add Claude 4.5 Sonnet

## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference this model in their provider mappings. These are auto-generated and should be regenerated separately.

## Testing
- [ ] Verify the LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify the migration runs successfully
- [ ] Verify the deprecated model gives a helpful error message instead of a KeyError |
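A hedged sketch of the defensive handling: `CredentialsFieldInfo.discriminate()` is named in this PR, but the helper below, including the replacement model id, is illustrative only.

```
DEPRECATED_MODELS = {
    # Retired 2026-02-19; migration target per this PR (id is assumed).
    "claude-3-7-sonnet-20250219": "claude-sonnet-4-5",
}


def resolve_model(model: str, known: dict[str, object]) -> object:
    if model in known:
        return known[model]
    if model in DEPRECATED_MODELS:
        # Helpful error instead of the bare KeyError users used to hit.
        raise ValueError(
            f"Model '{model}' was deprecated; existing agents are "
            f"migrated to '{DEPRECATED_MODELS[model]}'. Please re-save "
            f"the agent to pick up the new model."
        )
    raise ValueError(f"Unknown model '{model}'.")
```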
||
|
|
e10ff8d37f |
fix(frontend): remove double flag check on homepage redirect (#11894)
## Changes 🏗️
Fixes the hard refresh redirect bug (SECRT-1845) by removing the double
feature flag check.
### Before (buggy)
```
/ → checks flag → /copilot or /library
/copilot (layout) → checks flag → /library if OFF
```
On hard refresh, two sequential LD checks created a race condition
window.
### After (fixed)
```
/ → always redirects to /copilot
/copilot (layout) → single flag check via FeatureFlagPage
```
Single check point = no double-check race condition.
## Root Cause
As identified by @0ubbe: the root page and copilot layout were both
checking the feature flag. On hard refresh with network latency, the
second check could fire before LaunchDarkly fully initialized, causing
users to be bounced to `/library`.
## Test Plan
- [ ] Hard refresh on `/` → should go to `/copilot` (flag ON)
- [ ] Hard refresh on `/copilot` → should stay on `/copilot` (flag ON)
- [ ] With flag OFF → should redirect to `/library`
- [ ] Normal navigation still works
Fixes: SECRT-1845
cc @0ubbe
|
||
|
|
7cb1e588b0 |
fix(frontend): Refocus ChatInput after voice transcription completes (#11893)
## Summary
Refocuses the chat input textarea after voice transcription finishes, allowing users to immediately use `spacebar+enter` to record and send their prompt.

## Changes
- Added `inputId` parameter to the `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow

## Testing
1. Click the mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. The user can now press Enter to send or spacebar to record again

---------

Co-authored-by: Lluis Agusti <hi@llu.lu> |
||
|
|
582c6cad36 |
fix(e2e): Make E2E test data deterministic and fix flaky tests (#11890)
## Summary
Fixes flaky E2E marketplace and library tests that were causing PRs to be removed from the merge queue.

## Root Cause
1. **Test data was probabilistic** - `e2e_test_data.py` used random chances (40% approve, then 20-50% feature), which could result in 0 featured agents
2. **Library pagination threshold wrong** - Checked `>= 10`, but the page size is 20
3. **Fixed timeouts** - Used `waitForTimeout(2000)` / `waitForTimeout(10000)` instead of proper waits

## Changes
### Backend (`e2e_test_data.py`)
- Add guaranteed minimums: 8 featured agents, 5 featured creators, 10 top agents
- The first N submissions are deterministically approved and featured
- Increase agents per user from 15 → 25 (for pagination with page_size=20)
- Fix library agent creation to use constants instead of a hardcoded `10`

### Frontend Tests
- `library.spec.ts`: Fix pagination threshold to `PAGE_SIZE` (20)
- `library.page.ts`: Replace the 2s timeout with `networkidle` + `waitForFunction`
- `marketplace.page.ts`: Add `networkidle` wait, 30s waits in `getFirst*` methods
- `marketplace.spec.ts`: Replace the 10s timeout with `waitForFunction`
- `marketplace-creator.spec.ts`: Add `networkidle` + element waits

## Related
- Closes SECRT-1848, SECRT-1849
- Should unblock #11841 and other PRs in the merge queue

---------

Co-authored-by: Ubbe <hi@ubbe.dev> |
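A sketch of the guaranteed-minimums idea: the floors (8 featured agents, 5 featured creators, 10 top agents) come from this PR, while the helper name and the extra-feature probability are illustrative assumptions.

```
import random

MIN_FEATURED_AGENTS = 8
MIN_FEATURED_CREATORS = 5
MIN_TOP_AGENTS = 10


def pick_featured(submission_ids: list[str]) -> set[str]:
    # The first MIN_FEATURED_AGENTS submissions are always featured, so
    # marketplace tests always have data to assert on; later submissions
    # may still be featured by chance (probability is illustrative).
    featured = set(submission_ids[:MIN_FEATURED_AGENTS])
    for sid in submission_ids[MIN_FEATURED_AGENTS:]:
        if random.random() < 0.3:
            featured.add(sid)
    return featured
```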
||
|
|
b2eb4831bd |
feat(chat): improve agent generator error propagation (#11884)
## Summary
- Add helper functions in `service.py` to create standardized error responses with `error_type` classification
- Update service functions to return error dicts instead of `None`, preserving error details from the Agent Generator microservice
- Update `core.py` to pass through error responses properly
- Update `create_agent.py` to handle error responses with user-friendly messages based on error type

## Error Types Now Propagated

| Error Type | Description | User Message |
|------------|-------------|--------------|
| `llm_parse_error` | LLM returned unparseable response | "The AI had trouble understanding this request" |
| `llm_timeout` / `timeout` | Request timed out | "The request took too long" |
| `llm_rate_limit` / `rate_limit` | Rate limited | "The service is currently busy" |
| `validation_error` | Agent validation failed | "The generated agent failed validation" |
| `connection_error` | Could not connect to Agent Generator | Generic error message |
| `http_error` | HTTP error from Agent Generator | Generic error message |
| `unknown` | Unclassified error | Generic error message |

## Motivation
This enables better debugging for issues like SECRT-1817, where decomposition failed due to transient LLM errors but the root cause was unclear in the logs. Now:
1. Error details from the Agent Generator microservice are preserved
2. Users get more helpful error messages based on error type
3. Debugging is easier with `error_type` in the response details

## Related PR
- Agent Generator side: https://github.com/Significant-Gravitas/AutoGPT-Agent-Generator/pull/102

## Test Plan
- [ ] Test decomposition with various error scenarios (timeout, parse error)
- [ ] Verify user-friendly messages are shown based on error type
- [ ] Check that error details are logged properly |
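A minimal sketch of such an error-response helper: the message mapping follows the table above, while the function name and dict keys are assumptions.

```
from typing import Any

_USER_MESSAGES = {
    "llm_parse_error": "The AI had trouble understanding this request",
    "llm_timeout": "The request took too long",
    "timeout": "The request took too long",
    "llm_rate_limit": "The service is currently busy",
    "rate_limit": "The service is currently busy",
    "validation_error": "The generated agent failed validation",
}
_GENERIC = "Agent generation failed; please try again"


def make_error_response(error_type: str, detail: str) -> dict[str, Any]:
    # Returned instead of None, so callers can log `detail` while
    # showing the user a message keyed on error_type.
    return {
        "success": False,
        "error_type": error_type,
        "error": detail,
        "user_message": _USER_MESSAGES.get(error_type, _GENERIC),
    }
```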
||
|
|
4cd5da678d |
refactor(claude): Split autogpt_platform/CLAUDE.md into project-specific files (#11788)
Split `autogpt_platform/CLAUDE.md` into project-specific files, to make the scope of the instructions clearer.

Also, some minor improvements:
- Change references to other Markdown files to the @file/path.md syntax that Claude recognizes
- Update ambiguous/incorrect/outdated instructions
- Remove trailing slashes
- Fix broken file path references in other docs (including comments) |
||
|
|
9538992eaf |
hotfix(frontend): flags copilot redirects (#11878)
## Changes 🏗️ - Refactor homepage redirect logic to always point to `/` - the `/` route handles whether to redirect to `/copilot` or `/library` based on flag - Simplify `useGetFlag` checks - Add `<FeatureFlagRedirect />` and `<FeatureFlagPage />` wrapper components - helpers to do 1 thing or the other, depending on chat enabled/disabled - avoids boilerplate code, checking flagss and redirects mistakes (especially around race conditions with LD init ) ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Log in / out of AutoGPT with flag disabled/enabled - [x] Sign up to AutoGPT with flag disabled/enabled - [x] Redirects to homepage always work `/` - [x] Can't access Copilot with disabled flag |