- Add comprehensive documentation for workspace and media file handling
- Cover `UserWorkspace`/`UserWorkspaceFile` DB models
- Document `WorkspaceManager` API with session scoping
- Explain `store_media_file()` media normalization pipeline
- Define responsibility boundaries for virus scanning and persistence
- Include decision tree for when to use each component
- Add key files reference table
- Update CLAUDE.md with reference to new documentation
Defense in depth: scan content at the persistence layer regardless of
caller. Previously, scanning happened only at the entry points
(`store_media_file`, `WriteWorkspaceFileTool`), which made those entry
points an implicit trust boundary and left other write paths unchecked.
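A minimal sketch of persistence-layer scanning under that model, assuming a `scan_bytes()` helper wired to the real scanner; all names here are illustrative, not the actual API:

```python
class ScanError(Exception):
    pass

def scan_bytes(content: bytes) -> None:
    # Placeholder for the real scanner integration.
    if b"EICAR" in content:
        raise ScanError("malicious content detected")

class WorkspaceStore:
    def write_file(self, path: str, content: bytes) -> None:
        # Scan on every write, regardless of which caller produced the
        # bytes, so entry-point checks are no longer the only defense.
        scan_bytes(content)
        ...  # persist to storage
```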
Closes OPEN-2993
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.
### Changes 🏗️
**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)
**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations
**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI
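A rough sketch of the validation-and-cleanup pattern described above, using the moviepy 1.x API; the function and its signature are illustrative, not the shipped block interface:

```python
from moviepy.editor import VideoFileClip  # moviepy 1.x import path

def clip_video(path: str, start_time: float, end_time: float) -> str:
    # Validate inputs before doing any expensive work.
    if end_time <= start_time:
        raise ValueError("end_time must be greater than start_time")
    clip = None
    try:
        clip = VideoFileClip(path).subclip(start_time, end_time)
        out_path = f"{path}.clip.mp4"
        clip.write_videofile(out_path)
        return out_path
    except Exception as e:
        # Exception chaining preserves the ffmpeg-level cause.
        raise RuntimeError(f"failed to clip {path}") from e
    finally:
        if clip is not None:
            clip.close()  # release ffmpeg reader processes
```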
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
- [x] Resource cleanup is implemented in `finally` blocks
- [x] Exception chaining is properly implemented
- [x] Input validation is in place
- [x] Test mocks are provided for CI environments
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
N/A - No configuration changes required.
---
> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
>
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
>
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
>
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.
>
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Adds support for Anthropic's newly released Claude Opus 4.6 model.
## Changes
- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output
tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input,
$25/MTok output)
- Updated chat config default model to Claude Opus 4.6
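The shape of the change, sketched from the list above; the enum member and metadata fields are assumptions standing in for the real definitions in `llm.py`:

```python
from dataclasses import dataclass
from enum import Enum

@dataclass
class ModelMetadata:  # stand-in for the real metadata class
    provider: str
    context_window: int
    max_output_tokens: int

class LlmModel(str, Enum):
    # ...existing models...
    CLAUDE_OPUS_4_6 = "claude-opus-4-6"

MODEL_METADATA = {
    LlmModel.CLAUDE_OPUS_4_6: ModelMetadata(
        provider="anthropic",
        context_window=200_000,   # 1M available behind the beta flag
        max_output_tokens=128_000,
    ),
}
```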
## Model Details
From [Anthropic's
docs](https://docs.anthropic.com/en/docs/about-claude/models):
- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).
## Changes
### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for
encoding
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
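The encoding itself is just the stdlib call named above; a quick round-trip shows it is the exact inverse of what `TextDecoderBlock` does:

```python
import codecs

text = "line one\nline two\tcafé"
encoded = codecs.encode(text, "unicode_escape").decode("utf-8")
print(encoded)  # line one\nline two\tcaf\xe9
# Decoding (what TextDecoderBlock does) restores the original:
assert codecs.decode(encoded, "unicode_escape") == text
```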
### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)
### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines
## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python
scripts/generate_block_docs.py --check` which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).
## Related Issue
Fixes #11111
---------
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
## Summary
When editing an agent via CoPilot's `edit_agent` tool, the code was
always creating a new `LibraryAgent` entry instead of updating the
existing one to point to the new graph version. This caused duplicate
agents to appear in the user's library.
## Changes
In `save_agent_to_library()`:
- When `is_update=True`, now checks if there's an existing library agent
for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the
existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found
(e.g., if editing a graph that wasn't added to library yet)
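A sketch of the resulting control flow, assuming the helper signatures roughly match their names; only the branch structure is taken from this PR:

```python
async def save_agent_to_library(user_id: str, graph, is_update: bool = False):
    if is_update:
        existing = await get_library_agent_by_graph_id(user_id, graph.id)
        if existing:
            # Re-point the existing library entry at the new graph
            # version instead of creating a duplicate entry.
            return await update_agent_version_in_library(
                user_id, graph.id, graph.version
            )
    # Fallback: graph was never added to the library, so create an entry.
    return await create_library_agent(user_id, graph)
```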
## Testing
- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent
## Related
Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Summary
- Add asymptotic progress bar that appears during long-running chat
tasks
- Progress bar shows after 10 seconds with "Working on it..." label and
percentage
- Uses half-life formula: ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc.
- Creates the classic "game loading bar" effect that never reaches 100%
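The math, as a hedged Python sketch (the real implementation is frontend code; the 30-second half-life is inferred from the timing values above):

```python
HALF_LIFE_S = 30.0  # assumed from the numbers above

def progress(elapsed_s: float) -> float:
    """Asymptotic progress: 1 - 2^(-t/half_life), never reaching 100%."""
    return 1.0 - 2.0 ** (-elapsed_s / HALF_LIFE_S)

assert progress(30) == 0.5
assert progress(60) == 0.75
assert progress(90) == 0.875
```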
https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f
## Test plan
- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes
---------
Co-authored-by: Otto <otto@agpt.co>
## Summary
- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
- **Run with example values**: Sends chat message asking AI to run with
placeholders
- **Run with my inputs**: Opens RunAgentModal for custom input
configuration
- After run/schedule, automatically send chat message with execution
details for AI monitoring
https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161
## Test plan
- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is
sent
- [x] Fill inputs and schedule - verify chat message with schedule
details is sent
---------
Co-authored-by: Otto <otto@agpt.co>
When users search for agents, guide them toward creating custom agents
if no results are found or after showing results. This improves user
engagement by offering a clear next step.
### Changes 🏗️
- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on
their needs
- Applied to both "no results found" and "agents found" scenarios
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Search for agents in marketplace with matching results
- [x] Search for agents in marketplace with no results
- [x] Search for agents in library with matching results
- [x] Search for agents in library with no results
- [x] Verify CTA message appears in all cases
---------
Co-authored-by: Otto <otto@agpt.co>
In non-production environments, the chat service now fetches prompts
with the `latest` label instead of the default production-labeled
prompt. This makes it easier to test and iterate on prompt changes in
dev/staging without needing to promote them to production first.
### Changes 🏗️
- Updated `_get_system_prompt_template()` in chat service to pass
`label="latest"` when `app_env` is not `PRODUCTION`
- Production environments continue using the default behavior
(production-labeled prompts)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that in non-production environments, prompts with
`latest` label are fetched
- [x] Verified that production environments still use the default
(production) labeled prompts
Co-authored-by: Otto <otto@agpt.co>
## Summary
Fixes [SECRT-1889](https://linear.app/autogpt/issue/SECRT-1889): The
YouTube transcription block was yielding both `video_id` and `error`
when the transcript fetch failed.
## Problem
The block yielded `video_id` immediately upon extracting it from the
URL, before attempting to fetch the transcript. If the transcript fetch
failed, both outputs were present.
```python
# Before
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id # ← Yielded before transcript attempt
transcript = self.get_transcript(video_id, credentials) # ← Could fail here
```
## Solution
Wrap the entire operation in try/except and only yield outputs after all
operations succeed:
```python
# After
try:
video_id = self.extract_video_id(input_data.youtube_url)
transcript = self.get_transcript(video_id, credentials)
transcript_text = self.format_transcript(transcript=transcript)
# Only yield after all operations succeed
yield "video_id", video_id
yield "transcript", transcript_text
except Exception as e:
yield "error", str(e)
```
This follows the established pattern in other blocks (e.g.,
`ai_image_generator_block.py`).
## Testing
- All 10 unit tests pass (`test/blocks/test_youtube.py`)
- Lint/format checks pass
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
### Changes 🏗️
Fixes **AUTOGPT-SERVER-7JA** (123 events since Jan 27, 2026).
#### Problem
`StreamHeartbeat` was added to keep SSE connections alive during
long-running tool executions (yielded every 15s while waiting). However,
the main `stream_chat_completion` handler's `elif` chain didn't have a
case for it:
```
StreamTextStart → ✅ handled
StreamTextDelta → ✅ handled
StreamTextEnd → ✅ handled
StreamToolInputStart → ✅ handled
StreamToolInputAvailable → ✅ handled
StreamToolOutputAvailable → ✅ handled
StreamFinish → ✅ handled
StreamError → ✅ handled
StreamUsage → ✅ handled
StreamHeartbeat → ❌ fell through to 'Unknown chunk type' error
```
This meant every heartbeat during tool execution generated a Sentry
error instead of keeping the connection alive.
#### Fix
Add `StreamHeartbeat` to the `elif` chain and yield it through. The
route handler already calls `to_sse()` on all yielded chunks, and
`StreamHeartbeat.to_sse()` correctly returns `: heartbeat\n\n` (SSE
comment format, ignored by clients but keeps proxies/load balancers
happy).
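Sketched shape of the added case (the chunk types are from the list above; the surrounding handler structure is an assumption):

```python
async def stream_chat_completion(chunks):
    async for chunk in chunks:
        if isinstance(chunk, (StreamTextStart, StreamTextDelta, StreamTextEnd)):
            yield chunk  # ...other handled types elided...
        elif isinstance(chunk, StreamHeartbeat):
            # New case: pass the heartbeat through; to_sse() renders it
            # as ": heartbeat\n\n", an SSE comment that clients ignore.
            yield chunk
        else:
            raise ValueError(f"Unknown chunk type: {type(chunk).__name__}")
```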
**1 file changed, 3 insertions.**
## Summary
Fixes the flaky `test_block_credit_reset` test that was failing on
multiple PRs with `assert 0 == 1000`.
## Root Cause
The test calls `disable_test_user_transactions()` which sets `updatedAt`
to 35 days ago from the **actual current time**. It then mocks
`time_now` to January 1st.
**The bug**: If the test runs in early February, 35 days ago is January
— the **same month** as the mocked `time_now`. The credit refill logic
only triggers when the balance snapshot is from a *different* month, so
no refill happens and the balance stays at 0.
## Fix
After calling `disable_test_user_transactions()`, explicitly set
`updatedAt` to December of the previous year. This ensures it's always
in a different month than the mocked `month1` (January), regardless of
when the test runs.
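A sketch of the deterministic date, assuming the mocked `time_now` is January 1st; how the field is actually written back is illustrative:

```python
from datetime import datetime, timezone

def stale_snapshot_date(mocked_now: datetime) -> datetime:
    # December of the previous year is always a different month than the
    # mocked January, no matter when the test actually runs.
    return datetime(mocked_now.year - 1, 12, 15, tzinfo=timezone.utc)

month1 = datetime(2026, 1, 1, tzinfo=timezone.utc)
assert stale_snapshot_date(month1).month != month1.month
```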
## Testing
CI will verify the fix.
## Summary
Implements [SECRT-1880](https://linear.app/autogpt/issue/SECRT-1880) -
Improve Linear Search Block
## Changes
### Models (`models.py`)
- Added `State` model with `id`, `name`, and `type` fields for workflow
state information
- Added `state: State | None` field to `Issue` model
### API Client (`_api.py`)
- Updated `try_search_issues()` to:
- Add `max_results` parameter (default 10, was ~50) to reduce token
usage
- Add `team_id` parameter for team filtering
- Return `createdAt`, `state`, `project`, and `assignee` fields in
results
- Fixed `try_get_team_by_name()` to return descriptive error message
when team not found instead of crashing with `IndexError`
### Block (`issues.py`)
- Added `max_results` input parameter (1-100, default 10)
- Added `team_name` input parameter for optional team filtering
- Added `error` output field for graceful error handling
- Added categories (`PRODUCTIVITY`, `ISSUE_TRACKING`)
- Updated test fixtures to include new fields
## Breaking Changes
| Change | Before | After | Mitigation |
|--------|--------|-------|------------|
| Default result count | ~50 | 10 | Users can set `max_results` up to 100 if needed |
## Non-Breaking Changes
- `state` field added to `Issue` (optional, defaults to `None`)
- `max_results` param added (has default value)
- `team_name` param added (optional, defaults to `None`)
- `error` output added (follows established pattern from GitHub blocks)
## Testing
- [x] Format/lint checks pass
- [x] Unit test fixtures updated
Resolves SECRT-1880
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
- Fixes [SECRT-1851: \[Copilot\] `run_agent` tool doesn't filter
host-scoped credentials](https://linear.app/autogpt/issue/SECRT-1851)
- Follow-up to #11881
### Changes 🏗️
- Filter host-scoped credentials for `run_agent` tool
- Tighten validation on host input field in `HostScopedCredentialsModal`
- Use netloc (w/ port) rather than just hostname (w/o port) as host
scope
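For illustration, the stdlib shows the difference (sketch only; the actual validation code may differ). Scoping by netloc means credentials bound to one port no longer match a different port on the same host:

```python
from urllib.parse import urlparse

u = urlparse("https://api.example.com:8443/v1/items")
print(u.hostname)  # api.example.com       (port dropped)
print(u.netloc)    # api.example.com:8443  (port kept)
```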
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create graph that requires host-scoped credentials to work
- Create host-scoped credentials with a *different* host
- Try to have Copilot run the graph
- [x] -> no matching credentials available
- Create new credentials
- [x] -> works
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Summary
Update the CoPilot homepage to shift from "what do you want to
automate?" to "tell me about your problems." This lowers the barrier to
engagement by letting users describe their work frustrations instead of
requiring them to identify automations themselves.
## Changes
| Element | Before | After |
|---------|--------|-------|
| Headline | "What do you want to automate?" | "Tell me about your work — I'll find what to automate." |
| Placeholder | "You can search or just ask - e.g. 'create a blog post outline'" | "What's your role and what eats up most of your day? e.g. 'I'm a real estate agent and I hate...'" |
| Button 1 | "Show me what I can automate" | "I don't know where to start, just ask me stuff" |
| Button 2 | "Design a custom workflow" | "I do the same thing every week and it's killing me" |
| Button 3 | "Help me with content creation" | "Help me find where I'm wasting my time" |
| Container | `max-w-2xl` | `max-w-3xl` |
> **Note on container width:** The `max-w-2xl` → `max-w-3xl` change is
just to keep the longer headline on one line. This works but may not be
the ideal solution — @lluis-xai should advise on the proper approach.
## Why This Matters
The current UX assumes users know what they want to automate. In
reality, most users know what frustrates them but can't identify
automations. The current screen blocks Otto from starting the discovery
conversation that leads to useful recommendations.
## Files Changed
- `autogpt_platform/frontend/src/app/(platform)/copilot/page.tsx` —
headline, placeholder, container width
- `autogpt_platform/frontend/src/app/(platform)/copilot/helpers.ts` —
quick action button text
Resolves: [SECRT-1876](https://linear.app/autogpt/issue/SECRT-1876)
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
## Summary
Fixes a reflected cross-site scripting (XSS) vulnerability in the OAuth
callback route.
**Security Issue:**
https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/202
### Vulnerability
The OAuth callback route at
`frontend/src/app/(platform)/auth/integrations/oauth_callback/route.ts`
was writing user-controlled data directly into an HTML response without
proper sanitization. This allowed potential attackers to inject
malicious scripts via OAuth callback parameters.
### Fix
Added a `safeJsonStringify()` function that escapes characters that
could break out of the script context:
- `<` → `\u003c`
- `>` → `\u003e`
- `&` → `\u0026`
This prevents any user-provided values from being interpreted as
HTML/script content when embedded in the response.
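A Python rendering of the same idea (the shipped helper is TypeScript; this sketch just shows why the output stays valid JSON while becoming inert in an HTML context):

```python
import json

def safe_json_stringify(value) -> str:
    # The escaped output is still valid JSON, but can no longer terminate
    # a <script> block or introduce new tags when embedded in HTML.
    return (
        json.dumps(value)
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
        .replace("&", "\\u0026")
    )

payload = safe_json_stringify({"state": "</script><script>alert(1)</script>"})
assert json.loads(payload)["state"] == "</script><script>alert(1)</script>"
```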
### References
- [OWASP XSS Prevention Cheat
Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
- [CWE-79: Improper Neutralization of Input During Web Page
Generation](https://cwe.mitre.org/data/definitions/79.html)
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the OAuth callback still functions correctly
- [x] Confirmed special characters in OAuth responses are properly
escaped
### Changes 🏗️
- Disable auto-opening Wallet for first time user and on credit increase
- Remove no longer needed `lastSeenCredits` state and storage
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Wallet doesn't open automatically
Fixes GHSA-rc89-6g7g-v5v7 / CVE-2026-22038
The `logger.info()` calls were explicitly logging API keys via
`get_secret_value()`, exposing credentials in plaintext logs.
Changes:
- Replace info-level credential logging with debug-level provider logging
- Remove all explicit secret value logging from observe/act/extract blocks
Co-authored-by: Otto <otto@agpt.co>
## Changes 🏗️
Adds Redis-based SSE reconnection support for long-running CoPilot
operations (like Agent Generator), enabling clients to reconnect and
resume receiving updates after disconnection.
### What this does:
- **Stream Registry** - Redis-backed task tracking with message
persistence via Redis Streams
- **SSE Reconnection** - Clients can reconnect to active tasks using
`task_id` and `last_message_id`
- **Duplicate Message Fix** - Filters out in-progress assistant messages
from session response when active stream exists
- **Completion Consumer** - Handles background task completion
notifications via Redis Streams
### Architecture:
```
1. User sends message → Backend creates task in Redis
2. SSE chunks written to Redis Stream for persistence
3. Client receives chunks via SSE subscription
4. If client disconnects → Task continues in background
5. Client reconnects → GET /sessions/{id} returns active_stream info
6. Client subscribes to /tasks/{task_id}/stream with last_message_id
7. Missed messages replayed from Redis Stream
```
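A hedged sketch of the replay step in redis-py asyncio, assuming one Redis Stream per task; the key format and field layout are assumptions:

```python
from redis import asyncio as aioredis

async def replay_missed(task_id: str, last_message_id: str = "0-0"):
    r = aioredis.Redis()
    # XREAD returns entries strictly after last_message_id, so the
    # sentinel "0-0" replays the task's stream from the beginning.
    entries = await r.xread({f"chat:stream:{task_id}": last_message_id})
    for _stream, messages in entries:
        for message_id, fields in messages:
            yield message_id, fields
```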
### Key endpoints:
- `GET /sessions/{session_id}` - Returns `active_stream` info if task is
running
- `GET /tasks/{task_id}/stream?last_message_id=X` - SSE endpoint for
reconnection
- `GET /tasks/{task_id}` - Get task status
- `POST /operations/{op_id}/complete` - Webhook for external service
completion
### Duplicate message fix:
When `GET /sessions/{id}` detects an active stream:
1. Filters out the in-progress assistant message from response
2. Returns `last_message_id="0-0"` so client replays stream from
beginning
3. Client receives complete response only through SSE (single source of
truth)
### Frontend changes:
- Task persistence in localStorage for cross-tab reconnection
- Stream event dispatcher handles reconnection flow
- Deduplication logic prevents duplicate messages
### Testing:
- Manual testing of reconnection scenarios
- Verified duplicate message fix works correctly
## Related
- Resolves SSE timeout issues for Agent Generator
- Fixes duplicate message bug on reconnection
## Summary
Adds a new copilot tool that allows users to customize
marketplace/template agents using natural language before adding them to
their library.
This exposes the Agent Generator's `/api/template-modification` endpoint
to the copilot, which was previously not available.
## Changes
- **service.py**: Add `customize_template_external` to call Agent
Generator's template modification endpoint
- **core.py**:
- Add `customize_template` wrapper function
- Extract `graph_to_json` as a reusable function (was previously inline
in `get_agent_as_json`)
- **customize_agent.py**: New tool that:
- Takes marketplace agent ID (format: `creator/slug`)
- Fetches template from store via `store_db.get_agent()`
- Calls Agent Generator for customization
- Handles clarifying questions from the generator
- Saves customized agent to user's library
- **`__init__.py`**: Register the tool in `TOOL_REGISTRY` for
auto-discovery
## Usage Flow
1. User searches marketplace: *"Find me a newsletter agent"*
2. Copilot calls `find_agent` → returns `autogpt/newsletter-writer`
3. User: *"Customize that agent to post to Discord instead of email"*
4. Copilot calls:
```
customize_agent(
agent_id="autogpt/newsletter-writer",
modifications="Post to Discord instead of sending email"
)
```
5. Agent Generator may ask clarifying questions (e.g., "What Discord
channel?")
6. Customized agent is saved to user's library
## Test plan
- [x] Verified tool imports correctly
- [x] Verified tool is registered in `TOOL_REGISTRY`
- [x] Verified OpenAI function schema is valid
- [x] Ran existing tests (`pytest backend/api/features/chat/tools/`) -
all pass
- [x] Type checker (`pyright`) passes with 0 errors
- [ ] Manual testing with copilot (requires Agent Generator service)
## Summary
Adds retry logic and graceful error handling to `initialize_blocks()` to
prevent transient DB errors from crashing server startup.
## Problem
When a transient database error occurs during block initialization
(e.g., Prisma P1017 "Server has closed the connection"), the entire
server fails to start. This is overly aggressive since:
1. Blocks are already registered in memory
2. The DB sync is primarily for tracking/schema storage
3. One flaky connection shouldn't prevent the server from starting
**Triggered by:** [Sentry
AUTOGPT-SERVER-7PW](https://significant-gravitas.sentry.io/issues/7238733543/)
## Solution
- Add retry decorator (3 attempts with exponential backoff) for DB
operations
- On failure after retries, log a warning and continue to the next block
- Blocks remain available in memory even if DB sync fails
- Log summary of any failed blocks at the end
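A sketch of the retry-and-continue behavior, using tenacity for illustration (the PR's actual decorator may be implemented differently):

```python
import logging
from tenacity import retry, stop_after_attempt, wait_exponential

logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3), wait=wait_exponential(max=10))
async def sync_block_to_db(block) -> None:
    ...  # upsert the block row via Prisma

async def initialize_blocks(blocks) -> None:
    failed = []
    for block in blocks:
        try:
            await sync_block_to_db(block)
        except Exception as exc:
            failed.append((block, exc))  # block stays usable in memory
    if failed:
        logger.warning("%d block(s) failed DB sync", len(failed))
```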
## Changes
- `autogpt_platform/backend/backend/data/block.py`: Wrap block DB sync
in retry logic with graceful fallback
## Testing
- Existing block initialization behavior unchanged on success
- On transient DB errors: retries up to 3 times, then continues with
warning
## Summary
Routes `increment_onboarding_runs` and `cleanup_expired_oauth_tokens`
through the DatabaseManager RPC client instead of calling Prisma
directly.
## Problem
The Scheduler service never connects its Prisma client. While
`add_graph_execution()` in `utils.py` has a fallback that routes through
DatabaseManager when Prisma isn't connected, subsequent calls in the
scheduler were hitting Prisma directly:
- `increment_onboarding_runs()` after successful graph execution
- `cleanup_expired_oauth_tokens()` in the scheduled job
These threw `ClientNotConnectedError`; the errors were caught by generic
exception handlers but still spammed Sentry (~696K events since
December, per the original analysis in #11926).
## Solution
Follow the same pattern as `utils.py`:
1. Add `cleanup_expired_oauth_tokens` to `DatabaseManager` and
`DatabaseManagerAsyncClient`
2. Update scheduler to use `get_database_manager_async_client()` for
both calls
## Changes
- **database.py**: Import and expose `cleanup_expired_oauth_tokens` in
both manager classes
- **scheduler.py**: Use `db.increment_onboarding_runs()` and
`db.cleanup_expired_oauth_tokens()` via the async client
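The call pattern, sketched with names from the PR text (where `get_database_manager_async_client()` comes from is an assumption):

```python
async def on_graph_execution_success(user_id: str) -> None:
    db = get_database_manager_async_client()
    # RPC to the DatabaseManager service; a direct Prisma call here would
    # raise ClientNotConnectedError, since the scheduler never connects it.
    await db.increment_onboarding_runs(user_id)
```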
## Impact
- Eliminates Sentry error spam from scheduler
- Onboarding run counters now actually increment for scheduled
executions
- OAuth token cleanup now actually runs
## Testing
Deploy to staging with scheduled graphs and verify:
1. No more `ClientNotConnectedError` in scheduler logs
2. `UserOnboarding.agentRuns` increments on scheduled runs
3. Expired OAuth tokens get cleaned up
Refs: #11926 (original fix that was closed)
Fixing
https://github.com/Significant-Gravitas/AutoGPT/pull/11297#discussion_r2496833421
### Changes 🏗️
1. `event_bus.py` - Added close method to `AsyncRedisEventBus`
   - Added `__init__` method to track the `_pubsub` instance attribute
   - Added `async def close()` method that closes the PubSub connection safely
   - Modified `listen_events()` to store the pubsub reference in `self._pubsub`
2. `ws_api.py` - Added cleanup in `event_broadcaster`
   - Wrapped the worker coroutines in a `try/finally` block
   - The `finally` block calls `close()` on both event buses to ensure cleanup happens on any exit (including exceptions before retry)
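A minimal sketch of the cleanup pattern; attribute and method names follow the list above, the rest is illustrative:

```python
class AsyncRedisEventBus:
    def __init__(self):
        self._pubsub = None

    async def listen_events(self, channel: str):
        self._pubsub = ...  # subscribe and retain the reference for cleanup
        ...

    async def close(self) -> None:
        # Safe on any exit path, including before the first subscribe.
        if self._pubsub is not None:
            await self._pubsub.close()
            self._pubsub = None
```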
## Summary
Fixes CoPilot becoming unresponsive after long-running tools complete,
and refactors context window management into a reusable function.
## Problem
After `create_agent` completes, `_generate_llm_continuation()` was
sending ALL messages to OpenRouter without any context compaction. When
conversations exceeded ~50 messages, OpenRouter rejected requests with
`provider_name: 'unknown'` (no provider would accept).
**Evidence:** Langfuse session
[44fbb803-092e-4ebd-b288-852959f4faf5](https://cloud.langfuse.com/project/cmk5qhf210003ad079sd8utjt/sessions/44fbb803-092e-4ebd-b288-852959f4faf5)
showed:
- Successful calls: 32-50 messages, known providers
- Failed calls: 52+ messages, `provider: unknown`, `completion: null`
## Changes
### Refactor: Extract reusable `_manage_context_window()`
- Counts tokens and checks against 120k threshold
- Summarizes old messages while keeping recent 15
- Ensures tool_call/tool_response pairs stay intact
- Progressive truncation if still over limit
- Returns `ContextWindowResult` dataclass with messages, token count,
compaction status, and errors
- Helper `_messages_to_dicts()` reduces code duplication
### Fix: Update `_generate_llm_continuation()`
- Now calls `_manage_context_window()` before making LLM calls
- Adds retry logic with exponential backoff (matching
`_stream_chat_chunks` behavior)
### Cleanup: Update `_stream_chat_chunks()`
- Replaced inline context management with call to
`_manage_context_window()`
- Eliminates code duplication between the two functions
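The helper's contract, sketched from the description above; the dataclass fields are named in the text, everything else (including `count_tokens`) is an assumption:

```python
from dataclasses import dataclass, field

TOKEN_THRESHOLD = 120_000
KEEP_RECENT = 15

@dataclass
class ContextWindowResult:
    messages: list[dict]
    token_count: int
    compacted: bool
    errors: list[str] = field(default_factory=list)

def _manage_context_window(messages: list[dict]) -> ContextWindowResult:
    tokens = count_tokens(messages)  # assumed token-counting helper
    if tokens <= TOKEN_THRESHOLD:
        return ContextWindowResult(messages, tokens, compacted=False)
    # Summarize everything older than the last KEEP_RECENT messages,
    # keeping tool_call/tool_response pairs intact, then re-count and
    # progressively truncate if still over the limit.
    ...
```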
## Testing
- Syntax check: ✅
- Ruff lint: ✅
- Import verification: ✅
## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] My changes generate no new warnings
- [x] I have checked that my changes do not break existing functionality
---------
Co-authored-by: Otto <otto@agpt.co>
## Summary
Updates the Agent Generator client to enrich the description with
context before calling, instead of sending `user_instruction` as a
separate parameter.
## Context
Companion PR to Significant-Gravitas/AutoGPT-Agent-Generator#105 which
removes unused parameters from the decompose API.
## Changes
- Enrich `description` with `context` (e.g., clarifying question
answers) before sending
- Remove `user_instruction` from request payload
## How it works
Both input boxes and chat box work the same way - the frontend
constructs a formatted message with answers and sends it as a user
message. The backend then enriches the description with this context
before calling the external Agent Generator service.
## Summary
Fixes a bug where the fallback path in context compaction passes
`recent_messages` (already sliced) instead of `messages_dict` (full
conversation) to `_ensure_tool_pairs_intact`.
This caused the function to fail to find assistant messages that exist
in the original conversation but were outside the sliced window,
resulting in orphan tool_results being sent to Anthropic and rejected
with:
```
messages.66.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_019bi1PDvEn7o5ByAxcS3VdA
```
## Changes
- Pass `messages_dict` and `slice_start` (relative to full conversation)
instead of `recent_messages` and `reduced_slice_start` (relative to
already-sliced list)
## Testing
This is a targeted fix for the fallback path. The bug only manifests
when:
1. Token count > 120k (triggers compaction)
2. Initial compaction + summary still exceeds limit (triggers fallback)
3. A tool_result's corresponding assistant is in `messages_dict` but not
in `recent_messages`
## Related
- Fixes SECRT-1861
- Related: SECRT-1839 (original fix that missed this code path)
### Changes 🏗️
- Add a unit test to verify webhook ingress URL generation matches the
FastAPI route.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] poetry run pytest backend/integrations/webhooks/utils_test.py
--confcutdir=backend/integrations/webhooks
#### For configuration changes:
- [x] .env.default is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under Changes)
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Problem
When marketplace agents are included in the `library_agents` payload
sent to the Agent Generator service, they were missing required fields
(`graph_id`, `graph_version`, `input_schema`, `output_schema`). This
caused Pydantic validation to fail with HTTP 422 Unprocessable Entity.
**Root cause:** The `MarketplaceAgentSummary` TypedDict had a different
shape than `LibraryAgentInfo` expected by the Agent Generator:
- Agent Generator expects: `graph_id`, `graph_version`, `name`,
`description`, `input_schema`, `output_schema`
- MarketplaceAgentSummary had: `name`, `description`, `sub_heading`,
`creator`, `is_marketplace_agent`
## Solution
1. **Add `agent_graph_id` to `StoreAgent` model** - The field was
already in the database view but not exposed
2. **Include `agentGraphId` in hybrid search SQL query** - Carry the
field through the search CTEs
3. **Update `search_marketplace_agents_for_generation()`** - Now fetches
full graph schemas using `get_graph()` and returns `LibraryAgentSummary`
(same type as library agents)
4. **Update deduplication logic** - Use `graph_id` instead of name for
more accurate deduplication
## Changes
- `backend/api/features/store/model.py`: Add optional `agent_graph_id`
field to `StoreAgent`
- `backend/api/features/store/hybrid_search.py`: Include `agentGraphId`
in SQL query columns
- `backend/api/features/store/db.py`: Map `agentGraphId` when creating
`StoreAgent` objects
- `backend/api/features/chat/tools/agent_generator/core.py`: Update
`search_marketplace_agents_for_generation()` to fetch and include full
graph schemas
## Testing
- [ ] Agent creation on dev with marketplace agents in context
- [ ] Verify no 422 errors from Agent Generator
- [ ] Verify marketplace agents can be used as sub-agents
Fixes: SECRT-1817
---------
Co-authored-by: majdyz <majdyz@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Summary
Addresses #11785 - users were encountering `openai_api_key_credentials`
errors when following the create-basic-agent guide because it didn't
mention the need to configure API credentials before using AI blocks.
## Changes
Added a **Prerequisites** section to
`docs/platform/create-basic-agent.md` explaining:
- **Cloud users:** Go to Profile → Integrations to add API keys
- **Self-hosted (Docker):** Add keys to `autogpt_platform/backend/.env`
and restart services
Also added a note that the Calculator example doesn't need credentials,
making it a good first test.
## Related
- Issue: #11785
## Context
When users ask the chat to create agents, they may want to compose
workflows that reuse their existing agents as sub-agents. For this to
work, the Agent Generator service needs to know what agents the user has
available.
**Challenge:** Users can have large libraries with many agents. Fetching
all of them would be slow and provide too much context to the LLM.
## Solution
This PR implements **search-based library agent fetching** with a
**two-phase search** strategy:
1. **Phase 1 (Initial Search):** When the user describes their goal, we
search for relevant library agents using the goal as the search query
2. **Phase 2 (Step-Based Enrichment):** After the goal is decomposed
into steps, we extract keywords from those steps and search for
additional relevant agents
This ensures we find agents that are relevant to both the high-level
goal AND the specific steps identified.
### Example Flow
```
User goal: "Create an agent that fetches weather and sends a summary email"
Phase 1: Search for "weather email summary" → finds "Weather Fetcher" agent
Phase 2: After decomposition identifies steps like "send email notification"
→ searches "send email notification" → finds "Gmail Sender" agent
```
### Changes
**Library Agent Fetching:**
- `get_library_agents_for_generation()` - Search-based fetching from
user's library
- `search_marketplace_agents_for_generation()` - Search public
marketplace
- `get_all_relevant_agents_for_generation()` - Combines both with
deduplication
**Two-Phase Search:**
- `extract_search_terms_from_steps()` - Extracts keywords from
decomposed steps
- `enrich_library_agents_from_steps()` - Searches for additional agents
based on steps
- Integrated into `create_agent.py` as "Step 1.5" after goal
decomposition
**Type Safety:**
- Added `TypedDict` definitions: `LibraryAgentSummary`,
`MarketplaceAgentSummary`, `DecompositionStep`, `DecompositionResult`
### Design Decisions
- **Search-based, not fetch-all:** Scalable for large libraries
- **Library agents prioritized:** They have full schemas; marketplace
agents have basic info only
- **Deduplication by name and graph_id:** Prevents duplicates across
searches
- **Graceful degradation:** Failures don't block agent generation
- **Limited to 3 search terms:** Avoids excessive API calls during
enrichment
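A sketch of the combine-and-dedupe step implied by these design notes; field names follow the TypedDicts above, the exact logic is assumed:

```python
def dedupe_agents(library: list[dict], marketplace: list[dict]) -> list[dict]:
    seen: set[str] = set()
    combined: list[dict] = []
    # Library agents first: they carry full schemas and take priority.
    for agent in library + marketplace:
        key = agent.get("graph_id") or agent["name"].lower()
        if key not in seen:
            seen.add(key)
            combined.append(agent)
    return combined
```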
## Related PR
- Agent Generator:
https://github.com/Significant-Gravitas/AutoGPT-Agent-Generator/pull/103
## Test plan
- [x] `test_library_agents.py` - 19 tests covering all new functions
- [x] `test_service.py` - 4 tests for library_agents passthrough
- [ ] Integration test: Create agent with library sub-agent composition
### Changes 🏗️
Further improvements to LaunchDarkly initialisation and homepage
redirect...
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally with the flag disabled/enabled, and the
redirects work
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <0ubbe@users.noreply.github.com>
## Changes 🏗️
Removes the `key` prop from `LDProvider` that was causing full remounts
when user context changed.
### The Problem
The `key={context.key}` prop was forcing React to unmount and remount
the entire LDProvider when switching from anonymous → logged in user:
```
1. Page loads, user loading → key="anonymous" → LD mounts → flags available ✅
2. User finishes loading → key="user-123" → React sees key changed
3. LDProvider UNMOUNTS → flags become undefined ❌
4. New LDProvider MOUNTS → initializes again → flags available ✅
```
This caused the flag values to cycle: `undefined → value → undefined →
value`
### The Fix
Remove the `key` prop. The LDProvider handles context changes internally
via the `context` prop, which triggers `identify()` without remounting
the provider.
## Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Flag values don't flicker on page load
- [ ] Flag values update correctly when logging in/out
- [ ] No redirect race conditions
Related: SECRT-1845
[SECRT-1842: run_agent tool does not correctly use credentials - agents
fail with insufficient auth
scopes](https://linear.app/autogpt/issue/SECRT-1842)
### Changes 🏗️
- Include scopes in credentials filter in
`backend.api.features.chat.tools.utils.match_user_credentials_to_graph`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- CI must pass
- It's broken now and the fix is simple, so we'll test it in the dev
deployment
### Changes 🏗️
The page used a client-side redirect (`useEffect` + `router.replace`)
which only works after JavaScript loads and hydrates. On deployed sites,
if there's any delay or failure in JS execution, users see an
empty/black page because the component returns null.
**Fix:** Converted to a server-side redirect using `redirect()` from
`next/navigation`. The page is now a server component, so the redirect
is issued before any HTML reaches the browser and no longer depends on
client-side JavaScript executing.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested locally but will see it fully working once deployed
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of
Anthropic's API retirement, with comprehensive migration and defensive
error handling.
## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`)
on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from
the platform and migrates existing users to prevent service
interruptions.
## Changes
### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to
`CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()`
for deprecated model values
### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides
### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove
deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to
remove deprecated model and add Claude 4.5 Sonnet
## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference
this model in their provider mappings. These are auto-generated and
should be regenerated separately.
## Testing
- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives helpful error message instead of
KeyError
## Changes 🏗️
Fixes the hard refresh redirect bug (SECRT-1845) by removing the double
feature flag check.
### Before (buggy)
```
/ → checks flag → /copilot or /library
/copilot (layout) → checks flag → /library if OFF
```
On hard refresh, two sequential LD checks created a race condition
window.
### After (fixed)
```
/ → always redirects to /copilot
/copilot (layout) → single flag check via FeatureFlagPage
```
Single check point = no double-check race condition.
## Root Cause
As identified by @0ubbe: the root page and copilot layout were both
checking the feature flag. On hard refresh with network latency, the
second check could fire before LaunchDarkly fully initialized, causing
users to be bounced to `/library`.
## Test Plan
- [ ] Hard refresh on `/` → should go to `/copilot` (flag ON)
- [ ] Hard refresh on `/copilot` → should stay on `/copilot` (flag ON)
- [ ] With flag OFF → should redirect to `/library`
- [ ] Normal navigation still works
Fixes: SECRT-1845
cc @0ubbe
## Summary
Refocuses the chat input textarea after voice transcription finishes,
allowing users to immediately use `spacebar+enter` to record and send
their prompt.
## Changes
- Added `inputId` parameter to `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow
## Testing
1. Click mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. User can now press Enter to send or spacebar to record again
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
## Summary
Fixes flaky E2E marketplace and library tests that were causing PRs to
be removed from the merge queue.
## Root Cause
1. **Test data was probabilistic** - `e2e_test_data.py` used random
chances (40% approve, then 20-50% feature), which could result in 0
featured agents
2. **Library pagination threshold wrong** - Checked `>= 10`, but page
size is 20
3. **Fixed timeouts** - Used `waitForTimeout(2000)` /
`waitForTimeout(10000)` instead of proper waits
## Changes
### Backend (`e2e_test_data.py`)
- Add guaranteed minimums: 8 featured agents, 5 featured creators, 10
top agents
- First N submissions are deterministically approved and featured
- Increase agents per user from 15 → 25 (for pagination with
page_size=20)
- Fix library agent creation to use constants instead of hardcoded `10`
### Frontend Tests
- `library.spec.ts`: Fix pagination threshold to `PAGE_SIZE` (20)
- `library.page.ts`: Replace 2s timeout with `networkidle` +
`waitForFunction`
- `marketplace.page.ts`: Add `networkidle` wait, 30s waits in
`getFirst*` methods
- `marketplace.spec.ts`: Replace 10s timeout with `waitForFunction`
- `marketplace-creator.spec.ts`: Add `networkidle` + element waits
## Related
- Closes SECRT-1848, SECRT-1849
- Should unblock #11841 and other PRs in merge queue
---------
Co-authored-by: Ubbe <hi@ubbe.dev>