- Defer WorkspaceManager import to inside store_media_file() to break circular import
- Remove deprecated return_content and save_to_workspace parameters (no callers)
- Make return_format a required parameter
- Update tests to use return_format
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Defer sanitize_filename import to inside _build_file_path() to break
the circular import chain: workspace_storage → file → workspace → data → blocks → file
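A minimal sketch of the deferral pattern, under the module layout the chain above implies (the import path here is illustrative):
```python
def _build_file_path(workspace_id: str, filename: str) -> str:
    # Imported inside the function, not at module level, so importing
    # workspace_storage no longer re-enters the `file` module.
    from backend.util.file import sanitize_filename  # illustrative path

    return f"{workspace_id}/{sanitize_filename(filename)}"
```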
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- routes.py: Sanitize filename in Content-Disposition header to prevent
header injection (RFC 5987 encoding for non-ASCII; see the sketch after
this list)
- http.py: Replace assert with explicit ValueError for graph_exec_id check
(asserts can be stripped with -O)
- workspace.py: Remove unused functions (get_workspace_by_id,
hard_delete_workspace_file, update_workspace_file)
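A sketch of the RFC 5987 encoding from the first item, not the exact routes.py implementation:
```python
from urllib.parse import quote

def content_disposition(filename: str) -> str:
    # ASCII fallback with header-breaking characters stripped...
    fallback = (
        filename.encode("ascii", "replace").decode("ascii")
        .replace('"', "").replace("\r", " ").replace("\n", " ")
    )
    # ...plus an RFC 5987 `filename*` parameter carrying the UTF-8 name.
    encoded = quote(filename, safe="")
    return f"attachment; filename=\"{fallback}\"; filename*=UTF-8''{encoded}"
```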
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add shutdown_workspace_storage() to properly close GCS aiohttp sessions
during application shutdown. Follows the same pattern as cloud_storage.py.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Sanitize filenames using sanitize_filename() before building paths
- Add is_relative_to() check after path resolution for defense in depth
- Replace string comparison with is_relative_to() in _parse_storage_path()
for robust path containment on case-insensitive filesystems
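A condensed sketch of the combined checks, with a stand-in for the real sanitize_filename():
```python
import re
from pathlib import Path

def _sanitize_filename(name: str) -> str:
    # Stand-in: keep only the last path component, neutralize odd chars.
    return re.sub(r"[^\w.\- ]", "_", Path(name).name)

def resolve_in_workspace(base_dir: Path, filename: str) -> Path:
    candidate = (base_dir / _sanitize_filename(filename)).resolve()
    # Defense in depth: verify containment even after sanitization.
    # Per the commit above, is_relative_to() is preferred over a raw
    # string-prefix comparison, which can mismatch on case-insensitive
    # filesystems.
    if not candidate.is_relative_to(base_dir.resolve()):
        raise ValueError(f"path escapes workspace root: {filename!r}")
    return candidate
```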
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use model_copy() instead of mutating shared ExecutionContext to prevent
race conditions when multiple nodes execute concurrently. Each node now
gets its own isolated copy with correct node_id and node_exec_id values.
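A minimal illustration with Pydantic, using only the ExecutionContext fields this log mentions (the real model has more):
```python
from pydantic import BaseModel

class ExecutionContext(BaseModel):
    graph_exec_id: str
    node_id: str = ""
    node_exec_id: str = ""
    workspace_id: str | None = None
    session_id: str | None = None

shared = ExecutionContext(graph_exec_id="exec-1")
# model_copy(update=...) returns an isolated copy, so concurrent nodes
# no longer race on one shared instance.
node_ctx = shared.model_copy(update={"node_id": "n1", "node_exec_id": "ne-1"})
assert shared.node_id == ""  # the shared context is untouched
```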
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The empty string fallback was dead code since store_media_file() validates
graph_exec_id before these lines execute. Replace with explicit asserts
for clearer failure if assumptions are ever violated.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Keep only GET /files/{file_id}/download which is used by frontend chat
to render workspace:// images. Remove 10 unused endpoints and models.py.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Both download_file and download_file_by_path now use _create_file_download_response()
to eliminate ~40 lines of duplicated download handling code.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
get_file_count() now accepts path parameter to match list_files() filtering,
fixing pagination totals when filtering by path prefix.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
get_file_count() now uses the same session scoping logic as list_files(),
ensuring consistent totals when include_all_sessions is false.
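Taken together, the two fixes amount to deriving the count from exactly the same scoping as the listing. A sketch of the shared logic, with an illustrative storage interface:
```python
def _scoped_prefix(path: str | None, session_id: str | None,
                   include_all_sessions: bool) -> str:
    # One place computes the effective prefix for both queries.
    prefix = "" if include_all_sessions else f"sessions/{session_id}/"
    return prefix + (path or "")

async def list_files(storage, *, limit: int = 50, offset: int = 0, **scope):
    return await storage.list(prefix=_scoped_prefix(**scope),
                              limit=limit, offset=offset)

async def get_file_count(storage, **scope) -> int:
    # Same prefix as list_files(), so pagination totals always match.
    return await storage.count(prefix=_scoped_prefix(**scope))
```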
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update SendAuthenticatedWebRequestBlock to use execution_context
instead of separate graph_exec_id/user_id parameters, matching
the parent class signature.
Update test_http.py to pass execution_context to all test calls.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Rename store_media_file() return_format options to make intent clear:
- "local_path" -> "for_local_processing" (ffmpeg, MoviePy, PIL)
- "data_uri" -> "for_external_api" (Replicate, OpenAI APIs)
- "workspace_ref" -> "for_block_output" (auto-adapts to context)
The "for_block_output" format now gracefully handles both contexts:
- CoPilot (has workspace): returns workspace:// reference
- Graph execution (no workspace): falls back to data URI
This prevents blocks from failing in graph execution while still
providing workspace persistence in CoPilot.
Also adds documentation to CLAUDE.md, new_blocks.md, and
block-sdk-guide.md explaining when to use each format.
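A sketch of the "for_block_output" branch; the workspace helper is illustrative:
```python
import base64

async def _format_for_block_output(content: bytes, mime: str, ctx) -> str:
    if ctx.workspace_id:
        # CoPilot: persist the file and return a stable reference.
        file_id = await save_to_workspace(ctx, content, mime)  # illustrative
        return f"workspace://{file_id}"
    # Graph execution: no workspace available, fall back to a data URI.
    return f"data:{mime};base64,{base64.b64encode(content).decode()}"
```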
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Simplify store_media_file API with a single return_format parameter:
- "local_path": Return relative path (for local processing like MoviePy)
- "data_uri": Return base64 data URI (for external APIs like Replicate)
- "workspace_ref": Save to workspace and return workspace://id (for CoPilot)
This replaces the confusing combination of return_content and save_to_workspace
parameters. The old parameters are deprecated but still work via a compatibility
layer.
Updated all blocks to use the new explicit return_format parameter:
- Local processing: return_format="local_path"
- External APIs: return_format="data_uri"
- CoPilot outputs: return_format="workspace_ref"
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When blocks need to pass content to external APIs (Replicate, Discord),
they need data URIs, not workspace references. Added `save_to_workspace`
parameter to control this:
- save_to_workspace=True (default): save to workspace, return ref
- save_to_workspace=False: don't save, return data URI for API use
Updated blocks:
- AIImageEditorBlock (Flux Kontext) - input for Replicate
- AIImageCustomizerBlock - input for Replicate
- SendDiscordFileBlock - input for Discord
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When a block reads from workspace:// and needs content for an external API
(e.g., AIImageEditorBlock sending to Replicate), return data URI instead
of workspace reference.
Logic:
- workspace:// input + return_content=True → data URI (for external APIs)
- workspace:// input + return_content=False → local path (for processing)
- URL/data URI + return_content=True → save to workspace, return ref
- URL/data URI + return_content=False → local path
Fixes AIImageEditorBlock "Does not match format 'uri'" error when input
is a workspace reference.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When blocks need to process files locally (MoviePy, ffmpeg, etc.), they call
store_media_file with return_content=False expecting a local file path.
Previously, this always returned a workspace:// reference when workspace
was available, causing errors like:
"File does not exist: /tmp/exec_file/.../workspace:/abc123"
Now the logic is:
- return_content=True: return workspace:// ref (for CoPilot output persistence)
- return_content=False: return local relative path (for file processing)
Also prevents re-saving when input is already a workspace:// reference,
avoiding unique constraint violations on the (workspaceId, path) index.
Fixes MediaDurationBlock, FileReadBlock, LoopVideoBlock, AddAudioToVideoBlock
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds workspace API endpoints to the generated OpenAPI specification:
- GET /api/workspace - Get workspace info
- GET /api/workspace/files - List workspace files
- POST /api/workspace/files - Upload file
- GET /api/workspace/files/{id} - Get file info
- GET /api/workspace/files/{id}/download - Download file
- DELETE /api/workspace/files/{id} - Delete file
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update executor to propagate workspace context:
- Pass workspace_id in execution kwargs
- Update test utilities with workspace support
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add frontend support for workspace:// image references:
- MarkdownContent: transform workspace:// URLs using generated API
- Route through /api/proxy for proper auth handling
- Add "AI cannot see this image" overlay for workspace files
- Update proxy route to handle binary file downloads
- Format block outputs with workspace refs as markdown images
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update media-generating blocks to save outputs to workspace:
- AIImageCustomizerBlock: store customized images
- AIImageGeneratorBlock: store generated images
- AIShortformVideoCreatorBlock (3 blocks): store videos
- BannerbearTextOverlayBlock: store generated images
- AIVideoGeneratorBlock (FAL): store generated videos
- AIImageEditorBlock (Flux Kontext): store edited images
- CreateTalkingAvatarVideoBlock: store avatar videos
All blocks now return workspace:// references instead of
direct URLs, enabling persistent storage and preventing
context bloat from large base64 data URIs.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update run_block.py with proper CoPilot-to-graph context mapping:
- graph_id = copilot-session-{session_id} (agent = session)
- graph_exec_id = copilot-session-{session_id} (run = session)
- graph_version = 1 (versions are 1-indexed)
- Pass workspace_id and session_id for file operations
- Each chat session is its own agent with one continuous run
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add tools for CoPilot to manage workspace files:
- list_workspace_files: list files with session scoping
- read_workspace_file: read file content or metadata
- write_workspace_file: save content to workspace
- delete_workspace_file: remove files
- Session-aware operations (default to current session folder)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add API routes for workspace file management:
- GET /api/workspace - get workspace info
- POST /api/workspace/files - upload file
- GET /api/workspace/files - list files
- GET /api/workspace/files/{id} - get file info
- GET /api/workspace/files/{id}/download - download file
- DELETE /api/workspace/files/{id} - soft delete
- Stream file content when signed URLs unavailable
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update store_media_file to use workspace when available:
- Save files to workspace instead of temp exec_file dir
- Return workspace:// references instead of base64 data URIs
- Handle workspace:// input references (read from workspace)
- Pass session_id to WorkspaceManager for session scoping
- Prevents context bloat (100KB file = ~133KB as base64)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Extend ExecutionContext with workspace fields:
- workspace_id: user's workspace for file persistence
- session_id: chat session for file isolation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add storage layer for workspace files:
- WorkspaceStorageBackend: abstract interface
- GCSWorkspaceStorage: Google Cloud Storage implementation
- LocalWorkspaceStorage: local filesystem for self-hosted
- WorkspaceManager: high-level file operations with session scoping
- Session-scoped virtual paths: /sessions/{session_id}/{filename}
- Fallback to API proxy when GCS signed URLs unavailable
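A simplified sketch of the session scoping described above (the manager is reduced to its path logic):
```python
class WorkspaceManager:
    def __init__(self, backend, workspace_id: str,
                 session_id: str | None = None):
        self.backend = backend          # GCS or local storage backend
        self.workspace_id = workspace_id
        self.session_id = session_id

    def _virtual_path(self, filename: str) -> str:
        # Files default into the current chat session's folder, giving
        # per-session isolation inside a single workspace.
        if self.session_id:
            return f"/sessions/{self.session_id}/{filename}"
        return f"/{filename}"
```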
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Config change to increase the maximum number of times an agent can run
in a chat, and the maximum number of schedules Copilot can create in
one chat.
> [!NOTE]
> Increases per-chat operational limits for Copilot.
>
> - Bumps `max_agent_runs` default from `3` to `30` in `ChatConfig`
> - Bumps `max_agent_schedules` default from `3` to `30` in `ChatConfig`
A file was committed by mistake.
### Changes 🏗️
Removes a file that should not have been committed.
> [!NOTE]
> Removes erroneous `backend/blocks/video/__init__.py`, eliminating an unintended `video` package.
>
> - Deletes a placeholder comment-only file
## Summary
Fixes context compaction breaking tool_call/tool_response pairs, causing
API validation errors.
## Problem
When context compaction slices messages with `messages[-KEEP_RECENT:]`,
a naive slice can separate an assistant message containing `tool_calls`
from its corresponding tool response messages. This causes API
validation errors like:
```
messages.0.content.1: unexpected 'tool_use_id' found in 'tool_result' blocks: orphan_12345.
Each 'tool_result' block must have a corresponding 'tool_use' block in the previous message.
```
## Solution
Added `_ensure_tool_pairs_intact()` helper function that:
1. Detects orphan tool responses in a slice (tool messages whose
`tool_call_id` has no matching assistant message)
2. Extends the slice backwards to include the missing assistant messages
3. Falls back to removing orphan tool responses if the assistant cannot
be found (edge case)
Applied this safeguard to:
- The initial `KEEP_RECENT` slice (line ~990)
- The progressive fallback slices when still over token limit (line
~1079)
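A condensed sketch of the helper, assuming OpenAI-style message dicts (the real function also implements the orphan-removal fallback):
```python
def _ensure_tool_pairs_intact(messages: list[dict], start: int) -> int:
    """Adjust `start` so no tool response in messages[start:] is orphaned."""
    start = max(start, 0)
    kept = messages[start:]
    call_ids = {tc["id"] for m in kept if m.get("role") == "assistant"
                for tc in m.get("tool_calls") or []}
    orphans = {m["tool_call_id"] for m in kept
               if m.get("role") == "tool" and m["tool_call_id"] not in call_ids}
    # Extend the slice backwards until every orphan's assistant turn is
    # inside it.
    while orphans and start > 0:
        start -= 1
        msg = messages[start]
        if msg.get("role") == "assistant":
            orphans -= {tc["id"] for tc in msg.get("tool_calls") or []}
    return start

# Usage at the KEEP_RECENT slice:
#   start = _ensure_tool_pairs_intact(messages, len(messages) - KEEP_RECENT)
#   recent = messages[start:]
```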
## Testing
- Syntax validated with `python -m py_compile`
- Logic reviewed for correctness
## Linear
Fixes SECRT-1839
---
*Debugged by Toran & Orion in #agpt Discord*
## Summary
Agent generation (`create_agent`, `edit_agent`) can take 1-5 minutes.
Previously, if the user closed their browser tab during this time:
1. The SSE connection would die
2. The tool execution would be cancelled via `CancelledError`
3. The result would be lost - even if the agent-generator service
completed successfully
This PR ensures long-running tool operations survive SSE disconnections.
### Changes 🏗️
**Backend:**
- **base.py**: Added `is_long_running` property to `BaseTool` for tools
to opt-in to background execution
- **create_agent.py / edit_agent.py**: Set `is_long_running = True`
- **models.py**: Added `OperationStartedResponse`,
`OperationPendingResponse`, `OperationInProgressResponse` types
- **service.py**: Modified `_yield_tool_call()` to:
- Check if tool is `is_long_running`
- Save "pending" message to chat history immediately
- Spawn background task that runs independently of SSE
- Return `operation_started` immediately (don't wait)
- Update chat history with result when background task completes
- Track running operations for idempotency (prevents duplicate ops on
refresh)
- **db.py**: Added `update_tool_message_content()` to update pending
messages
- **model.py**: Added `invalidate_session_cache()` to clear Redis after
background completion
**Frontend:**
- **useChatMessage.ts**: Added operation message types
- **helpers.ts**: Handle `operation_started`, `operation_pending`,
`operation_in_progress` response types
- **PendingOperationWidget**: New component to display operation status
with spinner
- **ChatMessage.tsx**: Render `PendingOperationWidget` for operation
messages
### How It Works
```
User Request → Save "pending" message → Spawn background task → Return immediately
                                              ↓
                              Task runs independently of SSE
                                              ↓
                      On completion: Update message in chat history
                                              ↓
                        User refreshes → Loads history → Sees result
```
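A simplified sketch of the pending-message path; `save_pending_message` is an illustrative stand-in, the other helpers are the ones named above:
```python
import asyncio

RUNNING: dict[str, asyncio.Task] = {}  # tracked for idempotency

async def start_long_running_tool(tool, args: dict, session_id: str) -> dict:
    # Persist a "pending" placeholder so the result has a place to land.
    message_id = await save_pending_message(session_id, tool.name)

    async def _run() -> None:
        try:
            result = await tool.execute(**args)
            await update_tool_message_content(message_id, result)
        finally:
            await invalidate_session_cache(session_id)  # refresh Redis copy
            RUNNING.pop(message_id, None)

    # The task is owned by the event loop, not the SSE generator, so a
    # client disconnect no longer cancels it.
    RUNNING[message_id] = asyncio.create_task(_run())
    return {"type": "operation_started", "message_id": message_id}
```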
### User Experience
1. User requests agent creation
2. Sees "Agent creation started. You can close this tab - check your
library in a few minutes."
3. Can close browser tab safely
4. When they return, chat shows the completed result (or error)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] pyright passes (0 errors)
- [x] TypeScript checks pass
- [x] Formatters applied
### Test Plan
1. Start agent creation in copilot
2. Close browser tab immediately after seeing "operation_started"
3. Wait 2-3 minutes
4. Reopen chat
5. Verify: Chat history shows completion message and agent appears in
library
---------
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
On the **Copilot** page:
- prevent unnecessary sidebar repaints
- show a disclaimer when switching chats in the sidebar would terminate
the current stream
- handle loading states better
- preserve in-progress streams more reliably when the connection drops
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally and test the above
## Summary
Disabled blocks (e.g., webhook blocks without `platform_base_url`
configured) were being indexed and returned in chat tool search results.
This PR ensures they are properly filtered out.
### Changes 🏗️
- **find_block.py**: Skip disabled blocks when enriching search results
- **content_handlers.py**:
- Skip disabled blocks during embedding indexing
- Update `get_stats()` to only count enabled blocks for accurate
coverage metrics
### Why
Blocks can be disabled for various reasons (missing OAuth config, no
platform URL for webhooks, etc.). These blocks shouldn't appear in
search results since users cannot use them.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified disabled blocks are filtered from search results
- [x] Verified disabled blocks are not indexed
- [x] Verified stats accurately reflect enabled block count
## Changes 🏗️
- **Fix infinite loop in copilot page**
- use Zustand selectors instead of full store object to get stable
function references
- **Centralize chat streaming logic**
- move all streaming files from `providers/chat-stream/` to
`components/contextual/Chat/` for better colocation and reusability
- **Rename `copilot-store` → `copilot-page-store`**: Clarify scope
- **Fix message duplication**
- Only replay chunks from active streams (not completed ones) since
backend already provides persisted messages in `initialMessages`
- **Auto-focus chat input**
- Focus textarea when streaming ends and input is re-enabled
- **Graceful error display**
- Render tool response errors in muted style (small text + warning icon)
instead of raw "Error: ..." text
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Navigate to copilot page - no infinite loop errors
- [x] Start a new chat, send message, verify streaming works
- [x] Navigate away and back to a completed session - no duplicate
messages
- [x] After stream completes, verify chat input receives focus
- [x] Trigger a tool error - verify it displays with muted styling
## Summary
- Fixes race condition when multiple concurrent requests try to process
the same reviews (e.g., double-click, multiple browser tabs)
- Previously the second request would fail with "Reviews not found,
access denied, or not in WAITING status"
- Now handles this gracefully by treating already-processed reviews with
the same decision as success
## Changes
- Added `get_reviews_by_node_exec_ids()` function that fetches reviews
regardless of status
- Modified `process_all_reviews_for_execution()` to handle
already-processed reviews
- Updated route to use idempotent validation
## Test plan
- [x] Linter passes (`poetry run ruff check`)
- [x] Type checker passes (`poetry run pyright`)
- [x] Formatter passes (`poetry run format`)
- [ ] Manual testing: double-click approve button should not cause
errors
Fixes AUTOGPT-SERVER-7HE
## Summary
Long-running chat tools (like `create_agent` and `edit_agent`) were
timing out because no SSE data was sent during tool execution. GCP load
balancers and proxies have idle connection timeouts (~60 seconds), and
when the external Agent Generator service takes longer than this, the
connection would drop.
This PR adds SSE heartbeat comments during tool execution to keep
connections alive.
### Changes 🏗️
- **response_model.py**: Added `StreamHeartbeat` response type that
emits SSE comments (`: heartbeat\n\n`)
- **service.py**: Modified `_yield_tool_call()` to:
- Run tool execution in a background asyncio task
- Yield heartbeat events every 15 seconds while waiting
- Handle task failures with explicit error responses (no silent
failures)
- Handle cancellation gracefully
- **create_agent.py**: Improved error messages with more context and
details
- **edit_agent.py**: Improved error messages with more context and
details
### How It Works
```
Tool Call → Background Task Started
│
├── Every 15 seconds: yield `: heartbeat\n\n` (SSE comment)
│
└── Task Complete → yield tool result OR error response
```
SSE comments (`: heartbeat\n\n`) are:
- Ignored by SSE clients (don't trigger events)
- Keep TCP connections alive through proxies/load balancers
- Don't affect the AI SDK data protocol
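A minimal sketch of the pattern; the framing of the final result is simplified:
```python
import asyncio
from typing import AsyncIterator

HEARTBEAT_INTERVAL = 15  # seconds

async def yield_with_heartbeats(execute) -> AsyncIterator[str]:
    task = asyncio.create_task(execute())
    while True:
        done, _ = await asyncio.wait({task}, timeout=HEARTBEAT_INTERVAL)
        if done:
            break
        # SSE comment: ignored by clients, but resets proxy idle timers.
        yield ": heartbeat\n\n"
    # task.result() re-raises if the tool failed, so errors surface as
    # explicit responses instead of silent drops.
    yield f"data: {task.result()}\n\n"
```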
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All chat service tests pass (17 tests)
- [x] Verified heartbeats are sent during long tool execution
- [x] Verified errors are properly reported to frontend
## Changes 🏗️
Implements automatic context window management to prevent chat failures
when conversations exceed token limits.
### Problem
- **Issue**: [SECRT-1800] Long chat conversations stop working when
context grows beyond model limits (~113k tokens observed)
- **Root Cause**: Chat service sends ALL messages to LLM without
token-aware compression, eventually exceeding Claude Opus 4.5's 200k
context window
### Solution
Implements a sliding window with summarization strategy:
1. Monitors token count before sending to LLM (triggers at 120k tokens)
2. Keeps last 15 messages completely intact (preserves recent
conversation flow)
3. Summarizes older messages using gpt-4o-mini (fast & cheap)
4. Rebuilds context: `[system_prompt] + [summary] +
[recent_15_messages]`
5. Full history preserved in database (only compresses when sending to
LLM)
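A condensed sketch of the flow, using the `estimate_token_count()` and `_summarize_messages()` helpers this PR wires in; thresholds match the description above:
```python
TOKEN_THRESHOLD = 120_000
KEEP_RECENT = 15

async def compact_for_llm(system_prompt: dict, history: list[dict]) -> list[dict]:
    messages = [system_prompt, *history]
    if estimate_token_count(messages) <= TOKEN_THRESHOLD:
        return messages  # under the limit: send everything as-is
    older, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    try:
        summary = await _summarize_messages(older)  # gpt-4o-mini call
    except Exception:
        return messages  # graceful fallback: original messages
    summary_msg = {"role": "system",
                   "content": f"Summary of earlier conversation:\n{summary}"}
    return [system_prompt, summary_msg, *recent]
```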
### Changes Made
- **Added** `_summarize_messages()` helper function to create concise
summaries using gpt-4o-mini
- **Modified** `_stream_chat_chunks()` to implement token counting and
conditional summarization
- **Integrated** existing `estimate_token_count()` utility for accurate
token measurement
- **Added** graceful fallback - continues with original messages if
summarization fails
## Motivation and Context 🎯
Without context management, users with long chat sessions (250+
messages) experience:
- Complete chat failure when hitting 200k token limit
- Lost conversation context
- Poor user experience
This fix enables:
- ✅ Unlimited conversation length
- ✅ Transparent operation (no UX changes)
- ✅ Preserved conversation quality (recent messages intact)
- ✅ Cost-efficient (~$0.0001 per summarization)
## Testing 🧪
### Expected Behavior
- Conversations < 120k tokens: No change (normal operation)
- Conversations > 120k tokens:
- Log message: `Context summarized: {tokens} tokens, kept last 15
messages + summary`
- Chat continues working smoothly
- Recent context remains intact
### How to Verify
1. Start a chat session in copilot
2. Send 250-600 messages (or 50+ with large code blocks)
3. Check logs for "Context summarized:" message
4. Verify chat continues working without errors
5. Verify conversation quality remains good
## Checklist ✅
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] My changes generate no new warnings
- [x] I have tested my changes and verified they work as expected
We are removing Langfuse tracing from the chat/copilot system in favor
of using OpenRouter's broadcast feature, which keeps our codebase
simpler. Langfuse prompt management is retained for fetching system
prompts.
### Changes 🏗️
**Removed Langfuse tracing:**
- Removed `@observe` decorators from all 11 chat tool files
- Removed `langfuse.openai` wrapper (now using standard `openai` client)
- Removed `start_as_current_observation` and `propagate_attributes`
context managers from `service.py`
- Removed `update_current_trace()`, `update_current_span()`,
`span.update()` calls
**Retained Langfuse prompt management:**
- `langfuse.get_prompt()` for fetching system prompts
- `_is_langfuse_configured()` check for prompt availability
- Configuration for `langfuse_prompt_name`
**Files modified:**
- `backend/api/features/chat/service.py`
- `backend/api/features/chat/tools/*.py` (11 tool files)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified `poetry run format` passes
- [x] Verified no `@observe` decorators remain in chat tools
- [x] Verified Langfuse prompt fetching is still functional (code
preserved)
## Summary
Add interactive UI to collect user answers when the agent-generator
service returns clarifying questions during agent creation/editing.
Previously, when the backend asked clarifying questions, the frontend
would just display them as text with no way for users to answer. This
caused the chat to keep retrying without the necessary context.
## Changes
- **ChatMessageData type**: Add `clarification_needed` variant with
questions field
- **ClarificationQuestionsWidget**: New component with interactive form
to collect answers
- **parseToolResponse**: Detect and parse `clarification_needed`
responses from backend
- **ChatMessage**: Render the widget when clarification is needed
## How It Works
1. User requests to create/edit agent
2. Backend returns `ClarificationNeededResponse` with list of questions
3. Frontend shows interactive form with text inputs for each question
4. User fills in answers and clicks "Submit Answers"
5. Answers are sent back as context to the tool
6. Backend receives full context and continues
## UI Features
- Shows all questions with examples (if provided)
- Input validation (all questions must be answered to submit)
- Visual feedback (checkmarks when answered)
- Numbered questions for clarity
- Submit button disabled until all answered
- Follows same design pattern as `credentials_needed` flow
## Related
- Backend support for clarification was added in #11819
- Fixes the issue shown in the screenshot where users couldn't answer
clarifying questions
## Test plan
- [ ] Test creating agent that requires clarifying questions
- [ ] Verify questions are displayed in interactive form
- [ ] Verify all questions must be answered before submitting
- [ ] Verify answers are sent back to backend as context
- [ ] Verify agent creation continues with full context