Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-04-08 03:00:28 -04:00)
## Summary
- Add file attachment support to copilot chat (documents, images, spreadsheets, video, audio)
- Show upload progress with spinner overlays on file chips during upload
- Display attached files as styled pills in sent user messages using AI SDK's native `FileUIPart`
- Backend upload endpoint with virus scanning (ClamAV), per-file size limits, and per-user storage caps
- Enrich chat stream with file metadata so the LLM can access files via `read_workspace_file`
Resolves: [SECRT-1788](https://linear.app/autogpt/issue/SECRT-1788)
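As a rough illustration of the per-file size limit and per-user storage cap described above, here is a minimal sketch. The function name, default limits, and return shape are illustrative only, not the actual `workspace/routes.py` implementation:

```python
def check_upload_allowed(
    file_size_bytes: int,
    current_usage_bytes: int,
    max_file_size_mb: int = 25,  # illustrative default; real value comes from settings
    max_workspace_storage_mb: int = 500,  # illustrative default
) -> tuple[bool, str]:
    """Return (allowed, reason) for an upload attempt against both limits."""
    max_file = max_file_size_mb * 1024 * 1024
    max_total = max_workspace_storage_mb * 1024 * 1024
    if file_size_bytes > max_file:
        return False, f"file exceeds per-file limit of {max_file_size_mb} MB"
    if current_usage_bytes + file_size_bytes > max_total:
        return False, f"workspace storage cap of {max_workspace_storage_mb} MB reached"
    return True, "ok"
```

In the real endpoint, the quota check runs against live workspace usage and is re-validated after the write (with rollback on overflow), per the summary below.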
### Backend
| File | Change |
|------|--------|
| `chat/routes.py` | Accept `file_ids` in stream request, enrich user message with file metadata |
| `workspace/routes.py` | New `POST /files/upload` and `GET /storage/usage` endpoints |
| `executor/utils.py` | Thread `file_ids` through `CoPilotExecutionEntry` and RabbitMQ |
| `settings.py` | Add `max_file_size_mb` and `max_workspace_storage_mb` config |
### Frontend
| File | Change |
|------|--------|
| `AttachmentMenu.tsx` | **New** — `+` button with popover for file category selection |
| `FileChips.tsx` | **New** — file preview chips with upload spinner state |
| `MessageAttachments.tsx` | **New** — paperclip pills rendering `FileUIPart` in chat bubbles |
| `upload/route.ts` | **New** — Next.js API proxy for multipart uploads to backend |
| `ChatInput.tsx` | Integrate attachment menu, file chips, upload progress |
| `useCopilotPage.ts` | Upload flow, `FileUIPart` construction, transport `file_ids` extraction |
| `ChatMessagesContainer.tsx` | Render file parts as `MessageAttachments` |
| `ChatContainer.tsx` / `EmptySession.tsx` | Thread `isUploadingFiles` prop |
| `useChatInput.ts` | `canSendEmpty` option for file-only sends |
| `stream/route.ts` | Forward `file_ids` to backend |
## Test plan
- [x] Attach files via `+` button → file chips appear with X buttons
- [x] Remove a chip → file is removed from the list
- [x] Send message with files → chips show upload spinners → message appears with file attachment pills
- [x] Upload failure → toast error, chips revert to editable (no phantom message sent)
- [x] New session (empty form): same upload flow works
- [x] Messages without files render normally
- [x] Network tab: `file_ids` present in stream POST body
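For the last check, this is roughly the request-body shape to look for in the network tab. Field names other than `file_ids` are assumptions here, not the exact API contract:

```python
import json


def build_stream_body(message: str, session_id: str, file_ids: list[str]) -> str:
    """Sketch of a chat stream POST body; `file_ids` is omitted for file-less sends."""
    body: dict = {"message": message, "session_id": session_id}
    if file_ids:
        body["file_ids"] = file_ids
    return json.dumps(body)
```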
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Adds authenticated file upload/storage-quota enforcement and threads `file_ids` through the chat streaming path, which affects data handling and storage behavior. Risk is mitigated by UUID/workspace scoping, size limits, and virus scanning but still touches security- and reliability-sensitive upload flows.
>
> **Overview**
> Copilot chat now supports attaching files: the frontend adds drag-and-drop and an attach button, shows selected files as removable chips with an upload-in-progress state, and renders sent attachments using AI SDK `FileUIPart` with download links.
>
> On send, files are uploaded to the backend (with client-side limits and failure handling) and the chat stream request includes `file_ids`; the backend sanitizes/filters IDs, scopes them to the user's workspace, appends an `[Attached files]` metadata block to the user message for the LLM, and forwards the sanitized IDs through `enqueue_copilot_turn`.
>
> The backend adds `POST /workspace/files/upload` (filename sanitization, per-file size limit, ClamAV scan, and per-user storage quota with post-write rollback) plus `GET /workspace/storage/usage`, introduces `max_workspace_storage_mb` config, optimizes workspace size calculation, and fixes executor cleanup to avoid un-awaited coroutine warnings; new route tests cover file ID validation and upload quota/scan behaviors.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 8d3b95d046. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
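The `[Attached files]` message enrichment mentioned above can be sketched roughly as follows; the exact block format, metadata fields, and helper name are assumptions rather than the backend's actual implementation:

```python
def enrich_message(message: str, files: list[dict]) -> str:
    """Append a hypothetical [Attached files] metadata block so the LLM knows
    which workspace files it can open via read_workspace_file."""
    if not files:
        return message
    lines = [f"- {f['name']} (id: {f['id']}, {f['size_bytes']} bytes)" for f in files]
    return message + "\n\n[Attached files]\n" + "\n".join(lines)
```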
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
225 lines
6.9 KiB
Python
"""RabbitMQ queue configuration for CoPilot executor.
|
|
|
|
Defines two exchanges and queues following the graph executor pattern:
|
|
- 'copilot_execution' (DIRECT) for chat generation tasks
|
|
- 'copilot_cancel' (FANOUT) for cancellation requests
|
|
"""
|
|
|
|
import logging
|
|
|
|
from pydantic import BaseModel
|
|
|
|
from backend.data.rabbitmq import Exchange, ExchangeType, Queue, RabbitMQConfig
|
|
from backend.util.logging import TruncatedLogger, is_structured_logging_enabled
|
|
|
|
logger = logging.getLogger(__name__)
|
|
|
|
|
|
# ============ Logging Helper ============ #
|
|
|
|
|
|
class CoPilotLogMetadata(TruncatedLogger):
|
|
"""Structured logging helper for CoPilot executor.
|
|
|
|
In cloud environments (structured logging enabled), uses a simple prefix
|
|
and passes metadata via json_fields. In local environments, uses a detailed
|
|
prefix with all metadata key-value pairs for easier debugging.
|
|
|
|
Args:
|
|
logger: The underlying logger instance
|
|
max_length: Maximum log message length before truncation
|
|
**kwargs: Metadata key-value pairs (e.g., session_id="xyz", turn_id="abc")
|
|
These are added to json_fields in cloud mode, or to the prefix in local mode.
|
|
"""
|
|
|
|
def __init__(
|
|
self,
|
|
logger: logging.Logger,
|
|
max_length: int = 1000,
|
|
**kwargs: str | None,
|
|
):
|
|
# Filter out None values
|
|
metadata = {k: v for k, v in kwargs.items() if v is not None}
|
|
metadata["component"] = "CoPilotExecutor"
|
|
|
|
if is_structured_logging_enabled():
|
|
prefix = "[CoPilotExecutor]"
|
|
else:
|
|
# Build prefix from metadata key-value pairs
|
|
meta_parts = "|".join(
|
|
f"{k}:{v}" for k, v in metadata.items() if k != "component"
|
|
)
|
|
prefix = (
|
|
f"[CoPilotExecutor|{meta_parts}]" if meta_parts else "[CoPilotExecutor]"
|
|
)
|
|
|
|
super().__init__(
|
|
logger,
|
|
max_length=max_length,
|
|
prefix=prefix,
|
|
metadata=metadata,
|
|
)
|
|
|
|
|
|
# ============ Exchange and Queue Configuration ============ #
|
|
|
|
COPILOT_EXECUTION_EXCHANGE = Exchange(
|
|
name="copilot_execution",
|
|
type=ExchangeType.DIRECT,
|
|
durable=True,
|
|
auto_delete=False,
|
|
)
|
|
COPILOT_EXECUTION_QUEUE_NAME = "copilot_execution_queue"
|
|
COPILOT_EXECUTION_ROUTING_KEY = "copilot.run"
|
|
|
|
COPILOT_CANCEL_EXCHANGE = Exchange(
|
|
name="copilot_cancel",
|
|
type=ExchangeType.FANOUT,
|
|
durable=True,
|
|
auto_delete=False,
|
|
)
|
|
COPILOT_CANCEL_QUEUE_NAME = "copilot_cancel_queue"
|
|
|
|
# CoPilot operations can include extended thinking and agent generation
|
|
# which may take 30+ minutes to complete
|
|
COPILOT_CONSUMER_TIMEOUT_SECONDS = 60 * 60 # 1 hour
|
|
|
|
# Graceful shutdown timeout - allow in-flight operations to complete
|
|
GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS = 30 * 60 # 30 minutes
|
|
|
|
|
|
def create_copilot_queue_config() -> RabbitMQConfig:
|
|
"""Create RabbitMQ configuration for CoPilot executor.
|
|
|
|
Defines two exchanges and queues:
|
|
- 'copilot_execution' (DIRECT) for chat generation tasks
|
|
- 'copilot_cancel' (FANOUT) for cancellation requests
|
|
|
|
Returns:
|
|
RabbitMQConfig with exchanges and queues defined
|
|
"""
|
|
run_queue = Queue(
|
|
name=COPILOT_EXECUTION_QUEUE_NAME,
|
|
exchange=COPILOT_EXECUTION_EXCHANGE,
|
|
routing_key=COPILOT_EXECUTION_ROUTING_KEY,
|
|
durable=True,
|
|
auto_delete=False,
|
|
arguments={
|
|
# Extended consumer timeout for long-running LLM operations
|
|
# Default 30-minute timeout is insufficient for extended thinking
|
|
# and agent generation which can take 30+ minutes
|
|
"x-consumer-timeout": COPILOT_CONSUMER_TIMEOUT_SECONDS
|
|
* 1000,
|
|
},
|
|
)
|
|
cancel_queue = Queue(
|
|
name=COPILOT_CANCEL_QUEUE_NAME,
|
|
exchange=COPILOT_CANCEL_EXCHANGE,
|
|
routing_key="", # not used for FANOUT
|
|
durable=True,
|
|
auto_delete=False,
|
|
)
|
|
return RabbitMQConfig(
|
|
vhost="/",
|
|
exchanges=[COPILOT_EXECUTION_EXCHANGE, COPILOT_CANCEL_EXCHANGE],
|
|
queues=[run_queue, cancel_queue],
|
|
)
|
|
|
|
|
|
# ============ Message Models ============ #
|
|
|
|
|
|
class CoPilotExecutionEntry(BaseModel):
|
|
"""Task payload for CoPilot AI generation.
|
|
|
|
This model represents a chat generation task to be processed by the executor.
|
|
"""
|
|
|
|
session_id: str
|
|
"""Chat session ID (also used for dedup/locking)"""
|
|
|
|
turn_id: str = ""
|
|
"""Per-turn UUID for Redis stream isolation"""
|
|
|
|
user_id: str | None
|
|
"""User ID (may be None for anonymous users)"""
|
|
|
|
message: str
|
|
"""User's message to process"""
|
|
|
|
is_user_message: bool = True
|
|
"""Whether the message is from the user (vs system/assistant)"""
|
|
|
|
context: dict[str, str] | None = None
|
|
"""Optional context for the message (e.g., {url: str, content: str})"""
|
|
|
|
file_ids: list[str] | None = None
|
|
"""Workspace file IDs attached to the user's message"""
|
|
|
|
|
|
class CancelCoPilotEvent(BaseModel):
|
|
"""Event to cancel a CoPilot operation."""
|
|
|
|
session_id: str
|
|
"""Session ID to cancel"""
|
|
|
|
|
|
# ============ Queue Publishing Helpers ============ #
|
|
|
|
|
|
async def enqueue_copilot_turn(
|
|
session_id: str,
|
|
user_id: str | None,
|
|
message: str,
|
|
turn_id: str,
|
|
is_user_message: bool = True,
|
|
context: dict[str, str] | None = None,
|
|
file_ids: list[str] | None = None,
|
|
) -> None:
|
|
"""Enqueue a CoPilot task for processing by the executor service.
|
|
|
|
Args:
|
|
session_id: Chat session ID (also used for dedup/locking)
|
|
user_id: User ID (may be None for anonymous users)
|
|
message: User's message to process
|
|
turn_id: Per-turn UUID for Redis stream isolation
|
|
is_user_message: Whether the message is from the user (vs system/assistant)
|
|
context: Optional context for the message (e.g., {url: str, content: str})
|
|
file_ids: Optional workspace file IDs attached to the user's message
|
|
"""
|
|
from backend.util.clients import get_async_copilot_queue
|
|
|
|
entry = CoPilotExecutionEntry(
|
|
session_id=session_id,
|
|
turn_id=turn_id,
|
|
user_id=user_id,
|
|
message=message,
|
|
is_user_message=is_user_message,
|
|
context=context,
|
|
file_ids=file_ids,
|
|
)
|
|
|
|
queue_client = await get_async_copilot_queue()
|
|
await queue_client.publish_message(
|
|
routing_key=COPILOT_EXECUTION_ROUTING_KEY,
|
|
message=entry.model_dump_json(),
|
|
exchange=COPILOT_EXECUTION_EXCHANGE,
|
|
)
|
|
|
|
|
|
async def enqueue_cancel_task(session_id: str) -> None:
|
|
"""Publish a cancel request for a running CoPilot session.
|
|
|
|
Sends a ``CancelCoPilotEvent`` to the FANOUT exchange so all executor
|
|
pods receive the cancellation signal.
|
|
"""
|
|
from backend.util.clients import get_async_copilot_queue
|
|
|
|
event = CancelCoPilotEvent(session_id=session_id)
|
|
queue_client = await get_async_copilot_queue()
|
|
await queue_client.publish_message(
|
|
routing_key="", # FANOUT ignores routing key
|
|
message=event.model_dump_json(),
|
|
exchange=COPILOT_CANCEL_EXCHANGE,
|
|
)
|