## Summary
Full integration of the **Claude Agent SDK** to replace the existing
one-turn OpenAI-compatible CoPilot implementation with a multi-turn,
tool-using AI agent.
### What changed
**Core SDK Integration** (`chat/sdk/` — new module)
- **`service.py`**: Main orchestrator — spawns Claude Code CLI as a
subprocess per user message, streams responses back via SSE. Handles
conversation history compression, session lifecycle, and error recovery.
- **`response_adapter.py`**: Translates Claude Agent SDK events (text
deltas, tool use, errors, result messages) into the existing CoPilot
`StreamEvent` protocol so the frontend works unchanged (sketched after
this list).
- **`tool_adapter.py`**: Bridges CoPilot's MCP tools (find_block,
run_block, create_agent, etc.) into the SDK's tool format. Handles
schema conversion and result serialization.
- **`security_hooks.py`**: Pre/Post tool-use hooks that enforce a strict
allowlist of tools, block path traversal, sandbox file operations to
per-session workspace directories, cap sub-agent spawning, and prevent
the model from accessing unauthorized system resources.
- **`transcript.py`**: JSONL transcript I/O utilities for the stateless
`--resume` feature (see below).
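
To make the adapter layer concrete, here is a minimal sketch of the kind of translation `response_adapter.py` performs. The event dict shapes, field names, and the `StreamEvent` dataclass below are illustrative assumptions, not the SDK's or CoPilot's actual types.

```python
# Hypothetical sketch of the response-adapter translation layer.
# Event and field names are illustrative; the real SDK/CoPilot types differ.
from dataclasses import dataclass
from typing import AsyncIterator


@dataclass
class StreamEvent:
    type: str  # e.g. "text_delta", "tool_input_available", "finish"
    data: dict


async def adapt_sdk_events(sdk_events: AsyncIterator[dict]) -> AsyncIterator[StreamEvent]:
    """Map raw Claude Agent SDK messages onto the existing CoPilot StreamEvent protocol."""
    async for event in sdk_events:
        kind = event.get("type")
        if kind == "text_delta":
            yield StreamEvent("text_delta", {"text": event["text"]})
        elif kind == "tool_use":
            yield StreamEvent("tool_input_available", {
                "tool_name": event["name"],
                "input": event["input"],
            })
        elif kind == "result":
            yield StreamEvent("finish", {"stop_reason": event.get("stop_reason")})
        else:
            # Unknown events are surfaced as errors rather than silently dropped.
            yield StreamEvent("error", {"message": f"unhandled SDK event: {kind}"})
```

The real adapter covers more event types, but the mapping pattern is the same.
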
**Stateless Multi-Turn Resume** (new)
- Instead of compressing conversation history via LLM on every turn
(lossy and expensive), we capture Claude Code's native JSONL session
transcript via a **Stop hook** callback, persist it in the DB
(`ChatSession.sdkTranscript`), and restore it on the next turn via
`--resume <file>` (flow sketched after this list).
- This preserves full tool call/result context across turns with zero
token overhead for history.
- Feature-flagged via `CLAUDE_AGENT_USE_RESUME` (default: off).
- DB migration: `ALTER TABLE "ChatSession" ADD COLUMN "sdkTranscript"
TEXT`.
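
As a rough illustration of the resume mechanism, assuming hypothetical helper names (only the JSONL transcript format, the Stop-hook capture point, and the `--resume <file>` flag come from the description above):

```python
# Illustrative sketch of the transcript capture/restore helpers.
# Function names and the storage column access are assumptions.
import uuid
from pathlib import Path


def capture_transcript(transcript_path: str) -> str:
    """Stop-hook callback: read Claude Code's JSONL session transcript so the
    service can persist it in ChatSession.sdkTranscript."""
    return Path(transcript_path).read_text(encoding="utf-8")


def restore_transcript(sdk_transcript: str, workspace: Path) -> Path:
    """Write the stored JSONL back into the session workspace so the next turn
    can be started with --resume <file>."""
    path = workspace / f"resume-{uuid.uuid4().hex}.jsonl"
    path.write_text(sdk_transcript, encoding="utf-8")
    return path
```
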
**Sandboxed Tool Execution** (`chat/tools/`)
- **`bash_exec.py`**: Sandboxed bash execution using bubblewrap
(`bwrap`) with read-only root filesystem, per-session writable
workspace, resource limits (CPU, memory, file size), and network
isolation.
- **`sandbox.py`**: Shared bubblewrap sandbox infrastructure — generates
`bwrap` command lines with configurable mounts, environment, and
resource constraints (see the sketch after this list).
- **`web_fetch.py`**: URL fetching tool with domain allowlist, size
limits, and content-type filtering.
- **`check_operation_status.py`**: Polling tool for long-running
operations (agent creation, block execution) so the SDK doesn't block
waiting.
- **`find_block.py`** / **`run_block.py`**: Enhanced with category
filtering, optimized response size (removed raw JSON schemas), and
better error handling.
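
For illustration, `sandbox.py`'s job can be pictured as assembling a `bwrap` argument list along these lines. The flags are real bubblewrap options, but the exact mounts, limits, and function name are assumptions:

```python
# Rough sketch of building a bubblewrap invocation for sandboxed bash.
def build_bwrap_command(workspace: str, command: list[str]) -> list[str]:
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",        # read-only system files
        "--symlink", "usr/bin", "/bin",     # merged-/usr layout (Debian)
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--bind", workspace, "/workspace",  # only the session workspace is writable
        "--chdir", "/workspace",
        "--unshare-net",                    # no network inside the sandbox
        "--unshare-pid",
        "--die-with-parent",
        *command,
    ]
```

`bash_exec.py` would then pass the returned list to `subprocess.run()` with a timeout, layering CPU/memory/file-size limits on top.
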
**Security**
- Path traversal prevention: session IDs sanitized, all file ops
confined to workspace dirs, symlink resolution.
- Tool allowlist enforcement via SDK hooks — the model cannot call
arbitrary tools (hook sketch after this list).
- Built-in `Bash` tool blocked via `disallowed_tools` to prevent
bypassing sandboxed `bash_exec`.
- Sub-agent (`Task`) spawning capped at configurable limit (default:
10).
- CodeQL-clean path sanitization patterns.
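
A simplified sketch of how a `PreToolUse` hook can enforce the allowlist and workspace confinement described above; the allowlist contents, return shape, and function signature are hypothetical:

```python
# Simplified PreToolUse-style check: allowlist + workspace-confined paths.
from pathlib import Path

ALLOWED_TOOLS = {"find_block", "run_block", "create_agent", "bash_exec", "web_fetch"}


def pre_tool_use(tool_name: str, tool_input: dict, workspace: Path) -> dict:
    if tool_name not in ALLOWED_TOOLS:
        return {"decision": "deny", "reason": f"tool {tool_name!r} is not allowlisted"}

    # Confine any file path argument to the per-session workspace,
    # resolving symlinks before comparing.
    raw_path = tool_input.get("path")
    if raw_path is not None:
        resolved = (workspace / raw_path).resolve()
        if not resolved.is_relative_to(workspace.resolve()):
            return {"decision": "deny", "reason": "path escapes session workspace"}

    return {"decision": "allow"}
```
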
**Streaming & Reconnection**
- SSE stream registry backed by Redis Streams for crash-resilient
reconnection (sketched after this list).
- Long-running operation tracking with TTL-based cleanup.
- Atomic message append to prevent race conditions on concurrent writes.
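
Conceptually, the Redis-Streams-backed registry comes down to `XADD` when publishing stream events and `XREAD` from the client's last-seen ID on reconnect. The key naming and wiring below are assumptions; the redis-py calls themselves are standard:

```python
# Conceptual sketch of a Redis-Streams-backed SSE event registry.
import redis.asyncio as redis


async def publish_event(r: redis.Redis, session_id: str, event: dict) -> str:
    # Each chat session gets its own stream; XADD returns a monotonically
    # increasing ID the client can use as a resume cursor.
    return await r.xadd(f"chat:stream:{session_id}", event)


async def replay_since(r: redis.Redis, session_id: str, last_id: str = "0"):
    # On SSE reconnect, replay everything the client missed since last_id.
    entries = await r.xread({f"chat:stream:{session_id}": last_id}, block=5000)
    for _stream, messages in entries:
        for message_id, fields in messages:
            yield message_id, fields
```
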
**Configuration** (`config.py` — settings sketched after this list)
- `use_claude_agent_sdk` — master toggle (default: on)
- `claude_agent_model` — model override for SDK path
- `claude_agent_max_buffer_size` — JSON parsing buffer (10MB)
- `claude_agent_max_subtasks` — sub-agent cap (10)
- `claude_agent_use_resume` — transcript-based resume (default: off)
- `thinking_enabled` — extended thinking for Claude models
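
For reference, the flags above could map onto a settings class roughly like this (a sketch only: field names, env-var binding, and the `claude_agent_model`/`thinking_enabled` defaults are assumptions):

```python
# Hypothetical pydantic-settings view of the flags listed above.
from pydantic_settings import BaseSettings


class ChatConfig(BaseSettings):
    use_claude_agent_sdk: bool = True                       # master toggle (default: on)
    claude_agent_model: str | None = None                   # model override for the SDK path
    claude_agent_max_buffer_size: int = 10 * 1024 * 1024    # 10MB JSON parsing buffer
    claude_agent_max_subtasks: int = 10                     # sub-agent (Task) spawn cap
    claude_agent_use_resume: bool = False                   # transcript-based --resume (default: off)
    thinking_enabled: bool = False                          # extended thinking (default assumed)
```
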
**Tests**
- `sdk/response_adapter_test.py` — 366 lines covering all event
translation paths
- `sdk/security_hooks_test.py` — 165 lines covering tool blocking, path
traversal, subtask limits
- `chat/model_test.py` — 214 lines covering session model serialization
- `chat/service_test.py` — Integration tests including multi-turn resume
keyword recall
- `tools/find_block_test.py` / `run_block_test.py` — Extended with new
tool behavior tests
## Test plan
- [x] Unit tests pass (`sdk/response_adapter_test.py`,
`security_hooks_test.py`, `model_test.py`)
- [x] Integration test: multi-turn keyword recall via `--resume`
(`service_test.py::test_sdk_resume_multi_turn`)
- [x] Manual E2E: CoPilot chat sessions with tool calls, bash execution,
and multi-turn context
- [x] Pre-commit hooks pass (ruff, isort, black, pyright, flake8)
- [ ] Staging deployment with `claude_agent_use_resume=false` initially
- [ ] Enable resume in staging, verify transcript capture and recall
<h2>Greptile Overview</h2>
<details><summary><h3>Greptile Summary</h3></summary>
This PR replaces the existing OpenAI-compatible CoPilot with a full
Claude Agent SDK integration, introducing multi-turn conversations,
stateless resume via JSONL transcripts, and sandboxed tool execution.
**Key changes:**
- **SDK integration** (`chat/sdk/`): spawns Claude Code CLI subprocess
per message, translates events to frontend protocol, bridges MCP tools
- **Stateless resume**: captures JSONL transcripts via Stop hook,
persists in `ChatSession.sdkTranscript`, restores with `--resume`
(feature-flagged, default off)
- **Sandboxed execution**: bubblewrap sandbox for bash commands with
filesystem whitelist, network isolation, resource limits
- **Security hooks**: tool allowlist enforcement, path traversal
prevention, workspace-scoped file operations, sub-agent spawn limits
- **Long-running operations**: delegates `create_agent`/`edit_agent` to
existing stream_registry infrastructure for SSE reconnection
- **Feature flag**: `CHAT_USE_CLAUDE_AGENT_SDK` with LaunchDarkly
support, defaults to enabled
**Security issues found:**
- Path traversal validation has logic errors in `security_hooks.py:82`
(tilde expansion order) and `service.py:266` (redundant `..` check)
- Config validator always prefers env var over explicit `False` value
(`config.py:162`)
- Race condition in `routes.py:323` — message persisted before task
registration, could duplicate on retry
- Resource limits in sandbox may fail silently (`sandbox.py:109`)
**Test coverage is strong** with 366 lines for response adapter, 165 for
security hooks, and integration tests for multi-turn resume.
</details>
<details><summary><h3>Confidence Score: 3/5</h3></summary>
- This PR is generally safe but has critical security issues in path
validation that must be fixed before merge
- Score reflects strong architecture and test coverage offset by real
security vulnerabilities: the tilde expansion bug in `security_hooks.py`
could allow sandbox escape, the race condition could cause message
duplication, and the silent ulimit failures could bypass resource
limits. The bubblewrap sandbox and allowlist enforcement are
well-designed, but the path validation bugs need fixing. The transcript
resume feature is properly feature-flagged. Overall the implementation
is solid but the security issues prevent a higher score.
- Pay close attention to
`backend/api/features/chat/sdk/security_hooks.py` (path traversal
vulnerability), `backend/api/features/chat/routes.py` (race condition),
`backend/api/features/chat/tools/sandbox.py` (silent resource limit
failures), and `backend/api/features/chat/sdk/service.py` (redundant
security check)
</details>
<details><summary><h3>Sequence Diagram</h3></summary>
```mermaid
sequenceDiagram
    participant Frontend
    participant Routes as routes.py
    participant SDKService as sdk/service.py
    participant ClaudeSDK as Claude Agent SDK CLI
    participant SecurityHooks as security_hooks.py
    participant ToolAdapter as tool_adapter.py
    participant CoPilotTools as tools/*
    participant Sandbox as sandbox.py (bwrap)
    participant DB as Database
    participant Redis as stream_registry

    Frontend->>Routes: POST /chat (user message)
    Routes->>SDKService: stream_chat_completion_sdk()
    SDKService->>DB: get_chat_session()
    DB-->>SDKService: session + messages

    alt Resume enabled AND transcript exists
        SDKService->>SDKService: validate_transcript()
        SDKService->>SDKService: write_transcript_to_tempfile()
        Note over SDKService: Pass --resume to SDK
    else No resume
        SDKService->>SDKService: _compress_conversation_history()
        Note over SDKService: Inject history into user message
    end

    SDKService->>SecurityHooks: create_security_hooks()
    SDKService->>ToolAdapter: create_copilot_mcp_server()
    SDKService->>ClaudeSDK: spawn subprocess with MCP server

    loop Streaming Conversation
        ClaudeSDK->>SDKService: AssistantMessage (text/tool_use)
        SDKService->>Frontend: StreamTextDelta / StreamToolInputAvailable
        alt Tool Call
            ClaudeSDK->>SecurityHooks: PreToolUse hook
            SecurityHooks->>SecurityHooks: validate path, check allowlist
            alt Tool blocked
                SecurityHooks-->>ClaudeSDK: deny
            else Tool allowed
                SecurityHooks-->>ClaudeSDK: allow
                ClaudeSDK->>ToolAdapter: call MCP tool
                alt Long-running tool (create_agent, edit_agent)
                    ToolAdapter->>Redis: register task
                    ToolAdapter->>DB: save OperationPendingResponse
                    ToolAdapter->>ToolAdapter: spawn background task
                    ToolAdapter-->>ClaudeSDK: OperationStartedResponse
                else Regular tool (find_block, bash_exec)
                    ToolAdapter->>CoPilotTools: execute()
                    alt bash_exec
                        CoPilotTools->>Sandbox: run_sandboxed()
                        Sandbox->>Sandbox: build bwrap command
                        Note over Sandbox: Network isolation,<br/>filesystem whitelist,<br/>resource limits
                        Sandbox-->>CoPilotTools: stdout, stderr, exit_code
                    end
                    CoPilotTools-->>ToolAdapter: result
                    ToolAdapter->>ToolAdapter: stash full output
                    ToolAdapter-->>ClaudeSDK: MCP response
                end
                SecurityHooks->>SecurityHooks: PostToolUse hook (log)
            end
        end
        ClaudeSDK->>SDKService: UserMessage (ToolResultBlock)
        SDKService->>ToolAdapter: pop_pending_tool_output()
        SDKService->>Frontend: StreamToolOutputAvailable
    end

    ClaudeSDK->>SecurityHooks: Stop hook
    SecurityHooks->>SDKService: transcript_path callback
    SDKService->>SDKService: read_transcript_file()
    SDKService->>DB: save transcript to session.sdkTranscript
    ClaudeSDK->>SDKService: ResultMessage (success)
    SDKService->>Frontend: StreamFinish
    SDKService->>DB: upsert_chat_session()
```
</details>
<sub>Last reviewed commit: 28c1121</sub>
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
**Dockerfile** (144 lines, 5.1 KiB):
```dockerfile
# ============================ DEPENDENCY BUILDER ============================ #

FROM debian:13-slim AS builder

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DEBIAN_FRONTEND=noninteractive

WORKDIR /app

RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy

# Install Node.js repository key and setup
RUN apt-get update --allow-releaseinfo-change --fix-missing \
    && apt-get install -y curl ca-certificates gnupg \
    && mkdir -p /etc/apt/keyrings \
    && curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
    && echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list

# Update package list and install Python, Node.js, and build dependencies
RUN apt-get update \
    && apt-get install -y \
        python3.13 \
        python3.13-dev \
        python3.13-venv \
        python3-pip \
        build-essential \
        libpq5 \
        libz-dev \
        libssl-dev \
        postgresql-client \
        nodejs \
    && rm -rf /var/lib/apt/lists/*

ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV PATH=/opt/poetry/bin:$PATH

RUN pip3 install poetry --break-system-packages

# Copy and install dependencies
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/backend
RUN poetry install --no-ansi --no-root

# Generate Prisma client
COPY autogpt_platform/backend/schema.prisma ./
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
RUN poetry run prisma generate && poetry run gen-prisma-stub

# ============================== BACKEND SERVER ============================== #

FROM debian:13-slim AS server

WORKDIR /app

ENV POETRY_HOME=/opt/poetry \
    POETRY_NO_INTERACTION=1 \
    POETRY_VIRTUALENVS_CREATE=true \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/poetry/bin:$PATH

# Install Python, FFmpeg, ImageMagick, and CLI tools for agent use.
# bubblewrap provides OS-level sandbox (whitelist-only FS + no network)
# for the bash_exec MCP tool.
# Using --no-install-recommends saves ~650MB by skipping unnecessary deps like llvm, mesa, etc.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3.13 \
        python3-pip \
        ffmpeg \
        imagemagick \
        jq \
        ripgrep \
        tree \
        bubblewrap \
    && rm -rf /var/lib/apt/lists/*

COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
# Copy Node.js installation for Prisma
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm
COPY --from=builder /usr/bin/npx /usr/bin/npx
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries

WORKDIR /app/autogpt_platform/backend

# Copy only the .venv from builder (not the entire /app directory)
# The .venv includes the generated Prisma client
COPY --from=builder /app/autogpt_platform/backend/.venv ./.venv
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"

# Copy dependency files + autogpt_libs (path dependency)
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml ./

# Copy backend code + docs (for Copilot docs search)
COPY autogpt_platform/backend ./
COPY docs /app/docs
RUN poetry install --no-ansi --only-root

ENV PORT=8000

CMD ["poetry", "run", "rest"]

# =============================== DB MIGRATOR =============================== #

# Lightweight migrate stage - only needs Prisma CLI, not full Python environment
FROM debian:13-slim AS migrate

WORKDIR /app/autogpt_platform/backend

ENV DEBIAN_FRONTEND=noninteractive

# Install only what's needed for prisma migrate: Node.js and minimal Python for prisma-python
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3.13 \
        python3-pip \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy Node.js from builder (needed for Prisma CLI)
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm

# Copy Prisma binaries
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries

# Install prisma-client-py directly (much smaller than copying full venv)
RUN pip3 install "prisma>=0.15.0" --break-system-packages

COPY autogpt_platform/backend/schema.prisma ./
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
COPY autogpt_platform/backend/migrations ./migrations
```