Mirror of https://github.com/Significant-Gravitas/AutoGPT.git
Synced 2026-02-15 01:05:13 -05:00 (commit ca216dfd7f91ef1e79c3129879b31a8a1d2b79a9)

17 Commits

## 52b3aebf71 feat(backend/sdk): Claude Agent SDK integration for CoPilot (#12103)
## Summary
Full integration of the **Claude Agent SDK** to replace the existing
one-turn OpenAI-compatible CoPilot implementation with a multi-turn,
tool-using AI agent.
### What changed
**Core SDK Integration** (`chat/sdk/` — new module)
- **`service.py`**: Main orchestrator — spawns Claude Code CLI as a
subprocess per user message, streams responses back via SSE. Handles
conversation history compression, session lifecycle, and error recovery.
- **`response_adapter.py`**: Translates Claude Agent SDK events (text
deltas, tool use, errors, result messages) into the existing CoPilot
`StreamEvent` protocol so the frontend works unchanged (see the sketch
after this list).
- **`tool_adapter.py`**: Bridges CoPilot's MCP tools (find_block,
run_block, create_agent, etc.) into the SDK's tool format. Handles
schema conversion and result serialization.
- **`security_hooks.py`**: Pre/Post tool-use hooks that enforce a strict
allowlist of tools, block path traversal, sandbox file operations to
per-session workspace directories, cap sub-agent spawning, and prevent
the model from accessing unauthorized system resources.
- **`transcript.py`**: JSONL transcript I/O utilities for the stateless
`--resume` feature (see below).
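
A minimal sketch of the translation contract described for `response_adapter.py`, with illustrative event and field names (the real SDK event types and the CoPilot `StreamEvent` protocol differ in detail):

```python
from dataclasses import dataclass
from typing import Any, Iterator


@dataclass
class StreamEvent:
    """Stand-in for the existing CoPilot stream protocol."""

    type: str
    payload: dict[str, Any]


def adapt_sdk_event(event: dict[str, Any]) -> Iterator[StreamEvent]:
    """Translate one SDK event into zero or more frontend StreamEvents."""
    kind = event.get("type")
    if kind == "text_delta":
        yield StreamEvent("text_delta", {"text": event["delta"]})
    elif kind == "tool_use":
        yield StreamEvent(
            "tool_input_available",
            {"tool": event["name"], "input": event["input"]},
        )
    elif kind == "result":
        yield StreamEvent("finish", {"stop_reason": event.get("stop_reason")})
    elif kind == "error":
        yield StreamEvent("error", {"message": event.get("message", "")})
    # Unknown event kinds are dropped so newer SDK versions fail soft.
```
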
**Stateless Multi-Turn Resume** (new)
- Instead of compressing conversation history via LLM on every turn
(lossy and expensive), we capture Claude Code's native JSONL session
transcript via a **Stop hook** callback, persist it in the DB
(`ChatSession.sdkTranscript`), and restore it on the next turn via
`--resume <file>`.
- This preserves full tool call/result context across turns with zero
token overhead for history.
- Feature-flagged via `CLAUDE_AGENT_USE_RESUME` (default: off).
- DB migration: `ALTER TABLE "ChatSession" ADD COLUMN "sdkTranscript"
TEXT`.
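
The capture/restore cycle is small enough to sketch end to end. This is an illustration with hypothetical helper names, not the actual `transcript.py` API:

```python
import json
import tempfile
from pathlib import Path


def capture_transcript(transcript_path: str) -> str:
    """Stop-hook callback: read the CLI's JSONL session transcript."""
    text = Path(transcript_path).read_text()
    for line in text.splitlines():
        json.loads(line)  # reject a corrupt transcript before persisting
    return text  # caller stores this in ChatSession.sdkTranscript


def restore_transcript(sdk_transcript: str) -> str:
    """Next turn: materialize the stored transcript for `--resume <file>`."""
    f = tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False)
    f.write(sdk_transcript)
    f.close()
    return f.name  # pass this path to the CLI's --resume flag
```
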
**Sandboxed Tool Execution** (`chat/tools/`)
- **`bash_exec.py`**: Sandboxed bash execution using bubblewrap
(`bwrap`) with read-only root filesystem, per-session writable
workspace, resource limits (CPU, memory, file size), and network
isolation.
- **`sandbox.py`**: Shared bubblewrap sandbox infrastructure — generates
`bwrap` command lines with configurable mounts, environment, and
resource constraints (see the sketch after this list).
- **`web_fetch.py`**: URL fetching tool with domain allowlist, size
limits, and content-type filtering.
- **`check_operation_status.py`**: Polling tool for long-running
operations (agent creation, block execution) so the SDK doesn't block
waiting.
- **`find_block.py`** / **`run_block.py`**: Enhanced with category
filtering, optimized response size (removed raw JSON schemas), and
better error handling.
**Security**
- Path traversal prevention: session IDs sanitized, all file ops
confined to workspace dirs, symlink resolution.
- Tool allowlist enforcement via SDK hooks — model cannot call arbitrary
tools (sketched after this list).
- Built-in `Bash` tool blocked via `disallowed_tools` to prevent
bypassing sandboxed `bash_exec`.
- Sub-agent (`Task`) spawning capped at configurable limit (default:
10).
- CodeQL-clean path sanitization patterns.
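
A stripped-down sketch of what the PreToolUse allowlist check can look like; the tool names come from the list above, and the hook return shape is illustrative rather than the SDK's actual contract:

```python
ALLOWED_TOOLS = {
    "find_block", "run_block", "create_agent",
    "bash_exec", "web_fetch", "check_operation_status",
}


async def pre_tool_use(tool_name: str, tool_input: dict) -> dict:
    """Deny anything outside the allowlist before the SDK executes it."""
    if tool_name == "Bash":
        # Built-in Bash is blocked; sandboxed bash_exec is the only shell.
        return {"decision": "deny", "reason": "use bash_exec instead"}
    if tool_name not in ALLOWED_TOOLS:
        return {"decision": "deny", "reason": f"{tool_name!r} not allowlisted"}
    return {"decision": "allow"}
```
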
**Streaming & Reconnection**
- SSE stream registry backed by Redis Streams for crash-resilient
reconnection (sketched below).
- Long-running operation tracking with TTL-based cleanup.
- Atomic message append to prevent race conditions on concurrent writes.
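
The reconnection story maps naturally onto Redis Streams. A toy version using redis-py with assumed key names (the real stream registry carries more bookkeeping, TTLs, and cleanup):

```python
import redis

r = redis.Redis()


def publish_event(session_id: str, event: dict[str, str]) -> bytes:
    """Append one SSE event to the session's stream (fields must be flat)."""
    return r.xadd(f"chat:stream:{session_id}", event, maxlen=1000)


def replay_since(session_id: str, last_id: str = "0-0"):
    """On reconnect, resume from the client's last-seen stream ID."""
    return r.xread({f"chat:stream:{session_id}": last_id}, block=5000)
```
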
**Configuration** (`config.py`)
- `use_claude_agent_sdk` — master toggle (default: on)
- `claude_agent_model` — model override for SDK path
- `claude_agent_max_buffer_size` — JSON parsing buffer (10MB)
- `claude_agent_max_subtasks` — sub-agent cap (10)
- `claude_agent_use_resume` — transcript-based resume (default: off)
- `thinking_enabled` — extended thinking for Claude models
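
Expressed as a pydantic settings class, these defaults would look roughly like this (a sketch; the actual `config.py` field names and types may differ):

```python
from pydantic_settings import BaseSettings


class ClaudeAgentSettings(BaseSettings):
    use_claude_agent_sdk: bool = True          # master toggle
    claude_agent_model: str | None = None      # model override for SDK path
    claude_agent_max_buffer_size: int = 10 * 1024 * 1024  # 10MB JSON buffer
    claude_agent_max_subtasks: int = 10        # sub-agent cap
    claude_agent_use_resume: bool = False      # transcript-based resume
    thinking_enabled: bool = False             # extended thinking
```
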
**Tests**
- `sdk/response_adapter_test.py` — 366 lines covering all event
translation paths
- `sdk/security_hooks_test.py` — 165 lines covering tool blocking, path
traversal, subtask limits
- `chat/model_test.py` — 214 lines covering session model serialization
- `chat/service_test.py` — Integration tests including multi-turn resume
keyword recall
- `tools/find_block_test.py` / `run_block_test.py` — Extended with new
tool behavior tests
## Test plan
- [x] Unit tests pass (`sdk/response_adapter_test.py`,
`security_hooks_test.py`, `model_test.py`)
- [x] Integration test: multi-turn keyword recall via `--resume`
(`service_test.py::test_sdk_resume_multi_turn`)
- [x] Manual E2E: CoPilot chat sessions with tool calls, bash execution,
and multi-turn context
- [x] Pre-commit hooks pass (ruff, isort, black, pyright, flake8)
- [ ] Staging deployment with `claude_agent_use_resume=false` initially
- [ ] Enable resume in staging, verify transcript capture and recall
<h2>Greptile Overview</h2>
<details><summary><h3>Greptile Summary</h3></summary>
This PR replaces the existing OpenAI-compatible CoPilot with a full
Claude Agent SDK integration, introducing multi-turn conversations,
stateless resume via JSONL transcripts, and sandboxed tool execution.
**Key changes:**
- **SDK integration** (`chat/sdk/`): spawns Claude Code CLI subprocess
per message, translates events to frontend protocol, bridges MCP tools
- **Stateless resume**: captures JSONL transcripts via Stop hook,
persists in `ChatSession.sdkTranscript`, restores with `--resume`
(feature-flagged, default off)
- **Sandboxed execution**: bubblewrap sandbox for bash commands with
filesystem whitelist, network isolation, resource limits
- **Security hooks**: tool allowlist enforcement, path traversal
prevention, workspace-scoped file operations, sub-agent spawn limits
- **Long-running operations**: delegates `create_agent`/`edit_agent` to
existing stream_registry infrastructure for SSE reconnection
- **Feature flag**: `CHAT_USE_CLAUDE_AGENT_SDK` with LaunchDarkly
support, defaults to enabled
**Security issues found:**
- Path traversal validation has logic errors in `security_hooks.py:82`
(tilde expansion order) and `service.py:266` (redundant `..` check)
- Config validator always prefers env var over explicit `False` value
(`config.py:162`)
- Race condition in `routes.py:323` — message persisted before task
registration, could duplicate on retry
- Resource limits in sandbox may fail silently (`sandbox.py:109`)
**Test coverage is strong** with 366 lines for response adapter, 165 for
security hooks, and integration tests for multi-turn resume.
</details>
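
For the path-validation issues flagged above, the robust ordering is to expand and resolve before any containment test. A minimal sketch of that pattern (not the repo's actual fix):

```python
from pathlib import Path


def is_confined(workspace: Path, candidate: str) -> bool:
    """True only if `candidate` resolves to a path inside `workspace`."""
    # Expand "~" first, then resolve symlinks and "..". Checking the raw
    # string, or expanding after the check, is what enables escapes.
    target = Path(candidate).expanduser().resolve()
    root = workspace.resolve()
    return target == root or root in target.parents
```
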
<details><summary><h3>Confidence Score: 3/5</h3></summary>
- This PR is generally safe but has critical security issues in path
validation that must be fixed before merge
- Score reflects strong architecture and test coverage offset by real
security vulnerabilities: the tilde expansion bug in `security_hooks.py`
could allow sandbox escape, the race condition could cause message
duplication, and the silent ulimit failures could bypass resource
limits. The bubblewrap sandbox and allowlist enforcement are
well-designed, but the path validation bugs need fixing. The transcript
resume feature is properly feature-flagged. Overall the implementation
is solid but the security issues prevent a higher score.
- Pay close attention to
`backend/api/features/chat/sdk/security_hooks.py` (path traversal
vulnerability), `backend/api/features/chat/routes.py` (race condition),
`backend/api/features/chat/tools/sandbox.py` (silent resource limit
failures), and `backend/api/features/chat/sdk/service.py` (redundant
security check)
</details>
<details><summary><h3>Sequence Diagram</h3></summary>
```mermaid
sequenceDiagram
    participant Frontend
    participant Routes as routes.py
    participant SDKService as sdk/service.py
    participant ClaudeSDK as Claude Agent SDK CLI
    participant SecurityHooks as security_hooks.py
    participant ToolAdapter as tool_adapter.py
    participant CoPilotTools as tools/*
    participant Sandbox as sandbox.py (bwrap)
    participant DB as Database
    participant Redis as stream_registry

    Frontend->>Routes: POST /chat (user message)
    Routes->>SDKService: stream_chat_completion_sdk()
    SDKService->>DB: get_chat_session()
    DB-->>SDKService: session + messages

    alt Resume enabled AND transcript exists
        SDKService->>SDKService: validate_transcript()
        SDKService->>SDKService: write_transcript_to_tempfile()
        Note over SDKService: Pass --resume to SDK
    else No resume
        SDKService->>SDKService: _compress_conversation_history()
        Note over SDKService: Inject history into user message
    end

    SDKService->>SecurityHooks: create_security_hooks()
    SDKService->>ToolAdapter: create_copilot_mcp_server()
    SDKService->>ClaudeSDK: spawn subprocess with MCP server

    loop Streaming Conversation
        ClaudeSDK->>SDKService: AssistantMessage (text/tool_use)
        SDKService->>Frontend: StreamTextDelta / StreamToolInputAvailable
        alt Tool Call
            ClaudeSDK->>SecurityHooks: PreToolUse hook
            SecurityHooks->>SecurityHooks: validate path, check allowlist
            alt Tool blocked
                SecurityHooks-->>ClaudeSDK: deny
            else Tool allowed
                SecurityHooks-->>ClaudeSDK: allow
                ClaudeSDK->>ToolAdapter: call MCP tool
                alt Long-running tool (create_agent, edit_agent)
                    ToolAdapter->>Redis: register task
                    ToolAdapter->>DB: save OperationPendingResponse
                    ToolAdapter->>ToolAdapter: spawn background task
                    ToolAdapter-->>ClaudeSDK: OperationStartedResponse
                else Regular tool (find_block, bash_exec)
                    ToolAdapter->>CoPilotTools: execute()
                    alt bash_exec
                        CoPilotTools->>Sandbox: run_sandboxed()
                        Sandbox->>Sandbox: build bwrap command
                        Note over Sandbox: Network isolation,<br/>filesystem whitelist,<br/>resource limits
                        Sandbox-->>CoPilotTools: stdout, stderr, exit_code
                    end
                    CoPilotTools-->>ToolAdapter: result
                    ToolAdapter->>ToolAdapter: stash full output
                    ToolAdapter-->>ClaudeSDK: MCP response
                end
                SecurityHooks->>SecurityHooks: PostToolUse hook (log)
            end
        end
        ClaudeSDK->>SDKService: UserMessage (ToolResultBlock)
        SDKService->>ToolAdapter: pop_pending_tool_output()
        SDKService->>Frontend: StreamToolOutputAvailable
    end

    ClaudeSDK->>SecurityHooks: Stop hook
    SecurityHooks->>SDKService: transcript_path callback
    SDKService->>SDKService: read_transcript_file()
    SDKService->>DB: save transcript to session.sdkTranscript
    ClaudeSDK->>SDKService: ResultMessage (success)
    SDKService->>Frontend: StreamFinish
    SDKService->>DB: upsert_chat_session()
```
</details>
<sub>Last reviewed commit: 28c1121</sub>
---------
Co-authored-by: Swifty <craigswift13@gmail.com>

## 43b25b5e2f ci(frontend): Speed up E2E test job (#12090)
The frontend `e2e_test` job doesn't have a working build cache setup, causing really slow builds and therefore slow test jobs. These changes reduce total test runtime from ~12 minutes to ~5 minutes.

### Changes 🏗️

- Inject build cache config into the docker compose config; let `buildx bake` use the GHA cache directly
- Add `docker-ci-fix-compose-build-cache.py` script
- Optimize `backend/Dockerfile` + root `.dockerignore`
- Replace broken DIY pnpm store caching with `actions/setup-node` built-in cache management
- Add caching for test seed data created in the DB

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI
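
A rough sketch of the kind of transformation `docker-ci-fix-compose-build-cache.py` could perform, under the assumption that it rewrites each service's `build` section so `buildx bake` picks up the GitHub Actions cache (the exact cache scopes and structure are illustrative):

```python
import sys

import yaml


def inject_gha_cache(compose_path: str) -> None:
    """Add GHA cache_from/cache_to entries to every buildable service."""
    with open(compose_path) as f:
        compose = yaml.safe_load(f)
    for name, svc in compose.get("services", {}).items():
        build = svc.get("build")
        if not isinstance(build, dict):
            continue  # skip services without an expanded build section
        build["cache_from"] = [f"type=gha,scope={name}"]
        build["cache_to"] = [f"type=gha,scope={name},mode=max"]
    with open(compose_path, "w") as f:
        yaml.safe_dump(compose, f, sort_keys=False)


if __name__ == "__main__":
    inject_gha_cache(sys.argv[1])
```
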

## 85b6520710 feat(blocks): Add video editing blocks (#11796)
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.
### Changes 🏗️
**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)
**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations
**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI
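
For a sense of what the clip block's core looks like, here is a hedged sketch using the classic moviepy 1.x API (the real block wiring, schemas, and error types are omitted):

```python
from moviepy.editor import VideoFileClip


def clip_video(src: str, dst: str, start_time: float, end_time: float) -> str:
    """Cut [start_time, end_time] out of `src` and write it to `dst`."""
    if end_time <= start_time:
        raise ValueError("end_time must be greater than start_time")
    video = VideoFileClip(src)
    try:
        if end_time > video.duration:
            raise ValueError("end_time exceeds video duration")
        segment = video.subclip(start_time, end_time)
        segment.write_videofile(dst, codec="libx264", audio_codec="aac")
    finally:
        video.close()  # release the ffmpeg reader, as the blocks do
    return dst
```
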
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
- [x] Resource cleanup is implemented in `finally` blocks
- [x] Exception chaining is properly implemented
- [x] Input validation is in place
- [x] Test mocks are provided for CI environments
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
N/A - No configuration changes required.
---
> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces new external dependencies (plus container packages), which can impact runtime stability and resource usage; download/overlay blocks are present but disabled due to sandbox/policy concerns.
>
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video workflow blocks (download, clip, concat w/ transitions, loop, add-audio, text overlay, and ElevenLabs-powered narration), including shared utilities for codec selection, filename cleanup, and an ffmpeg-based chapter-strip workaround for MoviePy.
>
> Extends credentials/config to support ElevenLabs (`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker runtime packages (`ffmpeg`, `imagemagick`).
>
> Improves file/reference handling end-to-end by embedding MIME types in `workspace://...#mime` outputs and updating frontend rendering to detect video vs image from MIME fragments (and broaden supported audio/video extensions), with optional enhanced output rendering behind a feature flag in the legacy builder UI.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit</sup>

## 8b83bb8647 feat(backend): unified hybrid search with embedding backfill for all content types (#11767)
## Summary

This PR extends the embedding system to support **blocks** and **documentation** content types in addition to store agents, and introduces **unified hybrid search** across all content types using a single `UnifiedContentEmbedding` table.

### Key Changes

1. **Unified Hybrid Search Architecture**
   - Added `search` tsvector column to `UnifiedContentEmbedding` table
   - New `unified_hybrid_search()` function searches across all content types (agents, blocks, docs)
   - Updated `hybrid_search()` for store agents to use `UnifiedContentEmbedding.search`
   - Removed deprecated `search` column from `StoreListingVersion` table
2. **Pluggable Content Handler Architecture**
   - Created abstract `ContentHandler` base class for extensibility
   - Implemented handlers: `StoreAgentHandler`, `BlockHandler`, `DocumentationHandler`
   - Registry pattern for easy addition of new content types
3. **Block Embeddings**
   - Discovers all blocks using `get_blocks()`
   - Extracts searchable text from: name, description, categories, input/output schemas
4. **Documentation Embeddings**
   - Scans `/docs/` directory for `.md` and `.mdx` files
   - Extracts title from first `#` heading, or uses the filename as a fallback
5. **Hybrid Search Graceful Degradation**
   - Falls back to lexical-only search if query embedding generation fails
   - Redistributes semantic weight proportionally to the other components
   - Logs a warning instead of throwing an error
6. **Database Migrations**
   - `20260115200000_add_unified_search_tsvector`: adds search column to UnifiedContentEmbedding with auto-update trigger
   - `20260115210000_remove_storelistingversion_search`: removes deprecated search column and updates StoreAgent view
7. **Orphan Cleanup**
   - `cleanup_orphaned_embeddings()` removes embeddings for deleted content
   - Always runs after backfill, even at 100% coverage

### Review Comments Addressed

- ✅ SQL parameter index bug when user_id provided (embeddings.py)
- ✅ Early return skipping cleanup at 100% coverage (scheduler.py)
- ✅ Inconsistent return structure across code paths (scheduler.py)
- ✅ SQL UNION syntax error: added parentheses for ORDER BY/LIMIT (hybrid_search.py)
- ✅ Version numeric ordering in aggregations (migration)
- ✅ Embedding dimension uses EMBEDDING_DIM constant

### Files Changed

- `backend/api/features/store/content_handlers.py` (NEW): handler architecture
- `backend/api/features/store/embeddings.py`: refactored to use handlers
- `backend/api/features/store/hybrid_search.py`: unified search + graceful degradation
- `backend/executor/scheduler.py`: process all content types, consistent returns
- `migrations/20260115200000_add_unified_search_tsvector/`: add tsvector to unified table
- `migrations/20260115210000_remove_storelistingversion_search/`: remove old search column
- `schema.prisma`: updated UnifiedContentEmbedding and StoreListingVersion models
- `*_test.py`: added tests for unified_hybrid_search

## Test Plan

1. ✅ All tests passing on Python 3.11, 3.12, 3.13
2. ✅ Type checks passing
3. ✅ CodeRabbit and Sentry reviews addressed
4. Deploy to staging and verify:
   - Backfill job processes all content types
   - Search results include blocks and docs
   - Search works without OpenAI API (graceful degradation)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
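
To make the graceful-degradation item in (5) concrete, here is a small sketch of proportional weight redistribution; the component names are illustrative:

```python
def redistribute_weights(weights: dict[str, float], failed: str) -> dict[str, float]:
    """Drop a failed component and scale the rest so weights still sum to 1."""
    remaining = {k: v for k, v in weights.items() if k != failed}
    total = sum(remaining.values())
    if total == 0:
        raise ValueError("no remaining search components")
    return {k: v / total for k, v in remaining.items()}


# e.g. query embedding generation failed, so semantic scoring is dropped:
# redistribute_weights({"semantic": 0.6, "lexical": 0.3, "recency": 0.1}, "semantic")
# -> {"lexical": 0.75, "recency": 0.25}
```
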

## 843c487500 feat(backend): add prisma types stub generator for pyright compatibility (#11736)
Prisma's generated `types.py` file is 57,000+ lines with complex recursive TypedDict definitions that exhaust Pyright's type inference budget. This causes random type errors and makes the type checker unreliable.

### Changes 🏗️

- Add `gen_prisma_types_stub.py` script that generates a lightweight `.pyi` stub file
- The stub preserves safe types (Literal, TypeVar) while collapsing complex TypedDicts to `dict[str, Any]`
- Integrate stub generation into all workflows that run `prisma generate`:
  - `platform-backend-ci.yml`
  - `claude.yml`
  - `claude-dependabot.yml`
  - `copilot-setup-steps.yml`
  - `docker-compose.platform.yml`
  - `Dockerfile`
  - `Makefile` (migrate & reset-db targets)
  - `linter.py` (lint & format commands)
- Add `gen-prisma-stub` poetry script entry
- Fix two pre-existing type errors that were previously masked:
  - `store/db.py`: Replace private type `_StoreListingVersion_version_OrderByInput` with dict literal
  - `airtable/_webhook.py`: Add cast for `Serializable` type

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run format` - passes with 0 errors (down from 57+)
  - [x] Run `poetry run lint` - passes with 0 errors
  - [x] Run `poetry run gen-prisma-stub` - generates stub successfully
  - [x] Verify stub file is created at correct location with proper content

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

## Summary by CodeRabbit

* **Chores**
  * Added a lightweight Prisma type-stub generator and integrated it into build, lint, CI/CD, and container workflows.
  * Build, migration, formatting, and lint steps now generate these stubs to improve type-checking performance and reduce overhead during builds and deployments.
  * Exposed a project command to run stub generation manually.
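
A toy version of the stub-generation idea, to show the shape of the transformation (the real `gen_prisma_types_stub.py` has more heuristics and emits a complete `.pyi` file):

```python
import ast


def generate_stub(source: str) -> str:
    """Emit a .pyi body: keep cheap types, collapse TypedDicts."""
    tree = ast.parse(source)
    lines = ["from typing import Any, Literal, TypeVar", ""]
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            bases = [ast.unparse(b) for b in node.bases]
            if any("TypedDict" in b for b in bases):
                # Complex recursive TypedDicts blow Pyright's inference
                # budget: collapse each one to a plain dict alias.
                lines.append(f"{node.name} = dict[str, Any]")
        elif isinstance(node, (ast.Assign, ast.AnnAssign)):
            text = ast.unparse(node)
            if "Literal" in text or "TypeVar" in text:
                lines.append(text)  # safe types pass through unchanged
    return "\n".join(lines) + "\n"
```
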

## 8610118ddc ai sucks - fixing

## ebb4ebb025 include parital types in second place

## cb532e1c4d update docker file to include partial types

## 50689218ed feat(backend): implement comprehensive load testing performance fixes + database health improvements (#10965)

## f4538d6f5a build(backend): Change base image to debian:13-slim w/ Python 3.13 (#10654)
- Resolves #10653

The objective is to move to a base image with fewer active vulnerabilities. Hence the choice of `debian:13-slim` (0 high, 1 medium, 21 low severity vulnerabilities), a huge improvement over our current base image `python:3.11.10-slim-bookworm` (4 high, 11 medium, 15 low severity).

### Changes 🏗️

- Change backend base image to `debian:13-slim`
- Use Python 3.13
- Fix now-deprecated use of class property in `AppProcess` and `BaseAppService`
- Expand backend CI matrix to run with Python 3.11 through 3.13
- Update Python version constraint in `pyproject.toml` to include Python 3.13

Also, unrelated:

- Update `autogpt-platform-backend` package version to `v0.6.22`, the latest release

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI passes
  - [x] No new errors in deployment logs
  - [x] Everything seems to work normally in deployment

## 4bfeddc03d feat(platform/docker): add frontend service to docker-compose with env config improvements (#10615)
## Summary

This PR adds the frontend service to the Docker Compose configuration, enabling `docker compose up` to run the complete stack, including the frontend. It also implements comprehensive environment variable improvements, unified .env file support, and fixes Docker networking issues.

## Key Changes

### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and `docker-compose.platform.yml`
- **Production build**: Uses `pnpm build` + `serve` instead of the dev server for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services (`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating environment values

### 📁 Unified .env File Support
- **Frontend .env loading**: Automatically loads `.env` file during Docker build and runtime
- **Backend .env loading**: Optional `.env` file support with fallback to sensible defaults in `settings.py`
- **Single source of truth**: All `NEXT_PUBLIC_*` variables and API keys can be defined in the respective `.env` files
- **Docker integration**: Updated `.dockerignore` to include `.env` files in the build context
- **Git tracking**: Frontend and backend `.env` files are now trackable (removed from gitignore)

### 🔧 Environment Variable Architecture
- **Dual environment strategy**:
  - Server-side code uses Docker service names (`http://rest_server:8006/api`)
  - Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment variables
- **Network compatibility**: Fixes connection issues between frontend and backend containers
- **Shared backend variables**: Common environment variables (service hosts, auth settings) centralized using YAML anchors

### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`) with server-side priority
- **Updated all frontend code** to use shared environment helpers instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through helper functions
- **settings.py improvements**: Better defaults for CORS origins and optional .env file loading

### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend service and shared backend env vars
- `frontend/Dockerfile` - Simplified build process to use .env files directly
- `backend/settings.py` - Optional .env loading and better defaults
- `frontend/src/lib/env-config.ts` - New centralized environment configuration
- `.dockerignore` - Allow .env files in build context
- `.gitignore` - Updated to allow frontend/backend .env files
- Multiple frontend files - Updated to use env helpers
- Updates to both auto installer scripts to work with the latest setup!

## Benefits

- ✅ **Single command deployment**: `docker compose up` now runs everything
- ✅ **Better reliability**: Production build reduces memory usage and crashes
- ✅ **Network compatibility**: Proper container-to-container communication
- ✅ **Maintainable config**: Centralized environment variable management with .env files
- ✅ **Development friendly**: Works in both Docker and local development
- ✅ **API key management**: Easy configuration through .env files for all services
- ✅ **No more manual env vars**: Frontend and backend automatically load their respective .env files

## Testing

- ✅ Verified Docker service communication works correctly
- ✅ Frontend responds and serves content properly
- ✅ Environment variables are correctly resolved in both server and client contexts
- ✅ No connection errors after implementing service dependencies
- ✅ .env file loading works correctly in both build and runtime phases
- ✅ Backend services work with and without .env files present

### Checklist 📋

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>

## 3e0742f9c5 Spike/infra pooling (#9812)
Swap to pooled Supabase connections rather than depending on a fixed number of max open connections.

### Changes 🏗️

Adds a direct-connection URL to be used throughout the system.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test thoroughly all of the endpoints in the dev env with switched infra matching the PR
  - [x] Follow the new release plan tests
  - [x] Follow the old release plan tests

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

<details>
<summary>Configuration changes</summary>

- Change how we connect to the database: use the direct URL when configured, and the database URL when not
- Update Prisma for this
- Have the direct URL default match the database URL default

</details>
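
The selection logic amounts to a one-liner; a sketch with assumed environment variable names:

```python
import os


def database_url() -> str:
    """Prefer the direct (non-pooled) URL when configured, e.g. for
    migrations; fall back to the pooled DATABASE_URL otherwise."""
    return os.environ.get("DIRECT_DATABASE_URL") or os.environ["DATABASE_URL"]
```
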

## d638c1f484 Fix Poetry v2.0.0 compatibility (#9197)
Make all changes necessary to make everything work with Poetry v2.0.0.

- Resolves #9196

## Changes

- Removed `--no-update` flag from `poetry lock` command in codebase
- Removed extra path arguments from `poetry -C [path] run [command]` occurrences
- Regenerated all lock files in hierarchical order
- Added workaround for Poetry bug where `packages.[i].format` is now suddenly required

Additionally:

- Fixed up .dockerignore
  - Fixes .venv being erroneously copied over from local
  - Fixes build context bloat (300MB -> 2.5MB)
- Fixed warnings about entrypoint script not being installed in docker builds

### Relevant (breaking) changes in v2.0.0

- `--no-update` flag no longer exists for `poetry lock` as it has become default behavior
- The `-C` option now actually changes the directory, so any path arguments in `poetry run` commands can/must be removed
- Poetry v2.0.0 uses the new v2.1 lock file spec, so all lock files have to be regenerated to avoid false-positive lock file updates and checks on future PRs
- **BUG:** when specifying `poetry.tool.packages`, `format` is required now - python-poetry/poetry#9961

Full Poetry v2.0.0 release notes and change log: https://python-poetry.org/blog/announcing-poetry-2.0.0

## 2de5e3dd83 feat(platform): Agent Store V2 (#8874)
# 🌎 Overview

AutoGPT Store Version 2 expands on the Pre-Store by enhancing agent discovery, providing richer content presentation, and introducing new user engagement features. The focus is on creating a visually appealing and interactive marketplace that allows users to explore and evaluate agents through images, videos, and detailed descriptions.

### Vision

To create a visually compelling and interactive open-source marketplace for autonomous AI agents, where users can easily discover, evaluate, and interact with agents through media-rich listings, ratings, and version history.

### Objectives

- 📊 Incorporate visuals (icons, images, videos) into agent listings.
- ⭐ Introduce a rating system and agent run count.
- 🔄 Provide version history and update logs from creators.
- 🔍 Improve user experience with advanced search and filtering features.

### Changes 🏗️

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] ...

<details>
<summary>Example test plan</summary>

- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly

</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

<details>
<summary>Examples of configuration changes</summary>

- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases

</details>

---------

Co-authored-by: Bently <tomnoon9@gmail.com>
Co-authored-by: Aarushi <aarushik93@gmail.com>

## 3c0dea0017 [Snyk] Security upgrade python from 3.11-slim-buster to 3.11.10-slim-bookworm (#8619)
fix: autogpt_platform/backend/Dockerfile to reduce vulnerabilities

The following vulnerabilities are fixed with an upgrade:

- https://snyk.io/vuln/SNYK-DEBIAN10-GNUTLS28-6159414
- https://snyk.io/vuln/SNYK-DEBIAN10-NCURSES-1655739
- https://snyk.io/vuln/SNYK-DEBIAN10-NCURSES-5421196

Co-authored-by: snyk-bot <snyk-bot@snyk.io>

## 1622a4aaa8 refactor(backend): Move credentials storage to prisma user (#8283)
Squashed commit history:

* feat(frontend,backend): testing
* feat: testing
* feat(backend): it works for reading email
* feat(backend): more docs on google
* fix(frontend,backend): formatting
* feat(backend): more logigin (i know this should be debug)
* feat(backend): make real the default scopes
* feat(backend): tests and linting
* fix: code review prep
* feat: sheets block
* feat: liniting
* Update route.ts
* Update autogpt_platform/backend/backend/integrations/oauth/google.py (Co-authored-by: Reinier van der Leer <pwuts@agpt.co>)
* Update autogpt_platform/backend/backend/server/routers/integrations.py (Co-authored-by: Reinier van der Leer <pwuts@agpt.co>)
* fix: revert opener change
* feat(frontend): add back opener required to work on mac edge
* feat(frontend): drop typing list import from gmail
* fix: code review comments
* feat: code review changes
* feat: code review changes
* fix(backend): move from asserts to checks so they don't get optimized away in the future
* fix(backend): code review changes
* fix(backend): remove google specific check
* fix: add typing
* fix: only enable google blocks when oauth is configured for google
* fix: errors are real and valid outputs always when output
* fix(backend): add provider detail for debuging scope declines
* Update autogpt_platform/frontend/src/components/integrations/credentials-input.tsx (Co-authored-by: Reinier van der Leer <pwuts@agpt.co>)
* fix(frontend): enhance with comment, typeof error isn't known so this is best way to ensure the stringifyication will work
* feat: code review change requests
* fix: linting
* fix: reduce error catching
* fix: doc messages in code
* fix: check the correct scopes object 😄
* fix: remove double (and not needed) try catch
* fix: lint
* fix: scopes
* feat: handle the default scopes better
* feat: better email objectification
* feat: process attachements turns out an email doesn't need a body
* fix: lint
* Update google.py
* Update autogpt_platform/backend/backend/data/block.py (Co-authored-by: Reinier van der Leer <pwuts@agpt.co>)
* fix: quit trying and except failure
* Update autogpt_platform/backend/backend/server/routers/integrations.py (Co-authored-by: Reinier van der Leer <pwuts@agpt.co>)
* feat: don't allow expired states
* fix: clarify function name and purpose
* feat: code links updates
* feat: additional docs on adding a block
* fix: type hint missing which means the block won't work
* fix: linting
* fix: docs formatting
* Update issues.py
* fix: improve the naming
* fix: formatting
* Update new_blocks.md
* Update new_blocks.md
* feat: better docs on what the args mean
* feat: more details on yield
* Update new_blocks.md
* fix: remove ignore from docs build
* feat: initial migration
* feat: migration tested with supabase-> prisma data location
* add custom migrations and script
* update migration command
* formatting and linting
* updated migration script
* add direct db url
* add find files
* rename
* use binary instead of source
* temp adding supabase
* remove unused functions
* adding missed merge
* fix: commit hash for lock
* ci: fix lint
* fix: minor bugs that prevented connecting and migrating to dbs and auth
* fix: linting
* fix: missed await
* fix(backend): phase one pr updates
* fix: handle error with returning user object from database_manager
* fix: linting
* Address comments
* Make the migration safe
* Update migration doc
* Move misplaced model functions
* Grammar
* Revert lock
* Remove irrelevant changes
* Remove irrelevant changes
* Avoid adding trigger on public schema

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Aarushi <aarushik93@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>

## ef7cfbb860 refactor: AutoGPT Platform Stealth Launch Repo Re-Org (#8113)
Restructuring the Repo to make clear the difference between classic AutoGPT and the AutoGPT Platform:

* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
  * Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
  * `rnd/autogpt_builder` -> `autogpt_platform/frontend`
  * `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly