# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:

- Backend (`/backend`): Python FastAPI server with async support
- Frontend (`/frontend`): Next.js React application
- Shared Libraries (`/autogpt_libs`): Common Python utilities
## Essential Commands

### Backend Development
```bash
# Install dependencies
cd backend && poetry install

# Run database migrations
poetry run prisma migrate dev

# Start all services (database, redis, rabbitmq, clamav)
docker compose up -d

# Run the backend server
poetry run serve

# Run tests
poetry run test

# Run specific test
poetry run pytest path/to/test_file.py::test_function_name

# Run block tests (tests that validate all blocks work correctly)
poetry run pytest backend/blocks/test/test_block.py -xvs

# Run tests for a specific block (e.g., GetCurrentTimeBlock)
poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetCurrentTimeBlock]' -xvs

# Lint and format
# Prefer `format` if you just want to "fix" things and only see the errors that can't be autofixed
poetry run format  # Black + isort
poetry run lint    # ruff
```
More details can be found in `TESTING.md`.
### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Frontend Development
```bash
# Install dependencies
cd frontend && npm install

# Start development server
npm run dev

# Run E2E tests
npm run test

# Run Storybook for component development
npm run storybook

# Build production
npm run build

# Type checking
npm run types
```
## Architecture Overview

### Backend Architecture
- API Layer: FastAPI with REST and WebSocket endpoints
- Database: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- Queue System: RabbitMQ for async task processing
- Execution Engine: Separate executor service processes agent workflows
- Authentication: JWT-based with Supabase integration
- Security: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- Framework: Next.js App Router with React Server Components
- State Management: React hooks + Supabase client for real-time updates
- Workflow Builder: Visual graph editor using @xyflow/react
- UI Components: Radix UI primitives with Tailwind CSS styling
- Feature Flags: LaunchDarkly integration
### Key Concepts
- Agent Graphs: Workflow definitions stored as JSON, executed by the backend
- Blocks: Reusable components in `/backend/blocks/` that perform specific tasks
- Integrations: OAuth and API connections stored per user
- Store: Marketplace for sharing agent templates
- Virus Scanning: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema

Key models (defined in `/backend/schema.prisma`):

- User: Authentication and profile data
- AgentGraph: Workflow definitions with version control
- AgentGraphExecution: Execution history and results
- AgentNode: Individual nodes in a workflow
- StoreListing: Marketplace listings for sharing agents
## Environment Configuration

### Configuration Files

- Backend: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- Frontend: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- Platform: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)
### Docker Environment Loading Order

1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence
### Key Points

- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
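As a sketch, the loading pattern above might look like this in a compose file (service names and values here are illustrative, not copied from the real compose files):

```yaml
# Illustrative fragment only; the real compose files differ.
x-backend-env: &backend-env       # YAML anchor for consistent configuration
  env_file:
    - .env.default                # base config, tracked in git
    - .env                        # user overrides, gitignored (loaded later, wins)

services:
  backend:
    <<: *backend-env              # reuse the shared env_file config
    environment:
      - REDIS_HOST=redis          # service-specific override, beats env_file
```

Shell environment variables (e.g. `REDIS_HOST=other docker compose up`) would still override all of the above.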
## Common Development Tasks

**Adding a new block:**

1. Create a new file in `/backend/backend/blocks/`
2. Inherit from the `Block` base class
3. Define input/output schemas
4. Implement the `run` method
5. Register in the block registry
6. Generate the block UUID using `uuid.uuid4()`
Note: when creating many new blocks, analyze the interfaces of each block and consider whether they would work well together in a graph-based editor, or whether they would struggle to connect productively. For example: do the inputs and outputs tie together well?
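The steps above could be sketched roughly as follows; the class name, schema shapes, and id handling here are simplified stand-ins for illustration, not the real AutoGPT `Block` interfaces:

```python
import uuid

class GetGreetingBlock:
    """Hypothetical block following the steps above (not real AutoGPT code)."""

    # Step 6: in the real codebase the UUID is generated once with
    # uuid.uuid4() and hard-coded; generated at import time here for brevity.
    id = str(uuid.uuid4())

    # Step 3: input/output schemas, simplified to plain dicts
    input_schema = {"name": {"type": "string"}}
    output_schema = {"greeting": {"type": "string"}}

    # Step 4: the run method transforms validated input into output
    def run(self, input_data: dict) -> dict:
        return {"greeting": f"Hello, {input_data['name']}!"}

block = GetGreetingBlock()
print(block.run({"name": "Ada"}))  # {'greeting': 'Hello, Ada!'}
```

Note how the output keys mirror `output_schema` — this is what lets blocks connect to each other cleanly in the graph editor.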
**Modifying the API:**

1. Update the route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in the same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
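As a framework-free sketch of that flow (the real routes use FastAPI routers and Pydantic models; the model and function names below are invented for illustration):

```python
from dataclasses import dataclass, asdict

# Stand-ins for Pydantic request/response models, which would live
# alongside the route in /backend/backend/server/routers/.
@dataclass
class CreateGraphRequest:
    name: str

@dataclass
class GraphResponse:
    id: int
    name: str

def create_graph(req: CreateGraphRequest) -> GraphResponse:
    # A real handler would persist via Prisma and be mounted on a router;
    # here we just echo the request back with a fake id.
    return GraphResponse(id=1, name=req.name)

resp = create_graph(CreateGraphRequest(name="My Agent"))
print(asdict(resp))  # {'id': 1, 'name': 'My Agent'}
```

Keeping the request/response types next to the route is what makes the colocated tests straightforward to write.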
**Frontend feature development:**

1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
## Security Implementation

**Cache Protection Middleware:**

- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow-list approach: only explicitly permitted paths can be cached
- Cacheable paths include: static assets (`/static/*`, `/_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both the main API server and external API applications
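The allow-list logic can be sketched as follows (the path patterns, cache lifetime, and function name are assumptions for illustration, not the actual middleware code):

```python
from fnmatch import fnmatch

# Assumed example entries; the real CACHEABLE_PATHS lives in
# /backend/backend/server/middleware/security.py and differs.
CACHEABLE_PATHS = ["/static/*", "/_next/static/*", "/health"]

def cache_control_header(path: str) -> str:
    # Only explicitly allow-listed paths may be cached; everything else
    # gets the restrictive default so sensitive data is never cached
    # by browsers or intermediate proxies.
    if any(fnmatch(path, pattern) for pattern in CACHEABLE_PATHS):
        return "public, max-age=3600"
    return "no-store, no-cache, must-revalidate, private"

print(cache_control_header("/static/logo.png"))
print(cache_control_header("/api/v1/credentials"))
```

The key design choice is deny-by-default: forgetting to register a new endpoint fails safe (no caching) rather than leaking data.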
## Creating Pull Requests

- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`).
- Use conventional commit messages (see below).
- Fill out the `.github/PULL_REQUEST_TEMPLATE.md` template as the PR description.
- Run the GitHub pre-commit hooks to ensure code quality.
## Reviewing/Revising Pull Requests

- When the user runs `/pr-comments` or tries to fetch them, also run `gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews` to get the reviews
- Use `gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews/[review_id]/comments` to get the review contents
- Use `gh api /repos/Significant-Gravitas/AutoGPT/issues/[issuenum]/comments` to get the PR-specific comments
## Conventional Commits

Use the Conventional Commits format, `<type>(<scope>): <description>`, for commit messages and Pull Request titles:
**Conventional Commit Types:**

- `feat`: Introduces a new feature to the codebase
- `fix`: Patches a bug in the codebase
- `refactor`: Code change that neither fixes a bug nor adds a feature; also applies to removing features
- `ci`: Changes to CI configuration
- `docs`: Documentation-only changes
- `dx`: Improvements to the developer experience
**Recommended Base Scopes:**

- `platform`: Changes affecting both frontend and backend
- `frontend`
- `backend`
- `infra`
- `blocks`: Modifications/additions of individual blocks
**Subscope Examples:**

- `backend/executor`
- `backend/db`
- `frontend/builder` (includes changes to the block UI component)
- `infra/prod`
Use these scopes and subscopes for clarity and consistency in commit messages.
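Putting the types, scopes, and subscopes together, commit messages would look like this (the examples are illustrative):

```text
feat(backend/executor): add retry logic for failed node executions
fix(frontend/builder): prevent crash when deleting a connected node
docs(platform): update README for the new dev command
dx(frontend): rename the type-check command to types
```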