mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-01-07 22:33:57 -05:00)
feat(platform/docker): add frontend service to docker-compose with env config improvements (#10615)
## Summary

This PR adds the frontend service to the Docker Compose configuration, enabling `docker compose up` to run the complete stack, including the frontend. It also implements comprehensive environment variable improvements, unified .env file support, and fixes Docker networking issues.

## Key Changes

### 🐳 Docker Compose Improvements

- **Added frontend service** to `docker-compose.yml` and `docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of the dev server for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services (`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating environment values

### 📁 Unified .env File Support

- **Frontend .env loading**: Automatically loads the `.env` file during Docker build and runtime
- **Backend .env loading**: Optional `.env` file support with fallback to sensible defaults in `settings.py`
- **Single source of truth**: All `NEXT_PUBLIC_*` variables and API keys can be defined in the respective `.env` files
- **Docker integration**: Updated `.dockerignore` to include `.env` files in the build context
- **Git tracking**: Frontend and backend `.env` files are now trackable (removed from gitignore)

### 🔧 Environment Variable Architecture

- **Dual environment strategy**:
  - Server-side code uses Docker service names (`http://rest_server:8006/api`)
  - Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment variables
- **Network compatibility**: Fixes connection issues between frontend and backend containers
- **Shared backend variables**: Common environment variables (service hosts, auth settings) centralized using YAML anchors

### 🛠️ Code Improvements

- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`) with server-side priority (see the sketch after this list)
- **Updated all frontend code** to use shared environment helpers instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through helper functions
- **Settings.py improvements**: Better defaults for CORS origins and optional .env file loading

### 🔗 Files Changed

- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend service and shared backend env vars
- `frontend/Dockerfile` - Simplified build process to use .env files directly
- `backend/settings.py` - Optional .env loading and better defaults
- `frontend/src/lib/env-config.ts` - New centralized environment configuration
- `.dockerignore` - Allow .env files in build context
- `.gitignore` - Updated to allow frontend/backend .env files
- Multiple frontend files - Updated to use env helpers
- Updates to both auto-installer scripts to work with the latest setup!
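To make the dual environment strategy concrete, here is a minimal sketch of the kind of helper the centralized `env-config.ts` provides. This is an illustration only, not the actual file contents: the export name `getAgptServerApiUrl` and the env var names `AGPT_SERVER_API_URL` / `NEXT_PUBLIC_AGPT_SERVER_URL` are assumptions made for the sketch.

```ts
// Hypothetical sketch of frontend/src/lib/env-config.ts; the real file's
// exports and variable names may differ.

// True on the server (Node.js), false in the browser.
const isServerSide = typeof window === "undefined";

/**
 * Resolve the backend REST API base URL with server-side priority:
 * server-rendered code runs inside the Docker network and can reach the
 * backend by service name, while the browser must go through localhost.
 * AGPT_SERVER_API_URL is an assumed placeholder variable name.
 */
export function getAgptServerApiUrl(): string {
  if (isServerSide && process.env.AGPT_SERVER_API_URL) {
    // e.g. http://rest_server:8006/api (Docker service name)
    return process.env.AGPT_SERVER_API_URL;
  }
  // e.g. http://localhost:8006/api (browser-reachable URL)
  return process.env.NEXT_PUBLIC_AGPT_SERVER_URL ?? "http://localhost:8006/api";
}

// Usage: components call the helper instead of reading process.env directly,
// e.g. `await fetch(`${getAgptServerApiUrl()}/...`)`.
```

The design point is that the server/browser branch lives in exactly one module, so client components never need to know that Docker service hostnames exist.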
## Benefits

- ✅ **Single command deployment**: `docker compose up` now runs everything
- ✅ **Better reliability**: Production build reduces memory usage and crashes
- ✅ **Network compatibility**: Proper container-to-container communication
- ✅ **Maintainable config**: Centralized environment variable management with .env files
- ✅ **Development friendly**: Works in both Docker and local development
- ✅ **API key management**: Easy configuration through .env files for all services
- ✅ **No more manual env vars**: Frontend and backend automatically load their respective .env files

## Testing

- ✅ Verified Docker service communication works correctly
- ✅ Frontend responds and serves content properly
- ✅ Environment variables are correctly resolved in both server and client contexts
- ✅ No connection errors after implementing service dependencies
- ✅ .env file loading works correctly in both build and runtime phases
- ✅ Backend services work with and without .env files present

### Checklist 📋

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
@@ -15,6 +15,7 @@
!autogpt_platform/backend/pyproject.toml
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env

# Platform - Market
!autogpt_platform/market/market/
@@ -34,6 +35,7 @@
## config
!autogpt_platform/frontend/*.config.*
!autogpt_platform/frontend/.env.*
!autogpt_platform/frontend/.env

# Classic - AutoGPT
!classic/original_autogpt/autogpt/
.github/PULL_REQUEST_TEMPLATE.md (vendored, 3 changes)
@@ -24,7 +24,8 @@
</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)
.github/workflows/platform-frontend-ci.yml (vendored, 15 changes)
@@ -176,11 +176,7 @@ jobs:

      - name: Copy default supabase .env
        run: |
          cp ../.env.example ../.env

      - name: Copy backend .env
        run: |
          cp ../backend/.env.example ../backend/.env
          cp ../.env.default ../.env

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
@@ -252,15 +248,6 @@ jobs:
      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Setup .env
        run: cp .env.example .env

      - name: Build frontend
        run: pnpm build --turbo
        # uses Turbopack, much faster and safe enough for a test pipeline
        env:
          NEXT_PUBLIC_PW_TEST: true

      - name: Install Browser 'chromium'
        run: pnpm playwright install --with-deps chromium
.gitignore (vendored, 3 changes)
@@ -5,6 +5,8 @@ classic/original_autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
# Root .env files
/.env
azure.yaml
.vscode
.idea/*
@@ -121,7 +123,6 @@ celerybeat.pid

# Environments
.direnv/
.env
.venv
env/
venv*/
@@ -114,13 +114,31 @@ Key models (defined in `/backend/schema.prisma`):
- `StoreListing`: Marketplace listings for sharing agents

### Environment Configuration
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services

#### Configuration Files

- **Backend**: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- **Frontend**: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- **Platform**: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)

#### Docker Environment Loading Order

1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence

#### Key Points

- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern

### Common Development Tasks

**Adding a new block:**

1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
@@ -8,7 +8,6 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI

- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)

### Running the System

@@ -24,10 +23,10 @@ To run the AutoGPT Platform, follow these steps:
2. Run the following command:

   ```
   cp .env.example .env
   cp .env.default .env
   ```

   This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
   This command will copy the `.env.default` file to `.env`. You can modify the `.env` file to add your own environment variables.

3. Run the following command:

@@ -37,44 +36,7 @@ To run the AutoGPT Platform, follow these steps:

   This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.

4. Navigate to `frontend` within the `autogpt_platform` directory:

   ```
   cd frontend
   ```

   You will need to run your frontend application separately on your local machine.

5. Run the following command:

   ```
   cp .env.example .env.local
   ```

   This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.

6. Run the following command:

   Enable corepack and install dependencies by running:

   ```
   corepack enable
   pnpm i
   ```

   Generate the API client (this step is required before running the frontend):

   ```
   pnpm generate:api-client
   ```

   Then start the frontend application in development mode:

   ```
   pnpm dev
   ```

7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.

### Docker Compose Commands

@@ -184,6 +146,7 @@ The platform includes scripts for generating and managing the API client:
If you need to update the API client after making changes to the backend API:

1. Ensure the backend services are running:

   ```
   docker compose up -d
   ```
autogpt_platform/backend/.dockerignore (new file, 52 lines)
@@ -0,0 +1,52 @@
# Development and testing files
**/__pycache__
**/*.pyc
**/*.pyo
**/*.pyd
**/.Python
**/env/
**/venv/
**/.venv/
**/pip-log.txt
**/.pytest_cache/
**/test-results/
**/snapshots/
**/test/

# IDE and editor files
**/.vscode/
**/.idea/
**/*.swp
**/*.swo
*~

# OS files
.DS_Store
Thumbs.db

# Logs
**/*.log
**/logs/

# Git
.git/
.gitignore

# Documentation
**/*.md
!README.md

# Local development files
.env
.env.local
**/.env.test

# Build artifacts
**/dist/
**/build/
**/target/

# Docker files (avoid recursion)
Dockerfile*
docker-compose*
.dockerignore
@@ -1,3 +1,9 @@
# Backend Configuration
# This file contains environment variables that MUST be set for the AutoGPT platform
# Variables with working defaults in settings.py are not included here

## ===== REQUIRED DATABASE CONFIGURATION ===== ##
# PostgreSQL Database Connection
DB_USER=postgres
DB_PASS=your-super-secret-and-long-postgres-password
DB_NAME=postgres
@@ -10,74 +16,49 @@ DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
ENABLE_AUTH=true

# EXECUTOR
NUM_GRAPH_WORKERS=10

BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]

# generate using `from cryptography.fernet import Fernet;Fernet.generate_key().decode()`
ENCRYPTION_KEY='dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw='
UNSUBSCRIBE_SECRET_KEY = 'HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio='

## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password

ENABLE_CREDIT=false
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7

# What environment things should be logged under: local dev or prod
APP_ENV=local
# What environment to behave as: "local" or "cloud"
BEHAVE_AS=local
PYRO_HOST=localhost
SENTRY_DSN=

# Email For Postmark so we can send emails
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=

## User auth with Supabase is required for any of the 3rd party integrations with auth to work.
ENABLE_AUTH=true
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long

# RabbitMQ credentials -- Used for communication between services
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=
UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio=

## GCS bucket is required for marketplace and library functionality
## ===== IMPORTANT OPTIONAL CONFIGURATION ===== ##
# Platform URLs (set these for webhooks and OAuth to work)
PLATFORM_BASE_URL=http://localhost:8000
FRONTEND_BASE_URL=http://localhost:3000

# Media Storage (required for marketplace and library functionality)
MEDIA_GCS_BUCKET_NAME=

## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# FRONTEND_BASE_URL=http://localhost:3000
## ===== API KEYS AND OAUTH CREDENTIALS ===== ##
# All API keys below are optional - only add what you need

## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
## If you are developing locally, you can use something like ngrok to get a publc URL
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=http://localhost:3000

## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the backend secret key
TURNSTILE_SECRET_KEY=
## This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify

LAUNCH_DARKLY_SDK_KEY=

## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
# AI/LLM Services
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
LLAMA_API_KEY=
AIML_API_KEY=
OPEN_ROUTER_API_KEY=
NVIDIA_API_KEY=

# OAuth Credentials
# For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
# e.g. http://localhost:3000/auth/integrations/oauth_callback

@@ -87,7 +68,6 @@ GITHUB_CLIENT_SECRET=

# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>

# You'll need to add/enable the following scopes (minimum):
# https://console.developers.google.com/apis/api/gmail.googleapis.com/overview ?project=<your_project_id>
# https://console.cloud.google.com/apis/library/sheets.googleapis.com/ ?project=<your_project_id>
@@ -123,104 +103,65 @@ LINEAR_CLIENT_SECRET=
TODOIST_CLIENT_ID=
TODOIST_CLIENT_SECRET=

## ===== OPTIONAL API KEYS ===== ##

# LLM
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AIML_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
LLAMA_API_KEY=

# Reddit
# Go to https://www.reddit.com/prefs/apps and create a new app
# Choose "script" for the type
# Fill in the redirect uri as <your_frontend_url>/auth/integrations/oauth_callback, e.g. http://localhost:3000/auth/integrations/oauth_callback
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USER_AGENT="AutoGPT:1.0 (by /u/autogpt)"

# Discord
DISCORD_BOT_TOKEN=
# Payment Processing
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=

# SMTP/Email
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Email Service (for sending notifications and confirmations)
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=

# D-ID
# Error Tracking
SENTRY_DSN=

# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify

# Feature Flags
LAUNCH_DARKLY_SDK_KEY=

# Content Generation & Media
DID_API_KEY=
FAL_API_KEY=
IDEOGRAM_API_KEY=
REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=

# Open Weather Map
# Data & Search Services
E2B_API_KEY=
EXA_API_KEY=
JINA_API_KEY=
MEM0_API_KEY=
OPENWEATHERMAP_API_KEY=

# SMTP
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=

# Medium
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=

# Google Maps
GOOGLE_MAPS_API_KEY=

# Replicate
REPLICATE_API_KEY=
# Communication Services
DISCORD_BOT_TOKEN=
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=

# Ideogram
IDEOGRAM_API_KEY=

# Fal
FAL_API_KEY=

# Exa
EXA_API_KEY=

# E2B
E2B_API_KEY=

# Mem0
MEM0_API_KEY=

# Nvidia
NVIDIA_API_KEY=

# Apollo
# Business & Marketing Tools
APOLLO_API_KEY=

# SmartLead
SMARTLEAD_API_KEY=

# ZeroBounce
ZEROBOUNCE_API_KEY=

# Ayrshare
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
SMARTLEAD_API_KEY=
ZEROBOUNCE_API_KEY=

## ===== OPTIONAL API KEYS END ===== ##

# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400

# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs

# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false

# Cloud Storage Configuration
# Cleanup interval for expired files (hours between cleanup runs, 1-24 hours)
CLOUD_STORAGE_CLEANUP_INTERVAL_HOURS=6
# Other Services
AUTOMOD_API_KEY=
autogpt_platform/backend/.gitignore (vendored, 1 change)
@@ -1,3 +1,4 @@
.env
database.db
database.db-journal
dev.db
@@ -8,14 +8,14 @@ WORKDIR /app

RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy

RUN apt-get update --allow-releaseinfo-change --fix-missing

# Install build dependencies
RUN apt-get install -y build-essential
RUN apt-get install -y libpq5
RUN apt-get install -y libz-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y postgresql-client
# Update package list and install build dependencies in a single layer
RUN apt-get update --allow-releaseinfo-change --fix-missing \
    && apt-get install -y \
    build-essential \
    libpq5 \
    libz-dev \
    libssl-dev \
    postgresql-client

ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
@@ -68,6 +68,12 @@ COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.tom

WORKDIR /app/autogpt_platform/backend

FROM server_dependencies AS migrate

# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations

FROM server_dependencies AS server

COPY autogpt_platform/backend /app/autogpt_platform/backend

@@ -360,7 +360,7 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
        description="Maximum message size limit for communication with the message bus",
    )

    backend_cors_allow_origins: List[str] = Field(default_factory=list)
    backend_cors_allow_origins: List[str] = Field(default=["http://localhost:3000"])

    @field_validator("backend_cors_allow_origins")
    @classmethod
@@ -1,123 +0,0 @@
############
# Secrets
# YOU MUST CHANGE THESE BEFORE GOING INTO PRODUCTION
############

POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
DASHBOARD_USERNAME=supabase
DASHBOARD_PASSWORD=this_password_is_insecure_and_should_be_updated
SECRET_KEY_BASE=UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
VAULT_ENC_KEY=your-encryption-key-32-chars-min

############
# Database - You can change these to any PostgreSQL database that has logical replication enabled.
############

POSTGRES_HOST=db
POSTGRES_DB=postgres
POSTGRES_PORT=5432
# default user is postgres

############
# Supavisor -- Database pooler
############
POOLER_PROXY_PORT_TRANSACTION=6543
POOLER_DEFAULT_POOL_SIZE=20
POOLER_MAX_CLIENT_CONN=100
POOLER_TENANT_ID=your-tenant-id

############
# API Proxy - Configuration for the Kong Reverse proxy.
############

KONG_HTTP_PORT=8000
KONG_HTTPS_PORT=8443

############
# API - Configuration for PostgREST.
############

PGRST_DB_SCHEMAS=public,storage,graphql_public

############
# Auth - Configuration for the GoTrue authentication server.
############

## General
SITE_URL=http://localhost:3000
ADDITIONAL_REDIRECT_URLS=
JWT_EXPIRY=3600
DISABLE_SIGNUP=false
API_EXTERNAL_URL=http://localhost:8000

## Mailer Config
MAILER_URLPATHS_CONFIRMATION="/auth/v1/verify"
MAILER_URLPATHS_INVITE="/auth/v1/verify"
MAILER_URLPATHS_RECOVERY="/auth/v1/verify"
MAILER_URLPATHS_EMAIL_CHANGE="/auth/v1/verify"

## Email auth
ENABLE_EMAIL_SIGNUP=true
ENABLE_EMAIL_AUTOCONFIRM=false
SMTP_ADMIN_EMAIL=admin@example.com
SMTP_HOST=supabase-mail
SMTP_PORT=2500
SMTP_USER=fake_mail_user
SMTP_PASS=fake_mail_password
SMTP_SENDER_NAME=fake_sender
ENABLE_ANONYMOUS_USERS=false

## Phone auth
ENABLE_PHONE_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true

############
# Studio - Configuration for the Dashboard
############

STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project

STUDIO_PORT=3000
# replace if you intend to use Studio outside of localhost
SUPABASE_PUBLIC_URL=http://localhost:8000

# Enable webp support
IMGPROXY_ENABLE_WEBP_DETECTION=true

# Add your OpenAI API key to enable SQL Editor Assistant
OPENAI_API_KEY=

############
# Functions - Configuration for Functions
############
# NOTE: VERIFY_JWT applies to all functions. Per-function VERIFY_JWT is not supported yet.
FUNCTIONS_VERIFY_JWT=false

############
# Logs - Configuration for Logflare
# Please refer to https://supabase.com/docs/reference/self-hosting-analytics/introduction
############

LOGFLARE_LOGGER_BACKEND_API_KEY=your-super-secret-and-long-logflare-key

# Change vector.toml sinks to reflect this change
LOGFLARE_API_KEY=your-super-secret-and-long-logflare-key

# Docker socket location - this value will differ depending on your OS
DOCKER_SOCKET_LOCATION=/var/run/docker.sock

# Google Cloud Project details
GOOGLE_PROJECT_ID=GOOGLE_PROJECT_ID
GOOGLE_PROJECT_NUMBER=GOOGLE_PROJECT_NUMBER
autogpt_platform/db/docker/.gitignore (vendored, 1 change)
@@ -1,5 +1,4 @@
volumes/db/data
volumes/storage
.env
test.http
docker-compose.override.yml
@@ -5,8 +5,101 @@
# Destroy: docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans
# Reset everything: ./reset.sh

# Environment Variable Loading Order (first → last, later overrides earlier):
# 1. ../../.env.default - Default values for all Supabase settings
# 2. ../../.env - User's custom configuration (if exists)
# 3. ./.env - Local overrides specific to db/docker (if exists)
# 4. environment key - Service-specific overrides defined below
# 5. Shell environment - Variables exported before running docker compose

name: supabase

# Common env_file configuration for all Supabase services
x-supabase-env-files: &supabase-env-files
  env_file:
    - ../../.env.default # Base defaults from platform root
    - path: ../../.env # User overrides from platform root (optional)
      required: false
    - path: ./.env # Local overrides for db/docker (optional)
      required: false

# Common Supabase environment - hardcoded defaults to avoid variable substitution
x-supabase-env: &supabase-env
  # Core PostgreSQL settings
  POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
  POSTGRES_HOST: db
  POSTGRES_PORT: "5432"
  POSTGRES_DB: postgres

  # Authentication & Security
  JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
  ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
  SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
  DASHBOARD_USERNAME: supabase
  DASHBOARD_PASSWORD: this_password_is_insecure_and_should_be_updated
  SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
  VAULT_ENC_KEY: your-encryption-key-32-chars-min

  # URLs and Endpoints
  SITE_URL: http://localhost:3000
  API_EXTERNAL_URL: http://localhost:8000
  SUPABASE_PUBLIC_URL: http://localhost:8000
  ADDITIONAL_REDIRECT_URLS: ""

  # Feature Flags
  DISABLE_SIGNUP: "false"
  ENABLE_EMAIL_SIGNUP: "true"
  ENABLE_EMAIL_AUTOCONFIRM: "false"
  ENABLE_ANONYMOUS_USERS: "false"
  ENABLE_PHONE_SIGNUP: "true"
  ENABLE_PHONE_AUTOCONFIRM: "true"
  FUNCTIONS_VERIFY_JWT: "false"
  IMGPROXY_ENABLE_WEBP_DETECTION: "true"

  # Email/SMTP Configuration
  SMTP_ADMIN_EMAIL: admin@example.com
  SMTP_HOST: supabase-mail
  SMTP_PORT: "2500"
  SMTP_USER: fake_mail_user
  SMTP_PASS: fake_mail_password
  SMTP_SENDER_NAME: fake_sender

  # Mailer URLs
  MAILER_URLPATHS_CONFIRMATION: /auth/v1/verify
  MAILER_URLPATHS_INVITE: /auth/v1/verify
  MAILER_URLPATHS_RECOVERY: /auth/v1/verify
  MAILER_URLPATHS_EMAIL_CHANGE: /auth/v1/verify

  # JWT Settings
  JWT_EXPIRY: "3600"

  # Database Schemas
  PGRST_DB_SCHEMAS: public,storage,graphql_public

  # Studio Settings
  STUDIO_DEFAULT_ORGANIZATION: Default Organization
  STUDIO_DEFAULT_PROJECT: Default Project

  # Logging
  LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key

  # Pooler Settings
  POOLER_DEFAULT_POOL_SIZE: "20"
  POOLER_MAX_CLIENT_CONN: "100"
  POOLER_TENANT_ID: your-tenant-id
  POOLER_PROXY_PORT_TRANSACTION: "6543"

  # Kong Ports
  KONG_HTTP_PORT: "8000"
  KONG_HTTPS_PORT: "8443"

  # Docker
  DOCKER_SOCKET_LOCATION: /var/run/docker.sock

  # Google Cloud (if needed)
  GOOGLE_PROJECT_ID: GOOGLE_PROJECT_ID
  GOOGLE_PROJECT_NUMBER: GOOGLE_PROJECT_NUMBER

services:

  studio:
@@ -27,21 +120,24 @@ services:
    depends_on:
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password

      DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
      DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}
      OPENAI_API_KEY: ${OPENAI_API_KEY:-}
      DEFAULT_ORGANIZATION_NAME: Default Organization
      DEFAULT_PROJECT_NAME: Default Project
      OPENAI_API_KEY: ""

      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      AUTH_JWT_SECRET: ${JWT_SECRET}
      SUPABASE_PUBLIC_URL: http://localhost:8000
      SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
      SUPABASE_SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
      AUTH_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long

      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
      LOGFLARE_URL: http://analytics:4000
      NEXT_PUBLIC_ENABLE_LOGS: true
      # Comment to use Big Query backend for analytics
@@ -54,15 +150,18 @@ services:
    image: kong:2.8.1
    restart: unless-stopped
    ports:
      - ${KONG_HTTP_PORT}:8000/tcp
      - ${KONG_HTTPS_PORT}:8443/tcp
      - 8000:8000/tcp
      - 8443:8443/tcp
    volumes:
      # https://github.com/supabase/supabase/issues/12661
      - ./volumes/api/kong.yml:/home/kong/temp.yml:ro
    depends_on:
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
      # https://github.com/supabase/cli/issues/14
@@ -70,10 +169,10 @@ services:
      KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
      DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
      SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
      SUPABASE_SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
      DASHBOARD_USERNAME: supabase
      DASHBOARD_PASSWORD: this_password_is_insecure_and_should_be_updated
    # https://unix.stackexchange.com/a/294837
    entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'

@@ -100,46 +199,49 @@ services:
        condition: service_healthy
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: 9999
      API_EXTERNAL_URL: ${API_EXTERNAL_URL}
      API_EXTERNAL_URL: http://localhost:8000

      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:your-super-secret-and-long-postgres-password@db:5432/postgres

      GOTRUE_SITE_URL: ${SITE_URL}
      GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
      GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}
      GOTRUE_SITE_URL: http://localhost:3000
      GOTRUE_URI_ALLOW_LIST: ""
      GOTRUE_DISABLE_SIGNUP: false

      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_EXP: ${JWT_EXPIRY}
      GOTRUE_JWT_SECRET: ${JWT_SECRET}
      GOTRUE_JWT_EXP: 3600
      GOTRUE_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long

      GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
      GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
      GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
      GOTRUE_EXTERNAL_EMAIL_ENABLED: true
      GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: false
      GOTRUE_MAILER_AUTOCONFIRM: false

      # Uncomment to bypass nonce check in ID Token flow. Commonly set to true when using Google Sign In on mobile.
      # GOTRUE_EXTERNAL_SKIP_NONCE_CHECK: true

      # GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
      # GOTRUE_SMTP_MAX_FREQUENCY: 1s
      GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
      GOTRUE_SMTP_HOST: ${SMTP_HOST}
      GOTRUE_SMTP_PORT: ${SMTP_PORT}
      GOTRUE_SMTP_USER: ${SMTP_USER}
      GOTRUE_SMTP_PASS: ${SMTP_PASS}
      GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
      GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
      GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}
      GOTRUE_SMTP_ADMIN_EMAIL: admin@example.com
      GOTRUE_SMTP_HOST: supabase-mail
      GOTRUE_SMTP_PORT: 2500
      GOTRUE_SMTP_USER: fake_mail_user
      GOTRUE_SMTP_PASS: fake_mail_password
      GOTRUE_SMTP_SENDER_NAME: fake_sender
      GOTRUE_MAILER_URLPATHS_INVITE: /auth/v1/verify
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: /auth/v1/verify
      GOTRUE_MAILER_URLPATHS_RECOVERY: /auth/v1/verify
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: /auth/v1/verify

      GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
      GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
      GOTRUE_EXTERNAL_PHONE_ENABLED: true
      GOTRUE_SMS_AUTOCONFIRM: true
      # Uncomment to enable custom access token hook. Please see: https://supabase.com/docs/guides/auth/auth-hooks for full list of hooks and additional details about custom_access_token_hook

      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED: "true"
@@ -170,14 +272,17 @@ services:
        condition: service_healthy
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      PGRST_DB_URI: postgres://authenticator:your-super-secret-and-long-postgres-password@db:5432/postgres
      PGRST_DB_SCHEMAS: public,storage,graphql_public
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
      PGRST_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      PGRST_DB_USE_LEGACY_GUCS: "false"
      PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
      PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
      PGRST_APP_SETTINGS_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      PGRST_APP_SETTINGS_JWT_EXP: 3600
    command:
      [
        "postgrest"
@@ -204,23 +309,26 @@ services:
        "-o",
        "/dev/null",
        "-H",
        "Authorization: Bearer ${ANON_KEY}",
        "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE",
        "http://localhost:4000/api/tenants/realtime-dev/health"
      ]
      timeout: 5s
      interval: 5s
      retries: 3
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      PORT: 4000
      DB_HOST: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: supabase_admin
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
      DB_PASSWORD: your-super-secret-and-long-postgres-password
      DB_NAME: postgres
      DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
      DB_ENC_KEY: supabaserealtime
      API_JWT_SECRET: ${JWT_SECRET}
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      API_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      ERL_AFLAGS: -proto_dist inet_tcp
      DNS_NODES: "''"
      RLIMIT_NOFILE: "10000"
@@ -256,12 +364,15 @@ services:
        condition: service_started
      imgproxy:
        condition: service_started
    <<: *supabase-env-files
    environment:
      ANON_KEY: ${ANON_KEY}
      SERVICE_KEY: ${SERVICE_ROLE_KEY}
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
      SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: ${JWT_SECRET}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      PGRST_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      DATABASE_URL: postgres://supabase_storage_admin:your-super-secret-and-long-postgres-password@db:5432/postgres
      FILE_SIZE_LIMIT: 52428800
      STORAGE_BACKEND: file
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
@@ -288,11 +399,14 @@ services:
      timeout: 5s
      interval: 5s
      retries: 3
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      IMGPROXY_BIND: ":5001"
      IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
      IMGPROXY_USE_ETAG: "true"
      IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
      IMGPROXY_ENABLE_WEBP_DETECTION: true

  meta:
    container_name: supabase-meta
@@ -304,13 +418,16 @@ services:
        condition: service_healthy
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      PG_META_PORT: 8080
      PG_META_DB_HOST: ${POSTGRES_HOST}
      PG_META_DB_PORT: ${POSTGRES_PORT}
      PG_META_DB_NAME: ${POSTGRES_DB}
      PG_META_DB_HOST: db
      PG_META_DB_PORT: 5432
      PG_META_DB_NAME: postgres
      PG_META_DB_USER: supabase_admin
      PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}
      PG_META_DB_PASSWORD: your-super-secret-and-long-postgres-password

  functions:
    container_name: supabase-edge-functions
@@ -321,14 +438,17 @@ services:
    depends_on:
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      JWT_SECRET: ${JWT_SECRET}
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      SUPABASE_URL: http://kong:8000
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
      SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
      SUPABASE_SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
      SUPABASE_DB_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres
      # TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
      VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
      VERIFY_JWT: "false"
    command:
      [
        "start",
@@ -362,26 +482,29 @@ services:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      LOGFLARE_NODE_HOST: 127.0.0.1
      DB_USERNAME: supabase_admin
      DB_DATABASE: _supabase
      DB_HOSTNAME: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_HOSTNAME: db
      DB_PORT: 5432
      DB_PASSWORD: your-super-secret-and-long-postgres-password
      DB_SCHEMA: _analytics
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
      LOGFLARE_SINGLE_TENANT: true
      LOGFLARE_SUPABASE_MODE: true
      LOGFLARE_MIN_CLUSTER_SIZE: 1

      # Comment variables to use Big Query backend for analytics
      POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/_supabase
      POSTGRES_BACKEND_URL: postgresql://supabase_admin:your-super-secret-and-long-postgres-password@db:5432/_supabase
      POSTGRES_BACKEND_SCHEMA: _analytics
      LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
      # Uncomment to use Big Query backend for analytics
      # GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
      # GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
      # GOOGLE_PROJECT_ID: GOOGLE_PROJECT_ID
      # GOOGLE_PROJECT_NUMBER: GOOGLE_PROJECT_NUMBER

  # Comment out everything below this point if you are using an external Postgres database
  db:
@@ -422,16 +545,19 @@ services:
    depends_on:
      vector:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      POSTGRES_HOST: /var/run/postgresql
      PGPORT: ${POSTGRES_PORT}
      POSTGRES_PORT: ${POSTGRES_PORT}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
      POSTGRES_DB: ${POSTGRES_DB}
      JWT_SECRET: ${JWT_SECRET}
      JWT_EXP: ${JWT_EXPIRY}
      PGPORT: 5432
      POSTGRES_PORT: 5432
      PGPASSWORD: your-super-secret-and-long-postgres-password
      POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
      PGDATABASE: postgres
      POSTGRES_DB: postgres
      JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      JWT_EXP: 3600
    command:
      [
        "postgres",
@@ -447,7 +573,7 @@ services:
    restart: unless-stopped
    volumes:
      - ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
      - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    healthcheck:
      test:
        [
@@ -461,8 +587,11 @@ services:
      timeout: 5s
      interval: 5s
      retries: 3
    <<: *supabase-env-files
    environment:
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
    command:
      [
        "--config",
@@ -475,8 +604,8 @@ services:
    image: supabase/supavisor:2.4.12
    restart: unless-stopped
    ports:
      - ${POSTGRES_PORT}:5432
      - ${POOLER_PROXY_PORT_TRANSACTION}:6543
      - 5432:5432
      - 6543:6543
    volumes:
      - ./volumes/pooler/pooler.exs:/etc/pooler/pooler.exs:ro
    healthcheck:
@@ -498,22 +627,25 @@ services:
        condition: service_healthy
      analytics:
        condition: service_healthy
    <<: *supabase-env-files
    environment:
      <<: *supabase-env
      # Keep any existing environment variables specific to that service
      PORT: 4000
      POSTGRES_PORT: ${POSTGRES_PORT}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_URL: ecto://supabase_admin:${POSTGRES_PASSWORD}@db:${POSTGRES_PORT}/_supabase
      POSTGRES_PORT: 5432
      POSTGRES_DB: postgres
      POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
      DATABASE_URL: ecto://supabase_admin:your-super-secret-and-long-postgres-password@db:5432/_supabase
      CLUSTER_POSTGRES: true
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      VAULT_ENC_KEY: ${VAULT_ENC_KEY}
      API_JWT_SECRET: ${JWT_SECRET}
      METRICS_JWT_SECRET: ${JWT_SECRET}
      SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      VAULT_ENC_KEY: your-encryption-key-32-chars-min
      API_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      METRICS_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
      REGION: local
      ERL_AFLAGS: -proto_dist inet_tcp
      POOLER_TENANT_ID: ${POOLER_TENANT_ID}
      POOLER_DEFAULT_POOL_SIZE: ${POOLER_DEFAULT_POOL_SIZE}
      POOLER_MAX_CLIENT_CONN: ${POOLER_MAX_CLIENT_CONN}
      POOLER_TENANT_ID: your-tenant-id
      POOLER_DEFAULT_POOL_SIZE: 20
      POOLER_MAX_CLIENT_CONN: 100
      POOLER_POOL_MODE: transaction
    command:
      [
@@ -34,11 +34,11 @@ else
  echo "No .env file found. Skipping .env removal step..."
fi

if [ -f ".env.example" ]; then
  echo "Copying .env.example to .env..."
  cp .env.example .env
if [ -f ".env.default" ]; then
  echo "Copying .env.default to .env..."
  cp .env.default .env
else
  echo ".env.example file not found. Skipping .env reset step..."
  echo ".env.default file not found. Skipping .env reset step..."
fi

echo "Cleanup complete!"
@@ -1,9 +1,39 @@
|
||||
# Environment Variable Loading Order (first → last, later overrides earlier):
|
||||
# 1. backend/.env.default - Default values for all settings
|
||||
# 2. backend/.env - User's custom configuration (if exists)
|
||||
# 3. environment key - Docker-specific overrides defined below
|
||||
# 4. Shell environment - Variables exported before running docker compose
|
||||
# 5. CLI arguments - docker compose run -e VAR=value
|
||||
|
||||
# Common backend environment - Docker service names
|
||||
x-backend-env:
|
||||
&backend-env # Docker internal service hostnames (override localhost defaults)
|
||||
PYRO_HOST: "0.0.0.0"
|
||||
AGENTSERVER_HOST: rest_server
|
||||
SCHEDULER_HOST: scheduler_server
|
||||
DATABASEMANAGER_HOST: database_manager
|
||||
EXECUTIONMANAGER_HOST: executor
|
||||
NOTIFICATIONMANAGER_HOST: notification_server
|
||||
CLAMAV_SERVICE_HOST: clamav
|
||||
DB_HOST: db
|
||||
REDIS_HOST: redis
|
||||
RABBITMQ_HOST: rabbitmq
|
||||
# Override Supabase URL for Docker network
|
||||
SUPABASE_URL: http://kong:8000
|
||||
|
||||
# Common env_file configuration for backend services
|
||||
x-backend-env-files: &backend-env-files
|
||||
env_file:
|
||||
- backend/.env.default # Base defaults (always exists)
|
||||
- path: backend/.env # User overrides (optional)
|
||||
required: false
|
||||
|
||||
services:
|
||||
migrate:
|
||||
build:
|
||||
context: ../
|
||||
dockerfile: autogpt_platform/backend/Dockerfile
|
||||
target: server
|
||||
target: migrate
|
||||
command: ["sh", "-c", "poetry run prisma migrate deploy"]
|
||||
develop:
|
||||
watch:
|
||||
@@ -20,10 +50,11 @@ services:
|
||||
- app-network
|
||||
restart: on-failure
|
||||
healthcheck:
|
||||
test: ["CMD", "poetry", "run", "prisma", "migrate", "status"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
test: ["CMD-SHELL", "poetry run prisma migrate status | grep -q 'No pending migrations' || exit 1"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 5s
|
||||
|
||||
redis:
|
||||
image: redis:latest
|
||||
@@ -73,29 +104,12 @@ services:
|
||||
condition: service_completed_successfully
|
||||
rabbitmq:
|
||||
condition: service_healthy
|
||||
<<: *backend-env-files
|
||||
environment:
|
||||
- SUPABASE_URL=http://kong:8000
|
||||
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
|
||||
- SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
|
||||
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
|
||||
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- RABBITMQ_HOST=rabbitmq
|
||||
- RABBITMQ_PORT=5672
|
||||
- RABBITMQ_DEFAULT_USER=rabbitmq_user_default
|
||||
- RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
|
||||
- REDIS_PASSWORD=password
|
||||
- ENABLE_AUTH=true
|
||||
- PYRO_HOST=0.0.0.0
|
||||
- SCHEDULER_HOST=scheduler_server
|
||||
- EXECUTIONMANAGER_HOST=executor
|
||||
- NOTIFICATIONMANAGER_HOST=notification_server
|
||||
- CLAMAV_SERVICE_HOST=clamav
|
||||
- NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
|
||||
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
|
||||
- ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
|
||||
- UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio= # DO NOT USE IN PRODUCTION!!
|
||||
<<: *backend-env
|
||||
# Service-specific overrides
|
||||
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
|
||||
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
|
||||
ports:
|
||||
- "8006:8006"
|
||||
networks:
|
||||
@@ -123,26 +137,12 @@ services:
         condition: service_completed_successfully
       database_manager:
         condition: service_started
+    <<: *backend-env-files
     environment:
-      - DATABASEMANAGER_HOST=database_manager
-      - SUPABASE_URL=http://kong:8000
-      - SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
-      - SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
-      - DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - REDIS_HOST=redis
-      - REDIS_PORT=6379
-      - REDIS_PASSWORD=password
-      - RABBITMQ_HOST=rabbitmq
-      - RABBITMQ_PORT=5672
-      - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
-      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
-      - ENABLE_AUTH=true
-      - PYRO_HOST=0.0.0.0
-      - AGENTSERVER_HOST=rest_server
-      - NOTIFICATIONMANAGER_HOST=notification_server
-      - CLAMAV_SERVICE_HOST=clamav
-      - ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
+      <<: *backend-env
+      # Service-specific overrides
+      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
+      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
     ports:
       - "8002:8002"
     networks:
@@ -168,22 +168,12 @@ services:
         condition: service_completed_successfully
       database_manager:
         condition: service_started
+    <<: *backend-env-files
     environment:
-      - DATABASEMANAGER_HOST=database_manager
-      - SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
-      - DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - REDIS_HOST=redis
-      - REDIS_PORT=6379
-      - REDIS_PASSWORD=password
-      # - RABBITMQ_HOST=rabbitmq
-      # - RABBITMQ_PORT=5672
-      # - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
-      # - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
-      - ENABLE_AUTH=true
-      - PYRO_HOST=0.0.0.0
-      - BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
-
+      <<: *backend-env
+      # Service-specific overrides
+      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
+      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
     ports:
       - "8001:8001"
     networks:
@@ -205,11 +195,12 @@ services:
         condition: service_healthy
       migrate:
         condition: service_completed_successfully
+    <<: *backend-env-files
     environment:
-      - DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - PYRO_HOST=0.0.0.0
-      - ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
+      <<: *backend-env
+      # Service-specific overrides
+      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
+      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
     ports:
       - "8005:8005"
     networks:
@@ -250,23 +241,12 @@ services:
     # interval: 10s
     # timeout: 10s
     # retries: 5
+    <<: *backend-env-files
     environment:
-      - DATABASEMANAGER_HOST=database_manager
-      - NOTIFICATIONMANAGER_HOST=notification_server
-      - SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
-      - DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
-      - REDIS_HOST=redis
-      - REDIS_PORT=6379
-      - REDIS_PASSWORD=password
-      - RABBITMQ_HOST=rabbitmq
-      - RABBITMQ_PORT=5672
-      - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
-      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
-      - ENABLE_AUTH=true
-      - PYRO_HOST=0.0.0.0
-      - BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
-
+      <<: *backend-env
+      # Service-specific overrides
+      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
+      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
     ports:
       - "8003:8003"
     networks:
@@ -292,52 +272,39 @@ services:
         condition: service_completed_successfully
       database_manager:
         condition: service_started
+    <<: *backend-env-files
     environment:
-      - DATABASEMANAGER_HOST=database_manager
-      - REDIS_HOST=redis
-      - REDIS_PORT=6379
-      - REDIS_PASSWORD=password
-      - RABBITMQ_HOST=rabbitmq
-      - RABBITMQ_PORT=5672
-      - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
-      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
-      - ENABLE_AUTH=true
-      - PYRO_HOST=0.0.0.0
-      - BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
-
+      <<: *backend-env
     ports:
       - "8007:8007"
     networks:
       - app-network

-  # frontend:
-  #   build:
-  #     context: ../
-  #     dockerfile: autogpt_platform/frontend/Dockerfile
-  #     target: dev
-  #   depends_on:
-  #     db:
-  #       condition: service_healthy
-  #     rest_server:
-  #       condition: service_started
-  #     websocket_server:
-  #       condition: service_started
-  #     migrate:
-  #       condition: service_completed_successfully
-  #   environment:
-  #     - NEXT_PUBLIC_SUPABASE_URL=http://kong:8000
-  #     - NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
-  #     - DATABASE_URL=postgresql://agpt_user:pass123@postgres:5432/postgres?connect_timeout=60&schema=platform
-  #     - DIRECT_URL=postgresql://agpt_user:pass123@postgres:5432/postgres?connect_timeout=60&schema=platform
-  #     - NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
-  #     - NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
-  #     - NEXT_PUBLIC_AGPT_MARKETPLACE_URL=http://localhost:8015/api/v1/market
-  #     - NEXT_PUBLIC_BEHAVE_AS=LOCAL
-  #   ports:
-  #     - "3000:3000"
-  #   networks:
-  #     - app-network
-
+  frontend:
+    build:
+      context: ../
+      dockerfile: autogpt_platform/frontend/Dockerfile
+      target: prod
+    depends_on:
+      db:
+        condition: service_healthy
+      migrate:
+        condition: service_completed_successfully
+    ports:
+      - "3000:3000"
+    networks:
+      - app-network
+    # Load environment variables in order (later overrides earlier)
+    env_file:
+      - path: ./frontend/.env.default # Base defaults (always exists)
+      - path: ./frontend/.env # User overrides (optional)
+        required: false
+    environment:
+      # Server-side environment variables (Docker service names)
+      # These override the localhost URLs from env files when running in Docker
+      AUTH_CALLBACK_URL: http://rest_server:8006/auth/callback
+      SUPABASE_URL: http://kong:8000
+      AGPT_SERVER_URL: http://rest_server:8006/api
+      AGPT_WS_SERVER_URL: ws://websocket_server:8001/ws

 networks:
   app-network:
     driver: bridge

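The split between `environment:` (Docker service names, seen only by server-side code) and the env files (localhost URLs, baked into client bundles) can be checked from both sides. A hedged sketch, using the `/health` endpoint referenced elsewhere in this PR:

```sh
# Server-side path: from inside the frontend container, the backend resolves
# via its Docker service name.
docker compose exec frontend wget -qO- http://rest_server:8006/health

# Client-side path: the browser reaches the same backend via the published port.
curl http://localhost:8006/health
```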
@@ -20,6 +20,7 @@ x-supabase-services:
       - app-network
+      - shared-network


 services:
   # AGPT services
   migrate:
@@ -96,11 +97,11 @@ services:
       timeout: 10s
       retries: 3

-  # frontend:
-  #   <<: *agpt-services
-  #   extends:
-  #     file: ./docker-compose.platform.yml
-  #     service: frontend
+  frontend:
+    <<: *agpt-services
+    extends:
+      file: ./docker-compose.platform.yml
+      service: frontend

   # Supabase services
   studio:
@@ -171,7 +172,7 @@ services:
       file: ./db/docker/docker-compose.yml
       service: db
     ports:
-      - ${POSTGRES_PORT}:5432 # We don't use Supavisor locally, so we expose the db directly.
+      - 5432:5432 # We don't use Supavisor locally, so we expose the db directly.

   vector:
     <<: *supabase-services
@@ -196,3 +197,17 @@ services:
       - redis
       - rabbitmq
       - clamav
       - migrate
+
+  deps_backend:
+    <<: *agpt-services
+    profiles:
+      - local
+    image: busybox
+    command: /bin/true
+    depends_on:
+      - deps
+      - rest_server
+      - executor
+      - websocket_server
+      - database_manager

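The new `deps_backend` aggregator mirrors the existing `deps` target, so it should let frontend-only development bring up the backend stack without the frontend container. A sketch, assuming it is invoked like the existing `local` profile target:

```sh
docker compose --profile local up deps_backend --build --detach
```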
autogpt_platform/frontend/.env.default (new file, 20 lines)
@@ -0,0 +1,20 @@
+NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000
+NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
+
+NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
+NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
+NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
+
+NEXT_PUBLIC_APP_ENV=local
+NEXT_PUBLIC_BEHAVE_AS=LOCAL
+
+NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false
+NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e
+
+NEXT_PUBLIC_SHOW_BILLING_PAGE=false
+NEXT_PUBLIC_TURNSTILE=disabled
+NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true
+
+NEXT_PUBLIC_GA_MEASUREMENT_ID=G-FH2XK2W4GN
+NEXT_PUBLIC_PW_TEST=true
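Because the optional `frontend/.env` is layered on top of these defaults, overriding a single key does not require touching `.env.default`. An illustrative sketch (the values are placeholders):

```sh
# frontend/.env overrides frontend/.env.default entry by entry
cat > autogpt_platform/frontend/.env <<'EOF'
NEXT_PUBLIC_REACT_QUERY_DEVTOOL=false
NEXT_PUBLIC_PW_TEST=false
EOF

# Rebuild so the merged .env is baked into the production bundle
docker compose up -d --build frontend
```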
@@ -1,44 +0,0 @@
-NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
-
-NEXT_PUBLIC_AUTH_CALLBACK_URL=http://localhost:8006/auth/callback
-NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
-NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
-NEXT_PUBLIC_AGPT_MARKETPLACE_URL=http://localhost:8015/api/v1/market
-NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false
-NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e # Local environment on Launch darkly
-NEXT_PUBLIC_APP_ENV=local
-
-NEXT_PUBLIC_AGPT_SERVER_BASE_URL=http://localhost:8006
-
-## Locale settings
-
-NEXT_PUBLIC_DEFAULT_LOCALE=en
-NEXT_PUBLIC_LOCALES=en,es
-
-## Supabase credentials
-
-NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000
-NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
-
-## OAuth Callback URL
-## This should be {domain}/auth/callback
-## Only used if you're using Supabase and OAuth
-AUTH_CALLBACK_URL="${NEXT_PUBLIC_FRONTEND_BASE_URL}/auth/callback"
-GA_MEASUREMENT_ID=G-FH2XK2W4GN
-
-# When running locally, set NEXT_PUBLIC_BEHAVE_AS=CLOUD to use the a locally hosted marketplace (as is typical in development, and the cloud deployment), otherwise set it to LOCAL to have the marketplace open in a new tab
-NEXT_PUBLIC_BEHAVE_AS=LOCAL
-NEXT_PUBLIC_SHOW_BILLING_PAGE=false
-
-## Cloudflare Turnstile (CAPTCHA) Configuration
-## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
-## This is the frontend site key
-NEXT_PUBLIC_CLOUDFLARE_TURNSTILE_SITE_KEY=
-NEXT_PUBLIC_TURNSTILE=disabled
-
-# Devtools
-NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true
-
-# In case you are running Playwright locally
-# NEXT_PUBLIC_PW_TEST=true
autogpt_platform/frontend/.gitignore (vendored, 1 change)
@@ -31,6 +31,7 @@ yarn.lock
 package-lock.json

 # local env files
+.env
 .env*.local

 # vercel
@@ -17,7 +17,12 @@ CMD ["pnpm", "run", "dev", "--hostname", "0.0.0.0"]
 FROM base AS build
 COPY autogpt_platform/frontend/ .
 ENV SKIP_STORYBOOK_TESTS=true
-RUN pnpm build
+RUN if [ -f .env ]; then \
+      cat .env.default .env > .env.merged && mv .env.merged .env; \
+    else \
+      cp .env.default .env; \
+    fi
+RUN pnpm build --turbo

 # Prod stage - based on NextJS reference Dockerfile https://github.com/vercel/next.js/blob/64271354533ed16da51be5dce85f0dbd15f17517/examples/with-docker/Dockerfile
 FROM node:21-alpine AS prod
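The merge step is a plain concatenation, so a key defined in both files ends up in the result twice; which occurrence wins is up to the consumer (Next.js's env loader, in this build). A quick local sketch of what the `RUN` step produces:

```sh
printf 'A=default\nB=default\n' > .env.default
printf 'A=override\n' > .env
cat .env.default .env > .env.merged && mv .env.merged .env
cat .env
# A=default
# B=default
# A=override
```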
@@ -45,7 +45,7 @@ export default defineConfig({
   webServer: {
     command: "pnpm start",
     url: "http://localhost:3000",
-    reuseExistingServer: !process.env.CI,
+    reuseExistingServer: true,
   },

   /* Configure projects for major browsers */
@@ -66,6 +66,7 @@ export const AgentTableRow = ({
   return (
     <div
       data-testid="agent-table-row"
+      data-agent-id={agent_id}
       data-agent-name={agentName}
       className="hidden items-center border-b border-neutral-300 px-4 py-4 hover:bg-neutral-50 dark:border-neutral-700 dark:hover:bg-neutral-800 md:flex"
     >
@@ -3,6 +3,7 @@ import {
   getServerAuthToken,
 } from "@/lib/autogpt-server-api/helpers";
 import { isServerSide } from "@/lib/utils/is-server-side";
+import { getAgptServerBaseUrl } from "@/lib/env-config";

 const FRONTEND_BASE_URL =
   process.env.NEXT_PUBLIC_FRONTEND_BASE_URL || "http://localhost:3000";
@@ -12,9 +13,7 @@ const getBaseUrl = (): string => {
   if (!isServerSide()) {
     return API_PROXY_BASE_URL;
   } else {
-    return (
-      process.env.NEXT_PUBLIC_AGPT_SERVER_BASE_URL || "http://localhost:8006"
-    );
+    return getAgptServerBaseUrl();
   }
 };

@@ -3,19 +3,12 @@ import {
   makeAuthenticatedFileUpload,
   makeAuthenticatedRequest,
 } from "@/lib/autogpt-server-api/helpers";
+import { getAgptServerBaseUrl } from "@/lib/env-config";
 import { NextRequest, NextResponse } from "next/server";

-function getBackendBaseUrl() {
-  if (process.env.NEXT_PUBLIC_AGPT_SERVER_URL) {
-    return process.env.NEXT_PUBLIC_AGPT_SERVER_URL.replace("/api", "");
-  }
-
-  return "http://localhost:8006";
-}
-
 function buildBackendUrl(path: string[], queryString: string): string {
   const backendPath = path.join("/");
-  return `${getBackendBaseUrl()}/${backendPath}${queryString}`;
+  return `${getAgptServerBaseUrl()}/${backendPath}${queryString}`;
 }

 async function handleJsonRequest(
@@ -28,7 +28,7 @@ export default async function RootLayout({
     >
       <head>
         <GoogleAnalytics
-          gaId={process.env.GA_MEASUREMENT_ID || "G-FH2XK2W4GN"} // This is the measurement Id for the Google Analytics dev project
+          gaId={process.env.NEXT_PUBLIC_GA_MEASUREMENT_ID || "G-FH2XK2W4GN"} // This is the measurement Id for the Google Analytics dev project
         />
       </head>
       <body>
@@ -3,6 +3,12 @@ import { getServerSupabase } from "@/lib/supabase/server/getServerSupabase";
 import { createBrowserClient } from "@supabase/ssr";
 import type { SupabaseClient } from "@supabase/supabase-js";
 import { Key, storage } from "@/services/storage/local-storage";
+import {
+  getAgptServerApiUrl,
+  getAgptWsServerUrl,
+  getSupabaseUrl,
+  getSupabaseAnonKey,
+} from "@/lib/env-config";
 import * as Sentry from "@sentry/nextjs";
 import type {
   AddUserCreditsResponse,
@@ -86,10 +92,8 @@ export default class BackendAPI {
   heartbeatTimeoutID: number | null = null;

   constructor(
-    baseUrl: string = process.env.NEXT_PUBLIC_AGPT_SERVER_URL ||
-      "http://localhost:8006/api",
-    wsUrl: string = process.env.NEXT_PUBLIC_AGPT_WS_SERVER_URL ||
-      "ws://localhost:8001/ws",
+    baseUrl: string = getAgptServerApiUrl(),
+    wsUrl: string = getAgptWsServerUrl(),
   ) {
     this.baseUrl = baseUrl;
     this.wsUrl = wsUrl;
@@ -97,11 +101,9 @@ export default class BackendAPI {

   private async getSupabaseClient(): Promise<SupabaseClient | null> {
     return isClient
-      ? createBrowserClient(
-          process.env.NEXT_PUBLIC_SUPABASE_URL!,
-          process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
-          { isSingleton: true },
-        )
+      ? createBrowserClient(getSupabaseUrl(), getSupabaseAnonKey(), {
+          isSingleton: true,
+        })
       : await getServerSupabase();
   }

@@ -1,5 +1,6 @@
 import { getServerSupabase } from "@/lib/supabase/server/getServerSupabase";
 import { Key, storage } from "@/services/storage/local-storage";
+import { getAgptServerApiUrl } from "@/lib/env-config";
 import { isServerSide } from "../utils/is-server-side";

 import { GraphValidationErrorResponse } from "./types";
@@ -56,9 +57,7 @@ export function buildClientUrl(path: string): string {
 }

 export function buildServerUrl(path: string): string {
-  const baseUrl =
-    process.env.NEXT_PUBLIC_AGPT_SERVER_URL || "http://localhost:8006/api";
-  return `${baseUrl}${path}`;
+  return `${getAgptServerApiUrl()}${path}`;
 }

 export function buildUrlWithQuery(
@@ -6,8 +6,7 @@ import {
   makeAuthenticatedFileUpload,
   makeAuthenticatedRequest,
 } from "./helpers";

-const DEFAULT_BASE_URL = "http://localhost:8006/api";
+import { getAgptServerApiUrl } from "@/lib/env-config";

 export interface ProxyRequestOptions {
   method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
@@ -21,7 +20,7 @@ export async function proxyApiRequest({
   method,
   path,
   payload,
-  baseUrl = process.env.NEXT_PUBLIC_AGPT_SERVER_URL || DEFAULT_BASE_URL,
+  baseUrl = getAgptServerApiUrl(),
   contentType = "application/json",
 }: ProxyRequestOptions) {
   return await Sentry.withServerActionInstrumentation(
@@ -37,8 +36,7 @@ export async function proxyApiRequest({
 export async function proxyFileUpload(
   path: string,
   formData: FormData,
-  baseUrl = process.env.NEXT_PUBLIC_AGPT_SERVER_URL ||
-    "http://localhost:8006/api",
+  baseUrl = getAgptServerApiUrl(),
 ): Promise<string> {
   return await Sentry.withServerActionInstrumentation(
     "proxyFileUpload",
autogpt_platform/frontend/src/lib/env-config.ts (new file, 65 lines)
@@ -0,0 +1,65 @@
+/**
+ * Environment configuration helper
+ *
+ * Provides unified access to environment variables with server-side priority.
+ * Server-side code uses Docker service names, client-side falls back to localhost.
+ */
+
+import { isServerSide } from "./utils/is-server-side";
+
+/**
+ * Gets the AGPT server URL with server-side priority
+ * Server-side: Uses AGPT_SERVER_URL (http://rest_server:8006/api)
+ * Client-side: Falls back to NEXT_PUBLIC_AGPT_SERVER_URL (http://localhost:8006/api)
+ */
+export function getAgptServerApiUrl(): string {
+  // If server-side and server URL exists, use it
+  if (isServerSide() && process.env.AGPT_SERVER_URL) {
+    return process.env.AGPT_SERVER_URL;
+  }
+
+  // Otherwise use the public URL
+  return process.env.NEXT_PUBLIC_AGPT_SERVER_URL || "http://localhost:8006/api";
+}
+
+export function getAgptServerBaseUrl(): string {
+  return getAgptServerApiUrl().replace("/api", "");
+}
+
+/**
+ * Gets the AGPT WebSocket URL with server-side priority
+ * Server-side: Uses AGPT_WS_SERVER_URL (ws://websocket_server:8001/ws)
+ * Client-side: Falls back to NEXT_PUBLIC_AGPT_WS_SERVER_URL (ws://localhost:8001/ws)
+ */
+export function getAgptWsServerUrl(): string {
+  // If server-side and server URL exists, use it
+  if (isServerSide() && process.env.AGPT_WS_SERVER_URL) {
+    return process.env.AGPT_WS_SERVER_URL;
+  }
+
+  // Otherwise use the public URL
+  return process.env.NEXT_PUBLIC_AGPT_WS_SERVER_URL || "ws://localhost:8001/ws";
+}
+
+/**
+ * Gets the Supabase URL with server-side priority
+ * Server-side: Uses SUPABASE_URL (http://kong:8000)
+ * Client-side: Falls back to NEXT_PUBLIC_SUPABASE_URL (http://localhost:8000)
+ */
+export function getSupabaseUrl(): string {
+  // If server-side and server URL exists, use it
+  if (isServerSide() && process.env.SUPABASE_URL) {
+    return process.env.SUPABASE_URL;
+  }
+
+  // Otherwise use the public URL
+  return process.env.NEXT_PUBLIC_SUPABASE_URL || "http://localhost:8000";
+}
+
+/**
+ * Gets the Supabase anon key
+ * Uses NEXT_PUBLIC_SUPABASE_ANON_KEY since anon keys are public and same across environments
+ */
+export function getSupabaseAnonKey(): string {
+  return process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY || "";
+}
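One way to observe the server-side priority these helpers implement is to run them under Node with and without the Docker-only variable set. A hedged sketch, assuming `tsx` is available in the frontend workspace:

```sh
cd autogpt_platform/frontend

# Falls back to NEXT_PUBLIC_AGPT_SERVER_URL (or the localhost default)
npx tsx -e 'import { getAgptServerApiUrl } from "./src/lib/env-config"; console.log(getAgptServerApiUrl())'

# With AGPT_SERVER_URL set, as docker-compose does, the service name wins
AGPT_SERVER_URL=http://rest_server:8006/api \
  npx tsx -e 'import { getAgptServerApiUrl } from "./src/lib/env-config"; console.log(getAgptServerApiUrl())'
```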
@@ -4,6 +4,7 @@ import { User } from "@supabase/supabase-js";
 import { usePathname, useRouter } from "next/navigation";
 import { useEffect, useMemo, useRef, useState } from "react";
 import { useBackendAPI } from "@/lib/autogpt-server-api/context";
+import { getSupabaseUrl, getSupabaseAnonKey } from "@/lib/env-config";
 import {
   getCurrentUser,
   refreshSession,
@@ -32,16 +33,12 @@ export function useSupabase() {

   const supabase = useMemo(() => {
     try {
-      return createBrowserClient(
-        process.env.NEXT_PUBLIC_SUPABASE_URL!,
-        process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
-        {
-          isSingleton: true,
-          auth: {
-            persistSession: false, // Don't persist session on client with httpOnly cookies
-          },
-        },
-      );
+      return createBrowserClient(getSupabaseUrl(), getSupabaseAnonKey(), {
+        isSingleton: true,
+        auth: {
+          persistSession: false, // Don't persist session on client with httpOnly cookies
+        },
+      });
     } catch (error) {
       console.error("Error creating Supabase client", error);
       return null;
@@ -1,47 +1,43 @@
 import { createServerClient } from "@supabase/ssr";
 import { NextResponse, type NextRequest } from "next/server";
 import { getCookieSettings, isAdminPage, isProtectedPage } from "./helpers";
+import { getSupabaseUrl, getSupabaseAnonKey } from "../env-config";

 export async function updateSession(request: NextRequest) {
   let supabaseResponse = NextResponse.next({
     request,
   });

-  const isAvailable = Boolean(
-    process.env.NEXT_PUBLIC_SUPABASE_URL &&
-      process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY,
-  );
+  const supabaseUrl = getSupabaseUrl();
+  const supabaseKey = getSupabaseAnonKey();
+  const isAvailable = Boolean(supabaseUrl && supabaseKey);

   if (!isAvailable) {
     return supabaseResponse;
   }

   try {
-    const supabase = createServerClient(
-      process.env.NEXT_PUBLIC_SUPABASE_URL!,
-      process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
-      {
-        cookies: {
-          getAll() {
-            return request.cookies.getAll();
-          },
-          setAll(cookiesToSet) {
-            cookiesToSet.forEach(({ name, value }) =>
-              request.cookies.set(name, value),
-            );
-            supabaseResponse = NextResponse.next({
-              request,
-            });
-            cookiesToSet.forEach(({ name, value, options }) => {
-              supabaseResponse.cookies.set(name, value, {
-                ...options,
-                ...getCookieSettings(),
-              });
-            });
-          },
-        },
-      },
-    );
+    const supabase = createServerClient(supabaseUrl, supabaseKey, {
+      cookies: {
+        getAll() {
+          return request.cookies.getAll();
+        },
+        setAll(cookiesToSet) {
+          cookiesToSet.forEach(({ name, value }) =>
+            request.cookies.set(name, value),
+          );
+          supabaseResponse = NextResponse.next({
+            request,
+          });
+          cookiesToSet.forEach(({ name, value, options }) => {
+            supabaseResponse.cookies.set(name, value, {
+              ...options,
+              ...getCookieSettings(),
+            });
+          });
+        },
+      },
+    });

     const userResponse = await supabase.auth.getUser();
     const user = userResponse.data.user;
@@ -1,5 +1,6 @@
 import { createServerClient, type CookieOptions } from "@supabase/ssr";
 import { getCookieSettings } from "../helpers";
+import { getSupabaseUrl, getSupabaseAnonKey } from "../../env-config";

 type Cookies = { name: string; value: string; options?: CookieOptions }[];

@@ -11,8 +12,8 @@ export async function getServerSupabase() {

   try {
     const supabase = createServerClient(
-      process.env.NEXT_PUBLIC_SUPABASE_URL!,
-      process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
+      getSupabaseUrl(),
+      getSupabaseAnonKey(),
       {
         cookies: {
           getAll() {
@@ -2,6 +2,7 @@
  * Utility functions for working with Cloudflare Turnstile
  */
 import { BehaveAs, getBehaveAs } from "@/lib/utils";
+import { getAgptServerApiUrl } from "@/lib/env-config";

 export async function verifyTurnstileToken(
   token: string,
@@ -19,19 +20,16 @@ export async function verifyTurnstileToken(
   }

   try {
-    const response = await fetch(
-      `${process.env.NEXT_PUBLIC_AGPT_SERVER_URL}/turnstile/verify`,
-      {
-        method: "POST",
-        headers: {
-          "Content-Type": "application/json",
-        },
-        body: JSON.stringify({
-          token,
-          action,
-        }),
-      },
-    );
+    const response = await fetch(`${getAgptServerApiUrl()}/turnstile/verify`, {
+      method: "POST",
+      headers: {
+        "Content-Type": "application/json",
+      },
+      body: JSON.stringify({
+        token,
+        action,
+      }),
+    });

     if (!response.ok) {
       console.error("Turnstile verification failed:", await response.text());
@@ -2,7 +2,7 @@ import { LoginPage } from "./pages/login.page";
 import test, { expect } from "@playwright/test";
 import { TEST_AGENT_DATA, TEST_CREDENTIALS } from "./credentials";
 import { getSelectors } from "./utils/selectors";
-import { hasUrl } from "./utils/assertion";
+import { hasUrl, isHidden } from "./utils/assertion";

 test.describe("Agent Dashboard", () => {
   test.beforeEach(async ({ page }) => {
@@ -89,6 +89,7 @@ test.describe("Agent Dashboard", () => {
     }

     const firstRow = rows.first();
+    const deletedAgentId = await firstRow.getAttribute("data-agent-id");
     await firstRow.scrollIntoViewIfNeeded();

     const delActionsButton = firstRow.getByTestId("agent-table-row-actions");
@@ -100,9 +101,7 @@ test.describe("Agent Dashboard", () => {
     await expect(deleteButton).toBeVisible();
     await deleteButton.click();

-    // Wait for row count to drop by 1
-    await expect
-      .poll(async () => await rows.count(), { timeout: 15000 })
-      .toBe(beforeCount - 1);
+    // Assert that the card with the deleted agent ID is not visible
+    await isHidden(page.locator(`[data-agent-id="${deletedAgentId}"]`));
   });
 });
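The rewritten assertion depends on the `data-agent-id` attribute added to `AgentTableRow` above, so running just this suite is a cheap way to confirm the wiring; a sketch, assuming the grep pattern matches the describe block:

```sh
cd autogpt_platform/frontend
pnpm playwright test --grep "Agent Dashboard"
```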
@@ -35,7 +35,11 @@ test("user can publish an agent through the complete flow", async ({
   const agentToSelect = publishAgentModal.getByTestId("agent-card").first();
   await agentToSelect.click();

-  const nextButton = publishAgentModal.getByRole("button", { name: "Next" });
+  const nextButton = publishAgentModal.getByRole("button", {
+    name: "Next",
+    exact: true,
+  });

   await isEnabled(nextButton);
   await nextButton.click();

@@ -101,7 +105,10 @@ test("should validate all form fields in publish agent form", async ({
   const agentToSelect = publishAgentModal.getByTestId("agent-card").first();
   await agentToSelect.click();

-  const nextButton = publishAgentModal.getByRole("button", { name: "Next" });
+  const nextButton = publishAgentModal.getByRole("button", {
+    name: "Next",
+    exact: true,
+  });
   await nextButton.click();

   await isVisible(getText("Write a bit of details about your agent"));
@@ -1,58 +1,46 @@
@echo off
setlocal enabledelayedexpansion

goto :main
REM Variables
set SCRIPT_DIR=%~dp0
set REPO_DIR=%SCRIPT_DIR%..\..
set CLONE_NEEDED=0
set LOG_FILE=

REM --- Helper: Check command existence ---
:check_command
if "%1"=="" (
    echo ERROR: check_command called with no command argument!
    pause
    exit /b 1
)
where %1 >nul 2>nul
if errorlevel 1 (
    echo %2 is not installed. Please install it and try again.
    pause
    exit /b 1
) else (
    echo %2 is installed.
)
goto :eof

:main
echo =============================
echo    AutoGPT Windows Setup
echo =============================
echo.

REM --- Variables ---
set SCRIPT_DIR=%~dp0
set LOG_DIR=%SCRIPT_DIR%logs
set BACKEND_LOG=%LOG_DIR%\backend_setup.log
set FRONTEND_LOG=%LOG_DIR%\frontend_setup.log
set CLONE_NEEDED=0
set REPO_DIR=%SCRIPT_DIR%..\..

REM --- Create logs folder immediately ---
if not exist "%LOG_DIR%" mkdir "%LOG_DIR%"

REM Check prerequisites
echo Checking prerequisites...
call :check_command git Git
call :check_command docker Docker
call :check_command npm Node.js
call :check_command pnpm pnpm
where git >nul 2>nul
if errorlevel 1 (
    echo Git is not installed. Please install it and try again.
    pause
    exit /b 1
)
echo Git is installed.

where docker >nul 2>nul
if errorlevel 1 (
    echo Docker is not installed. Please install it and try again.
    pause
    exit /b 1
)
echo Docker is installed.
echo.

REM --- Detect repo ---
REM Detect repo
if exist "%REPO_DIR%\.git" (
    echo Using existing AutoGPT repository.
    set CLONE_NEEDED=0
) else (
    set REPO_DIR=%SCRIPT_DIR%AutoGPT
    set CLONE_NEEDED=1
)

REM --- Clone repo if needed ---
REM Clone repo if needed
if %CLONE_NEEDED%==1 (
    echo Cloning AutoGPT repository...
    git clone https://github.com/Significant-Gravitas/AutoGPT.git "%REPO_DIR%"
@@ -61,72 +49,47 @@ if %CLONE_NEEDED%==1 (
        pause
        exit /b 1
    )
) else (
    echo Using existing AutoGPT repository.
    echo Repository cloned successfully.
)
echo.

REM --- Prompt for Sentry enablement ---
set SENTRY_ENABLED=0
echo Would you like to enable debug information to be shared so we can fix your issues? [Y/n]
set /p sentry_answer="Enable Sentry? [Y/n]: "
if /I "%sentry_answer%"=="" set SENTRY_ENABLED=1
if /I "%sentry_answer%"=="y" set SENTRY_ENABLED=1
if /I "%sentry_answer%"=="yes" set SENTRY_ENABLED=1
if /I "%sentry_answer%"=="n" set SENTRY_ENABLED=0
if /I "%sentry_answer%"=="no" set SENTRY_ENABLED=0

REM --- Setup backend ---
echo Setting up backend services...
echo.
REM Navigate to autogpt_platform
cd /d "%REPO_DIR%\autogpt_platform"
if exist .env.example copy /Y .env.example .env >nul
cd backend
if exist .env.example copy /Y .env.example .env >nul

REM --- Set SENTRY_DSN in backend/.env ---
set SENTRY_DSN=https://11d0640fef35640e0eb9f022eb7d7626@o4505260022104064.ingest.us.sentry.io/4507890252447744
if %SENTRY_ENABLED%==1 (
    powershell -Command "(Get-Content .env) -replace '^SENTRY_DSN=.*', 'SENTRY_DSN=%SENTRY_DSN%' | Set-Content .env"
    echo Sentry enabled in backend.
) else (
    powershell -Command "(Get-Content .env) -replace '^SENTRY_DSN=.*', 'SENTRY_DSN=' | Set-Content .env"
    echo Sentry not enabled in backend.
)
cd ..

docker compose down > "%BACKEND_LOG%" 2>&1
if errorlevel 1 echo (docker compose down failed, continuing...)
docker compose up -d --build >> "%BACKEND_LOG%" 2>&1
if errorlevel 1 (
    echo Backend setup failed. See log: %BACKEND_LOG%
    echo Failed to navigate to autogpt_platform directory.
    pause
    exit /b 1
)
echo Backend services started successfully.
echo.

REM --- Setup frontend ---
echo Setting up frontend application...
REM Create logs directory
if not exist logs mkdir logs

REM Run docker compose with logging
echo Starting AutoGPT services with Docker Compose...
echo This may take a few minutes on first run...
echo.
cd frontend
if exist .env.example copy /Y .env.example .env >nul
call pnpm.cmd install
set LOG_FILE=%REPO_DIR%\autogpt_platform\logs\docker_setup.log
docker compose up -d > "%LOG_FILE%" 2>&1
if errorlevel 1 (
    echo pnpm install failed!
    echo Docker compose failed. Check log file for details: %LOG_FILE%
    echo.
    echo Common issues:
    echo - Docker is not running
    echo - Insufficient disk space
    echo - Port conflicts (check if ports 3000, 8000, etc. are in use)
    pause
    exit /b 1
)
echo Frontend dependencies installed successfully.
echo.

REM --- Start frontend dev server in the same terminal ---
echo Setup complete!
echo =============================
echo       Setup Complete!
echo =============================
echo.
echo Access AutoGPT at: http://localhost:3000
echo To stop services, press Ctrl+C and run "docker compose down" in %REPO_DIR%\autogpt_platform
echo API available at: http://localhost:8000
echo.
echo The frontend will now start in this terminal. Closing this window will stop the frontend.
echo Press Ctrl+C to stop the frontend at any time.
echo To stop services: docker compose down
echo To view logs: docker compose logs -f
echo.

call pnpm.cmd dev
echo Press any key to exit (services will keep running)...
pause >nul
autogpt_platform/installer/setup-autogpt.sh (325 lines, mode changed: Normal file → Executable file)
@@ -4,9 +4,7 @@
# AutoGPT Setup Script
# ------------------------------------------------------------------------------
# This script automates the installation and setup of AutoGPT on Linux systems.
# It checks prerequisites, clones the repository, sets up backend and frontend,
# configures Sentry (optional), and starts all services. Designed for clarity
# and maintainability. Run this script from a terminal.
# It checks prerequisites, clones the repository, and starts all services.
# ------------------------------------------------------------------------------

# --- Global Variables ---
@@ -14,24 +12,19 @@ GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
NC='\033[0m'

# Variables
REPO_DIR=""
CLONE_NEEDED=false
DOCKER_CMD="docker"
DOCKER_COMPOSE_CMD="docker compose"
LOG_DIR=""
SENTRY_ENABLED=0
LOG_FILE=""

# ------------------------------------------------------------------------------
# Helper Functions
# ------------------------------------------------------------------------------

# Print colored text
print_color() {
  printf "${!1}%s${NC}\n" "$2"
}

# Print the ASCII banner
print_banner() {
  print_color "BLUE" "
   d8888 888 .d8888b. 8888888b. 88888888888
@@ -45,295 +38,109 @@ d88P 888 \"Y88888 \"Y888 \"Y88P\" \"Y8888P88 888 888
"
}

# Handle errors and exit
handle_error() {
  echo ""
  print_color "RED" "Error: $1"
  print_color "YELLOW" "Press Enter to exit..."
  read -r
  if [ -n "$LOG_FILE" ] && [ -f "$LOG_FILE" ]; then
    print_color "RED" "Check log file for details: $LOG_FILE"
  fi
  exit 1
}

# ------------------------------------------------------------------------------
# Logging Functions
# ------------------------------------------------------------------------------

# Prepare log directory
setup_logs() {
  LOG_DIR="$REPO_DIR/autogpt_platform/logs"
  mkdir -p "$LOG_DIR"
}

# ------------------------------------------------------------------------------
# Health Check Functions
# ------------------------------------------------------------------------------

# Check service health by polling an endpoint
check_health() {
  local url=$1
  local expected=$2
  local name=$3
  local max_attempts=$4
  local timeout=$5

  if ! command -v curl &> /dev/null; then
    echo "curl not found. Skipping health check for $name."
    return 0
  fi

  echo "Checking $name health..."
  for ((attempt=1; attempt<=max_attempts; attempt++)); do
    echo "Attempt $attempt/$max_attempts"
    response=$(curl -s --max-time "$timeout" "$url")
    if [[ "$response" == *"$expected"* ]]; then
      echo "✓ $name is healthy"
      return 0
    fi
    echo "Waiting 5s before next attempt..."
    sleep 5
  done
  echo "✗ $name health check failed after $max_attempts attempts"
  return 1
}

# ------------------------------------------------------------------------------
# Prerequisite and Environment Functions
# ------------------------------------------------------------------------------

# Check for required commands
check_command() {
  local cmd=$1
  local name=$2
  local url=$3

  if ! command -v "$cmd" &> /dev/null; then
    handle_error "$name is not installed. Please install it and try again. Visit $url"
check_prerequisites() {
  print_color "BLUE" "Checking prerequisites..."

  if ! command -v git &> /dev/null; then
    handle_error "Git is not installed. Please install it and try again."
  else
    print_color "GREEN" "✓ $name is installed"
    print_color "GREEN" "✓ Git is installed"
  fi
}

# Check for optional commands
check_command_optional() {
  local cmd=$1
  if command -v "$cmd" &> /dev/null; then
    print_color "GREEN" "✓ $cmd is installed"

  if ! command -v docker &> /dev/null; then
    handle_error "Docker is not installed. Please install it and try again."
  else
    print_color "YELLOW" "$cmd is not installed. Some features will be skipped."
    print_color "GREEN" "✓ Docker is installed"
  fi
}

# Check Docker permissions and adjust commands if needed
check_docker_permissions() {

  if ! docker info &> /dev/null; then
    print_color "YELLOW" "Docker requires elevated privileges. Using sudo for Docker commands..."
    print_color "YELLOW" "Using sudo for Docker commands..."
    DOCKER_CMD="sudo docker"
    DOCKER_COMPOSE_CMD="sudo docker compose"
  fi

  print_color "GREEN" "All prerequisites installed!"
}

# Check all prerequisites
check_prerequisites() {
  print_color "GREEN" "AutoGPT's Automated Setup Script"
  print_color "GREEN" "-------------------------------"
  print_color "BLUE" "This script will automatically install and set up AutoGPT for you."
  echo ""
  print_color "YELLOW" "Checking prerequisites:"

  check_command git "Git" "https://git-scm.com/downloads"
  check_command docker "Docker" "https://docs.docker.com/get-docker/"
  check_docker_permissions
  check_command npm "npm (Node.js)" "https://nodejs.org/en/download/"
  check_command pnpm "pnpm (Node.js package manager)" "https://pnpm.io/installation"
  check_command_optional curl "curl"

  print_color "GREEN" "All prerequisites are installed! Starting installation..."
  echo ""
}

# Detect installation mode and set repo directory
# (Clones if not in a repo, otherwise uses current directory)
detect_installation_mode() {
detect_repo() {
  if [[ "$PWD" == */autogpt_platform/installer ]]; then
    if [[ -d "../../.git" ]]; then
      REPO_DIR="$(cd ../..; pwd)"
      CLONE_NEEDED=false
      cd ../.. || handle_error "Failed to navigate to repository root."
      cd ../.. || handle_error "Failed to navigate to repo root"
      print_color "GREEN" "Using existing AutoGPT repository."
    else
      CLONE_NEEDED=true
      REPO_DIR="$(pwd)/AutoGPT"
      cd "$(dirname \"$(dirname \"$(dirname \"$PWD\")\")\")" || handle_error "Failed to navigate to parent directory."
    fi
  elif [[ -d ".git" && -d "autogpt_platform/installer" ]]; then
    REPO_DIR="$PWD"
    CLONE_NEEDED=false
    print_color "GREEN" "Using existing AutoGPT repository."
  else
    CLONE_NEEDED=true
    REPO_DIR="$(pwd)/AutoGPT"
  fi
}

# Clone the repository if needed
clone_repository() {
clone_repo() {
  if [ "$CLONE_NEEDED" = true ]; then
    print_color "BLUE" "Cloning AutoGPT repository..."
    if git clone https://github.com/Significant-Gravitas/AutoGPT.git "$REPO_DIR"; then
      print_color "GREEN" "✓ Repo cloned successfully!"
    else
      handle_error "Failed to clone the repository."
    fi
  else
    print_color "GREEN" "Using existing AutoGPT repository"
    git clone https://github.com/Significant-Gravitas/AutoGPT.git "$REPO_DIR" || handle_error "Failed to clone repository"
    print_color "GREEN" "Repository cloned successfully."
  fi
}

# Prompt for Sentry enablement and set global flag
prompt_sentry_enablement() {
  print_color "YELLOW" "Would you like to enable debug information to be shared so we can fix your issues? [Y/n]"
  read -r sentry_answer
  case "${sentry_answer,,}" in
    ""|y|yes)
      SENTRY_ENABLED=1
      ;;
    n|no)
      SENTRY_ENABLED=0
      ;;
    *)
      print_color "YELLOW" "Invalid input. Defaulting to yes. Sentry will be enabled."
      SENTRY_ENABLED=1
      ;;
  esac
}

# ------------------------------------------------------------------------------
# Setup Functions
# ------------------------------------------------------------------------------

# Set up backend services and configure Sentry if enabled
setup_backend() {
  print_color "BLUE" "Setting up backend services..."
  cd "$REPO_DIR/autogpt_platform" || handle_error "Failed to navigate to backend directory."
  cp .env.example .env || handle_error "Failed to copy environment file."

  # Set SENTRY_DSN in backend/.env
  cd backend || handle_error "Failed to navigate to backend subdirectory."
  cp .env.example .env || handle_error "Failed to copy backend environment file."
  sentry_url="https://11d0640fef35640e0eb9f022eb7d7626@o4505260022104064.ingest.us.sentry.io/4507890252447744"
  if [ "$SENTRY_ENABLED" = "1" ]; then
    sed -i "s|^SENTRY_DSN=.*$|SENTRY_DSN=$sentry_url|" .env || echo "SENTRY_DSN=$sentry_url" >> .env
    print_color "GREEN" "Sentry enabled in backend."
run_docker() {
  cd "$REPO_DIR/autogpt_platform" || handle_error "Failed to navigate to autogpt_platform"

  print_color "BLUE" "Starting AutoGPT services with Docker Compose..."
  print_color "YELLOW" "This may take a few minutes on first run..."
  echo

  mkdir -p logs
  LOG_FILE="$REPO_DIR/autogpt_platform/logs/docker_setup.log"

  if $DOCKER_COMPOSE_CMD up -d > "$LOG_FILE" 2>&1; then
    print_color "GREEN" "✓ Services started successfully!"
  else
    sed -i "s|^SENTRY_DSN=.*$|SENTRY_DSN=|" .env || echo "SENTRY_DSN=" >> .env
    print_color "YELLOW" "Sentry not enabled in backend."
  fi
  cd .. # back to autogpt_platform

  $DOCKER_COMPOSE_CMD down || handle_error "Failed to stop existing backend services."
  $DOCKER_COMPOSE_CMD up -d --build || handle_error "Failed to start backend services."
  print_color "GREEN" "✓ Backend services started successfully"
}

# Set up frontend application
setup_frontend() {
  print_color "BLUE" "Setting up frontend application..."
  cd "$REPO_DIR/autogpt_platform/frontend" || handle_error "Failed to navigate to frontend directory."
  cp .env.example .env || handle_error "Failed to copy frontend environment file."
  corepack enable || handle_error "Failed to enable corepack."
  pnpm install || handle_error "Failed to install frontend dependencies."
  print_color "GREEN" "✓ Frontend dependencies installed successfully"
}

# Run backend and frontend setup concurrently and manage logs
run_concurrent_setup() {
  setup_logs
  backend_log="$LOG_DIR/backend_setup.log"
  frontend_log="$LOG_DIR/frontend_setup.log"

  : > "$backend_log"
  : > "$frontend_log"

  setup_backend > "$backend_log" 2>&1 &
  backend_pid=$!
  echo "Backend setup finished."

  setup_frontend > "$frontend_log" 2>&1 &
  frontend_pid=$!
  echo "Frontend setup finished."

  show_spinner "$backend_pid" "$frontend_pid"

  wait $backend_pid; backend_status=$?
  wait $frontend_pid; frontend_status=$?

  if [ $backend_status -ne 0 ]; then
    print_color "RED" "Backend setup failed. See log: $backend_log"
    print_color "RED" "Docker compose failed. Check log file for details: $LOG_FILE"
    print_color "YELLOW" "Common issues:"
    print_color "YELLOW" "- Docker is not running"
    print_color "YELLOW" "- Insufficient disk space"
    print_color "YELLOW" "- Port conflicts (check if ports 3000, 8000, etc. are in use)"
    exit 1
  fi

  if [ $frontend_status -ne 0 ]; then
    print_color "RED" "Frontend setup failed. See log: $frontend_log"
    exit 1
  fi

}

# Show a spinner while background jobs run
show_spinner() {
  local backend_pid=$1
  local frontend_pid=$2
  spin='-\|/'
  i=0
  messages=("Working..." "Still working..." "Setting up dependencies..." "Almost there...")
  msg_index=0
  msg_counter=0
  clear_line=" "

  while kill -0 $backend_pid 2>/dev/null || kill -0 $frontend_pid 2>/dev/null; do
    i=$(( (i+1) % 4 ))
    msg_counter=$(( (msg_counter+1) % 300 ))
    if [ $msg_counter -eq 0 ]; then
      msg_index=$(( (msg_index+1) % ${#messages[@]} ))
    fi
    printf "\r${clear_line}\r${YELLOW}[%c]${NC} %s" "${spin:$i:1}" "${messages[$msg_index]}"
    sleep .1
  done
  printf "\r${clear_line}\r${GREEN}[✓]${NC} Setup completed!\n"
}

# ------------------------------------------------------------------------------
# Main Entry Point
# ------------------------------------------------------------------------------

main() {
  print_banner
  print_color "GREEN" "AutoGPT Setup Script"
  print_color "GREEN" "-------------------"

  check_prerequisites
  prompt_sentry_enablement
  detect_installation_mode
  clone_repository
  setup_logs
  run_concurrent_setup

  print_color "YELLOW" "Starting frontend..."
  (cd "$REPO_DIR/autogpt_platform/frontend" && pnpm dev > "$LOG_DIR/frontend_dev.log" 2>&1 &)

  print_color "YELLOW" "Waiting for services to start..."
  sleep 20

  print_color "YELLOW" "Verifying services health..."
  check_health "http://localhost:8006/health" "\"status\":\"healthy\"" "Backend" 6 15
  check_health "http://localhost:3000/health" "Yay im healthy" "Frontend" 6 15

  if [ $backend_status -ne 0 ] || [ $frontend_status -ne 0 ]; then
    print_color "RED" "Setup failed. See logs for details."
    exit 1
  fi

  print_color "GREEN" "Setup complete!"
  print_color "BLUE" "Access AutoGPT at: http://localhost:3000"
  print_color "YELLOW" "To stop services, press Ctrl+C and run 'docker compose down' in $REPO_DIR/autogpt_platform"
  echo ""
  print_color "GREEN" "Press Enter to exit (services will keep running)..."
  read -r
  detect_repo
  clone_repo
  run_docker

  echo
  print_color "GREEN" "============================="
  print_color "GREEN" "     Setup Complete!"
  print_color "GREEN" "============================="
  echo
  print_color "BLUE" "🚀 Access AutoGPT at: http://localhost:3000"
  print_color "BLUE" "📡 API available at: http://localhost:8000"
  echo
  print_color "YELLOW" "To stop services: docker compose down"
  print_color "YELLOW" "To view logs: docker compose logs -f"
  echo
  print_color "YELLOW" "All commands should be run in: $REPO_DIR/autogpt_platform"
}

main
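After the rewrite the script does little beyond prerequisite checks, cloning, and `docker compose up -d`, so it can be re-run safely. Typical invocation from a fresh checkout (paths per the repository layout):

```sh
cd autogpt_platform/installer
./setup-autogpt.sh   # the file is now tracked as executable
```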
@@ -20,11 +20,11 @@ KEY2=value2

 The server will automatically load the `.env` file when it starts. You can also set the environment variables directly in your shell. Refer to your operating system's documentation on how to set environment variables in the current session.

-The valid options are listed in `.env.example` in the root of the builder and server directories. You can copy the `.env.example` file to `.env` and modify the values as needed.
+The valid options are listed in `.env.default` in the root of the builder and server directories. You can copy the `.env.default` file to `.env` and modify the values as needed.

 ```bash
-# Copy the .env.example file to .env
-cp .env.example .env
+# Copy the .env.default file to .env
+cp .env.default .env
 ```

 ### Secrets directory
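Since the `.env` file is only read at startup, one-off overrides can equally be passed through the shell. An illustrative sketch, assuming the backend's `poetry run app` entrypoint:

```sh
# Override a single setting for one run without editing .env
BACKEND_CORS_ALLOW_ORIGINS='["http://localhost:3000"]' poetry run app
```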
@@ -88,17 +88,17 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
    ```sh
    poetry shell
    ```

 3. Install dependencies

    ```sh
    poetry install
    ```

-4. Copy .env.example to .env
+4. Copy .env.default to .env

    ```sh
-   cp .env.example .env
+   cp .env.default .env
    ```

 5. Generate the Prisma client
@@ -106,7 +106,6 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
    ```sh
    poetry run prisma generate
    ```
-
 > In case Prisma generates the client for the global Python installation instead of the virtual environment, the current mitigation is to just uninstall the global Prisma package:
 >
@@ -114,7 +113,7 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
 > pip uninstall prisma
 > ```
 >
-> Then run the generation again. The path *should* look something like this:
+> Then run the generation again. The path _should_ look something like this:
 > `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`

 6. Run the postgres database from the /rnd folder
@@ -107,53 +107,28 @@ If you get stuck, follow [this guide](https://docs.github.com/en/repositories/cr

 Once that's complete you can continue the setup process.

-### Running the backend services
+### Running the AutoGPT Platform

-To run the backend services, follow these steps:
+To run the platform, follow these steps:

 * Navigate to the `autogpt_platform` directory inside the AutoGPT folder:
   ```bash
   cd AutoGPT/autogpt_platform
   ```

-* Copy the `.env.example` file to `.env` in `autogpt_platform`:
-  ```
-  cp .env.example .env
-  ```
-  This command will copy the `.env.example` file to `.env` in the `supabase` directory. You can modify the `.env` file to add your own environment variables.
+- Copy the `.env.default` file to `.env` in `autogpt_platform`:

-* Run the backend services:
-  ```
+  ```
+  cp .env.default .env
+  ```

+  This command will copy the `.env.default` file to `.env` in the `autogpt_platform` directory. You can modify the `.env` file to add your own environment variables.

+- Run the platform services:
+  ```
   docker compose up -d --build
   ```
-  This command will start all the necessary backend services defined in the `docker-compose.combined.yml` file in detached mode.
-
-### Running the frontend application
-
-To run the frontend application open a new terminal and follow these steps:
-
-- Navigate to `frontend` folder within the `autogpt_platform` directory:
-
-  ```
-  cd frontend
-  ```
-
-- Copy the `.env.example` file available in the `frontend` directory to `.env` in the same directory:
-
-  ```
-  cp .env.example .env
-  ```
-
-  You can modify the `.env` within this folder to add your own environment variables for the frontend application.
-
-- Run the following command:
-  ```
-  corepack enable
-  pnpm install
-  pnpm dev
-  ```
-  This command will enable corepack, install the necessary dependencies with pnpm, and start the frontend application in development mode.
+  This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.

 ### Checking if the application is running

@@ -185,127 +160,6 @@ poetry run cli gen-encrypt-key
|
||||
|
||||
Then, replace the existing key in the `autogpt_platform/backend/.env` file with the new one.
|
||||
|
||||
!!! Note
|
||||
*The steps below are an alternative to [Running the backend services](#running-the-backend-services)*
|
||||
|
||||
<details>
<summary><strong>Alternate Steps</strong></summary>

#### AutoGPT Agent Server (OLD)
This is an initial project for creating the next generation of agent execution, which is an AutoGPT agent server.
The agent server will enable the creation of composite multi-agent systems that utilize AutoGPT agents and other non-agent components as its primitives.

##### Docs

You can access the docs for the [AutoGPT Agent Server here](https://docs.agpt.co/#1-autogpt-server).

##### Setup

We use Poetry to manage the dependencies. To set up the project, follow these steps inside this directory:
0. Install Poetry

   ```sh
   pip install poetry
   ```

1. Configure Poetry to use .venv in your project directory

   ```sh
   poetry config virtualenvs.in-project true
   ```

2. Enter the poetry shell

   ```sh
   poetry shell
   ```

3. Install dependencies

   ```sh
   poetry install
   ```

4. Copy .env.example to .env

   ```sh
   cp .env.example .env
   ```

5. Generate the Prisma client

   ```sh
   poetry run prisma generate
   ```

   > In case Prisma generates the client for the global Python installation instead of the virtual environment, the current mitigation is to just uninstall the global Prisma package:
   >
   > ```sh
   > pip uninstall prisma
   > ```
   >
   > Then run the generation again. The path *should* look something like this:
   > `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`
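   > To confirm which `prisma` binary is being picked up (a quick sanity check; this assumes a Unix-like shell where `which` is available):
   >
   > ```sh
   > poetry run which prisma
   > ```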
6. Migrate the database. Be careful because this deletes current data in the database.

   ```sh
   docker compose up db -d
   poetry run prisma migrate deploy
   ```
</details>
### Starting the AutoGPT server without Docker

To run the server locally, start in the autogpt_platform folder:

```sh
cd ..
```

Run the following commands to run the database in Docker but the application locally:

```sh
docker compose --profile local up deps --build --detach
cd backend
poetry run app
```
### Starting the AutoGPT server with Docker

Run the following command to build the Docker images:

```sh
docker compose build
```

Run the following command to run the app:

```sh
docker compose up
```

In another terminal, run the following to automatically rebuild when code changes:

```sh
docker compose watch
```

Run the following command to shut down:

```sh
docker compose down
```

If you run into issues with dangling orphans, try:

```sh
docker compose down --volumes --remove-orphans && docker compose up --force-recreate --renew-anon-volumes --remove-orphans
```
### 📌 Windows Installation Note

When installing Docker on Windows, it is **highly recommended** to select **WSL 2** instead of Hyper-V. Using Hyper-V can cause compatibility issues with Supabase, leading to the `supabase-db` container being marked as **unhealthy**.

For more details, refer to [Docker's official documentation](https://docs.docker.com/).
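To confirm that your distributions are running under WSL 2, you can use the standard WSL version listing (run in PowerShell):

```sh
wsl -l -v
```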
## Development
### Frontend Development

#### Running the frontend locally
To run the frontend locally, you need to have Node.js and PNPM installed on your machine.

Install [Node.js](https://nodejs.org/en/download/) to manage dependencies and run the frontend application.

Install [PNPM](https://pnpm.io/installation) to manage the frontend dependencies.

Run the service dependencies (backend, database, message queues, etc.):

```sh
docker compose --profile local up deps_backend --build --detach
```

Go to the `autogpt_platform/frontend` directory:

```sh
cd frontend
```

Install the dependencies:

```sh
pnpm install
```

Generate the API client:

```sh
pnpm generate:api-client
```

Run the frontend application:

```sh
pnpm dev
```
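The dev server prints its URL in the terminal output; frontend dev servers commonly listen on `http://localhost:3000`, but confirm against what `pnpm dev` reports.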
#### Formatting & Linting

Auto formatter and linter are set up in the project. To run them:

Format the code:

```sh
pnpm format
```

Lint the code:

```sh
pnpm lint
```

#### Testing

To run the tests, you can use the following command:

```sh
pnpm test
```
### Backend Development

#### Running the backend locally

To run the backend locally, you need to have Python 3.10 or higher installed on your machine.

Install [Poetry](https://python-poetry.org/docs/#installation) to manage dependencies and virtual environments.

Run the backend dependencies (database, message queues, etc.):

```sh
docker compose --profile local up deps --build --detach
```

Go to the `autogpt_platform/backend` directory:

```sh
cd backend
```

Install the dependencies:

```sh
poetry install --with dev
```

Run the backend server:

```sh
poetry run app
```
#### Formatting & Linting

Auto formatter and linter are set up in the project. To run them:

Format the code:

```sh
poetry run format
```

Lint the code:

```sh
poetry run lint
```
#### Testing

To run the tests:

```sh
poetry run pytest -s
```
To update stored snapshots after intentional API changes:

```sh
pytest --snapshot-update
```
## Project Outline

The current project has the following main modules:

#### **blocks**

This module stores all the Agent Blocks, which are reusable components to build a graph that represents the agent's behavior.

#### **data**

This module stores the logical model that is persisted in the database.
It abstracts the database operations into functions that can be called by the service layer.
Any code that interacts with Prisma objects or the database should reside in this module.
The main models are:
* `block`: anything related to the block used in the graph
* `execution`: anything related to graph execution
* `graph`: anything related to the graph, node, and its relations
#### **execution**

This module stores the business logic of executing the graph.
It currently has the following main modules:
* `manager`: A service that consumes the graph-execution queue and executes the graph. It contains both the queue-consuming and the graph-executing logic.
* `scheduler`: A service that triggers scheduled graph execution based on a cron expression. It pushes an execution request to the manager.
#### **server**

This module stores the logic for the server API.
It contains all the logic used for the API that allows the client to create, execute, and monitor the graph and its execution.
This API service interacts with other services like those defined in `manager` and `scheduler`.

#### **utils**

This module stores utility functions that are used across the project.
Currently, it has two main modules:
* `process`: A module that contains the logic to spawn a new process.
* `service`: A module that serves as a parent class for all the services in the project.
## Service Communication

Currently, there are only 3 active services:

- AgentServer (the API, defined in `server.py`)
- ExecutionManager (the executor, defined in `manager.py`)
- Scheduler (the scheduler, defined in `scheduler.py`)

The services run in independent Python processes and communicate through IPC.
A communication layer (`service.py`) is created to decouple the communication library from the implementation.

Currently, the IPC is done using Pyro5 and abstracted in a way that allows a function decorated with `@expose` to be called from a different process.
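As a rough sketch of the underlying pattern — this is plain Pyro5, not the project's `service.py` abstraction, and the class and method names below are made up for illustration:

```python
import Pyro5.api


@Pyro5.api.expose
class EchoService:
    # Any method on an exposed class can be invoked from another process.
    def echo(self, message: str) -> str:
        return f"echo: {message}"


# Server process: register the object with a daemon and serve requests.
daemon = Pyro5.api.Daemon()
uri = daemon.register(EchoService)  # yields a PYRO:... URI
print(f"EchoService available at {uri}")
daemon.requestLoop()

# A client process would then call it via: Pyro5.api.Proxy(uri).echo("hello")
```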
## Adding a New Agent Block

To add a new agent block, you need to create a new class that inherits from `Block` and provides the following information (a minimal sketch follows the list below):
* `run` method: the main logic of the block.
* `test_input` & `test_output`: the sample input and output data for the block, which will be used to auto-test the block.
* You can mock the functions declared in the block using the `test_mock` field for your unit tests.
* Once you finish creating the block, you can test it by running `poetry run pytest backend/blocks/test/test_block.py -s`.
* Create a Pull Request to the `dev` branch of the repository with your changes so you can share it with the community :)
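For orientation, here is a toy block in that shape. The import path, base-class details, and field semantics are assumptions for illustration only — mirror an existing block under `backend/blocks/` for the real API:

```python
# Illustrative sketch only; the actual Block base class and its schema
# handling in this repo may differ from what is shown here.
from backend.data.block import Block  # assumed import path


class ReverseTextBlock(Block):
    """Toy block that reverses the text it receives."""

    # Sample data the auto-test harness can feed through the block.
    test_input = {"text": "hello"}
    test_output = {"reversed": "olleh"}
    # test_mock can stub out functions the block calls (e.g. network I/O).
    test_mock = {}

    def run(self, input_data: dict) -> dict:
        # The main logic of the block.
        return {"reversed": input_data["text"][::-1]}
```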