Compare commits


3 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| copilot-swe-agent[bot] | 9a61b45644 | feat: Complete container publishing implementation with deployment tools and templates (Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>) | 2025-09-17 18:28:23 +00:00 |
| copilot-swe-agent[bot] | 1d207a9b52 | feat: Add platform container publishing infrastructure and deployment guides (Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>) | 2025-09-17 18:25:14 +00:00 |
| copilot-swe-agent[bot] | 7f01df5bee | Initial plan | 2025-09-17 18:14:42 +00:00 |
12 changed files with 3497 additions and 3 deletions


@@ -0,0 +1,113 @@
name: Platform - Container Publishing
on:
release:
types: [published]
workflow_dispatch:
inputs:
no_cache:
type: boolean
description: 'Build from scratch, without using cached layers'
default: false
registry:
type: choice
description: 'Container registry to publish to'
options:
- 'both'
- 'ghcr'
- 'dockerhub'
default: 'both'
env:
GHCR_REGISTRY: ghcr.io
GHCR_IMAGE_BASE: ${{ github.repository_owner }}/autogpt-platform
DOCKERHUB_IMAGE_BASE: ${{ secrets.DOCKER_USER }}/autogpt-platform
permissions:
contents: read
packages: write
jobs:
build-and-publish:
runs-on: ubuntu-latest
strategy:
matrix:
component: [backend, frontend]
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: inputs.registry == 'both' || inputs.registry == 'ghcr' || github.event_name == 'release'
uses: docker/login-action@v3
with:
registry: ${{ env.GHCR_REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Log in to Docker Hub
if: (inputs.registry == 'both' || inputs.registry == 'dockerhub' || github.event_name == 'release') && secrets.DOCKER_USER
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.GHCR_REGISTRY }}/${{ env.GHCR_IMAGE_BASE }}-${{ matrix.component }}
${{ secrets.DOCKER_USER && format('{0}-{1}', env.DOCKERHUB_IMAGE_BASE, matrix.component) || '' }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,value=latest,enable={{is_default_branch}}
- name: Set build context and dockerfile for backend
if: matrix.component == 'backend'
run: |
echo "BUILD_CONTEXT=." >> $GITHUB_ENV
echo "DOCKERFILE=autogpt_platform/backend/Dockerfile" >> $GITHUB_ENV
echo "BUILD_TARGET=server" >> $GITHUB_ENV
- name: Set build context and dockerfile for frontend
if: matrix.component == 'frontend'
run: |
echo "BUILD_CONTEXT=." >> $GITHUB_ENV
echo "DOCKERFILE=autogpt_platform/frontend/Dockerfile" >> $GITHUB_ENV
echo "BUILD_TARGET=prod" >> $GITHUB_ENV
- name: Build and push container image
uses: docker/build-push-action@v6
with:
context: ${{ env.BUILD_CONTEXT }}
file: ${{ env.DOCKERFILE }}
target: ${{ env.BUILD_TARGET }}
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
          cache-from: ${{ !inputs.no_cache && format('type=gha,scope=platform-{0}', matrix.component) || '' }}
cache-to: type=gha,scope=platform-${{ matrix.component }},mode=max
- name: Generate build summary
run: |
echo "## 🐳 Container Build Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Component:** ${{ matrix.component }}" >> $GITHUB_STEP_SUMMARY
echo "**Registry:** ${{ inputs.registry || 'both' }}" >> $GITHUB_STEP_SUMMARY
echo "**Tags:** ${{ steps.meta.outputs.tags }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Images Published:" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo "${{ steps.meta.outputs.tags }}" | sed 's/,/\n/g' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY


@@ -0,0 +1,389 @@
# AutoGPT Platform Container Publishing
This document describes the container publishing infrastructure and deployment options for the AutoGPT Platform.
## Published Container Images
### GitHub Container Registry (GHCR) - Recommended
- **Backend**: `ghcr.io/significant-gravitas/autogpt-platform-backend`
- **Frontend**: `ghcr.io/significant-gravitas/autogpt-platform-frontend`
### Docker Hub
- **Backend**: `significantgravitas/autogpt-platform-backend`
- **Frontend**: `significantgravitas/autogpt-platform-frontend`
## Available Tags
- `latest` - Latest stable release from master branch
- `v1.0.0`, `v1.1.0`, etc. - Specific version releases
- `master` - Latest development build (use with caution)
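For a release tag such as `v1.2.3`, the semver patterns in the publishing workflow produce the nested tag set `1.2.3`, `1.2`, and `1`, so pulling `1.2` always tracks the latest patch release of that series. A minimal sketch of that expansion (illustrative helper only; the real expansion is performed by `docker/metadata-action`):

```bash
# Expand a release tag like v1.2.3 into the tag set described above.
# Illustrative only: docker/metadata-action does this in the actual workflow.
expand_tags() {
  local v="${1#v}"              # strip a leading "v" -> 1.2.3
  local major="${v%%.*}"        # 1
  local rest="${v#*.}"          # 2.3
  local minor="${rest%%.*}"     # 2
  echo "$v ${major}.${minor} ${major}"
}
expand_tags v1.2.3   # -> 1.2.3 1.2 1
```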
## Quick Start
### Using Docker Compose (Recommended)
```bash
# Clone the repository (or just download the compose file)
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform
# Deploy with published images
./deploy.sh deploy
```
### Manual Docker Run
```bash
# Start dependencies first
docker network create autogpt
# PostgreSQL
docker run -d --name postgres --network autogpt \
-e POSTGRES_DB=autogpt \
-e POSTGRES_USER=autogpt \
-e POSTGRES_PASSWORD=password \
-v postgres_data:/var/lib/postgresql/data \
postgres:15
# Redis
docker run -d --name redis --network autogpt \
-v redis_data:/data \
redis:7-alpine redis-server --requirepass password
# RabbitMQ
docker run -d --name rabbitmq --network autogpt \
-e RABBITMQ_DEFAULT_USER=autogpt \
-e RABBITMQ_DEFAULT_PASS=password \
-p 15672:15672 \
rabbitmq:3-management
# Backend
docker run -d --name backend --network autogpt \
-p 8000:8000 \
-e DATABASE_URL=postgresql://autogpt:password@postgres:5432/autogpt \
-e REDIS_HOST=redis \
-e RABBITMQ_HOST=rabbitmq \
ghcr.io/significant-gravitas/autogpt-platform-backend:latest
# Frontend
docker run -d --name frontend --network autogpt \
-p 3000:3000 \
-e AGPT_SERVER_URL=http://localhost:8000/api \
ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
```
## Deployment Scripts
### Deploy Script
The included `deploy.sh` script provides a complete deployment solution:
```bash
# Basic deployment
./deploy.sh deploy
# Deploy specific version
./deploy.sh -v v1.0.0 deploy
# Deploy from Docker Hub
./deploy.sh -r docker.io deploy
# Production deployment
./deploy.sh -p production deploy
# Other operations
./deploy.sh start # Start services
./deploy.sh stop # Stop services
./deploy.sh restart # Restart services
./deploy.sh update # Update to latest
./deploy.sh backup # Create backup
./deploy.sh status # Show status
./deploy.sh logs # Show logs
./deploy.sh cleanup # Remove everything
```
## Platform-Specific Deployment Guides
### Unraid
See [Unraid Deployment Guide](../docs/content/platform/deployment/unraid.md)
Key features:
- Community Applications template
- Web UI management
- Automatic updates
- Built-in backup system
### Home Assistant Add-on
See [Home Assistant Add-on Guide](../docs/content/platform/deployment/home-assistant.md)
Key features:
- Native Home Assistant integration
- Automation services
- Entity monitoring
- Backup integration
### Kubernetes
See [Kubernetes Deployment Guide](../docs/content/platform/deployment/kubernetes.md)
Key features:
- Helm charts
- Horizontal scaling
- Health checks
- Persistent volumes
## Container Architecture
### Backend Container
- **Base Image**: `debian:13-slim`
- **Runtime**: Python 3.13 with Poetry
- **Services**: REST API, WebSocket, Executor, Scheduler, Database Manager, Notification
- **Ports**: 8000-8007 (depending on service)
- **Health Check**: `GET /health`
### Frontend Container
- **Base Image**: `node:21-alpine`
- **Runtime**: Next.js production build
- **Port**: 3000
- **Health Check**: HTTP 200 on root path
## Environment Configuration
### Required Environment Variables
#### Backend
```env
DATABASE_URL=postgresql://user:pass@host:5432/db
REDIS_HOST=redis
RABBITMQ_HOST=rabbitmq
JWT_SECRET=your-secret-key
```
#### Frontend
```env
AGPT_SERVER_URL=http://backend:8000/api
SUPABASE_URL=http://auth:8000
```
### Optional Configuration
```env
# Logging
LOG_LEVEL=INFO
ENABLE_DEBUG=false
# Performance
REDIS_PASSWORD=your-redis-password
RABBITMQ_PASSWORD=your-rabbitmq-password
# Security
CORS_ORIGINS=http://localhost:3000
```
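Before starting the stack, it can help to fail fast when a required variable is missing. A minimal bash sketch (the `require_env` helper and the commented variable list are illustrative, not part of the platform):

```bash
#!/usr/bin/env bash
# Fail fast if any required variable is unset or empty (illustrative helper).
require_env() {
  local name
  for name in "$@"; do
    if [ -z "${!name:-}" ]; then   # bash indirect expansion
      echo "Missing required variable: $name" >&2
      return 1
    fi
  done
}

# Backend essentials from the list above:
# require_env DATABASE_URL REDIS_HOST RABBITMQ_HOST JWT_SECRET
```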
## CI/CD Pipeline
### GitHub Actions Workflow
The publishing workflow (`.github/workflows/platform-container-publish.yml`) automatically:
1. **Triggers** on releases and manual dispatch
2. **Builds** backend and frontend containers for `linux/amd64` and `linux/arm64`
3. **Publishes** to GHCR and/or Docker Hub, depending on the selected registry
4. **Tags** images with the release version and `latest`
5. **Summarizes** the published images in the workflow run
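With the `workflow_dispatch` inputs defined in the workflow, a publish can also be kicked off from the command line, assuming the GitHub CLI is installed and authenticated:

```bash
# Manually dispatch the publishing workflow (input names match the workflow)
gh workflow run platform-container-publish.yml \
  -f registry=ghcr \
  -f no_cache=false
```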
### Manual Publishing
```bash
# Build and tag locally
docker build -t ghcr.io/significant-gravitas/autogpt-platform-backend:latest \
-f autogpt_platform/backend/Dockerfile \
--target server .
docker build -t ghcr.io/significant-gravitas/autogpt-platform-frontend:latest \
-f autogpt_platform/frontend/Dockerfile \
--target prod .
# Push to registry
docker push ghcr.io/significant-gravitas/autogpt-platform-backend:latest
docker push ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
```
## Security Considerations
### Container Security
1. **Non-root users** - Containers run as non-root
2. **Minimal base images** - Using slim/alpine images
3. **No secrets in images** - All secrets via environment variables
4. **Read-only filesystem** - Where possible
5. **Resource limits** - CPU and memory limits set
### Deployment Security
1. **Network isolation** - Use dedicated networks
2. **TLS encryption** - Enable HTTPS in production
3. **Secret management** - Use Docker secrets or external secret stores
4. **Regular updates** - Keep images updated
5. **Vulnerability scanning** - Regular security scans
## Monitoring
### Health Checks
All containers include health checks:
```bash
# Check container health
docker inspect --format='{{.State.Health.Status}}' container_name
# Manual health check
curl http://localhost:8000/health
```
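Scripts that depend on these endpoints usually need to wait for them to come up rather than probe once. A generic readiness sketch (the helper is illustrative; the health URL in the comment is the backend endpoint above):

```bash
# Retry a check command until it succeeds or the attempt budget runs out.
# Illustrative helper; not part of the platform tooling.
wait_healthy() {
  local tries="$1"; shift
  local i=1
  while [ "$i" -le "$tries" ]; do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Example: wait_healthy 12 curl -fsS http://localhost:8000/health
```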
### Metrics
The backend exposes Prometheus metrics at `/metrics`:
```bash
curl http://localhost:8000/metrics
```
### Logging
Containers log to stdout/stderr for easy aggregation:
```bash
# View logs
docker logs container_name
# Follow logs
docker logs -f container_name
# Aggregate logs
docker compose logs -f
```
## Troubleshooting
### Common Issues
1. **Container won't start**
```bash
# Check logs
docker logs container_name
# Check environment
docker exec container_name env
```
2. **Database connection failed**
```bash
# Test connectivity
docker exec backend ping postgres
# Check database status
docker exec postgres pg_isready
```
3. **Port conflicts**
```bash
# Check port usage
ss -tuln | grep :3000
# Use different ports
docker run -p 3001:3000 ...
```
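When a default port is taken, picking the next free one can be automated. A small sketch using the same `ss` probe as above (illustrative helper; assumes `ss` is available):

```bash
# Return the first free TCP port at or above a base port (sketch; relies on ss).
find_free_port() {
  local port="$1"
  while ss -tuln 2>/dev/null | grep -q ":$port "; do
    port=$((port + 1))
  done
  echo "$port"
}

# Example: docker run -p "$(find_free_port 3000)":3000 ...
```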
### Debug Mode
Enable debug mode for detailed logging:
```env
LOG_LEVEL=DEBUG
ENABLE_DEBUG=true
```
## Performance Optimization
### Resource Limits
```yaml
# Docker Compose
services:
backend:
deploy:
resources:
limits:
memory: 2G
cpus: '1.0'
reservations:
memory: 1G
cpus: '0.5'
```
### Scaling
```bash
# Scale backend services
docker compose up -d --scale backend=3
# Or use Docker Swarm
docker service scale backend=3
```
## Backup and Recovery
### Data Backup
```bash
# Database backup
docker exec postgres pg_dump -U autogpt autogpt > backup.sql
# Volume backup
docker run --rm -v postgres_data:/data -v $(pwd):/backup \
alpine tar czf /backup/postgres_backup.tar.gz /data
```
### Restore
```bash
# Database restore
docker exec -i postgres psql -U autogpt autogpt < backup.sql
# Volume restore
docker run --rm -v postgres_data:/data -v $(pwd):/backup \
alpine tar xzf /backup/postgres_backup.tar.gz -C /
```
## Support
- **Documentation**: [Platform Docs](../docs/content/platform/)
- **Issues**: [GitHub Issues](https://github.com/Significant-Gravitas/AutoGPT/issues)
- **Discord**: [AutoGPT Community](https://discord.gg/autogpt)
- **Docker Hub**: [Container Registry](https://hub.docker.com/r/significantgravitas/)
## Contributing
To contribute to the container infrastructure:
1. **Test locally** with `docker build` and `docker run`
2. **Update documentation** if making changes
3. **Test deployment scripts** on your platform
4. **Submit PR** with clear description of changes
## Roadmap
- [ ] ARM64 support for Apple Silicon
- [ ] Helm charts for Kubernetes
- [ ] Official Unraid template
- [ ] Home Assistant Add-on store submission
- [ ] Multi-stage builds optimization
- [ ] Security scanning integration
- [ ] Performance benchmarking


@@ -2,16 +2,38 @@
Welcome to the AutoGPT Platform - a powerful system for creating and running AI agents to solve business problems. This platform enables you to harness the power of artificial intelligence to automate tasks, analyze data, and generate insights for your organization.
## Getting Started
## Deployment Options
### Quick Deploy with Published Containers (Recommended)
The fastest way to get started is using our pre-built containers:
```bash
# Download and run with published images
curl -fsSL https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpt_platform/deploy.sh -o deploy.sh
chmod +x deploy.sh
./deploy.sh deploy
```
Access the platform at http://localhost:3000 after deployment completes.
### Platform-Specific Deployments
- **Unraid**: [Deployment Guide](../docs/content/platform/deployment/unraid.md)
- **Home Assistant**: [Add-on Guide](../docs/content/platform/deployment/home-assistant.md)
- **Kubernetes**: [K8s Deployment](../docs/content/platform/deployment/kubernetes.md)
- **General Containers**: [Container Guide](../docs/content/platform/container-deployment.md)
## Development Setup
### Prerequisites
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
### Running from Source
To run the AutoGPT Platform from source for development:
1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:
@@ -157,3 +179,28 @@ If you need to update the API client after making changes to the backend API:
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.
## Container Deployment
For production deployments and specific platforms, see our container deployment guides:
- **[Container Deployment Overview](CONTAINERS.md)** - Complete guide to using published containers
- **[Deployment Script](deploy.sh)** - Automated deployment and management tool
- **[Published Images](docker-compose.published.yml)** - Docker Compose for published containers
### Published Container Images
- **Backend**: `ghcr.io/significant-gravitas/autogpt-platform-backend:latest`
- **Frontend**: `ghcr.io/significant-gravitas/autogpt-platform-frontend:latest`
### Quick Production Deployment
```bash
# Deploy with published containers
./deploy.sh deploy
# Or use the published compose file directly
docker compose -f docker-compose.published.yml up -d
```
For detailed deployment instructions, troubleshooting, and platform-specific guides, see the [Container Documentation](CONTAINERS.md).

autogpt_platform/build-test.sh (executable file, 262 lines)

@@ -0,0 +1,262 @@
#!/bin/bash
# AutoGPT Platform Container Build Test Script
# This script tests container builds locally before CI/CD
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
REGISTRY="ghcr.io"
IMAGE_PREFIX="significant-gravitas/autogpt-platform"
VERSION="test"
BUILD_ARGS=""
# Functions
info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1"
}
usage() {
cat << EOF
AutoGPT Platform Container Build Test Script
Usage: $0 [OPTIONS] [COMPONENT]
COMPONENTS:
backend Build backend container only
frontend Build frontend container only
all Build both containers (default)
OPTIONS:
-r, --registry REGISTRY Container registry (default: ghcr.io)
-t, --tag TAG Image tag (default: test)
--no-cache Build without cache
--push Push images after build
-h, --help Show this help message
EXAMPLES:
$0 # Build both containers
$0 backend # Build backend only
$0 --no-cache all # Build without cache
$0 --push frontend # Build and push frontend
EOF
}
check_docker() {
if ! command -v docker &> /dev/null; then
error "Docker is not installed"
exit 1
fi
if ! docker info &> /dev/null; then
error "Docker daemon is not running"
exit 1
fi
success "Docker is available"
}
build_backend() {
info "Building backend container..."
local image_name="$REGISTRY/$IMAGE_PREFIX-backend:$VERSION"
local dockerfile="autogpt_platform/backend/Dockerfile"
info "Building: $image_name"
info "Dockerfile: $dockerfile"
info "Context: ."
info "Target: server"
if docker build \
-t "$image_name" \
-f "$dockerfile" \
--target server \
$BUILD_ARGS \
.; then
success "Backend container built successfully: $image_name"
# Test the container
info "Testing backend container..."
if docker run --rm -d --name autogpt-backend-test "$image_name" > /dev/null; then
sleep 5
if docker ps | grep -q autogpt-backend-test; then
success "Backend container is running"
docker stop autogpt-backend-test > /dev/null
else
warning "Backend container started but may have issues"
fi
else
warning "Failed to start backend container for testing"
fi
return 0
else
error "Backend container build failed"
return 1
fi
}
build_frontend() {
info "Building frontend container..."
local image_name="$REGISTRY/$IMAGE_PREFIX-frontend:$VERSION"
local dockerfile="autogpt_platform/frontend/Dockerfile"
info "Building: $image_name"
info "Dockerfile: $dockerfile"
info "Context: ."
info "Target: prod"
if docker build \
-t "$image_name" \
-f "$dockerfile" \
--target prod \
$BUILD_ARGS \
.; then
success "Frontend container built successfully: $image_name"
# Test the container
info "Testing frontend container..."
if docker run --rm -d --name autogpt-frontend-test -p 3001:3000 "$image_name" > /dev/null; then
sleep 10
if docker ps | grep -q autogpt-frontend-test; then
if curl -s -o /dev/null -w "%{http_code}" http://localhost:3001 | grep -q "200\|302\|404"; then
success "Frontend container is responding"
else
warning "Frontend container started but not responding to HTTP requests"
fi
docker stop autogpt-frontend-test > /dev/null
else
warning "Frontend container started but may have issues"
fi
else
warning "Failed to start frontend container for testing"
fi
return 0
else
error "Frontend container build failed"
return 1
fi
}
push_images() {
if [[ "$PUSH_IMAGES" == "true" ]]; then
info "Pushing images to registry..."
local backend_image="$REGISTRY/$IMAGE_PREFIX-backend:$VERSION"
local frontend_image="$REGISTRY/$IMAGE_PREFIX-frontend:$VERSION"
for image in "$backend_image" "$frontend_image"; do
            if docker image inspect "$image" > /dev/null 2>&1; then
info "Pushing $image..."
if docker push "$image"; then
success "Pushed $image"
else
error "Failed to push $image"
fi
fi
done
fi
}
show_images() {
info "Built images:"
docker images | grep "$IMAGE_PREFIX" | grep "$VERSION"
}
cleanup_test_containers() {
# Clean up any test containers that might be left running
docker ps -a | grep "autogpt-.*-test" | awk '{print $1}' | xargs -r docker rm -f > /dev/null 2>&1 || true
}
# Parse command line arguments
COMPONENT="all"
PUSH_IMAGES="false"
while [[ $# -gt 0 ]]; do
case $1 in
-r|--registry)
REGISTRY="$2"
shift 2
;;
-t|--tag)
VERSION="$2"
shift 2
;;
--no-cache)
BUILD_ARGS="$BUILD_ARGS --no-cache"
shift
;;
--push)
PUSH_IMAGES="true"
shift
;;
-h|--help)
usage
exit 0
;;
backend|frontend|all)
COMPONENT="$1"
shift
;;
*)
error "Unknown option: $1"
usage
exit 1
;;
esac
done
# Main execution
info "AutoGPT Platform Container Build Test"
info "Component: $COMPONENT"
info "Registry: $REGISTRY"
info "Tag: $VERSION"
check_docker
cleanup_test_containers
# Build containers based on component selection
case "$COMPONENT" in
backend)
build_backend
;;
frontend)
build_frontend
;;
all)
if build_backend && build_frontend; then
success "All containers built successfully"
else
error "Some container builds failed"
exit 1
fi
;;
esac
push_images
show_images
cleanup_test_containers
success "Build test completed successfully"

autogpt_platform/deploy.sh (executable file, 480 lines)

@@ -0,0 +1,480 @@
#!/bin/bash
# AutoGPT Platform Deployment Script
# This script deploys AutoGPT Platform using published container images
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
COMPOSE_FILE="docker-compose.published.yml"
ENV_FILE=".env"
BACKUP_DIR="backups"
LOG_FILE="deploy.log"
# Default values
REGISTRY="ghcr.io"
IMAGE_PREFIX="significant-gravitas/autogpt-platform"
VERSION="latest"
PROFILE="local"
ACTION=""
# Functions
log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}
info() {
echo -e "${BLUE}[INFO]${NC} $1" | tee -a "$LOG_FILE"
}
success() {
echo -e "${GREEN}[SUCCESS]${NC} $1" | tee -a "$LOG_FILE"
}
warning() {
echo -e "${YELLOW}[WARNING]${NC} $1" | tee -a "$LOG_FILE"
}
error() {
echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
}
usage() {
cat << EOF
AutoGPT Platform Deployment Script
Usage: $0 [OPTIONS] ACTION
ACTIONS:
deploy Deploy the platform
start Start existing deployment
stop Stop the deployment
restart Restart the deployment
update Update to latest images
backup Create backup of data
restore Restore from backup
logs Show logs
status Show deployment status
cleanup Remove all containers and volumes
OPTIONS:
-r, --registry REGISTRY Container registry (default: ghcr.io)
-v, --version VERSION Image version/tag (default: latest)
-p, --profile PROFILE Docker compose profile (default: local)
-f, --file FILE Compose file (default: docker-compose.published.yml)
-e, --env FILE Environment file (default: .env)
-h, --help Show this help message
EXAMPLES:
$0 deploy # Deploy with defaults
$0 -v v1.0.0 deploy # Deploy specific version
$0 -r docker.io update # Update from Docker Hub
$0 -p production deploy # Deploy for production
EOF
}
check_dependencies() {
info "Checking dependencies..."
if ! command -v docker &> /dev/null; then
error "Docker is not installed. Please install Docker first."
exit 1
fi
if ! command -v docker-compose &> /dev/null && ! docker compose version &> /dev/null; then
error "Docker Compose is not installed. Please install Docker Compose first."
exit 1
fi
# Check if Docker daemon is running
if ! docker info &> /dev/null; then
error "Docker daemon is not running. Please start Docker first."
exit 1
fi
success "All dependencies are available"
}
setup_environment() {
info "Setting up environment..."
# Create necessary directories
mkdir -p "$BACKUP_DIR"
mkdir -p "data/postgres"
mkdir -p "data/redis"
mkdir -p "data/rabbitmq"
mkdir -p "data/backend"
# Create environment file if it doesn't exist
if [[ ! -f "$ENV_FILE" ]]; then
info "Creating default environment file..."
cat > "$ENV_FILE" << EOF
# AutoGPT Platform Configuration
POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
REDIS_PASSWORD=your-redis-password
RABBITMQ_PASSWORD=your-rabbitmq-password
JWT_SECRET=your-long-random-jwt-secret-with-at-least-32-characters
# Registry Configuration
REGISTRY=${REGISTRY}
IMAGE_PREFIX=${IMAGE_PREFIX}
VERSION=${VERSION}
# Network Configuration
BACKEND_PORT=8006
FRONTEND_PORT=3000
POSTGRES_PORT=5432
REDIS_PORT=6379
RABBITMQ_PORT=5672
RABBITMQ_MANAGEMENT_PORT=15672
# Development
PROFILE=${PROFILE}
EOF
warning "Created default $ENV_FILE - please review and update passwords!"
fi
success "Environment setup complete"
}
check_ports() {
info "Checking if required ports are available..."
local ports=(3000 8000 8001 8002 8003 8005 8006 8007 5432 6379 5672 15672)
local used_ports=()
for port in "${ports[@]}"; do
if ss -tuln | grep -q ":$port "; then
used_ports+=("$port")
fi
done
if [[ ${#used_ports[@]} -gt 0 ]]; then
warning "The following ports are already in use: ${used_ports[*]}"
warning "This may cause conflicts. Please stop services using these ports or modify the configuration."
else
success "All required ports are available"
fi
}
pull_images() {
info "Pulling container images..."
local images=(
"$REGISTRY/$IMAGE_PREFIX-backend:$VERSION"
"$REGISTRY/$IMAGE_PREFIX-frontend:$VERSION"
)
for image in "${images[@]}"; do
info "Pulling $image..."
if docker pull "$image"; then
success "Pulled $image"
else
error "Failed to pull $image"
exit 1
fi
done
}
deploy() {
info "Deploying AutoGPT Platform..."
check_dependencies
setup_environment
check_ports
pull_images
# Update compose file with current settings
export REGISTRY="$REGISTRY"
export IMAGE_PREFIX="$IMAGE_PREFIX"
export VERSION="$VERSION"
info "Starting services..."
if docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" up -d; then
success "AutoGPT Platform deployed successfully!"
info "Waiting for services to be ready..."
sleep 10
show_status
info "Access the platform at:"
info " Frontend: http://localhost:3000"
info " Backend API: http://localhost:8006"
info " Database Admin: http://localhost:8910 (if using local profile)"
info " RabbitMQ Management: http://localhost:15672"
else
error "Deployment failed. Check logs with: $0 logs"
exit 1
fi
}
start_services() {
info "Starting AutoGPT Platform services..."
if docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" start; then
success "Services started successfully"
show_status
else
error "Failed to start services"
exit 1
fi
}
stop_services() {
info "Stopping AutoGPT Platform services..."
if docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" stop; then
success "Services stopped successfully"
else
error "Failed to stop services"
exit 1
fi
}
restart_services() {
info "Restarting AutoGPT Platform services..."
stop_services
start_services
}
update_services() {
info "Updating AutoGPT Platform to version $VERSION..."
# Pull new images
pull_images
# Recreate containers with new images
info "Recreating containers with new images..."
if docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" up -d --force-recreate; then
success "Update completed successfully"
show_status
else
error "Update failed"
exit 1
fi
}
backup_data() {
local backup_name="autogpt-backup-$(date +%Y%m%d-%H%M%S)"
local backup_path="$BACKUP_DIR/$backup_name"
info "Creating backup: $backup_name..."
mkdir -p "$backup_path"
    # Backup database while it is still running (pg_dump needs a live server)
    info "Backing up database..."
    docker compose -f "$COMPOSE_FILE" exec -T db pg_dump -U postgres postgres > "$backup_path/database.sql"
    # Stop services for a consistent volume backup
    info "Stopping services for backup..."
    docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" stop
# Backup volumes
info "Backing up data volumes..."
cp -r data "$backup_path/"
# Backup configuration
cp "$ENV_FILE" "$backup_path/"
cp "$COMPOSE_FILE" "$backup_path/"
# Restart services
info "Restarting services..."
docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" start
success "Backup created: $backup_path"
}
restore_data() {
if [[ $# -lt 1 ]]; then
error "Please specify backup directory to restore from"
error "Usage: $0 restore <backup-directory>"
exit 1
fi
local backup_path="$1"
if [[ ! -d "$backup_path" ]]; then
error "Backup directory not found: $backup_path"
exit 1
fi
warning "This will overwrite current data. Are you sure? (y/N)"
read -r response
if [[ ! "$response" =~ ^[Yy]$ ]]; then
info "Restore cancelled"
exit 0
fi
info "Restoring from backup: $backup_path..."
# Stop services
stop_services
# Restore data
info "Restoring data volumes..."
rm -rf data
cp -r "$backup_path/data" .
# Restore configuration
if [[ -f "$backup_path/$ENV_FILE" ]]; then
cp "$backup_path/$ENV_FILE" .
info "Restored environment configuration"
fi
# Start services
start_services
# Restore database
if [[ -f "$backup_path/database.sql" ]]; then
info "Restoring database..."
docker compose -f "$COMPOSE_FILE" exec -T db psql -U postgres postgres < "$backup_path/database.sql"
fi
success "Restore completed successfully"
}
show_logs() {
info "Showing logs (press Ctrl+C to exit)..."
docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" logs -f
}
show_status() {
info "AutoGPT Platform Status:"
docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" ps
echo
info "Service Health:"
# Check service health
local services=("frontend:3000" "rest_server:8006" "db:5432" "redis:6379")
for service in "${services[@]}"; do
local name="${service%:*}"
local port="${service#*:}"
if docker compose -f "$COMPOSE_FILE" ps "$name" | grep -q "Up"; then
if nc -z localhost "$port" 2>/dev/null; then
echo -e " ${GREEN}${NC} $name (port $port)"
else
echo -e " ${YELLOW}${NC} $name (container up, port not accessible)"
fi
else
echo -e " ${RED}${NC} $name (container down)"
fi
done
}
cleanup() {
warning "This will remove all containers and volumes. Are you sure? (y/N)"
read -r response
if [[ ! "$response" =~ ^[Yy]$ ]]; then
info "Cleanup cancelled"
exit 0
fi
info "Cleaning up AutoGPT Platform..."
# Stop and remove containers
docker compose -f "$COMPOSE_FILE" --profile "$PROFILE" down -v --remove-orphans
# Remove images
docker images | grep "$IMAGE_PREFIX" | awk '{print $3}' | xargs -r docker rmi
# Remove data directories
rm -rf data
success "Cleanup completed"
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-r|--registry)
REGISTRY="$2"
shift 2
;;
-v|--version)
VERSION="$2"
shift 2
;;
-p|--profile)
PROFILE="$2"
shift 2
;;
-f|--file)
COMPOSE_FILE="$2"
shift 2
;;
-e|--env)
ENV_FILE="$2"
shift 2
;;
-h|--help)
usage
exit 0
;;
deploy|start|stop|restart|update|backup|restore|logs|status|cleanup)
ACTION="$1"
shift
break
;;
*)
error "Unknown option: $1"
usage
exit 1
;;
esac
done
# Check if action is provided
if [[ -z "$ACTION" ]]; then
error "No action specified"
usage
exit 1
fi
# Execute action
case "$ACTION" in
deploy)
deploy
;;
start)
start_services
;;
stop)
stop_services
;;
restart)
restart_services
;;
update)
update_services
;;
backup)
backup_data
;;
restore)
restore_data "$@"
;;
logs)
show_logs
;;
status)
show_status
;;
cleanup)
cleanup
;;
*)
error "Unknown action: $ACTION"
usage
exit 1
;;
esac


@@ -0,0 +1,514 @@
# AutoGPT Platform - Published Container Deployment
# This compose file uses pre-built containers from GitHub Container Registry
# Use this for production deployments or when you don't want to build from source
networks:
app-network:
name: app-network
shared-network:
name: shared-network
volumes:
supabase-config:
clamav-data:
postgres-data:
redis-data:
rabbitmq-data:
x-agpt-services:
&agpt-services
networks:
- app-network
- shared-network
x-supabase-services:
&supabase-services
networks:
- app-network
- shared-network
services:
  # Database migration service
  migrate:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["sh", "-c", "poetry run prisma migrate deploy"]
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    restart: on-failure
    healthcheck:
      test: ["CMD-SHELL", "poetry run prisma migrate status | grep -q 'No pending migrations' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 5s
  # Redis cache service
  redis:
    <<: *agpt-services
    image: redis:latest
    command: redis-server --requirepass password
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
  # RabbitMQ message broker
  rabbitmq:
    <<: *agpt-services
    image: rabbitmq:management
    container_name: rabbitmq
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: autogpt
      RABBITMQ_DEFAULT_PASS: autogpt_password
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
  # Backend API server
  rest_server:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.rest"]
    depends_on:
      redis:
        condition: service_healthy
      db:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
      rabbitmq:
        condition: service_healthy
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8006:8006"
  # Backend executor service
  executor:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.exec"]
    depends_on:
      redis:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
      db:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
      database_manager:
        condition: service_started
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8002:8002"
  # Backend WebSocket server
  websocket_server:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.ws"]
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
      database_manager:
        condition: service_started
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8001:8001"
  # Backend database manager
  database_manager:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.db"]
    depends_on:
      db:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8005:8005"
  # Backend scheduler service
  scheduler_server:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.scheduler"]
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
      database_manager:
        condition: service_started
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8003:8003"
  # Backend notification service
  notification_server:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
    command: ["python", "-m", "backend.notification"]
    depends_on:
      db:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
      database_manager:
        condition: service_started
    env_file:
      - backend/.env.default
      - path: backend/.env
        required: false
    environment:
      PYRO_HOST: "0.0.0.0"
      AGENTSERVER_HOST: rest_server
      SCHEDULER_HOST: scheduler_server
      DATABASEMANAGER_HOST: database_manager
      EXECUTIONMANAGER_HOST: executor
      NOTIFICATIONMANAGER_HOST: notification_server
      CLAMAV_SERVICE_HOST: clamav
      DB_HOST: db
      REDIS_HOST: redis
      RABBITMQ_HOST: rabbitmq
      SUPABASE_URL: http://kong:8000
      DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
      DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
    ports:
      - "8007:8007"
  # ClamAV antivirus service
  clamav:
    <<: *agpt-services
    image: clamav/clamav-debian:latest
    ports:
      - "3310:3310"
    volumes:
      - clamav-data:/var/lib/clamav
    environment:
      - CLAMAV_NO_FRESHCLAMD=false
      - CLAMD_CONF_StreamMaxLength=50M
      - CLAMD_CONF_MaxFileSize=100M
      - CLAMD_CONF_MaxScanSize=100M
      - CLAMD_CONF_MaxThreads=12
      - CLAMD_CONF_ReadTimeout=300
    healthcheck:
      test: ["CMD-SHELL", "clamdscan --version || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
  # Frontend application
  frontend:
    <<: *agpt-services
    image: ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
    depends_on:
      db:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully
    ports:
      - "3000:3000"
    env_file:
      - path: ./frontend/.env.default
      - path: ./frontend/.env
        required: false
    environment:
      # Server-side environment variables (Docker service names)
      AUTH_CALLBACK_URL: http://rest_server:8006/auth/callback
      SUPABASE_URL: http://kong:8000
      AGPT_SERVER_URL: http://rest_server:8006/api
      AGPT_WS_SERVER_URL: ws://websocket_server:8001/ws
  # Supabase services (minimal: auth + db + kong)
  kong:
    <<: *supabase-services
    image: supabase/kong:v0.1.0
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /etc/kong/kong.yml
      KONG_DNS_ORDER: LAST,A,CNAME
      KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
    ports:
      - "8000:8000/tcp"
    volumes:
      - ./db/docker/volumes/api/kong.yml:/etc/kong/kong.yml:ro
  auth:
    <<: *supabase-services
    image: supabase/gotrue:v2.151.0
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9999/health"]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: 9999
      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgresql://supabase_auth_admin:root@db:5432/postgres?search_path=auth
      GOTRUE_SITE_URL: http://localhost:3000
      GOTRUE_URI_ALLOW_LIST: "*"
      GOTRUE_DISABLE_SIGNUP: false
      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_EXP: 3600
      GOTRUE_JWT_SECRET: super-secret-jwt-token-with-at-least-32-characters-long
      GOTRUE_EXTERNAL_EMAIL_ENABLED: true
      GOTRUE_MAILER_AUTOCONFIRM: true
      GOTRUE_SMTP_ADMIN_EMAIL: admin@email.com
      GOTRUE_SMTP_HOST: supabase-mail
      GOTRUE_SMTP_PORT: 2500
      GOTRUE_SMTP_USER: fake_mail_user
      GOTRUE_SMTP_PASS: fake_mail_password
      GOTRUE_SMTP_SENDER_NAME: fake_sender
      GOTRUE_MAILER_URLPATHS_INVITE: http://localhost:3000/auth/callback
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: http://localhost:3000/auth/callback
      GOTRUE_MAILER_URLPATHS_RECOVERY: http://localhost:3000/auth/callback
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: http://localhost:3000/auth/callback
  db:
    <<: *supabase-services
    image: supabase/postgres:15.1.0.147
    healthcheck:
      test: pg_isready -U postgres -h localhost
      interval: 5s
      timeout: 5s
      retries: 10
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/postgresql.conf
      - -c
      - log_min_messages=fatal
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_HOST: /var/run/postgresql
      PGPORT: 5432
      POSTGRES_PORT: 5432
      PGPASSWORD: your-super-secret-and-long-postgres-password
      POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
      PGDATABASE: postgres
      POSTGRES_DB: postgres
      PGUSER: postgres
      POSTGRES_USER: postgres
      POSTGRES_INITDB_ARGS: --lc-collate=C --lc-ctype=C
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./db/docker/volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
      - ./db/docker/volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
      - ./db/docker/volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
  # Development-only services (postgres-meta + studio for database management)
  meta:
    <<: *supabase-services
    profiles:
      - local
    # NOTE: this service must run postgres-meta (not Studio itself); Studio
    # below talks to it on port 8080 via STUDIO_PG_META_URL.
    image: supabase/postgres-meta:v0.80.0
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:8080/health', (r) => {if (r.statusCode !== 200) throw new Error(r.statusCode)})"]
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PG_META_PORT: 8080
      PG_META_DB_HOST: db
      PG_META_DB_PORT: 5432
      PG_META_DB_NAME: postgres
      PG_META_DB_USER: postgres
      PG_META_DB_PASSWORD: your-super-secret-and-long-postgres-password
  studio:
    <<: *supabase-services
    profiles:
      - local
    image: supabase/studio:20240101-5cc8dea
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/api/profile', (r) => {if (r.statusCode !== 200) throw new Error(r.statusCode)})"]
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      meta:
        condition: service_healthy
    restart: unless-stopped
    ports:
      - "8910:3000/tcp"
    environment:
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
      DEFAULT_ORGANIZATION_NAME: Default Organization
      DEFAULT_PROJECT_NAME: Default Project
      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: http://localhost:8000
      SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0
      SUPABASE_SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU
  # Helper services for development
  deps:
    <<: *supabase-services
    profiles:
      - local
    image: busybox
    command: /bin/true
    depends_on:
      - kong
      - auth
      - db
      - studio
      - redis
      - rabbitmq
      - clamav
      - migrate
  deps_backend:
    <<: *agpt-services
    profiles:
      - local
    image: busybox
    command: /bin/true
    depends_on:
      - deps
      - rest_server
      - executor
      - websocket_server
      - database_manager


@@ -0,0 +1,137 @@
{
"name": "AutoGPT Platform",
"version": "1.0.0",
"slug": "autogpt-platform",
"description": "AutoGPT Platform for creating and managing AI agents with Home Assistant integration",
"url": "https://github.com/Significant-Gravitas/AutoGPT",
"codenotary": "notary@home-assistant.io",
"arch": [
"aarch64",
"amd64",
"armhf",
"armv7",
"i386"
],
"startup": "services",
"init": false,
"privileged": [
"SYS_ADMIN"
],
"hassio_api": true,
"hassio_role": "default",
"homeassistant_api": true,
"host_network": false,
"host_pid": false,
"host_ipc": false,
"auto_uart": false,
"devices": [],
"udev": false,
"tmpfs": false,
"environment": {
"LOG_LEVEL": "info"
},
"map": [
"config:rw",
"ssl:ro",
"addons_config:rw"
],
"ports": {
"3000/tcp": 3000,
"8000/tcp": 8000
},
"ports_description": {
"3000/tcp": "AutoGPT Platform Frontend",
"8000/tcp": "AutoGPT Platform Backend API"
},
"webui": "http://[HOST]:[PORT:3000]",
"ingress": true,
"ingress_port": 3000,
"ingress_entry": "/",
"panel_icon": "mdi:robot",
"panel_title": "AutoGPT Platform",
"panel_admin": true,
"audio": false,
"video": false,
"gpio": false,
"usb": false,
"uart": false,
"kernel_modules": false,
"devicetree": false,
"docker_api": false,
"full_access": false,
"apparmor": true,
"auth_api": true,
"snapshot_exclude": [
"tmp/**",
"logs/**"
],
"options": {
"database": {
"host": "localhost",
"port": 5432,
"username": "autogpt",
"password": "!secret autogpt_db_password",
"database": "autogpt"
},
"redis": {
"host": "localhost",
"port": 6379,
"password": "!secret autogpt_redis_password"
},
"auth": {
"jwt_secret": "!secret autogpt_jwt_secret",
"admin_email": "admin@example.com"
},
"homeassistant": {
"enabled": true,
"token": "!secret ha_long_lived_token",
"api_url": "http://supervisor/core/api"
},
"logging": {
"level": "INFO",
"file_logging": true
}
},
"schema": {
"database": {
"host": "str",
"port": "int(1,65535)",
"username": "str",
"password": "password",
"database": "str"
},
"redis": {
"host": "str",
"port": "int(1,65535)",
"password": "password?"
},
"auth": {
"jwt_secret": "password",
"admin_email": "email"
},
"homeassistant": {
"enabled": "bool",
"token": "password?",
"api_url": "url?"
},
"logging": {
"level": "list(DEBUG|INFO|WARNING|ERROR)?",
"file_logging": "bool?"
},
"advanced": {
"backend_workers": "int(1,10)?",
"max_agents": "int(1,100)?",
"resource_limits": {
"cpu": "float(0.1,4.0)?",
"memory": "str?"
}
}
},
"image": "ghcr.io/significant-gravitas/autogpt-platform-{arch}",
"services": [
"mqtt:want"
],
"discovery": [
"autogpt_platform"
]
}


@@ -0,0 +1,43 @@
<?xml version="1.0"?>
<Container version="2">
<Name>AutoGPT-Platform</Name>
<Repository>ghcr.io/significant-gravitas/autogpt-platform-frontend:latest</Repository>
<Registry>https://ghcr.io/significant-gravitas/autogpt-platform-frontend</Registry>
<Network>bridge</Network>
<MyIP/>
<Shell>bash</Shell>
<Privileged>false</Privileged>
<Support>https://github.com/Significant-Gravitas/AutoGPT/issues</Support>
<Project>https://github.com/Significant-Gravitas/AutoGPT</Project>
<Overview>AutoGPT Platform is a powerful system for creating, deploying, and managing continuous AI agents that automate complex workflows. This template sets up the complete platform including frontend, backend services, database, and message queue.&#xD;
&#xD;
This is a complete stack deployment that includes:&#xD;
- Frontend web interface&#xD;
- Backend API services&#xD;
- PostgreSQL database with pgvector&#xD;
- Redis for caching&#xD;
- RabbitMQ for task queuing&#xD;
- Supabase for authentication&#xD;
&#xD;
IMPORTANT: This template creates multiple containers. Make sure you have sufficient resources (minimum 8GB RAM, 4 CPU cores) and available disk space (minimum 20GB).</Overview>
<Category>Productivity: Tools: Other:</Category>
<WebUI>http://[IP]:[PORT:3000]</WebUI>
<TemplateURL>https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpt_platform/templates/unraid-template.xml</TemplateURL>
<Icon>https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/assets/autogpt_logo.png</Icon>
<ExtraParams>--network=autogpt</ExtraParams>
<PostArgs/>
<CPUset/>
<DateInstalled>1704067200</DateInstalled>
<DonateText>AutoGPT is an open-source project. Consider supporting development.</DonateText>
<DonateLink>https://github.com/sponsors/Significant-Gravitas</DonateLink>
<Requires>This template requires the AutoGPT Platform stack to be deployed. Please follow the setup instructions at: https://docs.agpt.co/platform/deployment/unraid/</Requires>
<Config Name="WebUI Port" Target="3000" Default="3000" Mode="tcp" Description="Port for the AutoGPT Platform web interface" Type="Port" Display="always" Required="true" Mask="false">3000</Config>
<Config Name="Backend API URL" Target="AGPT_SERVER_URL" Default="http://[IP]:8006/api" Mode="" Description="URL for the backend API server. Replace [IP] with your Unraid server IP." Type="Variable" Display="always" Required="true" Mask="false">http://[IP]:8006/api</Config>
<Config Name="Supabase URL" Target="SUPABASE_URL" Default="http://[IP]:8000" Mode="" Description="URL for Supabase authentication. Replace [IP] with your Unraid server IP." Type="Variable" Display="always" Required="true" Mask="false">http://[IP]:8000</Config>
<Config Name="WebSocket URL" Target="AGPT_WS_SERVER_URL" Default="ws://[IP]:8001/ws" Mode="" Description="WebSocket URL for real-time communication. Replace [IP] with your Unraid server IP." Type="Variable" Display="always" Required="true" Mask="false">ws://[IP]:8001/ws</Config>
<Config Name="Config Directory" Target="/app/config" Default="/mnt/user/appdata/autogpt-platform/frontend" Mode="rw" Description="Directory for frontend configuration files" Type="Path" Display="advanced" Required="true" Mask="false">/mnt/user/appdata/autogpt-platform/frontend</Config>
<Config Name="Log Level" Target="LOG_LEVEL" Default="INFO" Mode="" Description="Logging level (DEBUG, INFO, WARNING, ERROR)" Type="Variable" Display="advanced" Required="false" Mask="false">INFO</Config>
<Config Name="Node Environment" Target="NODE_ENV" Default="production" Mode="" Description="Node.js environment mode" Type="Variable" Display="advanced" Required="false" Mask="false">production</Config>
<Config Name="PUID" Target="PUID" Default="99" Mode="" Description="User ID for file permissions" Type="Variable" Display="advanced" Required="false" Mask="false">99</Config>
<Config Name="PGID" Target="PGID" Default="100" Mode="" Description="Group ID for file permissions" Type="Variable" Display="advanced" Required="false" Mask="false">100</Config>
</Container>


@@ -0,0 +1,133 @@
# AutoGPT Platform Container Deployment
This guide covers deploying AutoGPT Platform using pre-built containers from GitHub Container Registry (GHCR) or Docker Hub.
## Available Container Images
The AutoGPT Platform is published as separate containers for each component:
### GitHub Container Registry (Recommended)
- **Backend**: `ghcr.io/significant-gravitas/autogpt-platform-backend:latest`
- **Frontend**: `ghcr.io/significant-gravitas/autogpt-platform-frontend:latest`
### Docker Hub
- **Backend**: `significantgravitas/autogpt-platform-backend:latest`
- **Frontend**: `significantgravitas/autogpt-platform-frontend:latest`
## Quick Start with Docker Compose
The simplest way to deploy the platform is using the provided docker-compose file:
```bash
# Download the compose file
curl -O https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpt_platform/docker-compose.yml
# Start the platform with published containers
AUTOGPT_USE_PUBLISHED_IMAGES=true docker compose up -d
```
## Manual Container Deployment
### Prerequisites
1. **PostgreSQL Database** with pgvector extension
2. **Redis** for caching and session management
3. **RabbitMQ** for task queuing
4. **ClamAV** for file scanning (optional but recommended)
### Environment Variables
Both containers require configuration through environment variables. See the [environment configuration guide](./advanced_setup.md#environment-variables) for detailed settings.
#### Backend Container
```bash
docker run -d \
--name autogpt-backend \
-p 8000:8000 \
-e DATABASE_URL="postgresql://user:pass@db:5432/autogpt" \
-e REDIS_HOST="redis" \
-e RABBITMQ_HOST="rabbitmq" \
ghcr.io/significant-gravitas/autogpt-platform-backend:latest
```
#### Frontend Container
```bash
docker run -d \
--name autogpt-frontend \
-p 3000:3000 \
-e AGPT_SERVER_URL="http://backend:8000/api" \
-e SUPABASE_URL="http://auth:8000" \
ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
```
## Image Versions and Tags
- `latest` - Latest stable release
- `v1.0.0` - Specific version tags
- `master` - Latest development build (use with caution)
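To pin a deployment to a fixed version instead of `latest`, a compose override file can be layered on top of the main compose file (a sketch; the `v1.0.0` tag is illustrative):

```yaml
# docker-compose.override.yml -- picked up automatically by `docker compose up`
services:
  rest_server:
    image: ghcr.io/significant-gravitas/autogpt-platform-backend:v1.0.0
  frontend:
    image: ghcr.io/significant-gravitas/autogpt-platform-frontend:v1.0.0
```

Every backend service in the compose file (executor, websocket_server, scheduler_server, and so on) references the same backend image, so repeat the override for each of them or factor the tag into a YAML anchor.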
## Health Checks
The containers include health check endpoints:
- **Backend**: `GET /health` on port 8000
- **Frontend**: HTTP 200 response on port 3000
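In deploy scripts it is useful to poll these endpoints until they answer. A minimal helper (a sketch; the example URLs assume the default ports listed above, which may differ in your deployment):

```shell
# Poll a health endpoint until it returns success, or give up after a timeout.
# Usage: wait_healthy <url> [timeout_seconds]
wait_healthy() {
  url=$1; timeout=${2:-60}; elapsed=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    elapsed=$((elapsed + 2))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 2
  done
  return 0
}

# Example, after `docker compose up -d`:
# wait_healthy http://localhost:8000/health && echo "backend up"
# wait_healthy http://localhost:3000/       && echo "frontend up"
```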
## Resource Requirements
### Minimum Requirements
- **CPU**: 2 cores
- **RAM**: 4GB
- **Storage**: 10GB
### Recommended for Production
- **CPU**: 4+ cores
- **RAM**: 8GB+
- **Storage**: 50GB+ (depends on usage)
## Security Considerations
1. **Never expose internal services** (database, Redis, RabbitMQ) to the internet
2. **Use environment files** for sensitive configuration
3. **Enable TLS** for production deployments
4. **Regular updates** - monitor for security updates
5. **Network segmentation** - isolate platform from other services
## Troubleshooting
### Common Issues
1. **Container won't start**: Check logs with `docker logs <container_name>`
2. **Database connection fails**: Verify DATABASE_URL and network connectivity
3. **Frontend can't reach backend**: Check AGPT_SERVER_URL configuration
4. **Performance issues**: Monitor resource usage and scale accordingly
### Logging
Containers log to stdout/stderr by default. Configure log aggregation for production:
```bash
# View logs
docker logs autogpt-backend
docker logs autogpt-frontend
# Follow logs
docker logs -f autogpt-backend
```
## Production Deployment Checklist
- [ ] Database backup strategy in place
- [ ] Monitoring and alerting configured
- [ ] TLS certificates configured
- [ ] Environment variables secured
- [ ] Resource limits set
- [ ] Log aggregation configured
- [ ] Health checks enabled
- [ ] Update strategy defined
## Next Steps
- [Unraid Deployment Guide](./deployment/unraid.md)
- [Home Assistant Add-on](./deployment/home-assistant.md)
- [Kubernetes Deployment](./deployment/kubernetes.md)


@@ -0,0 +1,382 @@
# AutoGPT Platform Home Assistant Add-on
This guide covers integrating AutoGPT Platform with Home Assistant as an add-on.
## Overview
The AutoGPT Platform Home Assistant Add-on allows you to run AutoGPT Platform directly within your Home Assistant environment, enabling powerful automation capabilities and seamless integration with your smart home.
## Prerequisites
- **Home Assistant OS** or **Home Assistant Supervised**
- **Advanced Mode** enabled in Home Assistant
- **Minimum 4GB RAM** available
- **10GB+ free storage space**
## Installation
### Method 1: Add-on Store (Coming Soon)
1. Open Home Assistant
2. Go to **Settings** → **Add-ons**
3. Click **Add-on Store**
4. Search for "AutoGPT Platform"
5. Click **Install**
### Method 2: Manual Repository Addition
Until the add-on is available in the official store:
1. Go to **Settings** → **Add-ons**
2. Click **Add-on Store**
3. Click the three dots (⋮) → **Repositories**
4. Add repository: `https://github.com/Significant-Gravitas/AutoGPT-HomeAssistant-Addon`
5. Find "AutoGPT Platform" in the store
6. Click **Install**
## Configuration
### Basic Configuration
After installation, configure the add-on:
```yaml
# Add-on Configuration
database:
  host: "localhost"
  port: 5432
  username: "autogpt"
  password: "!secret autogpt_db_password"
  database: "autogpt"
redis:
  host: "localhost"
  port: 6379
  password: "!secret autogpt_redis_password"
auth:
  jwt_secret: "!secret autogpt_jwt_secret"
  admin_email: "admin@yourdomain.com"
network:
  backend_port: 8000
  frontend_port: 3000
# Home Assistant Integration
homeassistant:
  enabled: true
  token: "!secret ha_long_lived_token"
  api_url: "http://supervisor/core/api"
```
### Secrets Configuration
Add to your `secrets.yaml`:
```yaml
autogpt_db_password: "your_secure_database_password"
autogpt_redis_password: "your_secure_redis_password"
autogpt_jwt_secret: "your_long_random_jwt_secret"
ha_long_lived_token: "your_home_assistant_long_lived_access_token"
```
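The values themselves should be long and random. One way to generate them (a sketch; assumes `openssl` is available, as it is in the Terminal & SSH add-on):

```shell
# Three independent 64-character hex secrets
DB_PASS=$(openssl rand -hex 32)
REDIS_PASS=$(openssl rand -hex 32)
JWT_SECRET=$(openssl rand -hex 32)

# Print them in secrets.yaml format; paste the output into your secrets.yaml
printf 'autogpt_db_password: "%s"\n' "$DB_PASS"
printf 'autogpt_redis_password: "%s"\n' "$REDIS_PASS"
printf 'autogpt_jwt_secret: "%s"\n' "$JWT_SECRET"
```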
## Home Assistant Integration Features
### Available Services
The add-on exposes several services to Home Assistant:
#### autogpt_platform.create_agent
Create a new agent workflow:
```yaml
service: autogpt_platform.create_agent
data:
  name: "Temperature Monitor"
  description: "Monitor temperature and adjust HVAC"
  triggers:
    - type: "state_change"
      entity_id: "sensor.living_room_temperature"
  actions:
    - type: "service_call"
      service: "climate.set_temperature"
```
#### autogpt_platform.run_agent
Execute an agent workflow:
```yaml
service: autogpt_platform.run_agent
data:
  agent_id: "temperature_monitor_001"
  input_data:
    current_temp: "{{ states('sensor.living_room_temperature') }}"
```
#### autogpt_platform.stop_agent
Stop a running agent:
```yaml
service: autogpt_platform.stop_agent
data:
  agent_id: "temperature_monitor_001"
```
### Entity Integration
The add-on creates several entities in Home Assistant:
#### Sensors
- `sensor.autogpt_agent_count` - Number of active agents
- `sensor.autogpt_task_queue` - Tasks in queue
- `sensor.autogpt_system_status` - Overall system health
#### Binary Sensors
- `binary_sensor.autogpt_backend_online` - Backend service status
- `binary_sensor.autogpt_database_connected` - Database connection status
#### Switches
- `switch.autogpt_agent_execution` - Enable/disable agent execution
- `switch.autogpt_auto_updates` - Enable/disable automatic updates
### Automation Examples
#### Example 1: Smart Lighting Based on Occupancy
```yaml
automation:
  - alias: "AutoGPT Smart Lighting"
    trigger:
      - platform: state
        entity_id: binary_sensor.living_room_occupancy
    action:
      - service: autogpt_platform.run_agent
        data:
          agent_id: "smart_lighting_001"
          input_data:
            occupancy: "{{ trigger.to_state.state }}"
            time_of_day: "{{ now().hour }}"
            current_brightness: "{{ state_attr('light.living_room', 'brightness') }}"
```
#### Example 2: Energy Management
```yaml
automation:
  - alias: "AutoGPT Energy Optimization"
    trigger:
      - platform: time_pattern
        minutes: "/15"  # Every 15 minutes
    action:
      - service: autogpt_platform.run_agent
        data:
          agent_id: "energy_optimizer_001"
          input_data:
            current_usage: "{{ states('sensor.home_energy_usage') }}"
            solar_production: "{{ states('sensor.solar_power') }}"
            electricity_price: "{{ states('sensor.electricity_price') }}"
```
## Advanced Configuration
### Custom Blocks for Home Assistant
You can create custom blocks that interact with Home Assistant:
```python
# Example: Home Assistant Service Call Block
# (illustrative sketch; the Block/BlockSchema base classes and Field helper
# come from the platform's block SDK -- adjust import paths to your install)
class HomeAssistantServiceBlock(Block):
    class Input(BlockSchema):
        service: str = Field(description="Service to call (e.g., 'light.turn_on')")
        entity_id: str = Field(description="Target entity ID")
        service_data: dict = Field(description="Service data", default={})

    class Output(BlockSchema):
        success: bool = Field(description="Service call success")
        response: dict = Field(description="Service response data")

    def __init__(self):
        super().__init__(
            id="ha_service_call",
            description="Call a Home Assistant service",
            input_schema=self.Input,
            output_schema=self.Output,
        )

    async def run(self, input_data: Input) -> BlockOutput:
        # Implementation calls the Home Assistant API
        ...
```
### Resource Limits
Configure resource limits in the add-on:
```yaml
resources:
  cpu: "2"
  memory: "4Gi"
limits:
  max_agents: 50
  max_executions_per_hour: 1000
```
## Networking
### Internal Access
- **Backend API**: `http://127.0.0.1:8000`
- **Frontend**: `http://127.0.0.1:3000`
- **WebSocket**: `ws://127.0.0.1:8001`
### External Access
If you need external access, configure the add-on to use host networking:
```yaml
network:
  external_access: true
  host_networking: false  # Use bridge mode
```
Then access via:
- **Frontend**: `http://[HA_IP]:3000`
- **Backend API**: `http://[HA_IP]:8000`
## Backup and Restore
### Automatic Backups
The add-on integrates with Home Assistant's backup system:
1. Go to **Settings** → **System** → **Backups**
2. Click **Create Backup**
3. Select **AutoGPT Platform** in partial backup options
### Manual Backup
Create manual backups of important data:
```bash
# From Home Assistant Terminal add-on
ha addons backup autogpt-platform
```
### Restore Process
1. **Stop the add-on**: Settings → Add-ons → AutoGPT Platform → Stop
2. **Restore backup**: System → Backups → Select backup → Restore
3. **Start the add-on**: Settings → Add-ons → AutoGPT Platform → Start
## Monitoring and Logs
### Log Access
View logs through Home Assistant:
1. Go to **Settings** → **Add-ons**
2. Select **AutoGPT Platform**
3. Click **Logs** tab
### Log Levels
Configure logging in the add-on:
```yaml
logging:
  level: "INFO"  # DEBUG, INFO, WARNING, ERROR
  file_logging: true
  max_log_size: "100MB"
```
### Performance Monitoring
Monitor add-on performance:
```yaml
# Lovelace dashboard card
type: entities
title: AutoGPT Platform Status
entities:
  - sensor.autogpt_agent_count
  - sensor.autogpt_task_queue
  - sensor.autogpt_system_status
  - binary_sensor.autogpt_backend_online
  - switch.autogpt_agent_execution
```
## Troubleshooting
### Common Issues
1. **Add-on won't start**
- Check available memory (minimum 4GB required)
- Verify configuration syntax
- Check logs for specific errors
2. **Home Assistant integration not working**
- Verify long-lived access token
- Check Home Assistant API permissions
- Ensure WebSocket connection is established
3. **Agents not executing**
- Check `switch.autogpt_agent_execution` is enabled
- Verify database connectivity
- Check agent configuration and triggers
### Debug Mode
Enable debug mode for detailed logging:
```yaml
debug: true
logging:
  level: "DEBUG"
```
### Performance Issues
If experiencing performance issues:
1. **Increase memory allocation**:
```yaml
resources:
  memory: "6Gi"  # Increase from default 4Gi
```
2. **Limit concurrent executions**:
```yaml
limits:
  max_concurrent_agents: 10
```
3. **Optimize database queries**:
```yaml
database:
  connection_pool_size: 20
  max_connections: 100
```
## Security Considerations
1. **Use strong secrets** - Generate random passwords and JWT secrets
2. **Limit network access** - Use internal networking when possible
3. **Regular updates** - Keep the add-on updated
4. **Backup encryption** - Enable backup encryption in Home Assistant
5. **Review agent permissions** - Audit what services agents can access
## Updates
The add-on supports automatic updates through Home Assistant:
1. **Enable auto-updates**: Add-on settings → Auto-update
2. **Manual updates**: Add-on store → AutoGPT Platform → Update
## Support and Community
- **Home Assistant Community**: [AutoGPT Platform Discussion](https://community.home-assistant.io)
- **GitHub Issues**: [AutoGPT Repository](https://github.com/Significant-Gravitas/AutoGPT/issues)
- **Add-on Repository**: [AutoGPT HomeAssistant Add-on](https://github.com/Significant-Gravitas/AutoGPT-HomeAssistant-Addon)
## Examples and Templates
Check the [examples directory](https://github.com/Significant-Gravitas/AutoGPT-HomeAssistant-Addon/tree/main/examples) for:
- Sample automation configurations
- Custom block examples
- Integration templates
- Best practices guides


@@ -0,0 +1,690 @@
# AutoGPT Platform Kubernetes Deployment
This guide covers deploying AutoGPT Platform on Kubernetes clusters.
## Prerequisites
- **Kubernetes 1.20+** cluster
- **kubectl** configured
- **Helm 3.x** (optional, for easier management)
- **Persistent Volume** support
- **Ingress Controller** (for external access)
## Quick Deploy with Helm
### Add Helm Repository
```bash
helm repo add autogpt https://helm.significant-gravitas.org/autogpt-platform
helm repo update
```
### Install with Default Configuration
```bash
helm install autogpt-platform autogpt/autogpt-platform \
--namespace autogpt \
--create-namespace
```
### Custom Configuration
```bash
# Create values.yaml
cat > values.yaml << EOF
backend:
image:
repository: ghcr.io/significant-gravitas/autogpt-platform-backend
tag: latest
replicas: 2
resources:
requests:
memory: "2Gi"
cpu: "1"
limits:
memory: "4Gi"
cpu: "2"
frontend:
image:
repository: ghcr.io/significant-gravitas/autogpt-platform-frontend
tag: latest
replicas: 2
database:
enabled: true
persistence:
size: 20Gi
redis:
enabled: true
persistence:
size: 5Gi
ingress:
enabled: true
hostname: autogpt.yourdomain.com
tls: true
EOF
helm install autogpt-platform autogpt/autogpt-platform \
--namespace autogpt \
--create-namespace \
--values values.yaml
```
## Manual Kubernetes Deployment
### Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: autogpt
```
### ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: autogpt-config
namespace: autogpt
data:
  DATABASE_URL: "postgresql://autogpt:password@postgres:5432/autogpt" # for production, source the password from the Secret instead of hardcoding it
REDIS_HOST: "redis"
RABBITMQ_HOST: "rabbitmq"
AGPT_SERVER_URL: "http://backend:8000/api"
```
### Secrets
```yaml
apiVersion: v1
kind: Secret
metadata:
name: autogpt-secrets
namespace: autogpt
type: Opaque
data:
database-password: cGFzc3dvcmQ= # base64 encoded
redis-password: cGFzc3dvcmQ=
  jwt-secret: eW91ci1qd3Qtc2VjcmV0
  rabbitmq-password: cGFzc3dvcmQ=
```
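The values under `data` are base64-encoded, not encrypted. To produce them, use `base64` with `echo -n` so the trailing newline is not encoded:

```shell
# Encode a secret value for the manifest above
echo -n 'password' | base64         # prints: cGFzc3dvcmQ=
# Decode to verify a value
echo 'cGFzc3dvcmQ=' | base64 -d     # prints: password
```

Alternatively, `kubectl create secret generic autogpt-secrets --from-literal=database-password=...` performs the encoding for you.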
### PostgreSQL Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: autogpt
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:15
env:
- name: POSTGRES_DB
value: autogpt
- name: POSTGRES_USER
value: autogpt
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: autogpt-secrets
key: database-password
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres
namespace: autogpt
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: autogpt
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
```
### Redis Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: autogpt
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
args: ["redis-server", "--requirepass", "$(REDIS_PASSWORD)"]
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: autogpt-secrets
key: redis-password
ports:
- containerPort: 6379
volumeMounts:
- name: redis-storage
mountPath: /data
volumes:
- name: redis-storage
persistentVolumeClaim:
claimName: redis-pvc
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: autogpt
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-pvc
namespace: autogpt
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
```
### RabbitMQ Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq
namespace: autogpt
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
containers:
- name: rabbitmq
image: rabbitmq:3-management
env:
- name: RABBITMQ_DEFAULT_USER
value: autogpt
- name: RABBITMQ_DEFAULT_PASS
valueFrom:
secretKeyRef:
name: autogpt-secrets
key: rabbitmq-password
ports:
- containerPort: 5672
- containerPort: 15672
volumeMounts:
- name: rabbitmq-storage
mountPath: /var/lib/rabbitmq
volumes:
- name: rabbitmq-storage
persistentVolumeClaim:
claimName: rabbitmq-pvc
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
namespace: autogpt
spec:
selector:
app: rabbitmq
ports:
- name: amqp
port: 5672
targetPort: 5672
- name: management
port: 15672
targetPort: 15672
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rabbitmq-pvc
namespace: autogpt
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
```
### Backend Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: autogpt
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
envFrom:
- configMapRef:
name: autogpt-config
env:
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: autogpt-secrets
key: jwt-secret
ports:
- containerPort: 8000
resources:
requests:
memory: "2Gi"
cpu: "1"
limits:
memory: "4Gi"
cpu: "2"
livenessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 10
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: autogpt
  labels:
    app: backend # matched by the ServiceMonitor selector in the Monitoring section
spec:
selector:
app: backend
ports:
- port: 8000
targetPort: 8000
```
### Frontend Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
namespace: autogpt
spec:
replicas: 2
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
envFrom:
- configMapRef:
name: autogpt-config
ports:
- containerPort: 3000
resources:
requests:
memory: "512Mi"
cpu: "0.5"
limits:
memory: "1Gi"
cpu: "1"
livenessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: autogpt
spec:
selector:
app: frontend
ports:
- port: 3000
targetPort: 3000
```
### Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: autogpt-ingress
namespace: autogpt
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- autogpt.yourdomain.com
secretName: autogpt-tls
rules:
- host: autogpt.yourdomain.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: backend
port:
number: 8000
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 3000
```
## Horizontal Pod Autoscaling
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: backend-hpa
namespace: autogpt
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: backend
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
```
## Monitoring and Observability
### Prometheus ServiceMonitor
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: autogpt-backend
namespace: autogpt
spec:
selector:
matchLabels:
app: backend
endpoints:
  - targetPort: 8000 # the backend Service port is unnamed, so reference it by number
path: /metrics
```
### Grafana Dashboard ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: autogpt-dashboard
namespace: monitoring
labels:
grafana_dashboard: "1"
data:
autogpt-platform.json: |
{
"dashboard": {
"title": "AutoGPT Platform",
"panels": [
{
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "rate(http_requests_total{job=\"backend\"}[5m])"
}
]
}
]
}
}
```
## Backup Strategy
### Database Backup CronJob
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: postgres-backup
namespace: autogpt
spec:
schedule: "0 2 * * *" # Daily at 2 AM
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:15
command:
- /bin/bash
- -c
- |
pg_dump -h postgres -U autogpt autogpt > /backup/autogpt-$(date +%Y%m%d).sql
              # Upload to S3 or other storage (note: postgres:15 does not include the AWS CLI; use an image that does, or upload in a separate step)
aws s3 cp /backup/autogpt-$(date +%Y%m%d).sql s3://your-backup-bucket/
env:
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: autogpt-secrets
key: database-password
volumeMounts:
- name: backup-storage
mountPath: /backup
volumes:
- name: backup-storage
emptyDir: {}
restartPolicy: OnFailure
```
## Security Best Practices
### Network Policies
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: autogpt-network-policy
namespace: autogpt
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
- from:
- podSelector: {}
egress:
- to:
- podSelector: {}
- to: []
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
```
### Pod Security Standards
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: autogpt
labels:
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/audit: restricted
pod-security.kubernetes.io/warn: restricted
```
## Troubleshooting
### Debug Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
name: debug
namespace: autogpt
spec:
containers:
- name: debug
image: nicolaka/netshoot
command: ["/bin/bash"]
args: ["-c", "while true; do ping backend; sleep 30; done"]
restartPolicy: Never
```
### Common Commands
```bash
# Check pod status
kubectl get pods -n autogpt
# View logs
kubectl logs -f deployment/backend -n autogpt
# Access pod shell
kubectl exec -it deployment/backend -n autogpt -- /bin/bash
# Port forward for local access
kubectl port-forward service/frontend 3000:3000 -n autogpt
# Check resource usage
kubectl top pods -n autogpt
```
## Production Checklist
- [ ] TLS certificates configured
- [ ] Resource limits set
- [ ] Persistent volumes configured
- [ ] Backup strategy implemented
- [ ] Monitoring and alerting set up
- [ ] Network policies applied
- [ ] Security contexts configured
- [ ] Horizontal autoscaling enabled
- [ ] Ingress properly configured
- [ ] Database properly secured

# AutoGPT Platform on Unraid
This guide covers deploying AutoGPT Platform on Unraid using the Community Applications plugin.
## Prerequisites
1. **Unraid 6.8+** (recommended 6.10+)
2. **Community Applications** plugin installed
3. **Minimum 4GB RAM** allocated to Docker
4. **10GB+ free disk space**
## Installation Methods
### Method 1: Community Applications (Recommended)
1. Open **Apps** tab in Unraid
2. Search for "AutoGPT Platform"
3. Click **Install** on the official template
4. Configure the parameters (see below)
5. Click **Apply**
### Method 2: Manual Docker Template
If the template isn't available yet, you can create it manually:
1. Go to **Docker** tab
2. Click **Add Container**
3. Use the configuration below
## Container Configuration
### AutoGPT Platform Backend
```yaml
Repository: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
Network Type: Custom: autogpt
WebUI: http://[IP]:[PORT:8000]
Icon URL: https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/assets/autogpt_logo.png
```
#### Port Mappings
- **Container Port**: 8000
- **Host Port**: 8000 (or your preferred port)
- **Connection Type**: TCP
#### Volume Mappings
- **Container Path**: /app/data
- **Host Path**: /mnt/user/appdata/autogpt-platform/backend
- **Access Mode**: Read/Write
#### Environment Variables
```bash
DATABASE_URL=postgresql://autogpt:password@autogpt-db:5432/autogpt
REDIS_HOST=autogpt-redis
RABBITMQ_HOST=autogpt-rabbitmq
SUPABASE_URL=http://autogpt-auth:8000
```
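`DATABASE_URL` is the standard PostgreSQL connection string assembled from the database settings above; if you change the password or container name, rebuild it accordingly (the values shown are this guide's defaults):

```shell
# Assemble DATABASE_URL from its parts (defaults used in this guide)
DB_USER=autogpt
DB_PASS=password       # replace with your actual password
DB_HOST=autogpt-db     # the PostgreSQL container name on the autogpt network
DB_NAME=autogpt
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DATABASE_URL"   # prints: postgresql://autogpt:password@autogpt-db:5432/autogpt
```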
### AutoGPT Platform Frontend
```yaml
Repository: ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
Network Type: Custom: autogpt
WebUI: http://[IP]:[PORT:3000]
```
#### Port Mappings
- **Container Port**: 3000
- **Host Port**: 3000 (or your preferred port)
- **Connection Type**: TCP
#### Environment Variables
```bash
AGPT_SERVER_URL=http://[UNRAID_IP]:8000/api
SUPABASE_URL=http://[UNRAID_IP]:8001
```
## Required Dependencies
You'll also need these containers for a complete setup:
### PostgreSQL Database
```yaml
Repository: postgres:15
Network Type: Custom: autogpt
Environment Variables:
POSTGRES_DB=autogpt
POSTGRES_USER=autogpt
POSTGRES_PASSWORD=your_secure_password
Volume Mappings:
/var/lib/postgresql/data -> /mnt/user/appdata/autogpt-platform/postgres
```
### Redis
```yaml
Repository: redis:7-alpine
Network Type: Custom: autogpt
Command: redis-server --requirepass your_redis_password
Volume Mappings:
/data -> /mnt/user/appdata/autogpt-platform/redis
```
### RabbitMQ
```yaml
Repository: rabbitmq:3-management
Network Type: Custom: autogpt
Port Mappings:
5672:5672 (AMQP)
15672:15672 (Management UI)
Environment Variables:
RABBITMQ_DEFAULT_USER=autogpt
RABBITMQ_DEFAULT_PASS=your_rabbitmq_password
Volume Mappings:
/var/lib/rabbitmq -> /mnt/user/appdata/autogpt-platform/rabbitmq
```
## Network Setup
1. **Create Custom Network**:
```bash
docker network create autogpt
```
2. **Assign all containers** to the `autogpt` network
## Docker Compose Alternative
For easier management, you can use docker-compose:
1. **Enable docker-compose plugin** in Unraid
2. **Create compose file** in `/mnt/user/appdata/autogpt-platform/docker-compose.yml`:
```yaml
version: '3.8'
networks:
autogpt:
driver: bridge
services:
postgres:
image: postgres:15
environment:
POSTGRES_DB: autogpt
POSTGRES_USER: autogpt
POSTGRES_PASSWORD: your_secure_password
volumes:
- /mnt/user/appdata/autogpt-platform/postgres:/var/lib/postgresql/data
networks:
- autogpt
redis:
image: redis:7-alpine
command: redis-server --requirepass your_redis_password
volumes:
- /mnt/user/appdata/autogpt-platform/redis:/data
networks:
- autogpt
rabbitmq:
image: rabbitmq:3-management
environment:
RABBITMQ_DEFAULT_USER: autogpt
RABBITMQ_DEFAULT_PASS: your_rabbitmq_password
volumes:
- /mnt/user/appdata/autogpt-platform/rabbitmq:/var/lib/rabbitmq
ports:
- "15672:15672"
networks:
- autogpt
backend:
image: ghcr.io/significant-gravitas/autogpt-platform-backend:latest
environment:
DATABASE_URL: postgresql://autogpt:your_secure_password@postgres:5432/autogpt
REDIS_HOST: redis
RABBITMQ_HOST: rabbitmq
ports:
- "8000:8000"
depends_on:
- postgres
- redis
- rabbitmq
networks:
- autogpt
frontend:
image: ghcr.io/significant-gravitas/autogpt-platform-frontend:latest
environment:
AGPT_SERVER_URL: http://[UNRAID_IP]:8000/api
ports:
- "3000:3000"
depends_on:
- backend
networks:
- autogpt
```
## Backup Strategy
### Important Data Locations
- **Database**: `/mnt/user/appdata/autogpt-platform/postgres`
- **User uploads**: `/mnt/user/appdata/autogpt-platform/backend`
- **Configuration**: Container templates
### Automated Backup Script
```bash
#!/bin/bash
# Save as /mnt/user/scripts/backup-autogpt.sh
BACKUP_DIR="/mnt/user/backups/autogpt-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
# Stop containers
docker stop autogpt-backend autogpt-frontend
# Backup database
docker exec autogpt-postgres pg_dump -U autogpt autogpt > "$BACKUP_DIR/database.sql"
# Backup appdata
cp -r /mnt/user/appdata/autogpt-platform "$BACKUP_DIR/"
# Start containers
docker start autogpt-postgres autogpt-redis autogpt-rabbitmq autogpt-backend autogpt-frontend
echo "Backup completed: $BACKUP_DIR"
```
## Troubleshooting
### Common Issues
1. **Containers won't start**
- Check Docker log: **Docker** tab → container → **Logs**
- Verify network connectivity between containers
- Ensure sufficient RAM allocated to Docker
2. **Database connection errors**
- Verify PostgreSQL container is running
- Check DATABASE_URL environment variable
- Ensure containers are on same network
3. **Frontend can't reach backend**
- Verify AGPT_SERVER_URL uses correct Unraid IP
- Check firewall settings
- Ensure backend container is accessible
### Performance Optimization
1. **Use SSD for appdata** if possible
2. **Allocate sufficient RAM** to Docker (minimum 4GB)
3. **Enable Docker logging limits** in the Docker daemon configuration (daemon.json):
```json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
```
## Monitoring
### Built-in Monitoring
- **Docker tab**: Container status and resource usage
- **RabbitMQ Management**: http://[UNRAID_IP]:15672
### Optional: Grafana Dashboard
Install Grafana from Community Applications for advanced monitoring:
1. Install **Grafana** and **Prometheus**
2. Configure scraping of container metrics
3. Import AutoGPT Platform dashboard
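For step 2, a minimal Prometheus scrape job targeting the backend's `/metrics` endpoint might look like this (the job name and target are assumptions; adjust the IP and port to your setup):

```yaml
# Illustrative prometheus.yml fragment
scrape_configs:
  - job_name: "autogpt-backend"
    metrics_path: /metrics
    static_configs:
      - targets: ["UNRAID_IP:8000"]
```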
## Security Recommendations
1. **Change default passwords** for all services
2. **Use strong passwords** (20+ characters)
3. **Limit network access** to trusted devices only
4. **Regular updates**: Check for container updates monthly
5. **Backup encryption**: Encrypt backup files
## Updates
### Manual Updates
1. **Stop containers**: Docker tab → Stop
2. **Force update**: Docker tab → Force Update
3. **Start containers**: Docker tab → Start
### Automated Updates (Optional)
Install **Watchtower** from Community Applications for automatic updates:
```yaml
# Add to docker-compose.yml
watchtower:
image: containrrr/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30 --cleanup # --interval is in seconds; consider 3600+ for less aggressive polling
```
## Support
- **Unraid Forums**: [AutoGPT Platform Support](https://forums.unraid.net)
- **GitHub Issues**: [AutoGPT Repository](https://github.com/Significant-Gravitas/AutoGPT/issues)
- **Discord**: [AutoGPT Community](https://discord.gg/autogpt)