Compare commits

..

6 Commits

Author SHA1 Message Date
Zamil Majdy
5899897d8f refactor: simplify LaunchDarkly context and remove redundant code 2025-08-13 08:29:14 +00:00
Nicholas Tindle
0389c865aa feat(backend): in-progress LaunchDarkly role lookup 2025-08-11 21:01:52 -05:00
Zamil Majdy
b88576d313 refactor(feature-flag): eliminate code duplication in role handling
Extract role and segment logic into reusable _add_role_to_attributes helper function.
This removes the duplicate code between use_user_id_only and full context paths.

- Add _add_role_to_attributes() helper function
- Remove duplicated role/segment logic in is_feature_enabled()
- Cleaner, more maintainable code with single source of truth
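A hedged sketch of what such a helper could look like (the signature and attribute layout are assumptions, not the committed code):

```python
def _add_role_to_attributes(attributes: dict, role: str | None) -> dict:
    """Illustrative: merge a user's role into the LaunchDarkly context attributes."""
    if not role:
        return attributes
    custom = attributes.setdefault("custom", {})
    if isinstance(custom, dict):
        custom["role"] = role  # the role doubles as a targeting segment
    return attributes
```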

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 12:29:10 +07:00
Zamil Majdy
14634a6ce9 feat(platform): implement application-layer User model with proper type safety
## Core Changes

### Application Layer Separation
- Create application-layer User model with snake_case convention (created_at, email_verified, etc.)
- Add validation to prevent Prisma objects crossing service boundaries
- Replace all hasattr/getattr defensive coding with proper typing
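A minimal sketch of what such a model can look like (field names beyond those mentioned above and the Prisma attribute names are assumptions; the real model lives in `backend/data/model.py`):

```python
from datetime import datetime
from typing import Optional

from pydantic import BaseModel


class User(BaseModel):
    """Application-layer user model (snake_case), decoupled from Prisma."""

    id: str
    email: str
    email_verified: bool = False
    created_at: Optional[datetime] = None
    metadata: Optional[dict] = None

    @classmethod
    def from_db(cls, prisma_user) -> "User":
        # Convert the ORM object at the service boundary so Prisma types
        # never cross into application code (attribute names illustrative).
        return cls(
            id=prisma_user.id,
            email=prisma_user.email,
            email_verified=bool(prisma_user.emailVerified),
            created_at=prisma_user.createdAt,
            metadata=prisma_user.metadata or {},
        )
```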

### HTTP Client Improvements
- Prevent retry of HTTP 4xx client errors (404, 403, 401) which are permanent failures
- Add HTTPClientError and HTTPServerError exception categorization
- Comprehensive test coverage for retry behavior

### LaunchDarkly Integration Fixes
- Fix serialization issues by using proper snake_case application models
- Update feature flag client to use typed User model instead of Any
- Clean JSON parsing with proper imports (JSONDecodeError, json_loads)

### Type Safety Improvements
- Replace Any type annotations with proper PrismaUser typing
- Use AutoTopUpConfig class directly instead of generic dict
- Remove defensive hasattr() calls with direct attribute access
- Achieve zero format errors across entire codebase

## Files Modified
- backend/data/model.py: New application User model with from_db() converter
- backend/util/service.py: HTTP retry logic + Prisma validation
- backend/data/user.py: Updated to return application models
- autogpt_libs/feature_flag/client.py: Type-safe LaunchDarkly integration
- Multiple test files: Migrated to use application User model

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 12:26:30 +07:00
Zamil Majdy
10a402a766 fix(service): prevent retry of HTTP 4xx client errors
## Problem
The service client retry logic was incorrectly retrying HTTP 4xx client errors
(404, 403, 401, etc.), which should never be retried since they represent
permanent client-side issues that won't be resolved by retrying.

## Solution
- **Added HTTPClientError and HTTPServerError exceptions** to categorize HTTP errors
- **Modified retry exclusions** to include HTTPClientError in the exclude_exceptions list
- **Enhanced error handling** to wrap 4xx errors in HTTPClientError (non-retryable) and 5xx errors in HTTPServerError (retryable)
- **Preserved mapped exceptions** - When the server returns a properly formatted RemoteCallError with a mapped exception type (ValueError, etc.), that original exception is re-raised regardless of HTTP status

## Changes Made
- **New Exception Classes**: Added HTTPClientError and HTTPServerError with status code tracking
- **Improved Error Categorization**: HTTP errors are now properly categorized by status code:
  - 4xx → HTTPClientError (excluded from retries)
  - 5xx → HTTPServerError (can be retried)
  - Mapped exceptions (ValueError, etc.) → Original exception type preserved
- **Clean Logic Flow**: Simplified exception handling logic to check for mapped exceptions first, then fall back to HTTP status categorization
- **Comprehensive Tests**: Added TestHTTPErrorRetryBehavior with coverage for various status codes and exception mapping
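A condensed sketch of that categorization (the exception names follow the commit; the helper function and its wiring into the retry logic are illustrative, not the actual `service.py` code):

```python
class HTTPClientError(Exception):
    """4xx response: permanent client-side failure, never retried."""

    def __init__(self, status_code: int, message: str = ""):
        super().__init__(message or f"HTTP {status_code}")
        self.status_code = status_code


class HTTPServerError(Exception):
    """5xx response: transient server-side failure, safe to retry."""

    def __init__(self, status_code: int, message: str = ""):
        super().__init__(message or f"HTTP {status_code}")
        self.status_code = status_code


def categorize_http_error(status_code: int, message: str = "") -> Exception:
    # 4xx -> HTTPClientError (listed in the retry wrapper's exclude_exceptions),
    # 5xx -> HTTPServerError (left retryable).
    if 400 <= status_code < 500:
        return HTTPClientError(status_code, message)
    return HTTPServerError(status_code, message)
```

With HTTPClientError in the exclude list, a 404 or 403 surfaces immediately while 5xx responses still go through the backoff loop.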

## Benefits
- **No more wasted retry attempts** on permanent client errors (404, 403, etc.)
- **Faster error handling** for client errors since they fail immediately
- **Preserved compatibility** with existing exception mapping system
- **Better resource utilization** by avoiding unnecessary retry delays
- **Cleaner logs** with fewer spurious retry warnings

## Files Modified
- `backend/util/service.py`: Core retry logic and error handling improvements
- `backend/util/service_test.py`: Comprehensive test coverage for HTTP error retry behavior

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 11:54:25 +07:00
Zamil Majdy
004011726d feat(feature-flag): add LaunchDarkly segment support with 24h caching
## Summary
- Enhanced LaunchDarkly feature flag system to support context targeting with user segments
- Added 24-hour TTL caching for user context data to improve performance
- Implemented comprehensive user segmentation logic including role-based, domain-based, and account age segments

## Key Changes Made

### LaunchDarkly Client Enhancement
- **Segment Support**: Added full user context with segments for LaunchDarkly's contextTargets and segmentMatch rules
- **24-Hour Caching**: Implemented TTL cache with 86400 seconds (24 hours) to reduce database calls
- **Clean Architecture**: Uses existing `get_user_by_id` function following credentials_store.py pattern
- **Async-Only**: Removed sync support, made all functions async for consistency

### Segmentation Logic
- **Role-based**: user, admin, system segments
- **Domain-based**: employee segment for agpt.co emails, external for others
- **Account age**: new_user (<7 days), recent_user (7-30 days), established_user (>30 days)
- **Custom metadata**: Supports additional segments from user metadata
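A rough sketch of those rules (segment names and thresholds follow the bullets above; the helper name and signature are assumptions):

```python
from datetime import datetime, timezone


def compute_segments(role: str, email: str, created_at: datetime) -> list[str]:
    """Illustrative derivation of LaunchDarkly segments from user attributes."""
    segments = [role]  # role-based: "user", "admin", "system"

    # Domain-based: employees vs. external users
    segments.append("employee" if email.endswith("@agpt.co") else "external")

    # Account age buckets (created_at assumed timezone-aware)
    age_days = (datetime.now(timezone.utc) - created_at).days
    if age_days < 7:
        segments.append("new_user")
    elif age_days <= 30:
        segments.append("recent_user")
    else:
        segments.append("established_user")

    return segments
```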

### Files Modified
- `autogpt_libs/feature_flag/client.py`: Main implementation with segment support and caching
- `autogpt_libs/utils/cache.py`: Added async TTL cache decorator with Protocol typing
- `autogpt_libs/utils/cache_test.py`: Comprehensive tests for TTL and process-level caching
- `backend/executor/database.py`: Exposed `get_user_by_id` for RPC support

## Technical Details
- Uses `@async_ttl_cache(maxsize=1000, ttl_seconds=86400)` for efficient caching
- Fallback to simple user_id context if database unavailable
- Proper exception handling without dangerous swallowing
- Strong typing with Protocol for cache functions
- Backwards compatible with `use_user_id_only` parameter
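A hedged usage sketch of that caching (the decorator name and arguments match `autogpt_libs/utils/cache.py`; the module path and the cached function body are illustrative):

```python
from autogpt_libs.utils.cache import async_ttl_cache


@async_ttl_cache(maxsize=1000, ttl_seconds=86400)  # 24-hour TTL
async def fetch_user_context(user_id: str) -> dict:
    # Illustrative body: the real code loads the user via get_user_by_id and
    # builds the full LaunchDarkly context; a minimal shape is returned here.
    return {"kind": "user", "key": user_id, "anonymous": False}
```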

## Test Plan
- [x] All cache tests pass (TTL and process-level)
- [x] Feature flag evaluation works with segments
- [x] Database RPC calls work correctly
- [x] 24-hour cache reduces database load
- [x] Fallback to simple context when database unavailable

## Impact
- **Better targeting**: LaunchDarkly rules with segments now work correctly
- **Performance**: 24-hour cache dramatically reduces database calls
- **Maintainability**: Clean separation using existing database functions
- **Reliability**: Proper error handling and fallbacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 11:27:38 +07:00
519 changed files with 27484 additions and 10412 deletions

View File

@@ -15,7 +15,6 @@
!autogpt_platform/backend/pyproject.toml
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
# Platform - Market
!autogpt_platform/market/market/
@@ -28,7 +27,6 @@
# Platform - Frontend
!autogpt_platform/frontend/src/
!autogpt_platform/frontend/public/
!autogpt_platform/frontend/scripts/
!autogpt_platform/frontend/package.json
!autogpt_platform/frontend/pnpm-lock.yaml
!autogpt_platform/frontend/tsconfig.json
@@ -36,7 +34,6 @@
## config
!autogpt_platform/frontend/*.config.*
!autogpt_platform/frontend/.env.*
!autogpt_platform/frontend/.env
# Classic - AutoGPT
!classic/original_autogpt/autogpt/

View File

@@ -24,8 +24,7 @@
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

View File

@@ -82,6 +82,37 @@ jobs:
- name: Run lint
run: pnpm lint
type-check:
runs-on: ubuntu-latest
needs: setup
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run tsc check
run: pnpm type-check
chromatic:
runs-on: ubuntu-latest
needs: setup
@@ -145,7 +176,11 @@ jobs:
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
cp ../.env.example ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.example ../backend/.env
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -217,6 +252,15 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.example .env
- name: Build frontend
run: pnpm build --turbo
# uses Turbopack, much faster and safe enough for a test pipeline
env:
NEXT_PUBLIC_PW_TEST: true
- name: Install Browser 'chromium'
run: pnpm playwright install --with-deps chromium

View File

@@ -1,132 +0,0 @@
name: AutoGPT Platform - Frontend CI
on:
push:
branches: [master, dev]
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
pull_request:
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
merge_group:
defaults:
run:
shell: bash
working-directory: autogpt_platform/frontend
jobs:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
types:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.default ../backend/.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.default .env
- name: Wait for services to be ready
run: |
echo "Waiting for rest_server to be ready..."
timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..."
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..."
- name: Generate API queries
run: pnpm generate:api:force
- name: Check for API schema changes
run: |
if ! git diff --exit-code src/app/api/openapi.json; then
echo "❌ API schema changes detected in src/app/api/openapi.json"
echo ""
echo "The openapi.json file has been modified after running 'pnpm generate:api-all'."
echo "This usually means changes have been made in the BE endpoints without updating the Frontend."
echo "The API schema is now out of sync with the Front-end queries."
echo ""
echo "To fix this:"
echo "1. Pull the backend 'docker compose pull && docker compose up -d --build --force-recreate'"
echo "2. Run 'pnpm generate:api' locally"
echo "3. Run 'pnpm types' locally"
echo "4. Fix any TypeScript errors that may have been introduced"
echo "5. Commit and push your changes"
echo ""
exit 1
else
echo "✅ No API schema changes detected"
fi
- name: Run Typescript checks
run: pnpm types

3
.gitignore vendored
View File

@@ -5,8 +5,6 @@ classic/original_autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
# Root .env files
/.env
azure.yaml
.vscode
.idea/*
@@ -123,6 +121,7 @@ celerybeat.pid
# Environments
.direnv/
.env
.venv
env/
venv*/

View File

@@ -235,7 +235,7 @@ repos:
hooks:
- id: tsc
name: Typecheck - AutoGPT Platform - Frontend
entry: bash -c 'cd autogpt_platform/frontend && pnpm types'
entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
files: ^autogpt_platform/frontend/
types: [file]
language: system

View File

@@ -3,16 +3,6 @@
[![Discord Follow](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fautogpt%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&label=total%20members&logo=discord&logoColor=white&color=7289da)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
<!-- Keep these links. Translations will automatically update with the README. -->
[Deutsch](https://zdoc.app/de/Significant-Gravitas/AutoGPT) |
[Español](https://zdoc.app/es/Significant-Gravitas/AutoGPT) |
[français](https://zdoc.app/fr/Significant-Gravitas/AutoGPT) |
[日本語](https://zdoc.app/ja/Significant-Gravitas/AutoGPT) |
[한국어](https://zdoc.app/ko/Significant-Gravitas/AutoGPT) |
[Português](https://zdoc.app/pt/Significant-Gravitas/AutoGPT) |
[Русский](https://zdoc.app/ru/Significant-Gravitas/AutoGPT) |
[中文](https://zdoc.app/zh/Significant-Gravitas/AutoGPT)
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options

View File

@@ -1,11 +1,9 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
@@ -13,7 +11,6 @@ AutoGPT Platform is a monorepo containing:
## Essential Commands
### Backend Development
```bash
# Install dependencies
cd backend && poetry install
@@ -44,7 +41,6 @@ poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetC
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in TESTING.md
#### Creating/Updating Snapshots
@@ -57,8 +53,8 @@ poetry run pytest path/to/test.py --snapshot-update
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Frontend Development
### Frontend Development
```bash
# Install dependencies
cd frontend && npm install
@@ -76,13 +72,12 @@ npm run storybook
npm run build
# Type checking
npm run types
npm run type-check
```
## Architecture Overview
### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
@@ -91,7 +86,6 @@ npm run types
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
@@ -99,7 +93,6 @@ npm run types
- **Feature Flags**: LaunchDarkly integration
### Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
@@ -107,16 +100,13 @@ npm run types
5. **Virus Scanning**: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema
Key models (defined in `/backend/schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
@@ -124,31 +114,13 @@ Key models (defined in `/backend/schema.prisma`):
- `StoreListing`: Marketplace listings for sharing agents
### Environment Configuration
#### Configuration Files
- **Backend**: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- **Frontend**: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- **Platform**: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)
#### Docker Environment Loading Order
1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence
#### Key Points
- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services
### Common Development Tasks
**Adding a new block:**
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
@@ -160,14 +132,12 @@ Note: when making many new blocks, analyze the interfaces for each of these blocks
ex: do the inputs and outputs tie well together?
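A rough sketch of that pattern follows (imports, constructor arguments, and the `run` signature are assumptions; copy an existing block in `/backend/backend/blocks/` for the authoritative shape):

```python
from backend.data.block import Block, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class EchoBlock(Block):
    """Hypothetical example block: echoes its input back as output."""

    class Input(BlockSchema):
        text: str = SchemaField(description="Text to echo")

    class Output(BlockSchema):
        echoed: str = SchemaField(description="The same text, echoed back")

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Echo the input text",
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "echoed", input_data.text
```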
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
@@ -176,7 +146,6 @@ ex: do the inputs and outputs tie well together?
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
@@ -185,20 +154,14 @@ ex: do the inputs and outputs tie well together?
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
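A hedged sketch of that allow-list approach (the real implementation lives in `security.py`; the class and paths below are illustrative):

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

CACHEABLE_PATHS = {"/api/health"}  # illustrative allow list


class CacheProtectionMiddleware(BaseHTTPMiddleware):
    """Default-deny caching; only allow-listed paths keep their cache headers."""

    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        if request.url.path not in CACHEABLE_PATHS:
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```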
### Creating Pull Requests
### Creating Pull Requests
- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`).
- Use conventional commit messages (see below).
- Fill out the .github/PULL_REQUEST_TEMPLATE.md template as the PR description.
- Run the GitHub pre-commit hooks to ensure code quality.
### Reviewing/Revising Pull Requests
- When the user runs /pr-comments or tries to fetch them, also run gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews to get the reviews
- Use gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews/[review_id]/comments to get the review contents
- Use gh api /repos/Significant-Gravitas/AutoGPT/issues/9924/comments to get the PR-specific comments
### Conventional Commits
Use this format for commit messages and Pull Request titles:

View File

@@ -8,6 +8,7 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
@@ -23,10 +24,10 @@ To run the AutoGPT Platform, follow these steps:
2. Run the following command:
```
cp .env.default .env
cp .env.example .env
```
This command will copy the `.env.default` file to `.env`. You can modify the `.env` file to add your own environment variables.
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
@@ -36,7 +37,44 @@ To run the AutoGPT Platform, follow these steps:
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Run the following command:
Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Generate the API client (this step is required before running the frontend):
```
pnpm generate:api-client
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands
@@ -139,21 +177,20 @@ The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api`: Runs both fetch and generate commands in sequence
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api
pnpm generate:api-all
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.

View File

@@ -0,0 +1,324 @@
import contextlib
import logging
from functools import wraps
from json import JSONDecodeError
from typing import TYPE_CHECKING, Any, Awaitable, Callable, Optional, TypeVar
if TYPE_CHECKING:
from backend.data.model import User
import ldclient
from backend.util.json import loads as json_loads
from fastapi import HTTPException
from ldclient import Context, LDClient
from ldclient.config import Config
from typing_extensions import ParamSpec
from .config import SETTINGS
logger = logging.getLogger(__name__)
P = ParamSpec("P")
T = TypeVar("T")
_is_initialized = False
def get_client() -> LDClient:
"""Get the LaunchDarkly client singleton."""
if not _is_initialized:
initialize_launchdarkly()
return ldclient.get()
def initialize_launchdarkly() -> None:
sdk_key = SETTINGS.launch_darkly_sdk_key
logger.debug(
f"Initializing LaunchDarkly with SDK key: {'present' if sdk_key else 'missing'}"
)
if not sdk_key:
logger.warning("LaunchDarkly SDK key not configured")
return
config = Config(sdk_key)
ldclient.set_config(config)
if ldclient.get().is_initialized():
global _is_initialized
_is_initialized = True
logger.info("LaunchDarkly client initialized successfully")
else:
logger.error("LaunchDarkly client failed to initialize")
def shutdown_launchdarkly() -> None:
"""Shutdown the LaunchDarkly client."""
if ldclient.get().is_initialized():
ldclient.get().close()
logger.info("LaunchDarkly client closed successfully")
def create_context(
user_id: str, additional_attributes: Optional[dict[str, Any]] = None
) -> Context:
"""Create LaunchDarkly context with optional additional attributes."""
# Use the key from attributes if provided, otherwise use user_id
context_key = user_id
if additional_attributes and "key" in additional_attributes:
context_key = additional_attributes["key"]
builder = Context.builder(str(context_key)).kind("user")
if additional_attributes:
for key, value in additional_attributes.items():
# Skip kind and key as they're already set
if key in ["kind", "key"]:
continue
elif key == "custom" and isinstance(value, dict):
# Handle custom attributes object - these go as individual attributes
for custom_key, custom_value in value.items():
builder.set(custom_key, custom_value)
else:
builder.set(key, value)
return builder.build()
async def _fetch_user_context_data(user_id: str) -> dict[str, Any]:
"""
Fetch user data and build LaunchDarkly context.
Args:
user_id: The user ID to fetch data for
Returns:
Dictionary with user context data including role
"""
# Use the unified database access approach
from backend.util.clients import get_database_manager_async_client
db_client = get_database_manager_async_client()
user = await db_client.get_user_by_id(user_id)
# Build LaunchDarkly context from user data
return _build_launchdarkly_context(user)
def _build_launchdarkly_context(user: "User") -> dict[str, Any]:
"""
Build LaunchDarkly context data matching frontend format.
Returns a context like:
{
"kind": "user",
"key": "user-id",
"email": "user@example.com", # Optional
"anonymous": false,
"custom": {
"role": "admin" # Optional
}
}
Args:
user: User object from database
Returns:
Dictionary with user context data
"""
from autogpt_libs.auth.models import DEFAULT_USER_ID
# Build basic context - always include kind, key, and anonymous
context_data: dict[str, Any] = {
"kind": "user",
"key": user.id,
"anonymous": False,
}
# Add email if present
if user.email:
context_data["email"] = user.email
# Initialize custom attributes
custom: dict[str, Any] = {}
# Determine user role from metadata
role = None
# Check if user is default/system user
if user.id == DEFAULT_USER_ID:
role = "admin" # Default user has admin privileges when auth is disabled
elif user.metadata:
# Check for role in metadata
try:
# Handle both string (direct DB) and dict (RPC) formats
if isinstance(user.metadata, str):
metadata = json_loads(user.metadata)
elif isinstance(user.metadata, dict):
metadata = user.metadata
else:
metadata = {}
# Extract role from metadata if present
if metadata.get("role"):
role = metadata["role"]
except (JSONDecodeError, TypeError) as e:
logger.debug(f"Failed to parse user metadata for context: {e}")
# Add role to custom attributes if present
if role:
custom["role"] = role
# Only add custom object if it has content
if custom:
context_data["custom"] = custom
return context_data
async def is_feature_enabled(
flag_key: str,
user_id: str,
default: bool = False,
use_user_id_only: bool = False,
additional_attributes: Optional[dict[str, Any]] = None,
user_role: Optional[str] = None,
) -> bool:
"""
Check if a feature flag is enabled for a user with full LaunchDarkly context support.
Args:
flag_key: The LaunchDarkly feature flag key
user_id: The user ID to evaluate the flag for
default: Default value if LaunchDarkly is unavailable or flag evaluation fails
use_user_id_only: If True, only use user_id without fetching database context
additional_attributes: Additional attributes to include in the context
user_role: Optional user role (e.g., "admin", "user") to add to segments
Returns:
True if feature is enabled, False otherwise
"""
try:
client = get_client()
if use_user_id_only:
# Simple context with just user ID (for backward compatibility)
attrs = additional_attributes or {}
if user_role:
# Add role to custom attributes for consistency
if "custom" not in attrs:
attrs["custom"] = {}
if isinstance(attrs["custom"], dict):
attrs["custom"]["role"] = user_role
context = create_context(str(user_id), attrs)
else:
# Full context with user segments and metadata from database
try:
user_data = await _fetch_user_context_data(user_id)
except ImportError as e:
# Database modules not available - fallback to simple context
logger.debug(f"Database modules not available: {e}")
user_data = {}
except Exception as e:
# Database error - log and fallback to simple context
logger.warning(f"Failed to fetch user context for {user_id}: {e}")
user_data = {}
# Merge additional attributes and role
attrs = additional_attributes or {}
# If user_role is provided, add it to custom attributes
if user_role:
if "custom" not in user_data:
user_data["custom"] = {}
user_data["custom"]["role"] = user_role
# Merge additional attributes with user data
# Handle custom attributes specially
if "custom" in attrs and isinstance(attrs["custom"], dict):
if "custom" not in user_data:
user_data["custom"] = {}
user_data["custom"].update(attrs["custom"])
# Remove custom from attrs to avoid duplication
attrs = {k: v for k, v in attrs.items() if k != "custom"}
# Merge remaining attributes
final_attrs = {**user_data, **attrs}
context = create_context(str(user_id), final_attrs)
# Evaluate the flag
result = client.variation(flag_key, context, default)
logger.debug(
f"Feature flag {flag_key} for user {user_id}: {result} "
f"(use_user_id_only: {use_user_id_only})"
)
return result
except Exception as e:
logger.debug(
f"LaunchDarkly flag evaluation failed for {flag_key}: {e}, using default={default}"
)
return default
def feature_flag(
flag_key: str,
default: bool = False,
) -> Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]:
"""
Decorator for async feature flag protected endpoints.
Args:
flag_key: The LaunchDarkly feature flag key
default: Default value if flag evaluation fails
Returns:
Decorator that only works with async functions
"""
def decorator(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
@wraps(func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
# Use the unified function with full context support
is_enabled = await is_feature_enabled(
flag_key, str(user_id), default, use_user_id_only=False
)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
return await func(*args, **kwargs)
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
return async_wrapper
return decorator
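# Illustrative usage of the decorator above (example only, not part of this
# module; the router, path, and flag key are placeholders):
#
#   @router.get("/beta-dashboard")
#   @feature_flag("beta-dashboard", default=False)
#   async def beta_dashboard(user_id: str) -> dict:
#       return {"beta": True}
#
# If the flag evaluates to False for the caller, the endpoint responds with
# 404 ("Feature not available") instead of executing the handler.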
@contextlib.contextmanager
def mock_flag_variation(flag_key: str, return_value: Any):
"""Context manager for testing feature flags."""
original_variation = get_client().variation
get_client().variation = lambda key, context, default: (
return_value if key == flag_key else original_variation(key, context, default)
)
try:
yield
finally:
get_client().variation = original_variation

View File

@@ -0,0 +1,84 @@
import pytest
from fastapi import HTTPException
from ldclient import LDClient
from autogpt_libs.feature_flag.client import (
feature_flag,
is_feature_enabled,
mock_flag_variation,
)
@pytest.fixture
def ld_client(mocker):
client = mocker.Mock(spec=LDClient)
mocker.patch("ldclient.get", return_value=client)
client.is_initialized.return_value = True
return client
@pytest.mark.asyncio
async def test_feature_flag_enabled(ld_client):
ld_client.variation.return_value = True
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
result = await test_function(user_id="test-user")
assert result == "success"
ld_client.variation.assert_called_once()
@pytest.mark.asyncio
async def test_feature_flag_unauthorized_response(ld_client):
ld_client.variation.return_value = False
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
with pytest.raises(HTTPException):
await test_function(user_id="test-user")
def test_mock_flag_variation(ld_client):
with mock_flag_variation("test-flag", True):
assert ld_client.variation("test-flag", None, False)
with mock_flag_variation("test-flag", False):
assert ld_client.variation("test-flag", None, False) is False
@pytest.mark.asyncio
async def test_is_feature_enabled(ld_client):
"""Test the is_feature_enabled helper function."""
ld_client.is_initialized.return_value = True
ld_client.variation.return_value = True
result = await is_feature_enabled("test-flag", "user123", default=False)
assert result is True
ld_client.variation.assert_called_once()
call_args = ld_client.variation.call_args
assert call_args[0][0] == "test-flag" # flag_key
assert call_args[0][2] is False # default value
@pytest.mark.asyncio
async def test_is_feature_enabled_not_initialized(ld_client):
"""Test is_feature_enabled when LaunchDarkly is not initialized."""
ld_client.is_initialized.return_value = False
result = await is_feature_enabled("test-flag", "user123", default=True)
assert result is True # Should return default
ld_client.variation.assert_not_called()
@pytest.mark.asyncio
async def test_is_feature_enabled_exception(mocker):
"""Test is_feature_enabled when get_client() raises an exception."""
mocker.patch(
"autogpt_libs.feature_flag.client.get_client",
side_effect=Exception("Client error"),
)
result = await is_feature_enabled("test-flag", "user123", default=True)
assert result is True # Should return default

View File

@@ -0,0 +1,15 @@
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
launch_darkly_sdk_key: str = Field(
default="",
description="The Launch Darkly SDK key",
validation_alias="LAUNCH_DARKLY_SDK_KEY",
)
model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")
SETTINGS = Settings()

View File

@@ -1,8 +1,6 @@
"""Logging module for Auto-GPT."""
import logging
import os
import socket
import sys
from pathlib import Path
@@ -12,15 +10,6 @@ from pydantic_settings import BaseSettings, SettingsConfigDict
from .filters import BelowLevelFilter
from .formatters import AGPTFormatter
# Configure global socket timeout and gRPC keepalive to prevent deadlocks
# This must be done at import time before any gRPC connections are established
socket.setdefaulttimeout(30) # 30-second socket timeout
# Enable gRPC keepalive to detect dead connections faster
os.environ.setdefault("GRPC_KEEPALIVE_TIME_MS", "30000") # 30 seconds
os.environ.setdefault("GRPC_KEEPALIVE_TIMEOUT_MS", "5000") # 5 seconds
os.environ.setdefault("GRPC_KEEPALIVE_PERMIT_WITHOUT_CALLS", "true")
LOG_DIR = Path(__file__).parent.parent.parent.parent / "logs"
LOG_FILE = "activity.log"
DEBUG_LOG_FILE = "debug.log"
@@ -90,6 +79,7 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
Note: This function is typically called at the start of the application
to set up the logging infrastructure.
"""
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
@@ -115,17 +105,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import (
BackgroundThreadTransport,
)
from google.cloud.logging_v2.handlers.transports.sync import SyncTransport
client = google.cloud.logging.Client()
# Use BackgroundThreadTransport to prevent blocking the main thread
# and deadlocks when gRPC calls to Google Cloud Logging hang
cloud_handler = CloudLoggingHandler(
client,
name="autogpt_logs",
transport=BackgroundThreadTransport,
transport=SyncTransport,
)
cloud_handler.setLevel(config.level)
log_handlers.append(cloud_handler)

View File

@@ -1,5 +1,39 @@
import logging
import re
from typing import Any
import uvicorn.config
from colorama import Fore
def remove_color_codes(s: str) -> str:
return re.sub(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])", "", s)
def fmt_kwargs(kwargs: dict) -> str:
return ", ".join(f"{n}={repr(v)}" for n, v in kwargs.items())
def print_attribute(
title: str, value: Any, title_color: str = Fore.GREEN, value_color: str = ""
) -> None:
logger = logging.getLogger()
logger.info(
str(value),
extra={
"title": f"{title.rstrip(':')}:",
"title_color": title_color,
"color": value_color,
},
)
def generate_uvicorn_config():
"""
Generates a uvicorn logging config that silences uvicorn's default logging and tells it to use the native logging module.
"""
log_config = dict(uvicorn.config.LOGGING_CONFIG)
log_config["loggers"]["uvicorn"] = {"handlers": []}
log_config["loggers"]["uvicorn.error"] = {"handlers": []}
log_config["loggers"]["uvicorn.access"] = {"handlers": []}
return log_config

View File

@@ -4,6 +4,7 @@ import threading
import time
from functools import wraps
from typing import (
Any,
Awaitable,
Callable,
ParamSpec,
@@ -22,13 +23,11 @@ logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]: ...
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def thread_cached(func: Callable[P, R]) -> Callable[P, R]: ...
def thread_cached(
@@ -76,32 +75,26 @@ def clear_thread_cache(func: Callable) -> None:
clear()
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
class AsyncCachedFunction(Protocol):
"""Protocol for async functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
return None
def cache_info(self) -> dict[str, int | None]:
def cache_info(self) -> dict[str, Any]:
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
async def __call__(self, *args: Any, **kwargs: Any) -> Any:
"""Call the cached function."""
return None # type: ignore
return None
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
) -> Callable[[Callable[..., Awaitable[Any]]], AsyncCachedFunction]:
"""
TTL (Time To Live) cache decorator for async functions.
@@ -127,13 +120,13 @@ def async_ttl_cache(
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
async_func: Callable[..., Awaitable[Any]],
) -> AsyncCachedFunction:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
cache_storage: dict[Any, Any | Tuple[Any, float]] = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
async def wrapper(*args, **kwargs):
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
@@ -145,7 +138,7 @@ def async_ttl_cache(
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
return cache_storage[key]
else:
# With TTL - check expiration
cached_data = cache_storage[key]
@@ -155,7 +148,7 @@ def async_ttl_cache(
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
return result
else:
# Expired entry
del cache_storage[key]
@@ -192,7 +185,7 @@ def async_ttl_cache(
def cache_clear() -> None:
cache_storage.clear()
def cache_info() -> dict[str, int | None]:
def cache_info() -> dict[str, Any]:
return {
"size": len(cache_storage),
"maxsize": maxsize,
@@ -203,35 +196,14 @@ def async_ttl_cache(
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
return cast(AsyncCachedFunction[P, R], wrapper)
return cast(AsyncCachedFunction, wrapper)
return decorator
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
) -> Callable[[Callable[..., Awaitable[Any]]], AsyncCachedFunction]:
"""
Process-level cache decorator for async functions (no TTL).
@@ -239,28 +211,15 @@ def async_cache(
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
Returns:
Decorated function or decorator
Decorator function
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)

View File

@@ -461,7 +461,7 @@ class TestAsyncTTLCache:
# Cache size should be reduced (cleanup removes oldest entries)
info = size_limited_function.cache_info()
assert info["size"] is not None and info["size"] <= 3 # Should be cleaned up
assert info["size"] <= 3 # Should be cleaned up
@pytest.mark.asyncio
async def test_argument_variations(self):

View File

@@ -1,52 +0,0 @@
# Development and testing files
**/__pycache__
**/*.pyc
**/*.pyo
**/*.pyd
**/.Python
**/env/
**/venv/
**/.venv/
**/pip-log.txt
**/.pytest_cache/
**/test-results/
**/snapshots/
**/test/
# IDE and editor files
**/.vscode/
**/.idea/
**/*.swp
**/*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Logs
**/*.log
**/logs/
# Git
.git/
.gitignore
# Documentation
**/*.md
!README.md
# Local development files
.env
.env.local
**/.env.test
# Build artifacts
**/dist/
**/build/
**/target/
# Docker files (avoid recursion)
Dockerfile*
docker-compose*
.dockerignore

View File

@@ -1,9 +1,3 @@
# Backend Configuration
# This file contains environment variables that MUST be set for the AutoGPT platform
# Variables with working defaults in settings.py are not included here
## ===== REQUIRED DATABASE CONFIGURATION ===== ##
# PostgreSQL Database Connection
DB_USER=postgres
DB_PASS=your-super-secret-and-long-postgres-password
DB_NAME=postgres
@@ -16,50 +10,72 @@ DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
ENABLE_AUTH=true
## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
# EXECUTOR
NUM_GRAPH_WORKERS=10
BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
# generate using `from cryptography.fernet import Fernet;Fernet.generate_key().decode()`
ENCRYPTION_KEY='dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw='
UNSUBSCRIBE_SECRET_KEY = 'HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio='
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
# RabbitMQ Credentials
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
ENABLE_CREDIT=false
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# Supabase Authentication
# What environment things should be logged under: local dev or prod
APP_ENV=local
# What environment to behave as: "local" or "cloud"
BEHAVE_AS=local
PYRO_HOST=localhost
SENTRY_DSN=
# Email For Postmark so we can send emails
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
## User auth with Supabase is required for any of the 3rd party integrations with auth to work.
ENABLE_AUTH=true
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=
UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio=
# RabbitMQ credentials -- Used for communication between services
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
## ===== IMPORTANT OPTIONAL CONFIGURATION ===== ##
# Platform URLs (set these for webhooks and OAuth to work)
PLATFORM_BASE_URL=http://localhost:8000
FRONTEND_BASE_URL=http://localhost:3000
# Media Storage (required for marketplace and library functionality)
## GCS bucket is required for marketplace and library functionality
MEDIA_GCS_BUCKET_NAME=
## ===== API KEYS AND OAUTH CREDENTIALS ===== ##
# All API keys below are optional - only add what you need
## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# FRONTEND_BASE_URL=http://localhost:3000
# AI/LLM Services
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
LLAMA_API_KEY=
AIML_API_KEY=
V0_API_KEY=
OPEN_ROUTER_API_KEY=
NVIDIA_API_KEY=
## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
## If you are developing locally, you can use something like ngrok to get a public URL
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=http://localhost:3000
## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the backend secret key
TURNSTILE_SECRET_KEY=
## This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
# OAuth Credentials
# For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
# e.g. http://localhost:3000/auth/integrations/oauth_callback
@@ -69,6 +85,7 @@ GITHUB_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):
# https://console.developers.google.com/apis/api/gmail.googleapis.com/overview ?project=<your_project_id>
# https://console.cloud.google.com/apis/library/sheets.googleapis.com/ ?project=<your_project_id>
@@ -104,66 +121,104 @@ LINEAR_CLIENT_SECRET=
TODOIST_CLIENT_ID=
TODOIST_CLIENT_SECRET=
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
## ===== OPTIONAL API KEYS ===== ##
# LLM
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AIML_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
LLAMA_API_KEY=
# Reddit
# Go to https://www.reddit.com/prefs/apps and create a new app
# Choose "script" for the type
# Fill in the redirect uri as <your_frontend_url>/auth/integrations/oauth_callback, e.g. http://localhost:3000/auth/integrations/oauth_callback
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USER_AGENT="AutoGPT:1.0 (by /u/autogpt)"
# Payment Processing
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# Email Service (for sending notifications and confirmations)
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
# Error Tracking
SENTRY_DSN=
# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
# Feature Flags
LAUNCH_DARKLY_SDK_KEY=
# Content Generation & Media
DID_API_KEY=
FAL_API_KEY=
IDEOGRAM_API_KEY=
REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
# Data & Search Services
E2B_API_KEY=
EXA_API_KEY=
JINA_API_KEY=
MEM0_API_KEY=
OPENWEATHERMAP_API_KEY=
GOOGLE_MAPS_API_KEY=
# Communication Services
# Discord
DISCORD_BOT_TOKEN=
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
# SMTP/Email
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Business & Marketing Tools
# D-ID
DID_API_KEY=
# Open Weather Map
OPENWEATHERMAP_API_KEY=
# SMTP
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Medium
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
# Google Maps
GOOGLE_MAPS_API_KEY=
# Replicate
REPLICATE_API_KEY=
# Ideogram
IDEOGRAM_API_KEY=
# Fal
FAL_API_KEY=
# Exa
EXA_API_KEY=
# E2B
E2B_API_KEY=
# Mem0
MEM0_API_KEY=
# Nvidia
NVIDIA_API_KEY=
# Apollo
APOLLO_API_KEY=
ENRICHLAYER_API_KEY=
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
# SmartLead
SMARTLEAD_API_KEY=
# ZeroBounce
ZEROBOUNCE_API_KEY=
# Other Services
AUTOMOD_API_KEY=
# Ayrshare
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
## ===== OPTIONAL API KEYS END ===== ##
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400
# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs
# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false
# Cloud Storage Configuration
# Cleanup interval for expired files (hours between cleanup runs, 1-24 hours)
CLOUD_STORAGE_CLEANUP_INTERVAL_HOURS=6

View File

@@ -1,4 +1,3 @@
.env
database.db
database.db-journal
dev.db

View File

@@ -8,14 +8,14 @@ WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
# Update package list and install build dependencies in a single layer
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y \
build-essential \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client
RUN apt-get update --allow-releaseinfo-change --fix-missing
# Install build dependencies
RUN apt-get install -y build-essential
RUN apt-get install -y libpq5
RUN apt-get install -y libz-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y postgresql-client
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
@@ -68,12 +68,6 @@ COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.tom
WORKDIR /app/autogpt_platform/backend
FROM server_dependencies AS migrate
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
FROM server_dependencies AS server
COPY autogpt_platform/backend /app/autogpt_platform/backend

View File

@@ -1,408 +0,0 @@
"""
API module for Enrichlayer integration.
This module provides a client for interacting with the Enrichlayer API,
which allows fetching LinkedIn profile data and related information.
"""
import datetime
import enum
import logging
from json import JSONDecodeError
from typing import Any, Optional, TypeVar
from pydantic import BaseModel, Field
from backend.data.model import APIKeyCredentials
from backend.util.request import Requests
logger = logging.getLogger(__name__)
T = TypeVar("T")
class EnrichlayerAPIException(Exception):
"""Exception raised for Enrichlayer API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class FallbackToCache(enum.Enum):
ON_ERROR = "on-error"
NEVER = "never"
class UseCache(enum.Enum):
IF_PRESENT = "if-present"
NEVER = "never"
class SocialMediaProfiles(BaseModel):
"""Social media profiles model."""
twitter: Optional[str] = None
facebook: Optional[str] = None
github: Optional[str] = None
class Experience(BaseModel):
"""Experience model for LinkedIn profiles."""
company: Optional[str] = None
title: Optional[str] = None
description: Optional[str] = None
location: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
company_linkedin_profile_url: Optional[str] = None
class Education(BaseModel):
"""Education model for LinkedIn profiles."""
school: Optional[str] = None
degree_name: Optional[str] = None
field_of_study: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
school_linkedin_profile_url: Optional[str] = None
class PersonProfileResponse(BaseModel):
"""Response model for LinkedIn person profile.
This model represents the response from Enrichlayer's LinkedIn profile API.
The API returns comprehensive profile data including work experience,
education, skills, and contact information (when available).
Example API Response:
{
"public_identifier": "johnsmith",
"full_name": "John Smith",
"occupation": "Software Engineer at Tech Corp",
"experiences": [
{
"company": "Tech Corp",
"title": "Software Engineer",
"starts_at": {"year": 2020, "month": 1}
}
],
"education": [...],
"skills": ["Python", "JavaScript", ...]
}
"""
public_identifier: Optional[str] = None
profile_pic_url: Optional[str] = None
full_name: Optional[str] = None
first_name: Optional[str] = None
last_name: Optional[str] = None
occupation: Optional[str] = None
headline: Optional[str] = None
summary: Optional[str] = None
country: Optional[str] = None
country_full_name: Optional[str] = None
city: Optional[str] = None
state: Optional[str] = None
experiences: Optional[list[Experience]] = None
education: Optional[list[Education]] = None
languages: Optional[list[str]] = None
skills: Optional[list[str]] = None
inferred_salary: Optional[dict[str, Any]] = None
personal_email: Optional[str] = None
personal_contact_number: Optional[str] = None
social_media_profiles: Optional[SocialMediaProfiles] = None
extra: Optional[dict[str, Any]] = None
class SimilarProfile(BaseModel):
"""Similar profile model for LinkedIn person lookup."""
similarity: float
linkedin_profile_url: str
class PersonLookupResponse(BaseModel):
"""Response model for LinkedIn person lookup.
This model represents the response from Enrichlayer's person lookup API.
The API returns a LinkedIn profile URL and similarity scores when
searching for a person by name and company.
Example API Response:
{
"url": "https://www.linkedin.com/in/johnsmith/",
"name_similarity_score": 0.95,
"company_similarity_score": 0.88,
"title_similarity_score": 0.75,
"location_similarity_score": 0.60
}
"""
url: str | None = None
name_similarity_score: float | None
company_similarity_score: float | None
title_similarity_score: float | None
location_similarity_score: float | None
last_updated: datetime.datetime | None = None
profile: PersonProfileResponse | None = None
class RoleLookupResponse(BaseModel):
"""Response model for LinkedIn role lookup.
This model represents the response from Enrichlayer's role lookup API.
The API returns LinkedIn profile data for a specific role at a company.
Example API Response:
{
"linkedin_profile_url": "https://www.linkedin.com/in/johnsmith/",
"profile_data": {...} // Full PersonProfileResponse data when enrich_profile=True
}
"""
linkedin_profile_url: Optional[str] = None
profile_data: Optional[PersonProfileResponse] = None
class ProfilePictureResponse(BaseModel):
"""Response model for LinkedIn profile picture.
This model represents the response from Enrichlayer's profile picture API.
The API returns a URL to the person's LinkedIn profile picture.
Example API Response:
{
"tmp_profile_pic_url": "https://media.licdn.com/dms/image/..."
}
"""
tmp_profile_pic_url: str = Field(
..., description="URL of the profile picture", alias="tmp_profile_pic_url"
)
@property
def profile_picture_url(self) -> str:
"""Backward compatibility property for profile_picture_url."""
return self.tmp_profile_pic_url
class EnrichlayerClient:
"""Client for interacting with the Enrichlayer API."""
API_BASE_URL = "https://enrichlayer.com/api/v2"
def __init__(
self,
credentials: Optional[APIKeyCredentials] = None,
custom_requests: Optional[Requests] = None,
):
"""
Initialize the Enrichlayer client.
Args:
credentials: The credentials to use for authentication.
custom_requests: Custom Requests instance for testing.
"""
if custom_requests:
self._requests = custom_requests
else:
headers: dict[str, str] = {
"Content-Type": "application/json",
}
if credentials:
headers["Authorization"] = (
f"Bearer {credentials.api_key.get_secret_value()}"
)
self._requests = Requests(
extra_headers=headers,
raise_for_status=False,
)
async def _handle_response(self, response) -> Any:
"""
Handle API response and check for errors.
Args:
response: The response object from the request.
Returns:
The response data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
if not response.ok:
try:
error_data = response.json()
error_message = error_data.get("message", "")
except JSONDecodeError:
error_message = response.text
raise EnrichlayerAPIException(
f"Enrichlayer API request failed ({response.status_code}): {error_message}",
response.status_code,
)
return response.json()
async def fetch_profile(
self,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
) -> PersonProfileResponse:
"""
Fetch a LinkedIn profile with optional parameters.
Args:
linkedin_url: The LinkedIn profile URL to fetch.
fallback_to_cache: Cache usage if live fetch fails ('on-error' or 'never').
use_cache: Cache utilization ('if-present' or 'never').
include_skills: Whether to include skills data.
include_inferred_salary: Whether to include inferred salary data.
include_personal_email: Whether to include personal email.
include_personal_contact_number: Whether to include personal contact number.
include_social_media: Whether to include social media profiles.
include_extra: Whether to include additional data.
Returns:
The LinkedIn profile data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"url": linkedin_url,
"fallback_to_cache": fallback_to_cache.value.lower(),
"use_cache": use_cache.value.lower(),
}
if include_skills:
params["skills"] = "include"
if include_inferred_salary:
params["inferred_salary"] = "include"
if include_personal_email:
params["personal_email"] = "include"
if include_personal_contact_number:
params["personal_contact_number"] = "include"
if include_social_media:
params["twitter_profile_id"] = "include"
params["facebook_profile_id"] = "include"
params["github_profile_id"] = "include"
if include_extra:
params["extra"] = "include"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile", params=params
)
return PersonProfileResponse(**await self._handle_response(response))
async def lookup_person(
self,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
) -> PersonLookupResponse:
"""
Look up a LinkedIn profile by person's information.
Args:
first_name: The person's first name.
last_name: The person's last name.
company_domain: The domain of the company they work for.
location: The person's location.
title: The person's job title.
include_similarity_checks: Whether to include similarity checks.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {"first_name": first_name, "company_domain": company_domain}
if last_name:
params["last_name"] = last_name
if location:
params["location"] = location
if title:
params["title"] = title
if include_similarity_checks:
params["similarity_checks"] = "include"
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile/resolve", params=params
)
return PersonLookupResponse(**await self._handle_response(response))
async def lookup_role(
self, role: str, company_name: str, enrich_profile: bool = False
) -> RoleLookupResponse:
"""
Look up a LinkedIn profile by role in a company.
Args:
role: The role title (e.g., CEO, CTO).
company_name: The name of the company.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"role": role,
"company_name": company_name,
}
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/find/company/role", params=params
)
return RoleLookupResponse(**await self._handle_response(response))
async def get_profile_picture(
self, linkedin_profile_url: str
) -> ProfilePictureResponse:
"""
Get a LinkedIn profile picture URL.
Args:
linkedin_profile_url: The LinkedIn profile URL.
Returns:
The profile picture URL.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"linkedin_person_profile_url": linkedin_profile_url,
}
response = await self._requests.get(
f"{self.API_BASE_URL}/person/profile-picture", params=params
)
return ProfilePictureResponse(**await self._handle_response(response))
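For reference, a minimal usage sketch of the client defined above (not part of the changeset); it assumes the module's own imports (APIKeyCredentials, EnrichlayerClient) and uses placeholder credential values rather than a real key.

import asyncio

from pydantic import SecretStr


async def _example() -> None:
    # Placeholder credentials - substitute a real Enrichlayer API key to run this.
    credentials = APIKeyCredentials(
        id="00000000-0000-0000-0000-000000000000",
        provider="enrichlayer",
        api_key=SecretStr("YOUR_ENRICHLAYER_API_KEY"),
        title="Enrichlayer API key",
        expires_at=None,
    )
    client = EnrichlayerClient(credentials=credentials)
    profile = await client.fetch_profile(
        linkedin_url="https://www.linkedin.com/in/williamhgates/",
        include_skills=True,
    )
    print(profile.full_name, profile.occupation)


# asyncio.run(_example())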

View File

@@ -1,34 +0,0 @@
"""
Authentication module for Enrichlayer API integration.
This module provides credential types and test credentials for the Enrichlayer API.
"""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
# Define the type of credentials input expected for Enrichlayer API
EnrichlayerCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ENRICHLAYER], Literal["api_key"]
]
# Mock credentials for testing Enrichlayer API integration
TEST_CREDENTIALS = APIKeyCredentials(
id="1234a567-89bc-4def-ab12-3456cdef7890",
provider="enrichlayer",
api_key=SecretStr("mock-enrichlayer-api-key"),
title="Mock Enrichlayer API key",
expires_at=None,
)
# Dictionary representation of test credentials for input fields
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}

View File

@@ -1,527 +0,0 @@
"""
Block definitions for Enrichlayer API integration.
This module implements blocks for interacting with the Enrichlayer API,
which provides access to LinkedIn profile data and related information.
"""
import logging
from typing import Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.type import MediaFileType
from ._api import (
EnrichlayerClient,
Experience,
FallbackToCache,
PersonLookupResponse,
PersonProfileResponse,
RoleLookupResponse,
UseCache,
)
from ._auth import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, EnrichlayerCredentialsInput
logger = logging.getLogger(__name__)
class GetLinkedinProfileBlock(Block):
"""Block to fetch LinkedIn profile data using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfileBlock."""
linkedin_url: str = SchemaField(
description="LinkedIn profile URL to fetch data from",
placeholder="https://www.linkedin.com/in/username/",
)
fallback_to_cache: FallbackToCache = SchemaField(
description="Cache usage if live fetch fails",
default=FallbackToCache.ON_ERROR,
advanced=True,
)
use_cache: UseCache = SchemaField(
description="Cache utilization strategy",
default=UseCache.IF_PRESENT,
advanced=True,
)
include_skills: bool = SchemaField(
description="Include skills data",
default=False,
advanced=True,
)
include_inferred_salary: bool = SchemaField(
description="Include inferred salary data",
default=False,
advanced=True,
)
include_personal_email: bool = SchemaField(
description="Include personal email",
default=False,
advanced=True,
)
include_personal_contact_number: bool = SchemaField(
description="Include personal contact number",
default=False,
advanced=True,
)
include_social_media: bool = SchemaField(
description="Include social media profiles",
default=False,
advanced=True,
)
include_extra: bool = SchemaField(
description="Include additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfileBlock."""
profile: PersonProfileResponse = SchemaField(
description="LinkedIn profile data"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfileBlock."""
super().__init__(
id="f6e0ac73-4f1d-4acb-b4b7-b67066c5984e",
description="Fetch LinkedIn profile data using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfileBlock.Input,
output_schema=GetLinkedinProfileBlock.Output,
test_input={
"linkedin_url": "https://www.linkedin.com/in/williamhgates/",
"include_skills": True,
"include_social_media": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile",
PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_fetch_profile": lambda *args, **kwargs: PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
},
)
@staticmethod
async def _fetch_profile(
credentials: APIKeyCredentials,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
):
client = EnrichlayerClient(credentials)
profile = await client.fetch_profile(
linkedin_url=linkedin_url,
fallback_to_cache=fallback_to_cache,
use_cache=use_cache,
include_skills=include_skills,
include_inferred_salary=include_inferred_salary,
include_personal_email=include_personal_email,
include_personal_contact_number=include_personal_contact_number,
include_social_media=include_social_media,
include_extra=include_extra,
)
return profile
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to fetch LinkedIn profile data.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile = await self._fetch_profile(
credentials=credentials,
linkedin_url=input_data.linkedin_url,
fallback_to_cache=input_data.fallback_to_cache,
use_cache=input_data.use_cache,
include_skills=input_data.include_skills,
include_inferred_salary=input_data.include_inferred_salary,
include_personal_email=input_data.include_personal_email,
include_personal_contact_number=input_data.include_personal_contact_number,
include_social_media=input_data.include_social_media,
include_extra=input_data.include_extra,
)
yield "profile", profile
except Exception as e:
logger.error(f"Error fetching LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinPersonLookupBlock(Block):
"""Block to look up LinkedIn profiles by person's information using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinPersonLookupBlock."""
first_name: str = SchemaField(
description="Person's first name",
placeholder="John",
advanced=False,
)
last_name: str | None = SchemaField(
description="Person's last name",
placeholder="Doe",
default=None,
advanced=False,
)
company_domain: str = SchemaField(
description="Domain of the company they work for (optional)",
placeholder="example.com",
advanced=False,
)
location: Optional[str] = SchemaField(
description="Person's location (optional)",
placeholder="San Francisco",
default=None,
)
title: Optional[str] = SchemaField(
description="Person's job title (optional)",
placeholder="CEO",
default=None,
)
include_similarity_checks: bool = SchemaField(
description="Include similarity checks",
default=False,
advanced=True,
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinPersonLookupBlock."""
lookup_result: PersonLookupResponse = SchemaField(
description="LinkedIn profile lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinPersonLookupBlock."""
super().__init__(
id="d237a98a-5c4b-4a1c-b9e3-e6f9a6c81df7",
description="Look up LinkedIn profiles by person information using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinPersonLookupBlock.Input,
output_schema=LinkedinPersonLookupBlock.Output,
test_input={
"first_name": "Bill",
"last_name": "Gates",
"company_domain": "gatesfoundation.org",
"include_similarity_checks": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"lookup_result",
PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_person": lambda *args, **kwargs: PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
)
},
)
@staticmethod
async def _lookup_person(
credentials: APIKeyCredentials,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
lookup_result = await client.lookup_person(
first_name=first_name,
last_name=last_name,
company_domain=company_domain,
location=location,
title=title,
include_similarity_checks=include_similarity_checks,
enrich_profile=enrich_profile,
)
return lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
lookup_result = await self._lookup_person(
credentials=credentials,
first_name=input_data.first_name,
last_name=input_data.last_name,
company_domain=input_data.company_domain,
location=input_data.location,
title=input_data.title,
include_similarity_checks=input_data.include_similarity_checks,
enrich_profile=input_data.enrich_profile,
)
yield "lookup_result", lookup_result
except Exception as e:
logger.error(f"Error looking up LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinRoleLookupBlock(Block):
"""Block to look up LinkedIn profiles by role in a company using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinRoleLookupBlock."""
role: str = SchemaField(
description="Role title (e.g., CEO, CTO)",
placeholder="CEO",
)
company_name: str = SchemaField(
description="Name of the company",
placeholder="Microsoft",
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinRoleLookupBlock."""
role_lookup_result: RoleLookupResponse = SchemaField(
description="LinkedIn role lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinRoleLookupBlock."""
super().__init__(
id="3b9fc742-06d4-49c7-b5ce-7e302dd7c8a7",
description="Look up LinkedIn profiles by role in a company using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinRoleLookupBlock.Input,
output_schema=LinkedinRoleLookupBlock.Output,
test_input={
"role": "Co-chair",
"company_name": "Gates Foundation",
"enrich_profile": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"role_lookup_result",
RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_role": lambda *args, **kwargs: RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
},
)
@staticmethod
async def _lookup_role(
credentials: APIKeyCredentials,
role: str,
company_name: str,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
role_lookup_result = await client.lookup_role(
role=role,
company_name=company_name,
enrich_profile=enrich_profile,
)
return role_lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles by role.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
role_lookup_result = await self._lookup_role(
credentials=credentials,
role=input_data.role,
company_name=input_data.company_name,
enrich_profile=input_data.enrich_profile,
)
yield "role_lookup_result", role_lookup_result
except Exception as e:
logger.error(f"Error looking up role in company: {str(e)}")
yield "error", str(e)
class GetLinkedinProfilePictureBlock(Block):
"""Block to get LinkedIn profile pictures using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfilePictureBlock."""
linkedin_profile_url: str = SchemaField(
description="LinkedIn profile URL",
placeholder="https://www.linkedin.com/in/username/",
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfilePictureBlock."""
profile_picture_url: MediaFileType = SchemaField(
description="LinkedIn profile picture URL"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfilePictureBlock."""
super().__init__(
id="68d5a942-9b3f-4e9a-b7c1-d96ea4321f0d",
description="Get LinkedIn profile pictures using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfilePictureBlock.Input,
output_schema=GetLinkedinProfilePictureBlock.Output,
test_input={
"linkedin_profile_url": "https://www.linkedin.com/in/williamhgates/",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile_picture_url",
"https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_get_profile_picture": lambda *args, **kwargs: "https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
},
)
@staticmethod
async def _get_profile_picture(
credentials: APIKeyCredentials, linkedin_profile_url: str
):
client = EnrichlayerClient(credentials=credentials)
profile_picture_response = await client.get_profile_picture(
linkedin_profile_url=linkedin_profile_url,
)
return profile_picture_response.profile_picture_url
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to get LinkedIn profile pictures.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile_picture = await self._get_profile_picture(
credentials=credentials,
linkedin_profile_url=input_data.linkedin_profile_url,
)
yield "profile_picture_url", profile_picture
except Exception as e:
logger.error(f"Error getting profile picture: {str(e)}")
yield "error", str(e)

View File

@@ -29,8 +29,8 @@ class FirecrawlExtractBlock(Block):
prompt: str | None = SchemaField(
description="The prompt to use for the crawl", default=None, advanced=False
)
output_schema: dict | None = SchemaField(
description="A Json Schema describing the output structure if more rigid structure is desired.",
output_schema: str | None = SchemaField(
description="A more rigid structure if you already know the JSON layout.",
default=None,
)
enable_web_search: bool = SchemaField(
@@ -56,6 +56,7 @@ class FirecrawlExtractBlock(Block):
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
extract_result = app.extract(
urls=input_data.urls,
prompt=input_data.prompt,

File diff suppressed because it is too large

View File

@@ -37,7 +37,6 @@ LLMProviderName = Literal[
ProviderName.OPENAI,
ProviderName.OPEN_ROUTER,
ProviderName.LLAMA_API,
ProviderName.V0,
]
AICredentials = CredentialsMetaInput[LLMProviderName, Literal["api_key"]]
@@ -156,10 +155,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
LLAMA_API_LLAMA4_MAVERICK = "Llama-4-Maverick-17B-128E-Instruct-FP8"
LLAMA_API_LLAMA3_3_8B = "Llama-3.3-8B-Instruct"
LLAMA_API_LLAMA3_3_70B = "Llama-3.3-70B-Instruct"
# v0 by Vercel models
V0_1_5_MD = "v0-1.5-md"
V0_1_5_LG = "v0-1.5-lg"
V0_1_0_MD = "v0-1.0-md"
@property
def metadata(self) -> ModelMetadata:
@@ -285,10 +280,6 @@ MODEL_METADATA = {
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028),
# v0 by Vercel models
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000),
}
for model in LlmModel:
@@ -685,11 +676,7 @@ async def llm_call(
client = openai.OpenAI(
base_url="https://api.aimlapi.com/v2",
api_key=credentials.api_key.get_secret_value(),
default_headers={
"X-Project": "AutoGPT",
"X-Title": "AutoGPT",
"HTTP-Referer": "https://github.com/Significant-Gravitas/AutoGPT",
},
default_headers={"X-Project": "AutoGPT"},
)
completion = client.chat.completions.create(
@@ -709,42 +696,6 @@ async def llm_call(
),
reasoning=None,
)
elif provider == "v0":
tools_param = tools if tools else openai.NOT_GIVEN
client = openai.AsyncOpenAI(
base_url="https://api.v0.dev/v1",
api_key=credentials.api_key.get_secret_value(),
)
response_format = None
if json_format:
response_format = {"type": "json_object"}
parallel_tool_calls_param = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
response_format=response_format, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=parallel_tool_calls_param,
)
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
prompt=prompt,
response=response.choices[0].message.content or "",
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")

View File

@@ -291,32 +291,9 @@ class SmartDecisionMakerBlock(Block):
for link in links:
sink_name = SmartDecisionMakerBlock.cleanup(link.sink_name)
# Handle dynamic fields (e.g., values_#_*, items_$_*, etc.)
# These are fields that get merged by the executor into their base field
if (
"_#_" in link.sink_name
or "_$_" in link.sink_name
or "_@_" in link.sink_name
):
# For dynamic fields, provide a generic string schema
# The executor will handle merging these into the appropriate structure
properties[sink_name] = {
"type": "string",
"description": f"Dynamic value for {link.sink_name}",
}
else:
# For regular fields, use the block's schema
try:
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
except (KeyError, AttributeError):
# If the field doesn't exist in the schema, provide a generic schema
properties[sink_name] = {
"type": "string",
"description": f"Value for {link.sink_name}",
}
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
tool_function["parameters"] = {
**block.input_schema.jsonschema(),
@@ -501,6 +478,10 @@ class SmartDecisionMakerBlock(Block):
}
)
prompt.extend(tool_output)
if input_data.multiple_tool_calls:
input_data.sys_prompt += "\nYou can call a tool (different tools) multiple times in a single response."
else:
input_data.sys_prompt += "\nOnly provide EXACTLY one function call, multiple tool calls is strictly prohibited."
values = input_data.prompt_values
if values:

View File

@@ -1,130 +0,0 @@
from unittest.mock import Mock
import pytest
from backend.blocks.data_manipulation import AddToListBlock, CreateDictionaryBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_dict_fields():
"""Test Smart Decision Maker can handle dynamic dictionary fields (_#_) for any block"""
# Create a mock node for CreateDictionaryBlock
mock_node = Mock()
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
# Create mock links with dynamic dictionary fields
mock_links = [
Mock(
source_name="tools_^_create_dict_~_name",
sink_name="values_#_name", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_age",
sink_name="values_#_age", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_city",
sink_name="values_#_city", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify the signature was created successfully
assert signature["type"] == "function"
assert "parameters" in signature["function"]
assert "properties" in signature["function"]["parameters"]
# Check that dynamic fields are handled
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 3 # Should have all three fields
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert "type" in prop_value
assert prop_value["type"] == "string" # Dynamic fields get string type
assert "description" in prop_value
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_list_fields():
"""Test Smart Decision Maker can handle dynamic list fields (_$_) for any block"""
# Create a mock node for AddToListBlock
mock_node = Mock()
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
# Create mock links with dynamic list fields
mock_links = [
Mock(
source_name="tools_^_add_to_list_~_0",
sink_name="entries_$_0", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_add_to_list_~_1",
sink_name="entries_$_1", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify dynamic list fields are handled properly
assert signature["type"] == "function"
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 2 # Should have both list items
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert prop_value["type"] == "string"
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_create_dict_block_with_dynamic_values():
"""Test CreateDictionaryBlock processes dynamic values correctly"""
block = CreateDictionaryBlock()
# Simulate what happens when executor merges dynamic fields
# The executor merges values_#_* fields into the values dict
input_data = block.input_schema(
values={
"existing": "value",
"name": "Alice", # This would come from values_#_name
"age": 25, # This would come from values_#_age
}
)
# Run the block
result = {}
async for output_name, output_value in block.run(input_data):
result[output_name] = output_value
# Check the result
assert "dictionary" in result
assert result["dictionary"]["existing"] == "value"
assert result["dictionary"]["name"] == "Alice"
assert result["dictionary"]["age"] == 25
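The tests above assume the executor merges dynamic inputs such as values_#_name into their base field before the block runs. A simplified, standalone sketch of that merge behavior for dictionary-style fields follows; it is an assumption for illustration and is not taken from the actual executor code.

def merge_dynamic_dict_inputs(raw_inputs: dict) -> dict:
    """Simplified illustration: fold "base_#_key" entries into a dict under "base"."""
    merged: dict = {}
    for name, value in raw_inputs.items():
        if "_#_" in name:
            base, _, sub_key = name.partition("_#_")
            merged.setdefault(base, {})[sub_key] = value
        else:
            merged[name] = value
    return merged


# merge_dynamic_dict_inputs({"values_#_name": "Alice", "values_#_age": 25})
# -> {"values": {"name": "Alice", "age": 25}}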

View File

@@ -5,12 +5,6 @@ from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock
from backend.blocks.apollo.organization import SearchOrganizationsBlock
from backend.blocks.apollo.people import SearchPeopleBlock
from backend.blocks.apollo.person import GetPersonDetailBlock
from backend.blocks.enrichlayer.linkedin import (
GetLinkedinProfileBlock,
GetLinkedinProfilePictureBlock,
LinkedinPersonLookupBlock,
LinkedinRoleLookupBlock,
)
from backend.blocks.flux_kontext import AIImageEditorBlock, FluxKontextModelName
from backend.blocks.ideogram import IdeogramModelBlock
from backend.blocks.jina.embeddings import JinaEmbeddingBlock
@@ -36,7 +30,6 @@ from backend.integrations.credentials_store import (
anthropic_credentials,
apollo_credentials,
did_credentials,
enrichlayer_credentials,
groq_credentials,
ideogram_credentials,
jina_credentials,
@@ -46,7 +39,6 @@ from backend.integrations.credentials_store import (
replicate_credentials,
revid_credentials,
unreal_credentials,
v0_credentials,
)
# =============== Configure the cost for each LLM Model call =============== #
@@ -123,10 +115,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
LlmModel.GEMINI_2_0_FLASH_LITE: 1,
LlmModel.DEEPSEEK_R1_0528: 1,
# v0 by Vercel models
LlmModel.V0_1_5_MD: 1,
LlmModel.V0_1_5_LG: 2,
LlmModel.V0_1_0_MD: 1,
}
for model in LlmModel:
@@ -216,23 +204,6 @@ LLM_COST = (
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "llama_api"
]
# v0 by Vercel Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": v0_credentials.id,
"provider": v0_credentials.provider,
"type": v0_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "v0"
]
# AI/ML Api Models
+ [
BlockCost(
@@ -405,54 +376,6 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
},
)
],
GetLinkedinProfileBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinPersonLookupBlock: [
BlockCost(
cost_amount=2,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinRoleLookupBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
GetLinkedinProfilePictureBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
SmartDecisionMakerBlock: LLM_COST,
SearchOrganizationsBlock: [
BlockCost(

View File

@@ -286,17 +286,11 @@ class UserCreditBase(ABC):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
async with db.locked_transaction(f"usr_trx_{user_id}"):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
user_balance, _ = await self._get_credits(user_id)
await CreditTransaction.prisma().update(
where={

View File

@@ -33,7 +33,7 @@ from prisma.types import (
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
)
from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError
from pydantic import BaseModel, ConfigDict, JsonValue
from pydantic.fields import Field
from backend.server.v2.store.exceptions import DatabaseError
@@ -59,7 +59,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
graph_execution_include,
)
from .model import GraphExecutionStats, NodeExecutionStats
from .model import GraphExecutionStats
T = TypeVar("T")
@@ -318,30 +318,18 @@ class NodeExecutionResult(BaseModel):
@staticmethod
def from_db(_node_exec: AgentNodeExecution, user_id: Optional[str] = None):
try:
stats = NodeExecutionStats.model_validate(_node_exec.stats or {})
except (ValueError, ValidationError):
stats = NodeExecutionStats()
if stats.cleared_inputs:
input_data: BlockInput = defaultdict()
for name, messages in stats.cleared_inputs.items():
input_data[name] = messages[-1] if messages else ""
elif _node_exec.executionData:
if _node_exec.executionData:
# Execution that has been queued for execution will persist its data.
input_data = type_utils.convert(_node_exec.executionData, dict[str, Any])
else:
# For incomplete execution, executionData will not be yet available.
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, type[Any])
output_data: CompletedBlockOutput = defaultdict(list)
if stats.cleared_outputs:
for name, messages in stats.cleared_outputs.items():
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:

View File

@@ -1,109 +0,0 @@
import logging
from collections import defaultdict
from datetime import datetime
from prisma.enums import AgentExecutionStatus
from backend.data.execution import get_graph_executions
from backend.data.graph import get_graph_metadata
from backend.data.model import UserExecutionSummaryStats
from backend.server.v2.store.exceptions import DatabaseError
from backend.util.logging import TruncatedLogger
logger = TruncatedLogger(logging.getLogger(__name__), prefix="[SummaryData]")
async def get_user_execution_summary_data(
user_id: str, start_time: datetime, end_time: datetime
) -> UserExecutionSummaryStats:
"""Gather all summary data for a user in a time range.
This function fetches graph executions once and aggregates all required
statistics in a single pass for efficiency.
"""
try:
# Fetch graph executions once
executions = await get_graph_executions(
user_id=user_id,
created_time_gte=start_time,
created_time_lte=end_time,
)
# Initialize aggregation variables
total_credits_used = 0.0
total_executions = len(executions)
successful_runs = 0
failed_runs = 0
terminated_runs = 0
execution_times = []
agent_usage = defaultdict(int)
cost_by_graph_id = defaultdict(float)
# Single pass through executions to aggregate all stats
for execution in executions:
# Count execution statuses (including TERMINATED as failed)
if execution.status == AgentExecutionStatus.COMPLETED:
successful_runs += 1
elif execution.status == AgentExecutionStatus.FAILED:
failed_runs += 1
elif execution.status == AgentExecutionStatus.TERMINATED:
terminated_runs += 1
# Aggregate costs from stats
if execution.stats and hasattr(execution.stats, "cost"):
cost_in_dollars = execution.stats.cost / 100
total_credits_used += cost_in_dollars
cost_by_graph_id[execution.graph_id] += cost_in_dollars
# Collect execution times
if execution.stats and hasattr(execution.stats, "duration"):
execution_times.append(execution.stats.duration)
# Count agent usage
agent_usage[execution.graph_id] += 1
# Calculate derived stats
total_execution_time = sum(execution_times)
average_execution_time = (
total_execution_time / len(execution_times) if execution_times else 0
)
# Find most used agent
most_used_agent = "No agents used"
if agent_usage:
most_used_agent_id = max(agent_usage, key=lambda k: agent_usage[k])
try:
graph_meta = await get_graph_metadata(graph_id=most_used_agent_id)
most_used_agent = (
graph_meta.name if graph_meta else f"Agent {most_used_agent_id[:8]}"
)
except Exception:
logger.warning(f"Could not get metadata for graph {most_used_agent_id}")
most_used_agent = f"Agent {most_used_agent_id[:8]}"
# Convert graph_ids to agent names for cost breakdown
cost_breakdown = {}
for graph_id, cost in cost_by_graph_id.items():
try:
graph_meta = await get_graph_metadata(graph_id=graph_id)
agent_name = graph_meta.name if graph_meta else f"Agent {graph_id[:8]}"
except Exception:
logger.warning(f"Could not get metadata for graph {graph_id}")
agent_name = f"Agent {graph_id[:8]}"
cost_breakdown[agent_name] = cost
# Build the summary stats object (include terminated runs as failed)
return UserExecutionSummaryStats(
total_credits_used=total_credits_used,
total_executions=total_executions,
successful_runs=successful_runs,
failed_runs=failed_runs + terminated_runs,
most_used_agent=most_used_agent,
total_execution_time=total_execution_time,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
)
except Exception as e:
logger.error(f"Failed to get user summary data: {e}")
raise DatabaseError(f"Failed to get user summary data: {e}") from e

View File

@@ -764,9 +764,6 @@ class NodeExecutionStats(BaseModel):
output_token_count: int = 0
extra_cost: int = 0
extra_steps: int = 0
# Moderation fields
cleared_inputs: Optional[dict[str, list[str]]] = None
cleared_outputs: Optional[dict[str, list[str]]] = None
def __iadd__(self, other: "NodeExecutionStats") -> "NodeExecutionStats":
"""Mutate this instance by adding another NodeExecutionStats."""
@@ -821,21 +818,3 @@ class GraphExecutionStats(BaseModel):
activity_status: Optional[str] = Field(
default=None, description="AI-generated summary of what the agent did"
)
class UserExecutionSummaryStats(BaseModel):
"""Summary of user statistics for a specific user."""
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
)
total_credits_used: float = Field(default=0)
total_executions: int = Field(default=0)
successful_runs: int = Field(default=0)
failed_runs: int = Field(default=0)
most_used_agent: str = Field(default="")
total_execution_time: float = Field(default=0)
average_execution_time: float = Field(default=0)
cost_breakdown: dict[str, float] = Field(default_factory=dict)

View File

@@ -6,17 +6,20 @@ import json
import logging
from typing import TYPE_CHECKING, Any, NotRequired, TypedDict
from autogpt_libs.feature_flag.client import is_feature_enabled
from pydantic import SecretStr
from backend.blocks.llm import LlmModel, llm_call
from backend.data.block import get_block
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.data.model import APIKeyCredentials, GraphExecutionStats
from backend.util.feature_flag import Flag, is_feature_enabled
from backend.util.retry import func_retry
from backend.util.settings import Settings
from backend.util.truncate import truncate
# LaunchDarkly feature flag key for AI activity status generation
AI_ACTIVITY_STATUS_FLAG_KEY = "ai-agent-execution-summary"
if TYPE_CHECKING:
from backend.executor import DatabaseManagerAsyncClient
@@ -100,7 +103,9 @@ async def generate_activity_status_for_execution(
AI-generated activity status string, or None if feature is disabled
"""
# Check LaunchDarkly feature flag for AI activity status generation with full context support
if not await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id):
if not await is_feature_enabled(
AI_ACTIVITY_STATUS_FLAG_KEY, user_id, default=False
):
logger.debug("AI activity status generation is disabled via LaunchDarkly")
return None

View File

@@ -20,7 +20,6 @@ from backend.data.execution import (
upsert_execution_input,
upsert_execution_output,
)
from backend.data.generate_data import get_user_execution_summary_data
from backend.data.graph import (
get_connected_output_nodes,
get_graph,
@@ -36,6 +35,7 @@ from backend.data.notifications import (
)
from backend.data.user import (
get_active_user_ids_in_timerange,
get_user_by_id,
get_user_email_by_id,
get_user_email_verification,
get_user_integrations,
@@ -130,6 +130,7 @@ class DatabaseManager(AppService):
# User Comms - async
get_active_user_ids_in_timerange = _(get_active_user_ids_in_timerange)
get_user_by_id = _(get_user_by_id)
get_user_email_by_id = _(get_user_email_by_id)
get_user_email_verification = _(get_user_email_verification)
get_user_notification_preference = _(get_user_notification_preference)
@@ -145,9 +146,6 @@ class DatabaseManager(AppService):
get_user_notification_oldest_message_in_batch
)
# Summary data - async
get_user_execution_summary_data = _(get_user_execution_summary_data)
class DatabaseManagerClient(AppServiceClient):
d = DatabaseManager
@@ -173,9 +171,6 @@ class DatabaseManagerClient(AppServiceClient):
spend_credits = _(d.spend_credits)
get_credits = _(d.get_credits)
# Summary data - async
get_user_execution_summary_data = _(d.get_user_execution_summary_data)
# Block error monitoring
get_block_error_stats = _(d.get_block_error_stats)
@@ -208,6 +203,7 @@ class DatabaseManagerAsyncClient(AppServiceClient):
# User Comms
get_active_user_ids_in_timerange = d.get_active_user_ids_in_timerange
get_user_by_id = d.get_user_by_id
get_user_email_by_id = d.get_user_email_by_id
get_user_email_verification = d.get_user_email_verification
get_user_notification_preference = d.get_user_notification_preference
@@ -222,6 +218,3 @@ class DatabaseManagerAsyncClient(AppServiceClient):
get_user_notification_oldest_message_in_batch = (
d.get_user_notification_oldest_message_in_batch
)
# Summary data
get_user_execution_summary_data = d.get_user_execution_summary_data

View File

@@ -27,7 +27,7 @@ from backend.executor.activity_status_generator import (
)
from backend.executor.utils import LogMetadata
from backend.notifications.notifications import queue_notification
from backend.util.exceptions import InsufficientBalanceError, ModerationError
from backend.util.exceptions import InsufficientBalanceError
if TYPE_CHECKING:
from backend.executor import DatabaseManagerClient, DatabaseManagerAsyncClient
@@ -67,7 +67,6 @@ from backend.executor.utils import (
validate_exec,
)
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.server.v2.AutoMod.manager import automod_manager
from backend.util import json
from backend.util.clients import (
get_async_execution_event_bus,
@@ -760,22 +759,6 @@ class ExecutionProcessor:
amount=1,
)
# Input moderation
try:
if moderation_error := asyncio.run_coroutine_threadsafe(
automod_manager.moderate_graph_execution_inputs(
db_client=get_db_async_client(),
graph_exec=graph_exec,
),
self.node_evaluation_loop,
).result(timeout=30.0):
raise moderation_error
except asyncio.TimeoutError:
log_metadata.warning(
f"Input moderation timed out for graph execution {graph_exec.graph_exec_id}, bypassing moderation and continuing execution"
)
# Continue execution without moderation
# ------------------------------------------------------------
# Prepopulate queue ---------------------------------------
# ------------------------------------------------------------
@@ -914,25 +897,6 @@ class ExecutionProcessor:
time.sleep(0.1)
# loop done --------------------------------------------------
# Output moderation
try:
if moderation_error := asyncio.run_coroutine_threadsafe(
automod_manager.moderate_graph_execution_outputs(
db_client=get_db_async_client(),
graph_exec_id=graph_exec.graph_exec_id,
user_id=graph_exec.user_id,
graph_id=graph_exec.graph_id,
),
self.node_evaluation_loop,
).result(timeout=30.0):
raise moderation_error
except asyncio.TimeoutError:
log_metadata.warning(
f"Output moderation timed out for graph execution {graph_exec.graph_exec_id}, bypassing moderation and continuing execution"
)
# Continue execution without moderation
# Determine final execution status based on whether there was an error or termination
if cancel.is_set():
execution_status = ExecutionStatus.TERMINATED
@@ -953,12 +917,11 @@ class ExecutionProcessor:
else Exception(f"{e.__class__.__name__}: {e}")
)
known_errors = (InsufficientBalanceError, ModerationError)
known_errors = (InsufficientBalanceError,)
if isinstance(error, known_errors):
execution_stats.error = str(error)
return ExecutionStatus.FAILED
execution_status = ExecutionStatus.FAILED
log_metadata.exception(
f"Failed graph execution {graph_exec.graph_exec_id}: {error}"
)
@@ -1208,9 +1171,6 @@ class ExecutionManager(AppProcess):
)
return
# Check if channel is closed and force reconnection if needed
if not self.cancel_client.is_ready:
self.cancel_client.disconnect()
self.cancel_client.connect()
cancel_channel = self.cancel_client.get_channel()
cancel_channel.basic_consume(
@@ -1240,9 +1200,6 @@ class ExecutionManager(AppProcess):
)
return
# Check if channel is closed and force reconnection if needed
if not self.run_client.is_ready:
self.run_client.disconnect()
self.run_client.connect()
run_channel = self.run_client.get_channel()
run_channel.basic_qos(prefetch_count=self.pool_size)

View File

@@ -269,9 +269,7 @@ class Scheduler(AppService):
self.scheduler = BackgroundScheduler(
executors={
"default": ThreadPoolExecutor(
max_workers=self.db_pool_size()
), # Match DB pool size to prevent resource contention
"default": ThreadPoolExecutor(max_workers=10), # Max 10 concurrent jobs
},
job_defaults={
"coalesce": True, # Skip redundant missed jobs - just run the latest
@@ -307,10 +305,9 @@ class Scheduler(AppService):
if self.register_system_tasks:
# Notification PROCESS WEEKLY SUMMARY
# Runs every Monday at 9 AM UTC
self.scheduler.add_job(
process_weekly_summary,
CronTrigger.from_crontab("0 9 * * 1"),
CronTrigger.from_crontab("0 * * * *"),
id="process_weekly_summary",
kwargs={},
replace_existing=True,

View File

@@ -548,7 +548,7 @@ async def validate_graph_with_credentials(
return node_input_errors
async def _construct_starting_node_execution_input(
async def _construct_node_execution_input(
graph: GraphModel,
user_id: str,
graph_inputs: BlockInput,
@@ -622,7 +622,7 @@ async def validate_and_construct_node_execution_input(
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], dict[str, dict[str, JsonValue]]]:
) -> tuple[GraphModel, list[tuple[str, BlockInput]]]:
"""
Public wrapper that handles graph fetching, credential mapping, and validation+construction.
This centralizes the logic used by both scheduler validation and actual execution.
@@ -666,14 +666,14 @@ async def validate_and_construct_node_execution_input(
nodes_input_masks or {},
)
starting_nodes_input = await _construct_starting_node_execution_input(
starting_nodes_input = await _construct_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=graph_inputs,
nodes_input_masks=nodes_input_masks,
)
return graph, starting_nodes_input, nodes_input_masks
return graph, starting_nodes_input
def _merge_nodes_input_masks(
@@ -856,15 +856,13 @@ async def add_graph_execution(
else:
edb = get_database_manager_async_client()
graph, starting_nodes_input, nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
graph_inputs=inputs or {},
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
)
graph, starting_nodes_input = await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
graph_inputs=inputs or {},
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
)
graph_exec = None

View File

@@ -182,15 +182,6 @@ zerobounce_credentials = APIKeyCredentials(
expires_at=None,
)
enrichlayer_credentials = APIKeyCredentials(
id="d9fce73a-6c1d-4e8b-ba2e-12a456789def",
provider="enrichlayer",
api_key=SecretStr(settings.secrets.enrichlayer_api_key),
title="Use Credits for Enrichlayer",
expires_at=None,
)
llama_api_credentials = APIKeyCredentials(
id="d44045af-1c33-4833-9e19-752313214de2",
provider="llama_api",
@@ -199,14 +190,6 @@ llama_api_credentials = APIKeyCredentials(
expires_at=None,
)
v0_credentials = APIKeyCredentials(
id="c4e6d1a0-3b5f-4789-a8e2-9b123456789f",
provider="v0",
api_key=SecretStr(settings.secrets.v0_api_key),
title="Use Credits for v0 by Vercel",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -220,7 +203,6 @@ DEFAULT_CREDENTIALS = [
jina_credentials,
unreal_credentials,
open_router_credentials,
enrichlayer_credentials,
fal_credentials,
exa_credentials,
e2b_credentials,
@@ -231,8 +213,6 @@ DEFAULT_CREDENTIALS = [
smartlead_credentials,
zerobounce_credentials,
google_maps_credentials,
llama_api_credentials,
v0_credentials,
]
@@ -299,8 +279,6 @@ class IntegrationCredentialsStore:
all_credentials.append(unreal_credentials)
if settings.secrets.open_router_api_key:
all_credentials.append(open_router_credentials)
if settings.secrets.enrichlayer_api_key:
all_credentials.append(enrichlayer_credentials)
if settings.secrets.fal_api_key:
all_credentials.append(fal_credentials)
if settings.secrets.exa_api_key:

View File

@@ -25,7 +25,6 @@ class ProviderName(str, Enum):
GROQ = "groq"
HTTP = "http"
HUBSPOT = "hubspot"
ENRICHLAYER = "enrichlayer"
IDEOGRAM = "ideogram"
JINA = "jina"
LLAMA_API = "llama_api"
@@ -48,7 +47,6 @@ class ProviderName(str, Enum):
TWITTER = "twitter"
TODOIST = "todoist"
UNREAL_SPEECH = "unreal_speech"
V0 = "v0"
ZEROBOUNCE = "zerobounce"
@classmethod

View File

@@ -223,14 +223,10 @@ class NotificationManager(AppService):
processed_count = 0
current_time = datetime.now(tz=timezone.utc)
start_time = current_time - timedelta(days=7)
logger.info(
f"Querying for active users between {start_time} and {current_time}"
)
users = await get_database_manager_async_client().get_active_user_ids_in_timerange(
end_time=current_time.isoformat(),
start_time=start_time.isoformat(),
)
logger.info(f"Found {len(users)} active users in the last 7 days")
for user in users:
await self._queue_scheduled_notification(
SummaryParamsEventModel(
@@ -388,13 +384,10 @@ class NotificationManager(AppService):
async def _queue_scheduled_notification(self, event: SummaryParamsEventModel):
"""Queue a scheduled notification - exposed method for other services to call"""
try:
logger.info(
f"Queueing scheduled notification type={event.type} user_id={event.user_id}"
)
logger.debug(f"Received Request to queue scheduled notification {event=}")
exchange = "notifications"
routing_key = get_routing_key(event.type)
logger.info(f"Using routing key: {routing_key}")
# Publish to RabbitMQ
await self.rabbit.publish_message(
@@ -402,7 +395,6 @@ class NotificationManager(AppService):
message=event.model_dump_json(),
exchange=next(ex for ex in EXCHANGES if ex.name == exchange),
)
logger.info(f"Successfully queued notification for user {event.user_id}")
except Exception as e:
logger.exception(f"Error queueing notification: {e}")
@@ -424,99 +416,85 @@ class NotificationManager(AppService):
# only if both are true, should we email this person
return validated_email and preference
async def _gather_summary_data(
def _gather_summary_data(
self, user_id: str, event_type: NotificationType, params: BaseSummaryParams
) -> BaseSummaryData:
"""Gathers the data to build a summary notification"""
logger.info(
f"Gathering summary data for {user_id} and {event_type} with {params=}"
f"Gathering summary data for {user_id} and {event_type} wiht {params=}"
)
try:
# Get summary data from the database
summary_data = await get_database_manager_async_client().get_user_execution_summary_data(
user_id=user_id,
start_time=params.start_date,
end_time=params.end_date,
# total_credits_used = self.run_and_wait(
# get_total_credits_used(user_id, start_time, end_time)
# )
# total_executions = self.run_and_wait(
# get_total_executions(user_id, start_time, end_time)
# )
# most_used_agent = self.run_and_wait(
# get_most_used_agent(user_id, start_time, end_time)
# )
# execution_times = self.run_and_wait(
# get_execution_time(user_id, start_time, end_time)
# )
# runs = self.run_and_wait(
# get_runs(user_id, start_time, end_time)
# )
total_credits_used = 3.0
total_executions = 2
most_used_agent = {"name": "Some"}
execution_times = [1, 2, 3]
runs = [{"status": "COMPLETED"}, {"status": "FAILED"}]
successful_runs = len([run for run in runs if run["status"] == "COMPLETED"])
failed_runs = len([run for run in runs if run["status"] != "COMPLETED"])
average_execution_time = (
sum(execution_times) / len(execution_times) if execution_times else 0
)
# cost_breakdown = self.run_and_wait(
# get_cost_breakdown(user_id, start_time, end_time)
# )
cost_breakdown = {
"agent1": 1.0,
"agent2": 2.0,
}
if event_type == NotificationType.DAILY_SUMMARY and isinstance(
params, DailySummaryParams
):
return DailySummaryData(
total_credits_used=total_credits_used,
total_executions=total_executions,
most_used_agent=most_used_agent["name"],
total_execution_time=sum(execution_times),
successful_runs=successful_runs,
failed_runs=failed_runs,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
date=params.date,
)
# Extract data from summary
total_credits_used = summary_data.total_credits_used
total_executions = summary_data.total_executions
most_used_agent = summary_data.most_used_agent
successful_runs = summary_data.successful_runs
failed_runs = summary_data.failed_runs
total_execution_time = summary_data.total_execution_time
average_execution_time = summary_data.average_execution_time
cost_breakdown = summary_data.cost_breakdown
if event_type == NotificationType.DAILY_SUMMARY and isinstance(
params, DailySummaryParams
):
return DailySummaryData(
total_credits_used=total_credits_used,
total_executions=total_executions,
most_used_agent=most_used_agent,
total_execution_time=total_execution_time,
successful_runs=successful_runs,
failed_runs=failed_runs,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
date=params.date,
)
elif event_type == NotificationType.WEEKLY_SUMMARY and isinstance(
params, WeeklySummaryParams
):
return WeeklySummaryData(
total_credits_used=total_credits_used,
total_executions=total_executions,
most_used_agent=most_used_agent,
total_execution_time=total_execution_time,
successful_runs=successful_runs,
failed_runs=failed_runs,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
start_date=params.start_date,
end_date=params.end_date,
)
else:
raise ValueError("Invalid event type or params")
except Exception as e:
logger.error(f"Failed to gather summary data: {e}")
# Return sensible defaults in case of error
if event_type == NotificationType.DAILY_SUMMARY and isinstance(
params, DailySummaryParams
):
return DailySummaryData(
total_credits_used=0.0,
total_executions=0,
most_used_agent="No data available",
total_execution_time=0.0,
successful_runs=0,
failed_runs=0,
average_execution_time=0.0,
cost_breakdown={},
date=params.date,
)
elif event_type == NotificationType.WEEKLY_SUMMARY and isinstance(
params, WeeklySummaryParams
):
return WeeklySummaryData(
total_credits_used=0.0,
total_executions=0,
most_used_agent="No data available",
total_execution_time=0.0,
successful_runs=0,
failed_runs=0,
average_execution_time=0.0,
cost_breakdown={},
start_date=params.start_date,
end_date=params.end_date,
)
else:
raise ValueError("Invalid event type or params") from e
elif event_type == NotificationType.WEEKLY_SUMMARY and isinstance(
params, WeeklySummaryParams
):
return WeeklySummaryData(
total_credits_used=total_credits_used,
total_executions=total_executions,
most_used_agent=most_used_agent["name"],
total_execution_time=sum(execution_times),
successful_runs=successful_runs,
failed_runs=failed_runs,
average_execution_time=average_execution_time,
cost_breakdown=cost_breakdown,
start_date=params.start_date,
end_date=params.end_date,
)
else:
raise ValueError("Invalid event type or params")
async def _should_batch(
self, user_id: str, event_type: NotificationType, event: NotificationEventModel
@@ -786,7 +764,7 @@ class NotificationManager(AppService):
)
return True
summary_data = await self._gather_summary_data(
summary_data = self._gather_summary_data(
event.user_id, event.type, model.data
)

View File

@@ -5,64 +5,23 @@ data.start_date: the start date of the summary
data.end_date: the end date of the summary
data.total_credits_used: the total credits used during the summary
data.total_executions: the total number of executions during the summary
data.most_used_agent: the most used agent's name during the summary
data.most_used_agent: the most used agent's name during the summary
data.total_execution_time: the total execution time during the summary
data.successful_runs: the total number of successful runs during the summary
data.failed_runs: the total number of failed runs during the summary
data.average_execution_time: the average execution time during the summary
data.cost_breakdown: the cost breakdown during the summary (dict mapping agent names to credit amounts)
data.cost_breakdown: the cost breakdown during the summary
#}
<h1 style="color: #5D23BB; font-size: 32px; font-weight: 600; margin-bottom: 25px; margin-top: 0;">
Weekly Summary
</h1>
<h1>Weekly Summary</h1>
<h2 style="color: #070629; font-size: 24px; font-weight: 500; margin-bottom: 20px;">
Your Agent Activity: {{ data.start_date.strftime('%B %-d') }} - {{ data.end_date.strftime('%B %-d') }}
</h2>
<div style="background-color: #ffffff; border-radius: 8px; padding: 20px; margin-bottom: 25px;">
<ul style="list-style-type: disc; padding-left: 20px; margin: 0;">
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Total Executions:</strong> {{ data.total_executions }}
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Total Credits Used:</strong> {{ data.total_credits_used|format("%.2f") }}
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Total Execution Time:</strong> {{ data.total_execution_time|format("%.1f") }} seconds
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Successful Runs:</strong> {{ data.successful_runs }}
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Failed Runs:</strong> {{ data.failed_runs }}
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Average Execution Time:</strong> {{ data.average_execution_time|format("%.1f") }} seconds
</li>
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Most Used Agent:</strong> {{ data.most_used_agent }}
</li>
{% if data.cost_breakdown %}
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 8px;">
<strong>Cost Breakdown:</strong>
<ul style="list-style-type: disc; padding-left: 40px; margin-top: 8px;">
{% for agent_name, credits in data.cost_breakdown.items() %}
<li style="font-size: 16px; line-height: 1.8; margin-bottom: 4px;">
{{ agent_name }}: {{ credits|format("%.2f") }} credits
</li>
{% endfor %}
</ul>
</li>
{% endif %}
</ul>
</div>
<p style="font-size: 16px; line-height: 165%; margin-top: 20px; margin-bottom: 10px;">
Thank you for being a part of the AutoGPT community! 🎉
</p>
<p style="font-size: 16px; line-height: 165%; margin-bottom: 0;">
Join the conversation on <a href="https://discord.gg/autogpt" style="color: #4285F4; text-decoration: underline;">Discord here</a>.
</p>
<p>Start Date: {{ data.start_date }}</p>
<p>End Date: {{ data.end_date }}</p>
<p>Total Credits Used: {{ data.total_credits_used }}</p>
<p>Total Executions: {{ data.total_executions }}</p>
<p>Most Used Agent: {{ data.most_used_agent }}</p>
<p>Total Execution Time: {{ data.total_execution_time }}</p>
<p>Successful Runs: {{ data.successful_runs }}</p>
<p>Failed Runs: {{ data.failed_runs }}</p>
<p>Average Execution Time: {{ data.average_execution_time }}</p>
<p>Cost Breakdown: {{ data.cost_breakdown }}</p>
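
For reference, a minimal rendering sketch (not part of the diff) showing how the simplified template above might be filled in with Jinja2. The template filename, loader path, and the stand-in dataclass are assumptions; the field names mirror the WeeklySummaryData fields used elsewhere in this change.

# Hypothetical rendering sketch; assumes the template above is saved as
# "weekly_summary.html.j2" under a local "templates" directory.
from dataclasses import dataclass
from datetime import date
from jinja2 import Environment, FileSystemLoader

@dataclass
class WeeklySummaryData:  # stand-in for the real model
    start_date: date
    end_date: date
    total_credits_used: float
    total_executions: int
    most_used_agent: str
    total_execution_time: float
    successful_runs: int
    failed_runs: int
    average_execution_time: float
    cost_breakdown: dict[str, float]

env = Environment(loader=FileSystemLoader("templates"), autoescape=True)
template = env.get_template("weekly_summary.html.j2")
html = template.render(
    data=WeeklySummaryData(
        start_date=date(2025, 8, 4),
        end_date=date(2025, 8, 10),
        total_credits_used=3.0,
        total_executions=2,
        most_used_agent="Some",
        total_execution_time=6.0,
        successful_runs=1,
        failed_runs=1,
        average_execution_time=2.0,
        cost_breakdown={"agent1": 1.0, "agent2": 2.0},
    )
)
print(html)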

View File

@@ -0,0 +1,11 @@
from supabase import Client, create_client
from backend.util.settings import Settings
settings = Settings()
def get_supabase() -> Client:
return create_client(
settings.secrets.supabase_url, settings.secrets.supabase_service_role_key
)
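
A small usage sketch (not part of the diff) for the helper above. It only relies on the supabase-py admin API that the LaunchDarkly context builder also uses in this change; the function name is a placeholder.

# Hypothetical caller of get_supabase() defined above; assumes the service-role
# key configured in Settings grants access to the auth admin API.
def lookup_user_email(user_id: str) -> str | None:
    client = get_supabase()
    response = client.auth.admin.get_user_by_id(user_id)
    return response.user.email if response and response.user else None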

View File

@@ -9,6 +9,11 @@ import fastapi.responses
import pydantic
import starlette.middleware.cors
import uvicorn
from autogpt_libs.feature_flag.client import (
initialize_launchdarkly,
shutdown_launchdarkly,
)
from autogpt_libs.logging.utils import generate_uvicorn_config
from fastapi.exceptions import RequestValidationError
from fastapi.routing import APIRoute
@@ -36,7 +41,6 @@ from backend.server.external.api import external_app
from backend.server.middleware.security import SecurityHeadersMiddleware
from backend.util import json
from backend.util.cloud_storage import shutdown_cloud_storage_handler
from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly
from backend.util.service import UnhealthyServiceError
settings = backend.util.settings.Settings()
@@ -246,7 +250,7 @@ class AgentServer(backend.util.service.AppProcess):
server_app,
host=backend.util.settings.Config().agent_api_host,
port=backend.util.settings.Config().agent_api_port,
log_config=None,
log_config=generate_uvicorn_config(),
)
def cleanup(self):

View File

@@ -8,6 +8,7 @@ from typing import Annotated, Any, Sequence
import pydantic
import stripe
from autogpt_libs.auth.middleware import auth_middleware
from autogpt_libs.feature_flag.client import feature_flag
from fastapi import (
APIRouter,
Body,
@@ -84,7 +85,6 @@ from backend.server.utils import get_user_id
from backend.util.clients import get_scheduler_client
from backend.util.cloud_storage import get_cloud_storage_handler
from backend.util.exceptions import GraphValidationError, NotFoundError
from backend.util.feature_flag import feature_flag
from backend.util.settings import Settings
from backend.util.virus_scanner import scan_content_safe
@@ -458,16 +458,12 @@ async def stripe_webhook(request: Request):
event = stripe.Webhook.construct_event(
payload, sig_header, settings.secrets.stripe_webhook_secret
)
except ValueError as e:
except ValueError:
# Invalid payload
raise HTTPException(
status_code=400, detail=f"Invalid payload: {str(e) or type(e).__name__}"
)
except stripe.SignatureVerificationError as e:
raise HTTPException(status_code=400)
except stripe.SignatureVerificationError:
# Invalid signature
raise HTTPException(
status_code=400, detail=f"Invalid signature: {str(e) or type(e).__name__}"
)
raise HTTPException(status_code=400)
if (
event["type"] == "checkout.session.completed"
@@ -680,15 +676,7 @@ async def update_graph(
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_version, user_id=user_id)
# Fetch new graph version *with sub-graphs* (needed for credentials input schema)
new_graph_version_with_subgraphs = await graph_db.get_graph(
graph_id,
new_graph_version.version,
user_id=user_id,
include_subgraphs=True,
)
assert new_graph_version_with_subgraphs # make type checker happy
return new_graph_version_with_subgraphs
return new_graph_version
@v1_router.put(

View File

@@ -1 +0,0 @@
# AutoMod integration for content moderation

View File

@@ -1,353 +0,0 @@
import asyncio
import json
import logging
from typing import TYPE_CHECKING, Any, Literal
if TYPE_CHECKING:
from backend.executor import DatabaseManagerAsyncClient
from pydantic import ValidationError
from backend.data.execution import ExecutionStatus
from backend.server.v2.AutoMod.models import (
AutoModRequest,
AutoModResponse,
ModerationConfig,
)
from backend.util.exceptions import ModerationError
from backend.util.feature_flag import Flag, is_feature_enabled
from backend.util.request import Requests
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
class AutoModManager:
def __init__(self):
self.config = self._load_config()
def _load_config(self) -> ModerationConfig:
"""Load AutoMod configuration from settings"""
settings = Settings()
return ModerationConfig(
enabled=settings.config.automod_enabled,
api_url=settings.config.automod_api_url,
api_key=settings.secrets.automod_api_key,
timeout=settings.config.automod_timeout,
retry_attempts=settings.config.automod_retry_attempts,
retry_delay=settings.config.automod_retry_delay,
fail_open=settings.config.automod_fail_open,
)
async def moderate_graph_execution_inputs(
self, db_client: "DatabaseManagerAsyncClient", graph_exec, timeout: int = 10
) -> Exception | None:
"""
Complete input moderation flow for graph execution
Returns: error_if_failed (None means success)
"""
if not self.config.enabled:
return None
# Check if AutoMod feature is enabled for this user
if not await is_feature_enabled(Flag.AUTOMOD, graph_exec.user_id):
logger.debug(f"AutoMod feature not enabled for user {graph_exec.user_id}")
return None
# Get graph model and collect all inputs
graph_model = await db_client.get_graph(
graph_exec.graph_id,
user_id=graph_exec.user_id,
version=graph_exec.graph_version,
)
if not graph_model or not graph_model.nodes:
return None
all_inputs = []
for node in graph_model.nodes:
if node.input_default:
all_inputs.extend(str(v) for v in node.input_default.values() if v)
if (masks := graph_exec.nodes_input_masks) and (mask := masks.get(node.id)):
all_inputs.extend(str(v) for v in mask.values() if v)
if not all_inputs:
return None
# Combine all content and moderate directly
content = " ".join(all_inputs)
# Run moderation
logger.warning(
f"Moderating inputs for graph execution {graph_exec.graph_exec_id}"
)
try:
moderation_passed = await self._moderate_content(
content,
{
"user_id": graph_exec.user_id,
"graph_id": graph_exec.graph_id,
"graph_exec_id": graph_exec.graph_exec_id,
"moderation_type": "execution_input",
},
)
if not moderation_passed:
logger.warning(
f"Moderation failed for graph execution {graph_exec.graph_exec_id}"
)
# Update node statuses for frontend display before raising error
await self._update_failed_nodes_for_moderation(
db_client, graph_exec.graph_exec_id, "input"
)
return ModerationError(
message="Execution failed due to input content moderation",
user_id=graph_exec.user_id,
graph_exec_id=graph_exec.graph_exec_id,
moderation_type="input",
)
return None
except asyncio.TimeoutError:
logger.warning(
f"Input moderation timed out for graph execution {graph_exec.graph_exec_id}, bypassing moderation"
)
return None # Bypass moderation on timeout
except Exception as e:
logger.warning(f"Input moderation execution failed: {e}")
return ModerationError(
message="Execution failed due to input content moderation error",
user_id=graph_exec.user_id,
graph_exec_id=graph_exec.graph_exec_id,
moderation_type="input",
)
async def moderate_graph_execution_outputs(
self,
db_client: "DatabaseManagerAsyncClient",
graph_exec_id: str,
user_id: str,
graph_id: str,
timeout: int = 10,
) -> Exception | None:
"""
Complete output moderation flow for graph execution
Returns: error_if_failed (None means success)
"""
if not self.config.enabled:
return None
# Check if AutoMod feature is enabled for this user
if not await is_feature_enabled(Flag.AUTOMOD, user_id):
logger.debug(f"AutoMod feature not enabled for user {user_id}")
return None
# Get completed executions and collect outputs
completed_executions = await db_client.get_node_executions(
graph_exec_id, statuses=[ExecutionStatus.COMPLETED], include_exec_data=True
)
if not completed_executions:
return None
all_outputs = []
for exec_entry in completed_executions:
if exec_entry.output_data:
all_outputs.extend(str(v) for v in exec_entry.output_data.values() if v)
if not all_outputs:
return None
# Combine all content and moderate directly
content = " ".join(all_outputs)
# Run moderation
logger.warning(f"Moderating outputs for graph execution {graph_exec_id}")
try:
moderation_passed = await self._moderate_content(
content,
{
"user_id": user_id,
"graph_id": graph_id,
"graph_exec_id": graph_exec_id,
"moderation_type": "execution_output",
},
)
if not moderation_passed:
logger.warning(f"Moderation failed for graph execution {graph_exec_id}")
# Update node statuses for frontend display before raising error
await self._update_failed_nodes_for_moderation(
db_client, graph_exec_id, "output"
)
return ModerationError(
message="Execution failed due to output content moderation",
user_id=user_id,
graph_exec_id=graph_exec_id,
moderation_type="output",
)
return None
except asyncio.TimeoutError:
logger.warning(
f"Output moderation timed out for graph execution {graph_exec_id}, bypassing moderation"
)
return None # Bypass moderation on timeout
except Exception as e:
logger.warning(f"Output moderation execution failed: {e}")
return ModerationError(
message="Execution failed due to output content moderation error",
user_id=user_id,
graph_exec_id=graph_exec_id,
moderation_type="output",
)
async def _update_failed_nodes_for_moderation(
self,
db_client: "DatabaseManagerAsyncClient",
graph_exec_id: str,
moderation_type: Literal["input", "output"],
):
"""Update node execution statuses for frontend display when moderation fails"""
# Import here to avoid circular imports
from backend.executor.manager import send_async_execution_update
if moderation_type == "input":
# For input moderation, mark queued/running/incomplete nodes as failed
target_statuses = [
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.INCOMPLETE,
]
else:
# For output moderation, mark completed nodes as failed
target_statuses = [ExecutionStatus.COMPLETED]
# Get the executions that need to be updated
executions_to_update = await db_client.get_node_executions(
graph_exec_id, statuses=target_statuses, include_exec_data=True
)
if not executions_to_update:
return
# Prepare database update tasks
exec_updates = []
for exec_entry in executions_to_update:
# Collect all input and output names to clear
cleared_inputs = {}
cleared_outputs = {}
if exec_entry.input_data:
for name in exec_entry.input_data.keys():
cleared_inputs[name] = ["Failed due to content moderation"]
if exec_entry.output_data:
for name in exec_entry.output_data.keys():
cleared_outputs[name] = ["Failed due to content moderation"]
# Add update task to list
exec_updates.append(
db_client.update_node_execution_status(
exec_entry.node_exec_id,
status=ExecutionStatus.FAILED,
stats={
"error": "Failed due to content moderation",
"cleared_inputs": cleared_inputs,
"cleared_outputs": cleared_outputs,
},
)
)
# Execute all database updates in parallel
updated_execs = await asyncio.gather(*exec_updates)
# Send all websocket updates in parallel
await asyncio.gather(
*[
send_async_execution_update(updated_exec)
for updated_exec in updated_execs
]
)
async def _moderate_content(self, content: str, metadata: dict[str, Any]) -> bool:
"""Moderate content using AutoMod API
Returns:
True: Content approved or timeout occurred
False: Content rejected by moderation
Raises:
asyncio.TimeoutError: When moderation times out (should be bypassed)
"""
try:
request_data = AutoModRequest(
type="text",
content=content,
metadata=metadata,
)
response = await self._make_request(request_data)
if response.success and response.status == "approved":
logger.debug(
f"Content approved for {metadata.get('graph_exec_id', 'unknown')}"
)
return True
else:
reasons = [r.reason for r in response.moderation_results if r.reason]
error_msg = f"Content rejected by AutoMod: {'; '.join(reasons)}"
logger.warning(f"Content rejected: {error_msg}")
return False
except asyncio.TimeoutError:
# Re-raise timeout to be handled by calling methods
logger.warning(
f"AutoMod API timeout for {metadata.get('graph_exec_id', 'unknown')}"
)
raise
except Exception as e:
logger.error(f"AutoMod moderation error: {e}")
return self.config.fail_open
async def _make_request(self, request_data: AutoModRequest) -> AutoModResponse:
"""Make HTTP request to AutoMod API using the standard request utility"""
url = f"{self.config.api_url}/moderate"
headers = {
"Content-Type": "application/json",
"X-API-Key": self.config.api_key.strip(),
}
# Create requests instance with timeout and retry configuration
requests = Requests(
extra_headers=headers,
retry_max_wait=float(self.config.timeout),
)
try:
response = await requests.post(
url, json=request_data.model_dump(), timeout=self.config.timeout
)
response_data = response.json()
return AutoModResponse.model_validate(response_data)
except asyncio.TimeoutError:
# Re-raise timeout error to be caught by _moderate_content
raise
except (json.JSONDecodeError, ValidationError) as e:
raise Exception(f"Invalid response from AutoMod API: {e}")
except Exception as e:
# Check if this is an aiohttp timeout that we should convert
if "timeout" in str(e).lower():
raise asyncio.TimeoutError(f"AutoMod API request timed out: {e}")
raise Exception(f"AutoMod API request failed: {e}")
# Global instance
automod_manager = AutoModManager()
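
A hedged sketch (not part of the diff) of how an executor might consume the manager above. The db_client and graph_exec objects are stand-ins; the only behaviour relied on is the Exception-or-None contract documented in the docstrings.

# Hypothetical call site, assuming a DatabaseManagerAsyncClient instance and a
# graph execution entry are already in scope.
async def run_input_moderation(db_client, graph_exec) -> None:
    error = await automod_manager.moderate_graph_execution_inputs(
        db_client, graph_exec, timeout=10
    )
    if error is not None:
        # The manager returns the error instead of raising, so the caller
        # decides how to fail the execution.
        raise error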

View File

@@ -1,57 +0,0 @@
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
class AutoModRequest(BaseModel):
"""Request model for AutoMod API"""
type: str = Field(..., description="Content type - 'text', 'image', 'video'")
content: str = Field(..., description="The content to moderate")
metadata: Optional[Dict[str, Any]] = Field(
default=None, description="Additional context about the content"
)
class ModerationResult(BaseModel):
"""Individual moderation result"""
decision: str = Field(
..., description="Moderation decision: 'approved', 'rejected', 'flagged'"
)
reason: Optional[str] = Field(default=None, description="Reason for the decision")
class AutoModResponse(BaseModel):
"""Response model for AutoMod API"""
success: bool = Field(..., description="Whether the request was successful")
status: str = Field(
..., description="Overall status: 'approved', 'rejected', 'flagged', 'pending'"
)
moderation_results: List[ModerationResult] = Field(
default_factory=list, description="List of moderation results"
)
class ModerationConfig(BaseModel):
"""Configuration for AutoMod integration"""
enabled: bool = Field(default=True, description="Whether moderation is enabled")
api_url: str = Field(default="", description="AutoMod API base URL")
api_key: str = Field(..., description="AutoMod API key")
timeout: int = Field(default=30, description="Request timeout in seconds")
retry_attempts: int = Field(default=3, description="Number of retry attempts")
retry_delay: float = Field(
default=1.0, description="Delay between retries in seconds"
)
fail_open: bool = Field(
default=False,
description="If True, allow execution to continue if moderation fails",
)
moderate_inputs: bool = Field(
default=True, description="Whether to moderate block inputs"
)
moderate_outputs: bool = Field(
default=True, description="Whether to moderate block outputs"
)

View File

@@ -241,11 +241,7 @@ async def get_library_agent_by_graph_id(
)
if not agent:
return None
assert agent.AgentGraph # make type checker happy
# Include sub-graphs so we can make a full credentials input schema
sub_graphs = await graph_db.get_sub_graphs(agent.AgentGraph)
return library_model.LibraryAgent.from_db(agent, sub_graphs=sub_graphs)
return library_model.LibraryAgent.from_db(agent)
except prisma.errors.PrismaError as e:
logger.error(f"Database error fetching library agent by graph ID: {e}")
raise store_exceptions.DatabaseError("Failed to fetch library agent") from e

View File

@@ -6,6 +6,7 @@ from typing import Protocol
import pydantic
import uvicorn
from autogpt_libs.auth import parse_jwt_token
from autogpt_libs.logging.utils import generate_uvicorn_config
from fastapi import Depends, FastAPI, WebSocket, WebSocketDisconnect
from starlette.middleware.cors import CORSMiddleware
@@ -308,7 +309,7 @@ class WebsocketServer(AppProcess):
server_app,
host=Config().websocket_server_host,
port=Config().websocket_server_port,
log_config=None,
log_config=generate_uvicorn_config(),
)
def cleanup(self):

View File

@@ -2,18 +2,11 @@
Centralized service client helpers with thread caching.
"""
from functools import cache
from typing import TYPE_CHECKING
from autogpt_libs.utils.cache import async_cache, thread_cached
from backend.util.settings import Settings
settings = Settings()
from autogpt_libs.utils.cache import thread_cached
if TYPE_CHECKING:
from supabase import AClient, Client
from backend.data.execution import (
AsyncRedisExecutionEventBus,
RedisExecutionEventBus,
@@ -116,29 +109,6 @@ def get_integration_credentials_store() -> "IntegrationCredentialsStore":
return IntegrationCredentialsStore()
# ============ Supabase Clients ============ #
@cache
def get_supabase() -> "Client":
"""Get a process-cached synchronous Supabase client instance."""
from supabase import create_client
return create_client(
settings.secrets.supabase_url, settings.secrets.supabase_service_role_key
)
@async_cache
async def get_async_supabase() -> "AClient":
"""Get a process-cached asynchronous Supabase client instance."""
from supabase import create_async_client
return await create_async_client(
settings.secrets.supabase_url, settings.secrets.supabase_service_role_key
)
# ============ Notification Queue Helpers ============ #

View File

@@ -33,33 +33,6 @@ class InsufficientBalanceError(ValueError):
return self.message
class ModerationError(ValueError):
"""Content moderation failure during execution"""
user_id: str
message: str
graph_exec_id: str
moderation_type: str
def __init__(
self,
message: str,
user_id: str,
graph_exec_id: str,
moderation_type: str = "content",
):
super().__init__(message)
self.args = (message, user_id, graph_exec_id, moderation_type)
self.message = message
self.user_id = user_id
self.graph_exec_id = graph_exec_id
self.moderation_type = moderation_type
def __str__(self):
"""Used to display the error message in the frontend, because we str() the error when sending the execution update"""
return self.message
class GraphValidationError(ValueError):
"""Structured validation error for graph validation failures"""
@@ -71,10 +44,4 @@ class GraphValidationError(ValueError):
self.node_errors = node_errors or {}
def __str__(self):
return self.message + "".join(
[
f"\n {node_id}:"
+ "".join([f"\n {k}: {e}" for k, e in errors.items()])
for node_id, errors in self.node_errors.items()
]
)
return self.message
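
For context, a small illustration (not part of the diff) of what the removed multi-line __str__ produced versus the simplified message-only version kept by this change. The node_errors shape (node_id mapping to field/error pairs) follows the removed code; the values are made up.

# Hypothetical illustration of the two __str__ behaviours; not the real class.
message = "Graph validation failed"
node_errors = {"node-1": {"input_a": "missing value", "input_b": "wrong type"}}

detailed = message + "".join(
    f"\n  {node_id}:" + "".join(f"\n    {k}: {e}" for k, e in errors.items())
    for node_id, errors in node_errors.items()
)
simplified = message  # behaviour after this change: message only

print(detailed)
print(simplified)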

View File

@@ -1,257 +0,0 @@
import contextlib
import logging
from enum import Enum
from functools import wraps
from typing import Any, Awaitable, Callable, TypeVar
import ldclient
from autogpt_libs.utils.cache import async_ttl_cache
from fastapi import HTTPException
from ldclient import Context, LDClient
from ldclient.config import Config
from typing_extensions import ParamSpec
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
# Load settings at module level
settings = Settings()
P = ParamSpec("P")
T = TypeVar("T")
_is_initialized = False
class Flag(str, Enum):
"""
Centralized enum for all LaunchDarkly feature flags.
Add new flags here to ensure consistency across the codebase.
"""
AUTOMOD = "AutoMod"
AI_ACTIVITY_STATUS = "ai-agent-execution-summary"
BETA_BLOCKS = "beta-blocks"
AGENT_ACTIVITY = "agent-activity"
def get_client() -> LDClient:
"""Get the LaunchDarkly client singleton."""
if not _is_initialized:
initialize_launchdarkly()
return ldclient.get()
def initialize_launchdarkly() -> None:
sdk_key = settings.secrets.launch_darkly_sdk_key
logger.debug(
f"Initializing LaunchDarkly with SDK key: {'present' if sdk_key else 'missing'}"
)
if not sdk_key:
logger.warning("LaunchDarkly SDK key not configured")
return
config = Config(sdk_key)
ldclient.set_config(config)
if ldclient.get().is_initialized():
global _is_initialized
_is_initialized = True
logger.info("LaunchDarkly client initialized successfully")
else:
logger.error("LaunchDarkly client failed to initialize")
def shutdown_launchdarkly() -> None:
"""Shutdown the LaunchDarkly client."""
if ldclient.get().is_initialized():
ldclient.get().close()
logger.info("LaunchDarkly client closed successfully")
@async_ttl_cache(maxsize=1000, ttl_seconds=86400) # 1000 entries, 24 hours TTL
async def _fetch_user_context_data(user_id: str) -> Context:
"""
Fetch user context for LaunchDarkly from Supabase.
Args:
user_id: The user ID to fetch data for
Returns:
LaunchDarkly Context object
"""
builder = Context.builder(user_id).kind("user").anonymous(True)
try:
from backend.util.clients import get_supabase
# If we have user data, update context
response = get_supabase().auth.admin.get_user_by_id(user_id)
if response and response.user:
user = response.user
builder.anonymous(False)
if user.role:
builder.set("role", user.role)
# It's weird, I know, but it is what it is.
builder.set("custom", {"role": user.role})
if user.email:
builder.set("email", user.email)
builder.set("email_domain", user.email.split("@")[-1])
except Exception as e:
logger.warning(f"Failed to fetch user context for {user_id}: {e}")
return builder.build()
async def get_feature_flag_value(
flag_key: str,
user_id: str,
default: Any = None,
) -> Any:
"""
Get the raw value of a feature flag for a user.
This is the generic function that returns the actual flag value,
which could be a boolean, string, number, or JSON object.
Args:
flag_key: The LaunchDarkly feature flag key
user_id: The user ID to evaluate the flag for
default: Default value if LaunchDarkly is unavailable or flag evaluation fails
Returns:
The flag value from LaunchDarkly
"""
try:
client = get_client()
# Check if client is initialized
if not client.is_initialized():
logger.debug(
f"LaunchDarkly not initialized, using default={default} for {flag_key}"
)
return default
# Get user context from Supabase
context = await _fetch_user_context_data(user_id)
# Evaluate flag
result = client.variation(flag_key, context, default)
logger.debug(
f"Feature flag {flag_key} for user {user_id}: {result} (type: {type(result).__name__})"
)
return result
except Exception as e:
logger.warning(
f"LaunchDarkly flag evaluation failed for {flag_key}: {e}, using default={default}"
)
return default
async def is_feature_enabled(
flag_key: Flag,
user_id: str,
default: bool = False,
) -> bool:
"""
Check if a feature flag is enabled for a user.
Args:
flag_key: The Flag enum value
user_id: The user ID to evaluate the flag for
default: Default value if LaunchDarkly is unavailable or flag evaluation fails
Returns:
True if feature is enabled, False otherwise
"""
result = await get_feature_flag_value(flag_key.value, user_id, default)
# If the result is already a boolean, return it
if isinstance(result, bool):
return result
# Log a warning if the flag is not returning a boolean
logger.warning(
f"Feature flag {flag_key} returned non-boolean value: {result} (type: {type(result).__name__}). "
f"This flag should be configured as a boolean in LaunchDarkly. Using default={default}"
)
# Return the default if we get a non-boolean value
# This prevents objects from being incorrectly treated as True
return default
def feature_flag(
flag_key: str,
default: bool = False,
) -> Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]:
"""
Decorator for async feature flag protected endpoints.
Args:
flag_key: The LaunchDarkly feature flag key
default: Default value if flag evaluation fails
Returns:
Decorator that only works with async functions
"""
def decorator(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
@wraps(func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
# Use the internal function directly since we have a raw string flag_key
flag_value = await get_feature_flag_value(
flag_key, str(user_id), default
)
# Ensure we treat flag value as boolean
if isinstance(flag_value, bool):
is_enabled = flag_value
else:
# Log warning and use default for non-boolean values
logger.warning(
f"Feature flag {flag_key} returned non-boolean value: {flag_value} (type: {type(flag_value).__name__}). "
f"Using default={default}"
)
is_enabled = default
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
return await func(*args, **kwargs)
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
return async_wrapper
return decorator
@contextlib.contextmanager
def mock_flag_variation(flag_key: str, return_value: Any):
"""Context manager for testing feature flags."""
original_variation = get_client().variation
get_client().variation = lambda key, context, default: (
return_value if key == flag_key else original_variation(key, context, default)
)
try:
yield
finally:
get_client().variation = original_variation
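
A hedged sketch (not part of the diff) of how the feature_flag decorator defined above gates an endpoint. The router, route path, and handler are placeholders; the only behaviours relied on are that user_id must be passed as a keyword argument and that a disabled flag raises HTTP 404. The flag key matches Flag.BETA_BLOCKS above.

# Hypothetical FastAPI route using the decorator above; the route path and
# handler name are made up for illustration.
from fastapi import APIRouter

router = APIRouter()

@router.get("/beta/blocks")
@feature_flag("beta-blocks", default=False)
async def list_beta_blocks(user_id: str):
    # Reached only when the flag evaluates to True for this user;
    # otherwise the decorator raises HTTPException(404).
    return {"enabled_for": user_id}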

View File

@@ -1,113 +0,0 @@
import pytest
from fastapi import HTTPException
from ldclient import LDClient
from backend.util.feature_flag import (
Flag,
feature_flag,
is_feature_enabled,
mock_flag_variation,
)
@pytest.fixture
def ld_client(mocker):
client = mocker.Mock(spec=LDClient)
mocker.patch("ldclient.get", return_value=client)
client.is_initialized.return_value = True
return client
@pytest.mark.asyncio
async def test_feature_flag_enabled(ld_client):
ld_client.variation.return_value = True
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
result = await test_function(user_id="test-user")
assert result == "success"
ld_client.variation.assert_called_once()
@pytest.mark.asyncio
async def test_feature_flag_unauthorized_response(ld_client):
ld_client.variation.return_value = False
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
with pytest.raises(HTTPException) as exc_info:
await test_function(user_id="test-user")
assert exc_info.value.status_code == 404
def test_mock_flag_variation(ld_client):
with mock_flag_variation("test-flag", True):
assert ld_client.variation("test-flag", None, False) is True
with mock_flag_variation("test-flag", False):
assert ld_client.variation("test-flag", None, True) is False
@pytest.mark.asyncio
async def test_is_feature_enabled(ld_client):
"""Test the is_feature_enabled helper function."""
ld_client.is_initialized.return_value = True
ld_client.variation.return_value = True
result = await is_feature_enabled(Flag.AUTOMOD, "user123", default=False)
assert result is True
ld_client.variation.assert_called_once()
call_args = ld_client.variation.call_args
assert call_args[0][0] == "AutoMod" # flag_key
assert call_args[0][2] is False # default value
@pytest.mark.asyncio
async def test_is_feature_enabled_not_initialized(ld_client):
"""Test is_feature_enabled when LaunchDarkly is not initialized."""
ld_client.is_initialized.return_value = False
result = await is_feature_enabled(Flag.AGENT_ACTIVITY, "user123", default=True)
assert result is True # Should return default
ld_client.variation.assert_not_called()
@pytest.mark.asyncio
async def test_is_feature_enabled_exception(mocker):
"""Test is_feature_enabled when get_client() raises an exception."""
mocker.patch(
"backend.util.feature_flag.get_client",
side_effect=Exception("Client error"),
)
result = await is_feature_enabled(Flag.AGENT_ACTIVITY, "user123", default=True)
assert result is True # Should return default
def test_flag_enum_values():
"""Test that Flag enum has expected values."""
assert Flag.AUTOMOD == "AutoMod"
assert Flag.AI_ACTIVITY_STATUS == "ai-agent-execution-summary"
assert Flag.BETA_BLOCKS == "beta-blocks"
assert Flag.AGENT_ACTIVITY == "agent-activity"
@pytest.mark.asyncio
async def test_is_feature_enabled_with_flag_enum(mocker):
"""Test is_feature_enabled function with Flag enum."""
mock_get_feature_flag_value = mocker.patch(
"backend.util.feature_flag.get_feature_flag_value"
)
mock_get_feature_flag_value.return_value = True
result = await is_feature_enabled(Flag.AUTOMOD, "user123")
assert result is True
# Should call with the flag's string value
mock_get_feature_flag_value.assert_called_once()

View File

@@ -24,6 +24,7 @@ from typing import (
import httpx
import uvicorn
from autogpt_libs.logging.utils import generate_uvicorn_config
from fastapi import FastAPI, Request, responses
from pydantic import BaseModel, TypeAdapter, create_model
@@ -265,7 +266,7 @@ class AppService(BaseAppService, ABC):
self.fastapi_app,
host=api_host,
port=self.get_port(),
log_config=None, # Explicitly None to avoid uvicorn replacing the logger.
log_config=generate_uvicorn_config(),
log_level=self.log_level,
)
)

View File

@@ -295,32 +295,6 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
description="Maximum file size in MB for file uploads (1-1024 MB)",
)
# AutoMod configuration
automod_enabled: bool = Field(
default=False,
description="Whether AutoMod content moderation is enabled",
)
automod_api_url: str = Field(
default="",
description="AutoMod API base URL - Make sure it ends in /api",
)
automod_timeout: int = Field(
default=30,
description="Timeout in seconds for AutoMod API requests",
)
automod_retry_attempts: int = Field(
default=3,
description="Number of retry attempts for AutoMod API requests",
)
automod_retry_delay: float = Field(
default=1.0,
description="Delay between retries for AutoMod API requests in seconds",
)
automod_fail_open: bool = Field(
default=False,
description="If True, allow execution to continue if AutoMod fails",
)
@field_validator("platform_base_url", "frontend_base_url")
@classmethod
def validate_platform_base_url(cls, v: str, info: ValidationInfo) -> str:
@@ -360,7 +334,7 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
description="Maximum message size limit for communication with the message bus",
)
backend_cors_allow_origins: List[str] = Field(default=["http://localhost:3000"])
backend_cors_allow_origins: List[str] = Field(default_factory=list)
@field_validator("backend_cors_allow_origins")
@classmethod
@@ -472,7 +446,6 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
groq_api_key: str = Field(default="", description="Groq API key")
open_router_api_key: str = Field(default="", description="Open Router API Key")
llama_api_key: str = Field(default="", description="Llama API Key")
v0_api_key: str = Field(default="", description="v0 by Vercel API key")
reddit_client_id: str = Field(default="", description="Reddit client ID")
reddit_client_secret: str = Field(default="", description="Reddit client secret")
@@ -522,20 +495,10 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
apollo_api_key: str = Field(default="", description="Apollo API Key")
smartlead_api_key: str = Field(default="", description="SmartLead API Key")
zerobounce_api_key: str = Field(default="", description="ZeroBounce API Key")
enrichlayer_api_key: str = Field(default="", description="Enrichlayer API Key")
# AutoMod API credentials
automod_api_key: str = Field(default="", description="AutoMod API key")
# LaunchDarkly feature flags
launch_darkly_sdk_key: str = Field(
default="",
description="The LaunchDarkly SDK key for feature flag management",
)
ayrshare_api_key: str = Field(default="", description="Ayrshare API Key")
ayrshare_jwt_key: str = Field(default="", description="Ayrshare private Key")
# Add more secret fields as needed
model_config = SettingsConfigDict(
env_file=".env",
env_file_encoding="utf-8",

View File

@@ -6737,4 +6737,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.13"
content-hash = "795414d7ce8f288ea6c65893268b5c29a7c9a60ad75cde28ac7bcdb65f426dfe"
content-hash = "05e2b99bd6dc5a74a89df0e1e504853b66bd519062159b0de5fbedf6a1f4d986"

View File

@@ -10,7 +10,6 @@ packages = [{ include = "backend", format = "sdist" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
aio-pika = "^9.5.5"
aiohttp = "^3.10.0"
aiodns = "^3.5.0"
anthropic = "^0.59.0"
apscheduler = "^3.11.0"

View File

@@ -30,10 +30,10 @@ from backend.data.graph import Graph, Link, Node, create_graph
# Import API functions from the backend
from backend.data.user import get_or_create_user
from backend.server.integrations.utils import get_supabase
from backend.server.v2.library.db import create_library_agent, create_preset
from backend.server.v2.library.model import LibraryAgentPresetCreatable
from backend.server.v2.store.db import create_store_submission, review_store_submission
from backend.util.clients import get_supabase
faker = Faker()

View File

@@ -0,0 +1,123 @@
############
# Secrets
# YOU MUST CHANGE THESE BEFORE GOING INTO PRODUCTION
############
POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
DASHBOARD_USERNAME=supabase
DASHBOARD_PASSWORD=this_password_is_insecure_and_should_be_updated
SECRET_KEY_BASE=UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
VAULT_ENC_KEY=your-encryption-key-32-chars-min
############
# Database - You can change these to any PostgreSQL database that has logical replication enabled.
############
POSTGRES_HOST=db
POSTGRES_DB=postgres
POSTGRES_PORT=5432
# default user is postgres
############
# Supavisor -- Database pooler
############
POOLER_PROXY_PORT_TRANSACTION=6543
POOLER_DEFAULT_POOL_SIZE=20
POOLER_MAX_CLIENT_CONN=100
POOLER_TENANT_ID=your-tenant-id
############
# API Proxy - Configuration for the Kong Reverse proxy.
############
KONG_HTTP_PORT=8000
KONG_HTTPS_PORT=8443
############
# API - Configuration for PostgREST.
############
PGRST_DB_SCHEMAS=public,storage,graphql_public
############
# Auth - Configuration for the GoTrue authentication server.
############
## General
SITE_URL=http://localhost:3000
ADDITIONAL_REDIRECT_URLS=
JWT_EXPIRY=3600
DISABLE_SIGNUP=false
API_EXTERNAL_URL=http://localhost:8000
## Mailer Config
MAILER_URLPATHS_CONFIRMATION="/auth/v1/verify"
MAILER_URLPATHS_INVITE="/auth/v1/verify"
MAILER_URLPATHS_RECOVERY="/auth/v1/verify"
MAILER_URLPATHS_EMAIL_CHANGE="/auth/v1/verify"
## Email auth
ENABLE_EMAIL_SIGNUP=true
ENABLE_EMAIL_AUTOCONFIRM=false
SMTP_ADMIN_EMAIL=admin@example.com
SMTP_HOST=supabase-mail
SMTP_PORT=2500
SMTP_USER=fake_mail_user
SMTP_PASS=fake_mail_password
SMTP_SENDER_NAME=fake_sender
ENABLE_ANONYMOUS_USERS=false
## Phone auth
ENABLE_PHONE_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true
############
# Studio - Configuration for the Dashboard
############
STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project
STUDIO_PORT=3000
# replace if you intend to use Studio outside of localhost
SUPABASE_PUBLIC_URL=http://localhost:8000
# Enable webp support
IMGPROXY_ENABLE_WEBP_DETECTION=true
# Add your OpenAI API key to enable SQL Editor Assistant
OPENAI_API_KEY=
############
# Functions - Configuration for Functions
############
# NOTE: VERIFY_JWT applies to all functions. Per-function VERIFY_JWT is not supported yet.
FUNCTIONS_VERIFY_JWT=false
############
# Logs - Configuration for Logflare
# Please refer to https://supabase.com/docs/reference/self-hosting-analytics/introduction
############
LOGFLARE_LOGGER_BACKEND_API_KEY=your-super-secret-and-long-logflare-key
# Change vector.toml sinks to reflect this change
LOGFLARE_API_KEY=your-super-secret-and-long-logflare-key
# Docker socket location - this value will differ depending on your OS
DOCKER_SOCKET_LOCATION=/var/run/docker.sock
# Google Cloud Project details
GOOGLE_PROJECT_ID=GOOGLE_PROJECT_ID
GOOGLE_PROJECT_NUMBER=GOOGLE_PROJECT_NUMBER

View File

@@ -1,4 +1,5 @@
volumes/db/data
volumes/storage
.env
test.http
docker-compose.override.yml

View File

@@ -5,101 +5,8 @@
# Destroy: docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans
# Reset everything: ./reset.sh
# Environment Variable Loading Order (first → last, later overrides earlier):
# 1. ../../.env.default - Default values for all Supabase settings
# 2. ../../.env - User's custom configuration (if exists)
# 3. ./.env - Local overrides specific to db/docker (if exists)
# 4. environment key - Service-specific overrides defined below
# 5. Shell environment - Variables exported before running docker compose
name: supabase
# Common env_file configuration for all Supabase services
x-supabase-env-files: &supabase-env-files
env_file:
- ../../.env.default # Base defaults from platform root
- path: ../../.env # User overrides from platform root (optional)
required: false
- path: ./.env # Local overrides for db/docker (optional)
required: false
# Common Supabase environment - hardcoded defaults to avoid variable substitution
x-supabase-env: &supabase-env
# Core PostgreSQL settings
POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
POSTGRES_HOST: db
POSTGRES_PORT: "5432"
POSTGRES_DB: postgres
# Authentication & Security
JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
DASHBOARD_USERNAME: supabase
DASHBOARD_PASSWORD: this_password_is_insecure_and_should_be_updated
SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
VAULT_ENC_KEY: your-encryption-key-32-chars-min
# URLs and Endpoints
SITE_URL: http://localhost:3000
API_EXTERNAL_URL: http://localhost:8000
SUPABASE_PUBLIC_URL: http://localhost:8000
ADDITIONAL_REDIRECT_URLS: ""
# Feature Flags
DISABLE_SIGNUP: "false"
ENABLE_EMAIL_SIGNUP: "true"
ENABLE_EMAIL_AUTOCONFIRM: "false"
ENABLE_ANONYMOUS_USERS: "false"
ENABLE_PHONE_SIGNUP: "true"
ENABLE_PHONE_AUTOCONFIRM: "true"
FUNCTIONS_VERIFY_JWT: "false"
IMGPROXY_ENABLE_WEBP_DETECTION: "true"
# Email/SMTP Configuration
SMTP_ADMIN_EMAIL: admin@example.com
SMTP_HOST: supabase-mail
SMTP_PORT: "2500"
SMTP_USER: fake_mail_user
SMTP_PASS: fake_mail_password
SMTP_SENDER_NAME: fake_sender
# Mailer URLs
MAILER_URLPATHS_CONFIRMATION: /auth/v1/verify
MAILER_URLPATHS_INVITE: /auth/v1/verify
MAILER_URLPATHS_RECOVERY: /auth/v1/verify
MAILER_URLPATHS_EMAIL_CHANGE: /auth/v1/verify
# JWT Settings
JWT_EXPIRY: "3600"
# Database Schemas
PGRST_DB_SCHEMAS: public,storage,graphql_public
# Studio Settings
STUDIO_DEFAULT_ORGANIZATION: Default Organization
STUDIO_DEFAULT_PROJECT: Default Project
# Logging
LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
# Pooler Settings
POOLER_DEFAULT_POOL_SIZE: "20"
POOLER_MAX_CLIENT_CONN: "100"
POOLER_TENANT_ID: your-tenant-id
POOLER_PROXY_PORT_TRANSACTION: "6543"
# Kong Ports
KONG_HTTP_PORT: "8000"
KONG_HTTPS_PORT: "8443"
# Docker
DOCKER_SOCKET_LOCATION: /var/run/docker.sock
# Google Cloud (if needed)
GOOGLE_PROJECT_ID: GOOGLE_PROJECT_ID
GOOGLE_PROJECT_NUMBER: GOOGLE_PROJECT_NUMBER
services:
studio:
@@ -117,24 +24,24 @@ services:
timeout: 10s
interval: 5s
retries: 3
<<: *supabase-env-files
depends_on:
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
STUDIO_PG_META_URL: http://meta:8080
POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
DEFAULT_ORGANIZATION_NAME: Default Organization
DEFAULT_PROJECT_NAME: Default Project
OPENAI_API_KEY: ""
DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}
OPENAI_API_KEY: ${OPENAI_API_KEY:-}
SUPABASE_URL: http://kong:8000
SUPABASE_PUBLIC_URL: http://localhost:8000
SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SUPABASE_SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
AUTH_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
AUTH_JWT_SECRET: ${JWT_SECRET}
LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
LOGFLARE_URL: http://analytics:4000
NEXT_PUBLIC_ENABLE_LOGS: true
# Comment to use Big Query backend for analytics
@@ -147,15 +54,15 @@ services:
image: kong:2.8.1
restart: unless-stopped
ports:
- 8000:8000/tcp
- 8443:8443/tcp
- ${KONG_HTTP_PORT}:8000/tcp
- ${KONG_HTTPS_PORT}:8443/tcp
volumes:
# https://github.com/supabase/supabase/issues/12661
- ./volumes/api/kong.yml:/home/kong/temp.yml:ro
<<: *supabase-env-files
depends_on:
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
KONG_DATABASE: "off"
KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
# https://github.com/supabase/cli/issues/14
@@ -163,10 +70,10 @@ services:
KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SUPABASE_SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
DASHBOARD_USERNAME: supabase
DASHBOARD_PASSWORD: this_password_is_insecure_and_should_be_updated
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
# https://unix.stackexchange.com/a/294837
entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
@@ -191,49 +98,48 @@ services:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
<<: *supabase-env-files
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
GOTRUE_API_HOST: 0.0.0.0
GOTRUE_API_PORT: 9999
API_EXTERNAL_URL: http://localhost:8000
API_EXTERNAL_URL: ${API_EXTERNAL_URL}
GOTRUE_DB_DRIVER: postgres
GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:your-super-secret-and-long-postgres-password@db:5432/postgres
GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
GOTRUE_SITE_URL: http://localhost:3000
GOTRUE_URI_ALLOW_LIST: ""
GOTRUE_DISABLE_SIGNUP: false
GOTRUE_SITE_URL: ${SITE_URL}
GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}
GOTRUE_JWT_ADMIN_ROLES: service_role
GOTRUE_JWT_AUD: authenticated
GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
GOTRUE_JWT_EXP: 3600
GOTRUE_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
GOTRUE_JWT_EXP: ${JWT_EXPIRY}
GOTRUE_JWT_SECRET: ${JWT_SECRET}
GOTRUE_EXTERNAL_EMAIL_ENABLED: true
GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: false
GOTRUE_MAILER_AUTOCONFIRM: false
GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}
# Uncomment to bypass nonce check in ID Token flow. Commonly set to true when using Google Sign In on mobile.
# GOTRUE_EXTERNAL_SKIP_NONCE_CHECK: true
# GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
# GOTRUE_SMTP_MAX_FREQUENCY: 1s
GOTRUE_SMTP_ADMIN_EMAIL: admin@example.com
GOTRUE_SMTP_HOST: supabase-mail
GOTRUE_SMTP_PORT: 2500
GOTRUE_SMTP_USER: fake_mail_user
GOTRUE_SMTP_PASS: fake_mail_password
GOTRUE_SMTP_SENDER_NAME: fake_sender
GOTRUE_MAILER_URLPATHS_INVITE: /auth/v1/verify
GOTRUE_MAILER_URLPATHS_CONFIRMATION: /auth/v1/verify
GOTRUE_MAILER_URLPATHS_RECOVERY: /auth/v1/verify
GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: /auth/v1/verify
GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
GOTRUE_SMTP_HOST: ${SMTP_HOST}
GOTRUE_SMTP_PORT: ${SMTP_PORT}
GOTRUE_SMTP_USER: ${SMTP_USER}
GOTRUE_SMTP_PASS: ${SMTP_PASS}
GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}
GOTRUE_EXTERNAL_PHONE_ENABLED: true
GOTRUE_SMS_AUTOCONFIRM: true
GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
# Uncomment to enable custom access token hook. Please see: https://supabase.com/docs/guides/auth/auth-hooks for full list of hooks and additional details about custom_access_token_hook
# GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED: "true"
@@ -262,17 +168,16 @@ services:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
<<: *supabase-env-files
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
PGRST_DB_URI: postgres://authenticator:your-super-secret-and-long-postgres-password@db:5432/postgres
PGRST_DB_SCHEMAS: public,storage,graphql_public
PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
PGRST_DB_ANON_ROLE: anon
PGRST_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
PGRST_JWT_SECRET: ${JWT_SECRET}
PGRST_DB_USE_LEGACY_GUCS: "false"
PGRST_APP_SETTINGS_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
PGRST_APP_SETTINGS_JWT_EXP: 3600
PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
command:
[
"postgrest"
@@ -287,6 +192,8 @@ services:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
analytics:
condition: service_healthy
healthcheck:
test:
[
@@ -297,26 +204,23 @@ services:
"-o",
"/dev/null",
"-H",
"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE",
"Authorization: Bearer ${ANON_KEY}",
"http://localhost:4000/api/tenants/realtime-dev/health"
]
timeout: 5s
interval: 5s
retries: 3
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
PORT: 4000
DB_HOST: db
DB_PORT: 5432
DB_HOST: ${POSTGRES_HOST}
DB_PORT: ${POSTGRES_PORT}
DB_USER: supabase_admin
DB_PASSWORD: your-super-secret-and-long-postgres-password
DB_NAME: postgres
DB_PASSWORD: ${POSTGRES_PASSWORD}
DB_NAME: ${POSTGRES_DB}
DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
DB_ENC_KEY: supabaserealtime
API_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
API_JWT_SECRET: ${JWT_SECRET}
SECRET_KEY_BASE: ${SECRET_KEY_BASE}
ERL_AFLAGS: -proto_dist inet_tcp
DNS_NODES: "''"
RLIMIT_NOFILE: "10000"
@@ -352,15 +256,12 @@ services:
condition: service_started
imgproxy:
condition: service_started
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
ANON_KEY: ${ANON_KEY}
SERVICE_KEY: ${SERVICE_ROLE_KEY}
POSTGREST_URL: http://rest:3000
PGRST_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
DATABASE_URL: postgres://supabase_storage_admin:your-super-secret-and-long-postgres-password@db:5432/postgres
PGRST_JWT_SECRET: ${JWT_SECRET}
DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
FILE_SIZE_LIMIT: 52428800
STORAGE_BACKEND: file
FILE_STORAGE_BACKEND_PATH: /var/lib/storage
@@ -387,14 +288,11 @@ services:
timeout: 5s
interval: 5s
retries: 3
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
IMGPROXY_BIND: ":5001"
IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
IMGPROXY_USE_ETAG: "true"
IMGPROXY_ENABLE_WEBP_DETECTION: true
IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
meta:
container_name: supabase-meta
@@ -404,16 +302,15 @@ services:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
<<: *supabase-env-files
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
PG_META_PORT: 8080
PG_META_DB_HOST: db
PG_META_DB_PORT: 5432
PG_META_DB_NAME: postgres
PG_META_DB_HOST: ${POSTGRES_HOST}
PG_META_DB_PORT: ${POSTGRES_PORT}
PG_META_DB_NAME: ${POSTGRES_DB}
PG_META_DB_USER: supabase_admin
PG_META_DB_PASSWORD: your-super-secret-and-long-postgres-password
PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}
functions:
container_name: supabase-edge-functions
@@ -421,17 +318,17 @@ services:
restart: unless-stopped
volumes:
- ./volumes/functions:/home/deno/functions:Z
<<: *supabase-env-files
depends_on:
analytics:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
JWT_SECRET: ${JWT_SECRET}
SUPABASE_URL: http://kong:8000
SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SUPABASE_SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_DB_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres
SUPABASE_ANON_KEY: ${ANON_KEY}
SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
# TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
VERIFY_JWT: "false"
VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
command:
[
"start",
@@ -465,29 +362,26 @@ services:
db:
# Disable this if you are using an external Postgres database
condition: service_healthy
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
LOGFLARE_NODE_HOST: 127.0.0.1
DB_USERNAME: supabase_admin
DB_DATABASE: _supabase
DB_HOSTNAME: db
DB_PORT: 5432
DB_PASSWORD: your-super-secret-and-long-postgres-password
DB_HOSTNAME: ${POSTGRES_HOST}
DB_PORT: ${POSTGRES_PORT}
DB_PASSWORD: ${POSTGRES_PASSWORD}
DB_SCHEMA: _analytics
LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
LOGFLARE_SINGLE_TENANT: true
LOGFLARE_SUPABASE_MODE: true
LOGFLARE_MIN_CLUSTER_SIZE: 1
# Comment variables to use Big Query backend for analytics
POSTGRES_BACKEND_URL: postgresql://supabase_admin:your-super-secret-and-long-postgres-password@db:5432/_supabase
POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/_supabase
POSTGRES_BACKEND_SCHEMA: _analytics
LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
# Uncomment to use Big Query backend for analytics
# GOOGLE_PROJECT_ID: GOOGLE_PROJECT_ID
# GOOGLE_PROJECT_NUMBER: GOOGLE_PROJECT_NUMBER
# GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
# GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
# Comment out everything below this point if you are using an external Postgres database
db:
@@ -525,19 +419,19 @@ services:
interval: 5s
timeout: 5s
retries: 10
<<: *supabase-env-files
depends_on:
vector:
condition: service_healthy
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
POSTGRES_HOST: /var/run/postgresql
PGPORT: 5432
POSTGRES_PORT: 5432
PGPASSWORD: your-super-secret-and-long-postgres-password
POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
PGDATABASE: postgres
POSTGRES_DB: postgres
JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
JWT_EXP: 3600
PGPORT: ${POSTGRES_PORT}
POSTGRES_PORT: ${POSTGRES_PORT}
PGPASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
PGDATABASE: ${POSTGRES_DB}
POSTGRES_DB: ${POSTGRES_DB}
JWT_SECRET: ${JWT_SECRET}
JWT_EXP: ${JWT_EXPIRY}
command:
[
"postgres",
@@ -553,7 +447,7 @@ services:
restart: unless-stopped
volumes:
- ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
healthcheck:
test:
[
@@ -567,11 +461,8 @@ services:
timeout: 5s
interval: 5s
retries: 3
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
LOGFLARE_API_KEY: your-super-secret-and-long-logflare-key
LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
command:
[
"--config",
@@ -584,8 +475,8 @@ services:
image: supabase/supavisor:2.4.12
restart: unless-stopped
ports:
- 5432:5432
- 6543:6543
- ${POSTGRES_PORT}:5432
- ${POOLER_PROXY_PORT_TRANSACTION}:6543
volumes:
- ./volumes/pooler/pooler.exs:/etc/pooler/pooler.exs:ro
healthcheck:
@@ -607,25 +498,22 @@ services:
condition: service_healthy
analytics:
condition: service_healthy
<<: *supabase-env-files
environment:
<<: *supabase-env
# Keep any existing environment variables specific to that service
PORT: 4000
POSTGRES_PORT: 5432
POSTGRES_DB: postgres
POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
DATABASE_URL: ecto://supabase_admin:your-super-secret-and-long-postgres-password@db:5432/_supabase
POSTGRES_PORT: ${POSTGRES_PORT}
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
DATABASE_URL: ecto://supabase_admin:${POSTGRES_PASSWORD}@db:${POSTGRES_PORT}/_supabase
CLUSTER_POSTGRES: true
SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
VAULT_ENC_KEY: your-encryption-key-32-chars-min
API_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
METRICS_JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
SECRET_KEY_BASE: ${SECRET_KEY_BASE}
VAULT_ENC_KEY: ${VAULT_ENC_KEY}
API_JWT_SECRET: ${JWT_SECRET}
METRICS_JWT_SECRET: ${JWT_SECRET}
REGION: local
ERL_AFLAGS: -proto_dist inet_tcp
POOLER_TENANT_ID: your-tenant-id
POOLER_DEFAULT_POOL_SIZE: 20
POOLER_MAX_CLIENT_CONN: 100
POOLER_TENANT_ID: ${POOLER_TENANT_ID}
POOLER_DEFAULT_POOL_SIZE: ${POOLER_DEFAULT_POOL_SIZE}
POOLER_MAX_CLIENT_CONN: ${POOLER_MAX_CLIENT_CONN}
POOLER_POOL_MODE: transaction
command:
[

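Most of the Supabase service changes above follow one pattern: hard-coded credentials are replaced by a shared YAML anchor (`*supabase-env`) merged into each service's `environment`, plus a shared `env_file` anchor (`*supabase-env-files`). A minimal sketch of that pattern is below; the anchor contents are assumed for illustration, and the real definitions live at the top of `db/docker/docker-compose.yml`:

```yaml
# Illustrative only - anchor contents are assumed, not copied from the repo.
x-supabase-env: &supabase-env
  POSTGRES_HOST: ${POSTGRES_HOST}
  POSTGRES_PORT: ${POSTGRES_PORT}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  JWT_SECRET: ${JWT_SECRET}

x-supabase-env-files: &supabase-env-files
  env_file:
    - .env   # shared Supabase settings, kept out of the compose file

services:
  meta:
    <<: *supabase-env-files     # merge the shared env_file block
    environment:
      <<: *supabase-env         # merge the shared variables first...
      PG_META_PORT: 8080        # ...then keep service-specific keys inline
```

With YAML merge keys, a key set directly on the service always wins over the merged anchor, which is why the per-service overrides can stay next to the `<<:` line.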
View File

@@ -34,11 +34,11 @@ else
echo "No .env file found. Skipping .env removal step..."
fi
if [ -f ".env.default" ]; then
echo "Copying .env.default to .env..."
cp .env.default .env
if [ -f ".env.example" ]; then
echo "Copying .env.example to .env..."
cp .env.example .env
else
echo ".env.default file not found. Skipping .env reset step..."
echo ".env.example file not found. Skipping .env reset step..."
fi
echo "Cleanup complete!"

View File

@@ -1,39 +1,9 @@
# Environment Variable Loading Order (first → last, later overrides earlier):
# 1. backend/.env.default - Default values for all settings
# 2. backend/.env - User's custom configuration (if exists)
# 3. environment key - Docker-specific overrides defined below
# 4. Shell environment - Variables exported before running docker compose
# 5. CLI arguments - docker compose run -e VAR=value
# Common backend environment - Docker service names
x-backend-env:
&backend-env # Docker internal service hostnames (override localhost defaults)
PYRO_HOST: "0.0.0.0"
AGENTSERVER_HOST: rest_server
SCHEDULER_HOST: scheduler_server
DATABASEMANAGER_HOST: database_manager
EXECUTIONMANAGER_HOST: executor
NOTIFICATIONMANAGER_HOST: notification_server
CLAMAV_SERVICE_HOST: clamav
DB_HOST: db
REDIS_HOST: redis
RABBITMQ_HOST: rabbitmq
# Override Supabase URL for Docker network
SUPABASE_URL: http://kong:8000
# Common env_file configuration for backend services
x-backend-env-files: &backend-env-files
env_file:
- backend/.env.default # Base defaults (always exists)
- path: backend/.env # User overrides (optional)
required: false
services:
migrate:
build:
context: ../
dockerfile: autogpt_platform/backend/Dockerfile
target: migrate
target: server
command: ["sh", "-c", "poetry run prisma migrate deploy"]
develop:
watch:
@@ -50,11 +20,10 @@ services:
- app-network
restart: on-failure
healthcheck:
test: ["CMD-SHELL", "poetry run prisma migrate status | grep -q 'No pending migrations' || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 5s
test: ["CMD", "poetry", "run", "prisma", "migrate", "status"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:latest
@@ -104,12 +73,29 @@ services:
condition: service_completed_successfully
rabbitmq:
condition: service_healthy
<<: *backend-env-files
environment:
<<: *backend-env
# Service-specific overrides
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- SUPABASE_URL=http://kong:8000
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- RABBITMQ_DEFAULT_USER=rabbitmq_user_default
- RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
- REDIS_PASSWORD=password
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- SCHEDULER_HOST=scheduler_server
- EXECUTIONMANAGER_HOST=executor
- NOTIFICATIONMANAGER_HOST=notification_server
- CLAMAV_SERVICE_HOST=clamav
- NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
- ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
- UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio= # DO NOT USE IN PRODUCTION!!
ports:
- "8006:8006"
networks:
@@ -137,12 +123,26 @@ services:
condition: service_completed_successfully
database_manager:
condition: service_started
<<: *backend-env-files
environment:
<<: *backend-env
# Service-specific overrides
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DATABASEMANAGER_HOST=database_manager
- SUPABASE_URL=http://kong:8000
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=password
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- RABBITMQ_DEFAULT_USER=rabbitmq_user_default
- RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- AGENTSERVER_HOST=rest_server
- NOTIFICATIONMANAGER_HOST=notification_server
- CLAMAV_SERVICE_HOST=clamav
- ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
ports:
- "8002:8002"
networks:
@@ -168,12 +168,22 @@ services:
condition: service_completed_successfully
database_manager:
condition: service_started
<<: *backend-env-files
environment:
<<: *backend-env
# Service-specific overrides
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DATABASEMANAGER_HOST=database_manager
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=password
# - RABBITMQ_HOST=rabbitmq
# - RABBITMQ_PORT=5672
# - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
# - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
ports:
- "8001:8001"
networks:
@@ -195,12 +205,11 @@ services:
condition: service_healthy
migrate:
condition: service_completed_successfully
<<: *backend-env-files
environment:
<<: *backend-env
# Service-specific overrides
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- PYRO_HOST=0.0.0.0
- ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw= # DO NOT USE IN PRODUCTION!!
ports:
- "8005:8005"
networks:
@@ -241,12 +250,23 @@ services:
# interval: 10s
# timeout: 10s
# retries: 5
<<: *backend-env-files
environment:
<<: *backend-env
# Service-specific overrides
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DATABASEMANAGER_HOST=database_manager
- NOTIFICATIONMANAGER_HOST=notification_server
- SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
- DATABASE_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- DIRECT_URL=postgresql://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres?connect_timeout=60&schema=platform
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=password
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- RABBITMQ_DEFAULT_USER=rabbitmq_user_default
- RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
ports:
- "8003:8003"
networks:
@@ -272,39 +292,52 @@ services:
condition: service_completed_successfully
database_manager:
condition: service_started
<<: *backend-env-files
environment:
<<: *backend-env
- DATABASEMANAGER_HOST=database_manager
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=password
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- RABBITMQ_DEFAULT_USER=rabbitmq_user_default
- RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
- ENABLE_AUTH=true
- PYRO_HOST=0.0.0.0
- BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
ports:
- "8007:8007"
networks:
- app-network
frontend:
build:
context: ../
dockerfile: autogpt_platform/frontend/Dockerfile
target: prod
depends_on:
db:
condition: service_healthy
migrate:
condition: service_completed_successfully
ports:
- "3000:3000"
networks:
- app-network
# Load environment variables in order (later overrides earlier)
env_file:
- path: ./frontend/.env.default # Base defaults (always exists)
- path: ./frontend/.env # User overrides (optional)
required: false
environment:
# Server-side environment variables (Docker service names)
# These override the localhost URLs from env files when running in Docker
AUTH_CALLBACK_URL: http://rest_server:8006/auth/callback
SUPABASE_URL: http://kong:8000
AGPT_SERVER_URL: http://rest_server:8006/api
AGPT_WS_SERVER_URL: ws://websocket_server:8001/ws
# frontend:
# build:
# context: ../
# dockerfile: autogpt_platform/frontend/Dockerfile
# target: dev
# depends_on:
# db:
# condition: service_healthy
# rest_server:
# condition: service_started
# websocket_server:
# condition: service_started
# migrate:
# condition: service_completed_successfully
# environment:
# - NEXT_PUBLIC_SUPABASE_URL=http://kong:8000
# - NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
# - DATABASE_URL=postgresql://agpt_user:pass123@postgres:5432/postgres?connect_timeout=60&schema=platform
# - DIRECT_URL=postgresql://agpt_user:pass123@postgres:5432/postgres?connect_timeout=60&schema=platform
# - NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
# - NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
# - NEXT_PUBLIC_AGPT_MARKETPLACE_URL=http://localhost:8015/api/v1/market
# - NEXT_PUBLIC_BEHAVE_AS=LOCAL
# ports:
# - "3000:3000"
# networks:
# - app-network
networks:
app-network:
driver: bridge

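The removed header comments describe a five-step precedence for backend configuration. For reference, the compose-side half of that layering (env files first, `environment` overrides on top) looks roughly like the sketch below, reusing the file names and a service name from the diff:

```yaml
# Sketch of the layering described in the removed comments (later wins).
services:
  rest_server:
    env_file:
      - backend/.env.default    # 1. shipped defaults (always present)
      - path: backend/.env      # 2. user overrides (optional)
        required: false
    environment:
      DB_HOST: db               # 3. Docker-specific override defined here
    # 4. shell-exported variables and
    # 5. `docker compose run -e VAR=value` override all of the above,
    #    per the removed comments.
```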
View File

@@ -20,7 +20,6 @@ x-supabase-services:
- app-network
- shared-network
services:
# AGPT services
migrate:
@@ -97,13 +96,19 @@ services:
timeout: 10s
retries: 3
frontend:
<<: *agpt-services
extends:
file: ./docker-compose.platform.yml
service: frontend
# frontend:
# <<: *agpt-services
# extends:
# file: ./docker-compose.platform.yml
# service: frontend
# Supabase services
studio:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: studio
# Supabase services (minimal: auth + db + kong)
kong:
<<: *supabase-services
extends:
@@ -118,35 +123,61 @@ services:
environment:
GOTRUE_MAILER_AUTOCONFIRM: true
rest:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: rest
realtime:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: realtime
storage:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: storage
imgproxy:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: imgproxy
meta:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: meta
functions:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: functions
analytics:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: analytics
db:
<<: *supabase-services
extends:
file: ./db/docker/docker-compose.yml
service: db
ports:
- 5432:5432 # We don't use Supavisor locally, so we expose the db directly.
- ${POSTGRES_PORT}:5432 # We don't use Supavisor locally, so we expose the db directly.
# Studio and its dependencies for local development only
meta:
vector:
<<: *supabase-services
profiles:
- local
extends:
file: ./db/docker/docker-compose.yml
service: meta
studio:
<<: *supabase-services
profiles:
- local
extends:
file: ./db/docker/docker-compose.yml
service: studio
depends_on:
meta:
condition: service_healthy
# environment:
# NEXT_PUBLIC_ENABLE_LOGS: false # Disable analytics/logging features
service: vector
deps:
<<: *supabase-services
@@ -155,24 +186,13 @@ services:
image: busybox
command: /bin/true
depends_on:
- studio
- kong
- auth
- meta
- analytics
- db
- studio
- vector
- redis
- rabbitmq
- clamav
- migrate
deps_backend:
<<: *agpt-services
profiles:
- local
image: busybox
command: /bin/true
depends_on:
- deps
- rest_server
- executor
- websocket_server
- database_manager

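The combined compose file builds every service with `extends` from the platform and Supabase compose files, and gates Studio and its dependencies, plus the `deps_backend` helper, behind a `local` profile. A short sketch of that combination, assuming the `*supabase-services` anchor defined near the top of the file:

```yaml
# Sketch: extends + profiles as used in the combined docker-compose.yml.
services:
  db:
    <<: *supabase-services          # shared networks for Supabase services
    extends:
      file: ./db/docker/docker-compose.yml
      service: db
    ports:
      - ${POSTGRES_PORT}:5432       # expose Postgres directly (no Supavisor locally)

  studio:
    <<: *supabase-services
    profiles:
      - local                       # only started with `--profile local`
    extends:
      file: ./db/docker/docker-compose.yml
      service: studio
```

Services without a profile start by default; profiled ones only come up with something like `docker compose --profile local up deps_backend -d`, as the frontend README below suggests.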
View File

@@ -1,20 +0,0 @@
NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
NEXT_PUBLIC_APP_ENV=local
NEXT_PUBLIC_BEHAVE_AS=LOCAL
NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false
NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e
NEXT_PUBLIC_SHOW_BILLING_PAGE=false
NEXT_PUBLIC_TURNSTILE=disabled
NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true
NEXT_PUBLIC_GA_MEASUREMENT_ID=G-FH2XK2W4GN
NEXT_PUBLIC_PW_TEST=true

View File

@@ -0,0 +1,44 @@
NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
NEXT_PUBLIC_AUTH_CALLBACK_URL=http://localhost:8006/auth/callback
NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api
NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws
NEXT_PUBLIC_AGPT_MARKETPLACE_URL=http://localhost:8015/api/v1/market
NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false
NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e # Local environment on LaunchDarkly
NEXT_PUBLIC_APP_ENV=local
NEXT_PUBLIC_AGPT_SERVER_BASE_URL=http://localhost:8006
## Locale settings
NEXT_PUBLIC_DEFAULT_LOCALE=en
NEXT_PUBLIC_LOCALES=en,es
## Supabase credentials
NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
## OAuth Callback URL
## This should be {domain}/auth/callback
## Only used if you're using Supabase and OAuth
AUTH_CALLBACK_URL="${NEXT_PUBLIC_FRONTEND_BASE_URL}/auth/callback"
GA_MEASUREMENT_ID=G-FH2XK2W4GN
# When running locally, set NEXT_PUBLIC_BEHAVE_AS=CLOUD to use a locally hosted marketplace (as is typical in development, and in the cloud deployment); otherwise set it to LOCAL to have the marketplace open in a new tab
NEXT_PUBLIC_BEHAVE_AS=LOCAL
NEXT_PUBLIC_SHOW_BILLING_PAGE=false
## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the frontend site key
NEXT_PUBLIC_CLOUDFLARE_TURNSTILE_SITE_KEY=
NEXT_PUBLIC_TURNSTILE=disabled
# Devtools
NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true
# In case you are running Playwright locally
# NEXT_PUBLIC_PW_TEST=true

View File

@@ -31,7 +31,6 @@ yarn.lock
package-lock.json
# local env files
.env
.env*.local
# vercel
@@ -54,7 +53,4 @@ storybook-static
*.ignore.*
*.ign.*
!.npmrc
.cursorrules
# Generated API files
src/app/api/__generated__/
.cursorrules

View File

@@ -5,16 +5,18 @@ RUN corepack enable
COPY autogpt_platform/frontend/package.json autogpt_platform/frontend/pnpm-lock.yaml ./
RUN --mount=type=cache,target=/root/.local/share/pnpm pnpm install --frozen-lockfile
# Dev stage
FROM base AS dev
ENV NODE_ENV=development
ENV HOSTNAME=0.0.0.0
COPY autogpt_platform/frontend/ .
EXPOSE 3000
CMD ["pnpm", "run", "dev", "--hostname", "0.0.0.0"]
# Build stage for prod
FROM base AS build
COPY autogpt_platform/frontend/ .
RUN if [ -f .env ]; then \
cat .env.default .env > .env.merged && mv .env.merged .env; \
else \
cp .env.default .env; \
fi
RUN pnpm run generate:api
ENV SKIP_STORYBOOK_TESTS=true
RUN pnpm build
# Prod stage - based on NextJS reference Dockerfile https://github.com/vercel/next.js/blob/64271354533ed16da51be5dce85f0dbd15f17517/examples/with-docker/Dockerfile

View File

@@ -18,58 +18,31 @@ Make sure you have Node.js 16.10+ installed. Corepack is included with Node.js b
>
> Then follow the setup steps below.
## Setup
### Setup
### 1. **Enable corepack** (run this once on your system):
1. **Enable corepack** (run this once on your system):
```bash
corepack enable
```
```bash
corepack enable
```
This enables corepack to automatically manage pnpm based on the `packageManager` field in `package.json`.
This enables corepack to automatically manage pnpm based on the `packageManager` field in `package.json`.
### 2. **Install dependencies**:
2. **Install dependencies**:
```bash
pnpm i
```
```bash
pnpm i
```
### 3. **Start the development server**:
3. **Start the development server**:
```bash
pnpm dev
```
#### Running the Front-end & Back-end separately
We recommend this approach if you are doing active development on the project. First spin up the Back-end:
```bash
# on `autogpt_platform`
docker compose --profile local up deps_backend -d
# on `autogpt_platform/backend`
poetry run app
```
Then start the Front-end:
```bash
# on `autogpt_platform/frontend`
pnpm dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. If the server starts on `http://localhost:3001`, the Front-end is already running via Docker; stop that container or run `docker compose down` first.
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
#### Running both the Front-end and Back-end via Docker
If you run:
```bash
# on `autogpt_platform`
docker compose up -d
```
It will spin up the Back-end and Front-end via Docker. The Front-end will start on port `3000`. This might not be what you want when actively contributing to the Front-end, as you won't have direct or easy access to the Next.js dev server.
### Subsequent Runs
For subsequent development sessions, you only need to run:
@@ -87,12 +60,12 @@ Every time a new Front-end dependency is added by you or others, you will need t
- `pnpm start` - Start production server
- `pnpm lint` - Run ESLint and Prettier checks
- `pnpm format` - Format code with Prettier
- `pnpm types` - Run TypeScript type checking
- `pnpm type-check` - Run TypeScript type checking
- `pnpm test` - Run Playwright tests
- `pnpm test-ui` - Run Playwright tests with UI
- `pnpm fetch:openapi` - Fetch OpenAPI spec from backend
- `pnpm generate:api-client` - Generate API client from OpenAPI spec
- `pnpm generate:api` - Fetch OpenAPI spec and generate API client
- `pnpm generate:api-all` - Fetch OpenAPI spec and generate API client
This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.
@@ -115,7 +88,7 @@ This project uses an auto-generated API client powered by [**Orval**](https://or
```bash
# Fetch OpenAPI spec from backend and generate client
pnpm generate:api
pnpm generate:api-all
# Only fetch the OpenAPI spec
pnpm fetch:openapi

View File

@@ -3,13 +3,13 @@
"version": "0.3.4",
"private": true,
"scripts": {
"dev": "pnpm run generate:api:force && next dev --turbo",
"build": "next build",
"dev": "next dev --turbo",
"build": "cross-env pnpm run generate:api-client && SKIP_STORYBOOK_TESTS=true next build",
"start": "next start",
"start:standalone": "cd .next/standalone && node server.js",
"lint": "next lint && prettier --check .",
"format": "next lint --fix; prettier --write .",
"types": "tsc --noEmit",
"format": "prettier --write .",
"type-check": "tsc --noEmit",
"test": "next build --turbo && playwright test",
"test-ui": "next build --turbo && playwright test --ui",
"test:no-build": "playwright test",
@@ -18,43 +18,44 @@
"build-storybook": "storybook build",
"test-storybook": "test-storybook",
"test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"pnpm run build-storybook -- --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && pnpm run test-storybook\"",
"generate:api": "npx --yes tsx ./scripts/generate-api-queries.ts && orval --config ./orval.config.ts",
"generate:api:force": "npx --yes tsx ./scripts/generate-api-queries.ts --force && orval --config ./orval.config.ts"
"fetch:openapi": "curl http://localhost:8006/openapi.json > ./src/app/api/openapi.json && prettier --write ./src/app/api/openapi.json",
"generate:api-client": "orval --config ./orval.config.ts",
"generate:api-all": "pnpm run fetch:openapi && pnpm run generate:api-client"
},
"browserslist": [
"defaults"
],
"dependencies": {
"@faker-js/faker": "9.9.0",
"@hookform/resolvers": "5.2.1",
"@next/third-parties": "15.4.6",
"@hookform/resolvers": "5.2.0",
"@next/third-parties": "15.4.4",
"@phosphor-icons/react": "2.1.10",
"@radix-ui/react-alert-dialog": "1.1.15",
"@radix-ui/react-alert-dialog": "1.1.14",
"@radix-ui/react-avatar": "1.1.10",
"@radix-ui/react-checkbox": "1.3.3",
"@radix-ui/react-collapsible": "1.1.12",
"@radix-ui/react-context-menu": "2.2.16",
"@radix-ui/react-dialog": "1.1.15",
"@radix-ui/react-dropdown-menu": "2.1.16",
"@radix-ui/react-checkbox": "1.3.2",
"@radix-ui/react-collapsible": "1.1.11",
"@radix-ui/react-context-menu": "2.2.15",
"@radix-ui/react-dialog": "1.1.14",
"@radix-ui/react-dropdown-menu": "2.1.15",
"@radix-ui/react-icons": "1.3.2",
"@radix-ui/react-label": "2.1.7",
"@radix-ui/react-popover": "1.1.15",
"@radix-ui/react-radio-group": "1.3.8",
"@radix-ui/react-scroll-area": "1.2.10",
"@radix-ui/react-select": "2.2.6",
"@radix-ui/react-popover": "1.1.14",
"@radix-ui/react-radio-group": "1.3.7",
"@radix-ui/react-scroll-area": "1.2.9",
"@radix-ui/react-select": "2.2.5",
"@radix-ui/react-separator": "1.1.7",
"@radix-ui/react-slot": "1.2.3",
"@radix-ui/react-switch": "1.2.6",
"@radix-ui/react-tabs": "1.1.13",
"@radix-ui/react-toast": "1.2.15",
"@radix-ui/react-tooltip": "1.2.8",
"@radix-ui/react-switch": "1.2.5",
"@radix-ui/react-tabs": "1.1.12",
"@radix-ui/react-toast": "1.2.14",
"@radix-ui/react-tooltip": "1.2.7",
"@sentry/nextjs": "9.42.0",
"@supabase/ssr": "0.6.1",
"@supabase/supabase-js": "2.55.0",
"@tanstack/react-query": "5.85.3",
"@supabase/supabase-js": "2.52.1",
"@tanstack/react-query": "5.83.0",
"@tanstack/react-table": "8.21.3",
"@types/jaro-winkler": "0.2.4",
"@xyflow/react": "12.8.3",
"@xyflow/react": "12.8.2",
"boring-avatars": "1.11.2",
"class-variance-authority": "0.7.1",
"clsx": "2.1.1",
@@ -64,22 +65,22 @@
"dotenv": "17.2.1",
"elliptic": "6.6.1",
"embla-carousel-react": "8.6.0",
"framer-motion": "12.23.12",
"framer-motion": "12.23.9",
"geist": "1.4.2",
"jaro-winkler": "0.2.8",
"launchdarkly-react-client-sdk": "3.8.1",
"lodash": "4.17.21",
"lucide-react": "0.539.0",
"lucide-react": "0.525.0",
"moment": "2.30.1",
"next": "15.4.6",
"next": "15.4.4",
"next-themes": "0.4.6",
"nuqs": "2.4.3",
"party-js": "2.2.0",
"react": "18.3.1",
"react-day-picker": "9.8.1",
"react-day-picker": "9.8.0",
"react-dom": "18.3.1",
"react-drag-drop-files": "2.4.0",
"react-hook-form": "7.62.0",
"react-hook-form": "7.61.1",
"react-icons": "5.5.0",
"react-markdown": "9.0.3",
"react-modal": "3.16.3",
@@ -87,7 +88,7 @@
"react-window": "1.8.11",
"recharts": "2.15.3",
"shepherd.js": "14.5.1",
"sonner": "2.0.7",
"sonner": "2.0.6",
"tailwind-merge": "2.6.0",
"tailwindcss-animate": "1.0.7",
"uuid": "11.1.0",
@@ -95,42 +96,42 @@
"zod": "3.25.76"
},
"devDependencies": {
"@chromatic-com/storybook": "4.1.0",
"@playwright/test": "1.54.2",
"@storybook/addon-a11y": "9.1.2",
"@storybook/addon-docs": "9.1.2",
"@storybook/addon-links": "9.1.2",
"@storybook/addon-onboarding": "9.1.2",
"@storybook/nextjs": "9.1.2",
"@tanstack/eslint-plugin-query": "5.83.1",
"@tanstack/react-query-devtools": "5.84.2",
"@chromatic-com/storybook": "4.0.1",
"@playwright/test": "1.54.1",
"@storybook/addon-a11y": "9.0.17",
"@storybook/addon-docs": "9.0.17",
"@storybook/addon-links": "9.0.17",
"@storybook/addon-onboarding": "9.0.17",
"@storybook/nextjs": "9.0.17",
"@tanstack/eslint-plugin-query": "5.81.2",
"@tanstack/react-query-devtools": "5.83.0",
"@types/canvas-confetti": "1.9.0",
"@types/lodash": "4.17.20",
"@types/negotiator": "0.6.4",
"@types/node": "24.2.1",
"@types/node": "24.0.15",
"@types/react": "18.3.17",
"@types/react-dom": "18.3.5",
"@types/react-modal": "3.16.3",
"@types/react-window": "1.8.8",
"axe-playwright": "2.1.0",
"chromatic": "13.1.3",
"chromatic": "13.1.2",
"concurrently": "9.2.0",
"cross-env": "7.0.3",
"eslint": "8.57.1",
"eslint-config-next": "15.4.6",
"eslint-plugin-storybook": "9.1.2",
"eslint-config-next": "15.4.2",
"eslint-plugin-storybook": "9.0.17",
"import-in-the-middle": "1.14.2",
"msw": "2.10.4",
"msw-storybook-addon": "2.0.5",
"orval": "7.11.2",
"orval": "7.10.0",
"pbkdf2": "3.1.3",
"postcss": "8.5.6",
"prettier": "3.6.2",
"prettier-plugin-tailwindcss": "0.6.14",
"require-in-the-middle": "7.5.2",
"storybook": "9.1.2",
"storybook": "9.0.17",
"tailwindcss": "3.4.17",
"typescript": "5.9.2"
"typescript": "5.8.3"
},
"msw": {
"workerDirectory": [

View File

@@ -45,7 +45,7 @@ export default defineConfig({
webServer: {
command: "pnpm start",
url: "http://localhost:3000",
reuseExistingServer: true,
reuseExistingServer: !process.env.CI,
},
/* Configure projects for major browsers */

File diff suppressed because it is too large

View File

@@ -1,61 +0,0 @@
#!/usr/bin/env node
import { getAgptServerBaseUrl } from "@/lib/env-config";
import { execSync } from "child_process";
import * as path from "path";
import * as fs from "fs";
function fetchOpenApiSpec(): void {
const args = process.argv.slice(2);
const forceFlag = args.includes("--force");
const baseUrl = getAgptServerBaseUrl();
const openApiUrl = `${baseUrl}/openapi.json`;
const outputPath = path.join(
__dirname,
"..",
"src",
"app",
"api",
"openapi.json",
);
console.log(`Output path: ${outputPath}`);
console.log(`Force flag: ${forceFlag}`);
// Check if local file exists
const localFileExists = fs.existsSync(outputPath);
if (!forceFlag && localFileExists) {
console.log("✅ Using existing local OpenAPI spec file");
console.log("💡 Use --force flag to fetch from server");
return;
}
if (!localFileExists) {
console.log("📄 No local OpenAPI spec found, fetching from server...");
} else {
console.log(
"🔄 Force flag detected, fetching fresh OpenAPI spec from server...",
);
}
console.log(`Fetching OpenAPI spec from: ${openApiUrl}`);
try {
// Fetch the OpenAPI spec
execSync(`curl "${openApiUrl}" > "${outputPath}"`, { stdio: "inherit" });
// Format with prettier
execSync(`prettier --write "${outputPath}"`, { stdio: "inherit" });
console.log("✅ OpenAPI spec fetched and formatted successfully");
} catch (error) {
console.error("❌ Failed to fetch OpenAPI spec:", error);
process.exit(1);
}
}
if (require.main === module) {
fetchOpenApiSpec();
}

View File

@@ -14,7 +14,12 @@ export async function addDollars(formData: FormData) {
comments: formData.get("comments") as string,
};
const api = new BackendApi();
await api.addUserCredits(data.user_id, data.amount, data.comments);
const resp = await api.addUserCredits(
data.user_id,
data.amount,
data.comments,
);
console.log(resp);
revalidatePath("/admin/spending");
}

View File

@@ -29,7 +29,6 @@ function SpendingDashboard({
</div>
<Suspense
key={`${page}-${status}-${search}`}
fallback={
<div className="py-10 text-center">Loading submissions...</div>
}

View File

@@ -1,63 +0,0 @@
import { Button } from "@/components/ui/button";
import { cn } from "@/lib/utils";
import { Plus } from "lucide-react";
import { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
description?: string;
ai_name?: string;
}
export const AiBlock: React.FC<Props> = ({
title,
description,
className,
ai_name,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-[5.625rem] w-full min-w-[7.5rem] items-center justify-start space-x-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 px-[0.875rem] py-[0.625rem] text-start shadow-none",
"hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:pointer-events-none",
className,
)}
{...rest}
>
<div className="flex flex-1 flex-col items-start gap-1.5">
<div className="space-y-0.5">
<span
className={cn(
"line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-700 group-disabled:text-zinc-400",
)}
>
{title}
</span>
<span
className={cn(
"line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
{description}
</span>
</div>
<span
className={cn(
"rounded-[0.75rem] bg-zinc-200 px-[0.5rem] font-sans text-xs leading-[1.25rem] text-zinc-500",
)}
>
Supports {ai_name}
</span>
</div>
<div
className={cn(
"flex h-7 w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
)}
>
<Plus className="h-5 w-5 text-zinc-50" strokeWidth={2} />
</div>
</Button>
);
};

View File

@@ -1,77 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { beautifyString, cn } from "@/lib/utils";
import { Plus } from "lucide-react";
import React, { ButtonHTMLAttributes } from "react";
import { highlightText } from "./helpers";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
description?: string;
highlightedText?: string;
}
interface BlockComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const Block: BlockComponent = ({
title,
description,
highlightedText,
className,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-16 w-full min-w-[7.5rem] items-center justify-start space-x-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 px-[0.875rem] py-[0.625rem] text-start shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:cursor-not-allowed",
className,
)}
{...rest}
>
<div className="flex flex-1 flex-col items-start gap-0.5">
{title && (
<span
className={cn(
"line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 group-disabled:text-zinc-400",
)}
>
{highlightText(beautifyString(title), highlightedText)}
</span>
)}
{description && (
<span
className={cn(
"line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
{highlightText(description, highlightedText)}
</span>
)}
</div>
<div
className={cn(
"flex h-7 w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
)}
>
<Plus className="h-5 w-5 text-zinc-50" strokeWidth={2} />
</div>
</Button>
);
};
const BlockSkeleton = () => {
return (
<Skeleton className="flex h-16 w-full min-w-[7.5rem] animate-pulse items-center justify-start space-x-3 rounded-[0.75rem] bg-zinc-100 px-[0.875rem] py-[0.625rem]">
<div className="flex flex-1 flex-col items-start gap-0.5">
<Skeleton className="h-[1.375rem] w-24 rounded bg-zinc-200" />
<Skeleton className="h-5 w-32 rounded bg-zinc-200" />
</div>
<Skeleton className="h-7 w-7 rounded-[0.5rem] bg-zinc-200" />
</Skeleton>
);
};
Block.Skeleton = BlockSkeleton;

View File

@@ -1,51 +0,0 @@
import React from "react";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { ToyBrick } from "lucide-react";
import { BlockMenuContent } from "../BlockMenuContent/BlockMenuContent";
import { ControlPanelButton } from "../ControlPanelButton";
import { useBlockMenu } from "./useBlockMenu";
interface BlockMenuProps {
pinBlocksPopover: boolean;
blockMenuSelected: "save" | "block" | "";
setBlockMenuSelected: React.Dispatch<
React.SetStateAction<"" | "save" | "block">
>;
}
export const BlockMenu: React.FC<BlockMenuProps> = ({
pinBlocksPopover,
blockMenuSelected,
setBlockMenuSelected,
}) => {
const { open, onOpen } = useBlockMenu({ pinBlocksPopover, setBlockMenuSelected });
return (
<Popover open={pinBlocksPopover ? true : open} onOpenChange={onOpen}>
<PopoverTrigger className="hover:cursor-pointer">
<ControlPanelButton
data-id="blocks-control-popover-trigger"
data-testid="blocks-control-blocks-button"
selected={blockMenuSelected === "block"}
className="rounded-none"
>
{/* Need to find phosphor icon alternative for this lucide icon */}
<ToyBrick className="h-5 w-6" strokeWidth={2} />
</ControlPanelButton>
</PopoverTrigger>
<PopoverContent
side="right"
align="start"
sideOffset={16}
className="absolute h-[75vh] w-[46.625rem] overflow-hidden rounded-[1rem] border-none p-0 shadow-[0_2px_6px_0_rgba(0,0,0,0.05)]"
data-id="blocks-control-popover-content"
>
<BlockMenuContent />
</PopoverContent>
</Popover>
);
};

View File

@@ -1,23 +0,0 @@
import { useState } from "react";
interface useBlockMenuProps {
pinBlocksPopover: boolean;
setBlockMenuSelected: React.Dispatch<
React.SetStateAction<"" | "save" | "block">
>;
}
export const useBlockMenu = ({ pinBlocksPopover, setBlockMenuSelected }: useBlockMenuProps) => {
const [open, setOpen] = useState(false);
const onOpen = (newOpen: boolean) => {
if (!pinBlocksPopover) {
setOpen(newOpen);
setBlockMenuSelected(newOpen ? "block" : "");
}
};
return {
open,
onOpen,
};
};

View File

@@ -1,10 +0,0 @@
"use client";
import React from "react";
export const BlockMenuContent = () => {
return (
<div className="flex h-full w-full flex-col items-center justify-center">
This is the block menu content
</div>
);
};

View File

@@ -1,35 +0,0 @@
// BLOCK MENU TODO: We need a disable state in this, currently it's not in design.
import { cn } from "@/lib/utils";
import React from "react";
interface Props extends React.HTMLAttributes<HTMLDivElement> {
selected?: boolean;
children?: React.ReactNode; // For icon purpose
disabled?: boolean;
}
export const ControlPanelButton: React.FC<Props> = ({
selected = false,
children,
disabled,
className,
...rest
}) => {
return (
// Using div instead of button, because it's only for design purposes. We are using this to give design to PopoverTrigger.
<div
role="button"
className={cn(
"flex h-[4.25rem] w-[4.25rem] items-center justify-center whitespace-normal bg-white p-[1.38rem] text-zinc-800 shadow-none hover:cursor-pointer hover:bg-zinc-100 hover:text-zinc-950 focus:ring-0",
selected &&
"bg-violet-50 text-violet-700 hover:cursor-default hover:bg-violet-50 hover:text-violet-700 active:bg-violet-50 active:text-violet-700",
disabled && "cursor-not-allowed",
className,
)}
{...rest}
>
{children}
</div>
);
};

View File

@@ -1,54 +0,0 @@
import { Button } from "@/components/ui/button";
import { cn } from "@/lib/utils";
import { X } from "lucide-react";
import React, { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
selected?: boolean;
number?: number;
name?: string;
}
export const FilterChip: React.FC<Props> = ({
selected = false,
number,
name,
className,
...rest
}) => {
return (
<Button
className={cn(
"group w-fit space-x-1 rounded-[1.5rem] border border-zinc-300 bg-transparent px-[0.625rem] py-[0.375rem] shadow-none transition-transform duration-300 ease-in-out",
"hover:border-violet-500 hover:bg-transparent focus:ring-0 disabled:cursor-not-allowed",
selected && "border-0 bg-violet-700 hover:border",
className,
)}
{...rest}
>
<span
className={cn(
"font-sans text-sm font-medium leading-[1.375rem] text-zinc-600 group-hover:text-zinc-600 group-disabled:text-zinc-400",
selected && "text-zinc-50",
)}
>
{name}
</span>
{selected && (
<>
<span className="flex h-4 w-4 items-center justify-center rounded-full bg-zinc-50 transition-all duration-300 ease-in-out group-hover:hidden">
<X
className="h-3 w-3 rounded-full text-violet-700"
strokeWidth={2}
/>
</span>
{number !== undefined && (
<span className="hidden h-[1.375rem] items-center rounded-[1.25rem] bg-violet-700 p-[0.375rem] text-zinc-50 transition-all duration-300 ease-in-out animate-in fade-in zoom-in group-hover:flex">
{number > 100 ? "100+" : number}
</span>
)}
</>
)}
</Button>
);
};

View File

@@ -1,88 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { beautifyString, cn } from "@/lib/utils";
import Image from "next/image";
import React, { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
description?: string;
icon_url?: string;
number_of_blocks?: number;
}
interface IntegrationComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const Integration: IntegrationComponent = ({
title,
icon_url,
description,
className,
number_of_blocks,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-16 w-full min-w-[7.5rem] items-center justify-start space-x-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 px-[0.875rem] py-[0.625rem] text-start shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-50 active:ring-1 active:ring-zinc-300 disabled:pointer-events-none",
className,
)}
{...rest}
>
<div className="relative h-[2.625rem] w-[2.625rem] overflow-hidden rounded-[0.5rem] bg-white">
{icon_url && (
<Image
src={icon_url}
alt="integration-icon"
fill
sizes="2.25rem"
className="w-full rounded-[0.5rem] object-contain group-disabled:opacity-50"
/>
)}
</div>
<div className="w-full">
<div className="flex items-center justify-between gap-2">
{title && (
<p className="line-clamp-1 flex-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-700 group-disabled:text-zinc-400">
{beautifyString(title)}
</p>
)}
<span className="flex h-[1.375rem] w-[1.6875rem] items-center justify-center rounded-[1.25rem] bg-[#f0f0f0] p-1.5 font-sans text-sm leading-[1.375rem] text-zinc-500 group-disabled:text-zinc-400">
{number_of_blocks}
</span>
</div>
<span className="line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400">
{description}
</span>
</div>
</Button>
);
};
const IntegrationSkeleton: React.FC<{ className?: string }> = ({
className,
}) => {
return (
<Skeleton
className={cn(
"flex h-16 w-full min-w-[7.5rem] animate-pulse items-center justify-start space-x-3 rounded-[0.75rem] bg-zinc-100 px-[0.875rem] py-[0.625rem]",
className,
)}
>
<Skeleton className="h-[2.625rem] w-[2.625rem] rounded-[0.5rem] bg-zinc-200" />
<div className="flex flex-1 flex-col items-start gap-0.5">
<div className="flex w-full items-center justify-between">
<Skeleton className="h-[1.375rem] w-24 rounded bg-zinc-200" />
<Skeleton className="h-[1.375rem] w-[1.6875rem] rounded-[1.25rem] bg-zinc-200" />
</div>
<Skeleton className="h-5 w-[80%] rounded bg-zinc-200" />
</div>
</Skeleton>
);
};
Integration.Skeleton = IntegrationSkeleton;

View File

@@ -1,60 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { beautifyString, cn } from "@/lib/utils";
import Image from "next/image";
import React, { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
name?: string;
icon_url?: string;
}
interface IntegrationChipComponent extends React.FC<Props> {
Skeleton: React.FC;
}
export const IntegrationChip: IntegrationChipComponent = ({
icon_url,
name,
className,
...rest
}) => {
return (
<Button
className={cn(
"flex h-[3.25rem] w-full min-w-[7.5rem] justify-start gap-2 whitespace-normal rounded-[0.5rem] bg-zinc-50 p-2 pr-3 shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300",
className,
)}
{...rest}
>
<div className="relative h-9 w-9 rounded-[0.5rem] bg-transparent">
{icon_url && (
<Image
src={icon_url}
alt="integration-icon"
fill
sizes="2.25rem"
className="w-full object-contain"
/>
)}
</div>
{name && (
<span className="truncate font-sans text-sm font-normal leading-[1.375rem] text-zinc-800">
{beautifyString(name)}
</span>
)}
</Button>
);
};
const IntegrationChipSkeleton: React.FC = () => {
return (
<Skeleton className="flex h-[3.25rem] w-full min-w-[7.5rem] gap-2 rounded-[0.5rem] bg-zinc-100 p-2 pr-3">
<Skeleton className="h-9 w-12 rounded-[0.5rem] bg-zinc-200" />
<Skeleton className="h-5 w-24 self-center rounded-sm bg-zinc-200" />
</Skeleton>
);
};
IntegrationChip.Skeleton = IntegrationChipSkeleton;

View File

@@ -1,99 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { beautifyString, cn } from "@/lib/utils";
import { Plus } from "lucide-react";
import Image from "next/image";
import React, { ButtonHTMLAttributes } from "react";
import { highlightText } from "./helpers";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
description?: string;
icon_url?: string;
highlightedText?: string;
}
interface IntegrationBlockComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const IntegrationBlock: IntegrationBlockComponent = ({
title,
icon_url,
description,
className,
highlightedText,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-16 w-full min-w-[7.5rem] items-center justify-start gap-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 px-[0.875rem] py-[0.625rem] text-start shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:cursor-not-allowed",
className,
)}
{...rest}
>
<div className="relative h-[2.625rem] w-[2.625rem] rounded-[0.5rem] bg-white">
{icon_url && (
<Image
src={icon_url}
alt="integration-icon"
fill
sizes="2.25rem"
className="w-full object-contain group-disabled:opacity-50"
/>
)}
</div>
<div className="flex flex-1 flex-col items-start gap-0.5">
{title && (
<span
className={cn(
"line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 group-disabled:text-zinc-400",
)}
>
{highlightText(beautifyString(title), highlightedText)}
</span>
)}
{description && (
<span
className={cn(
"line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
{highlightText(description, highlightedText)}
</span>
)}
</div>
<div
className={cn(
"flex h-7 w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
)}
>
<Plus className="h-5 w-5 text-zinc-50" strokeWidth={2} />
</div>
</Button>
);
};
const IntegrationBlockSkeleton = ({ className }: { className?: string }) => {
return (
<Skeleton
className={cn(
"flex h-16 w-full min-w-[7.5rem] animate-pulse items-center justify-start gap-3 rounded-[0.75rem] bg-zinc-100 px-[0.875rem] py-[0.625rem]",
className,
)}
>
<Skeleton className="h-[2.625rem] w-[2.625rem] rounded-[0.5rem] bg-zinc-200" />
<div className="flex flex-1 flex-col items-start gap-0.5">
<Skeleton className="h-[1.375rem] w-24 rounded bg-zinc-200" />
<Skeleton className="h-5 w-32 rounded bg-zinc-200" />
</div>
<Skeleton className="h-7 w-7 rounded-[0.5rem] bg-zinc-200" />
</Skeleton>
);
};
IntegrationBlock.Skeleton = IntegrationBlockSkeleton;

View File

@@ -1,135 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { cn } from "@/lib/utils";
import { ExternalLink, Loader2, Plus } from "lucide-react";
import Image from "next/image";
import React, { ButtonHTMLAttributes } from "react";
import Link from "next/link";
import { highlightText } from "./helpers";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
creator_name?: string;
number_of_runs?: number;
image_url?: string;
highlightedText?: string;
slug: string;
loading: boolean;
}
interface MarketplaceAgentBlockComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const MarketplaceAgentBlock: MarketplaceAgentBlockComponent = ({
title,
image_url,
creator_name,
number_of_runs,
className,
loading,
highlightedText,
slug,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-[4.375rem] w-full min-w-[7.5rem] items-center justify-start gap-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 p-[0.625rem] pr-[0.875rem] text-start shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:pointer-events-none",
className,
)}
{...rest}
>
<div className="relative h-[3.125rem] w-[5.625rem] overflow-hidden rounded-[0.375rem] bg-white">
{image_url && (
<Image
src={image_url}
alt="integration-icon"
fill
sizes="5.625rem"
className="w-full object-contain group-disabled:opacity-50"
/>
)}
</div>
<div className="flex flex-1 flex-col items-start gap-0.5">
{title && (
<span
className={cn(
"line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 group-disabled:text-zinc-400",
)}
>
{highlightText(title, highlightedText)}
</span>
)}
<div className="flex items-center space-x-2.5">
<span
className={cn(
"truncate font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
By {creator_name}
</span>
<span className="font-sans text-zinc-400"></span>
<span
className={cn(
"truncate font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
{number_of_runs} runs
</span>
<span className="font-sans text-zinc-400"></span>
<Link
href={`/marketplace/agent/${creator_name}/${slug}`}
className="flex gap-0.5 truncate"
onClick={(e) => e.stopPropagation()}
>
<span className="font-sans text-xs leading-5 text-blue-700 underline">
Agent page
</span>
<ExternalLink className="h-4 w-4 text-blue-700" strokeWidth={1} />
</Link>
</div>
</div>
<div
className={cn(
"flex h-7 min-w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
)}
>
{!loading ? (
<Plus className="h-5 w-5 text-zinc-50" strokeWidth={2} />
) : (
<Loader2 className="h-5 w-5 animate-spin" />
)}
</div>
</Button>
);
};
const MarketplaceAgentBlockSkeleton: React.FC<{ className?: string }> = ({
className,
}) => {
return (
<Skeleton
className={cn(
"flex h-[4.375rem] w-full min-w-[7.5rem] animate-pulse items-center justify-start gap-3 rounded-[0.75rem] bg-zinc-100 p-[0.625rem] pr-[0.875rem]",
className,
)}
>
<Skeleton className="h-[3.125rem] w-[5.625rem] rounded-[0.375rem] bg-zinc-200" />
<div className="flex flex-1 flex-col items-start gap-0.5">
<Skeleton className="h-[1.375rem] w-24 rounded bg-zinc-200" />
<div className="flex items-center gap-1">
<Skeleton className="h-5 w-16 rounded bg-zinc-200" />
<Skeleton className="h-5 w-16 rounded bg-zinc-200" />
</div>
</div>
<Skeleton className="h-7 w-7 rounded-[0.5rem] bg-zinc-200" />
</Skeleton>
);
};
MarketplaceAgentBlock.Skeleton = MarketplaceAgentBlockSkeleton;

View File

@@ -1,40 +0,0 @@
// BLOCK MENU TODO: We need to add a better hover state to it; currently it's not in the design either.
import { Button } from "@/components/ui/button";
import { cn } from "@/lib/utils";
import React, { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
selected?: boolean;
number?: number;
name?: string;
}
export const MenuItem: React.FC<Props> = ({
selected = false,
number,
name,
className,
...rest
}) => {
return (
<Button
className={cn(
"flex h-[2.375rem] w-[12.875rem] justify-between whitespace-normal rounded-[0.5rem] bg-transparent p-2 pl-3 shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0",
selected && "bg-zinc-100",
className,
)}
{...rest}
>
<span className="truncate font-sans text-sm font-medium leading-[1.375rem] text-zinc-800">
{name}
</span>
{number && (
<span className="font-sans text-sm font-normal leading-[1.375rem] text-zinc-600">
{number > 100 ? "100+" : number}
</span>
)}
</Button>
);
};

View File

@@ -1,110 +0,0 @@
import { Separator } from "@/components/ui/separator";
import { cn } from "@/lib/utils";
import React, { useMemo } from "react";
import { BlockMenu } from "../BlockMenu/BlockMenu";
import { useNewControlPanel } from "./useNewControlPanel";
import { NewSaveControl } from "../SaveControl/NewSaveControl";
import { GraphExecutionID } from "@/lib/autogpt-server-api";
import { history } from "@/components/history";
import { ControlPanelButton } from "../ControlPanelButton";
import { ArrowUUpLeftIcon, ArrowUUpRightIcon } from "@phosphor-icons/react";
export type Control = {
icon: React.ReactNode;
label: string;
disabled?: boolean;
onClick: () => void;
};
interface ControlPanelProps {
className?: string;
flowExecutionID: GraphExecutionID | undefined;
visualizeBeads: "no" | "static" | "animate";
pinSavePopover: boolean;
pinBlocksPopover: boolean;
}
export const NewControlPanel = ({
flowExecutionID,
visualizeBeads,
pinSavePopover,
pinBlocksPopover,
className,
}: ControlPanelProps) => {
const {
blockMenuSelected,
setBlockMenuSelected,
agentDescription,
setAgentDescription,
saveAgent,
agentName,
setAgentName,
savedAgent,
isSaving,
isRunning,
isStopping,
} = useNewControlPanel({ flowExecutionID, visualizeBeads });
const controls: Control[] = useMemo(
() => [
{
label: "Undo",
icon: <ArrowUUpLeftIcon size={20} weight="bold" />,
onClick: history.undo,
disabled: !history.canUndo(),
},
{
label: "Redo",
icon: <ArrowUUpRightIcon size={20} weight="bold" />,
onClick: history.redo,
disabled: !history.canRedo(),
},
],
[]
);
return (
<section
className={cn(
"absolute left-4 top-24 z-10 w-[4.25rem] overflow-hidden rounded-[1rem] border-none bg-white p-0 shadow-[0_1px_5px_0_rgba(0,0,0,0.1)]",
className
)}
>
<div className="flex flex-col items-center justify-center rounded-[1rem] p-0">
<BlockMenu
pinBlocksPopover={pinBlocksPopover}
blockMenuSelected={blockMenuSelected}
setBlockMenuSelected={setBlockMenuSelected}
/>
<Separator className="text-[#E1E1E1]" />
{controls.map((control, index) => (
<ControlPanelButton
key={index}
onClick={() => control.onClick()}
data-id={`control-button-${index}`}
data-testid={`blocks-control-${control.label.toLowerCase()}-button`}
disabled={control.disabled || false}
className="rounded-none"
>
{control.icon}
</ControlPanelButton>
))}
<Separator className="text-[#E1E1E1]" />
<NewSaveControl
agentMeta={savedAgent}
canSave={!isSaving && !isRunning && !isStopping}
onSave={saveAgent}
agentDescription={agentDescription}
onDescriptionChange={setAgentDescription}
agentName={agentName}
onNameChange={setAgentName}
pinSavePopover={pinSavePopover}
blockMenuSelected={blockMenuSelected}
setBlockMenuSelected={setBlockMenuSelected}
/>
</div>
</section>
);
};
export default NewControlPanel;

View File

@@ -1,35 +0,0 @@
import useAgentGraph from "@/hooks/useAgentGraph";
import { GraphExecutionID, GraphID } from "@/lib/autogpt-server-api";
import { useSearchParams } from "next/navigation";
import { useState } from "react";
export interface NewControlPanelProps {
flowExecutionID: GraphExecutionID | undefined;
visualizeBeads: "no" | "static" | "animate";
}
export const useNewControlPanel = ({ flowExecutionID, visualizeBeads }: NewControlPanelProps) => {
const [blockMenuSelected, setBlockMenuSelected] = useState<
"save" | "block" | ""
>("");
const query = useSearchParams();
const _graphVersion = query.get("flowVersion");
const graphVersion = _graphVersion ? parseInt(_graphVersion) : undefined;
const flowID = query.get("flowID") as GraphID | null ?? undefined;
const { agentDescription, setAgentDescription, saveAgent, agentName, setAgentName, savedAgent, isSaving, isRunning, isStopping } = useAgentGraph(flowID, graphVersion, flowExecutionID, visualizeBeads !== "no");
return {
blockMenuSelected,
setBlockMenuSelected,
agentDescription,
setAgentDescription,
saveAgent,
agentName,
setAgentName,
savedAgent,
isSaving,
isRunning,
isStopping,
}
};

View File

@@ -1,17 +0,0 @@
import { SmileySadIcon } from "@phosphor-icons/react";
export const NoSearchResult = () => {
return (
<div className="flex h-full w-full flex-col items-center justify-center text-center">
<SmileySadIcon size={64} className="mb-10 text-zinc-400" />
<div className="space-y-1">
<p className="font-sans text-sm font-medium leading-[1.375rem] text-zinc-800">
No match found
</p>
<p className="font-sans text-sm font-normal leading-[1.375rem] text-zinc-600">
Try adjusting your search terms
</p>
</div>
</div>
);
};

View File

@@ -1,158 +0,0 @@
import React, { useCallback, useEffect } from "react";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { Card, CardContent, CardFooter } from "@/components/ui/card";
import { Input } from "@/components/ui/input";
import { Button } from "@/components/ui/button";
import { GraphMeta } from "@/lib/autogpt-server-api";
import { Label } from "@/components/ui/label";
import { IconSave } from "@/components/ui/icons";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { ControlPanelButton } from "../ControlPanelButton";
interface SaveControlProps {
agentMeta: GraphMeta | null;
agentName: string;
agentDescription: string;
canSave: boolean;
onSave: () => void;
onNameChange: (name: string) => void;
onDescriptionChange: (description: string) => void;
pinSavePopover: boolean;
blockMenuSelected: "save" | "block" | "";
setBlockMenuSelected: React.Dispatch<
React.SetStateAction<"" | "save" | "block">
>;
}
export const NewSaveControl = ({
agentMeta,
canSave,
onSave,
agentName,
onNameChange,
agentDescription,
onDescriptionChange,
blockMenuSelected,
setBlockMenuSelected,
pinSavePopover,
}: SaveControlProps) => {
const handleSave = useCallback(() => {
onSave();
}, [onSave]);
const { toast } = useToast();
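  // Global Ctrl/Cmd+S shortcut: prevent the browser's save dialog, save the agent, and confirm with a toast.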
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
if ((event.ctrlKey || event.metaKey) && event.key === "s") {
event.preventDefault();
handleSave();
toast({
duration: 2000,
title: "All changes saved successfully!",
});
}
};
window.addEventListener("keydown", handleKeyDown);
return () => {
window.removeEventListener("keydown", handleKeyDown);
};
}, [handleSave, toast]);
return (
<Popover
open={pinSavePopover ? true : undefined}
onOpenChange={(open) => open || setBlockMenuSelected("")}
>
<PopoverTrigger>
<ControlPanelButton
data-id="save-control-popover-trigger"
data-testid="blocks-control-save-button"
selected={blockMenuSelected === "save"}
onClick={() => {
setBlockMenuSelected("save");
}}
className="rounded-none"
>
        {/* TODO: replace this Lucide icon with a Phosphor equivalent */}
<IconSave className="h-5 w-5" strokeWidth={2} />
</ControlPanelButton>
</PopoverTrigger>
<PopoverContent
side="right"
sideOffset={16}
align="start"
className="w-[17rem] rounded-xl border-none p-0 shadow-none md:w-[30rem]"
data-id="save-control-popover-content"
>
<Card className="border-none shadow-none dark:bg-slate-900">
<CardContent className="p-4">
<div className="grid gap-3">
<Label htmlFor="name" className="dark:text-gray-300">
Name
</Label>
<Input
id="name"
placeholder="Enter your agent name"
className="col-span-3"
value={agentName}
onChange={(e) => onNameChange(e.target.value)}
data-id="save-control-name-input"
data-testid="save-control-name-input"
maxLength={100}
/>
<Label htmlFor="description" className="dark:text-gray-300">
Description
</Label>
<Input
id="description"
placeholder="Your agent description"
className="col-span-3"
value={agentDescription}
onChange={(e) => onDescriptionChange(e.target.value)}
data-id="save-control-description-input"
data-testid="save-control-description-input"
maxLength={500}
/>
{agentMeta?.version && (
<>
<Label htmlFor="version" className="dark:text-gray-300">
Version
</Label>
<Input
id="version"
placeholder="Version"
className="col-span-3"
value={agentMeta?.version || "-"}
disabled
data-testid="save-control-version-output"
/>
</>
)}
</div>
</CardContent>
<CardFooter className="flex flex-col items-stretch gap-2">
<Button
className="w-full dark:bg-slate-700 dark:text-slate-100 dark:hover:bg-slate-800"
onClick={handleSave}
data-id="save-control-save-agent"
data-testid="save-control-save-agent-button"
disabled={!canSave}
>
Save Agent
</Button>
</CardFooter>
</Card>
</PopoverContent>
</Popover>
);
};

@@ -1,47 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { cn } from "@/lib/utils";
import { ArrowUpRight } from "lucide-react";
import React, { ButtonHTMLAttributes } from "react";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
content?: string;
}
interface SearchHistoryChipComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const SearchHistoryChip: SearchHistoryChipComponent = ({
content,
className,
...rest
}) => {
return (
<Button
className={cn(
"my-[1px] h-[2.25rem] space-x-1 rounded-[1.5rem] bg-zinc-50 p-[0.375rem] pr-[0.625rem] shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300",
className,
)}
{...rest}
>
<ArrowUpRight className="h-6 w-6 text-zinc-500" strokeWidth={1.25} />
<span className="font-sans text-sm font-normal leading-[1.375rem] text-zinc-800">
{content}
</span>
</Button>
);
};
const SearchHistoryChipSkeleton: React.FC<{ className?: string }> = ({
className,
}) => {
return (
<Skeleton
className={cn("h-[2.25rem] w-32 rounded-[1.5rem] bg-zinc-100", className)}
/>
);
};
SearchHistoryChip.Skeleton = SearchHistoryChipSkeleton;
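A usage sketch, assuming recent search terms are kept as plain strings; the wrapper component, its props, and the import path are hypothetical:

import { SearchHistoryChip } from "./SearchHistoryChip";

// Illustrative only: `terms` being undefined stands in for "history still loading".
function RecentSearches({ terms, onSelect }: { terms?: string[]; onSelect: (t: string) => void }) {
  if (!terms) return <SearchHistoryChip.Skeleton />;
  return (
    <div className="flex flex-wrap gap-1">
      {terms.map((term) => (
        <SearchHistoryChip key={term} content={term} onClick={() => onSelect(term)} />
      ))}
    </div>
  );
}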

@@ -1,117 +0,0 @@
import { Button } from "@/components/ui/button";
import { Skeleton } from "@/components/ui/skeleton";
import { cn } from "@/lib/utils";
import { Plus } from "lucide-react";
import Image from "next/image";
import React, { ButtonHTMLAttributes } from "react";
import { highlightText } from "./helpers";
import { formatTimeAgo } from "@/lib/utils/time";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string;
edited_time?: Date;
version?: number;
image_url?: string;
highlightedText?: string;
}
interface UGCAgentBlockComponent extends React.FC<Props> {
Skeleton: React.FC<{ className?: string }>;
}
export const UGCAgentBlock: UGCAgentBlockComponent = ({
title,
image_url,
edited_time = new Date(),
version,
className,
highlightedText,
...rest
}) => {
return (
<Button
className={cn(
"group flex h-[4.375rem] w-full min-w-[7.5rem] items-center justify-start gap-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 p-[0.625rem] pr-[0.875rem] text-start shadow-none",
"hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:cursor-not-allowed",
className,
)}
{...rest}
>
{image_url && (
<div className="relative h-[3.125rem] w-[5.625rem] overflow-hidden rounded-[0.375rem] bg-white">
<Image
src={image_url}
alt="integration-icon"
fill
sizes="5.625rem"
className="w-full object-contain group-disabled:opacity-50"
/>
</div>
)}
<div className="flex flex-1 flex-col items-start gap-0.5">
{title && (
<span
className={cn(
"line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 group-disabled:text-zinc-400",
)}
>
{highlightText(title, highlightedText)}
</span>
)}
<div className="flex items-center space-x-1.5">
{edited_time && (
<span
className={cn(
"line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
Edited {formatTimeAgo(edited_time.toISOString())}
</span>
)}
<span className="font-sans text-zinc-400"></span>
<span
className={cn(
"line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
)}
>
Version {version}
</span>
</div>
</div>
<div
className={cn(
"flex h-7 w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
)}
>
<Plus className="h-5 w-5 text-zinc-50" strokeWidth={2} />
</div>
</Button>
);
};
const UGCAgentBlockSkeleton: React.FC<{ className?: string }> = ({
className,
}) => {
return (
<Skeleton
className={cn(
"flex h-[4.375rem] w-full min-w-[7.5rem] animate-pulse items-center justify-start gap-3 rounded-[0.75rem] bg-zinc-100 p-[0.625rem] pr-[0.875rem]",
className,
)}
>
<Skeleton className="h-[3.125rem] w-[5.625rem] rounded-[0.375rem] bg-zinc-200" />
<div className="flex flex-1 flex-col items-start gap-0.5">
<Skeleton className="h-[1.375rem] w-24 rounded bg-zinc-200" />
<div className="flex items-center gap-1">
<Skeleton className="h-5 w-16 rounded bg-zinc-200" />
<Skeleton className="h-5 w-16 rounded bg-zinc-200" />
</div>
</div>
<Skeleton className="h-7 w-7 rounded-[0.5rem] bg-zinc-200" />
</Skeleton>
);
};
UGCAgentBlock.Skeleton = UGCAgentBlockSkeleton;
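A usage sketch with made-up data; the MyAgent shape, the wrapper component, and the import path are hypothetical and only mirror the props declared above:

import { UGCAgentBlock } from "./UGCAgentBlock";

// Illustrative only: the field names on MyAgent are hypothetical.
type MyAgent = { id: string; name: string; updatedAt: string; version: number; imageUrl?: string };
function UserAgentList({ agents, onAdd }: { agents?: MyAgent[]; onAdd: (id: string) => void }) {
  if (!agents) return <UGCAgentBlock.Skeleton />;
  return (
    <div className="flex flex-col gap-2">
      {agents.map((a) => (
        <UGCAgentBlock
          key={a.id}
          title={a.name}
          edited_time={new Date(a.updatedAt)}
          version={a.version}
          image_url={a.imageUrl}
          onClick={() => onAdd(a.id)}
        />
      ))}
    </div>
  );
}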

@@ -1,22 +0,0 @@
export const highlightText = (
text: string | undefined,
highlight: string | undefined,
) => {
if (!text || !highlight) return text;
function escapeRegExp(s: string) {
return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
const escaped = escapeRegExp(highlight);
const parts = text.split(new RegExp(`(${escaped})`, "gi"));
return parts.map((part, i) =>
part.toLowerCase() === highlight?.toLowerCase() ? (
<mark key={i} className="bg-transparent font-bold">
{part}
</mark>
) : (
part
),
);
};
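A quick sketch of the helper's behaviour on sample inputs (the values are illustrative):

// highlightText("Weather Agent", "agent") renders roughly as:
//   Weather <mark class="bg-transparent font-bold">Agent</mark>
// With no highlight term, the input string is returned unchanged:
const untouched = highlightText("Weather Agent", undefined); // "Weather Agent"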

@@ -1,74 +0,0 @@
"use client";
import { Breadcrumbs } from "@/components/molecules/Breadcrumbs/Breadcrumbs";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { useAgentRunsView } from "./useAgentRunsView";
import { AgentRunsLoading } from "./components/AgentRunsLoading";
import { Button } from "@/components/atoms/Button/Button";
import { Plus } from "@phosphor-icons/react";
export function AgentRunsView() {
const { response, ready, error, agentId } = useAgentRunsView();
// Handle loading state
if (!ready) {
return <AgentRunsLoading />;
}
// Handle errors - check for query error first, then response errors
if (error || (response && response.status !== 200)) {
return (
<ErrorCard
isSuccess={false}
responseError={error || undefined}
httpError={
response?.status !== 200
? {
status: response?.status,
statusText: "Request failed",
}
: undefined
}
context="agent"
onRetry={() => window.location.reload()}
/>
);
}
// Handle missing data
if (!response?.data) {
return (
<ErrorCard
isSuccess={false}
responseError={{ message: "No agent data found" }}
context="agent"
onRetry={() => window.location.reload()}
/>
);
}
const agent = response.data;
return (
<div className="grid h-screen grid-cols-[25%_85%] gap-4 pt-8">
{/* Left Sidebar - 30% */}
<div className="bg-gray-50 p-4">
<Button variant="primary" size="large" className="w-full">
<Plus size={20} /> New Run
</Button>
</div>
      {/* Main Content - 75% */}
<div className="p-4">
<Breadcrumbs
items={[
{ name: "My Library", link: "/library" },
{ name: agent.name, link: `/library/agents/${agentId}` },
]}
/>
{/* Main content will go here */}
<div className="mt-4 text-gray-600">Main content area</div>
</div>
</div>
);
}

@@ -1,25 +0,0 @@
import React from "react";
import { Skeleton } from "@/components/ui/skeleton";
export function AgentRunsLoading() {
return (
<div className="px-6 py-6">
<div className="flex h-screen w-full gap-4">
{/* Left Sidebar */}
<div className="w-80 space-y-4">
<Skeleton className="h-12 w-full" />
<Skeleton className="h-32 w-full" />
<Skeleton className="h-24 w-full" />
</div>
{/* Main Content */}
<div className="flex-1 space-y-4">
<Skeleton className="h-8 w-full" />
<Skeleton className="h-12 w-full" />
<Skeleton className="h-64 w-full" />
<Skeleton className="h-32 w-full" />
</div>
</div>
</div>
);
}

@@ -1,15 +0,0 @@
import { useGetV2GetLibraryAgent } from "@/app/api/__generated__/endpoints/library/library";
import { useParams } from "next/navigation";
export function useAgentRunsView() {
const { id } = useParams();
const agentId = id as string;
const { data: response, isSuccess, error } = useGetV2GetLibraryAgent(agentId);
return {
    agentId,
ready: isSuccess,
error,
response,
};
}

Some files were not shown because too many files have changed in this diff.