Compare commits


6 Commits

Author SHA1 Message Date
Zamil Majdy
5899897d8f refactor: simplify LaunchDarkly context and remove redundant code 2025-08-13 08:29:14 +00:00
Nicholas Tindle
0389c865aa feat(backend): in-progress LaunchDarkly role lookup 2025-08-11 21:01:52 -05:00
Zamil Majdy
b88576d313 refactor(feature-flag): eliminate code duplication in role handling
Extract role and segment logic into reusable _add_role_to_attributes helper function.
This removes the duplicate code between use_user_id_only and full context paths.

- Add `_add_role_to_attributes()` helper function (see the sketch below)
- Remove duplicated role/segment logic in is_feature_enabled()
- Cleaner, more maintainable code with single source of truth
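
A minimal sketch of what the extracted helper might look like (the attribute layout and segment handling here are assumptions, not the actual implementation):

```python
# Hypothetical sketch of the shared helper; real signature and fields may differ.
def _add_role_to_attributes(attributes: dict, role: str | None) -> dict:
    """Attach role and role-derived segment data to a LaunchDarkly context."""
    if role:
        attributes["role"] = role
        # Role-derived segments feed LaunchDarkly segmentMatch rules.
        attributes.setdefault("segments", []).append(role)
    return attributes
```

Both the `use_user_id_only` and full-context paths can then call this one helper instead of duplicating the logic.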

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 12:29:10 +07:00
Zamil Majdy
14634a6ce9 feat(platform): implement application-layer User model with proper type safety
## Core Changes

### Application Layer Separation
- Create application-layer User model with snake_case convention (created_at, email_verified, etc.); see the sketch after this list
- Add validation to prevent Prisma objects crossing service boundaries
- Replace all hasattr/getattr defensive coding with proper typing
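
A hedged sketch of the application-layer model and its converter (the Prisma field names read in `from_db()` are assumptions about the schema):

```python
from datetime import datetime
from pydantic import BaseModel

class User(BaseModel):
    """Application-layer user model: snake_case fields, no Prisma types leak through."""
    id: str
    email: str
    email_verified: bool = False
    created_at: datetime | None = None

    @classmethod
    def from_db(cls, prisma_user: "PrismaUser") -> "User":
        # The camelCase attribute names here are assumed; adjust to the schema.
        return cls(
            id=prisma_user.id,
            email=prisma_user.email,
            email_verified=prisma_user.emailVerified,
            created_at=prisma_user.createdAt,
        )
```

Service boundaries accept and return only this model, so a stray Prisma object fails validation instead of silently crossing layers.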

### HTTP Client Improvements
- Prevent retry of HTTP 4xx client errors (404, 403, 401), which are permanent failures
- Add HTTPClientError and HTTPServerError exception categorization
- Comprehensive test coverage for retry behavior

### LaunchDarkly Integration Fixes
- Fix serialization issues by using proper snake_case application models
- Update feature flag client to use typed User model instead of Any
- Clean JSON parsing with proper imports (JSONDecodeError, json_loads)

### Type Safety Improvements
- Replace Any type annotations with proper PrismaUser typing
- Use AutoTopUpConfig class directly instead of generic dict
- Remove defensive hasattr() calls with direct attribute access
- Achieve zero format errors across entire codebase

## Files Modified
- backend/data/model.py: New application User model with from_db() converter
- backend/util/service.py: HTTP retry logic + Prisma validation
- backend/data/user.py: Updated to return application models
- autogpt_libs/feature_flag/client.py: Type-safe LaunchDarkly integration
- Multiple test files: Migrated to use application User model

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 12:26:30 +07:00
Zamil Majdy
10a402a766 fix(service): prevent retry of HTTP 4xx client errors
## Problem
The service client retry logic was incorrectly retrying HTTP 4xx client errors
(404, 403, 401, etc.), which should never be retried since they represent
permanent client-side issues that won't be resolved by retrying.

## Solution
- **Added HTTPClientError and HTTPServerError exceptions** to categorize HTTP errors
- **Modified retry exclusions** to include HTTPClientError in the exclude_exceptions list
- **Enhanced error handling** to wrap 4xx errors in HTTPClientError (non-retryable) and 5xx errors in HTTPServerError (retryable)
- **Preserved mapped exceptions** - When the server returns a properly formatted RemoteCallError with a mapped exception type (ValueError, etc.), that original exception is re-raised regardless of HTTP status

## Changes Made
- **New Exception Classes**: Added HTTPClientError and HTTPServerError with status code tracking (see the sketch after this list)
- **Improved Error Categorization**: HTTP errors are now properly categorized by status code:
  - 4xx → HTTPClientError (excluded from retries)
  - 5xx → HTTPServerError (can be retried)
  - Mapped exceptions (ValueError, etc.) → Original exception type preserved
- **Clean Logic Flow**: Simplified exception handling logic to check for mapped exceptions first, then fall back to HTTP status categorization
- **Comprehensive Tests**: Added TestHTTPErrorRetryBehavior with coverage for various status codes and exception mapping
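
An illustrative sketch of the categorization (class names follow this commit; the actual retry wiring is simplified away):

```python
class HTTPClientError(Exception):
    """4xx: permanent client-side failure; excluded from retries."""
    def __init__(self, status_code: int, message: str = ""):
        super().__init__(f"HTTP {status_code}: {message}")
        self.status_code = status_code

class HTTPServerError(Exception):
    """5xx: transient server-side failure; eligible for retry."""
    def __init__(self, status_code: int, message: str = ""):
        super().__init__(f"HTTP {status_code}: {message}")
        self.status_code = status_code

def categorize_http_error(status_code: int, message: str = "") -> Exception:
    # Mapped exceptions (ValueError, etc.) are checked before this fallback.
    if 400 <= status_code < 500:
        return HTTPClientError(status_code, message)
    return HTTPServerError(status_code, message)
```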

## Benefits
- **No more wasted retry attempts** on permanent client errors (404, 403, etc.)
- **Faster error handling** for client errors since they fail immediately
- **Preserved compatibility** with existing exception mapping system
- **Better resource utilization** by avoiding unnecessary retry delays
- **Cleaner logs** with fewer spurious retry warnings

## Files Modified
- `backend/util/service.py`: Core retry logic and error handling improvements
- `backend/util/service_test.py`: Comprehensive test coverage for HTTP error retry behavior

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 11:54:25 +07:00
Zamil Majdy
004011726d feat(feature-flag): add LaunchDarkly segment support with 24h caching
## Summary
- Enhanced LaunchDarkly feature flag system to support context targeting with user segments
- Added 24-hour TTL caching for user context data to improve performance
- Implemented comprehensive user segmentation logic including role-based, domain-based, and account age segments

## Key Changes Made

### LaunchDarkly Client Enhancement
- **Segment Support**: Added full user context with segments for LaunchDarkly's contextTargets and segmentMatch rules
- **24-Hour Caching**: Implemented TTL cache with 86400 seconds (24 hours) to reduce database calls
- **Clean Architecture**: Uses existing `get_user_by_id` function following credentials_store.py pattern
- **Async-Only**: Removed sync support, made all functions async for consistency

### Segmentation Logic
- **Role-based**: user, admin, system segments (see the combined sketch after this list)
- **Domain-based**: employee segment for agpt.co emails, external for others
- **Account age**: new_user (<7 days), recent_user (7-30 days), established_user (>30 days)
- **Custom metadata**: Supports additional segments from user metadata
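
A hypothetical helper combining the three rule families above (thresholds follow the bullets; `compute_segments` is an illustrative name, not the shipped one):

```python
from datetime import datetime, timezone

def compute_segments(role: str, email: str, created_at: datetime) -> list[str]:
    segments = [role]  # role-based: user / admin / system
    # Domain-based: agpt.co addresses are employees, everyone else is external.
    segments.append("employee" if email.endswith("@agpt.co") else "external")
    # Account age (assumes a timezone-aware created_at).
    age_days = (datetime.now(timezone.utc) - created_at).days
    if age_days < 7:
        segments.append("new_user")
    elif age_days <= 30:
        segments.append("recent_user")
    else:
        segments.append("established_user")
    return segments
```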

### Files Modified
- `autogpt_libs/feature_flag/client.py`: Main implementation with segment support and caching
- `autogpt_libs/utils/cache.py`: Added async TTL cache decorator with Protocol typing
- `autogpt_libs/utils/cache_test.py`: Comprehensive tests for TTL and process-level caching
- `backend/executor/database.py`: Exposed `get_user_by_id` for RPC support

## Technical Details
- Uses `@async_ttl_cache(maxsize=1000, ttl_seconds=86400)` for efficient caching (sketched after this list)
- Fallback to simple user_id context if database unavailable
- Proper exception handling without dangerous swallowing
- Strong typing with Protocol for cache functions
- Backwards compatible with `use_user_id_only` parameter
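
A minimal sketch of such a decorator and its usage (the shipped version uses Protocol typing and may evict differently):

```python
import time
from functools import wraps

def async_ttl_cache(maxsize: int = 1000, ttl_seconds: int = 86400):
    def decorator(fn):
        cache: dict = {}

        @wraps(fn)
        async def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]  # fresh cached value
            result = await fn(*args)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # evict the oldest insertion
            cache[args] = (result, now)
            return result

        return wrapper
    return decorator

@async_ttl_cache(maxsize=1000, ttl_seconds=86400)
async def get_user_context(user_id: str) -> dict:
    ...  # would fetch the user and build the LaunchDarkly context
```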

## Test Plan
- [x] All cache tests pass (TTL and process-level)
- [x] Feature flag evaluation works with segments
- [x] Database RPC calls work correctly
- [x] 24-hour cache reduces database load
- [x] Fallback to simple context when database unavailable

## Impact
- **Better targeting**: LaunchDarkly rules with segments now work correctly
- **Performance**: 24-hour cache dramatically reduces database calls
- **Maintainability**: Clean separation using existing database functions
- **Reliability**: Proper error handling and fallbacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 11:27:38 +07:00
967 changed files with 32335 additions and 43833 deletions

View File

@@ -15,7 +15,6 @@
!autogpt_platform/backend/pyproject.toml
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
# Platform - Market
!autogpt_platform/market/market/
@@ -28,7 +27,6 @@
# Platform - Frontend
!autogpt_platform/frontend/src/
!autogpt_platform/frontend/public/
!autogpt_platform/frontend/scripts/
!autogpt_platform/frontend/package.json
!autogpt_platform/frontend/pnpm-lock.yaml
!autogpt_platform/frontend/tsconfig.json
@@ -36,7 +34,6 @@
## config
!autogpt_platform/frontend/*.config.*
!autogpt_platform/frontend/.env.*
!autogpt_platform/frontend/.env
# Classic - AutoGPT
!classic/original_autogpt/autogpt/

View File

@@ -24,8 +24,7 @@
</details>
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

View File

@@ -1,244 +0,0 @@
# GitHub Copilot Instructions for AutoGPT
This file provides comprehensive onboarding information for GitHub Copilot coding agent to work efficiently with the AutoGPT repository.
## Repository Overview
**AutoGPT** is a powerful platform for creating, deploying, and managing continuous AI agents that automate complex workflows. This is a large monorepo (~150MB) containing multiple components:
- **AutoGPT Platform** (`autogpt_platform/`) - Main focus: Modern AI agent platform (Polyform Shield License)
- **Classic AutoGPT** (`classic/`) - Legacy agent system (MIT License)
- **Documentation** (`docs/`) - MkDocs-based documentation site
- **Infrastructure** - Docker configurations, CI/CD, and development tools
**Primary Languages & Frameworks:**
- **Backend**: Python 3.10-3.13, FastAPI, Prisma ORM, PostgreSQL, RabbitMQ
- **Frontend**: TypeScript, Next.js 15, React, Tailwind CSS, Radix UI
- **Development**: Docker, Poetry, pnpm, Playwright, Storybook
## Build and Validation Instructions
### Essential Setup Commands
**Always run these commands in the correct directory and in this order:**
1. **Initial Setup** (required once):
```bash
# Clone and enter repository
git clone <repo> && cd AutoGPT
# Start all services (database, redis, rabbitmq, clamav)
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
2. **Backend Setup** (always run before backend development):
```bash
cd autogpt_platform/backend
poetry install # Install dependencies
poetry run prisma migrate dev # Run database migrations
poetry run prisma generate # Generate Prisma client
```
3. **Frontend Setup** (always run before frontend development):
```bash
cd autogpt_platform/frontend
pnpm install # Install dependencies
```
### Runtime Requirements
**Critical:** Always ensure Docker services are running before starting development:
```bash
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
**Python Version:** Use Python 3.11 (required; managed by Poetry via pyproject.toml)
**Node.js Version:** Use Node.js 21+ with pnpm package manager
### Development Commands
**Backend Development:**
```bash
cd autogpt_platform/backend
poetry run serve # Start development server (port 8000)
poetry run test # Run all tests (requires ~5 minutes)
poetry run pytest path/to/test.py # Run specific test
poetry run format # Format code (Black + isort) - always run first
poetry run lint # Lint code (ruff) - run after format
```
**Frontend Development:**
```bash
cd autogpt_platform/frontend
pnpm dev # Start development server (port 3000) - use for active development
pnpm build # Build for production (only needed for E2E tests or deployment)
pnpm test # Run Playwright E2E tests (requires build first)
pnpm test-ui # Run tests with UI
pnpm format # Format and lint code
pnpm storybook # Start component development server
```
### Testing Strategy
**Backend Tests:**
- **Block Tests**: `poetry run pytest backend/blocks/test/test_block.py -xvs` (validates all blocks)
- **Specific Block**: `poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[BlockName]' -xvs`
- **Snapshot Tests**: Use `--snapshot-update` when output changes, always review with `git diff`
**Frontend Tests:**
- **E2E Tests**: Always run `pnpm dev` before `pnpm test` (Playwright requires running instance)
- **Component Tests**: Use Storybook for isolated component development
### Critical Validation Steps
**Before committing changes:**
1. Run `poetry run format` (backend) and `pnpm format` (frontend)
2. Ensure all tests pass in modified areas
3. Verify Docker services are still running
4. Check that database migrations apply cleanly
**Common Issues & Workarounds:**
- **Prisma issues**: Run `poetry run prisma generate` after schema changes
- **Permission errors**: Ensure Docker has proper permissions
- **Port conflicts**: Check the `docker-compose.yml` file for the current list of exposed ports; `docker compose ps` lists all mapped ports
- **Test timeouts**: Backend tests can take 5+ minutes, use `-x` flag to stop on first failure
## Project Layout & Architecture
### Core Architecture
**AutoGPT Platform** (`autogpt_platform/`):
- `backend/` - FastAPI server with async support
- `backend/backend/` - Core API logic
- `backend/blocks/` - Agent execution blocks
- `backend/data/` - Database models and schemas
- `schema.prisma` - Database schema definition
- `frontend/` - Next.js application
- `src/app/` - App Router pages and layouts
- `src/components/` - Reusable React components
- `src/lib/` - Utilities and configurations
- `autogpt_libs/` - Shared Python utilities
- `docker-compose.yml` - Development stack orchestration
**Key Configuration Files:**
- `pyproject.toml` - Python dependencies and tooling
- `package.json` - Node.js dependencies and scripts
- `schema.prisma` - Database schema and migrations
- `next.config.mjs` - Next.js configuration
- `tailwind.config.ts` - Styling configuration
### Security & Middleware
**Cache Protection**: Backend includes middleware preventing sensitive data caching in browsers/proxies
**Authentication**: JWT-based with Supabase integration
**User ID Validation**: All data access requires user ID checks - verify this for any `data/*.py` changes
### Development Workflow
**GitHub Actions**: Multiple CI/CD workflows in `.github/workflows/`
- `platform-backend-ci.yml` - Backend testing and validation
- `platform-frontend-ci.yml` - Frontend testing and validation
- `platform-fullstack-ci.yml` - End-to-end integration tests
**Pre-commit Hooks**: Run linting and formatting checks
**Conventional Commits**: Use format `type(scope): description` (e.g., `feat(backend): add API`)
### Key Source Files
**Backend Entry Points:**
- `backend/backend/server/server.py` - FastAPI application setup
- `backend/backend/data/` - Database models and user management
- `backend/blocks/` - Agent execution blocks and logic
**Frontend Entry Points:**
- `frontend/src/app/layout.tsx` - Root application layout
- `frontend/src/app/page.tsx` - Home page
- `frontend/src/lib/supabase/` - Authentication and database client
**Protected Routes**: Update `frontend/lib/supabase/middleware.ts` when adding protected routes
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include (a minimal sketch follows this list):
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
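A minimal sketch of a block under these conventions (the import path and constructor arguments mirror existing blocks but should be verified against `backend/blocks/`; `ReverseTextBlock` is a hypothetical example):
```python
from backend.data.block import Block, BlockOutput, BlockSchema  # assumed path

class ReverseTextBlock(Block):
    class Input(BlockSchema):
        text: str

    class Output(BlockSchema):
        reversed_text: str

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # generate with uuid.uuid4()
            input_schema=ReverseTextBlock.Input,
            output_schema=ReverseTextBlock.Output,
        )

    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Yield (output_name, value) pairs; wrap failures in proper errors.
        yield "reversed_text", input_data.text[::-1]
```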
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
## Environment Configuration
### Configuration Files Priority Order
1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides)
2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides)
3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides)
4. Docker Compose `environment:` sections override file-based config
5. Shell environment variables have highest precedence
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Copy `.env.default` files to `.env` for local development customization
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
4. Generate block UUID using `uuid.uuid4()`
5. Register in block registry
6. Write tests alongside block implementation
7. Consider how inputs/outputs connect with other blocks in graph editor
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
4. For `data/*.py` changes, validate user ID checks (see the route sketch after this list)
5. Run `poetry run test` to verify changes
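A hedged route sketch illustrating step 4's user-scoped access (the auth dependency and data-layer call are stand-ins, not the real names):
```python
from fastapi import APIRouter, Depends, HTTPException

router = APIRouter()

async def get_user_id() -> str:
    # Stand-in for the real JWT-based auth dependency.
    return "user-123"

@router.get("/graphs/{graph_id}")
async def get_graph(graph_id: str, user_id: str = Depends(get_user_id)):
    # Every data access is scoped by user_id; this is the check to validate.
    graph = await fetch_graph(graph_id, user_id=user_id)
    if graph is None:
        raise HTTPException(status_code=404, detail="Graph not found")
    return graph

async def fetch_graph(graph_id: str, user_id: str):
    ...  # would query the data layer, filtering on user_id
```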
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
### Security Guidelines
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`; a sketch follows this list):
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)
- Prevents sensitive data caching in browsers/proxies
- Add new cacheable endpoints to `CACHEABLE_PATHS`
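A hedged sketch of the allow-list behavior (the paths listed here are illustrative, not the real `CACHEABLE_PATHS` contents):
```python
from starlette.middleware.base import BaseHTTPMiddleware

CACHEABLE_PATHS = ("/health", "/static")  # illustrative entries only

class CacheProtectionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        # Default-deny: anything not explicitly cacheable gets no-store headers.
        if not request.url.path.startswith(CACHEABLE_PATHS):
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```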
### CI/CD Alignment
The repository has comprehensive CI workflows that test:
- **Backend**: Python 3.11-3.13, services (Redis/RabbitMQ/ClamAV), Prisma migrations, Poetry lock validation
- **Frontend**: Node.js 21, pnpm, Playwright with Docker Compose stack, API schema validation
- **Integration**: Full-stack type checking and E2E testing
Match these patterns when developing locally - the copilot setup environment mirrors these CI configurations.
## Collaboration with Other AI Assistants
This repository is actively developed with assistance from Claude (via CLAUDE.md files). When working on this codebase:
- Check for existing CLAUDE.md files that provide additional context
- Follow established patterns and conventions already in the codebase
- Maintain consistency with existing code style and architecture
- Consider that changes may be reviewed and extended by both human developers and AI assistants
## Trust These Instructions
These instructions are comprehensive and tested. Only perform additional searches if:
1. Information here is incomplete for your specific task
2. You encounter errors not covered by the workarounds
3. You need to understand implementation details not covered above
For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root.

View File

@@ -1,97 +0,0 @@
name: Auto Fix CI Failures
on:
workflow_run:
workflows: ["CI"]
types:
- completed
permissions:
contents: write
pull-requests: write
actions: read
issues: write
id-token: write # Required for OIDC token exchange
jobs:
auto-fix:
if: |
github.event.workflow_run.conclusion == 'failure' &&
github.event.workflow_run.pull_requests[0] &&
!startsWith(github.event.workflow_run.head_branch, 'claude-auto-fix-ci-')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup git identity
run: |
git config --global user.email "claude[bot]@users.noreply.github.com"
git config --global user.name "claude[bot]"
- name: Create fix branch
id: branch
run: |
BRANCH_NAME="claude-auto-fix-ci-${{ github.event.workflow_run.head_branch }}-${{ github.run_id }}"
git checkout -b "$BRANCH_NAME"
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
- name: Get CI failure details
id: failure_details
uses: actions/github-script@v7
with:
script: |
const run = await github.rest.actions.getWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const failedJobs = jobs.data.jobs.filter(job => job.conclusion === 'failure');
let errorLogs = [];
for (const job of failedJobs) {
const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
job_id: job.id
});
errorLogs.push({
jobName: job.name,
logs: logs.data
});
}
return {
runUrl: run.data.html_url,
failedJobs: failedJobs.map(j => j.name),
errorLogs: errorLogs
};
- name: Fix CI failures with Claude
id: claude
uses: anthropics/claude-code-action@v1
with:
prompt: |
/fix-ci
Failed CI Run: ${{ fromJSON(steps.failure_details.outputs.result).runUrl }}
Failed Jobs: ${{ join(fromJSON(steps.failure_details.outputs.result).failedJobs, ', ') }}
PR Number: ${{ github.event.workflow_run.pull_requests[0].number }}
Branch Name: ${{ steps.branch.outputs.branch_name }}
Base Branch: ${{ github.event.workflow_run.head_branch }}
Repository: ${{ github.repository }}
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

View File

@@ -1,379 +0,0 @@
# Claude Dependabot PR Review Workflow
#
# This workflow automatically runs Claude analysis on Dependabot PRs to:
# - Identify dependency changes and their versions
# - Look up changelogs for updated packages
# - Assess breaking changes and security impacts
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured
name: Claude Dependabot PR Review
on:
pull_request:
types: [opened, synchronize]
jobs:
dependabot-review:
# Only run on Dependabot PRs
if: github.actor == 'dependabot[bot]'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Dependabot Analysis
id: claude_review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |
You are Claude, an AI assistant specialized in reviewing Dependabot dependency update PRs.
Your primary tasks are:
1. **Analyze the dependency changes** in this Dependabot PR
2. **Look up changelogs** for all updated dependencies to understand what changed
3. **Identify breaking changes** and assess potential impact on the AutoGPT codebase
4. **Provide actionable recommendations** for the development team
## Analysis Process:
1. **Identify Changed Dependencies**:
- Use git diff to see what dependencies were updated
- Parse package.json, poetry.lock, requirements files, etc.
- List all package versions: old → new
2. **Changelog Research**:
- For each updated dependency, look up its changelog/release notes
- Use WebFetch to access GitHub releases, NPM package pages, and PyPI project pages; the PR description should also contain some details
- Focus on versions between the old and new versions
- Identify: breaking changes, deprecations, security fixes, new features
3. **Breaking Change Assessment**:
- Categorize changes: BREAKING, MAJOR, MINOR, PATCH, SECURITY
- Assess impact on AutoGPT's usage patterns
- Check if AutoGPT uses affected APIs/features
- Look for migration guides or upgrade instructions
4. **Codebase Impact Analysis**:
- Search the AutoGPT codebase for usage of changed APIs
- Identify files that might be affected by breaking changes
- Check test files for deprecated usage patterns
- Look for configuration changes needed
## Output Format:
Provide a comprehensive review comment with:
### 🔍 Dependency Analysis Summary
- List of updated packages with version changes
- Overall risk assessment (LOW/MEDIUM/HIGH)
### 📋 Detailed Changelog Review
For each updated dependency:
- **Package**: name (old_version → new_version)
- **Changes**: Summary of key changes
- **Breaking Changes**: List any breaking changes
- **Security Fixes**: Note security improvements
- **Migration Notes**: Any upgrade steps needed
### ⚠️ Impact Assessment
- **Breaking Changes Found**: Yes/No with details
- **Affected Files**: List AutoGPT files that may need updates
- **Test Impact**: Any tests that may need updating
- **Configuration Changes**: Required config updates
### 🛠️ Recommendations
- **Action Required**: What the team should do
- **Testing Focus**: Areas to test thoroughly
- **Follow-up Tasks**: Any additional work needed
- **Merge Recommendation**: APPROVE/REVIEW_NEEDED/HOLD
### 📚 Useful Links
- Links to relevant changelogs, migration guides, documentation
Be thorough but concise. Focus on actionable insights that help the development team make informed decisions about the dependency updates.

View File

@@ -30,296 +30,18 @@ jobs:
github.event.issue.author_association == 'COLLABORATOR'
)
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: write
contents: read
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout code
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@v1
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus
additional_permissions: |
actions: read

View File

@@ -1,302 +0,0 @@
name: "Copilot Setup Steps"
# Automatically run the setup steps when they are changed to allow for easy validation, and
# allow manual testing through the repository's "Actions" tab
on:
workflow_dispatch:
push:
paths:
- .github/workflows/copilot-setup-steps.yml
pull_request:
paths:
- .github/workflows/copilot-setup-steps.yml
jobs:
# The job MUST be called `copilot-setup-steps` or it will not be picked up by Copilot.
copilot-setup-steps:
runs-on: ubuntu-latest
timeout-minutes: 45
# Set the permissions to the lowest permissions possible needed for your steps.
# Copilot will be given its own token for its operations.
permissions:
# If you want to clone the repository as part of your setup steps, for example to install dependencies, you'll need the `contents: read` permission. If you don't clone the repository in your setup steps, Copilot will do this for you automatically after the steps complete.
contents: read
# You can define any steps you want, and they will run before the agent starts.
# If you do not check out your code, Copilot will do this for you.
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"

View File

@@ -32,7 +32,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.11", "3.12", "3.13"]
python-version: ["3.11"]
runs-on: ubuntu-latest
services:
@@ -201,7 +201,7 @@ jobs:
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
SUPABASE_JWT_SECRET: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"

View File

@@ -82,6 +82,37 @@ jobs:
- name: Run lint
run: pnpm lint
type-check:
runs-on: ubuntu-latest
needs: setup
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run tsc check
run: pnpm type-check
chromatic:
runs-on: ubuntu-latest
needs: setup
@@ -145,7 +176,11 @@ jobs:
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
cp ../.env.example ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.example ../backend/.env
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -160,7 +195,7 @@ jobs:
- name: Run docker compose
run: |
NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
docker compose -f ../docker-compose.yml up -d
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache
@@ -217,6 +252,15 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.example .env
- name: Build frontend
run: pnpm build --turbo
# uses Turbopack, much faster and safe enough for a test pipeline
env:
NEXT_PUBLIC_PW_TEST: true
- name: Install Browser 'chromium'
run: pnpm playwright install --with-deps chromium

View File

@@ -1,132 +0,0 @@
name: AutoGPT Platform - Frontend CI
on:
push:
branches: [master, dev]
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
pull_request:
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
merge_group:
defaults:
run:
shell: bash
working-directory: autogpt_platform/frontend
jobs:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
types:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.default ../backend/.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.default .env
- name: Wait for services to be ready
run: |
echo "Waiting for rest_server to be ready..."
timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..."
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..."
- name: Generate API queries
run: pnpm generate:api:force
- name: Check for API schema changes
run: |
if ! git diff --exit-code src/app/api/openapi.json; then
echo "❌ API schema changes detected in src/app/api/openapi.json"
echo ""
echo "The openapi.json file has been modified after running 'pnpm generate:api-all'."
echo "This usually means changes have been made in the BE endpoints without updating the Frontend."
echo "The API schema is now out of sync with the Front-end queries."
echo ""
echo "To fix this:"
echo "1. Pull the backend 'docker compose pull && docker compose up -d --build --force-recreate'"
echo "2. Run 'pnpm generate:api' locally"
echo "3. Run 'pnpm types' locally"
echo "4. Fix any TypeScript errors that may have been introduced"
echo "5. Commit and push your changes"
echo ""
exit 1
else
echo "✅ No API schema changes detected"
fi
- name: Run Typescript checks
run: pnpm types

.gitignore vendored
View File

@@ -5,8 +5,6 @@ classic/original_autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
# Root .env files
/.env
azure.yaml
.vscode
.idea/*
@@ -123,6 +121,7 @@ celerybeat.pid
# Environments
.direnv/
.env
.venv
env/
venv*/

View File

@@ -235,7 +235,7 @@ repos:
hooks:
- id: tsc
name: Typecheck - AutoGPT Platform - Frontend
entry: bash -c 'cd autogpt_platform/frontend && pnpm types'
entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
files: ^autogpt_platform/frontend/
types: [file]
language: system

View File

@@ -3,16 +3,6 @@
[![Discord Follow](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fautogpt%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&label=total%20members&logo=discord&logoColor=white&color=7289da)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
<!-- Keep these links. Translations will automatically update with the README. -->
[Deutsch](https://zdoc.app/de/Significant-Gravitas/AutoGPT) |
[Español](https://zdoc.app/es/Significant-Gravitas/AutoGPT) |
[français](https://zdoc.app/fr/Significant-Gravitas/AutoGPT) |
[日本語](https://zdoc.app/ja/Significant-Gravitas/AutoGPT) |
[한국어](https://zdoc.app/ko/Significant-Gravitas/AutoGPT) |
[Português](https://zdoc.app/pt/Significant-Gravitas/AutoGPT) |
[Русский](https://zdoc.app/ru/Significant-Gravitas/AutoGPT) |
[中文](https://zdoc.app/zh/Significant-Gravitas/AutoGPT)
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options

View File

@@ -1,11 +1,9 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
@@ -13,7 +11,6 @@ AutoGPT Platform is a monorepo containing:
## Essential Commands
### Backend Development
```bash
# Install dependencies
cd backend && poetry install
@@ -44,7 +41,6 @@ poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetC
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in TESTING.md
#### Creating/Updating Snapshots
@@ -57,35 +53,31 @@ poetry run pytest path/to/test.py --snapshot-update
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Frontend Development
```bash
# Install dependencies
cd frontend && pnpm i
cd frontend && npm install
# Start development server
pnpm dev
npm run dev
# Run E2E tests
pnpm test
npm run test
# Run Storybook for component development
pnpm storybook
npm run storybook
# Build production
pnpm build
npm run build
# Type checking
pnpm types
npm run type-check
```
We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components.
## Architecture Overview
### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
@@ -94,7 +86,6 @@ We have a components library in autogpt_platform/frontend/src/components/atoms t
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
@@ -102,7 +93,6 @@ We have a components library in autogpt_platform/frontend/src/components/atoms t
- **Feature Flags**: LaunchDarkly integration
### Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
@@ -110,16 +100,13 @@ We have a components library in autogpt_platform/frontend/src/components/atoms t
5. **Virus Scanning**: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema
Key models (defined in `/backend/schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
@@ -127,59 +114,30 @@ Key models (defined in `/backend/schema.prisma`):
- `StoreListing`: Marketplace listings for sharing agents
### Environment Configuration
#### Configuration Files
- **Backend**: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- **Frontend**: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- **Platform**: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)
#### Docker Environment Loading Order
1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence
#### Key Points
- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services
### Common Development Tasks
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
- Testing and validation
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
4. Define input/output schemas using `BlockSchema`
5. Implement async `run` method
6. Generate unique block ID using `uuid.uuid4()`
7. Test with `poetry run pytest backend/blocks/test/test_block.py`
2. Inherit from `Block` base class
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
Note: when making many new blocks, analyze the interfaces of each block and consider whether they would work well together in a graph-based editor or struggle to connect productively.
ex: do the inputs and outputs tie well together?
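For orientation, here is a hypothetical minimal block following the quick steps above. Every name below (the import path, the constructor arguments, the yield-based output convention) is an assumption for illustration only, not the actual SDK surface — consult the Block SDK Guide for the real interfaces.
```python
# Hypothetical sketch only; real blocks follow the Block SDK Guide.
import uuid

# Assumed import path — verify against the SDK guide.
from backend.data.block import Block, BlockSchema


class EchoBlock(Block):
    """Toy block: emits its input unchanged, to show the schema/run shape."""

    class Input(BlockSchema):
        text: str

    class Output(BlockSchema):
        text: str

    def __init__(self):
        super().__init__(
            # In a real block, generate this once with uuid.uuid4() and hardcode it.
            id=str(uuid.uuid4()),
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs):
        # Assumed convention: yield (output_name, value) pairs.
        yield "text", input_data.text
```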
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
@@ -188,7 +146,6 @@ ex: do the inputs and outputs tie well together?
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
@@ -197,20 +154,14 @@ ex: do the inputs and outputs tie well together?
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
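Purely as an illustrative sketch (the `CACHEABLE_PATHS` name comes from the description above; everything else is assumed), an allow-list cache-control middleware in Starlette terms could look like:
```python
# Illustrative sketch only — the real implementation is in
# /backend/backend/server/middleware/security.py and may differ.
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

CACHEABLE_PATHS = {"/static/logo.png"}  # hypothetical allow-list entry


class CacheProtectionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        # Default-deny: forbid caching unless the path is explicitly allowed.
        if request.url.path not in CACHEABLE_PATHS:
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```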
### Creating Pull Requests
- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`).
- Use conventional commit messages (see below).
- Fill out the .github/PULL_REQUEST_TEMPLATE.md template as the PR description.
- Run the GitHub pre-commit hooks to ensure code quality.
### Reviewing/Revising Pull Requests
- When the user runs /pr-comments or tries to fetch them, also run gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews to get the reviews
- Use gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews/[review_id]/comments to get the review contents
- Use gh api /repos/Significant-Gravitas/AutoGPT/issues/9924/comments to get the PR-specific comments
### Conventional Commits
Use this format for commit messages and Pull Request titles:

View File

@@ -8,6 +8,7 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
@@ -23,10 +24,10 @@ To run the AutoGPT Platform, follow these steps:
2. Run the following command:
```
cp .env.default .env
cp .env.example .env
```
This command will copy the `.env.default` file to `.env`. You can modify the `.env` file to add your own environment variables.
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
@@ -36,7 +37,44 @@ To run the AutoGPT Platform, follow these steps:
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Run the following command:
Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Generate the API client (this step is required before running the frontend):
```
pnpm generate:api-client
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands
@@ -139,21 +177,20 @@ The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api`: Runs both fetch and generate commands in sequence
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api
pnpm generate:api-all
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.

View File

@@ -0,0 +1,35 @@
import hashlib
import secrets
from typing import NamedTuple
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
raw: str
prefix: str
postfix: str
hash: str
class APIKeyManager:
PREFIX: str = "agpt_"
PREFIX_LENGTH: int = 8
POSTFIX_LENGTH: int = 8
def generate_api_key(self) -> APIKeyContainer:
"""Generate a new API key with all its parts."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
return APIKeyContainer(
raw=raw_key,
prefix=raw_key[: self.PREFIX_LENGTH],
postfix=raw_key[-self.POSTFIX_LENGTH :],
hash=hashlib.sha256(raw_key.encode()).hexdigest(),
)
def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)
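A brief usage sketch of the class above (the key value is random; only the hash should be persisted):
```python
# Assumes the APIKeyManager class defined above is in scope.
manager = APIKeyManager()

# Issue a key: show the raw key to the user once, persist only the hash
# (prefix/postfix can be stored for display, e.g. "agpt_abc…wxyz").
key = manager.generate_api_key()
stored_hash = key.hash

# Later, verify a client-supplied key against the stored hash.
assert manager.verify_api_key(key.raw, stored_hash) is True
assert manager.verify_api_key("agpt_not-the-right-key", stored_hash) is False
```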

View File

@@ -1,78 +0,0 @@
import hashlib
import secrets
from typing import NamedTuple
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
key: str
head: str
tail: str
hash: str
salt: str
class APIKeySmith:
PREFIX: str = "agpt_"
HEAD_LENGTH: int = 8
TAIL_LENGTH: int = 8
def generate_key(self) -> APIKeyContainer:
"""Generate a new API key with secure hashing."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
hash, salt = self.hash_key(raw_key)
return APIKeyContainer(
key=raw_key,
head=raw_key[: self.HEAD_LENGTH],
tail=raw_key[-self.TAIL_LENGTH :],
hash=hash,
salt=salt,
)
def verify_key(
self, provided_key: str, known_hash: str, known_salt: str | None = None
) -> bool:
"""
Verify an API key against a known hash (+ salt).
Supports verifying both legacy SHA256 and secure Scrypt hashes.
"""
if not provided_key.startswith(self.PREFIX):
return False
# Handle legacy SHA256 hashes (migration support)
if known_salt is None:
legacy_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(legacy_hash, known_hash)
try:
salt_bytes = bytes.fromhex(known_salt)
provided_hash = self._hash_key_with_salt(provided_key, salt_bytes)
return secrets.compare_digest(provided_hash, known_hash)
except (ValueError, TypeError):
return False
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()
def _generate_salt(self) -> bytes:
"""Generate a random salt for hashing."""
return secrets.token_bytes(32)
def _hash_key_with_salt(self, raw_key: str, salt: bytes) -> str:
"""Hash API key using Scrypt with salt."""
kdf = Scrypt(
length=32,
salt=salt,
n=2**14, # CPU/memory cost parameter
r=8, # Block size parameter
p=1, # Parallelization parameter
)
key_hash = kdf.derive(raw_key.encode())
return key_hash.hex()
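A short sketch of the migration path this class supports (new keys use Scrypt; legacy SHA256 hashes verify without a salt and can be re-hashed in place):
```python
import hashlib

# Assumes the APIKeySmith class defined above is in scope.
smith = APIKeySmith()

# New keys: Scrypt hash + hex salt, both persisted.
key = smith.generate_key()
assert smith.verify_key(key.key, key.hash, key.salt)

# Legacy keys: bare SHA256, no salt. They still verify (salt omitted),
# and can be upgraded with a fresh Scrypt hash, e.g. on successful use.
legacy_raw = f"{smith.PREFIX}some-legacy-key"
legacy_hash = hashlib.sha256(legacy_raw.encode()).hexdigest()
assert smith.verify_key(legacy_raw, legacy_hash)
new_hash, new_salt = smith.hash_key(legacy_raw)
assert smith.verify_key(legacy_raw, new_hash, new_salt)
```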

View File

@@ -1,79 +0,0 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
def test_generate_api_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
assert key.key.startswith(keysmith.PREFIX)
assert key.head == key.key[: keysmith.HEAD_LENGTH]
assert key.tail == key.key[-keysmith.TAIL_LENGTH :]
assert len(key.hash) == 64 # 32 bytes hex encoded
assert len(key.salt) == 64 # 32 bytes hex encoded
def test_verify_new_secure_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test correct key validates
assert keysmith.verify_key(key.key, key.hash, key.salt) is True
# Test wrong key fails
wrong_key = f"{keysmith.PREFIX}wrongkey123"
assert keysmith.verify_key(wrong_key, key.hash, key.salt) is False
def test_verify_legacy_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}legacykey123"
legacy_hash = hashlib.sha256(legacy_key.encode()).hexdigest()
# Test legacy key validates without salt
assert keysmith.verify_key(legacy_key, legacy_hash) is True
# Test wrong legacy key fails
wrong_key = f"{keysmith.PREFIX}wronglegacy"
assert keysmith.verify_key(wrong_key, legacy_hash) is False
def test_rehash_existing_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}migratekey123"
# Migrate the legacy key
new_hash, new_salt = keysmith.hash_key(legacy_key)
# Verify migrated key works
assert keysmith.verify_key(legacy_key, new_hash, new_salt) is True
# Verify different key fails with migrated hash
wrong_key = f"{keysmith.PREFIX}wrongkey"
assert keysmith.verify_key(wrong_key, new_hash, new_salt) is False
def test_invalid_key_prefix():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test key without proper prefix fails
invalid_key = "invalid_prefix_key"
assert keysmith.verify_key(invalid_key, key.hash, key.salt) is False
def test_secure_hash_requires_salt():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Secure hash without salt should fail
assert keysmith.verify_key(key.key, key.hash) is False
def test_invalid_salt_format():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Invalid salt format should fail gracefully
assert keysmith.verify_key(key.key, key.hash, "invalid_hex") is False

View File

@@ -1,13 +1,13 @@
from .config import verify_settings
from .dependencies import get_user_id, requires_admin_user, requires_user
from .helpers import add_auth_responses_to_openapi
from .depends import requires_admin_user, requires_user
from .jwt_utils import parse_jwt_token
from .middleware import APIKeyValidator, auth_middleware
from .models import User
__all__ = [
"verify_settings",
"get_user_id",
"requires_admin_user",
"parse_jwt_token",
"requires_user",
"add_auth_responses_to_openapi",
"requires_admin_user",
"APIKeyValidator",
"auth_middleware",
"User",
]

View File

@@ -1,90 +1,11 @@
import logging
import os
from jwt.algorithms import get_default_algorithms, has_crypto
logger = logging.getLogger(__name__)
class AuthConfigError(ValueError):
"""Raised when authentication configuration is invalid."""
pass
ALGO_RECOMMENDATION = (
"We highly recommend using an asymmetric algorithm such as ES256, "
"because when leaked, a shared secret would allow anyone to "
"forge valid tokens and impersonate users. "
"More info: https://supabase.com/docs/guides/auth/signing-keys#choosing-the-right-signing-algorithm" # noqa
)
class Settings:
def __init__(self):
self.JWT_VERIFY_KEY: str = os.getenv(
"JWT_VERIFY_KEY", os.getenv("SUPABASE_JWT_SECRET", "")
).strip()
self.JWT_ALGORITHM: str = os.getenv("JWT_SIGN_ALGORITHM", "HS256").strip()
self.validate()
def validate(self):
if not self.JWT_VERIFY_KEY:
raise AuthConfigError(
"JWT_VERIFY_KEY must be set. "
"An empty JWT secret would allow anyone to forge valid tokens."
)
if len(self.JWT_VERIFY_KEY) < 32:
logger.warning(
"⚠️ JWT_VERIFY_KEY appears weak (less than 32 characters). "
"Consider using a longer, cryptographically secure secret."
)
supported_algorithms = get_default_algorithms().keys()
if not has_crypto:
logger.warning(
"⚠️ Asymmetric JWT verification is not available "
"because the 'cryptography' package is not installed. "
+ ALGO_RECOMMENDATION
)
if (
self.JWT_ALGORITHM not in supported_algorithms
or self.JWT_ALGORITHM == "none"
):
raise AuthConfigError(
f"Invalid JWT_SIGN_ALGORITHM: '{self.JWT_ALGORITHM}'. "
"Supported algorithms are listed on "
"https://pyjwt.readthedocs.io/en/stable/algorithms.html"
)
if self.JWT_ALGORITHM.startswith("HS"):
logger.warning(
f"⚠️ JWT_SIGN_ALGORITHM is set to '{self.JWT_ALGORITHM}', "
"a symmetric shared-key signature algorithm. " + ALGO_RECOMMENDATION
)
self.JWT_SECRET_KEY: str = os.getenv("SUPABASE_JWT_SECRET", "")
self.ENABLE_AUTH: bool = os.getenv("ENABLE_AUTH", "false").lower() == "true"
self.JWT_ALGORITHM: str = "HS256"
_settings: Settings = None # type: ignore
def get_settings() -> Settings:
global _settings
if not _settings:
_settings = Settings()
return _settings
def verify_settings() -> None:
global _settings
if not _settings:
_settings = Settings() # calls validation indirectly
return
_settings.validate()
settings = Settings()
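Since the restored module instantiates `settings = Settings()` at import time, configuration must be in the environment before the first import — a minimal sketch:
```python
import os

# Must be set before autogpt_libs.auth.config is first imported.
os.environ["SUPABASE_JWT_SECRET"] = "a-cryptographically-random-32-plus-char-secret"
os.environ["ENABLE_AUTH"] = "true"

from autogpt_libs.auth.config import settings

assert settings.ENABLE_AUTH is True
assert settings.JWT_ALGORITHM == "HS256"
```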

View File

@@ -1,306 +0,0 @@
"""
Comprehensive tests for auth configuration to ensure 100% line and branch coverage.
These tests verify critical security checks preventing JWT token forgery.
"""
import logging
import os
import pytest
from pytest_mock import MockerFixture
from autogpt_libs.auth.config import AuthConfigError, Settings
def test_environment_variable_precedence(mocker: MockerFixture):
"""Test that environment variables take precedence over defaults."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_environment_variable_backwards_compatible(mocker: MockerFixture):
"""Test that SUPABASE_JWT_SECRET is read if JWT_VERIFY_KEY is not set."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"SUPABASE_JWT_SECRET": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_auth_config_error_inheritance():
"""Test that AuthConfigError is properly defined as an Exception."""
assert issubclass(AuthConfigError, Exception)
error = AuthConfigError("test message")
assert str(error) == "test message"
def test_settings_static_after_creation(mocker: MockerFixture):
"""Test that settings maintain their values after creation."""
secret = "immutable-secret-key-with-proper-length-12345"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
original_secret = settings.JWT_VERIFY_KEY
# Changing environment after creation shouldn't affect settings
os.environ["JWT_VERIFY_KEY"] = "different-secret"
assert settings.JWT_VERIFY_KEY == original_secret
def test_settings_load_with_valid_secret(mocker: MockerFixture):
"""Test auth enabled with a valid JWT secret."""
valid_secret = "a" * 32 # 32 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": valid_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == valid_secret
def test_settings_load_with_strong_secret(mocker: MockerFixture):
"""Test auth enabled with a cryptographically strong secret."""
strong_secret = "super-secret-jwt-token-with-at-least-32-characters-long"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": strong_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == strong_secret
assert len(settings.JWT_VERIFY_KEY) >= 32
def test_secret_empty_raises_error(mocker: MockerFixture):
"""Test that auth enabled with empty secret raises AuthConfigError."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": ""}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
def test_secret_missing_raises_error(mocker: MockerFixture):
"""Test that auth enabled without secret env var raises AuthConfigError."""
mocker.patch.dict(os.environ, {}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
@pytest.mark.parametrize("secret", [" ", " ", "\t", "\n", " \t\n "])
def test_secret_only_whitespace_raises_error(mocker: MockerFixture, secret: str):
"""Test that auth enabled with whitespace-only secret raises error."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
with pytest.raises(ValueError):
Settings()
def test_secret_weak_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that weak JWT secret triggers warning log."""
weak_secret = "short" # Less than 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": weak_secret}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert settings.JWT_VERIFY_KEY == weak_secret
assert "key appears weak" in caplog.text.lower()
assert "less than 32 characters" in caplog.text
def test_secret_31_char_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 31-character secret triggers warning (boundary test)."""
secret_31 = "a" * 31 # Exactly 31 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_31}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 31
assert "key appears weak" in caplog.text.lower()
def test_secret_32_char_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 32-character secret does not trigger warning (boundary test)."""
secret_32 = "a" * 32 # Exactly 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_32}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 32
assert "JWT secret appears weak" not in caplog.text
def test_secret_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT secret whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": f" {secret} "}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_secret_with_special_characters(mocker: MockerFixture):
"""Test JWT secret with special characters."""
special_secret = "!@#$%^&*()_+-=[]{}|;:,.<>?`~" + "a" * 10 # 40 chars total
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": special_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == special_secret
def test_secret_with_unicode(mocker: MockerFixture):
"""Test JWT secret with unicode characters."""
unicode_secret = "秘密🔐キー" + "a" * 25 # Ensure >32 bytes
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": unicode_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == unicode_secret
def test_secret_very_long(mocker: MockerFixture):
"""Test JWT secret with excessive length."""
long_secret = "a" * 1000 # 1000 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": long_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == long_secret
assert len(settings.JWT_VERIFY_KEY) == 1000
def test_secret_with_newline(mocker: MockerFixture):
"""Test JWT secret containing newlines."""
multiline_secret = "secret\nwith\nnewlines" + "a" * 20
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": multiline_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == multiline_secret
def test_secret_base64_encoded(mocker: MockerFixture):
"""Test JWT secret that looks like base64."""
base64_secret = "dGhpc19pc19hX3NlY3JldF9rZXlfd2l0aF9wcm9wZXJfbGVuZ3Ro"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": base64_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == base64_secret
def test_secret_numeric_only(mocker: MockerFixture):
"""Test JWT secret with only numbers."""
numeric_secret = "1234567890" * 4 # 40 character numeric secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": numeric_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == numeric_secret
def test_algorithm_default_hs256(mocker: MockerFixture):
"""Test that JWT algorithm defaults to HS256."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": "a" * 32}, clear=True)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_algorithm_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT algorithm whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": " HS256 "},
clear=True,
)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_no_crypto_warning(mocker: MockerFixture, caplog: pytest.LogCaptureFixture):
"""Test warning when crypto package is not available."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
# Mock has_crypto to return False
mocker.patch("autogpt_libs.auth.config.has_crypto", False)
with caplog.at_level(logging.WARNING):
Settings()
assert "Asymmetric JWT verification is not available" in caplog.text
assert "cryptography" in caplog.text
def test_algorithm_invalid_raises_error(mocker: MockerFixture):
"""Test that invalid JWT algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "INVALID_ALG"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
assert "INVALID_ALG" in str(exc_info.value)
def test_algorithm_none_raises_error(mocker: MockerFixture):
"""Test that 'none' algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "none"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
@pytest.mark.parametrize("algorithm", ["HS256", "HS384", "HS512"])
def test_algorithm_symmetric_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test warning for symmetric algorithms (HS256, HS384, HS512)."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert algorithm in caplog.text
assert "symmetric shared-key signature algorithm" in caplog.text
assert settings.JWT_ALGORITHM == algorithm
@pytest.mark.parametrize(
"algorithm",
["ES256", "ES384", "ES512", "RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
)
def test_algorithm_asymmetric_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test that asymmetric algorithms do not trigger warning."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
# Should not contain the symmetric algorithm warning
assert "symmetric shared-key signature algorithm" not in caplog.text
assert settings.JWT_ALGORITHM == algorithm

View File

@@ -1,45 +0,0 @@
"""
FastAPI dependency functions for JWT-based authentication and authorization.
These are the high-level dependency functions used in route definitions.
"""
import fastapi
from .jwt_utils import get_jwt_payload, verify_user
from .models import User
def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid authenticated user.
Raises:
HTTPException: 401 for authentication failures
"""
return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid admin user.
Raises:
HTTPException: 401 for authentication failures, 403 for insufficient permissions
"""
return verify_user(jwt_payload, admin_only=True)
def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.
Raises:
HTTPException: 401 for authentication failures or missing user ID
"""
user_id = jwt_payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id

View File

@@ -1,335 +0,0 @@
"""
Comprehensive integration tests for authentication dependencies.
Tests the full authentication flow from HTTP requests to user validation.
"""
import os
import pytest
from fastapi import FastAPI, HTTPException, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
from autogpt_libs.auth.dependencies import (
get_user_id,
requires_admin_user,
requires_user,
)
from autogpt_libs.auth.models import User
class TestAuthDependencies:
"""Test suite for authentication dependency functions."""
@pytest.fixture
def app(self):
"""Create a test FastAPI application."""
app = FastAPI()
@app.get("/user")
def get_user_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/admin")
def get_admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/user-id")
def get_user_id_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
return app
@pytest.fixture
def client(self, app):
"""Create a test client."""
return TestClient(app)
def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
# Mock get_jwt_payload to return our test payload
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert isinstance(user, User)
assert user.user_id == "user-123"
assert user.role == "user"
def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert user.user_id == "admin-456"
assert user.role == "admin"
def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
"sub": "admin-789",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_admin_user(jwt_payload)
assert user.user_id == "admin-789"
assert user.role == "admin"
def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_admin_user(jwt_payload)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
with pytest.raises(KeyError):
requires_admin_user(jwt_payload)
def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = get_user_id(jwt_payload)
assert user_id == "user-id-xyz"
def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
class TestAuthDependenciesIntegration:
"""Integration tests for auth dependencies with FastAPI."""
acceptable_jwt_secret = "test-secret-with-proper-length-123456"
@pytest.fixture
def create_token(self, mocker: MockerFixture):
"""Helper to create JWT tokens."""
import jwt
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": self.acceptable_jwt_secret},
clear=True,
)
def _create_token(payload, secret=self.acceptable_jwt_secret):
return jwt.encode(payload, secret, algorithm="HS256")
return _create_token
def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Should fail without auth header
response = client.get("/test")
assert response.status_code == 401
def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
client = TestClient(app)
token = create_token(
{"sub": "test-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get("/test", headers={"Authorization": f"Bearer {token}"})
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@app.get("/admin")
def admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Regular user token
user_token = create_token(
{"sub": "regular-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {user_token}"}
)
assert response.status_code == 403
# Admin token
admin_token = create_token(
{"sub": "admin-user", "role": "admin", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {admin_token}"}
)
assert response.status_code == 200
assert response.json()["user_id"] == "admin-user"
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
"sub": "user-123",
"role": "admin",
"email": "test@example.com",
"app_metadata": {"provider": "email", "providers": ["email"]},
"user_metadata": {
"full_name": "Test User",
"avatar_url": "https://example.com/avatar.jpg",
},
"aud": "authenticated",
"iat": 1234567890,
"exp": 9999999999,
}
user = requires_user(complex_payload)
assert user.user_id == "user-123"
assert user.email == "test@example.com"
admin = requires_admin_user(complex_payload)
assert admin.role == "admin"
def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
"sub": "user-😀-123",
"role": "user",
"email": "测试@example.com",
"name": "日本語",
}
user = requires_user(unicode_payload)
assert "😀" in user.user_id
assert user.email == "测试@example.com"
def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
"sub": "user-123",
"role": "user",
"email": None,
"phone": None,
"metadata": None,
}
user = requires_user(null_payload)
assert user.user_id == "user-123"
assert user.email is None
def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
payload2 = {"sub": "user-2", "role": "admin"}
# Simulate concurrent processing
user1 = requires_user(payload1)
user2 = requires_admin_user(payload2)
assert user1.user_id == "user-1"
assert user2.user_id == "user-2"
assert user1.role == "user"
assert user2.role == "admin"
@pytest.mark.parametrize(
"payload,expected_error,admin_only",
[
(None, "Authorization header is missing", False),
({}, "User ID not found", False),
({"sub": ""}, "User ID not found", False),
({"role": "user"}, "User ID not found", False),
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
"""Test that errors propagate correctly through dependencies."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
with pytest.raises(HTTPException) as exc_info:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
# Valid case
user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
assert user.user_id == "user"

View File

@@ -0,0 +1,46 @@
import fastapi
from .config import settings
from .middleware import auth_middleware
from .models import DEFAULT_USER_ID, User
def requires_user(payload: dict = fastapi.Depends(auth_middleware)) -> User:
return verify_user(payload, admin_only=False)
def requires_admin_user(
payload: dict = fastapi.Depends(auth_middleware),
) -> User:
return verify_user(payload, admin_only=True)
def verify_user(payload: dict | None, admin_only: bool) -> User:
if not payload:
if settings.ENABLE_AUTH:
raise fastapi.HTTPException(
status_code=401, detail="Authorization header is missing"
)
# This handles the case when authentication is disabled
payload = {"sub": DEFAULT_USER_ID, "role": "admin"}
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
if admin_only and payload["role"] != "admin":
raise fastapi.HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(payload)
def get_user_id(payload: dict = fastapi.Depends(auth_middleware)) -> str:
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id
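A minimal wiring sketch for the restored dependencies in a FastAPI app (route paths are illustrative):
```python
import fastapi

from autogpt_libs.auth.depends import get_user_id, requires_admin_user, requires_user
from autogpt_libs.auth.models import User

app = fastapi.FastAPI()


@app.get("/me")
def me(user: User = fastapi.Depends(requires_user)):
    # 401 if auth is enabled and the token is missing or invalid.
    return {"user_id": user.user_id, "role": user.role}


@app.get("/admin/stats")
def admin_stats(user: User = fastapi.Depends(requires_admin_user)):
    # 403 for authenticated non-admin users.
    return {"admin": user.user_id}


@app.get("/whoami")
def whoami(user_id: str = fastapi.Depends(get_user_id)):
    return {"user_id": user_id}
```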

View File

@@ -0,0 +1,68 @@
import pytest
from .depends import requires_admin_user, requires_user, verify_user
def test_verify_user_no_payload():
user = verify_user(None, admin_only=False)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_no_user_id():
with pytest.raises(Exception):
verify_user({"role": "admin"}, admin_only=False)
def test_verify_user_not_admin():
with pytest.raises(Exception):
verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=True,
)
def test_verify_user_with_admin_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"},
admin_only=True,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_with_user_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=False,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user():
user = requires_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user_no_user_id():
with pytest.raises(Exception):
requires_user({"role": "user"})
def test_requires_admin_user():
user = requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_requires_admin_user_not_admin():
with pytest.raises(Exception):
requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)

View File

@@ -1,68 +0,0 @@
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from .jwt_utils import bearer_jwt_auth
def add_auth_responses_to_openapi(app: FastAPI) -> None:
"""
Set up custom OpenAPI schema generation that adds 401 responses
to all authenticated endpoints.
This is needed when using HTTPBearer with auto_error=False to get proper
401 responses instead of 403, but FastAPI only automatically adds security
responses when auto_error=True.
"""
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
version=app.version,
description=app.description,
routes=app.routes,
)
# Add 401 response to all endpoints that have security requirements
for path, methods in openapi_schema["paths"].items():
for method, details in methods.items():
security_schemas = [
schema
for auth_option in details.get("security", [])
for schema in auth_option.keys()
]
if bearer_jwt_auth.scheme_name not in security_schemas:
continue
if "responses" not in details:
details["responses"] = {}
details["responses"]["401"] = {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
}
# Ensure #/components/responses exists
if "components" not in openapi_schema:
openapi_schema["components"] = {}
if "responses" not in openapi_schema["components"]:
openapi_schema["components"]["responses"] = {}
# Define 401 response
openapi_schema["components"]["responses"]["HTTP401NotAuthenticatedError"] = {
"description": "Authentication required",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"detail": {"type": "string"}},
}
}
},
}
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi

View File

@@ -1,435 +0,0 @@
"""
Comprehensive tests for auth helpers module to achieve 100% coverage.
Tests OpenAPI schema generation and authentication response handling.
"""
from unittest import mock
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from autogpt_libs.auth.helpers import add_auth_responses_to_openapi
from autogpt_libs.auth.jwt_utils import bearer_jwt_auth
def test_add_auth_responses_to_openapi_basic():
"""Test adding 401 responses to OpenAPI schema."""
app = FastAPI(title="Test App", version="1.0.0")
# Add some test endpoints with authentication
from fastapi import Depends
from autogpt_libs.auth.dependencies import requires_user
@app.get("/protected", dependencies=[Depends(requires_user)])
def protected_endpoint():
return {"message": "Protected"}
@app.get("/public")
def public_endpoint():
return {"message": "Public"}
# Apply the OpenAPI customization
add_auth_responses_to_openapi(app)
# Get the OpenAPI schema
schema = app.openapi()
# Verify basic schema properties
assert schema["info"]["title"] == "Test App"
assert schema["info"]["version"] == "1.0.0"
# Verify 401 response component is added
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify 401 response structure
error_response = schema["components"]["responses"]["HTTP401NotAuthenticatedError"]
assert error_response["description"] == "Authentication required"
assert "application/json" in error_response["content"]
assert "schema" in error_response["content"]["application/json"]
# Verify schema properties
response_schema = error_response["content"]["application/json"]["schema"]
assert response_schema["type"] == "object"
assert "detail" in response_schema["properties"]
assert response_schema["properties"]["detail"]["type"] == "string"
def test_add_auth_responses_to_openapi_with_security():
"""Test that 401 responses are added only to secured endpoints."""
app = FastAPI()
# Mock endpoint with security
from fastapi import Security
from autogpt_libs.auth.dependencies import get_user_id
@app.get("/secured")
def secured_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
@app.post("/also-secured")
def another_secured(user_id: str = Security(get_user_id)):
return {"status": "ok"}
@app.get("/unsecured")
def unsecured_endpoint():
return {"public": True}
# Apply OpenAPI customization
add_auth_responses_to_openapi(app)
# Get schema
schema = app.openapi()
# Check that secured endpoints have 401 responses
if "/secured" in schema["paths"]:
if "get" in schema["paths"]["/secured"]:
secured_get = schema["paths"]["/secured"]["get"]
if "responses" in secured_get:
assert "401" in secured_get["responses"]
assert (
secured_get["responses"]["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
if "/also-secured" in schema["paths"]:
if "post" in schema["paths"]["/also-secured"]:
secured_post = schema["paths"]["/also-secured"]["post"]
if "responses" in secured_post:
assert "401" in secured_post["responses"]
# Check that unsecured endpoint does not have 401 response
if "/unsecured" in schema["paths"]:
if "get" in schema["paths"]["/unsecured"]:
unsecured_get = schema["paths"]["/unsecured"]["get"]
if "responses" in unsecured_get:
assert "401" not in unsecured_get.get("responses", {})
def test_add_auth_responses_to_openapi_cached_schema():
"""Test that OpenAPI schema is cached after first generation."""
app = FastAPI()
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema twice
schema1 = app.openapi()
schema2 = app.openapi()
# Should return the same cached object
assert schema1 is schema2
def test_add_auth_responses_to_openapi_existing_responses():
"""Test handling endpoints that already have responses defined."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get(
"/with-responses",
responses={
200: {"description": "Success"},
404: {"description": "Not found"},
},
)
def endpoint_with_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Check that existing responses are preserved and 401 is added
if "/with-responses" in schema["paths"]:
if "get" in schema["paths"]["/with-responses"]:
responses = schema["paths"]["/with-responses"]["get"].get("responses", {})
# Original responses should be preserved
if "200" in responses:
assert responses["200"]["description"] == "Success"
if "404" in responses:
assert responses["404"]["description"] == "Not found"
# 401 should be added
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_no_security_endpoints():
"""Test with app that has no secured endpoints."""
app = FastAPI()
@app.get("/public1")
def public1():
return {"message": "public1"}
@app.post("/public2")
def public2():
return {"message": "public2"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Component should still be added for consistency
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# But no endpoints should have 401 responses
for path in schema["paths"].values():
for method in path.values():
if isinstance(method, dict) and "responses" in method:
assert "401" not in method["responses"]
def test_add_auth_responses_to_openapi_multiple_security_schemes():
"""Test endpoints with multiple security requirements."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.dependencies import requires_admin_user, requires_user
from autogpt_libs.auth.models import User
@app.get("/multi-auth")
def multi_auth(
user: User = Security(requires_user),
admin: User = Security(requires_admin_user),
):
return {"status": "super secure"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should have 401 response
if "/multi-auth" in schema["paths"]:
if "get" in schema["paths"]["/multi-auth"]:
responses = schema["paths"]["/multi-auth"]["get"].get("responses", {})
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_empty_components():
"""Test when OpenAPI schema has no components section initially."""
app = FastAPI()
# Mock get_openapi to return schema without components
original_get_openapi = get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove components if it exists
if "components" in schema:
del schema["components"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Components should be created
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_add_auth_responses_to_openapi_all_http_methods():
"""Test that all HTTP methods are handled correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/resource")
def get_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "GET"}
@app.post("/resource")
def post_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "POST"}
@app.put("/resource")
def put_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PUT"}
@app.patch("/resource")
def patch_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PATCH"}
@app.delete("/resource")
def delete_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "DELETE"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# All methods should have 401 response
if "/resource" in schema["paths"]:
for method in ["get", "post", "put", "patch", "delete"]:
if method in schema["paths"]["/resource"]:
method_spec = schema["paths"]["/resource"][method]
if "responses" in method_spec:
assert "401" in method_spec["responses"]
def test_bearer_jwt_auth_scheme_config():
"""Test that bearer_jwt_auth is configured correctly."""
assert bearer_jwt_auth.scheme_name == "HTTPBearerJWT"
assert bearer_jwt_auth.auto_error is False
def test_add_auth_responses_with_no_routes():
"""Test OpenAPI generation with app that has no routes."""
app = FastAPI(title="Empty App")
# Apply customization to empty app
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should still have basic structure
assert schema["info"]["title"] == "Empty App"
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_custom_openapi_function_replacement():
"""Test that the custom openapi function properly replaces the default."""
app = FastAPI()
# Store original function
original_openapi = app.openapi
# Apply customization
add_auth_responses_to_openapi(app)
# Function should be replaced
assert app.openapi != original_openapi
assert callable(app.openapi)
def test_endpoint_without_responses_section():
"""Test endpoint that has security but no responses section initially."""
app = FastAPI()
from fastapi import Security
from fastapi.openapi.utils import get_openapi as original_get_openapi
from autogpt_libs.auth.jwt_utils import get_jwt_payload
# Create endpoint
@app.get("/no-responses")
def endpoint_without_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Mock get_openapi to remove responses from the endpoint
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove responses from our endpoint to trigger line 40
if "/no-responses" in schema.get("paths", {}):
if "get" in schema["paths"]["/no-responses"]:
# Delete responses to force the code to create it
if "responses" in schema["paths"]["/no-responses"]["get"]:
del schema["paths"]["/no-responses"]["get"]["responses"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema and verify 401 was added
schema = app.openapi()
# The endpoint should now have 401 response
if "/no-responses" in schema["paths"]:
if "get" in schema["paths"]["/no-responses"]:
responses = schema["paths"]["/no-responses"]["get"].get("responses", {})
assert "401" in responses
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_components_with_existing_responses():
"""Test when components already has a responses section."""
app = FastAPI()
# Mock get_openapi to return schema with existing components/responses
from fastapi.openapi.utils import get_openapi as original_get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Add existing components/responses
if "components" not in schema:
schema["components"] = {}
schema["components"]["responses"] = {
"ExistingResponse": {"description": "An existing response"}
}
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Both responses should exist
assert "ExistingResponse" in schema["components"]["responses"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify our 401 response structure
error_response = schema["components"]["responses"][
"HTTP401NotAuthenticatedError"
]
assert error_response["description"] == "Authentication required"
def test_openapi_schema_persistence():
"""Test that modifications to OpenAPI schema persist correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/test")
def test_endpoint(jwt: dict = Security(get_jwt_payload)):
return {"test": True}
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema multiple times
schema1 = app.openapi()
# Modify the cached schema (shouldn't affect future calls)
schema1["info"]["title"] = "Modified Title"
# Clear cache and get again
app.openapi_schema = None
schema2 = app.openapi()
# Should regenerate with original title
assert schema2["info"]["title"] == app.title
assert schema2["info"]["title"] != "Modified Title"

View File

@@ -1,48 +1,11 @@
import logging
from typing import Any
from typing import Any, Dict
import jwt
from fastapi import HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from .config import get_settings
from .models import User
logger = logging.getLogger(__name__)
# Bearer token authentication scheme
bearer_jwt_auth = HTTPBearer(
bearerFormat="jwt", scheme_name="HTTPBearerJWT", auto_error=False
)
from .config import settings
def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
"""
Extract and validate JWT payload from HTTP Authorization header.
This is the core authentication function that handles:
- Reading the `Authorization` header to obtain the JWT token
- Verifying the JWT token's signature
- Decoding the JWT token's payload
:param credentials: HTTP Authorization credentials from bearer token
:return: JWT payload dictionary
:raises HTTPException: 401 if authentication fails
"""
if not credentials:
raise HTTPException(status_code=401, detail="Authorization header is missing")
try:
payload = parse_jwt_token(credentials.credentials)
logger.debug("Token decoded successfully")
return payload
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
def parse_jwt_token(token: str) -> dict[str, Any]:
def parse_jwt_token(token: str) -> Dict[str, Any]:
"""
Parse and validate a JWT token.
@@ -50,11 +13,10 @@ def parse_jwt_token(token: str) -> dict[str, Any]:
:return: The decoded payload
:raises ValueError: If the token is invalid or expired
"""
settings = get_settings()
try:
payload = jwt.decode(
token,
settings.JWT_VERIFY_KEY,
settings.JWT_SECRET_KEY,
algorithms=[settings.JWT_ALGORITHM],
audience="authenticated",
)
@@ -63,18 +25,3 @@ def parse_jwt_token(token: str) -> dict[str, Any]:
raise ValueError("Token has expired")
except jwt.InvalidTokenError as e:
raise ValueError(f"Invalid token: {str(e)}")
def verify_user(jwt_payload: dict | None, admin_only: bool) -> User:
if jwt_payload is None:
raise HTTPException(status_code=401, detail="Authorization header is missing")
user_id = jwt_payload.get("sub")
if not user_id:
raise HTTPException(status_code=401, detail="User ID not found in token")
if admin_only and jwt_payload["role"] != "admin":
raise HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(jwt_payload)
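
A quick usage sketch of the two functions above as FastAPI dependencies; the route path and response shape are illustrative only, not part of this diff.

```python
from fastapi import FastAPI, Security

app = FastAPI()

@app.get("/me")
def read_me(jwt_payload: dict = Security(get_jwt_payload)):
    # verify_user raises 401 for a missing/invalid payload and 403 for
    # non-admins when admin_only=True
    user = verify_user(jwt_payload, admin_only=False)
    return {"user_id": user.user_id, "role": user.role}
```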

View File

@@ -1,308 +0,0 @@
"""
Comprehensive tests for JWT token parsing and validation.
Ensures 100% line and branch coverage for JWT security functions.
"""
import os
from datetime import datetime, timedelta, timezone
import jwt
import pytest
from fastapi import HTTPException
from fastapi.security import HTTPAuthorizationCredentials
from pytest_mock import MockerFixture
from autogpt_libs.auth import config, jwt_utils
from autogpt_libs.auth.config import Settings
from autogpt_libs.auth.models import User
MOCK_JWT_SECRET = "test-secret-key-with-at-least-32-characters"
TEST_USER_PAYLOAD = {
"sub": "test-user-id",
"role": "user",
"aud": "authenticated",
"email": "test@example.com",
}
TEST_ADMIN_PAYLOAD = {
"sub": "admin-user-id",
"role": "admin",
"aud": "authenticated",
"email": "admin@example.com",
}
@pytest.fixture(autouse=True)
def mock_config(mocker: MockerFixture):
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": MOCK_JWT_SECRET}, clear=True)
mocker.patch.object(config, "_settings", Settings())
yield
def create_token(payload, secret=None, algorithm="HS256"):
"""Helper to create JWT tokens."""
if secret is None:
secret = MOCK_JWT_SECRET
return jwt.encode(payload, secret, algorithm=algorithm)
def test_parse_jwt_token_valid():
"""Test parsing a valid JWT token."""
token = create_token(TEST_USER_PAYLOAD)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
assert result["aud"] == "authenticated"
def test_parse_jwt_token_expired():
"""Test parsing an expired JWT token."""
expired_payload = {
**TEST_USER_PAYLOAD,
"exp": datetime.now(timezone.utc) - timedelta(hours=1),
}
token = create_token(expired_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Token has expired" in str(exc_info.value)
def test_parse_jwt_token_invalid_signature():
"""Test parsing a token with invalid signature."""
# Create token with different secret
token = create_token(TEST_USER_PAYLOAD, secret="wrong-secret")
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_malformed():
"""Test parsing a malformed token."""
malformed_tokens = [
"not.a.token",
"invalid",
"",
# Header only
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9",
# No signature
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0",
]
for token in malformed_tokens:
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_wrong_audience():
"""Test parsing a token with wrong audience."""
wrong_aud_payload = {**TEST_USER_PAYLOAD, "aud": "wrong-audience"}
token = create_token(wrong_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_missing_audience():
"""Test parsing a token without audience claim."""
no_aud_payload = {k: v for k, v in TEST_USER_PAYLOAD.items() if k != "aud"}
token = create_token(no_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_get_jwt_payload_with_valid_token():
"""Test extracting JWT payload with valid bearer token."""
token = create_token(TEST_USER_PAYLOAD)
credentials = HTTPAuthorizationCredentials(scheme="Bearer", credentials=token)
result = jwt_utils.get_jwt_payload(credentials)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
def test_get_jwt_payload_no_credentials():
"""Test JWT payload when no credentials provided."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(None)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_get_jwt_payload_invalid_token():
"""Test JWT payload extraction with invalid token."""
credentials = HTTPAuthorizationCredentials(
scheme="Bearer", credentials="invalid.token.here"
)
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(credentials)
assert exc_info.value.status_code == 401
assert "Invalid token" in exc_info.value.detail
def test_verify_user_with_valid_user():
"""Test verifying a valid user."""
user = jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=False)
assert isinstance(user, User)
assert user.user_id == "test-user-id"
assert user.role == "user"
assert user.email == "test@example.com"
def test_verify_user_with_admin():
"""Test verifying an admin user."""
user = jwt_utils.verify_user(TEST_ADMIN_PAYLOAD, admin_only=True)
assert isinstance(user, User)
assert user.user_id == "admin-user-id"
assert user.role == "admin"
def test_verify_user_admin_only_with_regular_user():
"""Test verifying regular user when admin is required."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=True)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_verify_user_no_payload():
"""Test verifying user with no payload."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(None, admin_only=False)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_verify_user_missing_sub():
"""Test verifying user with payload missing 'sub' field."""
invalid_payload = {"role": "user", "email": "test@example.com"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_empty_sub():
"""Test verifying user with empty 'sub' field."""
invalid_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_none_sub():
"""Test verifying user with None 'sub' field."""
invalid_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_missing_role_admin_check():
"""Test verifying admin when role field is missing."""
no_role_payload = {"sub": "user-id"}
with pytest.raises(KeyError):
# This will raise KeyError when checking payload["role"]
jwt_utils.verify_user(no_role_payload, admin_only=True)
# ======================== EDGE CASES ======================== #
def test_jwt_with_additional_claims():
"""Test JWT token with additional custom claims."""
extra_claims_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"custom_claim": "custom_value",
"permissions": ["read", "write"],
"metadata": {"key": "value"},
}
token = create_token(extra_claims_payload)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "user-id"
assert result["custom_claim"] == "custom_value"
assert result["permissions"] == ["read", "write"]
def test_jwt_with_numeric_sub():
"""Test JWT token with numeric user ID."""
payload = {
"sub": 12345, # Numeric ID
"role": "user",
"aud": "authenticated",
}
# The numeric ID is passed through as-is (no string conversion)
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == 12345
def test_jwt_with_very_long_sub():
"""Test JWT token with very long user ID."""
long_id = "a" * 1000
payload = {
"sub": long_id,
"role": "user",
"aud": "authenticated",
}
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == long_id
def test_jwt_with_special_characters_in_claims():
"""Test JWT token with special characters in claims."""
payload = {
"sub": "user@example.com/special-chars!@#$%",
"role": "admin",
"aud": "authenticated",
"email": "test+special@example.com",
}
user = jwt_utils.verify_user(payload, admin_only=True)
assert "special-chars!@#$%" in user.user_id
def test_jwt_with_future_iat():
"""Test JWT token with issued-at time in future."""
future_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"iat": datetime.now(timezone.utc) + timedelta(hours=1),
}
token = create_token(future_payload)
# PyJWT validates iat claim and should reject future tokens
with pytest.raises(ValueError, match="not yet valid"):
jwt_utils.parse_jwt_token(token)
def test_jwt_with_different_algorithms():
"""Test that only HS256 algorithm is accepted."""
payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
}
# Try different algorithms
algorithms = ["HS384", "HS512", "none"]
for algo in algorithms:
if algo == "none":
# Special case for 'none' algorithm (security vulnerability if accepted)
token = create_token(payload, "", algorithm="none")
else:
token = create_token(payload, algorithm=algo)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)

View File

@@ -0,0 +1,140 @@
import inspect
import logging
import secrets
from typing import Any, Callable, Optional
from fastapi import HTTPException, Request, Security
from fastapi.security import APIKeyHeader, HTTPBearer
from starlette.status import HTTP_401_UNAUTHORIZED
from .config import settings
from .jwt_utils import parse_jwt_token
security = HTTPBearer()
logger = logging.getLogger(__name__)
async def auth_middleware(request: Request):
if not settings.ENABLE_AUTH:
# If authentication is disabled, allow the request to proceed
logger.warning("Auth disabled")
return {}
security = HTTPBearer()
credentials = await security(request)
if not credentials:
raise HTTPException(status_code=401, detail="Authorization header is missing")
try:
payload = parse_jwt_token(credentials.credentials)
request.state.user = payload
logger.debug("Token decoded successfully")
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
return payload
class APIKeyValidator:
"""
Configurable API key validator that supports custom validation functions
for FastAPI applications.
This class provides a flexible way to implement API key authentication with optional
custom validation logic. It can be used for simple token matching
or more complex validation scenarios like database lookups.
Examples:
Simple token validation:
```python
validator = APIKeyValidator(
header_name="X-API-Key",
expected_token="your-secret-token"
)
@app.get("/protected", dependencies=[Depends(validator.get_dependency())])
def protected_endpoint():
return {"message": "Access granted"}
```
Custom validation with database lookup:
```python
async def validate_with_db(api_key: str):
api_key_obj = await db.get_api_key(api_key)
return api_key_obj if api_key_obj and api_key_obj.is_active else None
validator = APIKeyValidator(
header_name="X-API-Key",
validate_fn=validate_with_db
)
```
Args:
header_name (str): The name of the header containing the API key
expected_token (Optional[str]): The expected API key value for simple token matching
validate_fn (Optional[Callable]): Custom validation function that takes an API key
string and returns a boolean or object. Can be async.
error_status (int): HTTP status code to use for validation errors
error_message (str): Error message to return when validation fails
"""
def __init__(
self,
header_name: str,
expected_token: Optional[str] = None,
validate_fn: Optional[Callable[[str], bool]] = None,
error_status: int = HTTP_401_UNAUTHORIZED,
error_message: str = "Invalid API key",
):
# Create the APIKeyHeader as a class property
self.security_scheme = APIKeyHeader(name=header_name)
self.expected_token = expected_token
self.custom_validate_fn = validate_fn
self.error_status = error_status
self.error_message = error_message
async def default_validator(self, api_key: str) -> bool:
if not self.expected_token:
raise ValueError(
"Expected Token Required to be set when uisng API Key Validator default validation"
)
return secrets.compare_digest(api_key, self.expected_token)
async def __call__(
self, request: Request, api_key: str = Security(APIKeyHeader)
) -> Any:
if api_key is None:
raise HTTPException(status_code=self.error_status, detail="Missing API key")
# Use custom validation if provided, otherwise use default equality check
validator = self.custom_validate_fn or self.default_validator
result = (
await validator(api_key)
if inspect.iscoroutinefunction(validator)
else validator(api_key)
)
if not result:
raise HTTPException(
status_code=self.error_status, detail=self.error_message
)
# Store validation result in request state if it's not just a boolean
if result is not True:
request.state.api_key = result
return result
def get_dependency(self):
"""
Returns a callable dependency that FastAPI will recognize as a security scheme
"""
async def validate_api_key(
request: Request, api_key: str = Security(self.security_scheme)
) -> Any:
return await self(request, api_key)
# This helps FastAPI recognize it as a security dependency
validate_api_key.__name__ = f"validate_{self.security_scheme.model.name}"
return validate_api_key
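
One detail of `__call__` worth illustrating: when a custom `validate_fn` returns an object rather than a boolean, that object is stashed on `request.state.api_key` for the endpoint to read. A hedged sketch, with the header name, record type, and lookup logic all illustrative:

```python
from dataclasses import dataclass
from fastapi import Depends, FastAPI, Request

app = FastAPI()

@dataclass
class APIKeyRecord:
    owner: str

async def lookup_key(api_key: str):
    # Stand-in for a database lookup; returns a record or None
    return APIKeyRecord(owner="acme") if api_key == "known-key" else None

validator = APIKeyValidator(header_name="X-API-Key", validate_fn=lookup_key)

@app.get("/whoami", dependencies=[Depends(validator.get_dependency())])
def whoami(request: Request):
    # Populated by APIKeyValidator.__call__ for non-boolean results
    return {"owner": request.state.api_key.owner}
```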

View File

@@ -0,0 +1,324 @@
import contextlib
import logging
from functools import wraps
from json import JSONDecodeError
from typing import TYPE_CHECKING, Any, Awaitable, Callable, Optional, TypeVar
if TYPE_CHECKING:
from backend.data.model import User
import ldclient
from backend.util.json import loads as json_loads
from fastapi import HTTPException
from ldclient import Context, LDClient
from ldclient.config import Config
from typing_extensions import ParamSpec
from .config import SETTINGS
logger = logging.getLogger(__name__)
P = ParamSpec("P")
T = TypeVar("T")
_is_initialized = False
def get_client() -> LDClient:
"""Get the LaunchDarkly client singleton."""
if not _is_initialized:
initialize_launchdarkly()
return ldclient.get()
def initialize_launchdarkly() -> None:
sdk_key = SETTINGS.launch_darkly_sdk_key
logger.debug(
f"Initializing LaunchDarkly with SDK key: {'present' if sdk_key else 'missing'}"
)
if not sdk_key:
logger.warning("LaunchDarkly SDK key not configured")
return
config = Config(sdk_key)
ldclient.set_config(config)
if ldclient.get().is_initialized():
global _is_initialized
_is_initialized = True
logger.info("LaunchDarkly client initialized successfully")
else:
logger.error("LaunchDarkly client failed to initialize")
def shutdown_launchdarkly() -> None:
"""Shutdown the LaunchDarkly client."""
if ldclient.get().is_initialized():
ldclient.get().close()
logger.info("LaunchDarkly client closed successfully")
def create_context(
user_id: str, additional_attributes: Optional[dict[str, Any]] = None
) -> Context:
"""Create LaunchDarkly context with optional additional attributes."""
# Use the key from attributes if provided, otherwise use user_id
context_key = user_id
if additional_attributes and "key" in additional_attributes:
context_key = additional_attributes["key"]
builder = Context.builder(str(context_key)).kind("user")
if additional_attributes:
for key, value in additional_attributes.items():
# Skip kind and key as they're already set
if key in ["kind", "key"]:
continue
elif key == "custom" and isinstance(value, dict):
# Handle custom attributes object - these go as individual attributes
for custom_key, custom_value in value.items():
builder.set(custom_key, custom_value)
else:
builder.set(key, value)
return builder.build()
async def _fetch_user_context_data(user_id: str) -> dict[str, Any]:
"""
Fetch user data and build LaunchDarkly context.
Args:
user_id: The user ID to fetch data for
Returns:
Dictionary with user context data including role
"""
# Use the unified database access approach
from backend.util.clients import get_database_manager_async_client
db_client = get_database_manager_async_client()
user = await db_client.get_user_by_id(user_id)
# Build LaunchDarkly context from user data
return _build_launchdarkly_context(user)
def _build_launchdarkly_context(user: "User") -> dict[str, Any]:
"""
Build LaunchDarkly context data matching frontend format.
Returns a context like:
{
"kind": "user",
"key": "user-id",
"email": "user@example.com", # Optional
"anonymous": false,
"custom": {
"role": "admin" # Optional
}
}
Args:
user: User object from database
Returns:
Dictionary with user context data
"""
from autogpt_libs.auth.models import DEFAULT_USER_ID
# Build basic context - always include kind, key, and anonymous
context_data: dict[str, Any] = {
"kind": "user",
"key": user.id,
"anonymous": False,
}
# Add email if present
if user.email:
context_data["email"] = user.email
# Initialize custom attributes
custom: dict[str, Any] = {}
# Determine user role from metadata
role = None
# Check if user is default/system user
if user.id == DEFAULT_USER_ID:
role = "admin" # Default user has admin privileges when auth is disabled
elif user.metadata:
# Check for role in metadata
try:
# Handle both string (direct DB) and dict (RPC) formats
if isinstance(user.metadata, str):
metadata = json_loads(user.metadata)
elif isinstance(user.metadata, dict):
metadata = user.metadata
else:
metadata = {}
# Extract role from metadata if present
if metadata.get("role"):
role = metadata["role"]
except (JSONDecodeError, TypeError) as e:
logger.debug(f"Failed to parse user metadata for context: {e}")
# Add role to custom attributes if present
if role:
custom["role"] = role
# Only add custom object if it has content
if custom:
context_data["custom"] = custom
return context_data
async def is_feature_enabled(
flag_key: str,
user_id: str,
default: bool = False,
use_user_id_only: bool = False,
additional_attributes: Optional[dict[str, Any]] = None,
user_role: Optional[str] = None,
) -> bool:
"""
Check if a feature flag is enabled for a user with full LaunchDarkly context support.
Args:
flag_key: The LaunchDarkly feature flag key
user_id: The user ID to evaluate the flag for
default: Default value if LaunchDarkly is unavailable or flag evaluation fails
use_user_id_only: If True, only use user_id without fetching database context
additional_attributes: Additional attributes to include in the context
user_role: Optional user role (e.g., "admin", "user") to add to segments
Returns:
True if feature is enabled, False otherwise
"""
try:
client = get_client()
if use_user_id_only:
# Simple context with just user ID (for backward compatibility)
attrs = additional_attributes or {}
if user_role:
# Add role to custom attributes for consistency
if "custom" not in attrs:
attrs["custom"] = {}
if isinstance(attrs["custom"], dict):
attrs["custom"]["role"] = user_role
context = create_context(str(user_id), attrs)
else:
# Full context with user segments and metadata from database
try:
user_data = await _fetch_user_context_data(user_id)
except ImportError as e:
# Database modules not available - fallback to simple context
logger.debug(f"Database modules not available: {e}")
user_data = {}
except Exception as e:
# Database error - log and fallback to simple context
logger.warning(f"Failed to fetch user context for {user_id}: {e}")
user_data = {}
# Merge additional attributes and role
attrs = additional_attributes or {}
# If user_role is provided, add it to custom attributes
if user_role:
if "custom" not in user_data:
user_data["custom"] = {}
user_data["custom"]["role"] = user_role
# Merge additional attributes with user data
# Handle custom attributes specially
if "custom" in attrs and isinstance(attrs["custom"], dict):
if "custom" not in user_data:
user_data["custom"] = {}
user_data["custom"].update(attrs["custom"])
# Remove custom from attrs to avoid duplication
attrs = {k: v for k, v in attrs.items() if k != "custom"}
# Merge remaining attributes
final_attrs = {**user_data, **attrs}
context = create_context(str(user_id), final_attrs)
# Evaluate the flag
result = client.variation(flag_key, context, default)
logger.debug(
f"Feature flag {flag_key} for user {user_id}: {result} "
f"(use_user_id_only: {use_user_id_only})"
)
return result
except Exception as e:
logger.debug(
f"LaunchDarkly flag evaluation failed for {flag_key}: {e}, using default={default}"
)
return default
def feature_flag(
flag_key: str,
default: bool = False,
) -> Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]:
"""
Decorator for async feature flag protected endpoints.
Args:
flag_key: The LaunchDarkly feature flag key
default: Default value if flag evaluation fails
Returns:
Decorator that only works with async functions
"""
def decorator(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
@wraps(func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
# Use the unified function with full context support
is_enabled = await is_feature_enabled(
flag_key, str(user_id), default, use_user_id_only=False
)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
return await func(*args, **kwargs)
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
return async_wrapper
return decorator
@contextlib.contextmanager
def mock_flag_variation(flag_key: str, return_value: Any):
"""Context manager for testing feature flags."""
original_variation = get_client().variation
get_client().variation = lambda key, context, default: (
return_value if key == flag_key else original_variation(key, context, default)
)
try:
yield
finally:
get_client().variation = original_variation
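
A short sketch of the helper in a test, with the flag key assumed; once `variation` is patched, `is_feature_enabled` resolves without a real LaunchDarkly round trip:

```python
import asyncio

async def check_beta_flag() -> bool:
    with mock_flag_variation("beta-ui", True):
        # use_user_id_only=True skips the database context fetch
        return await is_feature_enabled(
            "beta-ui", "user-123", default=False, use_user_id_only=True
        )

# asyncio.run(check_beta_flag()) -> True (assuming an initialized client)
```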

View File

@@ -0,0 +1,84 @@
import pytest
from fastapi import HTTPException
from ldclient import LDClient
from autogpt_libs.feature_flag.client import (
feature_flag,
is_feature_enabled,
mock_flag_variation,
)
@pytest.fixture
def ld_client(mocker):
client = mocker.Mock(spec=LDClient)
mocker.patch("ldclient.get", return_value=client)
client.is_initialized.return_value = True
return client
@pytest.mark.asyncio
async def test_feature_flag_enabled(ld_client):
ld_client.variation.return_value = True
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
result = await test_function(user_id="test-user")
assert result == "success"
ld_client.variation.assert_called_once()
@pytest.mark.asyncio
async def test_feature_flag_unauthorized_response(ld_client):
ld_client.variation.return_value = False
@feature_flag("test-flag")
async def test_function(user_id: str):
return "success"
with pytest.raises(HTTPException) as exc_info:
await test_function(user_id="test-user")
assert exc_info.value.status_code == 404
def test_mock_flag_variation(ld_client):
with mock_flag_variation("test-flag", True):
assert ld_client.variation("test-flag", None, False)
with mock_flag_variation("test-flag", False):
assert not ld_client.variation("test-flag", None, False)
@pytest.mark.asyncio
async def test_is_feature_enabled(ld_client):
"""Test the is_feature_enabled helper function."""
ld_client.is_initialized.return_value = True
ld_client.variation.return_value = True
result = await is_feature_enabled("test-flag", "user123", default=False)
assert result is True
ld_client.variation.assert_called_once()
call_args = ld_client.variation.call_args
assert call_args[0][0] == "test-flag" # flag_key
assert call_args[0][2] is False # default value
@pytest.mark.asyncio
async def test_is_feature_enabled_not_initialized(ld_client):
"""Test is_feature_enabled when LaunchDarkly is not initialized."""
ld_client.is_initialized.return_value = False
result = await is_feature_enabled("test-flag", "user123", default=True)
assert result is True # Should return default
ld_client.variation.assert_not_called()
@pytest.mark.asyncio
async def test_is_feature_enabled_exception(mocker):
"""Test is_feature_enabled when get_client() raises an exception."""
mocker.patch(
"autogpt_libs.feature_flag.client.get_client",
side_effect=Exception("Client error"),
)
result = await is_feature_enabled("test-flag", "user123", default=True)
assert result is True # Should return default

View File

@@ -0,0 +1,15 @@
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
launch_darkly_sdk_key: str = Field(
default="",
description="The Launch Darkly SDK key",
validation_alias="LAUNCH_DARKLY_SDK_KEY",
)
model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")
SETTINGS = Settings()

View File

@@ -1,8 +1,6 @@
"""Logging module for Auto-GPT."""
import logging
import os
import socket
import sys
from pathlib import Path
@@ -12,15 +10,6 @@ from pydantic_settings import BaseSettings, SettingsConfigDict
from .filters import BelowLevelFilter
from .formatters import AGPTFormatter
# Configure global socket timeout and gRPC keepalive to prevent deadlocks
# This must be done at import time before any gRPC connections are established
socket.setdefaulttimeout(30) # 30-second socket timeout
# Enable gRPC keepalive to detect dead connections faster
os.environ.setdefault("GRPC_KEEPALIVE_TIME_MS", "30000") # 30 seconds
os.environ.setdefault("GRPC_KEEPALIVE_TIMEOUT_MS", "5000") # 5 seconds
os.environ.setdefault("GRPC_KEEPALIVE_PERMIT_WITHOUT_CALLS", "true")
LOG_DIR = Path(__file__).parent.parent.parent.parent / "logs"
LOG_FILE = "activity.log"
DEBUG_LOG_FILE = "debug.log"
@@ -90,6 +79,7 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
Note: This function is typically called at the start of the application
to set up the logging infrastructure.
"""
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
@@ -115,17 +105,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import (
BackgroundThreadTransport,
)
from google.cloud.logging_v2.handlers.transports.sync import SyncTransport
client = google.cloud.logging.Client()
# Use BackgroundThreadTransport to prevent blocking the main thread
# and deadlocks when gRPC calls to Google Cloud Logging hang
cloud_handler = CloudLoggingHandler(
client,
name="autogpt_logs",
transport=BackgroundThreadTransport,
transport=SyncTransport,
)
cloud_handler.setLevel(config.level)
log_handlers.append(cloud_handler)

View File

@@ -1,5 +1,39 @@
import logging
import re
from typing import Any
import uvicorn.config
from colorama import Fore
def remove_color_codes(s: str) -> str:
return re.sub(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])", "", s)
def fmt_kwargs(kwargs: dict) -> str:
return ", ".join(f"{n}={repr(v)}" for n, v in kwargs.items())
def print_attribute(
title: str, value: Any, title_color: str = Fore.GREEN, value_color: str = ""
) -> None:
logger = logging.getLogger()
logger.info(
str(value),
extra={
"title": f"{title.rstrip(':')}:",
"title_color": title_color,
"color": value_color,
},
)
def generate_uvicorn_config():
"""
Generates a uvicorn logging config that silences uvicorn's default logging and tells it to use the native logging module.
"""
log_config = dict(uvicorn.config.LOGGING_CONFIG)
log_config["loggers"]["uvicorn"] = {"handlers": []}
log_config["loggers"]["uvicorn.error"] = {"handlers": []}
log_config["loggers"]["uvicorn.access"] = {"handlers": []}
return log_config
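
Hedged usage: the returned dict is handed to uvicorn so its records flow through the handlers set up by the application's logging configuration (the app import string is a placeholder):

```python
import uvicorn

if __name__ == "__main__":
    uvicorn.run("backend.app:app", log_config=generate_uvicorn_config())
```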

View File

@@ -4,6 +4,7 @@ import threading
import time
from functools import wraps
from typing import (
Any,
Awaitable,
Callable,
ParamSpec,
@@ -22,13 +23,11 @@ logger = logging.getLogger(__name__)
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]:
pass
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]: ...
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
pass
def thread_cached(func: Callable[P, R]) -> Callable[P, R]: ...
def thread_cached(
@@ -76,32 +75,26 @@ def clear_thread_cache(func: Callable) -> None:
clear()
FuncT = TypeVar("FuncT")
R_co = TypeVar("R_co", covariant=True)
@runtime_checkable
class AsyncCachedFunction(Protocol[P, R_co]):
class AsyncCachedFunction(Protocol):
"""Protocol for async functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
return None
def cache_info(self) -> dict[str, int | None]:
def cache_info(self) -> dict[str, Any]:
"""Get cache statistics."""
return {}
async def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
async def __call__(self, *args: Any, **kwargs: Any) -> Any:
"""Call the cached function."""
return None # type: ignore
return None
def async_ttl_cache(
maxsize: int = 128, ttl_seconds: int | None = None
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
) -> Callable[[Callable[..., Awaitable[Any]]], AsyncCachedFunction]:
"""
TTL (Time To Live) cache decorator for async functions.
@@ -127,13 +120,13 @@ def async_ttl_cache(
"""
def decorator(
async_func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
async_func: Callable[..., Awaitable[Any]],
) -> AsyncCachedFunction:
# Cache storage - use union type to handle both cases
cache_storage: dict[tuple, R | Tuple[R, float]] = {}
cache_storage: dict[Any, Any | Tuple[Any, float]] = {}
@wraps(async_func)
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
async def wrapper(*args, **kwargs):
# Create cache key from arguments
key = (args, tuple(sorted(kwargs.items())))
current_time = time.time()
@@ -145,7 +138,7 @@ def async_ttl_cache(
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, cache_storage[key])
return cache_storage[key]
else:
# With TTL - check expiration
cached_data = cache_storage[key]
@@ -155,7 +148,7 @@ def async_ttl_cache(
logger.debug(
f"Cache hit for {async_func.__name__} with key: {str(key)[:50]}"
)
return cast(R, result)
return result
else:
# Expired entry
del cache_storage[key]
@@ -192,7 +185,7 @@ def async_ttl_cache(
def cache_clear() -> None:
cache_storage.clear()
def cache_info() -> dict[str, int | None]:
def cache_info() -> dict[str, Any]:
return {
"size": len(cache_storage),
"maxsize": maxsize,
@@ -203,35 +196,14 @@ def async_ttl_cache(
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
return cast(AsyncCachedFunction[P, R], wrapper)
return cast(AsyncCachedFunction, wrapper)
return decorator
@overload
def async_cache(
func: Callable[P, Awaitable[R]],
) -> AsyncCachedFunction[P, R]:
pass
@overload
def async_cache(
func: None = None,
*,
maxsize: int = 128,
) -> Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]:
pass
def async_cache(
func: Callable[P, Awaitable[R]] | None = None,
*,
maxsize: int = 128,
) -> (
AsyncCachedFunction[P, R]
| Callable[[Callable[P, Awaitable[R]]], AsyncCachedFunction[P, R]]
):
) -> Callable[[Callable[..., Awaitable[Any]]], AsyncCachedFunction]:
"""
Process-level cache decorator for async functions (no TTL).
@@ -239,28 +211,15 @@ def async_cache(
This is a convenience wrapper around async_ttl_cache with ttl_seconds=None.
Args:
func: The async function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
Returns:
Decorated function or decorator
Decorator function
Example:
# Without parentheses (uses default maxsize=128)
@async_cache
async def get_data(param: str) -> dict:
return {"result": param}
# With parentheses and custom maxsize
@async_cache(maxsize=1000)
async def expensive_computation(param: str) -> dict:
# Expensive computation here
return {"result": param}
"""
if func is None:
# Called with parentheses @async_cache() or @async_cache(maxsize=...)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
else:
# Called without parentheses @async_cache
decorator = async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
return decorator(func)
return async_ttl_cache(maxsize=maxsize, ttl_seconds=None)
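
A usage sketch of the TTL variant, with the names and TTL illustrative; the attached `cache_info`/`cache_clear` helpers come from the decorator above:

```python
@async_ttl_cache(maxsize=256, ttl_seconds=60)
async def fetch_profile(user_id: str) -> dict:
    # Stand-in for an expensive lookup; results are reused for 60 seconds
    return {"id": user_id}

# After a call: fetch_profile.cache_info() -> {"size": 1, "maxsize": 256, ...}
# fetch_profile.cache_clear() empties the cache.
```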

View File

@@ -461,7 +461,7 @@ class TestAsyncTTLCache:
# Cache size should be reduced (cleanup removes oldest entries)
info = size_limited_function.cache_info()
assert info["size"] is not None and info["size"] <= 3 # Should be cleaned up
assert info["size"] <= 3 # Should be cleaned up
@pytest.mark.asyncio
async def test_argument_variations(self):

View File

@@ -54,7 +54,7 @@ version = "1.2.0"
description = "Backport of asyncio.Runner, a context manager that controls event loop life cycle."
optional = false
python-versions = "<3.11,>=3.8"
groups = ["dev"]
groups = ["main"]
markers = "python_version < \"3.11\""
files = [
{file = "backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5"},
@@ -85,87 +85,6 @@ files = [
{file = "certifi-2025.7.14.tar.gz", hash = "sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995"},
]
[[package]]
name = "cffi"
version = "1.17.1"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
]
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "3.4.2"
@@ -289,176 +208,12 @@ version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main", "dev"]
groups = ["main"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
[[package]]
name = "coverage"
version = "7.10.5"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "coverage-7.10.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c6a5c3414bfc7451b879141ce772c546985163cf553f08e0f135f0699a911801"},
{file = "coverage-7.10.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bc8e4d99ce82f1710cc3c125adc30fd1487d3cf6c2cd4994d78d68a47b16989a"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:02252dc1216e512a9311f596b3169fad54abcb13827a8d76d5630c798a50a754"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:73269df37883e02d460bee0cc16be90509faea1e3bd105d77360b512d5bb9c33"},
{file = "coverage-7.10.5-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f8a81b0614642f91c9effd53eec284f965577591f51f547a1cbeb32035b4c2f"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6a29f8e0adb7f8c2b95fa2d4566a1d6e6722e0a637634c6563cb1ab844427dd9"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:fcf6ab569436b4a647d4e91accba12509ad9f2554bc93d3aee23cc596e7f99c3"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:90dc3d6fb222b194a5de60af8d190bedeeddcbc7add317e4a3cd333ee6b7c879"},
{file = "coverage-7.10.5-cp310-cp310-win32.whl", hash = "sha256:414a568cd545f9dc75f0686a0049393de8098414b58ea071e03395505b73d7a8"},
{file = "coverage-7.10.5-cp310-cp310-win_amd64.whl", hash = "sha256:e551f9d03347196271935fd3c0c165f0e8c049220280c1120de0084d65e9c7ff"},
{file = "coverage-7.10.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c177e6ffe2ebc7c410785307758ee21258aa8e8092b44d09a2da767834f075f2"},
{file = "coverage-7.10.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:14d6071c51ad0f703d6440827eaa46386169b5fdced42631d5a5ac419616046f"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:61f78c7c3bc272a410c5ae3fde7792b4ffb4acc03d35a7df73ca8978826bb7ab"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f39071caa126f69d63f99b324fb08c7b1da2ec28cbb1fe7b5b1799926492f65c"},
{file = "coverage-7.10.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343a023193f04d46edc46b2616cdbee68c94dd10208ecd3adc56fcc54ef2baa1"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:585ffe93ae5894d1ebdee69fc0b0d4b7c75d8007983692fb300ac98eed146f78"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:b0ef4e66f006ed181df29b59921bd8fc7ed7cd6a9289295cd8b2824b49b570df"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eb7b0bbf7cc1d0453b843eca7b5fa017874735bef9bfdfa4121373d2cc885ed6"},
{file = "coverage-7.10.5-cp311-cp311-win32.whl", hash = "sha256:1d043a8a06987cc0c98516e57c4d3fc2c1591364831e9deb59c9e1b4937e8caf"},
{file = "coverage-7.10.5-cp311-cp311-win_amd64.whl", hash = "sha256:fefafcca09c3ac56372ef64a40f5fe17c5592fab906e0fdffd09543f3012ba50"},
{file = "coverage-7.10.5-cp311-cp311-win_arm64.whl", hash = "sha256:7e78b767da8b5fc5b2faa69bb001edafcd6f3995b42a331c53ef9572c55ceb82"},
{file = "coverage-7.10.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c2d05c7e73c60a4cecc7d9b60dbfd603b4ebc0adafaef371445b47d0f805c8a9"},
{file = "coverage-7.10.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:32ddaa3b2c509778ed5373b177eb2bf5662405493baeff52278a0b4f9415188b"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dd382410039fe062097aa0292ab6335a3f1e7af7bba2ef8d27dcda484918f20c"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7fa22800f3908df31cea6fb230f20ac49e343515d968cc3a42b30d5c3ebf9b5a"},
{file = "coverage-7.10.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f366a57ac81f5e12797136552f5b7502fa053c861a009b91b80ed51f2ce651c6"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5f1dc8f1980a272ad4a6c84cba7981792344dad33bf5869361576b7aef42733a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:2285c04ee8676f7938b02b4936d9b9b672064daab3187c20f73a55f3d70e6b4a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c2492e4dd9daab63f5f56286f8a04c51323d237631eb98505d87e4c4ff19ec34"},
{file = "coverage-7.10.5-cp312-cp312-win32.whl", hash = "sha256:38a9109c4ee8135d5df5505384fc2f20287a47ccbe0b3f04c53c9a1989c2bbaf"},
{file = "coverage-7.10.5-cp312-cp312-win_amd64.whl", hash = "sha256:6b87f1ad60b30bc3c43c66afa7db6b22a3109902e28c5094957626a0143a001f"},
{file = "coverage-7.10.5-cp312-cp312-win_arm64.whl", hash = "sha256:672a6c1da5aea6c629819a0e1461e89d244f78d7b60c424ecf4f1f2556c041d8"},
{file = "coverage-7.10.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ef3b83594d933020f54cf65ea1f4405d1f4e41a009c46df629dd964fcb6e907c"},
{file = "coverage-7.10.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2b96bfdf7c0ea9faebce088a3ecb2382819da4fbc05c7b80040dbc428df6af44"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:63df1fdaffa42d914d5c4d293e838937638bf75c794cf20bee12978fc8c4e3bc"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8002dc6a049aac0e81ecec97abfb08c01ef0c1fbf962d0c98da3950ace89b869"},
{file = "coverage-7.10.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:63d4bb2966d6f5f705a6b0c6784c8969c468dbc4bcf9d9ded8bff1c7e092451f"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1f672efc0731a6846b157389b6e6d5d5e9e59d1d1a23a5c66a99fd58339914d5"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3f39cef43d08049e8afc1fde4a5da8510fc6be843f8dea350ee46e2a26b2f54c"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:2968647e3ed5a6c019a419264386b013979ff1fb67dd11f5c9886c43d6a31fc2"},
{file = "coverage-7.10.5-cp313-cp313-win32.whl", hash = "sha256:0d511dda38595b2b6934c2b730a1fd57a3635c6aa2a04cb74714cdfdd53846f4"},
{file = "coverage-7.10.5-cp313-cp313-win_amd64.whl", hash = "sha256:9a86281794a393513cf117177fd39c796b3f8e3759bb2764259a2abba5cce54b"},
{file = "coverage-7.10.5-cp313-cp313-win_arm64.whl", hash = "sha256:cebd8e906eb98bb09c10d1feed16096700b1198d482267f8bf0474e63a7b8d84"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0520dff502da5e09d0d20781df74d8189ab334a1e40d5bafe2efaa4158e2d9e7"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d9cd64aca68f503ed3f1f18c7c9174cbb797baba02ca8ab5112f9d1c0328cd4b"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0913dd1613a33b13c4f84aa6e3f4198c1a21ee28ccb4f674985c1f22109f0aae"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:1b7181c0feeb06ed8a02da02792f42f829a7b29990fef52eff257fef0885d760"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36d42b7396b605f774d4372dd9c49bed71cbabce4ae1ccd074d155709dd8f235"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b4fdc777e05c4940b297bf47bf7eedd56a39a61dc23ba798e4b830d585486ca5"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:42144e8e346de44a6f1dbd0a56575dd8ab8dfa7e9007da02ea5b1c30ab33a7db"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:66c644cbd7aed8fe266d5917e2c9f65458a51cfe5eeff9c05f15b335f697066e"},
{file = "coverage-7.10.5-cp313-cp313t-win32.whl", hash = "sha256:2d1b73023854068c44b0c554578a4e1ef1b050ed07cf8b431549e624a29a66ee"},
{file = "coverage-7.10.5-cp313-cp313t-win_amd64.whl", hash = "sha256:54a1532c8a642d8cc0bd5a9a51f5a9dcc440294fd06e9dda55e743c5ec1a8f14"},
{file = "coverage-7.10.5-cp313-cp313t-win_arm64.whl", hash = "sha256:74d5b63fe3f5f5d372253a4ef92492c11a4305f3550631beaa432fc9df16fcff"},
{file = "coverage-7.10.5-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:68c5e0bc5f44f68053369fa0d94459c84548a77660a5f2561c5e5f1e3bed7031"},
{file = "coverage-7.10.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cf33134ffae93865e32e1e37df043bef15a5e857d8caebc0099d225c579b0fa3"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ad8fa9d5193bafcf668231294241302b5e683a0518bf1e33a9a0dfb142ec3031"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:146fa1531973d38ab4b689bc764592fe6c2f913e7e80a39e7eeafd11f0ef6db2"},
{file = "coverage-7.10.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6013a37b8a4854c478d3219ee8bc2392dea51602dd0803a12d6f6182a0061762"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:eb90fe20db9c3d930fa2ad7a308207ab5b86bf6a76f54ab6a40be4012d88fcae"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:384b34482272e960c438703cafe63316dfbea124ac62006a455c8410bf2a2262"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:467dc74bd0a1a7de2bedf8deaf6811f43602cb532bd34d81ffd6038d6d8abe99"},
{file = "coverage-7.10.5-cp314-cp314-win32.whl", hash = "sha256:556d23d4e6393ca898b2e63a5bca91e9ac2d5fb13299ec286cd69a09a7187fde"},
{file = "coverage-7.10.5-cp314-cp314-win_amd64.whl", hash = "sha256:f4446a9547681533c8fa3e3c6cf62121eeee616e6a92bd9201c6edd91beffe13"},
{file = "coverage-7.10.5-cp314-cp314-win_arm64.whl", hash = "sha256:5e78bd9cf65da4c303bf663de0d73bf69f81e878bf72a94e9af67137c69b9fe9"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5661bf987d91ec756a47c7e5df4fbcb949f39e32f9334ccd3f43233bbb65e508"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a46473129244db42a720439a26984f8c6f834762fc4573616c1f37f13994b357"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1f64b8d3415d60f24b058b58d859e9512624bdfa57a2d1f8aff93c1ec45c429b"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:44d43de99a9d90b20e0163f9770542357f58860a26e24dc1d924643bd6aa7cb4"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a931a87e5ddb6b6404e65443b742cb1c14959622777f2a4efd81fba84f5d91ba"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f9559b906a100029274448f4c8b8b0a127daa4dade5661dfd821b8c188058842"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b08801e25e3b4526ef9ced1aa29344131a8f5213c60c03c18fe4c6170ffa2874"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ed9749bb8eda35f8b636fb7632f1c62f735a236a5d4edadd8bbcc5ea0542e732"},
{file = "coverage-7.10.5-cp314-cp314t-win32.whl", hash = "sha256:609b60d123fc2cc63ccee6d17e4676699075db72d14ac3c107cc4976d516f2df"},
{file = "coverage-7.10.5-cp314-cp314t-win_amd64.whl", hash = "sha256:0666cf3d2c1626b5a3463fd5b05f5e21f99e6aec40a3192eee4d07a15970b07f"},
{file = "coverage-7.10.5-cp314-cp314t-win_arm64.whl", hash = "sha256:bc85eb2d35e760120540afddd3044a5bf69118a91a296a8b3940dfc4fdcfe1e2"},
{file = "coverage-7.10.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:62835c1b00c4a4ace24c1a88561a5a59b612fbb83a525d1c70ff5720c97c0610"},
{file = "coverage-7.10.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5255b3bbcc1d32a4069d6403820ac8e6dbcc1d68cb28a60a1ebf17e47028e898"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3876385722e335d6e991c430302c24251ef9c2a9701b2b390f5473199b1b8ebf"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8048ce4b149c93447a55d279078c8ae98b08a6951a3c4d2d7e87f4efc7bfe100"},
{file = "coverage-7.10.5-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4028e7558e268dd8bcf4d9484aad393cafa654c24b4885f6f9474bf53183a82a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:03f47dc870eec0367fcdd603ca6a01517d2504e83dc18dbfafae37faec66129a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:2d488d7d42b6ded7ea0704884f89dcabd2619505457de8fc9a6011c62106f6e5"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b3dcf2ead47fa8be14224ee817dfc1df98043af568fe120a22f81c0eb3c34ad2"},
{file = "coverage-7.10.5-cp39-cp39-win32.whl", hash = "sha256:02650a11324b80057b8c9c29487020073d5e98a498f1857f37e3f9b6ea1b2426"},
{file = "coverage-7.10.5-cp39-cp39-win_amd64.whl", hash = "sha256:b45264dd450a10f9e03237b41a9a24e85cbb1e278e5a32adb1a303f58f0017f3"},
{file = "coverage-7.10.5-py3-none-any.whl", hash = "sha256:0be24d35e4db1d23d0db5c0f6a74a962e2ec83c426b5cac09f4234aadef38e4a"},
{file = "coverage-7.10.5.tar.gz", hash = "sha256:f2e57716a78bc3ae80b2207be0709a3b2b63b9f2dcf9740ee6ac03588a2015b6"},
]
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
[[package]]
name = "cryptography"
version = "45.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.7"
groups = ["main"]
files = [
{file = "cryptography-45.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:048e7ad9e08cf4c0ab07ff7f36cc3115924e22e2266e034450a890d9e312dd74"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:44647c5d796f5fc042bbc6d61307d04bf29bccb74d188f18051b635f20a9c75f"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e40b80ecf35ec265c452eea0ba94c9587ca763e739b8e559c128d23bff7ebbbf"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:00e8724bdad672d75e6f069b27970883179bd472cd24a63f6e620ca7e41cc0c5"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7a3085d1b319d35296176af31c90338eeb2ddac8104661df79f80e1d9787b8b2"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1b7fa6a1c1188c7ee32e47590d16a5a0646270921f8020efc9a511648e1b2e08"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:275ba5cc0d9e320cd70f8e7b96d9e59903c815ca579ab96c1e37278d231fc402"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4028f29a9f38a2025abedb2e409973709c660d44319c61762202206ed577c42"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ee411a1b977f40bd075392c80c10b58025ee5c6b47a822a33c1198598a7a5f05"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e2a21a8eda2d86bb604934b6b37691585bd095c1f788530c1fcefc53a82b3453"},
{file = "cryptography-45.0.6-cp311-abi3-win32.whl", hash = "sha256:d063341378d7ee9c91f9d23b431a3502fc8bfacd54ef0a27baa72a0843b29159"},
{file = "cryptography-45.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:833dc32dfc1e39b7376a87b9a6a4288a10aae234631268486558920029b086ec"},
{file = "cryptography-45.0.6-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:3436128a60a5e5490603ab2adbabc8763613f638513ffa7d311c900a8349a2a0"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0d9ef57b6768d9fa58e92f4947cea96ade1233c0e236db22ba44748ffedca394"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea3c42f2016a5bbf71825537c2ad753f2870191134933196bee408aac397b3d9"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:20ae4906a13716139d6d762ceb3e0e7e110f7955f3bc3876e3a07f5daadec5f3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2dac5ec199038b8e131365e2324c03d20e97fe214af051d20c49db129844e8b3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:18f878a34b90d688982e43f4b700408b478102dd58b3e39de21b5ebf6509c301"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5bd6020c80c5b2b2242d6c48487d7b85700f5e0038e67b29d706f98440d66eb5"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:eccddbd986e43014263eda489abbddfbc287af5cddfd690477993dbb31e31016"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:550ae02148206beb722cfe4ef0933f9352bab26b087af00e48fdfb9ade35c5b3"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5b64e668fc3528e77efa51ca70fadcd6610e8ab231e3e06ae2bab3b31c2b8ed9"},
{file = "cryptography-45.0.6-cp37-abi3-win32.whl", hash = "sha256:780c40fb751c7d2b0c6786ceee6b6f871e86e8718a8ff4bc35073ac353c7cd02"},
{file = "cryptography-45.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:20d15aed3ee522faac1a39fbfdfee25d17b1284bafd808e1640a74846d7c4d1b"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:705bb7c7ecc3d79a50f236adda12ca331c8e7ecfbea51edd931ce5a7a7c4f012"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:826b46dae41a1155a0c0e66fafba43d0ede1dc16570b95e40c4d83bfcf0a451d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:cc4d66f5dc4dc37b89cfef1bd5044387f7a1f6f0abb490815628501909332d5d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:f68f833a9d445cc49f01097d95c83a850795921b3f7cc6488731e69bde3288da"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3b5bf5267e98661b9b888a9250d05b063220dfa917a8203744454573c7eb79db"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:2384f2ab18d9be88a6e4f8972923405e2dbb8d3e16c6b43f15ca491d7831bd18"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:fc022c1fa5acff6def2fc6d7819bbbd31ccddfe67d075331a65d9cfb28a20983"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3de77e4df42ac8d4e4d6cdb342d989803ad37707cf8f3fbf7b088c9cbdd46427"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:599c8d7df950aa68baa7e98f7b73f4f414c9f02d0e8104a30c0182a07732638b"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:31a2b9a10530a1cb04ffd6aa1cd4d3be9ed49f7d77a4dafe198f3b382f41545c"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:e5b3dda1b00fb41da3af4c5ef3f922a200e33ee5ba0f0bc9ecf0b0c173958385"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:629127cfdcdc6806dfe234734d7cb8ac54edaf572148274fa377a7d3405b0043"},
{file = "cryptography-45.0.6.tar.gz", hash = "sha256:5c966c732cf6e4a276ce83b6e4c729edda2df6929083a952cc7da973c539c719"},
]
[package.dependencies]
cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==45.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "deprecation"
version = "2.1.0"
@@ -480,7 +235,7 @@ version = "1.3.0"
description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
groups = ["main", "dev"]
groups = ["main"]
markers = "python_version < \"3.11\""
files = [
{file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"},
@@ -955,7 +710,7 @@ version = "2.1.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
groups = ["main"]
files = [
{file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"},
{file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"},
@@ -1002,18 +757,6 @@ dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "nodeenv"
version = "1.9.1"
description = "Node.js virtual environment builder"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["dev"]
files = [
{file = "nodeenv-1.9.1-py2.py3-none-any.whl", hash = "sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9"},
{file = "nodeenv-1.9.1.tar.gz", hash = "sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f"},
]
[[package]]
name = "opentelemetry-api"
version = "1.35.0"
@@ -1036,7 +779,7 @@ version = "25.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
groups = ["main"]
files = [
{file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
{file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
@@ -1048,7 +791,7 @@ version = "1.6.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
groups = ["main"]
files = [
{file = "pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"},
{file = "pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3"},
@@ -1140,19 +883,6 @@ files = [
[package.dependencies]
pyasn1 = ">=0.6.1,<0.7.0"
[[package]]
name = "pycparser"
version = "2.22"
description = "C parser in Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
{file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
]
[[package]]
name = "pydantic"
version = "2.11.7"
@@ -1317,7 +1047,7 @@ version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
groups = ["main"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
@@ -1338,9 +1068,6 @@ files = [
{file = "pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953"},
]
[package.dependencies]
cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}
[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
@@ -1359,34 +1086,13 @@ files = [
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pyright"
version = "1.1.404"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
]
[package.dependencies]
nodeenv = ">=1.6.0"
typing-extensions = ">=4.1"
[package.extras]
all = ["nodejs-wheel-binaries", "twine (>=3.4.1)"]
dev = ["twine (>=3.4.1)"]
nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.4.1"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
groups = ["main"]
files = [
{file = "pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7"},
{file = "pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"},
@@ -1410,7 +1116,7 @@ version = "1.1.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
groups = ["main"]
files = [
{file = "pytest_asyncio-1.1.0-py3-none-any.whl", hash = "sha256:5fe2d69607b0bd75c656d1211f969cadba035030156745ee09e7d71740e58ecf"},
{file = "pytest_asyncio-1.1.0.tar.gz", hash = "sha256:796aa822981e01b68c12e4827b8697108f7205020f24b5793b3c41555dab68ea"},
@@ -1424,33 +1130,13 @@ pytest = ">=8.2,<9"
docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"]
testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
[[package]]
name = "pytest-cov"
version = "6.2.1"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pytest_cov-6.2.1-py3-none-any.whl", hash = "sha256:f5bc4c23f42f1cdd23c70b1dab1bbaef4fc505ba950d53e0081d0730dd7e86d5"},
{file = "pytest_cov-6.2.1.tar.gz", hash = "sha256:25cc6cc0a5358204b8108ecedc51a9b57b34cc6b8c967cc2c01a4e00d8a67da2"},
]
[package.dependencies]
coverage = {version = ">=7.5", extras = ["toml"]}
pluggy = ">=1.2"
pytest = ">=6.2.5"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
[[package]]
name = "pytest-mock"
version = "3.14.1"
description = "Thin-wrapper around the mock package for easier use with pytest"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
groups = ["main"]
files = [
{file = "pytest_mock-3.14.1-py3-none-any.whl", hash = "sha256:178aefcd11307d874b4cd3100344e7e2d888d9791a6a1d9bfe90fbc1b74fd1d0"},
{file = "pytest_mock-3.14.1.tar.gz", hash = "sha256:159e9edac4c451ce77a5cdb9fc5d1100708d2dd4ba3c3df572f14097351af80e"},
@@ -1567,31 +1253,30 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.12.11"
version = "0.12.3"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
{file = "ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2"},
{file = "ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041"},
{file = "ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e"},
{file = "ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b"},
{file = "ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f"},
{file = "ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d"},
{file = "ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7"},
{file = "ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1"},
{file = "ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77"},
]
[[package]]
@@ -1725,7 +1410,7 @@ version = "2.2.1"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
groups = ["main"]
markers = "python_version < \"3.11\""
files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
@@ -1768,7 +1453,7 @@ version = "4.14.1"
description = "Backported and Experimental Type Hints for Python 3.9+"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
groups = ["main"]
files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
@@ -1929,4 +1614,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"
content-hash = "f67db13e6f68b1d67a55eee908c1c560bfa44da8509f98f842889a7570a9830f"

View File

@@ -9,25 +9,21 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
launchdarkly-server-sdk = "^9.12.0"
pydantic = "^2.11.7"
pydantic-settings = "^2.10.1"
pyjwt = { version = "^2.10.1", extras = ["crypto"] }
pyjwt = "^2.10.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
redis = "^6.2.0"
supabase = "^2.16.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
pyright = "^1.1.404"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
ruff = "^0.12.3"
[build-system]
requires = ["poetry-core"]

View File

@@ -1,52 +0,0 @@
# Development and testing files
**/__pycache__
**/*.pyc
**/*.pyo
**/*.pyd
**/.Python
**/env/
**/venv/
**/.venv/
**/pip-log.txt
**/.pytest_cache/
**/test-results/
**/snapshots/
**/test/
# IDE and editor files
**/.vscode/
**/.idea/
**/*.swp
**/*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Logs
**/*.log
**/logs/
# Git
.git/
.gitignore
# Documentation
**/*.md
!README.md
# Local development files
.env
.env.local
**/.env.test
# Build artifacts
**/dist/
**/build/
**/target/
# Docker files (avoid recursion)
Dockerfile*
docker-compose*
.dockerignore

View File

@@ -1,9 +1,3 @@
# Backend Configuration
# This file contains environment variables that MUST be set for the AutoGPT platform
# Variables with working defaults in settings.py are not included here
## ===== REQUIRED DATABASE CONFIGURATION ===== ##
# PostgreSQL Database Connection
DB_USER=postgres
DB_PASS=your-super-secret-and-long-postgres-password
DB_NAME=postgres
@@ -17,48 +11,71 @@ DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
# EXECUTOR
NUM_GRAPH_WORKERS=10
BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
# generate using `from cryptography.fernet import Fernet;Fernet.generate_key().decode()`
ENCRYPTION_KEY='dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw='
UNSUBSCRIBE_SECRET_KEY = 'HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio='
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=password
# RabbitMQ Credentials
ENABLE_CREDIT=false
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# What environment things should be logged under: local dev or prod
APP_ENV=local
# What environment to behave as: "local" or "cloud"
BEHAVE_AS=local
PYRO_HOST=localhost
SENTRY_DSN=
# Email for Postmark so we can send emails
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
## User auth with Supabase is required for any of the 3rd party integrations with auth to work.
ENABLE_AUTH=true
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
# RabbitMQ credentials -- Used for communication between services
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
JWT_VERIFY_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
ENCRYPTION_KEY=dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=
UNSUBSCRIBE_SECRET_KEY=HlP8ivStJjmbf6NKi78m_3FnOogut0t5ckzjsIqeaio=
## ===== IMPORTANT OPTIONAL CONFIGURATION ===== ##
# Platform URLs (set these for webhooks and OAuth to work)
PLATFORM_BASE_URL=http://localhost:8000
FRONTEND_BASE_URL=http://localhost:3000
# Media Storage (required for marketplace and library functionality)
## GCS bucket is required for marketplace and library functionality
MEDIA_GCS_BUCKET_NAME=
## ===== API KEYS AND OAUTH CREDENTIALS ===== ##
# All API keys below are optional - only add what you need
## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# FRONTEND_BASE_URL=http://localhost:3000
# AI/LLM Services
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
LLAMA_API_KEY=
AIML_API_KEY=
V0_API_KEY=
OPEN_ROUTER_API_KEY=
NVIDIA_API_KEY=
## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
## If you are developing locally, you can use something like ngrok to get a public URL
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=http://localhost:3000
## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the backend secret key
TURNSTILE_SECRET_KEY=
## This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
# OAuth Credentials
# For the OAuth callback URL, use <your_frontend_url>/auth/integrations/oauth_callback,
# e.g. http://localhost:3000/auth/integrations/oauth_callback
@@ -68,6 +85,7 @@ GITHUB_CLIENT_SECRET=
# Google OAuth App server credentials - https://console.cloud.google.com/apis/credentials, and enable gmail api and set scopes
# https://console.cloud.google.com/apis/credentials/consent ?project=<your_project_id>
# You'll need to add/enable the following scopes (minimum):
# https://console.developers.google.com/apis/api/gmail.googleapis.com/overview ?project=<your_project_id>
# https://console.cloud.google.com/apis/library/sheets.googleapis.com/ ?project=<your_project_id>
@@ -103,75 +121,104 @@ LINEAR_CLIENT_SECRET=
TODOIST_CLIENT_ID=
TODOIST_CLIENT_SECRET=
NOTION_CLIENT_ID=
NOTION_CLIENT_SECRET=
## ===== OPTIONAL API KEYS ===== ##
# Discord OAuth App credentials
# 1. Go to https://discord.com/developers/applications
# 2. Create a new application
# 3. Go to OAuth2 section and add redirect URI: http://localhost:3000/auth/integrations/oauth_callback
# 4. Copy Client ID and Client Secret below
DISCORD_CLIENT_ID=
DISCORD_CLIENT_SECRET=
# LLM
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AIML_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
LLAMA_API_KEY=
# Reddit
# Go to https://www.reddit.com/prefs/apps and create a new app
# Choose "script" for the type
# Fill in the redirect uri as <your_frontend_url>/auth/integrations/oauth_callback, e.g. http://localhost:3000/auth/integrations/oauth_callback
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USER_AGENT="AutoGPT:1.0 (by /u/autogpt)"
# Payment Processing
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
# Email Service (for sending notifications and confirmations)
POSTMARK_SERVER_API_TOKEN=
POSTMARK_SENDER_EMAIL=invalid@invalid.com
POSTMARK_WEBHOOK_TOKEN=
# Error Tracking
SENTRY_DSN=
# Cloudflare Turnstile (CAPTCHA) Configuration
# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
# This is the backend secret key
TURNSTILE_SECRET_KEY=
# This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
# Feature Flags
LAUNCH_DARKLY_SDK_KEY=
# Content Generation & Media
DID_API_KEY=
FAL_API_KEY=
IDEOGRAM_API_KEY=
REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
# Data & Search Services
E2B_API_KEY=
EXA_API_KEY=
JINA_API_KEY=
MEM0_API_KEY=
OPENWEATHERMAP_API_KEY=
GOOGLE_MAPS_API_KEY=
# Communication Services
# Discord
DISCORD_BOT_TOKEN=
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
# SMTP/Email
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Business & Marketing Tools
# D-ID
DID_API_KEY=
# Open Weather Map
OPENWEATHERMAP_API_KEY=
# SMTP
SMTP_SERVER=
SMTP_PORT=
SMTP_USERNAME=
SMTP_PASSWORD=
# Medium
MEDIUM_API_KEY=
MEDIUM_AUTHOR_ID=
# Google Maps
GOOGLE_MAPS_API_KEY=
# Replicate
REPLICATE_API_KEY=
# Ideogram
IDEOGRAM_API_KEY=
# Fal
FAL_API_KEY=
# Exa
EXA_API_KEY=
# E2B
E2B_API_KEY=
# Mem0
MEM0_API_KEY=
# Nvidia
NVIDIA_API_KEY=
# Apollo
APOLLO_API_KEY=
ENRICHLAYER_API_KEY=
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
# SmartLead
SMARTLEAD_API_KEY=
# ZeroBounce
ZEROBOUNCE_API_KEY=
# Other Services
AUTOMOD_API_KEY=
# Ayrshare
AYRSHARE_API_KEY=
AYRSHARE_JWT_KEY=
## ===== OPTIONAL API KEYS END ===== ##
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400
# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs
# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false
# Cloud Storage Configuration
# Cleanup interval for expired files (hours between cleanup runs, 1-24 hours)
CLOUD_STORAGE_CLEANUP_INTERVAL_HOURS=6

View File

@@ -1,4 +1,3 @@
.env
database.db
database.db-journal
dev.db

View File

@@ -1,34 +1,31 @@
FROM debian:13-slim AS builder
FROM python:3.11.10-slim-bookworm AS builder
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /app
RUN echo 'Acquire::http::Pipeline-Depth 0;\nAcquire::http::No-Cache true;\nAcquire::BrokenProxy true;\n' > /etc/apt/apt.conf.d/99fixbadproxy
# Update package list and install Python and build dependencies
RUN apt-get update --allow-releaseinfo-change --fix-missing \
&& apt-get install -y \
python3.13 \
python3.13-dev \
python3.13-venv \
python3-pip \
build-essential \
libpq5 \
libz-dev \
libssl-dev \
postgresql-client
RUN apt-get update --allow-releaseinfo-change --fix-missing
# Install build dependencies
RUN apt-get install -y build-essential
RUN apt-get install -y libpq5
RUN apt-get install -y libz-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y postgresql-client
ENV POETRY_HOME=/opt/poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=false
ENV PATH=/opt/poetry/bin:$PATH
RUN pip3 install poetry --break-system-packages
# Upgrade pip and setuptools to fix security vulnerabilities
RUN pip3 install --upgrade pip setuptools
RUN pip3 install poetry
# Copy and install dependencies
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
@@ -40,30 +37,27 @@ RUN poetry install --no-ansi --no-root
COPY autogpt_platform/backend/schema.prisma ./
RUN poetry run prisma generate
FROM debian:13-slim AS server_dependencies
FROM python:3.11.10-slim-bookworm AS server_dependencies
WORKDIR /app
ENV POETRY_HOME=/opt/poetry \
POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_CREATE=true \
POETRY_VIRTUALENVS_IN_PROJECT=true \
DEBIAN_FRONTEND=noninteractive
POETRY_VIRTUALENVS_CREATE=false
ENV PATH=/opt/poetry/bin:$PATH
# Install Python without upgrading system-managed packages
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip
# Upgrade pip and setuptools to fix security vulnerabilities
RUN pip3 install --upgrade pip setuptools
# Copy only necessary files from builder
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
COPY --from=builder /usr/local/lib/python3.11 /usr/local/lib/python3.11
COPY --from=builder /usr/local/bin /usr/local/bin
# Copy Prisma binaries
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
ENV PATH="/app/.venv/bin:$PATH"
RUN mkdir -p /app/autogpt_platform/autogpt_libs
RUN mkdir -p /app/autogpt_platform/backend
@@ -74,12 +68,6 @@ COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.tom
WORKDIR /app/autogpt_platform/backend
FROM server_dependencies AS migrate
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
FROM server_dependencies AS server
COPY autogpt_platform/backend /app/autogpt_platform/backend

View File

@@ -132,58 +132,17 @@ def test_endpoint_success(snapshot: Snapshot):
### Testing with Authentication
For the main API routes that use JWT authentication, auth is provided by the `autogpt_libs.auth` module. If the test actually uses the `user_id`, the recommended approach for testing is to mock the `get_jwt_payload` function, which underpins all higher-level auth functions used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).
If the test doesn't need the `user_id` specifically, mocking is unnecessary, since auth is disabled during tests anyway (see `conftest.py`).
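In its simplest form this boils down to a single dependency override. A minimal sketch, assuming the JWT payload carries the user ID in its `sub` claim (as in the examples below):
```python
import fastapi

from autogpt_libs.auth.jwt_utils import get_jwt_payload

app = fastapi.FastAPI()

# Authenticate every request in this module as a fixed test user.
app.dependency_overrides[get_jwt_payload] = lambda: {"sub": "test-user-id"}
```
The global fixtures described next wrap this same override with automatic cleanup.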
#### Using Global Auth Fixtures
Two global auth fixtures are provided by `backend/server/conftest.py`:
- `mock_jwt_user` - Regular user with `test_user_id` ("test-user-id")
- `mock_jwt_admin` - Admin user with `admin_user_id` ("admin-user-id")
These provide the easiest way to set up authentication mocking in test modules:
```python
import fastapi
import fastapi.testclient
import pytest
from backend.server.v2.myroute import router
def override_auth_middleware():
return {"sub": "test-user-id"}
app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)
def override_get_user_id():
return "test-user-id"
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user['get_jwt_payload']
yield
app.dependency_overrides.clear()
app.dependency_overrides[auth_middleware] = override_auth_middleware
app.dependency_overrides[get_user_id] = override_get_user_id
```
For admin-only endpoints, use `mock_jwt_admin` instead:
```python
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_admin):
"""Setup auth overrides for admin tests"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_admin['get_jwt_payload']
yield
app.dependency_overrides.clear()
```
The IDs are also available separately as fixtures (one is used in the sketch after this list):
- `test_user_id`
- `admin_user_id`
- `target_user_id` (for admin <-> user operations)
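A minimal sketch of using one of these ID fixtures in a test; the route path and response shape are illustrative assumptions, not part of the API:
```python
def test_lists_only_my_graphs(test_user_id: str):
    # setup_app_auth (above) already authenticates requests as test_user_id
    response = client.get("/api/graphs")  # hypothetical route, for illustration
    assert response.status_code == 200
    # Every returned graph should belong to the mocked test user
    assert all(g["user_id"] == test_user_id for g in response.json())
```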
### Mocking External Services
```python
@@ -194,10 +153,10 @@ def test_external_api_call(mocker, snapshot):
"backend.services.external_api.call",
return_value=mock_response
)
response = client.post("/api/process")
assert response.status_code == 200
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response.json(), indent=2, sort_keys=True),
@@ -228,17 +187,6 @@ def test_external_api_call(mocker, snapshot):
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly
### 5. Fixtures
#### Global Fixtures (conftest.py)
Authentication fixtures are available globally from `conftest.py`:
- `mock_jwt_user` - Standard user authentication
- `mock_jwt_admin` - Admin user authentication
- `configured_snapshot` - Pre-configured snapshot fixture
#### Custom Fixtures
Create reusable fixtures for common test data:
```python
@@ -254,18 +202,9 @@ def test_create_user(sample_user, snapshot):
# ... test implementation
```
#### Test Isolation
All tests must use fixtures that ensure proper isolation:
- Authentication overrides are automatically cleaned up after each test
- Database connections are properly managed with cleanup
- Mock objects are reset between tests
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
@@ -277,19 +216,16 @@ Snapshot tests work in CI by:
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: Fix the code causing the difference
### Async Test Issues
- Ensure async functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions (see the sketch after this list)
- FastAPI TestClient handles async automatically
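A minimal sketch combining these points, reusing the `backend.services.external_api.call` target from the mocking example above (its return shape here is an assumption):
```python
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_async_external_call(mocker):
    # Patch the async dependency with AsyncMock so the code under test can await it
    mock_call = mocker.patch(
        "backend.services.external_api.call",
        new_callable=AsyncMock,
        return_value={"status": "ok"},
    )

    from backend.services import external_api

    result = await external_api.call()

    assert result == {"status": "ok"}
    mock_call.assert_awaited_once()
```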
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
@@ -298,4 +234,4 @@ Snapshot tests work in CI by:
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!
Remember: Good tests are as important as good code!

View File

@@ -1,6 +1,8 @@
import logging
from typing import Any, Optional
from pydantic import JsonValue
from backend.data.block import (
Block,
BlockCategory,
@@ -10,7 +12,7 @@ from backend.data.block import (
BlockType,
get_block,
)
from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.execution import ExecutionStatus
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -23,15 +25,12 @@ class AgentExecutorBlock(Block):
user_id: str = SchemaField(description="User ID")
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
agent_name: Optional[str] = SchemaField(
default=None, description="Name to display in the Builder UI"
)
inputs: BlockInput = SchemaField(description="Input data for the graph")
input_schema: dict = SchemaField(description="Input schema for the graph")
output_schema: dict = SchemaField(description="Output schema for the graph")
nodes_input_masks: Optional[NodesInputMasks] = SchemaField(
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = SchemaField(
default=None, hidden=True
)

View File

@@ -1,154 +0,0 @@
from enum import Enum
from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType
class GeminiImageModel(str, Enum):
NANO_BANANA = "google/nano-banana"
class OutputFormat(str, Enum):
JPG = "jpg"
PNG = "png"
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
class AIImageCustomizerBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
description="Replicate API key with permissions for Google Gemini image models",
)
prompt: str = SchemaField(
description="A text description of the image you want to generate",
title="Prompt",
)
model: GeminiImageModel = SchemaField(
description="The AI model to use for image generation and editing",
default=GeminiImageModel.NANO_BANANA,
title="Model",
)
images: list[MediaFileType] = SchemaField(
description="Optional list of input images to reference or modify",
default=[],
title="Input Images",
)
output_format: OutputFormat = SchemaField(
description="Format of the output image",
default=OutputFormat.PNG,
title="Output Format",
)
class Output(BlockSchema):
image_url: MediaFileType = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
id="d76bbe4c-930e-4894-8469-b66775511f71",
description=(
"Generate and edit custom images using Google's Nano-Banana model from Gemini 2.5. "
"Provide a prompt and optional reference images to create or modify images."
),
categories={BlockCategory.AI, BlockCategory.MULTIMEDIA},
input_schema=AIImageCustomizerBlock.Input,
output_schema=AIImageCustomizerBlock.Output,
test_input={
"prompt": "Make the scene more vibrant and colorful",
"model": GeminiImageModel.NANO_BANANA,
"images": [],
"output_format": OutputFormat.JPG,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("image_url", "https://replicate.delivery/generated-image.jpg"),
],
test_mock={
"run_model": lambda *args, **kwargs: MediaFileType(
"https://replicate.delivery/generated-image.jpg"
),
},
test_credentials=TEST_CREDENTIALS,
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
try:
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.value,
prompt=input_data.prompt,
images=input_data.images,
output_format=input_data.output_format.value,
)
yield "image_url", result
except Exception as e:
yield "error", str(e)
async def run_model(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
images: list[MediaFileType],
output_format: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params: dict = {
"prompt": prompt,
"output_format": output_format,
}
# Add images to input if provided (API expects "image_input" parameter)
if images:
input_params["image_input"] = [str(img) for img in images]
output: FileOutput | str = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
if isinstance(output, FileOutput):
return MediaFileType(output.url)
if isinstance(output, str):
return MediaFileType(output)
raise ValueError("No output received from the model")

View File

@@ -166,7 +166,7 @@ class AIMusicGeneratorBlock(Block):
output_format=input_data.output_format,
normalization_strategy=input_data.normalization_strategy,
)
if result and isinstance(result, str) and result.startswith("http"):
if result and result != "No output received":
yield "result", result
return
else:

View File

@@ -661,167 +661,6 @@ async def update_field(
#################################################################
async def get_table_schema(
credentials: Credentials,
base_id: str,
table_id_or_name: str,
) -> dict:
"""
Get the schema for a specific table, including all field definitions.
Args:
credentials: Airtable API credentials
base_id: The base ID
table_id_or_name: The table ID or name
Returns:
Dict containing table schema with fields information
"""
# First get all tables to find the right one
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
tables = data.get("tables", [])
# Find the matching table
for table in tables:
if table.get("id") == table_id_or_name or table.get("name") == table_id_or_name:
return table
raise ValueError(f"Table '{table_id_or_name}' not found in base '{base_id}'")
def get_empty_value_for_field(field_type: str) -> Any:
"""
Return the appropriate empty value for a given Airtable field type.
Args:
field_type: The Airtable field type
Returns:
The appropriate empty value for that field type
"""
# Fields that should be false when empty
if field_type == "checkbox":
return False
# Fields that should be empty arrays
if field_type in [
"multipleSelects",
"multipleRecordLinks",
"multipleAttachments",
"multipleLookupValues",
"multipleCollaborators",
]:
return []
# Fields that should be 0 when empty (numeric types)
if field_type in [
"number",
"percent",
"currency",
"rating",
"duration",
"count",
"autoNumber",
]:
return 0
# Fields that should be empty strings
if field_type in [
"singleLineText",
"multilineText",
"email",
"url",
"phoneNumber",
"richText",
"barcode",
]:
return ""
# Everything else gets null (dates, single selects, formulas, etc.)
return None
async def normalize_records(
records: list[dict],
table_schema: dict,
include_field_metadata: bool = False,
) -> dict:
"""
Normalize Airtable records to include all fields with proper empty values.
Args:
records: List of record objects from Airtable API
table_schema: Table schema containing field definitions
include_field_metadata: Whether to include field metadata in response
Returns:
Dict with normalized records and optionally field metadata
"""
fields = table_schema.get("fields", [])
# Normalize each record
normalized_records = []
for record in records:
normalized = {
"id": record.get("id"),
"createdTime": record.get("createdTime"),
"fields": {},
}
# Add existing fields
existing_fields = record.get("fields", {})
# Add all fields from schema, using empty values for missing ones
for field in fields:
field_name = field["name"]
field_type = field["type"]
if field_name in existing_fields:
# Field exists, use its value
normalized["fields"][field_name] = existing_fields[field_name]
else:
# Field is missing, add appropriate empty value
normalized["fields"][field_name] = get_empty_value_for_field(field_type)
normalized_records.append(normalized)
# Build result dictionary
if include_field_metadata:
field_metadata = {}
for field in fields:
metadata = {"type": field["type"], "id": field["id"]}
# Add type-specific metadata
options = field.get("options", {})
if field["type"] == "currency" and "symbol" in options:
metadata["symbol"] = options["symbol"]
metadata["precision"] = options.get("precision", 2)
elif field["type"] == "duration" and "durationFormat" in options:
metadata["format"] = options["durationFormat"]
elif field["type"] == "percent" and "precision" in options:
metadata["precision"] = options["precision"]
elif (
field["type"] in ["singleSelect", "multipleSelects"]
and "choices" in options
):
metadata["choices"] = [choice["name"] for choice in options["choices"]]
elif field["type"] == "rating" and "max" in options:
metadata["max"] = options["max"]
metadata["icon"] = options.get("icon", "star")
metadata["color"] = options.get("color", "yellowBright")
field_metadata[field["name"]] = metadata
return {"records": normalized_records, "field_metadata": field_metadata}
else:
return {"records": normalized_records}
async def list_records(
credentials: Credentials,
base_id: str,
@@ -1410,26 +1249,3 @@ async def list_bases(
)
return response.json()
async def get_base_tables(
credentials: Credentials,
base_id: str,
) -> list[dict]:
"""
Get all tables for a specific base.
Args:
credentials: Airtable API credentials
base_id: The ID of the base
Returns:
list[dict]: List of table objects with their schemas
"""
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
return data.get("tables", [])

View File

@@ -14,13 +14,13 @@ from backend.sdk import (
SchemaField,
)
from ._api import create_base, get_base_tables, list_bases
from ._api import create_base, list_bases
from ._config import airtable
class AirtableCreateBaseBlock(Block):
"""
Creates a new base in an Airtable workspace, or returns existing base if one with the same name exists.
Creates a new base in an Airtable workspace.
"""
class Input(BlockSchema):
@@ -31,10 +31,6 @@ class AirtableCreateBaseBlock(Block):
description="The workspace ID where the base will be created"
)
name: str = SchemaField(description="The name of the new base")
find_existing: bool = SchemaField(
description="If true, return existing base with same name instead of creating duplicate",
default=True,
)
tables: list[dict] = SchemaField(
description="At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties",
default=[
@@ -54,18 +50,14 @@ class AirtableCreateBaseBlock(Block):
)
class Output(BlockSchema):
base_id: str = SchemaField(description="The ID of the created or found base")
base_id: str = SchemaField(description="The ID of the created base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
was_created: bool = SchemaField(
description="True if a new base was created, False if existing was found",
default=True,
)
def __init__(self):
super().__init__(
id="f59b88a8-54ce-4676-a508-fd614b4e8dce",
description="Create or find a base in Airtable",
description="Create a new base in Airtable",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
@@ -74,31 +66,6 @@ class AirtableCreateBaseBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# If find_existing is true, check if a base with this name already exists
if input_data.find_existing:
# List all bases to check for existing one with same name
# Note: Airtable API doesn't have a direct search, so we need to list and filter
existing_bases = await list_bases(credentials)
for base in existing_bases.get("bases", []):
if base.get("name") == input_data.name:
# Base already exists, return it
base_id = base.get("id")
yield "base_id", base_id
yield "was_created", False
# Get the tables for this base
try:
tables = await get_base_tables(credentials, base_id)
yield "tables", tables
for table in tables:
yield "table", table
except Exception:
# If we can't get tables, return empty list
yield "tables", []
return
# No existing base found or find_existing is false, create new one
data = await create_base(
credentials,
input_data.workspace_id,
@@ -107,7 +74,6 @@ class AirtableCreateBaseBlock(Block):
)
yield "base_id", data.get("id", None)
yield "was_created", True
yield "tables", data.get("tables", [])
for table in data.get("tables", []):
yield "table", table

View File

@@ -2,7 +2,7 @@
Airtable record operation blocks.
"""
from typing import Optional, cast
from typing import Optional
from backend.sdk import (
APIKeyCredentials,
@@ -18,9 +18,7 @@ from ._api import (
create_record,
delete_multiple_records,
get_record,
get_table_schema,
list_records,
normalize_records,
update_multiple_records,
)
from ._config import airtable
@@ -56,24 +54,12 @@ class AirtableListRecordsBlock(Block):
return_fields: list[str] = SchemaField(
description="Specific fields to return (comma-separated)", default=[]
)
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
)
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -87,7 +73,6 @@ class AirtableListRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
data = await list_records(
credentials,
input_data.base_id,
@@ -103,33 +88,8 @@ class AirtableListRecordsBlock(Block):
fields=input_data.return_fields if input_data.return_fields else None,
)
records = data.get("records", [])
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
records,
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
yield "records", normalized_data["records"]
yield "offset", data.get("offset", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "records", records
yield "offset", data.get("offset", None)
yield "records", data.get("records", [])
yield "offset", data.get("offset", None)
class AirtableGetRecordBlock(Block):
@@ -144,23 +104,11 @@ class AirtableGetRecordBlock(Block):
base_id: str = SchemaField(description="The Airtable base ID")
table_id_or_name: str = SchemaField(description="Table ID or name")
record_id: str = SchemaField(description="The record ID to retrieve")
-normalize_output: bool = SchemaField(
-description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
-default=True,
-)
-include_field_metadata: bool = SchemaField(
-description="Include field type and configuration metadata (requires normalize_output=true)",
-default=False,
-)
class Output(BlockSchema):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
-field_metadata: Optional[dict] = SchemaField(
-description="Field type and configuration metadata (only when include_field_metadata=true)",
-default=None,
-)
def __init__(self):
super().__init__(
@@ -174,7 +122,6 @@ class AirtableGetRecordBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
record = await get_record(
credentials,
input_data.base_id,
@@ -182,34 +129,9 @@ class AirtableGetRecordBlock(Block):
input_data.record_id,
)
-# Normalize output if requested
-if input_data.normalize_output:
-# Fetch table schema
-table_schema = await get_table_schema(
-credentials, input_data.base_id, input_data.table_id_or_name
-)
-# Normalize the single record (wrap in list and unwrap result)
-normalized_data = await normalize_records(
-[record],
-table_schema,
-include_field_metadata=input_data.include_field_metadata,
-)
-normalized_record = normalized_data["records"][0]
-yield "id", normalized_record.get("id", None)
-yield "fields", normalized_record.get("fields", None)
-yield "created_time", normalized_record.get("createdTime", None)
-if (
-input_data.include_field_metadata
-and "field_metadata" in normalized_data
-):
-yield "field_metadata", normalized_data["field_metadata"]
-else:
-yield "id", record.get("id", None)
-yield "fields", record.get("fields", None)
-yield "created_time", record.get("createdTime", None)
+yield "id", record.get("id", None)
+yield "fields", record.get("fields", None)
+yield "created_time", record.get("createdTime", None)
class AirtableCreateRecordsBlock(Block):
@@ -226,10 +148,6 @@ class AirtableCreateRecordsBlock(Block):
records: list[dict] = SchemaField(
description="Array of records to create (each with 'fields' object)"
)
-skip_normalization: bool = SchemaField(
-description="Skip output normalization to get raw Airtable response (faster but may have missing fields)",
-default=False,
-)
typecast: bool = SchemaField(
description="Automatically convert string values to appropriate types",
default=False,
@@ -241,6 +159,7 @@ class AirtableCreateRecordsBlock(Block):
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of created record objects")
+details: dict = SchemaField(description="Details of the created records")
def __init__(self):
super().__init__(
@@ -254,7 +173,7 @@ class AirtableCreateRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# The create_record API expects records in a specific format
data = await create_record(
credentials,
input_data.base_id,
@@ -263,22 +182,11 @@ class AirtableCreateRecordsBlock(Block):
typecast=input_data.typecast if input_data.typecast else None,
return_fields_by_field_id=input_data.return_fields_by_field_id,
)
-result_records = cast(list[dict], data.get("records", []))
-# Normalize output unless explicitly disabled
-if not input_data.skip_normalization and result_records:
-# Fetch table schema
-table_schema = await get_table_schema(
-credentials, input_data.base_id, input_data.table_id_or_name
-)
-# Normalize the records
-normalized_data = await normalize_records(
-result_records, table_schema, include_field_metadata=False
-)
-result_records = normalized_data["records"]
-yield "records", result_records
+yield "records", data.get("records", [])
details = data.get("details", None)
if details:
yield "details", details
class AirtableUpdateRecordsBlock(Block):
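For context, the normalize_records helper dropped throughout this file used the table schema to give every record a complete, uniform field set, since Airtable omits empty cells from its responses. A rough sketch of the idea; the real helper's exact signature and return shape are assumptions here:

def normalize(records: list[dict], table_schema: dict) -> list[dict]:
    # Airtable drops empty cells from each record; reinstate them as None
    # so every record exposes the full set of schema fields.
    field_names = [field["name"] for field in table_schema.get("fields", [])]
    return [
        {
            **record,
            "fields": {
                name: record.get("fields", {}).get(name)
                for name in field_names
            },
        }
        for record in records
    ]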

View File

@@ -1,205 +0,0 @@
"""
Meeting BaaS API client module.
All API calls centralized for consistency and maintainability.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import Requests
class MeetingBaasAPI:
"""Client for Meeting BaaS API endpoints."""
BASE_URL = "https://api.meetingbaas.com"
def __init__(self, api_key: str):
"""Initialize API client with authentication key."""
self.api_key = api_key
self.headers = {"x-meeting-baas-api-key": api_key}
self.requests = Requests()
# Bot Management Endpoints
async def join_meeting(
self,
bot_name: str,
meeting_url: str,
reserved: bool = False,
bot_image: Optional[str] = None,
entry_message: Optional[str] = None,
start_time: Optional[int] = None,
speech_to_text: Optional[Dict[str, Any]] = None,
webhook_url: Optional[str] = None,
automatic_leave: Optional[Dict[str, Any]] = None,
extra: Optional[Dict[str, Any]] = None,
recording_mode: str = "speaker_view",
streaming: Optional[Dict[str, Any]] = None,
deduplication_key: Optional[str] = None,
zoom_sdk_id: Optional[str] = None,
zoom_sdk_pwd: Optional[str] = None,
) -> Dict[str, Any]:
"""
Deploy a bot to join and record a meeting.
POST /bots
"""
body = {
"bot_name": bot_name,
"meeting_url": meeting_url,
"reserved": reserved,
"recording_mode": recording_mode,
}
# Add optional fields if provided
if bot_image is not None:
body["bot_image"] = bot_image
if entry_message is not None:
body["entry_message"] = entry_message
if start_time is not None:
body["start_time"] = start_time
if speech_to_text is not None:
body["speech_to_text"] = speech_to_text
if webhook_url is not None:
body["webhook_url"] = webhook_url
if automatic_leave is not None:
body["automatic_leave"] = automatic_leave
if extra is not None:
body["extra"] = extra
if streaming is not None:
body["streaming"] = streaming
if deduplication_key is not None:
body["deduplication_key"] = deduplication_key
if zoom_sdk_id is not None:
body["zoom_sdk_id"] = zoom_sdk_id
if zoom_sdk_pwd is not None:
body["zoom_sdk_pwd"] = zoom_sdk_pwd
response = await self.requests.post(
f"{self.BASE_URL}/bots",
headers=self.headers,
json=body,
)
return response.json()
async def leave_meeting(self, bot_id: str) -> bool:
"""
Remove a bot from an ongoing meeting.
DELETE /bots/{uuid}
"""
response = await self.requests.delete(
f"{self.BASE_URL}/bots/{bot_id}",
headers=self.headers,
)
return response.status in [200, 204]
async def retranscribe(
self,
bot_uuid: str,
speech_to_text: Optional[Dict[str, Any]] = None,
webhook_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Re-run transcription on a bot's audio.
POST /bots/retranscribe
"""
body: Dict[str, Any] = {"bot_uuid": bot_uuid}
if speech_to_text is not None:
body["speech_to_text"] = speech_to_text
if webhook_url is not None:
body["webhook_url"] = webhook_url
response = await self.requests.post(
f"{self.BASE_URL}/bots/retranscribe",
headers=self.headers,
json=body,
)
if response.status == 202:
return {"accepted": True}
return response.json()
# Data Retrieval Endpoints
async def get_meeting_data(
self, bot_id: str, include_transcripts: bool = True
) -> Dict[str, Any]:
"""
Retrieve meeting data including recording and transcripts.
GET /bots/meeting_data
"""
params = {
"bot_id": bot_id,
"include_transcripts": str(include_transcripts).lower(),
}
response = await self.requests.get(
f"{self.BASE_URL}/bots/meeting_data",
headers=self.headers,
params=params,
)
return response.json()
async def get_screenshots(self, bot_id: str) -> List[Dict[str, Any]]:
"""
Retrieve screenshots captured during a meeting.
GET /bots/{uuid}/screenshots
"""
response = await self.requests.get(
f"{self.BASE_URL}/bots/{bot_id}/screenshots",
headers=self.headers,
)
result = response.json()
# Ensure we return a list
if isinstance(result, list):
return result
return []
async def delete_data(self, bot_id: str) -> bool:
"""
Delete a bot's recorded data.
POST /bots/{uuid}/delete_data
"""
response = await self.requests.post(
f"{self.BASE_URL}/bots/{bot_id}/delete_data",
headers=self.headers,
)
return response.status == 200
async def list_bots_with_metadata(
self,
limit: Optional[int] = None,
offset: Optional[int] = None,
sort_by: Optional[str] = None,
sort_order: Optional[str] = None,
filter_by: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
List bots with metadata including IDs, names, and meeting details.
GET /bots/bots_with_metadata
"""
params = {}
if limit is not None:
params["limit"] = limit
if offset is not None:
params["offset"] = offset
if sort_by is not None:
params["sort_by"] = sort_by
if sort_order is not None:
params["sort_order"] = sort_order
if filter_by is not None:
params.update(filter_by)
response = await self.requests.get(
f"{self.BASE_URL}/bots/bots_with_metadata",
headers=self.headers,
params=params,
)
return response.json()
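Taken together, the client supported a simple deploy-then-fetch lifecycle. A minimal usage sketch (the key and meeting URL are placeholders):

async def record_and_fetch(api_key: str, meeting_url: str) -> dict:
    api = MeetingBaasAPI(api_key)
    # Deploy a recording bot into the call.
    join = await api.join_meeting(bot_name="Notetaker", meeting_url=meeting_url)
    bot_id = join.get("bot_id", "")
    # Then, once the meeting has ended, pull the MP4 URL and transcript.
    return await api.get_meeting_data(bot_id, include_transcripts=True)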

View File

@@ -1,13 +0,0 @@
"""
Shared configuration for all Meeting BaaS blocks using the SDK pattern.
"""
from backend.sdk import BlockCostType, ProviderBuilder
# Configure the Meeting BaaS provider with API key authentication
baas = (
ProviderBuilder("baas")
.with_api_key("MEETING_BAAS_API_KEY", "Meeting BaaS API Key")
.with_base_cost(5, BlockCostType.RUN) # Higher cost for meeting recording service
.build()
)

View File

@@ -1,217 +0,0 @@
"""
Meeting BaaS bot (recording) blocks.
"""
from typing import Optional
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
)
from ._api import MeetingBaasAPI
from ._config import baas
class BaasBotJoinMeetingBlock(Block):
"""
Deploy a bot immediately or at a scheduled start_time to join and record a meeting.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
meeting_url: str = SchemaField(
description="The URL of the meeting the bot should join"
)
bot_name: str = SchemaField(
description="Display name for the bot in the meeting"
)
bot_image: str = SchemaField(
description="URL to an image for the bot's avatar (16:9 ratio recommended)",
default="",
)
entry_message: str = SchemaField(
description="Chat message the bot will post upon entry", default=""
)
reserved: bool = SchemaField(
description="Use a reserved bot slot (joins 4 min before meeting)",
default=False,
)
start_time: Optional[int] = SchemaField(
description="Unix timestamp (ms) when bot should join", default=None
)
webhook_url: str | None = SchemaField(
description="URL to receive webhook events for this bot", default=None
)
timeouts: dict = SchemaField(
description="Automatic leave timeouts configuration", default={}
)
extra: dict = SchemaField(
description="Custom metadata to attach to the bot", default={}
)
class Output(BlockSchema):
bot_id: str = SchemaField(description="UUID of the deployed bot")
join_response: dict = SchemaField(
description="Full response from join operation"
)
def __init__(self):
super().__init__(
id="377d1a6a-a99b-46cf-9af3-1d1b12758e04",
description="Deploy a bot to join and record a meeting",
categories={BlockCategory.COMMUNICATION},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Call API with all parameters
data = await api.join_meeting(
bot_name=input_data.bot_name,
meeting_url=input_data.meeting_url,
reserved=input_data.reserved,
bot_image=input_data.bot_image if input_data.bot_image else None,
entry_message=(
input_data.entry_message if input_data.entry_message else None
),
start_time=input_data.start_time,
speech_to_text={"provider": "Default"},
webhook_url=input_data.webhook_url if input_data.webhook_url else None,
automatic_leave=input_data.timeouts if input_data.timeouts else None,
extra=input_data.extra if input_data.extra else None,
)
yield "bot_id", data.get("bot_id", "")
yield "join_response", data
class BaasBotLeaveMeetingBlock(Block):
"""
Force the bot to exit the call.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot to remove from meeting")
class Output(BlockSchema):
left: bool = SchemaField(description="Whether the bot successfully left")
def __init__(self):
super().__init__(
id="bf77d128-8b25-4280-b5c7-2d553ba7e482",
description="Remove a bot from an ongoing meeting",
categories={BlockCategory.COMMUNICATION},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Leave meeting
left = await api.leave_meeting(input_data.bot_id)
yield "left", left
class BaasBotFetchMeetingDataBlock(Block):
"""
Pull MP4 URL, transcript & metadata for a completed meeting.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to fetch")
include_transcripts: bool = SchemaField(
description="Include transcript data in response", default=True
)
class Output(BlockSchema):
mp4_url: str = SchemaField(
description="URL to download the meeting recording (time-limited)"
)
transcript: list = SchemaField(description="Meeting transcript data")
metadata: dict = SchemaField(description="Meeting metadata and bot information")
def __init__(self):
super().__init__(
id="ea7c1309-303c-4da1-893f-89c0e9d64e78",
description="Retrieve recorded meeting data",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Fetch meeting data
data = await api.get_meeting_data(
bot_id=input_data.bot_id,
include_transcripts=input_data.include_transcripts,
)
yield "mp4_url", data.get("mp4", "")
yield "transcript", data.get("bot_data", {}).get("transcripts", [])
yield "metadata", data.get("bot_data", {}).get("bot", {})
class BaasBotDeleteRecordingBlock(Block):
"""
Purge MP4 + transcript data for privacy or storage management.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to delete")
class Output(BlockSchema):
deleted: bool = SchemaField(
description="Whether the data was successfully deleted"
)
def __init__(self):
super().__init__(
id="bf8d1aa6-42d8-4944-b6bd-6bac554c0d3b",
description="Permanently delete a meeting's recorded data",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
api_key = credentials.api_key.get_secret_value()
api = MeetingBaasAPI(api_key)
# Delete recording data
deleted = await api.delete_data(input_data.bot_id)
yield "deleted", deleted

View File

@@ -1,3 +0,0 @@
from .text_overlay import BannerbearTextOverlayBlock
__all__ = ["BannerbearTextOverlayBlock"]

View File

@@ -1,8 +0,0 @@
from backend.sdk import BlockCostType, ProviderBuilder
bannerbear = (
ProviderBuilder("bannerbear")
.with_api_key("BANNERBEAR_API_KEY", "Bannerbear API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -1,239 +0,0 @@
import uuid
from typing import Any, Dict, List
from pydantic import SecretStr
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import bannerbear
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="bannerbear",
api_key=SecretStr("mock-bannerbear-api-key"),
title="Mock Bannerbear API Key",
)
class TextModification(BlockSchema):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
text: str = SchemaField(description="The text content to add to this layer")
color: str = SchemaField(
description="Hex color code for the text (e.g., '#FF0000')",
default="",
advanced=True,
)
font_family: str = SchemaField(
description="Font family to use for the text",
default="",
advanced=True,
)
font_size: int = SchemaField(
description="Font size in pixels",
default=0,
advanced=True,
)
font_weight: str = SchemaField(
description="Font weight (e.g., 'bold', 'normal')",
default="",
advanced=True,
)
text_align: str = SchemaField(
description="Text alignment (left, center, right)",
default="",
advanced=True,
)
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
template_id: str = SchemaField(
description="The unique ID of your Bannerbear template"
)
project_id: str = SchemaField(
description="Optional: Project ID (required when using Master API Key)",
default="",
advanced=True,
)
text_modifications: List[TextModification] = SchemaField(
description="List of text layers to modify in the template"
)
image_url: str = SchemaField(
description="Optional: URL of an image to use in the template",
default="",
advanced=True,
)
image_layer_name: str = SchemaField(
description="Optional: Name of the image layer in the template",
default="photo",
advanced=True,
)
webhook_url: str = SchemaField(
description="Optional: URL to receive webhook notification when image is ready",
default="",
advanced=True,
)
metadata: str = SchemaField(
description="Optional: Custom metadata to attach to the image",
default="",
advanced=True,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
image_url: str = SchemaField(
description="URL of the generated image (if synchronous) or placeholder"
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="c7d3a5c2-05fc-450e-8dce-3b0e04626009",
description="Add text overlay to images using Bannerbear templates. Perfect for creating social media graphics, marketing materials, and dynamic image content.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"template_id": "jJWBKNELpQPvbX5R93Gk",
"text_modifications": [
{
"name": "headline",
"text": "Amazing Product Launch!",
"color": "#FF0000",
},
{
"name": "subtitle",
"text": "50% OFF Today Only",
},
],
"credentials": {
"provider": "bannerbear",
"id": str(uuid.uuid4()),
"type": "api_key",
},
},
test_output=[
("success", True),
("image_url", "https://cdn.bannerbear.com/test-image.jpg"),
("uid", "test-uid-123"),
("status", "completed"),
],
test_mock={
"_make_api_request": lambda *args, **kwargs: {
"uid": "test-uid-123",
"status": "completed",
"image_url": "https://cdn.bannerbear.com/test-image.jpg",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def _make_api_request(self, payload: dict, api_key: str) -> dict:
"""Make the actual API request to Bannerbear. This is separated for easy mocking in tests."""
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
response = await Requests().post(
"https://sync.api.bannerbear.com/v2/images",
headers=headers,
json=payload,
)
if response.status in [200, 201, 202]:
return response.json()
else:
error_msg = f"API request failed with status {response.status}"
if response.text:
try:
error_data = response.json()
error_msg = (
f"{error_msg}: {error_data.get('message', response.text)}"
)
except Exception:
error_msg = f"{error_msg}: {response.text}"
raise Exception(error_msg)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Build the modifications array
modifications = []
# Add text modifications
for text_mod in input_data.text_modifications:
mod_data: Dict[str, Any] = {
"name": text_mod.name,
"text": text_mod.text,
}
# Add optional text styling parameters only if they have values
if text_mod.color and text_mod.color.strip():
mod_data["color"] = text_mod.color
if text_mod.font_family and text_mod.font_family.strip():
mod_data["font_family"] = text_mod.font_family
if text_mod.font_size and text_mod.font_size > 0:
mod_data["font_size"] = text_mod.font_size
if text_mod.font_weight and text_mod.font_weight.strip():
mod_data["font_weight"] = text_mod.font_weight
if text_mod.text_align and text_mod.text_align.strip():
mod_data["text_align"] = text_mod.text_align
modifications.append(mod_data)
# Add image modification if provided and not empty
if input_data.image_url and input_data.image_url.strip():
modifications.append(
{
"name": input_data.image_layer_name,
"image_url": input_data.image_url,
}
)
# Build the request payload - only include non-empty optional fields
payload = {
"template": input_data.template_id,
"modifications": modifications,
}
# Add project_id if provided (required for Master API keys)
if input_data.project_id and input_data.project_id.strip():
payload["project_id"] = input_data.project_id
if input_data.webhook_url and input_data.webhook_url.strip():
payload["webhook_url"] = input_data.webhook_url
if input_data.metadata and input_data.metadata.strip():
payload["metadata"] = input_data.metadata
# Make the API request using the private method
data = await self._make_api_request(
payload, credentials.api_key.get_secret_value()
)
# Synchronous request - image should be ready
yield "success", True
yield "image_url", data.get("image_url", "")
yield "uid", data.get("uid", "")
yield "status", data.get("status", "completed")

View File

@@ -1,178 +0,0 @@
"""
DataForSEO API client with async support using the SDK patterns.
"""
import base64
from typing import Any, Dict, List, Optional
from backend.sdk import Requests, UserPasswordCredentials
class DataForSeoClient:
"""Client for the DataForSEO API using async requests."""
API_URL = "https://api.dataforseo.com"
def __init__(self, credentials: UserPasswordCredentials):
self.credentials = credentials
self.requests = Requests(
trusted_origins=["https://api.dataforseo.com"],
raise_for_status=False,
)
def _get_headers(self) -> Dict[str, str]:
"""Generate the authorization header using Basic Auth."""
username = self.credentials.username.get_secret_value()
password = self.credentials.password.get_secret_value()
credentials_str = f"{username}:{password}"
encoded = base64.b64encode(credentials_str.encode("ascii")).decode("ascii")
return {
"Authorization": f"Basic {encoded}",
"Content-Type": "application/json",
}
async def keyword_suggestions(
self,
keyword: str,
location_code: Optional[int] = None,
language_code: Optional[str] = None,
include_seed_keyword: bool = True,
include_serp_info: bool = False,
include_clickstream_data: bool = False,
limit: int = 100,
) -> List[Dict[str, Any]]:
"""
Get keyword suggestions from DataForSEO Labs.
Args:
keyword: Seed keyword
location_code: Location code for targeting
language_code: Language code (e.g., "en")
include_seed_keyword: Include seed keyword in results
include_serp_info: Include SERP data
include_clickstream_data: Include clickstream metrics
limit: Maximum number of results (up to 3000)
Returns:
API response with keyword suggestions
"""
endpoint = f"{self.API_URL}/v3/dataforseo_labs/google/keyword_suggestions/live"
# Build payload only with non-None values to avoid sending null fields
task_data: dict[str, Any] = {
"keyword": keyword,
}
if location_code is not None:
task_data["location_code"] = location_code
if language_code is not None:
task_data["language_code"] = language_code
if include_seed_keyword is not None:
task_data["include_seed_keyword"] = include_seed_keyword
if include_serp_info is not None:
task_data["include_serp_info"] = include_serp_info
if include_clickstream_data is not None:
task_data["include_clickstream_data"] = include_clickstream_data
if limit is not None:
task_data["limit"] = limit
payload = [task_data]
response = await self.requests.post(
endpoint,
headers=self._get_headers(),
json=payload,
)
data = response.json()
# Check for API errors
if response.status != 200:
error_message = data.get("status_message", "Unknown error")
raise Exception(
f"DataForSEO API error ({response.status}): {error_message}"
)
# Extract the results from the response
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")
raise Exception(f"DataForSEO task error: {error_msg}")
return []
async def related_keywords(
self,
keyword: str,
location_code: Optional[int] = None,
language_code: Optional[str] = None,
include_seed_keyword: bool = True,
include_serp_info: bool = False,
include_clickstream_data: bool = False,
limit: int = 100,
) -> List[Dict[str, Any]]:
"""
Get related keywords from DataForSEO Labs.
Args:
keyword: Seed keyword
location_code: Location code for targeting
language_code: Language code (e.g., "en")
include_seed_keyword: Include seed keyword in results
include_serp_info: Include SERP data
include_clickstream_data: Include clickstream metrics
limit: Maximum number of results (up to 3000)
Returns:
API response with related keywords
"""
endpoint = f"{self.API_URL}/v3/dataforseo_labs/google/related_keywords/live"
# Build payload only with non-None values to avoid sending null fields
task_data: dict[str, Any] = {
"keyword": keyword,
}
if location_code is not None:
task_data["location_code"] = location_code
if language_code is not None:
task_data["language_code"] = language_code
if include_seed_keyword is not None:
task_data["include_seed_keyword"] = include_seed_keyword
if include_serp_info is not None:
task_data["include_serp_info"] = include_serp_info
if include_clickstream_data is not None:
task_data["include_clickstream_data"] = include_clickstream_data
if limit is not None:
task_data["limit"] = limit
payload = [task_data]
response = await self.requests.post(
endpoint,
headers=self._get_headers(),
json=payload,
)
data = response.json()
# Check for API errors
if response.status != 200:
error_message = data.get("status_message", "Unknown error")
raise Exception(
f"DataForSEO API error ({response.status}): {error_message}"
)
# Extract the results from the response
if data.get("tasks") and len(data["tasks"]) > 0:
task = data["tasks"][0]
if task.get("status_code") == 20000: # Success code
return task.get("result", [])
else:
error_msg = task.get("status_message", "Task failed")
raise Exception(f"DataForSEO task error: {error_msg}")
return []
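A minimal end-to-end use of the client, mirroring how the blocks that follow consume the result list (credentials is assumed to be a valid UserPasswordCredentials):

async def top_keywords(credentials: UserPasswordCredentials) -> list[str]:
    client = DataForSeoClient(credentials)
    results = await client.keyword_suggestions(
        "digital marketing", location_code=2840, language_code="en", limit=10
    )
    # Each task result carries an "items" list of keyword entries.
    items = results[0].get("items", []) if results else []
    return [item.get("keyword", "") for item in items]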

View File

@@ -1,17 +0,0 @@
"""
Configuration for all DataForSEO blocks using the new SDK pattern.
"""
from backend.sdk import BlockCostType, ProviderBuilder
# Build the DataForSEO provider with username/password authentication
dataforseo = (
ProviderBuilder("dataforseo")
.with_user_password(
username_env_var="DATAFORSEO_USERNAME",
password_env_var="DATAFORSEO_PASSWORD",
title="DataForSEO Credentials",
)
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -1,273 +0,0 @@
"""
DataForSEO Google Keyword Suggestions block.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
)
from ._api import DataForSeoClient
from ._config import dataforseo
class KeywordSuggestion(BlockSchema):
"""Schema for a keyword suggestion result."""
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="data from SERP for each keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
class DataForSeoKeywordSuggestionsBlock(Block):
"""Block for getting keyword suggestions from DataForSEO Labs."""
class Input(BlockSchema):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
keyword: str = SchemaField(description="Seed keyword to get suggestions for")
location_code: Optional[int] = SchemaField(
description="Location code for targeting (e.g., 2840 for USA)",
default=2840, # USA
)
language_code: Optional[str] = SchemaField(
description="Language code (e.g., 'en' for English)",
default="en",
)
include_seed_keyword: bool = SchemaField(
description="Include the seed keyword in results",
default=True,
)
include_serp_info: bool = SchemaField(
description="Include SERP information",
default=False,
)
include_clickstream_data: bool = SchemaField(
description="Include clickstream metrics",
default=False,
)
limit: int = SchemaField(
description="Maximum number of results (up to 3000)",
default=100,
ge=1,
le=3000,
)
class Output(BlockSchema):
suggestions: List[KeywordSuggestion] = SchemaField(
description="List of keyword suggestions with metrics"
)
suggestion: KeywordSuggestion = SchemaField(
description="A single keyword suggestion with metrics"
)
total_count: int = SchemaField(
description="Total number of suggestions returned"
)
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
def __init__(self):
super().__init__(
id="73c3e7c4-2b3f-4e9f-9e3e-8f7a5c3e2d45",
description="Get keyword suggestions from DataForSEO Labs Google API",
categories={BlockCategory.SEARCH, BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"credentials": dataforseo.get_test_credentials().model_dump(),
"keyword": "digital marketing",
"location_code": 2840,
"language_code": "en",
"limit": 1,
},
test_credentials=dataforseo.get_test_credentials(),
test_output=[
(
"suggestion",
lambda x: hasattr(x, "keyword")
and x.keyword == "digital marketing strategy",
),
("suggestions", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),
("seed_keyword", "digital marketing"),
],
test_mock={
"_fetch_keyword_suggestions": lambda *args, **kwargs: [
{
"items": [
{
"keyword": "digital marketing strategy",
"keyword_info": {
"search_volume": 10000,
"competition": 0.5,
"cpc": 2.5,
},
"keyword_properties": {
"keyword_difficulty": 50,
},
}
]
}
]
},
)
async def _fetch_keyword_suggestions(
self,
client: DataForSeoClient,
input_data: Input,
) -> Any:
"""Private method to fetch keyword suggestions - can be mocked for testing."""
return await client.keyword_suggestions(
keyword=input_data.keyword,
location_code=input_data.location_code,
language_code=input_data.language_code,
include_seed_keyword=input_data.include_seed_keyword,
include_serp_info=input_data.include_serp_info,
include_clickstream_data=input_data.include_clickstream_data,
limit=input_data.limit,
)
async def run(
self,
input_data: Input,
*,
credentials: UserPasswordCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the keyword suggestions query."""
client = DataForSeoClient(credentials)
results = await self._fetch_keyword_suggestions(client, input_data)
# Process and format the results
suggestions = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Create the KeywordSuggestion object
suggestion = KeywordSuggestion(
keyword=item.get("keyword", ""),
search_volume=item.get("keyword_info", {}).get("search_volume"),
competition=item.get("keyword_info", {}).get("competition"),
cpc=item.get("keyword_info", {}).get("cpc"),
keyword_difficulty=item.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
item.get("serp_info") if input_data.include_serp_info else None
),
clickstream_data=(
item.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "suggestion", suggestion
suggestions.append(suggestion)
yield "suggestions", suggestions
yield "total_count", len(suggestions)
yield "seed_keyword", input_data.keyword
class KeywordSuggestionExtractorBlock(Block):
"""Extracts individual fields from a KeywordSuggestion object."""
class Input(BlockSchema):
suggestion: KeywordSuggestion = SchemaField(
description="The keyword suggestion object to extract fields from"
)
class Output(BlockSchema):
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="data from SERP for each keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
def __init__(self):
super().__init__(
id="4193cb94-677c-48b0-9eec-6ac72fffd0f2",
description="Extract individual fields from a KeywordSuggestion object",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"suggestion": KeywordSuggestion(
keyword="test keyword",
search_volume=1000,
competition=0.5,
cpc=2.5,
keyword_difficulty=60,
).model_dump()
},
test_output=[
("keyword", "test keyword"),
("search_volume", 1000),
("competition", 0.5),
("cpc", 2.5),
("keyword_difficulty", 60),
("serp_info", None),
("clickstream_data", None),
],
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
"""Extract fields from the KeywordSuggestion object."""
suggestion = input_data.suggestion
yield "keyword", suggestion.keyword
yield "search_volume", suggestion.search_volume
yield "competition", suggestion.competition
yield "cpc", suggestion.cpc
yield "keyword_difficulty", suggestion.keyword_difficulty
yield "serp_info", suggestion.serp_info
yield "clickstream_data", suggestion.clickstream_data

View File

@@ -1,283 +0,0 @@
"""
DataForSEO Google Related Keywords block.
"""
from typing import Any, Dict, List, Optional
from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
)
from ._api import DataForSeoClient
from ._config import dataforseo
class RelatedKeyword(BlockSchema):
"""Schema for a related keyword result."""
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="SERP data for the keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
class DataForSeoRelatedKeywordsBlock(Block):
"""Block for getting related keywords from DataForSEO Labs."""
class Input(BlockSchema):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
keyword: str = SchemaField(
description="Seed keyword to find related keywords for"
)
location_code: Optional[int] = SchemaField(
description="Location code for targeting (e.g., 2840 for USA)",
default=2840, # USA
)
language_code: Optional[str] = SchemaField(
description="Language code (e.g., 'en' for English)",
default="en",
)
include_seed_keyword: bool = SchemaField(
description="Include the seed keyword in results",
default=True,
)
include_serp_info: bool = SchemaField(
description="Include SERP information",
default=False,
)
include_clickstream_data: bool = SchemaField(
description="Include clickstream metrics",
default=False,
)
limit: int = SchemaField(
description="Maximum number of results (up to 3000)",
default=100,
ge=1,
le=3000,
)
class Output(BlockSchema):
related_keywords: List[RelatedKeyword] = SchemaField(
description="List of related keywords with metrics"
)
related_keyword: RelatedKeyword = SchemaField(
description="A related keyword with metrics"
)
total_count: int = SchemaField(
description="Total number of related keywords returned"
)
seed_keyword: str = SchemaField(
description="The seed keyword used for the query"
)
def __init__(self):
super().__init__(
id="8f2e4d6a-1b3c-4a5e-9d7f-2c8e6a4b3f1d",
description="Get related keywords from DataForSEO Labs Google API",
categories={BlockCategory.SEARCH, BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"credentials": dataforseo.get_test_credentials().model_dump(),
"keyword": "content marketing",
"location_code": 2840,
"language_code": "en",
"limit": 1,
},
test_credentials=dataforseo.get_test_credentials(),
test_output=[
(
"related_keyword",
lambda x: hasattr(x, "keyword") and x.keyword == "content strategy",
),
("related_keywords", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),
("seed_keyword", "content marketing"),
],
test_mock={
"_fetch_related_keywords": lambda *args, **kwargs: [
{
"items": [
{
"keyword_data": {
"keyword": "content strategy",
"keyword_info": {
"search_volume": 8000,
"competition": 0.4,
"cpc": 3.0,
},
"keyword_properties": {
"keyword_difficulty": 45,
},
}
}
]
}
]
},
)
async def _fetch_related_keywords(
self,
client: DataForSeoClient,
input_data: Input,
) -> Any:
"""Private method to fetch related keywords - can be mocked for testing."""
return await client.related_keywords(
keyword=input_data.keyword,
location_code=input_data.location_code,
language_code=input_data.language_code,
include_seed_keyword=input_data.include_seed_keyword,
include_serp_info=input_data.include_serp_info,
include_clickstream_data=input_data.include_clickstream_data,
limit=input_data.limit,
)
async def run(
self,
input_data: Input,
*,
credentials: UserPasswordCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the related keywords query."""
client = DataForSeoClient(credentials)
results = await self._fetch_related_keywords(client, input_data)
# Process and format the results
related_keywords = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Extract keyword_data from the item
keyword_data = item.get("keyword_data", {})
# Create the RelatedKeyword object
keyword = RelatedKeyword(
keyword=keyword_data.get("keyword", ""),
search_volume=keyword_data.get("keyword_info", {}).get(
"search_volume"
),
competition=keyword_data.get("keyword_info", {}).get("competition"),
cpc=keyword_data.get("keyword_info", {}).get("cpc"),
keyword_difficulty=keyword_data.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
keyword_data.get("serp_info")
if input_data.include_serp_info
else None
),
clickstream_data=(
keyword_data.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "related_keyword", keyword
related_keywords.append(keyword)
yield "related_keywords", related_keywords
yield "total_count", len(related_keywords)
yield "seed_keyword", input_data.keyword
class RelatedKeywordExtractorBlock(Block):
"""Extracts individual fields from a RelatedKeyword object."""
class Input(BlockSchema):
related_keyword: RelatedKeyword = SchemaField(
description="The related keyword object to extract fields from"
)
class Output(BlockSchema):
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None
)
competition: Optional[float] = SchemaField(
description="Competition level (0-1)", default=None
)
cpc: Optional[float] = SchemaField(
description="Cost per click in USD", default=None
)
keyword_difficulty: Optional[int] = SchemaField(
description="Keyword difficulty score", default=None
)
serp_info: Optional[Dict[str, Any]] = SchemaField(
description="SERP data for the keyword", default=None
)
clickstream_data: Optional[Dict[str, Any]] = SchemaField(
description="Clickstream data metrics", default=None
)
def __init__(self):
super().__init__(
id="98342061-09d2-4952-bf77-0761fc8cc9a8",
description="Extract individual fields from a RelatedKeyword object",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"related_keyword": RelatedKeyword(
keyword="test related keyword",
search_volume=800,
competition=0.4,
cpc=3.0,
keyword_difficulty=55,
).model_dump()
},
test_output=[
("keyword", "test related keyword"),
("search_volume", 800),
("competition", 0.4),
("cpc", 3.0),
("keyword_difficulty", 55),
("serp_info", None),
("clickstream_data", None),
],
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
"""Extract fields from the RelatedKeyword object."""
related_keyword = input_data.related_keyword
yield "keyword", related_keyword.keyword
yield "search_volume", related_keyword.search_volume
yield "competition", related_keyword.competition
yield "cpc", related_keyword.cpc
yield "keyword_difficulty", related_keyword.keyword_difficulty
yield "serp_info", related_keyword.serp_info
yield "clickstream_data", related_keyword.clickstream_data

View File

@@ -2,29 +2,45 @@ import base64
import io
import mimetypes
from pathlib import Path
-from typing import Any
+from typing import Any, Literal
import aiohttp
import discord
+from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from backend.data.model import APIKeyCredentials, SchemaField
+from backend.data.model import (
+APIKeyCredentials,
+CredentialsField,
+CredentialsMetaInput,
+SchemaField,
+)
from backend.integrations.providers import ProviderName
from backend.util.file import store_media_file
from backend.util.type import MediaFileType
-from ._auth import (
-TEST_BOT_CREDENTIALS,
-TEST_BOT_CREDENTIALS_INPUT,
-DiscordBotCredentialsField,
-DiscordBotCredentialsInput,
-)
+DiscordCredentials = CredentialsMetaInput[
+Literal[ProviderName.DISCORD], Literal["api_key"]
+]
-# Keep backward compatibility alias
-DiscordCredentials = DiscordBotCredentialsInput
-DiscordCredentialsField = DiscordBotCredentialsField
-TEST_CREDENTIALS = TEST_BOT_CREDENTIALS
-TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT
+def DiscordCredentialsField() -> DiscordCredentials:
+return CredentialsField(description="Discord bot token")
+TEST_CREDENTIALS = APIKeyCredentials(
+id="01234567-89ab-cdef-0123-456789abcdef",
+provider="discord",
+api_key=SecretStr("test_api_key"),
+title="Mock Discord API key",
+expires_at=None,
+)
+TEST_CREDENTIALS_INPUT = {
+"provider": TEST_CREDENTIALS.provider,
+"id": TEST_CREDENTIALS.id,
+"type": TEST_CREDENTIALS.type,
+"title": TEST_CREDENTIALS.title,
+}
class ReadDiscordMessagesBlock(Block):

View File

@@ -1,117 +0,0 @@
"""
Discord API helper functions for making authenticated requests.
"""
import logging
from typing import Optional
from pydantic import BaseModel
from backend.data.model import OAuth2Credentials
from backend.util.request import Requests
logger = logging.getLogger(__name__)
class DiscordAPIException(Exception):
"""Exception raised for Discord API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class DiscordOAuthUser(BaseModel):
"""Model for Discord OAuth user response."""
user_id: str
username: str
avatar_url: str
banner: Optional[str] = None
accent_color: Optional[int] = None
def get_api(credentials: OAuth2Credentials) -> Requests:
"""
Create a Requests instance configured for Discord API calls with OAuth2 credentials.
Args:
credentials: The OAuth2 credentials containing the access token.
Returns:
A configured Requests instance for Discord API calls.
"""
return Requests(
trusted_origins=[],
extra_headers={
"Authorization": f"Bearer {credentials.access_token.get_secret_value()}",
"Content-Type": "application/json",
},
raise_for_status=False,
)
async def get_current_user(credentials: OAuth2Credentials) -> DiscordOAuthUser:
"""
Fetch the current user's information using Discord OAuth2 API.
Reference: https://discord.com/developers/docs/resources/user#get-current-user
Args:
credentials: The OAuth2 credentials.
Returns:
A model containing user data with avatar URL.
Raises:
DiscordAPIException: If the API request fails.
"""
api = get_api(credentials)
response = await api.get("https://discord.com/api/oauth2/@me")
if not response.ok:
error_text = response.text()
raise DiscordAPIException(
f"Failed to fetch user info: {response.status} - {error_text}",
response.status,
)
data = response.json()
logger.info(f"Discord OAuth2 API Response: {data}")
# The /api/oauth2/@me endpoint returns a user object nested in the response
user_info = data.get("user", {})
logger.info(f"User info extracted: {user_info}")
# Build avatar URL
user_id = user_info.get("id")
avatar_hash = user_info.get("avatar")
if avatar_hash:
# Custom avatar
avatar_ext = "gif" if avatar_hash.startswith("a_") else "png"
avatar_url = (
f"https://cdn.discordapp.com/avatars/{user_id}/{avatar_hash}.{avatar_ext}"
)
else:
# Default avatar based on discriminator or user ID
discriminator = user_info.get("discriminator", "0")
if discriminator == "0":
# New username system - use user ID for default avatar
default_avatar_index = (int(user_id) >> 22) % 6
else:
# Legacy discriminator system
default_avatar_index = int(discriminator) % 5
avatar_url = (
f"https://cdn.discordapp.com/embed/avatars/{default_avatar_index}.png"
)
result = DiscordOAuthUser(
user_id=user_id,
username=user_info.get("username", ""),
avatar_url=avatar_url,
banner=user_info.get("banner"),
accent_color=user_info.get("accent_color"),
)
logger.info(f"Returning user data: {result.model_dump()}")
return result
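A worked example of the avatar branches above (the ID and hash values are illustrative):

user_id = "123456789012345678"
avatar_hash = "a_1269e74af4df7417d13378739e2c5c0b"
# Animated avatars are prefixed with "a_" and served as GIFs.
ext = "gif" if avatar_hash.startswith("a_") else "png"  # -> "gif"
# New-username-system accounts (discriminator "0") derive their default
# avatar from the snowflake's timestamp bits: (id >> 22) % 6.
default_index = (int(user_id) >> 22) % 6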

View File

@@ -1,74 +0,0 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
OAuth2Credentials,
)
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
secrets = Secrets()
DISCORD_OAUTH_IS_CONFIGURED = bool(
secrets.discord_client_id and secrets.discord_client_secret
)
# Bot token credentials (existing)
DiscordBotCredentials = APIKeyCredentials
DiscordBotCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.DISCORD], Literal["api_key"]
]
# OAuth2 credentials (new)
DiscordOAuthCredentials = OAuth2Credentials
DiscordOAuthCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.DISCORD], Literal["oauth2"]
]
def DiscordBotCredentialsField() -> DiscordBotCredentialsInput:
"""Creates a Discord bot token credentials field."""
return CredentialsField(description="Discord bot token")
def DiscordOAuthCredentialsField(scopes: list[str]) -> DiscordOAuthCredentialsInput:
"""Creates a Discord OAuth2 credentials field."""
return CredentialsField(
description="Discord OAuth2 credentials",
required_scopes=set(scopes) | {"identify"}, # Basic user info scope
)
# Test credentials for bot tokens
TEST_BOT_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="discord",
api_key=SecretStr("test_api_key"),
title="Mock Discord API key",
expires_at=None,
)
TEST_BOT_CREDENTIALS_INPUT = {
"provider": TEST_BOT_CREDENTIALS.provider,
"id": TEST_BOT_CREDENTIALS.id,
"type": TEST_BOT_CREDENTIALS.type,
"title": TEST_BOT_CREDENTIALS.type,
}
# Test credentials for OAuth2
TEST_OAUTH_CREDENTIALS = OAuth2Credentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="discord",
access_token=SecretStr("test_access_token"),
title="Mock Discord OAuth",
scopes=["identify"],
username="testuser",
)
TEST_OAUTH_CREDENTIALS_INPUT = {
"provider": TEST_OAUTH_CREDENTIALS.provider,
"id": TEST_OAUTH_CREDENTIALS.id,
"type": TEST_OAUTH_CREDENTIALS.type,
"title": TEST_OAUTH_CREDENTIALS.type,
}
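Note that DiscordOAuthCredentialsField always folds the baseline scope into required_scopes, so callers cannot accidentally drop it. A small illustration:

field = DiscordOAuthCredentialsField(["guilds"])
# required_scopes is now {"guilds", "identify"}: "identify" is unioned in
# even though the caller only asked for "guilds".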

View File

@@ -1,99 +0,0 @@
"""
Discord OAuth-based blocks.
"""
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import DiscordOAuthUser, get_current_user
from ._auth import (
DISCORD_OAUTH_IS_CONFIGURED,
TEST_OAUTH_CREDENTIALS,
TEST_OAUTH_CREDENTIALS_INPUT,
DiscordOAuthCredentialsField,
DiscordOAuthCredentialsInput,
)
class DiscordGetCurrentUserBlock(Block):
"""
Gets information about the currently authenticated Discord user using OAuth2.
This block requires Discord OAuth2 credentials (not bot tokens).
"""
class Input(BlockSchema):
credentials: DiscordOAuthCredentialsInput = DiscordOAuthCredentialsField(
["identify"]
)
class Output(BlockSchema):
user_id: str = SchemaField(description="The authenticated user's Discord ID")
username: str = SchemaField(description="The user's username")
avatar_url: str = SchemaField(description="URL to the user's avatar image")
banner_url: str = SchemaField(
description="URL to the user's banner image (if set)", default=""
)
accent_color: int = SchemaField(
description="The user's accent color as an integer", default=0
)
def __init__(self):
super().__init__(
id="8c7e39b8-4e9d-4f3a-b4e1-2a8c9d5f6e3b",
input_schema=DiscordGetCurrentUserBlock.Input,
output_schema=DiscordGetCurrentUserBlock.Output,
description="Gets information about the currently authenticated Discord user using OAuth2 credentials.",
categories={BlockCategory.SOCIAL},
disabled=not DISCORD_OAUTH_IS_CONFIGURED,
test_input={
"credentials": TEST_OAUTH_CREDENTIALS_INPUT,
},
test_credentials=TEST_OAUTH_CREDENTIALS,
test_output=[
("user_id", "123456789012345678"),
("username", "testuser"),
(
"avatar_url",
"https://cdn.discordapp.com/avatars/123456789012345678/avatar.png",
),
("banner_url", ""),
("accent_color", 0),
],
test_mock={
"get_user": lambda _: DiscordOAuthUser(
user_id="123456789012345678",
username="testuser",
avatar_url="https://cdn.discordapp.com/avatars/123456789012345678/avatar.png",
banner=None,
accent_color=0,
)
},
)
@staticmethod
async def get_user(credentials: OAuth2Credentials) -> DiscordOAuthUser:
user_info = await get_current_user(credentials)
return user_info
async def run(
self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
) -> BlockOutput:
try:
result = await self.get_user(credentials)
# Yield each output field
yield "user_id", result.user_id
yield "username", result.username
yield "avatar_url", result.avatar_url
# Handle banner URL if banner hash exists
if result.banner:
banner_url = f"https://cdn.discordapp.com/banners/{result.user_id}/{result.banner}.png"
yield "banner_url", banner_url
else:
yield "banner_url", ""
yield "accent_color", result.accent_color or 0
except Exception as e:
raise ValueError(f"Failed to get Discord user info: {e}")

View File

@@ -1,408 +0,0 @@
"""
API module for Enrichlayer integration.
This module provides a client for interacting with the Enrichlayer API,
which allows fetching LinkedIn profile data and related information.
"""
import datetime
import enum
import logging
from json import JSONDecodeError
from typing import Any, Optional, TypeVar
from pydantic import BaseModel, Field
from backend.data.model import APIKeyCredentials
from backend.util.request import Requests
logger = logging.getLogger(__name__)
T = TypeVar("T")
class EnrichlayerAPIException(Exception):
"""Exception raised for Enrichlayer API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class FallbackToCache(enum.Enum):
ON_ERROR = "on-error"
NEVER = "never"
class UseCache(enum.Enum):
IF_PRESENT = "if-present"
NEVER = "never"
class SocialMediaProfiles(BaseModel):
"""Social media profiles model."""
twitter: Optional[str] = None
facebook: Optional[str] = None
github: Optional[str] = None
class Experience(BaseModel):
"""Experience model for LinkedIn profiles."""
company: Optional[str] = None
title: Optional[str] = None
description: Optional[str] = None
location: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
company_linkedin_profile_url: Optional[str] = None
class Education(BaseModel):
"""Education model for LinkedIn profiles."""
school: Optional[str] = None
degree_name: Optional[str] = None
field_of_study: Optional[str] = None
starts_at: Optional[dict[str, int]] = None
ends_at: Optional[dict[str, int]] = None
school_linkedin_profile_url: Optional[str] = None
class PersonProfileResponse(BaseModel):
"""Response model for LinkedIn person profile.
This model represents the response from Enrichlayer's LinkedIn profile API.
The API returns comprehensive profile data including work experience,
education, skills, and contact information (when available).
Example API Response:
{
"public_identifier": "johnsmith",
"full_name": "John Smith",
"occupation": "Software Engineer at Tech Corp",
"experiences": [
{
"company": "Tech Corp",
"title": "Software Engineer",
"starts_at": {"year": 2020, "month": 1}
}
],
"education": [...],
"skills": ["Python", "JavaScript", ...]
}
"""
public_identifier: Optional[str] = None
profile_pic_url: Optional[str] = None
full_name: Optional[str] = None
first_name: Optional[str] = None
last_name: Optional[str] = None
occupation: Optional[str] = None
headline: Optional[str] = None
summary: Optional[str] = None
country: Optional[str] = None
country_full_name: Optional[str] = None
city: Optional[str] = None
state: Optional[str] = None
experiences: Optional[list[Experience]] = None
education: Optional[list[Education]] = None
languages: Optional[list[str]] = None
skills: Optional[list[str]] = None
inferred_salary: Optional[dict[str, Any]] = None
personal_email: Optional[str] = None
personal_contact_number: Optional[str] = None
social_media_profiles: Optional[SocialMediaProfiles] = None
extra: Optional[dict[str, Any]] = None
class SimilarProfile(BaseModel):
"""Similar profile model for LinkedIn person lookup."""
similarity: float
linkedin_profile_url: str
class PersonLookupResponse(BaseModel):
"""Response model for LinkedIn person lookup.
This model represents the response from Enrichlayer's person lookup API.
The API returns a LinkedIn profile URL and similarity scores when
searching for a person by name and company.
Example API Response:
{
"url": "https://www.linkedin.com/in/johnsmith/",
"name_similarity_score": 0.95,
"company_similarity_score": 0.88,
"title_similarity_score": 0.75,
"location_similarity_score": 0.60
}
"""
url: str | None = None
name_similarity_score: float | None
company_similarity_score: float | None
title_similarity_score: float | None
location_similarity_score: float | None
last_updated: datetime.datetime | None = None
profile: PersonProfileResponse | None = None
class RoleLookupResponse(BaseModel):
"""Response model for LinkedIn role lookup.
This model represents the response from Enrichlayer's role lookup API.
The API returns LinkedIn profile data for a specific role at a company.
Example API Response:
{
"linkedin_profile_url": "https://www.linkedin.com/in/johnsmith/",
"profile_data": {...} // Full PersonProfileResponse data when enrich_profile=True
}
"""
linkedin_profile_url: Optional[str] = None
profile_data: Optional[PersonProfileResponse] = None
class ProfilePictureResponse(BaseModel):
"""Response model for LinkedIn profile picture.
This model represents the response from Enrichlayer's profile picture API.
The API returns a URL to the person's LinkedIn profile picture.
Example API Response:
{
"tmp_profile_pic_url": "https://media.licdn.com/dms/image/..."
}
"""
tmp_profile_pic_url: str = Field(
..., description="URL of the profile picture", alias="tmp_profile_pic_url"
)
@property
def profile_picture_url(self) -> str:
"""Backward compatibility property for profile_picture_url."""
return self.tmp_profile_pic_url
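# Illustrative sketch (not part of the original module): the alias accepts the
# raw API field name, while the property keeps older callers that still read
# `profile_picture_url` working.
#
#     pic = ProfilePictureResponse(tmp_profile_pic_url="https://media.licdn.com/...")
#     assert pic.profile_picture_url == pic.tmp_profile_pic_url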
class EnrichlayerClient:
"""Client for interacting with the Enrichlayer API."""
API_BASE_URL = "https://enrichlayer.com/api/v2"
def __init__(
self,
credentials: Optional[APIKeyCredentials] = None,
custom_requests: Optional[Requests] = None,
):
"""
Initialize the Enrichlayer client.
Args:
credentials: The credentials to use for authentication.
custom_requests: Custom Requests instance for testing.
"""
if custom_requests:
self._requests = custom_requests
else:
headers: dict[str, str] = {
"Content-Type": "application/json",
}
if credentials:
headers["Authorization"] = (
f"Bearer {credentials.api_key.get_secret_value()}"
)
self._requests = Requests(
extra_headers=headers,
raise_for_status=False,
)
async def _handle_response(self, response) -> Any:
"""
Handle API response and check for errors.
Args:
response: The response object from the request.
Returns:
The response data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
if not response.ok:
try:
error_data = response.json()
error_message = error_data.get("message", "")
except JSONDecodeError:
error_message = response.text
raise EnrichlayerAPIException(
f"Enrichlayer API request failed ({response.status_code}): {error_message}",
response.status_code,
)
return response.json()
async def fetch_profile(
self,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
) -> PersonProfileResponse:
"""
Fetch a LinkedIn profile with optional parameters.
Args:
linkedin_url: The LinkedIn profile URL to fetch.
fallback_to_cache: Cache usage if live fetch fails ('on-error' or 'never').
use_cache: Cache utilization ('if-present' or 'never').
include_skills: Whether to include skills data.
include_inferred_salary: Whether to include inferred salary data.
include_personal_email: Whether to include personal email.
include_personal_contact_number: Whether to include personal contact number.
include_social_media: Whether to include social media profiles.
include_extra: Whether to include additional data.
Returns:
The LinkedIn profile data.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"url": linkedin_url,
"fallback_to_cache": fallback_to_cache.value.lower(),
"use_cache": use_cache.value.lower(),
}
if include_skills:
params["skills"] = "include"
if include_inferred_salary:
params["inferred_salary"] = "include"
if include_personal_email:
params["personal_email"] = "include"
if include_personal_contact_number:
params["personal_contact_number"] = "include"
if include_social_media:
params["twitter_profile_id"] = "include"
params["facebook_profile_id"] = "include"
params["github_profile_id"] = "include"
if include_extra:
params["extra"] = "include"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile", params=params
)
return PersonProfileResponse(**await self._handle_response(response))
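# Hedged note (not in the original source): each include_* flag maps to an
# Enrichlayer query param, and include_social_media fans out to three of them.
# With include_skills=True and include_social_media=True the params are roughly:
#
#     {
#         "url": "...", "fallback_to_cache": "on-error", "use_cache": "if-present",
#         "skills": "include", "twitter_profile_id": "include",
#         "facebook_profile_id": "include", "github_profile_id": "include",
#     }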
async def lookup_person(
self,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
) -> PersonLookupResponse:
"""
Look up a LinkedIn profile by person's information.
Args:
first_name: The person's first name.
last_name: The person's last name.
company_domain: The domain of the company they work for.
location: The person's location.
title: The person's job title.
include_similarity_checks: Whether to include similarity checks.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {"first_name": first_name, "company_domain": company_domain}
if last_name:
params["last_name"] = last_name
if location:
params["location"] = location
if title:
params["title"] = title
if include_similarity_checks:
params["similarity_checks"] = "include"
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/profile/resolve", params=params
)
return PersonLookupResponse(**await self._handle_response(response))
async def lookup_role(
self, role: str, company_name: str, enrich_profile: bool = False
) -> RoleLookupResponse:
"""
Look up a LinkedIn profile by role in a company.
Args:
role: The role title (e.g., CEO, CTO).
company_name: The name of the company.
enrich_profile: Whether to enrich the profile.
Returns:
The LinkedIn profile lookup result.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"role": role,
"company_name": company_name,
}
if enrich_profile:
params["enrich_profile"] = "enrich"
response = await self._requests.get(
f"{self.API_BASE_URL}/find/company/role", params=params
)
return RoleLookupResponse(**await self._handle_response(response))
async def get_profile_picture(
self, linkedin_profile_url: str
) -> ProfilePictureResponse:
"""
Get a LinkedIn profile picture URL.
Args:
linkedin_profile_url: The LinkedIn profile URL.
Returns:
The profile picture URL.
Raises:
EnrichlayerAPIException: If the API request fails.
"""
params = {
"linkedin_person_profile_url": linkedin_profile_url,
}
response = await self._requests.get(
f"{self.API_BASE_URL}/person/profile-picture", params=params
)
return ProfilePictureResponse(**await self._handle_response(response))
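# Minimal usage sketch (an illustrative assumption, not shipped code):
# `my_credentials` stands in for a real APIKeyCredentials instance, and the
# client is async, so callers await its methods.
#
#     import asyncio
#
#     async def main():
#         client = EnrichlayerClient(credentials=my_credentials)
#         profile = await client.fetch_profile(
#             "https://www.linkedin.com/in/williamhgates/", include_skills=True
#         )
#         print(profile.full_name, profile.skills)
#
#     asyncio.run(main())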

View File

@@ -1,34 +0,0 @@
"""
Authentication module for Enrichlayer API integration.
This module provides credential types and test credentials for the Enrichlayer API.
"""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
# Define the type of credentials input expected for Enrichlayer API
EnrichlayerCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ENRICHLAYER], Literal["api_key"]
]
# Mock credentials for testing Enrichlayer API integration
TEST_CREDENTIALS = APIKeyCredentials(
id="1234a567-89bc-4def-ab12-3456cdef7890",
provider="enrichlayer",
api_key=SecretStr("mock-enrichlayer-api-key"),
title="Mock Enrichlayer API key",
expires_at=None,
)
# Dictionary representation of test credentials for input fields
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
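# Hedged note: this dict mirrors the fields a CredentialsMetaInput carries
# (provider, id, type, title), so it can be passed directly as the
# "credentials" entry of a block's test_input.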

View File

@@ -1,527 +0,0 @@
"""
Block definitions for Enrichlayer API integration.
This module implements blocks for interacting with the Enrichlayer API,
which provides access to LinkedIn profile data and related information.
"""
import logging
from typing import Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.type import MediaFileType
from ._api import (
EnrichlayerClient,
Experience,
FallbackToCache,
PersonLookupResponse,
PersonProfileResponse,
RoleLookupResponse,
UseCache,
)
from ._auth import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, EnrichlayerCredentialsInput
logger = logging.getLogger(__name__)
class GetLinkedinProfileBlock(Block):
"""Block to fetch LinkedIn profile data using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfileBlock."""
linkedin_url: str = SchemaField(
description="LinkedIn profile URL to fetch data from",
placeholder="https://www.linkedin.com/in/username/",
)
fallback_to_cache: FallbackToCache = SchemaField(
description="Cache usage if live fetch fails",
default=FallbackToCache.ON_ERROR,
advanced=True,
)
use_cache: UseCache = SchemaField(
description="Cache utilization strategy",
default=UseCache.IF_PRESENT,
advanced=True,
)
include_skills: bool = SchemaField(
description="Include skills data",
default=False,
advanced=True,
)
include_inferred_salary: bool = SchemaField(
description="Include inferred salary data",
default=False,
advanced=True,
)
include_personal_email: bool = SchemaField(
description="Include personal email",
default=False,
advanced=True,
)
include_personal_contact_number: bool = SchemaField(
description="Include personal contact number",
default=False,
advanced=True,
)
include_social_media: bool = SchemaField(
description="Include social media profiles",
default=False,
advanced=True,
)
include_extra: bool = SchemaField(
description="Include additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfileBlock."""
profile: PersonProfileResponse = SchemaField(
description="LinkedIn profile data"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfileBlock."""
super().__init__(
id="f6e0ac73-4f1d-4acb-b4b7-b67066c5984e",
description="Fetch LinkedIn profile data using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfileBlock.Input,
output_schema=GetLinkedinProfileBlock.Output,
test_input={
"linkedin_url": "https://www.linkedin.com/in/williamhgates/",
"include_skills": True,
"include_social_media": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile",
PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_fetch_profile": lambda *args, **kwargs: PersonProfileResponse(
public_identifier="williamhgates",
full_name="Bill Gates",
occupation="Co-chair at Bill & Melinda Gates Foundation",
experiences=[
Experience(
company="Bill & Melinda Gates Foundation",
title="Co-chair",
starts_at={"year": 2000},
)
],
),
},
)
@staticmethod
async def _fetch_profile(
credentials: APIKeyCredentials,
linkedin_url: str,
fallback_to_cache: FallbackToCache = FallbackToCache.ON_ERROR,
use_cache: UseCache = UseCache.IF_PRESENT,
include_skills: bool = False,
include_inferred_salary: bool = False,
include_personal_email: bool = False,
include_personal_contact_number: bool = False,
include_social_media: bool = False,
include_extra: bool = False,
):
client = EnrichlayerClient(credentials)
profile = await client.fetch_profile(
linkedin_url=linkedin_url,
fallback_to_cache=fallback_to_cache,
use_cache=use_cache,
include_skills=include_skills,
include_inferred_salary=include_inferred_salary,
include_personal_email=include_personal_email,
include_personal_contact_number=include_personal_contact_number,
include_social_media=include_social_media,
include_extra=include_extra,
)
return profile
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to fetch LinkedIn profile data.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile = await self._fetch_profile(
credentials=credentials,
linkedin_url=input_data.linkedin_url,
fallback_to_cache=input_data.fallback_to_cache,
use_cache=input_data.use_cache,
include_skills=input_data.include_skills,
include_inferred_salary=input_data.include_inferred_salary,
include_personal_email=input_data.include_personal_email,
include_personal_contact_number=input_data.include_personal_contact_number,
include_social_media=input_data.include_social_media,
include_extra=input_data.include_extra,
)
yield "profile", profile
except Exception as e:
logger.error(f"Error fetching LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinPersonLookupBlock(Block):
"""Block to look up LinkedIn profiles by person's information using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinPersonLookupBlock."""
first_name: str = SchemaField(
description="Person's first name",
placeholder="John",
advanced=False,
)
last_name: str | None = SchemaField(
description="Person's last name",
placeholder="Doe",
default=None,
advanced=False,
)
company_domain: str = SchemaField(
description="Domain of the company they work for (optional)",
placeholder="example.com",
advanced=False,
)
location: Optional[str] = SchemaField(
description="Person's location (optional)",
placeholder="San Francisco",
default=None,
)
title: Optional[str] = SchemaField(
description="Person's job title (optional)",
placeholder="CEO",
default=None,
)
include_similarity_checks: bool = SchemaField(
description="Include similarity checks",
default=False,
advanced=True,
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinPersonLookupBlock."""
lookup_result: PersonLookupResponse = SchemaField(
description="LinkedIn profile lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinPersonLookupBlock."""
super().__init__(
id="d237a98a-5c4b-4a1c-b9e3-e6f9a6c81df7",
description="Look up LinkedIn profiles by person information using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinPersonLookupBlock.Input,
output_schema=LinkedinPersonLookupBlock.Output,
test_input={
"first_name": "Bill",
"last_name": "Gates",
"company_domain": "gatesfoundation.org",
"include_similarity_checks": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"lookup_result",
PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_person": lambda *args, **kwargs: PersonLookupResponse(
url="https://www.linkedin.com/in/williamhgates/",
name_similarity_score=0.93,
company_similarity_score=0.83,
title_similarity_score=0.3,
location_similarity_score=0.20,
)
},
)
@staticmethod
async def _lookup_person(
credentials: APIKeyCredentials,
first_name: str,
company_domain: str,
last_name: str | None = None,
location: Optional[str] = None,
title: Optional[str] = None,
include_similarity_checks: bool = False,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
lookup_result = await client.lookup_person(
first_name=first_name,
last_name=last_name,
company_domain=company_domain,
location=location,
title=title,
include_similarity_checks=include_similarity_checks,
enrich_profile=enrich_profile,
)
return lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
lookup_result = await self._lookup_person(
credentials=credentials,
first_name=input_data.first_name,
last_name=input_data.last_name,
company_domain=input_data.company_domain,
location=input_data.location,
title=input_data.title,
include_similarity_checks=input_data.include_similarity_checks,
enrich_profile=input_data.enrich_profile,
)
yield "lookup_result", lookup_result
except Exception as e:
logger.error(f"Error looking up LinkedIn profile: {str(e)}")
yield "error", str(e)
class LinkedinRoleLookupBlock(Block):
"""Block to look up LinkedIn profiles by role in a company using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for LinkedinRoleLookupBlock."""
role: str = SchemaField(
description="Role title (e.g., CEO, CTO)",
placeholder="CEO",
)
company_name: str = SchemaField(
description="Name of the company",
placeholder="Microsoft",
)
enrich_profile: bool = SchemaField(
description="Enrich the profile with additional data",
default=False,
advanced=True,
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for LinkedinRoleLookupBlock."""
role_lookup_result: RoleLookupResponse = SchemaField(
description="LinkedIn role lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinRoleLookupBlock."""
super().__init__(
id="3b9fc742-06d4-49c7-b5ce-7e302dd7c8a7",
description="Look up LinkedIn profiles by role in a company using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=LinkedinRoleLookupBlock.Input,
output_schema=LinkedinRoleLookupBlock.Output,
test_input={
"role": "Co-chair",
"company_name": "Gates Foundation",
"enrich_profile": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"role_lookup_result",
RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_lookup_role": lambda *args, **kwargs: RoleLookupResponse(
linkedin_profile_url="https://www.linkedin.com/in/williamhgates/",
),
},
)
@staticmethod
async def _lookup_role(
credentials: APIKeyCredentials,
role: str,
company_name: str,
enrich_profile: bool = False,
):
client = EnrichlayerClient(credentials=credentials)
role_lookup_result = await client.lookup_role(
role=role,
company_name=company_name,
enrich_profile=enrich_profile,
)
return role_lookup_result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to look up LinkedIn profiles by role.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
role_lookup_result = await self._lookup_role(
credentials=credentials,
role=input_data.role,
company_name=input_data.company_name,
enrich_profile=input_data.enrich_profile,
)
yield "role_lookup_result", role_lookup_result
except Exception as e:
logger.error(f"Error looking up role in company: {str(e)}")
yield "error", str(e)
class GetLinkedinProfilePictureBlock(Block):
"""Block to get LinkedIn profile pictures using Enrichlayer API."""
class Input(BlockSchema):
"""Input schema for GetLinkedinProfilePictureBlock."""
linkedin_profile_url: str = SchemaField(
description="LinkedIn profile URL",
placeholder="https://www.linkedin.com/in/username/",
)
credentials: EnrichlayerCredentialsInput = CredentialsField(
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
"""Output schema for GetLinkedinProfilePictureBlock."""
profile_picture_url: MediaFileType = SchemaField(
description="LinkedIn profile picture URL"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfilePictureBlock."""
super().__init__(
id="68d5a942-9b3f-4e9a-b7c1-d96ea4321f0d",
description="Get LinkedIn profile pictures using Enrichlayer",
categories={BlockCategory.SOCIAL},
input_schema=GetLinkedinProfilePictureBlock.Input,
output_schema=GetLinkedinProfilePictureBlock.Output,
test_input={
"linkedin_profile_url": "https://www.linkedin.com/in/williamhgates/",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
(
"profile_picture_url",
"https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
)
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_get_profile_picture": lambda *args, **kwargs: "https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
},
)
@staticmethod
async def _get_profile_picture(
credentials: APIKeyCredentials, linkedin_profile_url: str
):
client = EnrichlayerClient(credentials=credentials)
profile_picture_response = await client.get_profile_picture(
linkedin_profile_url=linkedin_profile_url,
)
return profile_picture_response.profile_picture_url
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Run the block to get LinkedIn profile pictures.
Args:
input_data: Input parameters for the block
credentials: API key credentials for Enrichlayer
**kwargs: Additional keyword arguments
Yields:
Tuples of (output_name, output_value)
"""
try:
profile_picture = await self._get_profile_picture(
credentials=credentials,
linkedin_profile_url=input_data.linkedin_profile_url,
)
yield "profile_picture_url", profile_picture
except Exception as e:
logger.error(f"Error getting profile picture: {str(e)}")
yield "error", str(e)

View File

@@ -93,11 +93,11 @@ class Webset(BaseModel):
"""
Set of key-value pairs you want to associate with this object.
"""
created_at: Annotated[datetime | None, Field(alias="createdAt")] = None
created_at: Annotated[datetime, Field(alias="createdAt")] | None = None
"""
The date and time the webset was created
"""
updated_at: Annotated[datetime | None, Field(alias="updatedAt")] = None
updated_at: Annotated[datetime, Field(alias="updatedAt")] | None = None
"""
The date and time the webset was last updated
"""

View File

@@ -29,8 +29,8 @@ class FirecrawlExtractBlock(Block):
prompt: str | None = SchemaField(
description="The prompt to use for the crawl", default=None, advanced=False
)
output_schema: dict | None = SchemaField(
description="A Json Schema describing the output structure if more rigid structure is desired.",
output_schema: str | None = SchemaField(
description="A more rigid structure if you already know the JSON layout.",
default=None,
)
enable_web_search: bool = SchemaField(
@@ -56,6 +56,7 @@ class FirecrawlExtractBlock(Block):
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
extract_result = app.extract(
urls=input_data.urls,
prompt=input_data.prompt,

File diff suppressed because it is too large Load Diff

View File

@@ -30,7 +30,6 @@ TEST_CREDENTIALS_INPUT = {
class IdeogramModelName(str, Enum):
V3 = "V_3"
V2 = "V_2"
V1 = "V_1"
V1_TURBO = "V_1_TURBO"
@@ -96,8 +95,8 @@ class IdeogramModelBlock(Block):
title="Prompt",
)
ideogram_model_name: IdeogramModelName = SchemaField(
description="The name of the Image Generation Model, e.g., V_3",
default=IdeogramModelName.V3,
description="The name of the Image Generation Model, e.g., V_2",
default=IdeogramModelName.V2,
title="Image Generation Model",
advanced=False,
)
@@ -237,111 +236,6 @@ class IdeogramModelBlock(Block):
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
# Use V3 endpoint for V3 model, legacy endpoint for others
if model_name == "V_3":
return await self._run_model_v3(
api_key,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
else:
return await self._run_model_legacy(
api_key,
model_name,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
async def _run_model_v3(
self,
api_key: SecretStr,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/v1/ideogram-v3/generate"
headers = {
"Api-Key": api_key.get_secret_value(),
"Content-Type": "application/json",
}
# Map legacy aspect ratio values to V3 format
aspect_ratio_map = {
"ASPECT_10_16": "10x16",
"ASPECT_16_10": "16x10",
"ASPECT_9_16": "9x16",
"ASPECT_16_9": "16x9",
"ASPECT_3_2": "3x2",
"ASPECT_2_3": "2x3",
"ASPECT_4_3": "4x3",
"ASPECT_3_4": "3x4",
"ASPECT_1_1": "1x1",
"ASPECT_1_3": "1x3",
"ASPECT_3_1": "3x1",
# Additional V3 supported ratios
"ASPECT_1_2": "1x2",
"ASPECT_2_1": "2x1",
"ASPECT_4_5": "4x5",
"ASPECT_5_4": "5x4",
}
v3_aspect_ratio = aspect_ratio_map.get(
aspect_ratio, "1x1"
) # Default to 1x1 if not found
# Use JSON for V3 endpoint (simpler than multipart/form-data)
data: Dict[str, Any] = {
"prompt": prompt,
"aspect_ratio": v3_aspect_ratio,
"magic_prompt": magic_prompt_option,
"style_type": style_type,
}
if seed is not None:
data["seed"] = seed
if negative_prompt:
data["negative_prompt"] = negative_prompt
# Note: V3 endpoint may have different color palette support
# For now, we'll omit color palettes for V3 to avoid errors
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with V3 endpoint: {str(e)}")
async def _run_model_legacy(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/generate"
headers = {
@@ -355,33 +249,28 @@ class IdeogramModelBlock(Block):
"model": model_name,
"aspect_ratio": aspect_ratio,
"magic_prompt_option": magic_prompt_option,
"style_type": style_type,
}
}
# Only add style_type for V2, V2_TURBO, and V3 models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO", "V_3"]:
data["image_request"]["style_type"] = style_type
if seed is not None:
data["image_request"]["seed"] = seed
if negative_prompt:
data["image_request"]["negative_prompt"] = negative_prompt
# Only add color palette for V2 and V2_TURBO models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO"]:
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with legacy endpoint: {str(e)}")
raise Exception(f"Failed to fetch image: {str(e)}")
async def upscale_image(self, api_key: SecretStr, image_url: str):
url = "https://api.ideogram.ai/upscale"

View File

@@ -37,7 +37,6 @@ LLMProviderName = Literal[
ProviderName.OPENAI,
ProviderName.OPEN_ROUTER,
ProviderName.LLAMA_API,
ProviderName.V0,
]
AICredentials = CredentialsMetaInput[LLMProviderName, Literal["api_key"]]
@@ -156,10 +155,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
LLAMA_API_LLAMA4_MAVERICK = "Llama-4-Maverick-17B-128E-Instruct-FP8"
LLAMA_API_LLAMA3_3_8B = "Llama-3.3-8B-Instruct"
LLAMA_API_LLAMA3_3_70B = "Llama-3.3-70B-Instruct"
# v0 by Vercel models
V0_1_5_MD = "v0-1.5-md"
V0_1_5_LG = "v0-1.5-lg"
V0_1_0_MD = "v0-1.0-md"
@property
def metadata(self) -> ModelMetadata:
@@ -285,10 +280,6 @@ MODEL_METADATA = {
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028),
# v0 by Vercel models
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000),
}
for model in LlmModel:
@@ -685,11 +676,7 @@ async def llm_call(
client = openai.OpenAI(
base_url="https://api.aimlapi.com/v2",
api_key=credentials.api_key.get_secret_value(),
default_headers={
"X-Project": "AutoGPT",
"X-Title": "AutoGPT",
"HTTP-Referer": "https://github.com/Significant-Gravitas/AutoGPT",
},
default_headers={"X-Project": "AutoGPT"},
)
completion = client.chat.completions.create(
@@ -709,42 +696,6 @@ async def llm_call(
),
reasoning=None,
)
elif provider == "v0":
tools_param = tools if tools else openai.NOT_GIVEN
client = openai.AsyncOpenAI(
base_url="https://api.v0.dev/v1",
api_key=credentials.api_key.get_secret_value(),
)
response_format = None
if json_format:
response_format = {"type": "json_object"}
parallel_tool_calls_param = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
response_format=response_format, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=parallel_tool_calls_param,
)
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
prompt=prompt,
response=response.choices[0].message.content or "",
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")

View File

@@ -291,32 +291,9 @@ class SmartDecisionMakerBlock(Block):
for link in links:
sink_name = SmartDecisionMakerBlock.cleanup(link.sink_name)
# Handle dynamic fields (e.g., values_#_*, items_$_*, etc.)
# These are fields that get merged by the executor into their base field
if (
"_#_" in link.sink_name
or "_$_" in link.sink_name
or "_@_" in link.sink_name
):
# For dynamic fields, provide a generic string schema
# The executor will handle merging these into the appropriate structure
properties[sink_name] = {
"type": "string",
"description": f"Dynamic value for {link.sink_name}",
}
else:
# For regular fields, use the block's schema
try:
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
except (KeyError, AttributeError):
# If the field doesn't exist in the schema, provide a generic schema
properties[sink_name] = {
"type": "string",
"description": f"Value for {link.sink_name}",
}
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
tool_function["parameters"] = {
**block.input_schema.jsonschema(),
@@ -501,6 +478,10 @@ class SmartDecisionMakerBlock(Block):
}
)
prompt.extend(tool_output)
if input_data.multiple_tool_calls:
input_data.sys_prompt += "\nYou can call a tool (different tools) multiple times in a single response."
else:
input_data.sys_prompt += "\nOnly provide EXACTLY one function call, multiple tool calls is strictly prohibited."
values = input_data.prompt_values
if values:

View File

@@ -1,8 +0,0 @@
from backend.sdk import BlockCostType, ProviderBuilder
stagehand = (
ProviderBuilder("stagehand")
.with_api_key("STAGEHAND_API_KEY", "Stagehand API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -1,393 +0,0 @@
import logging
import signal
import threading
from contextlib import contextmanager
from enum import Enum
# Monkey patch Stagehand to prevent signal handling in worker threads
import stagehand.main
from stagehand import Stagehand
from backend.blocks.llm import (
MODEL_METADATA,
AICredentials,
AICredentialsField,
LlmModel,
ModelMetadata,
)
from backend.blocks.stagehand._config import stagehand as stagehand_provider
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
)
# Store the original method
original_register_signal_handlers = stagehand.main.Stagehand._register_signal_handlers
def safe_register_signal_handlers(self):
"""Only register signal handlers in the main thread"""
if threading.current_thread() is threading.main_thread():
original_register_signal_handlers(self)
else:
# Skip signal handling in worker threads
pass
# Replace the method
stagehand.main.Stagehand._register_signal_handlers = safe_register_signal_handlers
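# Why the patch matters (illustrative, hedged): Python only allows signal
# handler registration from the main thread; in a worker thread
# signal.signal() raises ValueError, which would break Stagehand's init.
#
#     import signal
#     import threading
#
#     def _worker():
#         try:
#             signal.signal(signal.SIGINT, signal.SIG_DFL)
#         except ValueError as exc:
#             print(f"expected in a worker thread: {exc}")
#
#     threading.Thread(target=_worker).start()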
@contextmanager
def disable_signal_handling():
"""Context manager to temporarily disable signal handling"""
if threading.current_thread() is not threading.main_thread():
# In worker threads, temporarily replace signal.signal with a no-op
original_signal = signal.signal
def noop_signal(*args, **kwargs):
pass
signal.signal = noop_signal
try:
yield
finally:
signal.signal = original_signal
else:
# In main thread, don't modify anything
yield
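# Usage sketch (hedged): wrap any Stagehand construction that might register
# signal handlers; in the main thread the context manager is a no-op.
#
#     with disable_signal_handling():
#         sh = Stagehand(api_key="...", project_id="...")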
logger = logging.getLogger(__name__)
class StagehandRecommendedLlmModel(str, Enum):
"""
This is a subset of LlmModel from autogpt_platform/backend/backend/blocks/llm.py.
It contains only the models recommended by Stagehand
"""
# OpenAI
GPT41 = "gpt-4.1-2025-04-14"
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
@property
def provider_name(self) -> str:
"""
Returns the provider name for the model in the required format for Stagehand:
provider/model_name
"""
model_metadata = MODEL_METADATA[LlmModel(self.value)]
model_name = self.value
if len(model_name.split("/")) == 1 and not self.value.startswith(
model_metadata.provider
):
assert (
model_metadata.provider != "open_router"
), "open_router must not be prepended to the model name (see stagehand/_config.py)"
model_name = f"{model_metadata.provider}/{model_name}"
logger.error(f"Model name: {model_name}")
return model_name
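# Illustrative expectation (assumed provider mapping, not verified output):
#     StagehandRecommendedLlmModel.GPT41.provider_name
#     # -> "openai/gpt-4.1-2025-04-14", assuming MODEL_METADATA maps it to "openai"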
@property
def provider(self) -> str:
return MODEL_METADATA[LlmModel(self.value)].provider
@property
def metadata(self) -> ModelMetadata:
return MODEL_METADATA[LlmModel(self.value)]
@property
def context_window(self) -> int:
return MODEL_METADATA[LlmModel(self.value)].context_window
@property
def max_output_tokens(self) -> int | None:
return MODEL_METADATA[LlmModel(self.value)].max_output_tokens
class StagehandObserveBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
selector: str = SchemaField(description="XPath selector to locate element.")
description: str = SchemaField(description="Human-readable description")
method: str | None = SchemaField(description="Suggested action method")
arguments: list[str] | None = SchemaField(
description="Additional action parameters"
)
def __init__(self):
super().__init__(
id="d3863944-0eaf-45c4-a0c9-63e0fe1ee8b9",
description="Find suggested actions for your workflows",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandObserveBlock.Input,
output_schema=StagehandObserveBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"OBSERVE: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"OBSERVE: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
observe_results = await page.observe(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
for result in observe_results:
yield "selector", result.selector
yield "description", result.description
yield "method", result.method
yield "arguments", result.arguments
class StagehandActBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
action: list[str] = SchemaField(
description="Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step.",
)
variables: dict[str, str] = SchemaField(
description="Variables to use in the action. Variables contains data you want the action to use.",
default_factory=dict,
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
timeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM ready. Extended timeout for slow-loading forms",
default=60000,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the action was completed successfully"
)
message: str = SchemaField(description="Details about the action's execution.")
action: str = SchemaField(description="Action performed")
def __init__(self):
super().__init__(
id="86eba68b-9549-4c0b-a0db-47d85a56cc27",
description="Interact with a web page by performing actions on a web page. Use it to build self-healing and deterministic automations that adapt to website chang.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandActBlock.Input,
output_schema=StagehandActBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"ACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"ACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
for action in input_data.action:
action_results = await page.act(
action,
variables=input_data.variables,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
timeoutMs=input_data.timeoutMs,
)
yield "success", action_results.success
yield "message", action_results.message
yield "action", action_results.action
class StagehandExtractBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
extraction: str = SchemaField(description="Extracted data from the page.")
def __init__(self):
super().__init__(
id="fd3c0b18-2ba6-46ae-9339-fcb40537ad98",
description="Extract structured data from a webpage.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandExtractBlock.Input,
output_schema=StagehandExtractBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"EXTRACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"EXTRACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
extraction = await page.extract(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
yield "extraction", str(extraction.model_dump()["extraction"])

View File

@@ -1,283 +0,0 @@
import logging
from typing import Any
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
# Duplicate pydantic models for store data so we don't accidentally change the data shape in the blocks when editing the backend
class LibraryAgent(BaseModel):
"""Model representing an agent in the user's library."""
library_agent_id: str = ""
agent_id: str = ""
agent_version: int = 0
agent_name: str = ""
description: str = ""
creator: str = ""
is_archived: bool = False
categories: list[str] = []
class AddToLibraryFromStoreBlock(Block):
"""
Block that adds an agent from the store to the user's library.
This enables users to easily import agents from the marketplace into their personal collection.
"""
class Input(BlockSchema):
store_listing_version_id: str = SchemaField(
description="The ID of the store listing version to add to library"
)
agent_name: str | None = SchemaField(
description="Optional custom name for the agent in your library",
default=None,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the agent was successfully added to library"
)
library_agent_id: str = SchemaField(
description="The ID of the library agent entry"
)
agent_id: str = SchemaField(description="The ID of the agent graph")
agent_version: int = SchemaField(
description="The version number of the agent graph"
)
agent_name: str = SchemaField(description="The name of the agent")
message: str = SchemaField(description="Success or error message")
def __init__(self):
super().__init__(
id="2602a7b1-3f4d-4e5f-9c8b-1a2b3c4d5e6f",
description="Add an agent from the store to your personal library",
categories={BlockCategory.BASIC},
input_schema=AddToLibraryFromStoreBlock.Input,
output_schema=AddToLibraryFromStoreBlock.Output,
test_input={
"store_listing_version_id": "test-listing-id",
"agent_name": "My Custom Agent",
},
test_output=[
("success", True),
("library_agent_id", "test-library-id"),
("agent_id", "test-agent-id"),
("agent_version", 1),
("agent_name", "Test Agent"),
("message", "Agent successfully added to library"),
],
test_mock={
"_add_to_library": lambda *_, **__: LibraryAgent(
library_agent_id="test-library-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Agent",
)
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
**kwargs,
) -> BlockOutput:
library_agent = await self._add_to_library(
user_id=user_id,
store_listing_version_id=input_data.store_listing_version_id,
custom_name=input_data.agent_name,
)
yield "success", True
yield "library_agent_id", library_agent.library_agent_id
yield "agent_id", library_agent.agent_id
yield "agent_version", library_agent.agent_version
yield "agent_name", library_agent.agent_name
yield "message", "Agent successfully added to library"
async def _add_to_library(
self,
user_id: str,
store_listing_version_id: str,
custom_name: str | None = None,
) -> LibraryAgent:
"""
Add a store agent to the user's library using the existing library database function.
"""
library_agent = (
await get_database_manager_async_client().add_store_agent_to_library(
store_listing_version_id=store_listing_version_id, user_id=user_id
)
)
# If custom name is provided, we could update the library agent name here
# For now, we'll just return the agent info
agent_name = custom_name if custom_name else library_agent.name
return LibraryAgent(
library_agent_id=library_agent.id,
agent_id=library_agent.graph_id,
agent_version=library_agent.graph_version,
agent_name=agent_name,
)
class ListLibraryAgentsBlock(Block):
"""
Block that lists all agents in the user's library.
"""
class Input(BlockSchema):
search_query: str | None = SchemaField(
description="Optional search query to filter agents", default=None
)
limit: int = SchemaField(
description="Maximum number of agents to return", default=50, ge=1, le=100
)
page: int = SchemaField(
description="Page number for pagination", default=1, ge=1
)
class Output(BlockSchema):
agents: list[LibraryAgent] = SchemaField(
description="List of agents in the library",
default_factory=list,
)
agent: LibraryAgent = SchemaField(
description="Individual library agent (yielded for each agent)"
)
total_count: int = SchemaField(
description="Total number of agents in library", default=0
)
page: int = SchemaField(description="Current page number", default=1)
total_pages: int = SchemaField(description="Total number of pages", default=1)
def __init__(self):
super().__init__(
id="082602d3-a74d-4600-9e9c-15b3af7eae98",
description="List all agents in your personal library",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=ListLibraryAgentsBlock.Input,
output_schema=ListLibraryAgentsBlock.Output,
test_input={
"search_query": None,
"limit": 10,
"page": 1,
},
test_output=[
(
"agents",
[
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
),
],
),
("total_count", 1),
("page", 1),
("total_pages", 1),
(
"agent",
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
),
),
],
test_mock={
"_list_library_agents": lambda *_, **__: {
"agents": [
LibraryAgent(
library_agent_id="test-lib-id",
agent_id="test-agent-id",
agent_version=1,
agent_name="Test Library Agent",
description="A test agent in library",
creator="Test User",
)
],
"total": 1,
"page": 1,
"total_pages": 1,
}
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
**kwargs,
) -> BlockOutput:
result = await self._list_library_agents(
user_id=user_id,
search_query=input_data.search_query,
limit=input_data.limit,
page=input_data.page,
)
agents = result["agents"]
yield "agents", agents
yield "total_count", result["total"]
yield "page", result["page"]
yield "total_pages", result["total_pages"]
# Yield each agent individually for better graph connectivity
for agent in agents:
yield "agent", agent
async def _list_library_agents(
self,
user_id: str,
search_query: str | None = None,
limit: int = 50,
page: int = 1,
) -> dict[str, Any]:
"""
List agents in the user's library using the database client.
"""
result = await get_database_manager_async_client().list_library_agents(
user_id=user_id,
search_term=search_query,
page=page,
page_size=limit,
)
agents = [
LibraryAgent(
library_agent_id=agent.id,
agent_id=agent.graph_id,
agent_version=agent.graph_version,
agent_name=agent.name,
description=getattr(agent, "description", ""),
creator=getattr(agent, "creator", ""),
is_archived=getattr(agent, "is_archived", False),
categories=getattr(agent, "categories", []),
)
for agent in result.agents
]
return {
"agents": agents,
"total": result.pagination.total_items,
"page": result.pagination.current_page,
"total_pages": result.pagination.total_pages,
}

View File

@@ -1,311 +0,0 @@
import logging
from typing import Literal
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
# Duplicate pydantic models for store data so we don't accidentally change the data shape in the blocks when editing the backend
class StoreAgent(BaseModel):
"""Model representing a store agent."""
slug: str = ""
name: str = ""
description: str = ""
creator: str = ""
rating: float = 0.0
runs: int = 0
categories: list[str] = []
class StoreAgentDict(BaseModel):
"""Dictionary representation of a store agent."""
slug: str
name: str
description: str
creator: str
rating: float
runs: int
class SearchAgentsResponse(BaseModel):
"""Response from searching store agents."""
agents: list[StoreAgentDict]
total_count: int
class StoreAgentDetails(BaseModel):
"""Detailed information about a store agent."""
found: bool
store_listing_version_id: str = ""
agent_name: str = ""
description: str = ""
creator: str = ""
categories: list[str] = []
runs: int = 0
rating: float = 0.0
class GetStoreAgentDetailsBlock(Block):
"""
Block that retrieves detailed information about an agent from the store.
"""
class Input(BlockSchema):
creator: str = SchemaField(description="The username of the agent creator")
slug: str = SchemaField(description="The name of the agent")
class Output(BlockSchema):
found: bool = SchemaField(
description="Whether the agent was found in the store"
)
store_listing_version_id: str = SchemaField(
description="The store listing version ID"
)
agent_name: str = SchemaField(description="Name of the agent")
description: str = SchemaField(description="Description of the agent")
creator: str = SchemaField(description="Creator of the agent")
categories: list[str] = SchemaField(
description="Categories the agent belongs to", default_factory=list
)
runs: int = SchemaField(
description="Number of times the agent has been run", default=0
)
rating: float = SchemaField(
description="Average rating of the agent", default=0.0
)
def __init__(self):
super().__init__(
id="b604f0ec-6e0d-40a7-bf55-9fd09997cced",
description="Get detailed information about an agent from the store",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=GetStoreAgentDetailsBlock.Input,
output_schema=GetStoreAgentDetailsBlock.Output,
test_input={"creator": "test-creator", "slug": "test-agent-slug"},
test_output=[
("found", True),
("store_listing_version_id", "test-listing-id"),
("agent_name", "Test Agent"),
("description", "A test agent"),
("creator", "Test Creator"),
("categories", ["productivity", "automation"]),
("runs", 100),
("rating", 4.5),
],
test_mock={
"_get_agent_details": lambda *_, **__: StoreAgentDetails(
found=True,
store_listing_version_id="test-listing-id",
agent_name="Test Agent",
description="A test agent",
creator="Test Creator",
categories=["productivity", "automation"],
runs=100,
rating=4.5,
)
},
static_output=True,
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
details = await self._get_agent_details(
creator=input_data.creator, slug=input_data.slug
)
yield "found", details.found
yield "store_listing_version_id", details.store_listing_version_id
yield "agent_name", details.agent_name
yield "description", details.description
yield "creator", details.creator
yield "categories", details.categories
yield "runs", details.runs
yield "rating", details.rating
async def _get_agent_details(self, creator: str, slug: str) -> StoreAgentDetails:
"""
Retrieve detailed information about a store agent.
"""
# Look up the listing by creator username and agent slug
agent_details = (
await get_database_manager_async_client().get_store_agent_details(
username=creator, agent_name=slug
)
)
return StoreAgentDetails(
found=True,
store_listing_version_id=agent_details.store_listing_version_id,
agent_name=agent_details.agent_name,
description=agent_details.description,
creator=agent_details.creator,
categories=(
agent_details.categories if hasattr(agent_details, "categories") else []
),
runs=agent_details.runs,
rating=agent_details.rating,
)
class SearchStoreAgentsBlock(Block):
"""
Block that searches for agents in the store based on various criteria.
"""
class Input(BlockSchema):
query: str | None = SchemaField(
description="Search query to find agents", default=None
)
category: str | None = SchemaField(
description="Filter by category", default=None
)
sort_by: Literal["rating", "runs", "name", "recent"] = SchemaField(
description="How to sort the results", default="rating"
)
limit: int = SchemaField(
description="Maximum number of results to return", default=10, ge=1, le=100
)
class Output(BlockSchema):
agents: list[StoreAgent] = SchemaField(
description="List of agents matching the search criteria",
default_factory=list,
)
agent: StoreAgent = SchemaField(description="Basic information of the agent")
total_count: int = SchemaField(
description="Total number of agents found", default=0
)
def __init__(self):
super().__init__(
id="39524701-026c-4328-87cc-1b88c8e2cb4c",
description="Search for agents in the store",
categories={BlockCategory.BASIC, BlockCategory.DATA},
input_schema=SearchStoreAgentsBlock.Input,
output_schema=SearchStoreAgentsBlock.Output,
test_input={
"query": "productivity",
"category": None,
"sort_by": "rating",
"limit": 10,
},
test_output=[
(
"agents",
[
{
"slug": "test-agent",
"name": "Test Agent",
"description": "A test agent",
"creator": "Test Creator",
"rating": 4.5,
"runs": 100,
}
],
),
("total_count", 1),
(
"agent",
{
"slug": "test-agent",
"name": "Test Agent",
"description": "A test agent",
"creator": "Test Creator",
"rating": 4.5,
"runs": 100,
},
),
],
test_mock={
"_search_agents": lambda *_, **__: SearchAgentsResponse(
agents=[
StoreAgentDict(
slug="test-agent",
name="Test Agent",
description="A test agent",
creator="Test Creator",
rating=4.5,
runs=100,
)
],
total_count=1,
)
},
)
async def run(
self,
input_data: Input,
**kwargs,
) -> BlockOutput:
result = await self._search_agents(
query=input_data.query,
category=input_data.category,
sort_by=input_data.sort_by,
limit=input_data.limit,
)
agents = result.agents
total_count = result.total_count
# Convert to dict for output
agents_as_dicts = [agent.model_dump() for agent in agents]
yield "agents", agents_as_dicts
yield "total_count", total_count
for agent_dict in agents_as_dicts:
yield "agent", agent_dict
async def _search_agents(
self,
query: str | None = None,
category: str | None = None,
sort_by: str = "rating",
limit: int = 10,
) -> SearchAgentsResponse:
"""
Search for agents in the store using the existing store database function.
"""
# Map our sort_by to the store's sorted_by parameter
sorted_by_map = {
"rating": "most_popular",
"runs": "most_runs",
"name": "alphabetical",
"recent": "recently_updated",
}
result = await get_database_manager_async_client().get_store_agents(
featured=False,
creators=None,
sorted_by=sorted_by_map.get(sort_by, "most_popular"),
search_query=query,
category=category,
page=1,
page_size=limit,
)
agents = [
StoreAgentDict(
slug=agent.slug,
name=agent.agent_name,
description=agent.description,
creator=agent.creator,
rating=agent.rating,
runs=agent.runs,
)
for agent in result.agents
]
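# NOTE: total_count below reflects only this page of results (at most `limit` agents), not the store-wide match count.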
return SearchAgentsResponse(agents=agents, total_count=len(agents))

View File

@@ -35,19 +35,20 @@ async def execute_graph(
logger.info("Input data: %s", input_data)
# --- Test adding new executions --- #
graph_exec = await agent_server.test_execute_graph(
response = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
logger.info("Created execution with ID: %s", graph_exec.id)
graph_exec_id = response.graph_exec_id
logger.info("Created execution with ID: %s", graph_exec_id)
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec.id, 30)
result = await wait_execution(test_user.id, graph_exec_id, 30)
logger.info("Execution completed with %d results", len(result))
return graph_exec.id
return graph_exec_id
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -1,130 +0,0 @@
from unittest.mock import Mock
import pytest
from backend.blocks.data_manipulation import AddToListBlock, CreateDictionaryBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_dict_fields():
"""Test Smart Decision Maker can handle dynamic dictionary fields (_#_) for any block"""
# Create a mock node for CreateDictionaryBlock
mock_node = Mock()
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
# Create mock links with dynamic dictionary fields
mock_links = [
Mock(
source_name="tools_^_create_dict_~_name",
sink_name="values_#_name", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_age",
sink_name="values_#_age", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_create_dict_~_city",
sink_name="values_#_city", # Dynamic dict field
sink_id="dict_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify the signature was created successfully
assert signature["type"] == "function"
assert "parameters" in signature["function"]
assert "properties" in signature["function"]["parameters"]
# Check that dynamic fields are handled
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 3 # Should have all three fields
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert "type" in prop_value
assert prop_value["type"] == "string" # Dynamic fields get string type
assert "description" in prop_value
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_smart_decision_maker_handles_dynamic_list_fields():
"""Test Smart Decision Maker can handle dynamic list fields (_$_) for any block"""
# Create a mock node for AddToListBlock
mock_node = Mock()
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
# Create mock links with dynamic list fields
mock_links = [
Mock(
source_name="tools_^_add_to_list_~_0",
sink_name="entries_$_0", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
Mock(
source_name="tools_^_add_to_list_~_1",
sink_name="entries_$_1", # Dynamic list field
sink_id="list_node_id",
source_id="smart_decision_node_id",
),
]
# Generate function signature
signature = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, mock_links # type: ignore
)
# Verify dynamic list fields are handled properly
assert signature["type"] == "function"
properties = signature["function"]["parameters"]["properties"]
assert len(properties) == 2 # Should have both list items
# Each dynamic field should have proper schema
for prop_value in properties.values():
assert prop_value["type"] == "string"
assert "Dynamic value for" in prop_value["description"]
@pytest.mark.asyncio
async def test_create_dict_block_with_dynamic_values():
"""Test CreateDictionaryBlock processes dynamic values correctly"""
block = CreateDictionaryBlock()
# Simulate what happens when executor merges dynamic fields
# The executor merges values_#_* fields into the values dict
input_data = block.input_schema(
values={
"existing": "value",
"name": "Alice", # This would come from values_#_name
"age": 25, # This would come from values_#_age
}
)
# Run the block
result = {}
async for output_name, output_value in block.run(input_data):
result[output_name] = output_value
# Check the result
assert "dictionary" in result
assert result["dictionary"]["existing"] == "value"
assert result["dictionary"]["name"] == "Alice"
assert result["dictionary"]["age"] == 25

View File

@@ -1,155 +0,0 @@
from unittest.mock import MagicMock
import pytest
from backend.blocks.system.library_operations import (
AddToLibraryFromStoreBlock,
LibraryAgent,
)
from backend.blocks.system.store_operations import (
GetStoreAgentDetailsBlock,
SearchAgentsResponse,
SearchStoreAgentsBlock,
StoreAgentDetails,
StoreAgentDict,
)
@pytest.mark.asyncio
async def test_add_to_library_from_store_block_success(mocker):
"""Test successful addition of agent from store to library."""
block = AddToLibraryFromStoreBlock()
# Mock the library agent response
mock_library_agent = MagicMock()
mock_library_agent.id = "lib-agent-123"
mock_library_agent.graph_id = "graph-456"
mock_library_agent.graph_version = 1
mock_library_agent.name = "Test Agent"
mocker.patch.object(
block,
"_add_to_library",
return_value=LibraryAgent(
library_agent_id="lib-agent-123",
agent_id="graph-456",
agent_version=1,
agent_name="Test Agent",
),
)
input_data = block.Input(
store_listing_version_id="store-listing-v1", agent_name="Custom Agent Name"
)
outputs = {}
async for name, value in block.run(input_data, user_id="test-user"):
outputs[name] = value
assert outputs["success"] is True
assert outputs["library_agent_id"] == "lib-agent-123"
assert outputs["agent_id"] == "graph-456"
assert outputs["agent_version"] == 1
assert outputs["agent_name"] == "Test Agent"
assert outputs["message"] == "Agent successfully added to library"
@pytest.mark.asyncio
async def test_get_store_agent_details_block_success(mocker):
"""Test successful retrieval of store agent details."""
block = GetStoreAgentDetailsBlock()
mocker.patch.object(
block,
"_get_agent_details",
return_value=StoreAgentDetails(
found=True,
store_listing_version_id="version-123",
agent_name="Test Agent",
description="A test agent for testing",
creator="Test Creator",
categories=["productivity", "automation"],
runs=100,
rating=4.5,
),
)
input_data = block.Input(creator="Test Creator", slug="test-slug")
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert outputs["found"] is True
assert outputs["store_listing_version_id"] == "version-123"
assert outputs["agent_name"] == "Test Agent"
assert outputs["description"] == "A test agent for testing"
assert outputs["creator"] == "Test Creator"
assert outputs["categories"] == ["productivity", "automation"]
assert outputs["runs"] == 100
assert outputs["rating"] == 4.5
@pytest.mark.asyncio
async def test_search_store_agents_block(mocker):
"""Test searching for store agents."""
block = SearchStoreAgentsBlock()
mocker.patch.object(
block,
"_search_agents",
return_value=SearchAgentsResponse(
agents=[
StoreAgentDict(
slug="creator1/agent1",
name="Agent One",
description="First test agent",
creator="Creator 1",
rating=4.8,
runs=500,
),
StoreAgentDict(
slug="creator2/agent2",
name="Agent Two",
description="Second test agent",
creator="Creator 2",
rating=4.2,
runs=200,
),
],
total_count=2,
),
)
input_data = block.Input(
query="test", category="productivity", sort_by="rating", limit=10
)
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert len(outputs["agents"]) == 2
assert outputs["total_count"] == 2
assert outputs["agents"][0]["name"] == "Agent One"
assert outputs["agents"][0]["rating"] == 4.8
@pytest.mark.asyncio
async def test_search_store_agents_block_empty_results(mocker):
"""Test searching with no results."""
block = SearchStoreAgentsBlock()
mocker.patch.object(
block,
"_search_agents",
return_value=SearchAgentsResponse(agents=[], total_count=0),
)
input_data = block.Input(query="nonexistent", limit=10)
outputs = {}
async for name, value in block.run(input_data):
outputs[name] = value
assert outputs["agents"] == []
assert outputs["total_count"] == 0

View File

@@ -1,5 +1,4 @@
import asyncio
import logging
import time
from datetime import datetime, timedelta
from typing import Any, Literal, Union
@@ -8,7 +7,6 @@ from zoneinfo import ZoneInfo
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.execution import UserContext
from backend.data.model import SchemaField
# Shared timezone literal type for all time/date blocks
@@ -53,80 +51,16 @@ TimezoneLiteral = Literal[
"Etc/GMT+12", # UTC-12:00
]
logger = logging.getLogger(__name__)
def _get_timezone(
format_type: Any, # Any format type with timezone and use_user_timezone attributes
user_timezone: str | None,
) -> ZoneInfo:
"""
Determine which timezone to use based on format settings and user context.
Args:
format_type: The format configuration containing timezone settings
user_timezone: The user's timezone from context
Returns:
ZoneInfo object for the determined timezone
"""
if format_type.use_user_timezone and user_timezone:
tz = ZoneInfo(user_timezone)
logger.debug(f"Using user timezone: {user_timezone}")
else:
tz = ZoneInfo(format_type.timezone)
logger.debug(f"Using specified timezone: {format_type.timezone}")
return tz
def _format_datetime_iso8601(dt: datetime, include_microseconds: bool = False) -> str:
"""
Format a datetime object to ISO8601 string.
Args:
dt: The datetime object to format
include_microseconds: Whether to include microseconds in the output
Returns:
ISO8601 formatted string
"""
if include_microseconds:
return dt.isoformat()
else:
return dt.isoformat(timespec="seconds")
# BACKWARDS COMPATIBILITY NOTE:
# The timezone field is kept at the format level (not block level) for backwards compatibility.
# Existing graphs have timezone saved within format_type; moving it would break them.
#
# The use_user_timezone flag was added to allow using the user's profile timezone.
# Default is False to maintain backwards compatibility - existing graphs will continue
# using their specified timezone.
#
# KNOWN ISSUE: If a user switches between format types (strftime <-> iso8601),
# the timezone setting doesn't carry over. This is a UX issue but fixing it would
# require either:
# 1. Moving timezone to block level (breaking change, needs migration)
# 2. Complex state management to sync timezone across format types
#
# Future migration path: When we do a major version bump, consider moving timezone
# to the block Input level for better UX.
class TimeStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%H:%M:%S"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class TimeISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
include_microseconds: bool = False
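The format classes above form a Pydantic discriminated union: the `discriminator` literal decides which model validates the input, which is what makes the isinstance() dispatch in the run() methods below reliable. A minimal standalone sketch of the pattern (models here are illustrative, not the blocks' actual schemas):
from typing import Literal, Union
from pydantic import BaseModel, Field

class Strftime(BaseModel):
    discriminator: Literal["strftime"]
    format: str = "%H:%M:%S"

class ISO8601(BaseModel):
    discriminator: Literal["iso8601"]
    include_microseconds: bool = False

class FormatInput(BaseModel):
    # Pydantic picks the member whose Literal matches the incoming value.
    format_type: Union[Strftime, ISO8601] = Field(discriminator="discriminator")

parsed = FormatInput.model_validate({"format_type": {"discriminator": "iso8601"}})
assert isinstance(parsed.format_type, ISO8601)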
@@ -181,27 +115,25 @@ class GetCurrentTimeBlock(Block):
],
)
async def run(
self, input_data: Input, *, user_context: UserContext, **kwargs
) -> BlockOutput:
# Extract timezone from user_context (always present)
effective_timezone = user_context.timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
dt = datetime.now(tz=tz)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
if isinstance(input_data.format_type, TimeISO8601Format):
# ISO 8601 format for time only (extract time portion from full ISO datetime)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Get the full ISO format and extract just the time portion with timezone
full_iso = _format_datetime_iso8601(
dt, input_data.format_type.include_microseconds
)
if input_data.format_type.include_microseconds:
full_iso = dt.isoformat()
else:
full_iso = dt.isoformat(timespec="seconds")
# Extract time portion (everything after 'T')
current_time = full_iso.split("T")[1] if "T" in full_iso else full_iso
current_time = f"T{current_time}" # Add T prefix for ISO 8601 time format
else: # TimeStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_time = dt.strftime(input_data.format_type.format)
yield "time", current_time
@@ -209,15 +141,11 @@ class DateStrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class DateISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class GetCurrentDateBlock(Block):
@@ -289,23 +217,20 @@ class GetCurrentDateBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
try:
offset = int(input_data.offset)
except ValueError:
offset = 0
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
if isinstance(input_data.format_type, DateISO8601Format):
# ISO 8601 format for date only (YYYY-MM-DD)
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
# ISO 8601 date format is YYYY-MM-DD
date_str = current_date.date().isoformat()
else: # DateStrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
current_date = datetime.now(tz=tz) - timedelta(days=offset)
date_str = current_date.strftime(input_data.format_type.format)
yield "date", date_str
@@ -315,15 +240,11 @@ class StrftimeFormat(BaseModel):
discriminator: Literal["strftime"]
format: str = "%Y-%m-%d %H:%M:%S"
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
class ISO8601Format(BaseModel):
discriminator: Literal["iso8601"]
timezone: TimezoneLiteral = "UTC"
# When True, overrides timezone with user's profile timezone
use_user_timezone: bool = False
include_microseconds: bool = False
@@ -395,22 +316,20 @@ class GetCurrentDateAndTimeBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
dt = datetime.now(tz=tz)
if isinstance(input_data.format_type, ISO8601Format):
# ISO 8601 format with specified timezone (also RFC3339-compliant)
current_date_time = _format_datetime_iso8601(
dt, input_data.format_type.include_microseconds
)
else: # StrftimeFormat
current_date_time = dt.strftime(input_data.format_type.format)
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
# Format with or without microseconds
if input_data.format_type.include_microseconds:
current_date_time = dt.isoformat()
else:
current_date_time = dt.isoformat(timespec="seconds")
else: # StrftimeFormat
tz = ZoneInfo(input_data.format_type.timezone)
dt = datetime.now(tz=tz)
current_date_time = dt.strftime(input_data.format_type.format)
yield "date_time", current_date_time

View File

@@ -4,8 +4,6 @@ from logging import getLogger
from typing import Any, Dict, List, Union
from urllib.parse import urlencode
from pydantic import field_serializer
from backend.sdk import BaseModel, Credentials, Requests
logger = getLogger(__name__)
@@ -384,9 +382,8 @@ class CreatePostRequest(BaseModel):
# Advanced
metadata: List[Dict[str, Any]] | None = None
@field_serializer("date")
def serialize_date(self, value: datetime | None) -> str | None:
return value.isoformat() if value else None
class Config:
json_encoders = {datetime: lambda v: v.isoformat()}
class PostAuthor(BaseModel):

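The hunk above trades one Pydantic datetime-serialization mechanism for the other; both emit the same ISO 8601 JSON. A minimal sketch of the field_serializer variant (Event is a hypothetical model, not from this codebase):
from datetime import datetime
from pydantic import BaseModel, field_serializer

class Event(BaseModel):
    date: datetime | None = None
    @field_serializer("date")
    def serialize_date(self, value: datetime | None) -> str | None:
        return value.isoformat() if value else None

print(Event(date=datetime(2025, 8, 11)).model_dump_json())  # {"date":"2025-08-11T00:00:00"}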
View File

@@ -6,6 +6,8 @@ from dotenv import load_dotenv
from backend.util.logging import configure_logging
os.environ["ENABLE_AUTH"] = "false"
load_dotenv()
# NOTE: You can run tests with --log-cli-level=INFO to see the logs

View File

@@ -1,8 +1,5 @@
from backend.server.v2.library.model import LibraryAgentPreset
from .graph import NodeModel
from .integrations import Webhook # noqa: F401
# Resolve Webhook forward references
# Resolve Webhook <- NodeModel forward reference
NodeModel.model_rebuild()
LibraryAgentPreset.model_rebuild()

View File

@@ -1,31 +1,57 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import Optional
from typing import List, Optional
from autogpt_libs.api_key.keysmith import APIKeySmith
from autogpt_libs.api_key.key_manager import APIKeyManager
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.errors import PrismaError
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import APIKeyWhereUniqueInput
from pydantic import BaseModel, Field
from prisma.types import (
APIKeyCreateInput,
APIKeyUpdateInput,
APIKeyWhereInput,
APIKeyWhereUniqueInput,
)
from pydantic import BaseModel
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.data.db import BaseDbModel
logger = logging.getLogger(__name__)
keysmith = APIKeySmith()
class APIKeyInfo(BaseModel):
id: str
# Some basic exceptions
class APIKeyError(Exception):
"""Base exception for API key operations"""
pass
class APIKeyNotFoundError(APIKeyError):
"""Raised when an API key is not found"""
pass
class APIKeyPermissionError(APIKeyError):
"""Raised when there are permission issues with API key operations"""
pass
class APIKeyValidationError(APIKeyError):
"""Raised when API key validation fails"""
pass
class APIKey(BaseDbModel):
name: str
head: str = Field(
description=f"The first {APIKeySmith.HEAD_LENGTH} characters of the key"
)
tail: str = Field(
description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key"
)
status: APIKeyStatus
permissions: list[APIKeyPermission]
prefix: str
key: str
status: APIKeyStatus = APIKeyStatus.ACTIVE
permissions: List[APIKeyPermission]
postfix: str
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
@@ -34,211 +60,266 @@ class APIKeyInfo(BaseModel):
@staticmethod
def from_db(api_key: PrismaAPIKey):
return APIKeyInfo(
id=api_key.id,
name=api_key.name,
head=api_key.head,
tail=api_key.tail,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
try:
return APIKey(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
key=api_key.key,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKey from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
class APIKeyInfoWithHash(APIKeyInfo):
hash: str
salt: str | None = None # None for legacy keys
def match(self, plaintext_key: str) -> bool:
"""Returns whether the given key matches this API key object."""
return keysmith.verify_key(plaintext_key, self.hash, self.salt)
class APIKeyWithoutHash(BaseModel):
id: str
name: str
prefix: str
postfix: str
status: APIKeyStatus
permissions: List[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime]
revoked_at: Optional[datetime]
description: Optional[str]
user_id: str
@staticmethod
def from_db(api_key: PrismaAPIKey):
return APIKeyInfoWithHash(
**APIKeyInfo.from_db(api_key).model_dump(),
hash=api_key.hash,
salt=api_key.salt,
)
def without_hash(self) -> APIKeyInfo:
return APIKeyInfo(**self.model_dump(exclude={"hash", "salt"}))
try:
return APIKeyWithoutHash(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKeyWithoutHash from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
async def create_api_key(
async def generate_api_key(
name: str,
user_id: str,
permissions: list[APIKeyPermission],
permissions: List[APIKeyPermission],
description: Optional[str] = None,
) -> tuple[APIKeyInfo, str]:
) -> tuple[APIKeyWithoutHash, str]:
"""
Generate a new API key and store it in the database.
Returns the API key object (without hash) and the plain text key.
"""
generated_key = keysmith.generate_key()
try:
api_manager = APIKeyManager()
key = api_manager.generate_api_key()
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
)
api_key = await PrismaAPIKey.prisma().create(
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
prefix=key.prefix,
postfix=key.postfix,
key=key.hash,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
return APIKeyInfo.from_db(saved_key_obj), generated_key.key
api_key_without_hash = APIKeyWithoutHash.from_db(api_key)
return api_key_without_hash, key.raw
except PrismaError as e:
logger.error(f"Database error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
async def get_active_api_keys_by_head(head: str) -> list[APIKeyInfoWithHash]:
results = await PrismaAPIKey.prisma().find_many(
where={"head": head, "status": APIKeyStatus.ACTIVE}
)
return [APIKeyInfoWithHash.from_db(key) for key in results]
async def validate_api_key(plaintext_key: str) -> Optional[APIKeyInfo]:
async def validate_api_key(plain_text_key: str) -> Optional[APIKey]:
"""
Validate an API key and return the API key object if valid and active.
Validate an API key and return the API key object if valid.
"""
try:
if not plaintext_key.startswith(APIKeySmith.PREFIX):
if not plain_text_key.startswith(APIKeyManager.PREFIX):
logger.warning("Invalid API key format")
return None
head = plaintext_key[: APIKeySmith.HEAD_LENGTH]
potential_matches = await get_active_api_keys_by_head(head)
prefix = plain_text_key[: APIKeyManager.PREFIX_LENGTH]
api_manager = APIKeyManager()
matched_api_key = next(
(pm for pm in potential_matches if pm.match(plaintext_key)),
None,
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(prefix=prefix, status=(APIKeyStatus.ACTIVE))
)
if not matched_api_key:
# API key not found or invalid
if not api_key:
logger.warning(f"No active API key found with prefix {prefix}")
return None
# Migrate legacy keys to secure format on successful validation
if matched_api_key.salt is None:
matched_api_key = await _migrate_key_to_secure_hash(
plaintext_key, matched_api_key
is_valid = api_manager.verify_api_key(plain_text_key, api_key.key)
if not is_valid:
logger.warning("API key verification failed")
return None
return APIKey.from_db(api_key)
except Exception as e:
logger.error(f"Error validating API key: {str(e)}")
raise APIKeyValidationError(f"Failed to validate API key: {str(e)}")
async def revoke_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to revoke this API key."
)
return matched_api_key.without_hash()
except Exception as e:
logger.error(f"Error while validating API key: {e}")
raise RuntimeError("Failed to validate API key") from e
async def _migrate_key_to_secure_hash(
plaintext_key: str, key_obj: APIKeyInfoWithHash
) -> APIKeyInfoWithHash:
"""Replace the SHA256 hash of a legacy API key with a salted Scrypt hash."""
try:
new_hash, new_salt = keysmith.hash_key(plaintext_key)
await PrismaAPIKey.prisma().update(
where={"id": key_obj.id}, data={"hash": new_hash, "salt": new_salt}
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(
status=APIKeyStatus.REVOKED, revokedAt=datetime.now(timezone.utc)
),
)
logger.info(f"Migrated legacy API key #{key_obj.id} to secure format")
# Update the API key object with new values for return
key_obj.hash = new_hash
key_obj.salt = new_salt
except Exception as e:
logger.error(f"Failed to migrate legacy API key #{key_obj.id}: {e}")
return key_obj
async def revoke_api_key(key_id: str, user_id: str) -> APIKeyInfo:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to revoke this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={
"status": APIKeyStatus.REVOKED,
"revokedAt": datetime.now(timezone.utc),
},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to revoke.")
return APIKeyInfo.from_db(updated_api_key)
async def list_user_api_keys(user_id: str) -> list[APIKeyInfo]:
api_keys = await PrismaAPIKey.prisma().find_many(
where={"userId": user_id}, order={"createdAt": "desc"}
)
return [APIKeyInfo.from_db(key) for key in api_keys]
async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo:
selector: APIKeyWhereUniqueInput = {"id": key_id}
api_key = await PrismaAPIKey.prisma().find_unique(where=selector)
if not api_key:
raise NotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to suspend this API key.")
updated_api_key = await PrismaAPIKey.prisma().update(
where=selector, data={"status": APIKeyStatus.SUSPENDED}
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to suspend.")
return APIKeyInfo.from_db(updated_api_key)
def has_permission(api_key: APIKeyInfo, required_permission: APIKeyPermission) -> bool:
return required_permission in api_key.permissions
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]:
api_key = await PrismaAPIKey.prisma().find_first(
where={"id": key_id, "userId": user_id}
)
if not api_key:
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
return APIKeyInfo.from_db(api_key)
async def list_user_api_keys(user_id: str) -> List[APIKeyWithoutHash]:
try:
where_clause: APIKeyWhereInput = {"userId": user_id}
api_keys = await PrismaAPIKey.prisma().find_many(
where=where_clause, order={"createdAt": "desc"}
)
return [APIKeyWithoutHash.from_db(key) for key in api_keys]
except PrismaError as e:
logger.error(f"Database error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
async def suspend_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to suspend this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(status=APIKeyStatus.SUSPENDED),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
def has_permission(api_key: APIKey, required_permission: APIKeyPermission) -> bool:
try:
return required_permission in api_key.permissions
except Exception as e:
logger.error(f"Error checking API key permissions: {str(e)}")
return False
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(id=key_id, userId=user_id)
)
if not api_key:
return None
return APIKeyWithoutHash.from_db(api_key)
except PrismaError as e:
logger.error(f"Database error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
async def update_api_key_permissions(
key_id: str, user_id: str, permissions: list[APIKeyPermission]
) -> APIKeyInfo:
key_id: str, user_id: str, permissions: List[APIKeyPermission]
) -> Optional[APIKeyWithoutHash]:
"""
Update the permissions of an API key.
"""
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if api_key is None:
raise NotFoundError("No such API key found.")
if api_key is None:
raise APIKeyNotFoundError("No such API key found.")
if api_key.userId != user_id:
raise NotAuthorizedError("You do not have permission to update this API key.")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to update this API key."
)
updated_api_key = await PrismaAPIKey.prisma().update(
where={"id": key_id},
data={"permissions": permissions},
)
if not updated_api_key:
raise NotFoundError(f"API key #{key_id} vanished while trying to update.")
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(permissions=permissions),
)
return APIKeyInfo.from_db(updated_api_key)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
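For orientation, a self-contained sketch of the lookup-by-head, salted-Scrypt verification pattern this module relies on. The prefix, lengths, and Scrypt parameters below are illustrative assumptions, not APIKeySmith's actual values:
import hashlib
import secrets

PREFIX, HEAD_LENGTH = "agpt_", 12  # assumed, for illustration only

def generate_key() -> tuple[str, str, bytes, bytes]:
    """Return (plaintext key, head for DB lookup, hash, salt)."""
    key = PREFIX + secrets.token_urlsafe(32)
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(key.encode(), salt=salt, n=2**14, r=8, p=1)
    return key, key[:HEAD_LENGTH], digest, salt

def verify_key(candidate: str, digest: bytes, salt: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    recomputed = hashlib.scrypt(candidate.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(recomputed, digest)

key, head, digest, salt = generate_key()
assert verify_key(key, digest, salt)  # candidates are fetched by `head`, then matched
assert not verify_key(PREFIX + "wrong", digest, salt)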

View File

@@ -8,7 +8,6 @@ from enum import Enum
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Generic,
Optional,
@@ -45,10 +44,9 @@ if TYPE_CHECKING:
app_config = Config()
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
BlockOutputEntry = tuple[str, Any] # Output data should be a tuple of (name, value).
BlockOutput = AsyncGen[BlockOutputEntry, None] # Output: 1 output pin produces n data.
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]
BlockOutput = AsyncGen[BlockData, None] # Output: 1 output pin produces n data.
CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict.
@@ -91,45 +89,6 @@ class BlockCategory(Enum):
return {"category": self.name, "description": self.value}
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
class BlockInfo(BaseModel):
id: str
name: str
inputSchema: dict[str, Any]
outputSchema: dict[str, Any]
costs: list[BlockCost]
description: str
categories: list[dict[str, str]]
contributors: list[dict[str, Any]]
staticOutput: bool
uiType: str
class BlockSchema(BaseModel):
cached_jsonschema: ClassVar[dict[str, Any]]
@@ -347,7 +306,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_schema: Type[BlockSchemaInputType] = EmptySchema,
output_schema: Type[BlockSchemaOutputType] = EmptySchema,
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockTestOutput | list[BlockTestOutput] | None = None,
test_output: BlockData | list[BlockData] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials | dict[str, Credentials]] = None,
disabled: bool = False,
@@ -493,24 +452,6 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
"uiType": self.block_type.value,
}
def get_info(self) -> BlockInfo:
from backend.data.credit import get_block_cost
return BlockInfo(
id=self.id,
name=self.name,
inputSchema=self.input_schema.jsonschema(),
outputSchema=self.output_schema.jsonschema(),
costs=get_block_cost(self),
description=self.description,
categories=[category.dict() for category in self.categories],
contributors=[
contributor.model_dump() for contributor in self.contributors
],
staticOutput=self.static_output,
uiType=self.block_type.value,
)
async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
raise ValueError(

View File

@@ -5,12 +5,6 @@ from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock
from backend.blocks.apollo.organization import SearchOrganizationsBlock
from backend.blocks.apollo.people import SearchPeopleBlock
from backend.blocks.apollo.person import GetPersonDetailBlock
from backend.blocks.enrichlayer.linkedin import (
GetLinkedinProfileBlock,
GetLinkedinProfilePictureBlock,
LinkedinPersonLookupBlock,
LinkedinRoleLookupBlock,
)
from backend.blocks.flux_kontext import AIImageEditorBlock, FluxKontextModelName
from backend.blocks.ideogram import IdeogramModelBlock
from backend.blocks.jina.embeddings import JinaEmbeddingBlock
@@ -29,13 +23,13 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.data.block import Block, BlockCost, BlockCostType
from backend.data.block import Block
from backend.data.cost import BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,
apollo_credentials,
did_credentials,
enrichlayer_credentials,
groq_credentials,
ideogram_credentials,
jina_credentials,
@@ -45,7 +39,6 @@ from backend.integrations.credentials_store import (
replicate_credentials,
revid_credentials,
unreal_credentials,
v0_credentials,
)
# =============== Configure the cost for each LLM Model call =============== #
@@ -122,10 +115,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
LlmModel.GEMINI_2_0_FLASH_LITE: 1,
LlmModel.DEEPSEEK_R1_0528: 1,
# v0 by Vercel models
LlmModel.V0_1_5_MD: 1,
LlmModel.V0_1_5_LG: 2,
LlmModel.V0_1_0_MD: 1,
}
for model in LlmModel:
@@ -215,23 +204,6 @@ LLM_COST = (
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "llama_api"
]
# v0 by Vercel Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": v0_credentials.id,
"provider": v0_credentials.provider,
"type": v0_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "v0"
]
# AI/ML Api Models
+ [
BlockCost(
@@ -306,18 +278,7 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
"type": ideogram_credentials.type,
}
},
),
BlockCost(
cost_amount=18,
cost_filter={
"ideogram_model_name": "V_3",
"credentials": {
"id": ideogram_credentials.id,
"provider": ideogram_credentials.provider,
"type": ideogram_credentials.type,
},
},
),
)
],
AIShortformVideoCreatorBlock: [
BlockCost(
@@ -415,54 +376,6 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
},
)
],
GetLinkedinProfileBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinPersonLookupBlock: [
BlockCost(
cost_amount=2,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
LinkedinRoleLookupBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
GetLinkedinProfilePictureBlock: [
BlockCost(
cost_amount=3,
cost_filter={
"credentials": {
"id": enrichlayer_credentials.id,
"provider": enrichlayer_credentials.provider,
"type": enrichlayer_credentials.type,
}
},
)
],
SmartDecisionMakerBlock: LLM_COST,
SearchOrganizationsBlock: [
BlockCost(

View File

@@ -0,0 +1,32 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from backend.data.block import BlockInput
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
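A brief usage sketch of the model above (the filter values are illustrative):
# Charge 5 credits per run, but only when the block's input matches the filter.
premium_cost = BlockCost(cost_amount=5, cost_filter={"model": "gpt-4-turbo"})
assert premium_cost.cost_type == BlockCostType.RUN  # RUN is the default cost type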

View File

@@ -2,7 +2,7 @@ import logging
from abc import ABC, abstractmethod
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, cast
from typing import Any, cast
import stripe
from prisma import Json
@@ -23,6 +23,7 @@ from pydantic import BaseModel
from backend.data import db
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.data.model import (
AutoTopUpConfig,
RefundRequest,
@@ -33,16 +34,13 @@ from backend.data.model import (
from backend.data.notifications import NotificationEventModel, RefundRequestData
from backend.data.user import get_user_by_id, get_user_email_by_id
from backend.notifications.notifications import queue_notification_async
from backend.server.model import Pagination
from backend.server.v2.admin.model import UserHistoryResponse
from backend.util.exceptions import InsufficientBalanceError
from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.data.block import Block, BlockCost
settings = Settings()
stripe.api_key = settings.secrets.stripe_api_key
logger = logging.getLogger(__name__)
@@ -288,17 +286,11 @@ class UserCreditBase(ABC):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
async with db.locked_transaction(f"usr_trx_{user_id}"):
transaction = await CreditTransaction.prisma().find_first_or_raise(
where={"transactionKey": transaction_key, "userId": user_id}
)
if transaction.isActive:
return
user_balance, _ = await self._get_credits(user_id)
await CreditTransaction.prisma().update(
where={
@@ -999,14 +991,10 @@ def get_user_credit_model() -> UserCreditBase:
return UserCredit()
def get_block_costs() -> dict[str, list["BlockCost"]]:
def get_block_costs() -> dict[str, list[BlockCost]]:
return {block().id: costs for block, costs in BLOCK_COSTS.items()}
def get_block_cost(block: "Block") -> list["BlockCost"]:
return BLOCK_COSTS.get(block.__class__, [])
async def get_stripe_customer_id(user_id: str) -> str:
user = await get_user_by_id(user_id)

View File

@@ -7,7 +7,7 @@ from prisma.models import CreditTransaction
from backend.blocks.llm import AITextGeneratorBlock
from backend.data.block import get_block
from backend.data.credit import BetaUserCredit, UsageTransactionMetadata
from backend.data.execution import NodeExecutionEntry, UserContext
from backend.data.execution import NodeExecutionEntry
from backend.data.user import DEFAULT_USER_ID
from backend.executor.utils import block_usage_cost
from backend.integrations.credentials_store import openai_credentials
@@ -75,7 +75,6 @@ async def test_block_credit_usage(server: SpinTestServer):
"type": openai_credentials.type,
},
},
user_context=UserContext(timezone="UTC"),
),
)
assert spending_amount_1 > 0
@@ -89,7 +88,6 @@ async def test_block_credit_usage(server: SpinTestServer):
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"},
user_context=UserContext(timezone="UTC"),
),
)
assert spending_amount_2 == 0

View File

@@ -11,14 +11,11 @@ from typing import (
Generator,
Generic,
Literal,
Mapping,
Optional,
TypeVar,
cast,
overload,
)
from prisma import Json
from prisma.enums import AgentExecutionStatus
from prisma.models import (
AgentGraphExecution,
@@ -27,6 +24,7 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -35,13 +33,12 @@ from prisma.types import (
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
)
from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError
from pydantic import BaseModel, ConfigDict, JsonValue
from pydantic.fields import Field
from backend.server.v2.store.exceptions import DatabaseError
from backend.util import type as type_utils
from backend.util.json import SafeJson
from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Config
from backend.util.truncate import truncate
@@ -62,7 +59,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
graph_execution_include,
)
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
from .model import GraphExecutionStats
T = TypeVar("T")
@@ -89,19 +86,13 @@ class BlockErrorStats(BaseModel):
ExecutionStatus = AgentExecutionStatus
NodeInputMask = Mapping[str, JsonValue]
NodesInputMasks = Mapping[str, NodeInputMask]
class GraphExecutionMeta(BaseDbModel):
id: str # type: ignore # Override base class to make this required
user_id: str
graph_id: str
graph_version: int
inputs: Optional[BlockInput] # no default -> required in the OpenAPI spec
credential_inputs: Optional[dict[str, CredentialsMetaInput]]
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
preset_id: Optional[str] = None
status: ExecutionStatus
started_at: datetime
ended_at: datetime
@@ -186,18 +177,6 @@ class GraphExecutionMeta(BaseDbModel):
user_id=_graph_exec.userId,
graph_id=_graph_exec.agentGraphId,
graph_version=_graph_exec.agentGraphVersion,
inputs=cast(BlockInput | None, _graph_exec.inputs),
credential_inputs=(
{
name: CredentialsMetaInput.model_validate(cmi)
for name, cmi in cast(dict, _graph_exec.credentialInputs).items()
}
if _graph_exec.credentialInputs
else None
),
nodes_input_masks=cast(
dict[str, BlockInput] | None, _graph_exec.nodesInputMasks
),
preset_id=_graph_exec.agentPresetId,
status=ExecutionStatus(_graph_exec.executionStatus),
started_at=start_time,
@@ -225,7 +204,7 @@ class GraphExecutionMeta(BaseDbModel):
class GraphExecution(GraphExecutionMeta):
inputs: BlockInput # type: ignore - incompatible override is intentional
inputs: BlockInput
outputs: CompletedBlockOutput
@staticmethod
@@ -245,18 +224,15 @@ class GraphExecution(GraphExecutionMeta):
)
inputs = {
**(
graph_exec.inputs
or {
# fallback: extract inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
}
),
**{
# inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
},
**{
# input from webhook-triggered block
"payload": exec.input_data["payload"]
@@ -274,13 +250,14 @@ class GraphExecution(GraphExecutionMeta):
if (
block := get_block(exec.block_id)
) and block.block_type == BlockType.OUTPUT:
outputs[exec.input_data["name"]].append(exec.input_data.get("value"))
outputs[exec.input_data["name"]].append(
exec.input_data.get("value", None)
)
return GraphExecution(
**{
field_name: getattr(graph_exec, field_name)
for field_name in GraphExecutionMeta.model_fields
if field_name != "inputs"
},
inputs=inputs,
outputs=outputs,
@@ -313,18 +290,13 @@ class GraphExecutionWithNodes(GraphExecution):
node_executions=node_executions,
)
def to_graph_execution_entry(
self,
user_context: "UserContext",
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
):
def to_graph_execution_entry(self):
return GraphExecutionEntry(
user_id=self.user_id,
graph_id=self.graph_id,
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks=compiled_nodes_input_masks,
user_context=user_context,
nodes_input_masks={}, # FIXME: store credentials on AgentGraphExecution
)
@@ -346,30 +318,18 @@ class NodeExecutionResult(BaseModel):
@staticmethod
def from_db(_node_exec: AgentNodeExecution, user_id: Optional[str] = None):
try:
stats = NodeExecutionStats.model_validate(_node_exec.stats or {})
except (ValueError, ValidationError):
stats = NodeExecutionStats()
if stats.cleared_inputs:
input_data: BlockInput = defaultdict()
for name, messages in stats.cleared_inputs.items():
input_data[name] = messages[-1] if messages else ""
elif _node_exec.executionData:
if _node_exec.executionData:
# An execution that has been queued will have its input data persisted.
input_data = type_utils.convert(_node_exec.executionData, dict[str, Any])
else:
# For an incomplete execution, executionData will not yet be available.
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, JsonValue)
input_data[data.name] = type_utils.convert(data.data, type[Any])
output_data: CompletedBlockOutput = defaultdict(list)
if stats.cleared_outputs:
for name, messages in stats.cleared_outputs.items():
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, JsonValue))
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
@@ -396,9 +356,7 @@ class NodeExecutionResult(BaseModel):
end_time=_node_exec.endedTime,
)
def to_node_execution_entry(
self, user_context: "UserContext"
) -> "NodeExecutionEntry":
def to_node_execution_entry(self) -> "NodeExecutionEntry":
return NodeExecutionEntry(
user_id=self.user_id,
graph_exec_id=self.graph_exec_id,
@@ -407,7 +365,6 @@ class NodeExecutionResult(BaseModel):
node_id=self.node_id,
block_id=self.block_id,
inputs=self.input_data,
user_context=user_context,
)
@@ -415,13 +372,13 @@ class NodeExecutionResult(BaseModel):
async def get_graph_executions(
graph_exec_id: Optional[str] = None,
graph_id: Optional[str] = None,
user_id: Optional[str] = None,
statuses: Optional[list[ExecutionStatus]] = None,
created_time_gte: Optional[datetime] = None,
created_time_lte: Optional[datetime] = None,
limit: Optional[int] = None,
graph_exec_id: str | None = None,
graph_id: str | None = None,
user_id: str | None = None,
statuses: list[ExecutionStatus] | None = None,
created_time_gte: datetime | None = None,
created_time_lte: datetime | None = None,
limit: int | None = None,
) -> list[GraphExecutionMeta]:
"""⚠️ **Optional `user_id` check**: MUST USE check in user-facing endpoints."""
where_filter: AgentGraphExecutionWhereInput = {
@@ -449,60 +406,6 @@ async def get_graph_executions(
return [GraphExecutionMeta.from_db(execution) for execution in executions]
class GraphExecutionsPaginated(BaseModel):
"""Response schema for paginated graph executions."""
executions: list[GraphExecutionMeta]
pagination: Pagination
async def get_graph_executions_paginated(
user_id: str,
graph_id: Optional[str] = None,
page: int = 1,
page_size: int = 25,
statuses: Optional[list[ExecutionStatus]] = None,
created_time_gte: Optional[datetime] = None,
created_time_lte: Optional[datetime] = None,
) -> GraphExecutionsPaginated:
"""Get paginated graph executions for a specific graph."""
where_filter: AgentGraphExecutionWhereInput = {
"isDeleted": False,
"userId": user_id,
}
if graph_id:
where_filter["agentGraphId"] = graph_id
if created_time_gte or created_time_lte:
where_filter["createdAt"] = {
"gte": created_time_gte or datetime.min.replace(tzinfo=timezone.utc),
"lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc),
}
if statuses:
where_filter["OR"] = [{"executionStatus": status} for status in statuses]
total_count = await AgentGraphExecution.prisma().count(where=where_filter)
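# Ceiling division: e.g. 51 items with page_size 25 -> 3 pages.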
total_pages = (total_count + page_size - 1) // page_size
offset = (page - 1) * page_size
executions = await AgentGraphExecution.prisma().find_many(
where=where_filter,
order={"createdAt": "desc"},
take=page_size,
skip=offset,
)
return GraphExecutionsPaginated(
executions=[GraphExecutionMeta.from_db(execution) for execution in executions],
pagination=Pagination(
total_items=total_count,
total_pages=total_pages,
current_page=page,
page_size=page_size,
),
)
async def get_graph_execution_meta(
user_id: str, execution_id: str
) -> GraphExecutionMeta | None:
@@ -564,12 +467,9 @@ async def get_graph_execution(
async def create_graph_execution(
graph_id: str,
graph_version: int,
starting_nodes_input: list[tuple[str, BlockInput]], # list[(node_id, BlockInput)]
inputs: Mapping[str, JsonValue],
starting_nodes_input: list[tuple[str, BlockInput]],
user_id: str,
preset_id: Optional[str] = None,
credential_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
preset_id: str | None = None,
) -> GraphExecutionWithNodes:
"""
Create a new AgentGraphExecution record.
@@ -577,18 +477,11 @@ async def create_graph_execution(
The id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.QUEUED,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
data=AgentGraphExecutionCreateInput(
agentGraphId=graph_id,
agentGraphVersion=graph_version,
executionStatus=ExecutionStatus.QUEUED,
NodeExecutions={
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
@@ -604,9 +497,9 @@ async def create_graph_execution(
for node_id, node_input in starting_nodes_input
]
},
"userId": user_id,
"agentPresetId": preset_id,
},
userId=user_id,
agentPresetId=preset_id,
),
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -617,7 +510,7 @@ async def upsert_execution_input(
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: JsonValue,
input_data: Any,
node_exec_id: str | None = None,
) -> tuple[str, BlockInput]:
"""
@@ -666,7 +559,7 @@ async def upsert_execution_input(
)
return existing_execution.id, {
**{
input_data.name: type_utils.convert(input_data.data, JsonValue)
input_data.name: type_utils.convert(input_data.data, type[Any])
for input_data in existing_execution.Input or []
},
input_name: input_data,
@@ -912,19 +805,12 @@ async def get_latest_node_execution(
# ----------------- Execution Infrastructure ----------------- #
class UserContext(BaseModel):
"""Generic user context for graph execution containing user-specific settings."""
timezone: str
class GraphExecutionEntry(BaseModel):
user_id: str
graph_exec_id: str
graph_id: str
graph_version: int
nodes_input_masks: Optional[NodesInputMasks] = None
user_context: UserContext
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None
class NodeExecutionEntry(BaseModel):
@@ -935,7 +821,6 @@ class NodeExecutionEntry(BaseModel):
node_id: str
block_id: str
inputs: BlockInput
user_context: UserContext
class ExecutionQueue(Generic[T]):

View File

@@ -1,109 +0,0 @@
import logging
from collections import defaultdict
from datetime import datetime

from prisma.enums import AgentExecutionStatus

from backend.data.execution import get_graph_executions
from backend.data.graph import get_graph_metadata
from backend.data.model import UserExecutionSummaryStats
from backend.server.v2.store.exceptions import DatabaseError
from backend.util.logging import TruncatedLogger

logger = TruncatedLogger(logging.getLogger(__name__), prefix="[SummaryData]")


async def get_user_execution_summary_data(
    user_id: str, start_time: datetime, end_time: datetime
) -> UserExecutionSummaryStats:
    """Gather all summary data for a user in a time range.

    This function fetches graph executions once and aggregates all required
    statistics in a single pass for efficiency.
    """
    try:
        # Fetch graph executions once
        executions = await get_graph_executions(
            user_id=user_id,
            created_time_gte=start_time,
            created_time_lte=end_time,
        )

        # Initialize aggregation variables
        total_credits_used = 0.0
        total_executions = len(executions)
        successful_runs = 0
        failed_runs = 0
        terminated_runs = 0
        execution_times = []
        agent_usage = defaultdict(int)
        cost_by_graph_id = defaultdict(float)

        # Single pass through executions to aggregate all stats
        for execution in executions:
            # Count execution statuses (including TERMINATED as failed)
            if execution.status == AgentExecutionStatus.COMPLETED:
                successful_runs += 1
            elif execution.status == AgentExecutionStatus.FAILED:
                failed_runs += 1
            elif execution.status == AgentExecutionStatus.TERMINATED:
                terminated_runs += 1

            # Aggregate costs from stats
            if execution.stats and hasattr(execution.stats, "cost"):
                cost_in_dollars = execution.stats.cost / 100
                total_credits_used += cost_in_dollars
                cost_by_graph_id[execution.graph_id] += cost_in_dollars

            # Collect execution times
            if execution.stats and hasattr(execution.stats, "duration"):
                execution_times.append(execution.stats.duration)

            # Count agent usage
            agent_usage[execution.graph_id] += 1

        # Calculate derived stats
        total_execution_time = sum(execution_times)
        average_execution_time = (
            total_execution_time / len(execution_times) if execution_times else 0
        )

        # Find most used agent
        most_used_agent = "No agents used"
        if agent_usage:
            most_used_agent_id = max(agent_usage, key=lambda k: agent_usage[k])
            try:
                graph_meta = await get_graph_metadata(graph_id=most_used_agent_id)
                most_used_agent = (
                    graph_meta.name if graph_meta else f"Agent {most_used_agent_id[:8]}"
                )
            except Exception:
                logger.warning(f"Could not get metadata for graph {most_used_agent_id}")
                most_used_agent = f"Agent {most_used_agent_id[:8]}"

        # Convert graph_ids to agent names for cost breakdown
        cost_breakdown = {}
        for graph_id, cost in cost_by_graph_id.items():
            try:
                graph_meta = await get_graph_metadata(graph_id=graph_id)
                agent_name = graph_meta.name if graph_meta else f"Agent {graph_id[:8]}"
            except Exception:
                logger.warning(f"Could not get metadata for graph {graph_id}")
                agent_name = f"Agent {graph_id[:8]}"
            cost_breakdown[agent_name] = cost

        # Build the summary stats object (include terminated runs as failed)
        return UserExecutionSummaryStats(
            total_credits_used=total_credits_used,
            total_executions=total_executions,
            successful_runs=successful_runs,
            failed_runs=failed_runs + terminated_runs,
            most_used_agent=most_used_agent,
            total_execution_time=total_execution_time,
            average_execution_time=average_execution_time,
            cost_breakdown=cost_breakdown,
        )
    except Exception as e:
        logger.error(f"Failed to get user summary data: {e}")
        raise DatabaseError(f"Failed to get user summary data: {e}") from e
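
For orientation, a hedged sketch of how this now-deleted helper would have been driven by a caller, assuming the module above is importable; the weekly window and the print are illustrative, not code from this diff:

```python
import asyncio
from datetime import datetime, timedelta, timezone

async def print_weekly_summary(user_id: str) -> None:
    # Assumed caller: summarize the trailing seven days for one user.
    end_time = datetime.now(timezone.utc)
    start_time = end_time - timedelta(days=7)
    stats = await get_user_execution_summary_data(user_id, start_time, end_time)
    print(
        f"{stats.total_executions} runs, "
        f"{stats.successful_runs} succeeded, "
        f"{stats.failed_runs} failed (incl. terminated)"
    )

# asyncio.run(print_weekly_summary("user-123"))
```
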

View File

@@ -12,7 +12,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import BaseModel, Field, create_model
from pydantic import Field, JsonValue, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -34,7 +34,6 @@ from .db import BaseDbModel, query_raw_with_schema, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
if TYPE_CHECKING:
from .execution import NodesInputMasks
from .integrations import Webhook
logger = logging.getLogger(__name__)
@@ -160,7 +159,6 @@ class BaseGraph(BaseDbModel):
is_active: bool = True
name: str
description: str
recommended_schedule_cron: str | None = None
nodes: list[Node] = []
links: list[Link] = []
forked_from_id: str | None = None
@@ -207,35 +205,6 @@ class BaseGraph(BaseDbModel):
None,
)
@computed_field
@property
def trigger_setup_info(self) -> "GraphTriggerInfo | None":
if not (
self.webhook_input_node
and (trigger_block := self.webhook_input_node.block).webhook_config
):
return None
return GraphTriggerInfo(
provider=trigger_block.webhook_config.provider,
config_schema={
**(json_schema := trigger_block.input_schema.jsonschema()),
"properties": {
pn: sub_schema
for pn, sub_schema in json_schema["properties"].items()
if not is_credentials_field_name(pn)
},
"required": [
pn
for pn in json_schema.get("required", [])
if not is_credentials_field_name(pn)
],
},
credentials_input_name=next(
iter(trigger_block.input_schema.get_credentials_fields()), None
),
)
@staticmethod
def _generate_schema(
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
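
The removed `trigger_setup_info` property above strips credentials fields from the trigger block's JSON schema before exposing it. A standalone sketch of that filtering step, with a made-up schema and an assumed stand-in for `is_credentials_field_name`:

```python
def is_credentials_field_name(name: str) -> bool:
    # Assumed predicate; the real helper lives elsewhere in the codebase.
    return name == "credentials" or name.endswith("_credentials")

json_schema = {
    "properties": {
        "credentials": {"type": "object"},
        "repo": {"type": "string"},
    },
    "required": ["credentials", "repo"],
}

config_schema = {
    **json_schema,
    "properties": {
        pn: sub_schema
        for pn, sub_schema in json_schema["properties"].items()
        if not is_credentials_field_name(pn)
    },
    "required": [
        pn
        for pn in json_schema.get("required", [])
        if not is_credentials_field_name(pn)
    ],
}

# Only the non-credentials input survives:
assert config_schema["properties"] == {"repo": {"type": "string"}}
assert config_schema["required"] == ["repo"]
```
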
@@ -269,14 +238,6 @@ class BaseGraph(BaseDbModel):
}
class GraphTriggerInfo(BaseModel):
provider: ProviderName
config_schema: dict[str, Any] = Field(
description="Input schema for the trigger block"
)
credentials_input_name: Optional[str]
class Graph(BaseGraph):
sub_graphs: list[BaseGraph] = [] # Flattened sub-graphs
@@ -453,7 +414,7 @@ class GraphModel(Graph):
def validate_graph(
self,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
):
"""
Validate graph structure and raise `ValueError` on issues.
@@ -467,7 +428,7 @@ class GraphModel(Graph):
def _validate_graph(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> None:
errors = GraphModel._validate_graph_get_errors(
graph, for_run, nodes_input_masks
@@ -481,7 +442,7 @@ class GraphModel(Graph):
def validate_graph_get_errors(
self,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -503,7 +464,7 @@ class GraphModel(Graph):
def _validate_graph_get_errors(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
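
All four validator signatures in this file swap the `NodesInputMasks` alias for its inlined spelling; the two forms are interchangeable, assuming the alias is defined as in `execution.py`. A short sketch of what a masks value looks like (the IDs and input names are placeholders):

```python
from pydantic import JsonValue

# Equivalent spellings: the alias and its inlined form dict[str, dict[str, JsonValue]].
NodesInputMasks = dict[str, dict[str, JsonValue]]

# Per-node overrides keyed by input name.
masks: NodesInputMasks = {
    "node-abc": {"api_key_source": "vault", "max_items": 10},
}
```
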
@@ -697,7 +658,6 @@ class GraphModel(Graph):
is_active=graph.isActive,
name=graph.name or "",
description=graph.description or "",
recommended_schedule_cron=graph.recommendedScheduleCron,
nodes=[NodeModel.from_db(node, for_export) for node in graph.Nodes or []],
links=list(
{
@@ -1085,7 +1045,6 @@ async def __create_graph(tx, graph: Graph, user_id: str):
version=graph.version,
name=graph.name,
description=graph.description,
recommendedScheduleCron=graph.recommended_schedule_cron,
isActive=graph.is_active,
userId=user_id,
forkedFromId=graph.forked_from_id,

View File

@@ -2,6 +2,7 @@ import json
from typing import Any
from uuid import UUID
import autogpt_libs.auth.models
import fastapi.exceptions
import pytest
from pytest_snapshot.plugin import Snapshot
@@ -316,7 +317,12 @@ async def test_access_store_listing_graph(server: SpinTestServer):
is_approved=True,
comments="Test comments",
),
user_id=admin_user.id,
autogpt_libs.auth.models.User(
user_id=admin_user.id,
role="admin",
email=admin_user.email,
phone_number="1234567890",
),
)
# Now we check the graph can be accessed by a user that does not own the graph

View File

@@ -59,15 +59,9 @@ def graph_execution_include(
}
AGENT_PRESET_INCLUDE: prisma.types.AgentPresetInclude = {
"InputPresets": True,
"Webhook": True,
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE},
"AgentPresets": {"include": AGENT_PRESET_INCLUDE},
"AgentPresets": {"include": {"InputPresets": True}},
}
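
After this change, webhook queries still eagerly load nodes and presets, just without the preset's webhook join. A hedged sketch of how such an include dict is passed to the Prisma Python client; the model accessor and query shape follow that client's conventions and are assumptions, not code from this diff:

```python
import prisma

async def get_webhook_with_relations(webhook_id: str):
    # Only the preset include from this diff is reproduced here; the
    # AgentNodes include is omitted because its definition isn't shown.
    client = prisma.Prisma()
    await client.connect()
    try:
        return await client.integrationwebhook.find_unique(
            where={"id": webhook_id},
            include={"AgentPresets": {"include": {"InputPresets": True}}},
        )
    finally:
        await client.disconnect()
```
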

Some files were not shown because too many files have changed in this diff.