Compare commits

..

5 Commits

Author SHA1 Message Date
Otto
47d2e1aca1 feat(chat): add input schema to discovery tools + validate unknown fields
- Add 'inputs' field to AgentInfo model for find_agent/find_library_agent
- Fetch input schemas from graphs during agent search
- Reject unknown input fields in run_agent with a helpful error message (sketched below)
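
A minimal sketch of the validation being added (condensed from the `RunAgentTool` diff further down this page; illustrative, not the exact code):

```python
def check_unknown_fields(provided_inputs: dict, input_schema: dict) -> str | None:
    """Return a helpful error message if unknown fields were provided, else None."""
    valid_fields = set(input_schema.get("properties", {}).keys())
    unknown_fields = set(provided_inputs) - valid_fields
    if not unknown_fields:
        return None
    valid_list = ", ".join(sorted(valid_fields)) if valid_fields else "none"
    return (
        f"Unknown input field(s) provided: {', '.join(sorted(unknown_fields))}. "
        f"Valid fields: {valid_list}. Please check the field names and try again."
    )
```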

Closes OPEN-2980
2026-01-30 23:16:42 +00:00
Reinier van der Leer
350ad3591b fix(backend/chat): Filter credentials for graph execution by scopes (#11881)
[SECRT-1842: run_agent tool does not correctly use credentials - agents fail with insufficient auth scopes](https://linear.app/autogpt/issue/SECRT-1842)

### Changes 🏗️

- Include scopes in the credentials filter in `backend.api.features.chat.tools.utils.match_user_credentials_to_graph` (condensed sketch below)
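
In essence, OAuth2 credentials must now carry a superset of the scopes the graph's blocks require (condensed from the `_credential_has_required_scopes` helper in the `utils.py` diff below):

```python
def has_required_scopes(credential, requirements) -> bool:
    # Only OAuth2 credentials carry scopes; other types pass unchecked
    if credential.type != "oauth2":
        return True
    # No required scopes means any credential of the right provider/type matches
    if not requirements.required_scopes:
        return True
    return set(credential.scopes).issuperset(requirements.required_scopes)
```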

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI must pass
- It's currently broken and this is a simple change, so we'll test it in the dev
deployment
2026-01-30 11:01:51 +00:00
Bently
de0ec3d388 chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of
Anthropic's API retirement, with comprehensive migration and defensive
error handling.

## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`)
on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from
the platform and migrates existing users to prevent service
interruptions.

## Changes

### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to
`CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()`
for deprecated model values (shown condensed below)
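
The defensive handling turns an opaque `KeyError` into an actionable error (excerpted from the `model.py` diff below):

```python
try:
    provider = self.discriminator_mapping[discriminator_value]
except KeyError:
    raise ValueError(
        f"Model '{discriminator_value}' is not supported. "
        "It may have been deprecated. Please update your agent configuration."
    )
```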

### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove
deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to
remove deprecated model and add Claude 4.5 Sonnet

## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference
this model in their provider mappings. These are auto-generated and
should be regenerated separately.

## Testing
- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives a helpful error message instead of a
KeyError (spot-check query sketched below)
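
A hypothetical spot-check (not part of the PR) to confirm the migration left no stale references; table and column names are taken from the migration SQL below:

```sql
-- Both queries should return 0 after the migration has run
SELECT COUNT(*) FROM "AgentNode"
WHERE "constantInput"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';

SELECT COUNT(*) FROM "AgentNodeExecutionInputOutput"
WHERE "agentPresetId" IS NOT NULL
  AND "data"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';
```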
2026-01-30 08:40:55 +00:00
Otto
7cb1e588b0 fix(frontend): Refocus ChatInput after voice transcription completes (#11893)
## Summary
Refocuses the chat input textarea after voice transcription finishes,
allowing users to immediately use `spacebar+enter` to record and send
their prompt.

## Changes
- Added `inputId` parameter to `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow (sketch below)
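
Roughly what the refocus looks like (a sketch; only the `inputId` parameter comes from the PR — the element lookup and timing are assumptions):

```typescript
// Inside useVoiceRecording, after the transcription result has been applied
function refocusInput(inputId: string) {
  // Defer one frame so focus lands after React re-enables the textarea
  requestAnimationFrame(() => {
    document.getElementById(inputId)?.focus();
  });
}
```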

## Testing
1. Click mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. User can now press Enter to send or spacebar to record again

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-30 14:49:05 +07:00
Otto
582c6cad36 fix(e2e): Make E2E test data deterministic and fix flaky tests (#11890)
## Summary
Fixes flaky E2E marketplace and library tests that were causing PRs to
be removed from the merge queue.

## Root Cause
1. **Test data was probabilistic** - `e2e_test_data.py` used random
chances (40% approve, then 20-50% feature), which could result in 0
featured agents
2. **Library pagination threshold was wrong** - the test checked `>= 10`, but
the page size is 20
3. **Fixed timeouts** - Used `waitForTimeout(2000)` /
`waitForTimeout(10000)` instead of proper waits

## Changes

### Backend (`e2e_test_data.py`)
- Add guaranteed minimums: 8 featured agents, 5 featured creators, 10
top agents
- First N submissions are deterministically approved and featured
- Increase agents per user from 15 → 25 (for pagination with
page_size=20)
- Fix library agent creation to use constants instead of a hardcoded `10` (core logic sketched below)
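
The core of the determinism change, condensed from the `e2e_test_data.py` diff below (`submission_counter` and `featured_count` are loop state in the real script):

```python
import random

GUARANTEED_FEATURED_AGENTS = 8
GUARANTEED_TOP_AGENTS = 10

submission_counter = 1  # incremented per submission in the real loop
featured_count = 0      # incremented each time an agent is featured

# First N submissions are always approved; randomness only adds extras
should_approve = submission_counter <= GUARANTEED_TOP_AGENTS or random.random() < 0.4
# Keep featuring until the guaranteed minimum is met
should_feature = featured_count < GUARANTEED_FEATURED_AGENTS
```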

### Frontend Tests
- `library.spec.ts`: Fix pagination threshold to `PAGE_SIZE` (20)
- `library.page.ts`: Replace 2s timeout with `networkidle` +
`waitForFunction`
- `marketplace.page.ts`: Add `networkidle` wait, 30s waits in
`getFirst*` methods
- `marketplace.spec.ts`: Replace 10s timeout with `waitForFunction`
- `marketplace-creator.spec.ts`: Add `networkidle` + element waits (shared pattern sketched below)
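
The shared pattern behind these fixes: replace fixed sleeps with condition-based waits (standard Playwright APIs; the selector is illustrative):

```typescript
// Before (flaky): sleep and hope the data has loaded
// await page.waitForTimeout(10000);

// After (deterministic): wait for the network to settle, then for real content
await page.waitForLoadState("networkidle");
await page.waitForFunction(
  () => document.querySelectorAll("[data-testid='agent-card']").length > 0,
  undefined,
  { timeout: 30_000 },
);
```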

## Related
- Closes SECRT-1848, SECRT-1849
- Should unblock #11841 and other PRs in the merge queue

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-30 05:12:35 +00:00
2279 changed files with 820157 additions and 38507 deletions

View File

@@ -6,15 +6,11 @@ on:
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/direct_benchmark/**'
- 'classic/forge/**'
pull_request:
branches: [ master, dev, release-* ]
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/direct_benchmark/**'
- 'classic/forge/**'
concurrency:
group: ${{ format('classic-autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -23,22 +19,47 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic
working-directory: classic/original_autogpt
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
- name: Start MinIO service
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
working-directory: '.'
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
@@ -50,23 +71,41 @@ jobs:
git config --global user.name "Auto-GPT-Bot"
git config --global user.email "github-bot@agpt.co"
- name: Set up Python 3.12
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: "3.12"
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/original_autogpt/poetry.lock') }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
@@ -77,12 +116,12 @@ jobs:
--cov=autogpt --cov-branch --cov-report term-missing --cov-report xml \
--numprocesses=logical --durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
original_autogpt/tests/unit original_autogpt/tests/integration
tests/unit tests/integration
env:
CI: true
PLAIN_OUTPUT: True
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
S3_ENDPOINT_URL: http://127.0.0.1:9000
S3_ENDPOINT_URL: ${{ runner.os != 'Windows' && 'http://127.0.0.1:9000' || '' }}
AWS_ACCESS_KEY_ID: minioadmin
AWS_SECRET_ACCESS_KEY: minioadmin
@@ -96,11 +135,11 @@ jobs:
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: autogpt-agent
flags: autogpt-agent,${{ runner.os }}
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/logs/
path: classic/original_autogpt/logs/

View File

@@ -11,6 +11,9 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
pull_request:
branches: [ master, dev, release-* ]
@@ -19,6 +22,9 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
defaults:
@@ -29,9 +35,13 @@ defaults:
jobs:
serve-agent-protocol:
runs-on: ubuntu-latest
strategy:
matrix:
agent-name: [ original_autogpt ]
fail-fast: false
timeout-minutes: 20
env:
min-python-version: '3.12'
min-python-version: '3.10'
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -45,22 +55,22 @@ jobs:
python-version: ${{ env.min-python-version }}
- name: Install Poetry
working-directory: ./classic/${{ matrix.agent-name }}/
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install dependencies
run: poetry install
- name: Run smoke tests with direct-benchmark
- name: Run regression tests
run: |
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--tests ReadFile,WriteFile \
--json
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
poetry run agbenchmark --mock --test=BasicRetrieval --test=Battleship --test=WebArenaTask_0
poetry run agbenchmark --test=WriteFile
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AGENT_NAME: ${{ matrix.agent-name }}
REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
NONINTERACTIVE_MODE: "true"
CI: true
HELICONE_CACHE_ENABLED: false
HELICONE_PROPERTY_AGENT: ${{ matrix.agent-name }}
REPORTS_FOLDER: ${{ format('../../reports/{0}', matrix.agent-name) }}
TELEMETRY_ENVIRONMENT: autogpt-ci
TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}

View File

@@ -1,21 +1,17 @@
name: Classic - Direct Benchmark CI
name: Classic - AGBenchmark CI
on:
push:
branches: [ master, dev, ci-test* ]
paths:
- 'classic/direct_benchmark/**'
- 'classic/benchmark/agbenchmark/challenges/**'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- .github/workflows/classic-benchmark-ci.yml
pull_request:
branches: [ master, dev, release-* ]
paths:
- 'classic/direct_benchmark/**'
- 'classic/benchmark/agbenchmark/challenges/**'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- .github/workflows/classic-benchmark-ci.yml
concurrency:
@@ -27,16 +23,23 @@ defaults:
shell: bash
env:
min-python-version: '3.12'
min-python-version: '3.10'
jobs:
benchmark-tests:
runs-on: ubuntu-latest
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
defaults:
run:
shell: bash
working-directory: classic
working-directory: classic/benchmark
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -44,88 +47,71 @@ jobs:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ env.min-python-version }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ env.min-python-version }}
python-version: ${{ matrix.python-version }}
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/benchmark/poetry.lock') }}
- name: Install Poetry
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
- name: Install dependencies
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
- name: Run basic benchmark tests
- name: Run pytest with coverage
run: |
echo "Testing ReadFile challenge with one_shot strategy..."
poetry run direct-benchmark run \
--fresh \
--strategies one_shot \
--models claude \
--tests ReadFile \
--json
echo "Testing WriteFile challenge..."
poetry run direct-benchmark run \
--fresh \
--strategies one_shot \
--models claude \
--tests WriteFile \
--json
poetry run pytest -vv \
--cov=agbenchmark --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
tests
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
- name: Test category filtering
run: |
echo "Testing coding category..."
poetry run direct-benchmark run \
--fresh \
--strategies one_shot \
--models claude \
--categories coding \
--tests ReadFile,WriteFile \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
- name: Upload test results to Codecov
if: ${{ !cancelled() }} # Run even if tests fail
uses: codecov/test-results-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Test multiple strategies
run: |
echo "Testing multiple strategies..."
poetry run direct-benchmark run \
--fresh \
--strategies one_shot,plan_execute \
--models claude \
--tests ReadFile \
--parallel 2 \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: agbenchmark,${{ runner.os }}
# Run regression tests on maintain challenges
regression-tests:
self-test-with-agent:
runs-on: ubuntu-latest
timeout-minutes: 45
if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/dev'
defaults:
run:
shell: bash
working-directory: classic
strategy:
matrix:
agent-name: [forge]
fail-fast: false
timeout-minutes: 20
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -140,23 +126,51 @@ jobs:
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python3 -
- name: Install dependencies
run: poetry install
curl -sSL https://install.python-poetry.org | python -
- name: Run regression tests
working-directory: classic
run: |
echo "Running regression tests (previously beaten challenges)..."
poetry run direct-benchmark run \
--fresh \
--strategies one_shot \
--models claude \
--maintain \
--parallel 4 \
--json
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: poetry run agbenchmark --maintain --mock"
poetry run agbenchmark --maintain --mock
EXIT_CODE=$?
set -e # Stop ignoring non-zero exit codes
# Check if the exit code was 5, and if so, exit with 0 instead
if [ $EXIT_CODE -eq 5 ]; then
echo "regression_tests.json is empty."
fi
echo "Running the following command: poetry run agbenchmark --mock"
poetry run agbenchmark --mock
echo "Running the following command: poetry run agbenchmark --mock --category=data"
poetry run agbenchmark --mock --category=data
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
# echo "Running the following command: poetry run agbenchmark --test=WriteFile"
# poetry run agbenchmark --test=WriteFile
cd ../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
export BUILD_SKILL_TREE=true
# poetry run agbenchmark --mock
# CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
# if [ ! -z "$CHANGED" ]; then
# echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
# echo "$CHANGED"
# exit 1
# else
# echo "No unstaged changes."
# fi
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
TELEMETRY_ENVIRONMENT: autogpt-benchmark-ci
TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}

View File

@@ -6,11 +6,13 @@ on:
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
pull_request:
branches: [ master, dev, release-* ]
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
concurrency:
group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -19,38 +21,115 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic
working-directory: classic/forge
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
- name: Start MinIO service
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
working-directory: '.'
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python 3.12
- name: Checkout cassettes
if: ${{ startsWith(github.event_name, 'pull_request') }}
env:
PR_BASE: ${{ github.event.pull_request.base.ref }}
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
cassette_base_branch="${PR_BASE}"
cd tests/vcr_cassettes
if ! git ls-remote --exit-code --heads origin $cassette_base_branch ; then
cassette_base_branch="master"
fi
if git ls-remote --exit-code --heads origin $cassette_branch ; then
git fetch origin $cassette_branch
git fetch origin $cassette_base_branch
git checkout $cassette_branch
# Pick non-conflicting cassette updates from the base branch
git merge --no-commit --strategy-option=ours origin/$cassette_base_branch
echo "Using cassettes from mirror branch '$cassette_branch'," \
"synced to upstream branch '$cassette_base_branch'."
else
git checkout -b $cassette_branch
echo "Branch '$cassette_branch' does not exist in cassette submodule." \
"Using cassettes from '$cassette_base_branch'."
fi
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: "3.12"
python-version: ${{ matrix.python-version }}
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/forge/poetry.lock') }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
@@ -61,15 +140,12 @@ jobs:
--cov=forge --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
forge/forge forge/tests
forge
env:
CI: true
PLAIN_OUTPUT: True
# API keys - tests that need these will skip if not available
# Secrets are not available to fork PRs (GitHub security feature)
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
S3_ENDPOINT_URL: http://127.0.0.1:9000
S3_ENDPOINT_URL: ${{ runner.os != 'Windows' && 'http://127.0.0.1:9000' || '' }}
AWS_ACCESS_KEY_ID: minioadmin
AWS_SECRET_ACCESS_KEY: minioadmin
@@ -83,11 +159,85 @@ jobs:
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: forge
flags: forge,${{ runner.os }}
- id: setup_git_auth
name: Set up git token authentication
# Cassettes may be pushed even when tests fail
if: success() || failure()
run: |
config_key="http.${{ github.server_url }}/.extraheader"
if [ "${{ runner.os }}" = 'macOS' ]; then
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64)
else
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64 -w0)
fi
git config "$config_key" \
"Authorization: Basic $base64_pat"
cd tests/vcr_cassettes
git config "$config_key" \
"Authorization: Basic $base64_pat"
echo "config_key=$config_key" >> $GITHUB_OUTPUT
- id: push_cassettes
name: Push updated cassettes
# For pull requests, push updated cassettes even when tests fail
if: github.event_name == 'push' || (! github.event.pull_request.head.repo.fork && (success() || failure()))
env:
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
if [ "${{ startsWith(github.event_name, 'pull_request') }}" = "true" ]; then
is_pull_request=true
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
else
cassette_branch="${{ github.ref_name }}"
fi
cd tests/vcr_cassettes
# Commit & push changes to cassettes if any
if ! git diff --quiet; then
git add .
git commit -m "Auto-update cassettes"
git push origin HEAD:$cassette_branch
if [ ! $is_pull_request ]; then
cd ../..
git add tests/vcr_cassettes
git commit -m "Update cassette submodule"
git push origin HEAD:$cassette_branch
fi
echo "updated=true" >> $GITHUB_OUTPUT
else
echo "updated=false" >> $GITHUB_OUTPUT
echo "No cassette changes to commit"
fi
- name: Post Set up git token auth
if: steps.setup_git_auth.outcome == 'success'
run: |
git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
git submodule foreach git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
- name: Apply "behaviour change" label and comment on PR
if: ${{ startsWith(github.event_name, 'pull_request') }}
run: |
PR_NUMBER="${{ github.event.pull_request.number }}"
TOKEN="${{ secrets.PAT_REVIEW }}"
REPO="${{ github.repository }}"
if [[ "${{ steps.push_cassettes.outputs.updated }}" == "true" ]]; then
echo "Adding label and comment..."
echo $TOKEN | gh auth login --with-token
gh issue edit $PR_NUMBER --add-label "behaviour change"
gh issue comment $PR_NUMBER --body "You changed AutoGPT's behaviour on ${{ runner.os }}. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged."
fi
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/logs/
path: classic/forge/logs/

View File

@@ -0,0 +1,60 @@
name: Classic - Frontend CI/CD
on:
push:
branches:
- master
- dev
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
pull_request:
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
jobs:
build:
permissions:
contents: write
pull-requests: write
runs-on: ubuntu-latest
env:
BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Setup Flutter
uses: subosito/flutter-action@v2
with:
flutter-version: '3.13.2'
- name: Build Flutter to Web
run: |
cd classic/frontend
flutter build web --base-href /app/
# - name: Commit and Push to ${{ env.BUILD_BRANCH }}
# if: github.event_name == 'push'
# run: |
# git config --local user.email "action@github.com"
# git config --local user.name "GitHub Action"
# git add classic/frontend/build/web
# git checkout -B ${{ env.BUILD_BRANCH }}
# git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
# git push -f origin ${{ env.BUILD_BRANCH }}
- name: Create PR ${{ env.BUILD_BRANCH }} -> ${{ github.ref_name }}
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v7
with:
add-paths: classic/frontend/build/web
base: ${{ github.ref_name }}
branch: ${{ env.BUILD_BRANCH }}
delete-branch: true
title: "Update frontend build in `${{ github.ref_name }}`"
body: "This PR updates the frontend build based on commit ${{ github.sha }}."
commit-message: "Update frontend build based on commit ${{ github.sha }}"

View File

@@ -7,9 +7,7 @@ on:
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/direct_benchmark/**'
- 'classic/pyproject.toml'
- 'classic/poetry.lock'
- 'classic/benchmark/**'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
pull_request:
@@ -18,9 +16,7 @@ on:
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/direct_benchmark/**'
- 'classic/pyproject.toml'
- 'classic/poetry.lock'
- 'classic/benchmark/**'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
@@ -31,13 +27,44 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic
jobs:
get-changed-parts:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- id: changes-in
name: Determine affected subprojects
uses: dorny/paths-filter@v3
with:
filters: |
original_autogpt:
- classic/original_autogpt/autogpt/**
- classic/original_autogpt/tests/**
- classic/original_autogpt/poetry.lock
forge:
- classic/forge/forge/**
- classic/forge/tests/**
- classic/forge/poetry.lock
benchmark:
- classic/benchmark/agbenchmark/**
- classic/benchmark/tests/**
- classic/benchmark/poetry.lock
outputs:
changed-parts: ${{ steps.changes-in.outputs.changes }}
lint:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.12"
min-python-version: "3.10"
strategy:
matrix:
sub-package: ${{ fromJson(needs.get-changed-parts.outputs.changed-parts) }}
fail-fast: false
steps:
- name: Checkout repository
@@ -54,31 +81,42 @@ jobs:
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: ${{ runner.os }}-poetry-${{ hashFiles('classic/poetry.lock') }}
key: ${{ runner.os }}-poetry-${{ hashFiles(format('{0}/poetry.lock', matrix.sub-package)) }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
- name: Install Python dependencies
run: poetry install
run: poetry -C classic/${{ matrix.sub-package }} install
# Lint
- name: Lint (isort)
run: poetry run isort --check .
working-directory: classic/${{ matrix.sub-package }}
- name: Lint (Black)
if: success() || failure()
run: poetry run black --check .
working-directory: classic/${{ matrix.sub-package }}
- name: Lint (Flake8)
if: success() || failure()
run: poetry run flake8 .
working-directory: classic/${{ matrix.sub-package }}
types:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.12"
min-python-version: "3.10"
strategy:
matrix:
sub-package: ${{ fromJson(needs.get-changed-parts.outputs.changed-parts) }}
fail-fast: false
steps:
- name: Checkout repository
@@ -95,16 +133,19 @@ jobs:
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: ${{ runner.os }}-poetry-${{ hashFiles('classic/poetry.lock') }}
key: ${{ runner.os }}-poetry-${{ hashFiles(format('{0}/poetry.lock', matrix.sub-package)) }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
- name: Install Python dependencies
run: poetry install
run: poetry -C classic/${{ matrix.sub-package }} install
# Typecheck
- name: Typecheck
if: success() || failure()
run: poetry run pyright
working-directory: classic/${{ matrix.sub-package }}

9
.gitignore vendored
View File

@@ -3,7 +3,6 @@
classic/original_autogpt/keys.py
classic/original_autogpt/*.json
auto_gpt_workspace/*
.autogpt/
*.mpeg
.env
# Root .env files
@@ -160,10 +159,6 @@ CURRENT_BULLETIN.md
# AgBenchmark
classic/benchmark/agbenchmark/reports/
classic/reports/
classic/direct_benchmark/reports/
classic/.benchmark_workspaces/
classic/direct_benchmark/.benchmark_workspaces/
# Nodejs
package-lock.json
@@ -182,10 +177,6 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
**/.claude/settings.local.json
.claude/settings.local.json
CLAUDE.local.md
/autogpt_platform/backend/logs
# Test database
test.db

3
.gitmodules vendored Normal file
View File

@@ -0,0 +1,3 @@
[submodule "classic/forge/tests/vcr_cassettes"]
path = classic/forge/tests/vcr_cassettes
url = https://github.com/Significant-Gravitas/Auto-GPT-test-cassettes

View File

@@ -43,10 +43,29 @@ repos:
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic
alias: poetry-install-classic
entry: poetry -C classic install
files: ^classic/poetry\.lock$
name: Check & Install dependencies - Classic - AutoGPT
alias: poetry-install-classic-autogpt
entry: poetry -C classic/original_autogpt install
# include forge source (since it's a path dependency)
files: ^classic/(original_autogpt|forge)/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Forge
alias: poetry-install-classic-forge
entry: poetry -C classic/forge install
files: ^classic/forge/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Benchmark
alias: poetry-install-classic-benchmark
entry: poetry -C classic/benchmark install
files: ^classic/benchmark/poetry\.lock$
types: [file]
language: system
pass_filenames: false
@@ -97,10 +116,26 @@ repos:
language: system
- id: isort
name: Lint (isort) - Classic
alias: isort-classic
entry: bash -c 'cd classic && poetry run isort $(echo "$@" | sed "s|classic/||g")' --
files: ^classic/(original_autogpt|forge|direct_benchmark)/
name: Lint (isort) - Classic - AutoGPT
alias: isort-classic-autogpt
entry: poetry -P classic/original_autogpt run isort -p autogpt
files: ^classic/original_autogpt/
types: [file, python]
language: system
- id: isort
name: Lint (isort) - Classic - Forge
alias: isort-classic-forge
entry: poetry -P classic/forge run isort -p forge
files: ^classic/forge/
types: [file, python]
language: system
- id: isort
name: Lint (isort) - Classic - Benchmark
alias: isort-classic-benchmark
entry: poetry -P classic/benchmark run isort -p agbenchmark
files: ^classic/benchmark/
types: [file, python]
language: system
@@ -114,13 +149,26 @@ repos:
- repo: https://github.com/PyCQA/flake8
rev: 7.0.0
# Use consolidated flake8 config at classic/.flake8
# To have flake8 load the config of the individual subprojects, we have to call
# them separately.
hooks:
- id: flake8
name: Lint (Flake8) - Classic
alias: flake8-classic
files: ^classic/(original_autogpt|forge|direct_benchmark)/
args: [--config=classic/.flake8]
name: Lint (Flake8) - Classic - AutoGPT
alias: flake8-classic-autogpt
files: ^classic/original_autogpt/(autogpt|scripts|tests)/
args: [--config=classic/original_autogpt/.flake8]
- id: flake8
name: Lint (Flake8) - Classic - Forge
alias: flake8-classic-forge
files: ^classic/forge/(forge|tests)/
args: [--config=classic/forge/.flake8]
- id: flake8
name: Lint (Flake8) - Classic - Benchmark
alias: flake8-classic-benchmark
files: ^classic/benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
args: [--config=classic/benchmark/.flake8]
- repo: local
hooks:
@@ -156,10 +204,29 @@ repos:
pass_filenames: false
- id: pyright
name: Typecheck - Classic
alias: pyright-classic
entry: poetry -C classic run pyright
files: ^classic/(original_autogpt|forge|direct_benchmark)/.*\.py$|^classic/poetry\.lock$
name: Typecheck - Classic - AutoGPT
alias: pyright-classic-autogpt
entry: poetry -C classic/original_autogpt run pyright
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - Forge
alias: pyright-classic-forge
entry: poetry -C classic/forge run pyright
files: ^classic/forge/(forge/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - Benchmark
alias: pyright-classic-benchmark
entry: poetry -C classic/benchmark run pyright
files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false

View File

@@ -1,10 +1,11 @@
"""Shared agent search functionality for find_agent and find_library_agent tools."""
import logging
from typing import Literal
from typing import Any, Literal
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data.graph import get_graph
from backend.util.exceptions import DatabaseError, NotFoundError
from .models import (
@@ -14,12 +15,39 @@ from .models import (
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
SearchSource = Literal["marketplace", "library"]
async def _fetch_input_schema_for_store_agent(
creator: str, slug: str
) -> dict[str, Any] | None:
"""Fetch input schema for a marketplace agent. Returns None on error."""
try:
graph, _ = await fetch_graph_from_store_slug(creator, slug)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for {creator}/{slug}: {e}")
return None
async def _fetch_input_schema_for_library_agent(
graph_id: str, graph_version: int, user_id: str
) -> dict[str, Any] | None:
"""Fetch input schema for a library agent. Returns None on error."""
try:
graph = await get_graph(graph_id, graph_version, user_id=user_id)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for graph {graph_id}: {e}")
return None
async def search_agents(
query: str,
source: SearchSource,
@@ -55,6 +83,10 @@ async def search_agents(
logger.info(f"Searching marketplace for: {query}")
results = await store_db.get_store_agents(search_query=query, page_size=5)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_store_agent(
agent.creator, agent.slug
)
agents.append(
AgentInfo(
id=f"{agent.creator}/{agent.slug}",
@@ -67,6 +99,7 @@ async def search_agents(
rating=agent.rating,
runs=agent.runs,
is_featured=False,
inputs=inputs,
)
)
else: # library
@@ -77,6 +110,10 @@ async def search_agents(
page_size=10,
)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_library_agent(
agent.graph_id, agent.graph_version, user_id # type: ignore[arg-type]
)
agents.append(
AgentInfo(
id=agent.id,
@@ -90,6 +127,7 @@ async def search_agents(
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
inputs=inputs,
)
)
logger.info(f"Found {len(agents)} agents in {source}")

View File

@@ -68,6 +68,10 @@ class AgentInfo(BaseModel):
has_external_trigger: bool | None = None
new_output: bool | None = None
graph_id: str | None = None
inputs: dict[str, Any] | None = Field(
default=None,
description="Input schema for the agent (properties from input_schema)",
)
class AgentsFoundResponse(ToolResponseBase):

View File

@@ -273,6 +273,27 @@ class RunAgentTool(BaseTool):
input_properties = graph.input_schema.get("properties", {})
required_fields = set(graph.input_schema.get("required", []))
provided_inputs = set(params.inputs.keys())
valid_fields = set(input_properties.keys())
# Check for unknown fields - reject early with helpful message
unknown_fields = provided_inputs - valid_fields
if unknown_fields:
valid_list = ", ".join(sorted(valid_fields)) if valid_fields else "none"
return AgentDetailsResponse(
message=(
f"Unknown input field(s) provided: {', '.join(sorted(unknown_fields))}. "
f"Valid fields for '{graph.name}': {valid_list}. "
"Please check the field names and try again."
),
session_id=session_id,
agent=self._build_agent_details(
graph,
extract_credentials_from_schema(graph.credentials_input_schema),
),
user_authenticated=True,
graph_id=graph.id,
graph_version=graph.version,
)
# If agent has inputs but none were provided AND use_defaults is not set,
# always show what's available first so user can decide

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.data.model import Credentials, CredentialsFieldInfo, CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import NotFoundError
@@ -266,13 +266,14 @@ async def match_user_credentials_to_graph(
credential_requirements,
_node_fields,
) in aggregated_creds.items():
# Find first matching credential by provider and type
# Find first matching credential by provider, type, and scopes
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in credential_requirements.provider
and cred.type in credential_requirements.supported_types
and _credential_has_required_scopes(cred, credential_requirements)
),
None,
)
@@ -296,10 +297,17 @@ async def match_user_credentials_to_graph(
f"{credential_field_name} (validation failed: {e})"
)
else:
# Build a helpful error message including scope requirements
error_parts = [
f"provider in {list(credential_requirements.provider)}",
f"type in {list(credential_requirements.supported_types)}",
]
if credential_requirements.required_scopes:
error_parts.append(
f"scopes including {list(credential_requirements.required_scopes)}"
)
missing_creds.append(
f"{credential_field_name} "
f"(requires provider in {list(credential_requirements.provider)}, "
f"type in {list(credential_requirements.supported_types)})"
f"{credential_field_name} (requires {', '.join(error_parts)})"
)
logger.info(
@@ -309,6 +317,28 @@ async def match_user_credentials_to_graph(
return graph_credentials_inputs, missing_creds
def _credential_has_required_scopes(
credential: Credentials,
requirements: CredentialsFieldInfo,
) -> bool:
"""
Check if a credential has all the scopes required by the block.
For OAuth2 credentials, verifies that the credential's scopes are a superset
of the required scopes. For other credential types, returns True (no scope check).
"""
# Only OAuth2 credentials have scopes to check
if credential.type != "oauth2":
return True
# If no scopes are required, any credential matches
if not requirements.required_scopes:
return True
# Check that credential scopes are a superset of required scopes
return set(credential.scopes).issuperset(requirements.required_scopes)
async def check_user_has_required_credentials(
user_id: str,
required_credentials: list[CredentialsMetaInput],

View File

@@ -115,7 +115,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -280,9 +279,6 @@ MODEL_METADATA = {
LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
"anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
), # claude-haiku-4-5-20251001
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
), # claude-3-haiku-20240307

View File

@@ -83,7 +83,7 @@ class StagehandRecommendedLlmModel(str, Enum):
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
@property
def provider_name(self) -> str:
@@ -137,7 +137,7 @@ class StagehandObserveBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -230,7 +230,7 @@ class StagehandActBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -330,7 +330,7 @@ class StagehandExtractBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()

View File

@@ -81,7 +81,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.CLAUDE_4_5_HAIKU: 4,
LlmModel.CLAUDE_4_5_OPUS: 14,
LlmModel.CLAUDE_4_5_SONNET: 9,
LlmModel.CLAUDE_3_7_SONNET: 5,
LlmModel.CLAUDE_3_HAIKU: 1,
LlmModel.AIML_API_QWEN2_5_72B: 1,
LlmModel.AIML_API_LLAMA3_1_70B: 1,

View File

@@ -666,10 +666,16 @@ class CredentialsFieldInfo(BaseModel, Generic[CP, CT]):
if not (self.discriminator and self.discriminator_mapping):
return self
try:
provider = self.discriminator_mapping[discriminator_value]
except KeyError:
raise ValueError(
f"Model '{discriminator_value}' is not supported. "
"It may have been deprecated. Please update your agent configuration."
)
return CredentialsFieldInfo(
credentials_provider=frozenset(
[self.discriminator_mapping[discriminator_value]]
),
credentials_provider=frozenset([provider]),
credentials_types=self.supported_types,
credentials_scopes=self.required_scopes,
discriminator=self.discriminator,

View File

@@ -0,0 +1,22 @@
-- Migrate Claude 3.7 Sonnet to Claude 4.5 Sonnet
-- This updates all AgentNode blocks that use the deprecated Claude 3.7 Sonnet model
-- Anthropic is retiring claude-3-7-sonnet-20250219 on February 19, 2026
-- Update AgentNode constant inputs
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';

View File

@@ -43,19 +43,24 @@ faker = Faker()
# Constants for data generation limits (reduced for E2E tests)
NUM_USERS = 15
NUM_AGENT_BLOCKS = 30
MIN_GRAPHS_PER_USER = 15
MAX_GRAPHS_PER_USER = 15
MIN_GRAPHS_PER_USER = 25
MAX_GRAPHS_PER_USER = 25
MIN_NODES_PER_GRAPH = 3
MAX_NODES_PER_GRAPH = 6
MIN_PRESETS_PER_USER = 2
MAX_PRESETS_PER_USER = 3
MIN_AGENTS_PER_USER = 15
MAX_AGENTS_PER_USER = 15
MIN_AGENTS_PER_USER = 25
MAX_AGENTS_PER_USER = 25
MIN_EXECUTIONS_PER_GRAPH = 2
MAX_EXECUTIONS_PER_GRAPH = 8
MIN_REVIEWS_PER_VERSION = 2
MAX_REVIEWS_PER_VERSION = 5
# Guaranteed minimums for marketplace tests (deterministic)
GUARANTEED_FEATURED_AGENTS = 8
GUARANTEED_FEATURED_CREATORS = 5
GUARANTEED_TOP_AGENTS = 10
def get_image():
"""Generate a consistent image URL using picsum.photos service."""
@@ -385,7 +390,7 @@ class TestDataCreator:
library_agents = []
for user in self.users:
num_agents = 10 # Create exactly 10 agents per user
num_agents = random.randint(MIN_AGENTS_PER_USER, MAX_AGENTS_PER_USER)
# Get available graphs for this user
user_graphs = [
@@ -507,14 +512,17 @@ class TestDataCreator:
existing_profiles, min(num_creators, len(existing_profiles))
)
# Mark about 50% of creators as featured (more for testing)
num_featured = max(2, int(num_creators * 0.5))
# Guarantee at least GUARANTEED_FEATURED_CREATORS featured creators
num_featured = max(GUARANTEED_FEATURED_CREATORS, int(num_creators * 0.5))
num_featured = min(
num_featured, len(selected_profiles)
) # Don't exceed available profiles
featured_profile_ids = set(
random.sample([p.id for p in selected_profiles], num_featured)
)
print(
f"🎯 Creating {num_featured} featured creators (min: {GUARANTEED_FEATURED_CREATORS})"
)
for profile in selected_profiles:
try:
@@ -545,21 +553,25 @@ class TestDataCreator:
return profiles
async def create_test_store_submissions(self) -> List[Dict[str, Any]]:
"""Create test store submissions using the API function."""
"""Create test store submissions using the API function.
DETERMINISTIC: Guarantees minimum featured agents for E2E tests.
"""
print("Creating test store submissions...")
submissions = []
approved_submissions = []
featured_count = 0
submission_counter = 0
# Create a special test submission for test123@gmail.com
# Create a special test submission for test123@gmail.com (ALWAYS approved + featured)
test_user = next(
(user for user in self.users if user["email"] == "test123@gmail.com"), None
)
if test_user:
# Special test data for consistent testing
if test_user and self.agent_graphs:
test_submission_data = {
"user_id": test_user["id"],
"agent_id": self.agent_graphs[0]["id"], # Use first available graph
"agent_id": self.agent_graphs[0]["id"],
"agent_version": 1,
"slug": "test-agent-submission",
"name": "Test Agent Submission",
@@ -580,37 +592,24 @@ class TestDataCreator:
submissions.append(test_submission.model_dump())
print("✅ Created special test store submission for test123@gmail.com")
# Randomly approve, reject, or leave pending the test submission
# ALWAYS approve and feature the test submission
if test_submission.store_listing_version_id:
random_value = random.random()
if random_value < 0.4: # 40% chance to approve
approved_submission = await review_store_submission(
store_listing_version_id=test_submission.store_listing_version_id,
is_approved=True,
external_comments="Test submission approved",
internal_comments="Auto-approved test submission",
reviewer_id=test_user["id"],
)
approved_submissions.append(approved_submission.model_dump())
print("✅ Approved test store submission")
approved_submission = await review_store_submission(
store_listing_version_id=test_submission.store_listing_version_id,
is_approved=True,
external_comments="Test submission approved",
internal_comments="Auto-approved test submission",
reviewer_id=test_user["id"],
)
approved_submissions.append(approved_submission.model_dump())
print("✅ Approved test store submission")
# Mark approved submission as featured
await prisma.storelistingversion.update(
where={"id": test_submission.store_listing_version_id},
data={"isFeatured": True},
)
print("🌟 Marked test agent as FEATURED")
elif random_value < 0.7: # 30% chance to reject (40% to 70%)
await review_store_submission(
store_listing_version_id=test_submission.store_listing_version_id,
is_approved=False,
external_comments="Test submission rejected - needs improvements",
internal_comments="Auto-rejected test submission for E2E testing",
reviewer_id=test_user["id"],
)
print("❌ Rejected test store submission")
else: # 30% chance to leave pending (70% to 100%)
print("⏳ Left test submission pending for review")
await prisma.storelistingversion.update(
where={"id": test_submission.store_listing_version_id},
data={"isFeatured": True},
)
featured_count += 1
print("🌟 Marked test agent as FEATURED")
except Exception as e:
print(f"Error creating test store submission: {e}")
@@ -620,7 +619,6 @@ class TestDataCreator:
# Create regular submissions for all users
for user in self.users:
# Get available graphs for this specific user
user_graphs = [
g for g in self.agent_graphs if g.get("userId") == user["id"]
]
@@ -631,18 +629,17 @@ class TestDataCreator:
)
continue
# Create exactly 4 store submissions per user
for submission_index in range(4):
graph = random.choice(user_graphs)
submission_counter += 1
try:
print(
f"Creating store submission for user {user['id']} with graph {graph['id']} (owner: {graph.get('userId')})"
f"Creating store submission for user {user['id']} with graph {graph['id']}"
)
# Use the API function to create store submission with correct parameters
submission = await create_store_submission(
user_id=user["id"], # Must match graph's userId
user_id=user["id"],
agent_id=graph["id"],
agent_version=graph.get("version", 1),
slug=faker.slug(),
@@ -651,22 +648,24 @@ class TestDataCreator:
video_url=get_video_url() if random.random() < 0.3 else None,
image_urls=[get_image() for _ in range(3)],
description=faker.text(),
categories=[
get_category()
], # Single category from predefined list
categories=[get_category()],
changes_summary="Initial E2E test submission",
)
submissions.append(submission.model_dump())
print(f"✅ Created store submission: {submission.name}")
# Randomly approve, reject, or leave pending the submission
if submission.store_listing_version_id:
random_value = random.random()
if random_value < 0.4: # 40% chance to approve
try:
# Pick a random user as the reviewer (admin)
reviewer_id = random.choice(self.users)["id"]
# DETERMINISTIC: First N submissions are always approved
# First GUARANTEED_FEATURED_AGENTS of those are always featured
should_approve = (
submission_counter <= GUARANTEED_TOP_AGENTS
or random.random() < 0.4
)
should_feature = featured_count < GUARANTEED_FEATURED_AGENTS
if should_approve:
try:
reviewer_id = random.choice(self.users)["id"]
approved_submission = await review_store_submission(
store_listing_version_id=submission.store_listing_version_id,
is_approved=True,
@@ -681,16 +680,7 @@ class TestDataCreator:
f"✅ Approved store submission: {submission.name}"
)
# Mark some agents as featured during creation (30% chance)
# More likely for creators and first submissions
is_creator = user["id"] in [
p.get("userId") for p in self.profiles
]
feature_chance = (
0.5 if is_creator else 0.2
) # 50% for creators, 20% for others
if random.random() < feature_chance:
if should_feature:
try:
await prisma.storelistingversion.update(
where={
@@ -698,8 +688,25 @@ class TestDataCreator:
},
data={"isFeatured": True},
)
featured_count += 1
print(
f"🌟 Marked agent as FEATURED: {submission.name}"
f"🌟 Marked agent as FEATURED ({featured_count}/{GUARANTEED_FEATURED_AGENTS}): {submission.name}"
)
except Exception as e:
print(
f"Warning: Could not mark submission as featured: {e}"
)
elif random.random() < 0.2:
try:
await prisma.storelistingversion.update(
where={
"id": submission.store_listing_version_id
},
data={"isFeatured": True},
)
featured_count += 1
print(
f"🌟 Marked agent as FEATURED (bonus): {submission.name}"
)
except Exception as e:
print(
@@ -710,11 +717,9 @@ class TestDataCreator:
print(
f"Warning: Could not approve submission {submission.name}: {e}"
)
elif random_value < 0.7: # 30% chance to reject (40% to 70%)
elif random.random() < 0.5:
try:
# Pick a random user as the reviewer (admin)
reviewer_id = random.choice(self.users)["id"]
await review_store_submission(
store_listing_version_id=submission.store_listing_version_id,
is_approved=False,
@@ -729,7 +734,7 @@ class TestDataCreator:
print(
f"Warning: Could not reject submission {submission.name}: {e}"
)
else: # 30% chance to leave pending (70% to 100%)
else:
print(
f"⏳ Left submission pending for review: {submission.name}"
)
@@ -743,9 +748,13 @@ class TestDataCreator:
traceback.print_exc()
continue
print("\n📊 Store Submissions Summary:")
print(f" Created: {len(submissions)}")
print(f" Approved: {len(approved_submissions)}")
print(
f"Created {len(submissions)} store submissions, approved {len(approved_submissions)}"
f" Featured: {featured_count} (guaranteed min: {GUARANTEED_FEATURED_AGENTS})"
)
self.store_submissions = submissions
return submissions
@@ -825,12 +834,15 @@ class TestDataCreator:
print(f"✅ Agent blocks available: {len(self.agent_blocks)}")
print(f"✅ Agent graphs created: {len(self.agent_graphs)}")
print(f"✅ Library agents created: {len(self.library_agents)}")
print(f"✅ Creator profiles updated: {len(self.profiles)} (some featured)")
print(
f"✅ Store submissions created: {len(self.store_submissions)} (some marked as featured during creation)"
)
print(f"✅ Creator profiles updated: {len(self.profiles)}")
print(f"✅ Store submissions created: {len(self.store_submissions)}")
print(f"✅ API keys created: {len(self.api_keys)}")
print(f"✅ Presets created: {len(self.presets)}")
print("\n🎯 Deterministic Guarantees:")
print(f" • Featured agents: >= {GUARANTEED_FEATURED_AGENTS}")
print(f" • Featured creators: >= {GUARANTEED_FEATURED_CREATORS}")
print(f" • Top agents (approved): >= {GUARANTEED_TOP_AGENTS}")
print(f" • Library agents per user: >= {MIN_AGENTS_PER_USER}")
print("\n🚀 Your E2E test database is ready to use!")

View File

@@ -0,0 +1,185 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { FavoritesSection } from "../FavoritesSection/FavoritesSection";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockFavoriteAgent = {
id: "fav-agent-id",
graph_id: "fav-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Favorite Agent Name",
description: "Test favorite agent",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: true,
};
describe("FavoritesSection", () => {
afterEach(() => {
resetAuthState();
});
test("renders favorites section when there are favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText(/favorites/i)).toBeInTheDocument();
});
});
test("renders favorite agent cards", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
});
test("shows agent count", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByTestId("agents-count")).toBeInTheDocument();
});
});
test("does not render when there are no favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<FavoritesSection searchTerm="" />);
// Wait for loading to complete
await waitFor(() => {
// Component should return null when no favorites
expect(container.textContent).toBe("");
});
});
test("filters favorites based on search term", async () => {
mockAuthenticatedUser();
// Mock that returns different results based on search term
server.use(
http.get("*/api/library/agents/favorites*", ({ request }) => {
const url = new URL(request.url);
const searchTerm = url.searchParams.get("search_term");
if (searchTerm === "nonexistent") {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
const { rerender } = render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
// Rerender with search term that yields no results
rerender(<FavoritesSection searchTerm="nonexistent" />);
await waitFor(() => {
expect(screen.queryByText("Favorite Agent Name")).not.toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,122 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { LibraryAgentCard } from "../LibraryAgentCard/LibraryAgentCard";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockAgent: LibraryAgent = {
id: "test-agent-id",
graph_id: "test-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Test Agent Name",
description: "Test agent description",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: false,
};
describe("LibraryAgentCard", () => {
afterEach(() => {
resetAuthState();
});
test("renders agent name", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText("Test Agent Name")).toBeInTheDocument();
});
test("renders see runs link", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/see runs/i)).toBeInTheDocument();
});
test("renders open in builder link when can_access_graph is true", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/open in builder/i)).toBeInTheDocument();
});
test("does not render open in builder link when can_access_graph is false", () => {
mockAuthenticatedUser();
const agentWithoutAccess = { ...mockAgent, can_access_graph: false };
render(<LibraryAgentCard agent={agentWithoutAccess} />);
expect(screen.queryByText(/open in builder/i)).not.toBeInTheDocument();
});
test("shows 'FROM MARKETPLACE' label for marketplace agents", () => {
mockAuthenticatedUser();
const marketplaceAgent = {
...mockAgent,
marketplace_listing: {
id: "listing-id",
name: "Marketplace Agent",
slug: "marketplace-agent",
creator: {
id: "creator-id",
name: "Creator Name",
slug: "creator-slug",
},
},
};
render(<LibraryAgentCard agent={marketplaceAgent} />);
expect(screen.getByText(/from marketplace/i)).toBeInTheDocument();
});
test("shows 'Built by you' label for user's own agents", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/built by you/i)).toBeInTheDocument();
});
test("renders favorite button", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
// The card should render; the favorite heart button lives inside it
const card = screen.getByTestId("library-agent-card");
expect(card).toBeInTheDocument();
});
test("links to correct agent detail page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const link = screen.getByTestId("library-agent-card-see-runs-link");
expect(link).toHaveAttribute("href", "/library/agents/test-agent-id");
});
test("links to correct builder page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const builderLink = screen.getByTestId(
"library-agent-card-open-in-builder-link",
);
expect(builderLink).toHaveAttribute("href", "/build?flowID=test-graph-id");
});
});

View File

@@ -0,0 +1,53 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySearchBar } from "../LibrarySearchBar/LibrarySearchBar";
describe("LibrarySearchBar", () => {
test("renders search input", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
test("renders search icon", () => {
const setSearchTerm = vi.fn();
const { container } = render(
<LibrarySearchBar setSearchTerm={setSearchTerm} />,
);
// Check for the magnifying glass icon (SVG element)
const searchIcon = container.querySelector("svg");
expect(searchIcon).toBeInTheDocument();
});
test("calls setSearchTerm on input change", async () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
const input = screen.getByPlaceholderText(/search agents/i);
fireEvent.change(input, { target: { value: "test query" } });
// The search bar uses debouncing, so we need to wait
await waitFor(
() => {
expect(setSearchTerm).toHaveBeenCalled();
},
{ timeout: 1000 },
);
});
test("has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("input has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("library-textbox")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,53 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySortMenu } from "../LibrarySortMenu/LibrarySortMenu";
describe("LibrarySortMenu", () => {
test("renders sort dropdown", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByTestId("sort-by-dropdown")).toBeInTheDocument();
});
test("shows 'sort by' label on larger screens", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/sort by/i)).toBeInTheDocument();
});
test("shows default placeholder text", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/last modified/i)).toBeInTheDocument();
});
test("opens dropdown when clicked", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
});
});
test("shows both sort options in dropdown", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
expect(
screen.getAllByText(/last modified/i).length,
).toBeGreaterThanOrEqual(1);
});
});
});

View File

@@ -0,0 +1,78 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import LibraryUploadAgentDialog from "../LibraryUploadAgentDialog/LibraryUploadAgentDialog";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryUploadAgentDialog", () => {
afterEach(() => {
resetAuthState();
});
test("renders upload button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("opens dialog when upload button is clicked", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText("Upload Agent")).toBeInTheDocument();
});
});
test("dialog contains agent name input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent name/i)).toBeInTheDocument();
});
});
test("dialog contains agent description input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent description/i)).toBeInTheDocument();
});
});
test("upload button is disabled when form is incomplete", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const triggerButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(triggerButton);
await waitFor(() => {
const submitButton = screen.getByRole("button", { name: /^upload$/i });
expect(submitButton).toBeDisabled();
});
});
test("has correct test id on trigger button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(screen.getByTestId("upload-agent-button")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,40 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should still be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,82 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Empty State", () => {
afterEach(() => {
resetAuthState();
});
test("handles empty agents list gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with no agents
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
// Upload button should be visible
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("handles empty favorites gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,59 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
import {
getGetV2ListLibraryAgentsMockHandler422,
getGetV2ListFavoriteLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";
describe("LibraryPage - Error Handling", () => {
afterEach(() => {
resetAuthState();
});
test("handles API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with error
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles favorites API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListFavoriteLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles API 500 error gracefully", async () => {
mockAuthenticatedUser();
server.use(create500Handler("get", "*/api/library/agents*"));
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,55 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse, delay } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Loading State", () => {
afterEach(() => {
resetAuthState();
});
test("shows loading spinner while agents are being fetched", async () => {
mockAuthenticatedUser();
// Override handlers to add delay to simulate loading
server.use(
http.get("*/api/library/agents*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<LibraryPage />);
// Check for loading spinner (LoadingSpinner component)
const loadingElements = container.querySelectorAll(
'[class*="animate-spin"]',
);
expect(loadingElements.length).toBeGreaterThan(0);
});
});

View File

@@ -0,0 +1,65 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Rendering", () => {
afterEach(() => {
resetAuthState();
});
test("renders search bar", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
test("renders upload agent button", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("renders agent cards when data is loaded", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards to appear (from mock data)
await waitFor(() => {
const agentCards = screen.getAllByTestId("library-agent-card");
expect(agentCards.length).toBeGreaterThan(0);
});
});
test("agent cards display agent name", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards and check they have names
await waitFor(() => {
const agentNames = screen.getAllByTestId("library-agent-card-name");
expect(agentNames.length).toBeGreaterThan(0);
});
});
test("agent cards have see runs link", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
await waitFor(() => {
const seeRunsLinks = screen.getAllByTestId(
"library-agent-card-see-runs-link",
);
expect(seeRunsLinks.length).toBeGreaterThan(0);
});
});
});

View File

@@ -0,0 +1,246 @@
/**
* useChatContainerAiSdk - ChatContainer hook using Vercel AI SDK
*
* This is a drop-in replacement for useChatContainer that uses @ai-sdk/react
* instead of the custom streaming implementation. The API surface is identical
* to enable easy A/B testing and gradual migration.
*/
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { useEffect, useMemo, useRef } from "react";
import type { UIMessage } from "ai";
import { useAiSdkChat } from "../../useAiSdkChat";
import { usePageContext } from "../../usePageContext";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import {
filterAuthMessages,
hasSentInitialPrompt,
markInitialPromptSent,
processInitialMessages,
} from "./helpers";
// Helper to convert backend messages to AI SDK UIMessage format
function convertToUIMessages(
messages: SessionDetailResponse["messages"],
): UIMessage[] {
const result: UIMessage[] = [];
// Track tool call IDs -> tool names so tool responses can be matched back
const toolCallNames = new Map<string, string>();
for (const msg of messages) {
if (!msg.role || !msg.content) continue;
// Create parts based on message type
const parts: UIMessage["parts"] = [];
if (msg.role === "user" || msg.role === "assistant") {
if (typeof msg.content === "string") {
parts.push({ type: "text", text: msg.content });
}
}
// Handle tool calls in assistant messages
if (msg.role === "assistant" && msg.tool_calls) {
for (const toolCall of msg.tool_calls as Array<{
id: string;
type: string;
function: { name: string; arguments: string };
}>) {
if (toolCall.type === "function") {
let args = {};
try {
args = JSON.parse(toolCall.function.arguments);
} catch {
// Keep empty args
}
toolCallNames.set(toolCall.id, toolCall.function.name);
parts.push({
type: `tool-${toolCall.function.name}` as `tool-${string}`,
toolCallId: toolCall.id,
toolName: toolCall.function.name,
state: "input-available",
input: args,
} as UIMessage["parts"][number]);
}
}
}
// Handle tool responses
if (msg.role === "tool" && msg.tool_call_id) {
// Find the matching tool call to recover the tool name
const toolName = toolCallNames.get(msg.tool_call_id as string) ?? "unknown";
let output: unknown = msg.content;
try {
output =
typeof msg.content === "string"
? JSON.parse(msg.content)
: msg.content;
} catch {
// Keep as string
}
parts.push({
type: `tool-${toolName}` as `tool-${string}`,
toolCallId: msg.tool_call_id as string,
toolName,
state: "output-available",
output,
} as UIMessage["parts"][number]);
}
if (parts.length > 0) {
result.push({
id: msg.id || `msg-${Date.now()}-${Math.random()}`,
role: msg.role === "tool" ? "assistant" : (msg.role as "user" | "assistant"),
parts,
createdAt: msg.created_at ? new Date(msg.created_at as string) : new Date(),
});
}
}
return result;
}
interface Args {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
initialPrompt?: string;
onOperationStarted?: () => void;
}
export function useChatContainerAiSdk({
sessionId,
initialMessages,
initialPrompt,
onOperationStarted,
}: Args) {
const { capturePageContext } = usePageContext();
const sendMessageRef = useRef<
(
content: string,
isUserMessage?: boolean,
context?: { url: string; content: string },
) => Promise<void>
>();
// Convert initial messages to AI SDK format
const uiMessages = useMemo(
() => convertToUIMessages(initialMessages),
[initialMessages],
);
const {
messages: aiSdkMessages,
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
} = useAiSdkChat({
sessionId,
initialMessages: uiMessages,
onOperationStarted,
});
// Keep ref updated for initial prompt handling
sendMessageRef.current = sendMessage;
// Merge AI SDK messages with processed initial messages
// This ensures we show both historical messages and new streaming messages
const allMessages = useMemo(() => {
const processedInitial = processInitialMessages(initialMessages);
// Build a set of message keys for deduplication
const seenKeys = new Set<string>();
const result: ChatMessageData[] = [];
// Add processed initial messages first
for (const msg of processedInitial) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
// Add AI SDK messages that aren't duplicates
for (const msg of aiSdkMessages) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
return result;
}, [initialMessages, aiSdkMessages]);
// Handle initial prompt
useEffect(
function handleInitialPrompt() {
if (!initialPrompt || !sessionId) return;
if (initialMessages.length > 0) return;
if (hasSentInitialPrompt(sessionId)) return;
markInitialPromptSent(sessionId);
const context = capturePageContext();
sendMessageRef.current?.(initialPrompt, true, context);
},
[initialPrompt, sessionId, initialMessages.length, capturePageContext],
);
// Send message with page context
async function sendMessageWithContext(
content: string,
isUserMessage: boolean = true,
) {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
}
function handleRegionModalOpenChange(open: boolean) {
setIsRegionBlockedModalOpen(open);
}
function handleRegionModalClose() {
setIsRegionBlockedModalOpen(false);
}
return {
messages: filterAuthMessages(allMessages),
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessageWithContext,
handleRegionModalOpenChange,
handleRegionModalClose,
sendMessage,
stopStreaming,
};
}
// Helper to generate deduplication key for a message
function getMessageKey(msg: ChatMessageData): string {
if (msg.type === "message") {
return `msg:${msg.role}:${msg.content}`;
} else if (msg.type === "tool_call") {
return `toolcall:${msg.toolId}`;
} else if (msg.type === "tool_response") {
return `toolresponse:${(msg as { toolId?: string }).toolId}`;
} else if (
msg.type === "operation_started" ||
msg.type === "operation_pending" ||
msg.type === "operation_in_progress"
) {
const typedMsg = msg as {
toolId?: string;
operationId?: string;
toolCallId?: string;
toolName?: string;
};
return `op:${typedMsg.toolId || typedMsg.operationId || typedMsg.toolCallId || ""}:${typedMsg.toolName || ""}`;
} else {
return `${msg.type}:${JSON.stringify(msg).slice(0, 100)}`;
}
}
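// Usage sketch (hypothetical consumer, not an existing call site): because the
// hook mirrors useChatContainer's return shape, a caller only needs the
// documented return values. The wrapper name below is illustrative only.
export function useChatContainerAiSdkExample(
  sessionId: string,
  session: SessionDetailResponse,
) {
  const { messages, isStreaming, sendMessageWithContext, stopStreaming } =
    useChatContainerAiSdk({ sessionId, initialMessages: session.messages });
  // A UI would render `messages` (dispatching on each entry's `type`), show a
  // stop control while `isStreaming`, and call sendMessageWithContext(text)
  // when the user submits the input.
  return { messages, isStreaming, sendMessageWithContext, stopStreaming };
}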

View File

@@ -57,6 +57,7 @@ export function ChatInput({
isStreaming,
value,
baseHandleKeyDown,
inputId,
});
return (

View File

@@ -15,6 +15,7 @@ interface Args {
isStreaming?: boolean;
value: string;
baseHandleKeyDown: (event: KeyboardEvent<HTMLTextAreaElement>) => void;
inputId?: string;
}
export function useVoiceRecording({
@@ -23,6 +24,7 @@ export function useVoiceRecording({
isStreaming = false,
value,
baseHandleKeyDown,
inputId,
}: Args) {
const [isRecording, setIsRecording] = useState(false);
const [isTranscribing, setIsTranscribing] = useState(false);
@@ -103,7 +105,7 @@ export function useVoiceRecording({
setIsTranscribing(false);
}
},
[handleTranscription, inputId],
);
const stopRecording = useCallback(() => {
@@ -201,6 +203,15 @@ export function useVoiceRecording({
}
}, [error, toast]);
useEffect(() => {
if (!isTranscribing && inputId) {
const inputElement = document.getElementById(inputId);
if (inputElement) {
inputElement.focus();
}
}
}, [isTranscribing, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === " " && !value.trim() && !isTranscribing) {

View File

@@ -0,0 +1,421 @@
"use client";
/**
* useAiSdkChat - Vercel AI SDK integration for CoPilot Chat
*
* This hook wraps @ai-sdk/react's useChat to provide:
* - Streaming chat with the existing Python backend (already AI SDK protocol compatible)
* - Integration with existing session management
* - Custom tool response parsing for AutoGPT-specific types
* - Page context injection
*
* The Python backend already implements the AI SDK Data Stream Protocol (v1),
* so this hook can communicate directly without any backend changes.
*/
import { useChat as useAiSdkChatBase } from "@ai-sdk/react";
import { DefaultChatTransport, type UIMessage } from "ai";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner";
import type { ChatMessageData } from "./components/ChatMessage/useChatMessage";
// Tool response types from the backend
type OperationType =
| "operation_started"
| "operation_pending"
| "operation_in_progress";
interface ToolOutputBase {
type: string;
[key: string]: unknown;
}
interface UseAiSdkChatOptions {
sessionId: string | null;
initialMessages?: UIMessage[];
onOperationStarted?: () => void;
onStreamingChange?: (isStreaming: boolean) => void;
}
/**
* Parse tool output from AI SDK message parts into ChatMessageData format
*/
function parseToolOutput(
toolCallId: string,
toolName: string,
output: unknown,
): ChatMessageData | null {
if (!output) return null;
let parsed: ToolOutputBase;
try {
parsed =
typeof output === "string"
? JSON.parse(output)
: (output as ToolOutputBase);
} catch {
return null;
}
const type = parsed.type;
// Handle operation status types
if (
type === "operation_started" ||
type === "operation_pending" ||
type === "operation_in_progress"
) {
return {
type: type as OperationType,
toolId: toolCallId,
toolName: toolName,
operationId: (parsed.operation_id as string) || undefined,
message: (parsed.message as string) || undefined,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle agent carousel
if (type === "agent_carousel" && Array.isArray(parsed.agents)) {
return {
type: "agent_carousel",
toolId: toolCallId,
toolName: toolName,
agents: parsed.agents,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle execution started
if (type === "execution_started") {
return {
type: "execution_started",
toolId: toolCallId,
toolName: toolName,
graphId: parsed.graph_id as string,
graphVersion: parsed.graph_version as number,
graphExecId: parsed.graph_exec_id as string,
nodeExecIds: parsed.node_exec_ids as string[],
timestamp: new Date(),
} as ChatMessageData;
}
// Handle error responses
if (type === "error") {
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: false,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle clarification questions
if (type === "clarification_questions" && Array.isArray(parsed.questions)) {
return {
type: "clarification_questions",
toolId: toolCallId,
toolName: toolName,
questions: parsed.questions,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle credentials needed
if (type === "credentials_needed" || type === "setup_requirements") {
const credentials = parsed.credentials as
| Array<{
provider: string;
provider_name: string;
credential_type: string;
scopes?: string[];
}>
| undefined;
if (credentials && credentials.length > 0) {
return {
type: "credentials_needed",
toolId: toolCallId,
toolName: toolName,
credentials: credentials,
timestamp: new Date(),
} as ChatMessageData;
}
}
// Default: generic tool response
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: true,
timestamp: new Date(),
} as ChatMessageData;
}
/**
* Convert AI SDK UIMessage parts to ChatMessageData array
*/
function convertMessageToChatData(message: UIMessage): ChatMessageData[] {
const result: ChatMessageData[] = [];
for (const part of message.parts) {
switch (part.type) {
case "text":
if (part.text.trim()) {
result.push({
type: "message",
role: message.role as "user" | "assistant",
content: part.text,
timestamp: new Date(message.createdAt || Date.now()),
});
}
break;
default:
// Handle tool parts (tool-*)
if (part.type.startsWith("tool-")) {
const toolPart = part as {
type: string;
toolCallId: string;
toolName: string;
state: string;
input?: Record<string, unknown>;
output?: unknown;
};
// Show tool call in progress
if (
toolPart.state === "input-streaming" ||
toolPart.state === "input-available"
) {
result.push({
type: "tool_call",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
arguments: toolPart.input || {},
timestamp: new Date(),
});
}
// Parse tool output when available
if (
toolPart.state === "output-available" &&
toolPart.output !== undefined
) {
const parsed = parseToolOutput(
toolPart.toolCallId,
toolPart.toolName,
toolPart.output,
);
if (parsed) {
result.push(parsed);
}
}
// Handle tool errors
if (toolPart.state === "output-error") {
result.push({
type: "tool_response",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
response: {
type: "error",
message: (toolPart as { errorText?: string }).errorText,
},
success: false,
timestamp: new Date(),
} as ChatMessageData);
}
}
break;
}
}
return result;
}
export function useAiSdkChat({
sessionId,
initialMessages = [],
onOperationStarted,
onStreamingChange,
}: UseAiSdkChatOptions) {
const [isRegionBlockedModalOpen, setIsRegionBlockedModalOpen] =
useState(false);
const previousSessionIdRef = useRef<string | null>(null);
const hasNotifiedOperationRef = useRef<Set<string>>(new Set());
// Create transport with session-specific endpoint
const transport = useMemo(() => {
if (!sessionId) return undefined;
return new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
headers: {
"Content-Type": "application/json",
},
});
}, [sessionId]);
const {
messages: aiMessages,
status,
error,
stop,
setMessages,
sendMessage: aiSendMessage,
} = useAiSdkChatBase({
transport,
initialMessages,
onError: (err) => {
console.error("[useAiSdkChat] Error:", err);
// Check for region blocking
if (
err.message?.toLowerCase().includes("not available in your region") ||
(err as { code?: string }).code === "MODEL_NOT_AVAILABLE_REGION"
) {
setIsRegionBlockedModalOpen(true);
return;
}
toast.error("Chat Error", {
description: err.message || "An error occurred",
});
},
onFinish: ({ message }) => {
console.info("[useAiSdkChat] Message finished:", {
id: message.id,
partsCount: message.parts.length,
});
},
});
// Track streaming status
const isStreaming = status === "streaming" || status === "submitted";
// Notify parent of streaming changes
useEffect(() => {
onStreamingChange?.(isStreaming);
}, [isStreaming, onStreamingChange]);
// Handle session changes - reset state
useEffect(() => {
if (sessionId === previousSessionIdRef.current) return;
if (previousSessionIdRef.current && status === "streaming") {
stop();
}
previousSessionIdRef.current = sessionId;
hasNotifiedOperationRef.current = new Set();
if (sessionId) {
setMessages(initialMessages);
}
}, [sessionId, status, stop, setMessages, initialMessages]);
// Convert AI SDK messages to ChatMessageData format
const messages = useMemo(() => {
const result: ChatMessageData[] = [];
for (const message of aiMessages) {
const converted = convertMessageToChatData(message);
result.push(...converted);
// Check for operation_started and notify
for (const msg of converted) {
if (
msg.type === "operation_started" &&
!hasNotifiedOperationRef.current.has(
(msg as { toolId?: string }).toolId || "",
)
) {
hasNotifiedOperationRef.current.add(
(msg as { toolId?: string }).toolId || "",
);
onOperationStarted?.();
}
}
}
return result;
}, [aiMessages, onOperationStarted]);
// Get streaming text chunks from the last assistant message
const streamingChunks = useMemo(() => {
if (!isStreaming) return [];
const lastMessage = aiMessages[aiMessages.length - 1];
if (!lastMessage || lastMessage.role !== "assistant") return [];
const chunks: string[] = [];
for (const part of lastMessage.parts) {
if (part.type === "text" && part.text) {
chunks.push(part.text);
}
}
return chunks;
}, [aiMessages, isStreaming]);
// Send message with optional context
const sendMessage = useCallback(
async (
content: string,
isUserMessage: boolean = true,
context?: { url: string; content: string },
) => {
if (!sessionId || !transport) {
console.error("[useAiSdkChat] Cannot send message: no session");
return;
}
setIsRegionBlockedModalOpen(false);
try {
await aiSendMessage(
{ text: content },
{
body: {
is_user_message: isUserMessage,
context: context || null,
},
},
);
} catch (err) {
console.error("[useAiSdkChat] Failed to send message:", err);
if (err instanceof Error && err.name === "AbortError") return;
toast.error("Failed to send message", {
description:
err instanceof Error ? err.message : "Failed to send message",
});
}
},
[sessionId, transport, aiSendMessage],
);
// Stop streaming
const stopStreaming = useCallback(() => {
stop();
}, [stop]);
return {
messages,
streamingChunks,
isStreaming,
error,
status,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
// Expose raw AI SDK state for advanced use cases
aiMessages,
setAiMessages: setMessages,
};
}
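// Usage sketch (hypothetical consumer, not an existing call site): the
// transport above posts to /api/chat/sessions/{id}/stream, which per the
// header comment already speaks the AI SDK data stream protocol, so a caller
// just drives the hook. The wrapper name below is illustrative only.
export function useAiSdkChatExample(sessionId: string | null) {
  const { messages, isStreaming, sendMessage, stopStreaming } = useAiSdkChat({
    sessionId,
  });
  // A UI would render `messages`, disable its input while `isStreaming`, call
  // sendMessage(text) on submit, and offer stopStreaming to cancel mid-stream.
  return { messages, isStreaming, sendMessage, stopStreaming };
}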

View File

@@ -59,12 +59,13 @@ test.describe("Library", () => {
});
test("pagination works correctly", async ({ page }, testInfo) => {
test.setTimeout(testInfo.timeout * 3);
await page.goto("/library");
const PAGE_SIZE = 20;
const paginationResult = await libraryPage.testPagination();
if (paginationResult.initialCount >= PAGE_SIZE) {
expect(paginationResult.finalCount).toBeGreaterThanOrEqual(
paginationResult.initialCount,
);
@@ -133,7 +134,10 @@ test.describe("Library", () => {
test.expect(clearedSearchValue).toBe("");
});
test("pagination while searching works correctly", async ({ page }) => {
test("pagination while searching works correctly", async ({
page,
}, testInfo) => {
test.setTimeout(testInfo.timeout * 3);
await page.goto("/library");
const allAgents = await libraryPage.getAgents();
@@ -152,9 +156,10 @@ test.describe("Library", () => {
);
expect(matchingResults.length).toEqual(initialSearchResults.length);
const PAGE_SIZE = 20;
const searchPaginationResult = await libraryPage.testPagination();
if (searchPaginationResult.initialCount >= PAGE_SIZE) {
expect(searchPaginationResult.finalCount).toBeGreaterThanOrEqual(
searchPaginationResult.initialCount,
);

View File

@@ -69,9 +69,12 @@ test.describe("Marketplace Creator Page Basic Functionality", () => {
await marketplacePage.getFirstCreatorProfile(page);
await firstCreatorProfile.click();
await page.waitForURL("**/marketplace/creator/**");
await page.waitForLoadState("networkidle").catch(() => {});
const firstAgent = page
.locator('[data-testid="store-card"]:visible')
.first();
await firstAgent.waitFor({ state: "visible", timeout: 30000 });
await firstAgent.click();
await page.waitForURL("**/marketplace/agent/**");

View File

@@ -77,7 +77,6 @@ test.describe("Marketplace Basic Functionality", () => {
const firstFeaturedAgent =
await marketplacePage.getFirstFeaturedAgent(page);
await firstFeaturedAgent.click();
await page.waitForURL("**/marketplace/agent/**");
await matchesUrl(page, /\/marketplace\/agent\/.+/);
@@ -116,7 +115,15 @@ test.describe("Marketplace Basic Functionality", () => {
const searchTerm = page.getByText("DummyInput").first();
await isVisible(searchTerm);
await page.waitForLoadState("networkidle").catch(() => {});
await page
.waitForFunction(
() =>
document.querySelectorAll('[data-testid="store-card"]').length > 0,
{ timeout: 15000 },
)
.catch(() => console.log("No search results appeared within timeout"));
const results = await marketplacePage.getSearchResultsCount(page);
expect(results).toBeGreaterThan(0);

View File

@@ -300,21 +300,27 @@ export class LibraryPage extends BasePage {
async scrollToLoadMore(): Promise<void> {
console.log(`scrolling to load more agents`);
// Get initial agent count
const initialCount = await this.getAgentCountByListLength();
console.log(`Initial agent count (DOM cards): ${initialCount}`);
// Scroll down to trigger pagination
await this.scrollToBottom();
// Wait for potential new agents to load
await this.page
.waitForLoadState("networkidle", { timeout: 10000 })
.catch(() => console.log("Network idle timeout, continuing..."));
// Check if more agents loaded
await this.page
.waitForFunction(
(prevCount) =>
document.querySelectorAll('[data-testid="library-agent-card"]')
.length > prevCount,
initialCount,
{ timeout: 5000 },
)
.catch(() => {});
const newCount = await this.getAgentCountByListLength();
console.log(`New agent count after scroll (DOM cards): ${newCount}`);
}
async testPagination(): Promise<{

View File

@@ -9,6 +9,7 @@ export class MarketplacePage extends BasePage {
async goto(page: Page) {
await page.goto("/marketplace");
await page.waitForLoadState("networkidle").catch(() => {});
}
async getMarketplaceTitle(page: Page) {
@@ -109,16 +110,24 @@ export class MarketplacePage extends BasePage {
async getFirstFeaturedAgent(page: Page) {
const { getId } = getSelectors(page);
return getId("featured-store-card").first();
const card = getId("featured-store-card").first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
async getFirstTopAgent() {
const card = this.page
.locator('[data-testid="store-card"]:visible')
.first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
async getFirstCreatorProfile(page: Page) {
const { getId } = getSelectors(page);
return getId("creator-card").first();
const card = getId("creator-card").first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
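// Possible consolidation (sketch, not part of this change): the three getters
// above share the same wait-then-return pattern and could delegate to one
// helper like this (assumes `Locator` is imported from "@playwright/test").
private async firstVisibleCard(cards: Locator) {
const card = cards.first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}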
async getSearchResultsCount(page: Page) {

View File

@@ -1,15 +1,12 @@
[flake8]
max-line-length = 88
extend-ignore = E203
exclude =
.tox,
__pycache__,
*.pyc,
.env,
venv*/*,
.venv/*,
reports/*,
dist/*,
data/*,

View File

@@ -1,291 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
AutoGPT Classic is an experimental, **unsupported** project demonstrating autonomous GPT-4 operation. Dependencies will not be updated, and the codebase contains known vulnerabilities. This is preserved for educational/historical purposes.
## Repository Structure
```
classic/
├── pyproject.toml # Single consolidated Poetry project
├── poetry.lock # Single lock file
├── forge/
│ └── forge/ # Core agent framework package
├── original_autogpt/
│ └── autogpt/ # AutoGPT agent package
├── direct_benchmark/
│ └── direct_benchmark/ # Benchmark harness package
└── benchmark/ # Challenge definitions (data, not code)
```
All packages are managed by a single `pyproject.toml` at the classic/ root.
## Common Commands
### Setup & Install
```bash
# Install everything from classic/ directory
cd classic
poetry install
```
### Running Agents
```bash
# Run forge agent
poetry run python -m forge
# Run original autogpt server
poetry run serve --debug
# Run autogpt CLI
poetry run autogpt
```
Agents run on `http://localhost:8000` by default.
### Benchmarking
```bash
# Run benchmarks
poetry run direct-benchmark run
# Run specific strategies and models
poetry run direct-benchmark run \
--strategies one_shot,rewoo \
--models claude \
--parallel 4
# Run a single test
poetry run direct-benchmark run --tests ReadFile
# List available commands
poetry run direct-benchmark --help
```
### Testing
```bash
poetry run pytest # All tests
poetry run pytest forge/tests/ # Forge tests only
poetry run pytest original_autogpt/tests/ # AutoGPT tests only
poetry run pytest -k test_name # Single test by name
poetry run pytest path/to/test.py # Specific test file
poetry run pytest --cov # With coverage
```
### Linting & Formatting
Run from the classic/ directory:
```bash
# Format everything (recommended to run together)
poetry run black . && poetry run isort .
# Check formatting (CI-style, no changes)
poetry run black --check . && poetry run isort --check-only .
# Lint
poetry run flake8 # Style linting
# Type check
poetry run pyright # Type checking (some errors are expected in infrastructure code)
```
Note: Always run linters over the entire directory, not specific files, for best results.
## Architecture
### Forge (Core Framework)
The `forge` package is the foundation that other components depend on:
- `forge/agent/` - Agent implementation and protocols
- `forge/llm/` - Multi-provider LLM integrations (OpenAI, Anthropic, Groq, LiteLLM)
- `forge/components/` - Reusable agent components
- `forge/file_storage/` - File system abstraction
- `forge/config/` - Configuration management
### Original AutoGPT
- `original_autogpt/autogpt/app/` - CLI application entry points
- `original_autogpt/autogpt/agents/` - Agent implementations
- `original_autogpt/autogpt/agent_factory/` - Agent creation logic
### Direct Benchmark
Benchmark harness for testing agent performance:
- `direct_benchmark/direct_benchmark/` - CLI and harness code
- `benchmark/agbenchmark/challenges/` - Test cases organized by category (code, retrieval, data, etc.)
- Reports generated in `direct_benchmark/reports/`
### Package Structure
All three packages are included in a single Poetry project. Imports are fully qualified:
- `from forge.agent.base import BaseAgent`
- `from autogpt.agents.agent import Agent`
- `from direct_benchmark.harness import BenchmarkHarness`
## Code Style
- Python 3.12 target
- Line length: 88 characters (Black default)
- Black for formatting, isort for imports (profile="black")
- Type hints with Pyright checking
## Testing Patterns
- Async support via pytest-asyncio
- Fixtures defined in `conftest.py` files provide: `tmp_project_root`, `storage`, `config`, `llm_provider`, `agent`
- Tests requiring API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) will skip if not set
## Environment Setup
Copy `.env.example` to `.env` in the relevant directory and add your API keys:
```bash
cp .env.example .env
# Edit .env with your OPENAI_API_KEY, etc.
```
## Workspaces
Agents operate within a **workspace** - a directory containing all agent data and files. The workspace root defaults to the current working directory.
### Workspace Structure
```
{workspace}/
├── .autogpt/
│ ├── autogpt.yaml # Workspace-level permissions
│ ├── ap_server.db # Agent Protocol database (server mode)
│ └── agents/
│ └── AutoGPT-{agent_id}/
│ ├── state.json # Agent profile, directives, action history
│ ├── permissions.yaml # Agent-specific permission overrides
│ └── workspace/ # Agent's sandboxed working directory
```
### Key Concepts
- **Multiple agents** can coexist in the same workspace (each gets its own subdirectory)
- **File access** is sandboxed to the agent's `workspace/` directory by default
- **State persistence** - agent state saves to `state.json` and survives across sessions
- **Storage backends** - supports local filesystem, S3, and GCS (via `FILE_STORAGE_BACKEND` env var)
### Specifying a Workspace
```bash
# Default: uses current directory
cd /path/to/my/project && poetry run autogpt
# Or specify explicitly via CLI (if supported)
poetry run autogpt --workspace /path/to/workspace
```
## Settings Location
Configuration uses a **layered system** with three levels (in order of precedence):
### 1. Environment Variables (Global)
Loaded from `.env` file in the working directory:
```bash
# Required
OPENAI_API_KEY=sk-...
# Optional LLM settings
SMART_LLM=gpt-4o # Model for complex reasoning
FAST_LLM=gpt-4o-mini # Model for simple tasks
EMBEDDING_MODEL=text-embedding-3-small
# Optional search providers (for web search component)
TAVILY_API_KEY=tvly-...
SERPER_API_KEY=...
GOOGLE_API_KEY=...
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=...
# Optional infrastructure
LOG_LEVEL=DEBUG # DEBUG, INFO, WARNING, ERROR
DATABASE_STRING=sqlite:///agent.db # Agent Protocol database
PORT=8000 # Server port
FILE_STORAGE_BACKEND=local # local, s3, or gcs
```
### 2. Workspace Settings (`{workspace}/.autogpt/autogpt.yaml`)
Workspace-wide permissions that apply to **all agents** in this workspace:
```yaml
allow:
- read_file({workspace}/**)
- write_to_file({workspace}/**)
- list_folder({workspace}/**)
- web_search(*)
deny:
- read_file(**.env)
- read_file(**.env.*)
- read_file(**.key)
- read_file(**.pem)
- execute_shell(rm -rf:*)
- execute_shell(sudo:*)
```
Auto-generated with sensible defaults if missing.
### 3. Agent Settings (`{workspace}/.autogpt/agents/{id}/permissions.yaml`)
Agent-specific permission overrides:
```yaml
allow:
- execute_python(*)
- web_search(*)
deny:
- execute_shell(*)
```
## Permissions
The permission system uses **pattern matching** with a **first-match-wins** evaluation order.
### Permission Check Order
1. Agent deny list → **Block**
2. Workspace deny list → **Block**
3. Agent allow list → **Allow**
4. Workspace allow list → **Allow**
5. Session denied list → **Block** (commands denied during this session)
6. **Prompt user** → Interactive approval (if in interactive mode)
### Pattern Syntax
Format: `command_name(glob_pattern)`
| Pattern | Description |
|---------|-------------|
| `read_file({workspace}/**)` | Read any file in workspace (recursive) |
| `write_to_file({workspace}/*.txt)` | Write only .txt files in workspace root |
| `execute_shell(python:**)` | Execute Python commands only |
| `execute_shell(git:*)` | Execute any git command |
| `web_search(*)` | Allow all web searches |
Special tokens:
- `{workspace}` - Replaced with actual workspace path
- `**` - Matches any path including `/`
- `*` - Matches any characters except `/`
### Interactive Approval Scopes
When prompted for permission, users can choose:
| Scope | Effect |
|-------|--------|
| **Once** | Allow this one time only (not saved) |
| **Agent** | Always allow for this agent (saves to agent `permissions.yaml`) |
| **Workspace** | Always allow for all agents (saves to `autogpt.yaml`) |
| **Deny** | Deny this command (saves to appropriate deny list) |
### Default Security
Out of the box, the following are **denied by default**:
- Reading sensitive files (`.env`, `.key`, `.pem`)
- Destructive shell commands (`rm -rf`, `sudo`)
- Operations outside the workspace directory

182
classic/CLI-USAGE.md Executable file
View File

@@ -0,0 +1,182 @@
## CLI Documentation
This document describes how to interact with the project's CLI (Command Line Interface). It includes the types of outputs you can expect from each command. Note that the `agent stop` command will terminate any process running on port 8000.
### 1. Entry Point for the CLI
Running the `./run` command without any parameters will display the help message, which provides a list of available commands and options. Additionally, you can append `--help` to any command to view help information specific to that command.
```sh
./run
```
**Output**:
```
Usage: cli.py [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
agent Commands to create, start and stop agents
benchmark Commands to start the benchmark and list tests and categories
setup Installs dependencies needed for your system.
```
If you need assistance with any command, simply add the `--help` parameter to the end of your command, like so:
```sh
./run COMMAND --help
```
This will display a detailed help message regarding that specific command, including a list of any additional options and arguments it accepts.
### 2. Setup Command
```sh
./run setup
```
**Output**:
```
Setup initiated
Installation has been completed.
```
This command initializes the setup of the project.
### 3. Agents Commands
**a. List All Agents**
```sh
./run agent list
```
**Output**:
```
Available agents: 🤖
🐙 forge
🐙 autogpt
```
Lists all the available agents.
**b. Create a New Agent**
```sh
./run agent create my_agent
```
**Output**:
```
🎉 New agent 'my_agent' created and switched to the new directory in agents folder.
```
Creates a new agent named 'my_agent'.
**c. Start an Agent**
```sh
./run agent start my_agent
```
**Output**:
```
... (ASCII Art representing the agent startup)
[Date and Time] [forge.sdk.db] [DEBUG] 🐛 Initializing AgentDB with database_string: sqlite:///agent.db
[Date and Time] [forge.sdk.agent] [INFO] 📝 Agent server starting on http://0.0.0.0:8000
```
Starts the 'my_agent' and displays startup ASCII art and logs.
**d. Stop an Agent**
```sh
./run agent stop
```
**Output**:
```
Agent stopped
```
Stops the running agent.
### 4. Benchmark Commands
**a. List Benchmark Categories**
```sh
./run benchmark categories list
```
**Output**:
```
Available categories: 📚
📖 code
📖 safety
📖 memory
... (and so on)
```
Lists all available benchmark categories.
**b. List Benchmark Tests**
```sh
./run benchmark tests list
```
**Output**:
```
Available tests: 📚
📖 interface
🔬 Search - TestSearch
🔬 Write File - TestWriteFile
... (and so on)
```
Lists all available benchmark tests.
**c. Show Details of a Benchmark Test**
```sh
./run benchmark tests details TestWriteFile
```
**Output**:
```
TestWriteFile
-------------
Category: interface
Task: Write the word 'Washington' to a .txt file
... (and other details)
```
Displays the details of the 'TestWriteFile' benchmark test.
**d. Start Benchmark for the Agent**
```sh
./run benchmark start my_agent
```
**Output**:
```
(more details about the testing process are shown while the tests are running)
============= 13 failed, 1 passed in 0.97s ============...
```
Displays the results of the benchmark tests on 'my_agent'.

View File

@@ -2,7 +2,7 @@
ARG BUILD_TYPE=dev
# Use an official Python base image from the Docker Hub
FROM python:3.10-slim AS autogpt-base
# Install browsers
RUN apt-get update && apt-get install -y \
@@ -34,6 +34,9 @@ COPY original_autogpt/pyproject.toml original_autogpt/poetry.lock ./
# Include forge so it can be used as a path dependency
COPY forge/ ../forge
# Include frontend
COPY frontend/ ../frontend
# Set the entrypoint
ENTRYPOINT ["poetry", "run", "autogpt"]
CMD []

173
classic/FORGE-QUICKSTART.md Normal file
View File

@@ -0,0 +1,173 @@
# Quickstart Guide
> For the complete getting started tutorial series, see [this guide](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec).
Welcome to the Quickstart Guide! This guide will walk you through setting up, building, and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the steps to jumpstart your journey in AI development with AutoGPT.
## System Requirements
This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux (WSL). If you use a Windows system, you must install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
## Getting Setup
1. **Fork the Repository**
To fork the repository, follow these steps:
- Navigate to the main page of the repository.
![Repository](../docs/content/imgs/quickstart/001_repo.png)
- In the top-right corner of the page, click Fork.
![Create Fork UI](../docs/content/imgs/quickstart/002_fork.png)
- On the next page, select your GitHub account to create the fork.
- Wait for the forking process to complete. You now have a copy of the repository in your GitHub account.
2. **Clone the Repository**
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
- Open your terminal.
- Navigate to the directory where you want to clone the repository.
- Run the `git clone` command for the fork you just created.
![Clone the Repository](../docs/content/imgs/quickstart/003_clone.png)
- Then open the project in your IDE.
![Open the Project in your IDE](../docs/content/imgs/quickstart/004_ide.png)
3. **Setup the Project**
Next, we need to set up the required dependencies. We have a tool to help you perform all the tasks on the repo.
It can be accessed by typing `./run` in the terminal.
The first command you need to use is `./run setup`. This will guide you through setting up your system.
Initially, you will get instructions for installing Flutter and Chrome and setting up your GitHub access token, as shown in the following image:
![Setup the Project](../docs/content/imgs/quickstart/005_setup.png)
### For Windows Users
If you're a Windows user and experience issues after installing WSL, follow the steps below to resolve them.
#### Update WSL
Run the following command in PowerShell or Command Prompt. It will:
1. Enable the optional WSL and Virtual Machine Platform components.
2. Download and install the latest Linux kernel.
3. Set WSL 2 as the default.
4. Download and install the Ubuntu Linux distribution (a reboot may be required).
```shell
wsl --install
```
For more detailed information and additional steps, refer to [Microsoft's WSL Setup Environment Documentation](https://learn.microsoft.com/en-us/windows/wsl/setup/environment).
#### Resolve FileNotFoundError or "No such file or directory" Errors
When you run `./run setup`, if you encounter errors like `No such file or directory` or `FileNotFoundError`, it might be because Windows-style line endings (CRLF - Carriage Return Line Feed) are not compatible with Unix/Linux style line endings (LF - Line Feed).
To resolve this, you can use the `dos2unix` utility to convert the line endings in your script from CRLF to LF. Here's how to install and run `dos2unix` on the script:
```shell
sudo apt update
sudo apt install dos2unix
dos2unix ./run
```
After executing the above commands, running `./run setup` should work successfully.
#### Store Project Files within the WSL File System
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids path translations and permissions issues and provides a more consistent development environment.
You can re-run `./run setup` at any time to check how far along your setup is.
When setup is complete, the command will produce output like this:
![Setup Complete](../docs/content/imgs/quickstart/006_setup_complete.png)
## Creating Your Agent
After completing the setup, the next step is to create your agent template.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with your chosen name.
Tips for naming your agent:
* Give it its own unique name, or name it after yourself
* Include an important aspect of your agent in the name, such as its purpose
Examples: `SwiftyosAssistant`, `PwutsPRAgent`, `MySuperAgent`
![Create an Agent](../docs/content/imgs/quickstart/007_create_agent.png)
## Running your Agent
Your agent can be started using the command: `./run agent start YOUR_AGENT_NAME`
This starts the agent at `http://localhost:8000/`.
![Start the Agent](../docs/content/imgs/quickstart/009_start_agent.png)
The frontend can be accessed at `http://localhost:8000/`; first, you must log in with either a Google or GitHub account.
![Login](../docs/content/imgs/quickstart/010_login.png)
Upon logging in, you will see a page that looks something like this, with your task history down the left-hand side of the page and the 'chat' window for sending tasks to your agent.
![Home](../docs/content/imgs/quickstart/011_home.png)
When you have finished with your agent, or just need to restart it, use Ctrl+C to end the session. Then you can re-run the start command.
If you are having issues and want to ensure the agent has stopped, there is a `./run agent stop` command, which kills the process using port 8000 (which should be the agent).
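If you would rather script against your agent than use the web UI, it also speaks the Agent Protocol over HTTP. Below is a minimal sketch, assuming the default `/ap/v1` route prefix and an agent on port 8000; the exact response fields can vary between agents:
```python
import requests

BASE = "http://localhost:8000/ap/v1"

# Create a task for the agent to work on
task = requests.post(f"{BASE}/agent/tasks", json={"input": "Say hello"}).json()
task_id = task["task_id"]

# Step the agent until it reports the final step
while True:
    step = requests.post(f"{BASE}/agent/tasks/{task_id}/steps", json={}).json()
    print(step.get("output"))
    if step.get("is_last"):
        break
```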
## Benchmarking your Agent
The benchmarking system can also be accessed through the CLI:
```bash
agpt % ./run benchmark
Usage: cli.py benchmark [OPTIONS] COMMAND [ARGS]...

  Commands to start the benchmark and list tests and categories

Options:
  --help  Show this message and exit.

Commands:
  categories  Benchmark categories group command
  start       Starts the benchmark command
  tests       Benchmark tests group command

agpt % ./run benchmark categories
Usage: cli.py benchmark categories [OPTIONS] COMMAND [ARGS]...

  Benchmark categories group command

Options:
  --help  Show this message and exit.

Commands:
  list  List benchmark categories command

agpt % ./run benchmark tests
Usage: cli.py benchmark tests [OPTIONS] COMMAND [ARGS]...

  Benchmark tests group command

Options:
  --help  Show this message and exit.

Commands:
  details  Benchmark test details command
  list     List benchmark tests command
```
The benchmark has been split into different categories of skills you can test your agent on. You can see what categories are available with
```bash
./run benchmark categories list
# And what tests are available with
./run benchmark tests list
```
![Tests](../docs/content/imgs/quickstart/012_tests.png)
Finally, you can run the benchmark with
```bash
./run benchmark start YOUR_AGENT_NAME
```

@@ -4,7 +4,7 @@ AutoGPT Classic was an experimental project to demonstrate autonomous GPT-4 oper
## Project Status
**This project is unsupported, and dependencies will not be updated.** It was an experiment that has concluded its initial research phase. If you want to use AutoGPT, you should use the [AutoGPT Platform](/autogpt_platform).
⚠️ **This project is unsupported, and dependencies will not be updated. It was an experiment that has concluded its initial research phase. If you want to use AutoGPT, you should use the [AutoGPT Platform](/autogpt_platform)**
For those interested in autonomous AI agents, we recommend exploring more actively maintained alternatives or referring to this codebase for educational purposes only.
@@ -16,171 +16,37 @@ AutoGPT Classic was one of the first implementations of autonomous AI agents - A
- Learn from the results and adjust its approach
- Chain multiple actions together to achieve an objective
## Key Features
- 🔄 Autonomous task chaining
- 🛠 Tool and API integration capabilities
- 💾 Memory management for context retention
- 🔍 Web browsing and information gathering
- 📝 File operations and content creation
- 🔄 Self-prompting and task breakdown
## Structure
```
classic/
├── pyproject.toml # Single consolidated Poetry project
├── poetry.lock # Single lock file
├── forge/ # Core autonomous agent framework
├── original_autogpt/ # Original implementation
├── direct_benchmark/ # Benchmark harness
└── benchmark/ # Challenge definitions (data)
```
The project is organized into several key components:
- `/benchmark` - Performance testing tools
- `/forge` - Core autonomous agent framework
- `/frontend` - User interface components
- `/original_autogpt` - Original implementation
## Getting Started
### Prerequisites
- Python 3.12+
- [Poetry](https://python-poetry.org/docs/#installation)
### Installation
While this project is no longer actively maintained, you can still explore the codebase:
1. Clone the repository:
```bash
# Clone the repository
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/classic
# Install everything
poetry install
```
### Configuration
Configuration uses a layered system:
1. **Environment variables** (`.env` file)
2. **Workspace settings** (`.autogpt/autogpt.yaml`)
3. **Agent settings** (`.autogpt/agents/{id}/permissions.yaml`)
Copy the example environment file and add your API keys:
```bash
cp .env.example .env
```
Key environment variables:
```bash
# Required
OPENAI_API_KEY=sk-...
# Optional LLM settings
SMART_LLM=gpt-4o # Model for complex reasoning
FAST_LLM=gpt-4o-mini # Model for simple tasks
# Optional search providers
TAVILY_API_KEY=tvly-...
SERPER_API_KEY=...
# Optional infrastructure
LOG_LEVEL=DEBUG
PORT=8000
FILE_STORAGE_BACKEND=local # local, s3, or gcs
```
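The layers are merged so that more specific settings can override more general ones. Here is a minimal sketch of how such a lookup could work, assuming later (more specific) layers win; the helper name and merge details are illustrative, not the actual implementation:
```python
import os
from pathlib import Path

import yaml  # PyYAML


def load_settings(workspace: Path, agent_id: str) -> dict:
    """Merge the three config layers; later (more specific) layers override."""
    settings: dict = dict(os.environ)  # 1. environment variables (from .env)
    layer_files = [
        workspace / ".autogpt" / "autogpt.yaml",  # 2. workspace settings
        workspace / ".autogpt" / "agents" / agent_id / "permissions.yaml",  # 3. agent
    ]
    for layer in layer_files:
        if layer.exists():
            settings.update(yaml.safe_load(layer.read_text()) or {})
    return settings
```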
### Running
All commands run from the `classic/` directory:
```bash
# Run forge agent
poetry run python -m forge
# Run original autogpt server
poetry run serve --debug
# Run autogpt CLI
poetry run autogpt
```
Agents run on `http://localhost:8000` by default.
### Benchmarking
```bash
poetry run direct-benchmark run
```
### Testing
```bash
poetry run pytest # All tests
poetry run pytest forge/tests/ # Forge tests only
poetry run pytest original_autogpt/tests/ # AutoGPT tests only
```
## Workspaces
Agents operate within a **workspace** directory that contains all agent data and files:
```
{workspace}/
├── .autogpt/
│ ├── autogpt.yaml # Workspace-level permissions
│ ├── ap_server.db # Agent Protocol database (server mode)
│ └── agents/
│ └── AutoGPT-{agent_id}/
│ ├── state.json # Agent profile, directives, history
│ ├── permissions.yaml # Agent-specific permissions
│ └── workspace/ # Agent's sandboxed working directory
```
- The workspace defaults to the current working directory
- Multiple agents can coexist in the same workspace
- Agent file access is sandboxed to their `workspace/` subdirectory
- State persists across sessions via `state.json`
## Permissions
AutoGPT uses a **layered permission system** with pattern matching:
### Permission Files
| File | Scope | Location |
|------|-------|----------|
| `autogpt.yaml` | All agents in workspace | `.autogpt/autogpt.yaml` |
| `permissions.yaml` | Single agent | `.autogpt/agents/{id}/permissions.yaml` |
### Permission Format
```yaml
allow:
- read_file({workspace}/**) # Read any file in workspace
- write_to_file({workspace}/**) # Write any file in workspace
- web_search(*) # All web searches
deny:
- read_file(**.env) # Block .env files
- execute_shell(sudo:*) # Block sudo commands
```
### Check Order (First Match Wins)
1. Agent deny → Block
2. Workspace deny → Block
3. Agent allow → Allow
4. Workspace allow → Allow
5. Prompt user → Interactive approval
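In code, that resolution could look roughly like the sketch below. It is illustrative only: it assumes the rule lists have already been loaded from the YAML files above, and it uses plain `fnmatch` globbing where the real matcher also handles `{workspace}` expansion:
```python
from fnmatch import fnmatch


def resolve(call: str, agent: dict, workspace: dict) -> str:
    """First match wins; `call` is e.g. 'read_file(/ws/notes.txt)'."""
    layers = [
        (agent.get("deny", []), "deny"),        # 1. Agent deny -> block
        (workspace.get("deny", []), "deny"),    # 2. Workspace deny -> block
        (agent.get("allow", []), "allow"),      # 3. Agent allow -> allow
        (workspace.get("allow", []), "allow"),  # 4. Workspace allow -> allow
    ]
    for patterns, verdict in layers:
        if any(fnmatch(call, pattern) for pattern in patterns):
            return verdict
    return "prompt"  # 5. No rule matched -> ask the user interactively
```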
### Interactive Approval
When prompted, users can approve commands with different scopes:
- **Once** - Allow this one time only
- **Agent** - Always allow for this agent
- **Workspace** - Always allow for all agents
- **Deny** - Block this command
### Default Security
Denied by default:
- Sensitive files (`.env`, `.key`, `.pem`)
- Destructive commands (`rm -rf`, `sudo`)
- Operations outside the workspace
## Security Notice
This codebase has **known vulnerabilities** and issues with its dependencies, and it will not be updated to newer dependencies. Use it for educational purposes only.
2. Review the documentation:
- For reference, see the [documentation](https://docs.agpt.co). You can browse the docs as they were at the time of this commit, so they won't have changed.
- Check `CLI-USAGE.md` for command-line interface details
- Refer to `TROUBLESHOOTING.md` for common issues
## License
@@ -189,3 +55,27 @@ This project segment is licensed under the MIT License - see the [LICENSE](LICEN
## Documentation
Please refer to the [documentation](https://docs.agpt.co) for more detailed information about the project's architecture and concepts.
You can browse the docs as they were at the time of this commit, so they won't have changed.
## Historical Impact
AutoGPT Classic played a significant role in advancing the field of autonomous AI agents:
- Demonstrated practical implementation of AI autonomy
- Inspired numerous derivative projects and research
- Contributed to the development of AI agent architectures
- Helped identify key challenges in AI autonomy
## Security Notice
If you're studying this codebase, please understand that it has KNOWN vulnerabilities and issues with its dependencies. It will not be updated to newer dependencies.
## Community & Support
While active development has concluded:
- The codebase remains available for study and reference
- Historical discussions can be found in project issues
- Related research and developments continue in the broader AI agent community
## Acknowledgments
Thanks to all contributors who participated in this experimental project and helped advance the field of autonomous AI agents.


@@ -0,0 +1,4 @@
AGENT_NAME=mini-agi
REPORTS_FOLDER="reports/mini-agi"
OPENAI_API_KEY="sk-" # for LLM eval
BUILD_SKILL_TREE=false # set to true to build the skill tree.

classic/benchmark/.flake8

@@ -0,0 +1,12 @@
[flake8]
max-line-length = 88
# Ignore rules that conflict with Black code style
extend-ignore = E203, W503
exclude =
    __pycache__/,
    *.pyc,
    .pytest_cache/,
    venv*/,
    .venv/,
    reports/,
    agbenchmark/reports/,

classic/benchmark/.gitignore

@@ -0,0 +1,174 @@
agbenchmark_config/workspace/
backend/backend_stdout.txt
reports/df*.pkl
reports/raw*
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
.DS_Store
secrets.json
agbenchmark_config/challenges_already_beaten.json
agbenchmark_config/challenges/pri_*
agbenchmark_config/updates.json
agbenchmark_config/reports/*
agbenchmark_config/reports/success_rate.json
agbenchmark_config/reports/regression_tests.json

classic/benchmark/LICENSE

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 AutoGPT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,25 @@
# Auto-GPT Benchmarks
Built to benchmark the performance of agents, regardless of how they work.
Objectively know how well your agent is performing in categories like code, retrieval, memory, and safety.
Save time and money while doing it through smart dependencies. The best part? It's all automated.
## Scores:
<img width="733" alt="Screenshot 2023-07-25 at 10 35 01 AM" src="https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/assets/9652976/98963e0b-18b9-4b17-9a6a-4d3e4418af70">
## Ranking overall:
1. [Beebot](https://github.com/AutoPackAI/beebot)
2. [mini-agi](https://github.com/muellerberndt/mini-agi)
3. [Auto-GPT](https://github.com/Significant-Gravitas/AutoGPT)
## Detailed results:
<img width="733" alt="Screenshot 2023-07-25 at 10 42 15 AM" src="https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/assets/9652976/39be464c-c842-4437-b28a-07d878542a83">
[Click here to see the results and the raw data!](https://docs.google.com/spreadsheets/d/1WXm16P2AHNbKpkOI0LYBpcsGG0O7D8HYTG5Uj0PaJjA/edit#gid=203558751)
More agents coming soon!


@@ -0,0 +1,69 @@
## As a user
1. `pip install auto-gpt-benchmarks`
2. Add boilerplate code to run and kill your agent (see the sketch after this list)
3. `agbenchmark`
- `--category challenge_category` to run tests in a specific category
- `--mock` to run only the mock tests, where they exist for each test
- `--noreg` to skip any tests that have passed in the past. If you run without this flag and a previously passing challenge fails, it will no longer count as a regression test
4. The benchmark calls the boilerplate code for your agent
5. Show pass rate of tests, logs, and any other metrics
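The boilerplate in steps 2 and 4 is simply code that starts your agent before the benchmark and kills it afterwards. A minimal sketch; the module name and port are placeholders, not agbenchmark APIs:
```python
import subprocess

# Start the agent's API server, run the benchmark against it, then shut it down
agent = subprocess.Popen(["python", "-m", "my_agent", "--port", "8000"])
try:
    subprocess.run(["agbenchmark"], check=True)
finally:
    agent.terminate()
    agent.wait()
```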
## Contributing
##### Diagrams: https://whimsical.com/agbenchmark-5n4hXBq1ZGzBwRsK4TVY7x
### To run the existing mocks
1. clone the repo `auto-gpt-benchmarks`
2. `pip install poetry`
3. `poetry shell`
4. `poetry install`
5. `cp .env_example .env`
6. `git submodule update --init --remote --recursive`
7. `uvicorn server:app --reload`
8. `agbenchmark --mock`
Keep config the same and watch the logs :)
### To run with mini-agi
1. Navigate to `auto-gpt-benchmarks/agent/mini-agi`
2. `pip install -r requirements.txt`
3. `cp .env_example .env`, set `PROMPT_USER=false` and add your `OPENAI_API_KEY=`. Set `MODEL="gpt-3.5-turbo"` if you don't have access to `gpt-4` yet. Also make sure you have Python 3.10+ installed
4. Set `AGENT_NAME=mini-agi` in the `.env` file, along with where you want your `REPORTS_FOLDER` to be
5. Follow the commands above, then run `agbenchmark` without the `--mock` flag
- To add a dependency, use `poetry add <package>`.
Feel free to create PRs to merge with `main` at will (but also feel free to ask for review); if you can't, send a message in the R&D chat for access.
If you push at any point and break things (it'll happen to everyone), fix it ASAP. Step 1 is to revert `master` to the last working commit.
Let people know what your beautiful code does, and document everything well.
Share your progress :)
#### Dataset
Manually created challenges, existing challenges within Auto-GPT, and https://osu-nlp-group.github.io/Mind2Web/
## How do I add new agents to agbenchmark?
Example with smol developer:
1. Create a GitHub branch with your agent, following the same pattern as this example:
https://github.com/smol-ai/developer/pull/114/files
2. Create the submodule and the GitHub workflow, following the same pattern as this example:
https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/pull/48/files
## How do I run an agent in different environments?
**To just use the benchmark for your agent**, `pip install` the package and run `agbenchmark`.
**For internal Auto-GPT CI runs**, specify the `AGENT_NAME` you want to use and set the `HOME_ENV`.
Ex. `AGENT_NAME=mini-agi`
**To develop an agent alongside the benchmark**, specify the `AGENT_NAME` you want to use and add it as a submodule to the repo.


@@ -0,0 +1,352 @@
import logging
import os
import sys
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Optional
import click
from click_default_group import DefaultGroup
from dotenv import load_dotenv
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.logging import configure_logging
load_dotenv()
# try:
# if os.getenv("HELICONE_API_KEY"):
# import helicone # noqa
# helicone_enabled = True
# else:
# helicone_enabled = False
# except ImportError:
# helicone_enabled = False
class InvalidInvocationError(ValueError):
pass
logger = logging.getLogger(__name__)
BENCHMARK_START_TIME_DT = datetime.now(timezone.utc)
BENCHMARK_START_TIME = BENCHMARK_START_TIME_DT.strftime("%Y-%m-%dT%H:%M:%S+00:00")
# if helicone_enabled:
# from helicone.lock import HeliconeLockManager
# HeliconeLockManager.write_custom_property(
# "benchmark_start_time", BENCHMARK_START_TIME
# )
@click.group(cls=DefaultGroup, default_if_no_args=True)
@click.option("--debug", is_flag=True, help="Enable debug output")
def cli(
debug: bool,
) -> Any:
configure_logging(logging.DEBUG if debug else logging.INFO)
@cli.command(hidden=True)
def start():
raise DeprecationWarning(
"`agbenchmark start` is deprecated. Use `agbenchmark run` instead."
)
@cli.command(default=True)
@click.option(
"-N", "--attempts", default=1, help="Number of times to run each challenge."
)
@click.option(
"-c",
"--category",
multiple=True,
help="(+) Select a category to run.",
)
@click.option(
"-s",
"--skip-category",
multiple=True,
help="(+) Exclude a category from running.",
)
@click.option("--test", multiple=True, help="(+) Select a test to run.")
@click.option("--maintain", is_flag=True, help="Run only regression tests.")
@click.option("--improve", is_flag=True, help="Run only non-regression tests.")
@click.option(
"--explore",
is_flag=True,
help="Run only challenges that have never been beaten.",
)
@click.option(
"--no-dep",
is_flag=True,
help="Run all (selected) challenges, regardless of dependency success/failure.",
)
@click.option("--cutoff", type=int, help="Override the challenge time limit (seconds).")
@click.option("--nc", is_flag=True, help="Disable the challenge time limit.")
@click.option("--mock", is_flag=True, help="Run with mock")
@click.option("--keep-answers", is_flag=True, help="Keep answers")
@click.option(
"--backend",
is_flag=True,
help="Write log output to a file instead of the terminal.",
)
# @click.argument(
# "agent_path",
# type=click.Path(exists=True, file_okay=False, path_type=Path),
# required=False,
# )
def run(
maintain: bool,
improve: bool,
explore: bool,
mock: bool,
no_dep: bool,
nc: bool,
keep_answers: bool,
test: tuple[str],
category: tuple[str],
skip_category: tuple[str],
attempts: int,
cutoff: Optional[int] = None,
backend: Optional[bool] = False,
# agent_path: Optional[Path] = None,
) -> None:
"""
Run the benchmark on the agent in the current directory.
Options marked with (+) can be specified multiple times, to select multiple items.
"""
from agbenchmark.main import run_benchmark, validate_args
agbenchmark_config = AgentBenchmarkConfig.load()
logger.debug(f"agbenchmark_config: {agbenchmark_config.agbenchmark_config_dir}")
try:
validate_args(
maintain=maintain,
improve=improve,
explore=explore,
tests=test,
categories=category,
skip_categories=skip_category,
no_cutoff=nc,
cutoff=cutoff,
)
except InvalidInvocationError as e:
logger.error("Error: " + "\n".join(e.args))
sys.exit(1)
original_stdout = sys.stdout # Save the original standard output
exit_code = None
if backend:
with open("backend/backend_stdout.txt", "w") as f:
sys.stdout = f
exit_code = run_benchmark(
config=agbenchmark_config,
maintain=maintain,
improve=improve,
explore=explore,
mock=mock,
no_dep=no_dep,
no_cutoff=nc,
keep_answers=keep_answers,
tests=test,
categories=category,
skip_categories=skip_category,
attempts_per_challenge=attempts,
cutoff=cutoff,
)
sys.stdout = original_stdout
else:
exit_code = run_benchmark(
config=agbenchmark_config,
maintain=maintain,
improve=improve,
explore=explore,
mock=mock,
no_dep=no_dep,
no_cutoff=nc,
keep_answers=keep_answers,
tests=test,
categories=category,
skip_categories=skip_category,
attempts_per_challenge=attempts,
cutoff=cutoff,
)
sys.exit(exit_code)
@cli.command()
@click.option("--port", type=int, help="Port to run the API on.")
def serve(port: Optional[int] = None):
"""Serve the benchmark frontend and API on port 8080."""
import uvicorn
from agbenchmark.app import setup_fastapi_app
config = AgentBenchmarkConfig.load()
app = setup_fastapi_app(config)
# Run the FastAPI application using uvicorn
port = port or int(os.getenv("PORT", 8080))
uvicorn.run(app, host="0.0.0.0", port=port)
@cli.command()
def config():
"""Displays info regarding the present AGBenchmark config."""
from .utils.utils import pretty_print_model
try:
config = AgentBenchmarkConfig.load()
except FileNotFoundError as e:
click.echo(e, err=True)
return 1
pretty_print_model(config, include_header=False)
@cli.group()
def challenge():
logging.getLogger().setLevel(logging.WARNING)
@challenge.command("list")
@click.option(
"--all", "include_unavailable", is_flag=True, help="Include unavailable challenges."
)
@click.option(
"--names", "only_names", is_flag=True, help="List only the challenge names."
)
@click.option("--json", "output_json", is_flag=True)
def list_challenges(include_unavailable: bool, only_names: bool, output_json: bool):
"""Lists [available|all] challenges."""
import json
from tabulate import tabulate
from .challenges.builtin import load_builtin_challenges
from .challenges.webarena import load_webarena_challenges
from .utils.data_types import Category, DifficultyLevel
from .utils.utils import sorted_by_enum_index
DIFFICULTY_COLORS = {
difficulty: color
for difficulty, color in zip(
DifficultyLevel,
["black", "blue", "cyan", "green", "yellow", "red", "magenta", "white"],
)
}
CATEGORY_COLORS = {
category: f"bright_{color}"
for category, color in zip(
Category,
["blue", "cyan", "green", "yellow", "magenta", "red", "white", "black"],
)
}
# Load challenges
challenges = filter(
lambda c: c.info.available or include_unavailable,
[
*load_builtin_challenges(),
*load_webarena_challenges(skip_unavailable=False),
],
)
challenges = sorted_by_enum_index(
challenges, DifficultyLevel, key=lambda c: c.info.difficulty
)
if only_names:
if output_json:
click.echo(json.dumps([c.info.name for c in challenges]))
return
for c in challenges:
click.echo(
click.style(c.info.name, fg=None if c.info.available else "black")
)
return
if output_json:
click.echo(
json.dumps([json.loads(c.info.model_dump_json()) for c in challenges])
)
return
headers = tuple(
click.style(h, bold=True) for h in ("Name", "Difficulty", "Categories")
)
table = [
tuple(
v if challenge.info.available else click.style(v, fg="black")
for v in (
challenge.info.name,
(
click.style(
challenge.info.difficulty.value,
fg=DIFFICULTY_COLORS[challenge.info.difficulty],
)
if challenge.info.difficulty
else click.style("-", fg="black")
),
" ".join(
click.style(cat.value, fg=CATEGORY_COLORS[cat])
for cat in sorted_by_enum_index(challenge.info.category, Category)
),
)
)
for challenge in challenges
]
click.echo(tabulate(table, headers=headers))
@challenge.command()
@click.option("--json", is_flag=True)
@click.argument("name")
def info(name: str, json: bool):
from itertools import chain
from .challenges.builtin import load_builtin_challenges
from .challenges.webarena import load_webarena_challenges
from .utils.utils import pretty_print_model
for challenge in chain(
load_builtin_challenges(),
load_webarena_challenges(skip_unavailable=False),
):
if challenge.info.name != name:
continue
if json:
click.echo(challenge.info.model_dump_json())
break
pretty_print_model(challenge.info)
break
else:
click.echo(click.style(f"Unknown challenge '{name}'", fg="red"), err=True)
@cli.command()
def version():
"""Print version info for the AGBenchmark application."""
import toml
package_root = Path(__file__).resolve().parent.parent
pyproject = toml.load(package_root / "pyproject.toml")
version = pyproject["tool"]["poetry"]["version"]
click.echo(f"AGBenchmark version {version}")
if __name__ == "__main__":
cli()


@@ -0,0 +1,111 @@
import logging
import time
from pathlib import Path
from typing import AsyncIterator, Optional
from agent_protocol_client import (
AgentApi,
ApiClient,
Configuration,
Step,
TaskRequestBody,
)
from agbenchmark.agent_interface import get_list_of_file_paths
from agbenchmark.config import AgentBenchmarkConfig
logger = logging.getLogger(__name__)
async def run_api_agent(
task: str,
config: AgentBenchmarkConfig,
timeout: int,
artifacts_location: Optional[Path] = None,
*,
mock: bool = False,
) -> AsyncIterator[Step]:
configuration = Configuration(host=config.host)
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
task_request_body = TaskRequestBody(input=task, additional_input=None)
start_time = time.time()
response = await api_instance.create_agent_task(
task_request_body=task_request_body
)
task_id = response.task_id
if artifacts_location:
logger.debug("Uploading task input artifacts to agent...")
await upload_artifacts(
api_instance, artifacts_location, task_id, "artifacts_in"
)
logger.debug("Running agent until finished or timeout...")
while True:
step = await api_instance.execute_agent_task_step(task_id=task_id)
yield step
if time.time() - start_time > timeout:
raise TimeoutError("Time limit exceeded")
if step and mock:
step.is_last = True
if not step or step.is_last:
break
if artifacts_location:
# In "mock" mode, we cheat by giving the correct artifacts to pass the test
if mock:
logger.debug("Uploading mock artifacts to agent...")
await upload_artifacts(
api_instance, artifacts_location, task_id, "artifacts_out"
)
logger.debug("Downloading agent artifacts...")
await download_agent_artifacts_into_folder(
api_instance, task_id, config.temp_folder
)
async def download_agent_artifacts_into_folder(
api_instance: AgentApi, task_id: str, folder: Path
):
artifacts = await api_instance.list_agent_task_artifacts(task_id=task_id)
for artifact in artifacts.artifacts:
        # Resolve this artifact's destination without mutating the root
        # `folder` across loop iterations (otherwise each artifact's
        # relative path would compound onto the previous one's).
        dest_folder = folder
        if artifact.relative_path:
            path: str = (
                artifact.relative_path
                if not artifact.relative_path.startswith("/")
                else artifact.relative_path[1:]
            )
            dest_folder = (folder / path).parent
        if not dest_folder.exists():
            dest_folder.mkdir(parents=True)
        file_path = dest_folder / artifact.file_name
        logger.debug(f"Downloading agent artifact {artifact.file_name} to {dest_folder}")
with open(file_path, "wb") as f:
content = await api_instance.download_agent_task_artifact(
task_id=task_id, artifact_id=artifact.artifact_id
)
f.write(content)
async def upload_artifacts(
api_instance: AgentApi, artifacts_location: Path, task_id: str, type: str
) -> None:
for file_path in get_list_of_file_paths(artifacts_location, type):
relative_path: Optional[str] = "/".join(
str(file_path).split(f"{type}/", 1)[-1].split("/")[:-1]
)
if not relative_path:
relative_path = None
await api_instance.upload_agent_task_artifacts(
task_id=task_id, file=str(file_path), relative_path=relative_path
)


@@ -0,0 +1,27 @@
import os
import shutil
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
HELICONE_GRAPHQL_LOGS = os.getenv("HELICONE_GRAPHQL_LOGS", "").lower() == "true"
def get_list_of_file_paths(
challenge_dir_path: str | Path, artifact_folder_name: str
) -> list[Path]:
source_dir = Path(challenge_dir_path) / artifact_folder_name
if not source_dir.exists():
return []
return list(source_dir.iterdir())
def copy_challenge_artifacts_into_workspace(
challenge_dir_path: str | Path, artifact_folder_name: str, workspace: str | Path
) -> None:
file_paths = get_list_of_file_paths(challenge_dir_path, artifact_folder_name)
for file_path in file_paths:
if file_path.is_file():
shutil.copy(file_path, workspace)


@@ -0,0 +1,339 @@
import datetime
import glob
import json
import logging
import sys
import time
import uuid
from collections import deque
from multiprocessing import Process
from pathlib import Path
from typing import Optional
import httpx
import psutil
from agent_protocol_client import AgentApi, ApiClient, ApiException, Configuration
from agent_protocol_client.models import Task, TaskRequestBody
from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, ConfigDict, ValidationError
from agbenchmark.challenges import ChallengeInfo
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.reports.processing.report_types_v2 import (
BenchmarkRun,
Metrics,
RepositoryInfo,
RunDetails,
TaskInfo,
)
from agbenchmark.schema import TaskEvalRequestBody
from agbenchmark.utils.utils import write_pretty_json
sys.path.append(str(Path(__file__).parent.parent))
logger = logging.getLogger(__name__)
CHALLENGES: dict[str, ChallengeInfo] = {}
challenges_path = Path(__file__).parent / "challenges"
challenge_spec_files = deque(
glob.glob(
f"{challenges_path}/**/data.json",
recursive=True,
)
)
logger.debug("Loading challenges...")
while challenge_spec_files:
challenge_spec_file = Path(challenge_spec_files.popleft())
challenge_relpath = challenge_spec_file.relative_to(challenges_path.parent)
if challenge_relpath.is_relative_to("challenges/deprecated"):
continue
logger.debug(f"Loading {challenge_relpath}...")
try:
challenge_info = ChallengeInfo.model_validate_json(
challenge_spec_file.read_text()
)
except ValidationError as e:
if logging.getLogger().level == logging.DEBUG:
logger.warning(f"Spec file {challenge_relpath} failed to load:\n{e}")
logger.debug(f"Invalid challenge spec: {challenge_spec_file.read_text()}")
continue
if not challenge_info.eval_id:
challenge_info.eval_id = str(uuid.uuid4())
# this will sort all the keys of the JSON systematically
# so that the order is always the same
write_pretty_json(challenge_info.model_dump(), challenge_spec_file)
CHALLENGES[challenge_info.eval_id] = challenge_info
class BenchmarkTaskInfo(BaseModel):
task_id: str
start_time: datetime.datetime
challenge_info: ChallengeInfo
task_informations: dict[str, BenchmarkTaskInfo] = {}
def find_agbenchmark_without_uvicorn():
pids = []
for process in psutil.process_iter(
attrs=[
"pid",
"cmdline",
"name",
"username",
"status",
"cpu_percent",
"memory_info",
"create_time",
"cwd",
"connections",
]
):
try:
# Convert the process.info dictionary values to strings and concatenate them
full_info = " ".join([str(v) for k, v in process.as_dict().items()])
if "agbenchmark" in full_info and "uvicorn" not in full_info:
pids.append(process.pid)
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
return pids
class CreateReportRequest(BaseModel):
test: str
test_run_id: str
# category: Optional[str] = []
mock: Optional[bool] = False
model_config = ConfigDict(extra="forbid")
updates_list = []
origins = [
"http://localhost:8000",
"http://localhost:8080",
"http://127.0.0.1:5000",
"http://localhost:5000",
]
def stream_output(pipe):
for line in pipe:
print(line, end="")
def setup_fastapi_app(agbenchmark_config: AgentBenchmarkConfig) -> FastAPI:
from agbenchmark.agent_api_interface import upload_artifacts
from agbenchmark.challenges import get_challenge_from_source_uri
from agbenchmark.main import run_benchmark
configuration = Configuration(
host=agbenchmark_config.host or "http://localhost:8000"
)
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
router = APIRouter()
@router.post("/reports")
def run_single_test(body: CreateReportRequest) -> dict:
pids = find_agbenchmark_without_uvicorn()
logger.info(f"pids already running with agbenchmark: {pids}")
logger.debug(f"Request to /reports: {body.model_dump()}")
# Start the benchmark in a separate thread
benchmark_process = Process(
target=lambda: run_benchmark(
config=agbenchmark_config,
tests=(body.test,),
mock=body.mock or False,
)
)
benchmark_process.start()
# Wait for the benchmark to finish, with a timeout of 200 seconds
timeout = 200
start_time = time.time()
while benchmark_process.is_alive():
if time.time() - start_time > timeout:
logger.warning(f"Benchmark run timed out after {timeout} seconds")
benchmark_process.terminate()
break
time.sleep(1)
else:
logger.debug(f"Benchmark finished running in {time.time() - start_time} s")
# List all folders in the current working directory
reports_folder = agbenchmark_config.reports_folder
folders = [folder for folder in reports_folder.iterdir() if folder.is_dir()]
# Sort the folders based on their names
sorted_folders = sorted(folders, key=lambda x: x.name)
# Get the last folder
latest_folder = sorted_folders[-1] if sorted_folders else None
# Read report.json from this folder
if latest_folder:
report_path = latest_folder / "report.json"
logger.debug(f"Getting latest report from {report_path}")
if report_path.exists():
with report_path.open() as file:
data = json.load(file)
logger.debug(f"Report data: {data}")
else:
raise HTTPException(
502,
"Could not get result after running benchmark: "
f"'report.json' does not exist in '{latest_folder}'",
)
else:
raise HTTPException(
504, "Could not get result after running benchmark: no reports found"
)
return data
@router.post("/agent/tasks", tags=["agent"])
async def create_agent_task(task_eval_request: TaskEvalRequestBody) -> Task:
"""
Creates a new task using the provided TaskEvalRequestBody and returns a Task.
Args:
task_eval_request: `TaskRequestBody` including an eval_id.
Returns:
Task: A new task with task_id, input, additional_input,
and empty lists for artifacts and steps.
Example:
Request (TaskEvalRequestBody defined in schema.py):
{
...,
"eval_id": "50da533e-3904-4401-8a07-c49adf88b5eb"
}
Response (Task defined in `agent_protocol_client.models`):
{
"task_id": "50da533e-3904-4401-8a07-c49adf88b5eb",
"input": "Write the word 'Washington' to a .txt file",
"artifacts": []
}
"""
try:
challenge_info = CHALLENGES[task_eval_request.eval_id]
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
task_input = challenge_info.task
task_request_body = TaskRequestBody(
input=task_input, additional_input=None
)
task_response = await api_instance.create_agent_task(
task_request_body=task_request_body
)
task_info = BenchmarkTaskInfo(
task_id=task_response.task_id,
start_time=datetime.datetime.now(datetime.timezone.utc),
challenge_info=challenge_info,
)
task_informations[task_info.task_id] = task_info
if input_artifacts_dir := challenge_info.task_artifacts_dir:
await upload_artifacts(
api_instance,
input_artifacts_dir,
task_response.task_id,
"artifacts_in",
)
return task_response
except ApiException as e:
logger.error(f"Error whilst trying to create a task:\n{e}")
logger.error(
"The above error was caused while processing request: "
f"{task_eval_request}"
)
raise HTTPException(500)
@router.post("/agent/tasks/{task_id}/steps")
async def proxy(request: Request, task_id: str):
timeout = httpx.Timeout(300.0, read=300.0) # 5 minutes
async with httpx.AsyncClient(timeout=timeout) as client:
# Construct the new URL
new_url = f"{configuration.host}/ap/v1/agent/tasks/{task_id}/steps"
# Forward the request
response = await client.post(
new_url,
content=await request.body(),
headers=dict(request.headers),
)
# Return the response from the forwarded request
return Response(content=response.content, status_code=response.status_code)
@router.post("/agent/tasks/{task_id}/evaluations")
async def create_evaluation(task_id: str) -> BenchmarkRun:
task_info = task_informations[task_id]
challenge = get_challenge_from_source_uri(task_info.challenge_info.source_uri)
try:
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
eval_results = await challenge.evaluate_task_state(
api_instance, task_id
)
eval_info = BenchmarkRun(
repository_info=RepositoryInfo(),
run_details=RunDetails(
command=f"agbenchmark --test={challenge.info.name}",
benchmark_start_time=(
task_info.start_time.strftime("%Y-%m-%dT%H:%M:%S+00:00")
),
test_name=challenge.info.name,
),
task_info=TaskInfo(
data_path=challenge.info.source_uri,
is_regression=None,
category=[c.value for c in challenge.info.category],
task=challenge.info.task,
answer=challenge.info.reference_answer or "",
description=challenge.info.description or "",
),
metrics=Metrics(
success=all(e.passed for e in eval_results),
success_percentage=(
100 * sum(e.score for e in eval_results) / len(eval_results)
if eval_results # avoid division by 0
else 0
),
attempted=True,
),
config={},
)
logger.debug(
f"Returning evaluation data:\n{eval_info.model_dump_json(indent=4)}"
)
return eval_info
except ApiException as e:
logger.error(f"Error {e} whilst trying to evaluate task: {task_id}")
raise HTTPException(500)
app.include_router(router, prefix="/ap/v1")
return app


@@ -1,98 +1,19 @@
import logging
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, AsyncIterator, Awaitable, ClassVar, Optional
from typing import AsyncIterator, Awaitable, ClassVar, Optional
import pytest
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import Category, DifficultyLevel, EvalResult
from agent_protocol_client import AgentApi, Step
from colorama import Fore, Style
from pydantic import BaseModel, Field
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import Category, DifficultyLevel, EvalResult
logger = logging.getLogger(__name__)
def format_step_output(step: Step, step_num: int, challenge_name: str) -> str:
"""Format a step for concise, informative console output.
Format: [Challenge] step N: tool_name(args) result [$cost]
"""
parts = [f"[{challenge_name}]", f"step {step_num}:"]
# Get additional_output data
ao: dict[str, Any] = step.additional_output or {}
# Get the tool being used in this step
use_tool = ao.get("use_tool", {})
tool_name = use_tool.get("name", "")
tool_args = use_tool.get("arguments", {})
if tool_name:
# Format tool call with abbreviated arguments
args_str = _format_tool_args(tool_name, tool_args)
parts.append(f"{Fore.CYAN}{tool_name}{Fore.RESET}({args_str})")
else:
parts.append(f"{Fore.YELLOW}(no tool){Fore.RESET}")
# Get result from last action (this step's tool will be executed next iteration)
last_action = ao.get("last_action", {})
if last_action:
result = last_action.get("result", {})
if isinstance(result, dict):
if result.get("error"):
parts.append(f"{Fore.RED}error{Fore.RESET}")
elif result.get("status") == "success":
parts.append(f"{Fore.GREEN}{Fore.RESET}")
# Add cost if available
cost = ao.get("task_cumulative_cost", 0)
if cost > 0:
parts.append(f"{Fore.BLUE}${cost:.3f}{Fore.RESET}")
return " ".join(parts)
def _format_tool_args(tool_name: str, args: dict) -> str:
"""Format tool arguments for display, keeping it concise."""
if not args:
return ""
# For common tools, show the most relevant argument
key_args = {
"read_file": ["filename"],
"write_file": ["filename"],
"open_file": ["filename", "file_path"],
"execute_python": ["filename"],
"execute_shell": ["command_line"],
"web_search": ["query"],
"read_webpage": ["url"],
"finish": ["reason"],
"ask_user": ["question"],
"todo_write": [], # Skip args for todo_write (too verbose)
}
if tool_name in key_args:
keys = key_args[tool_name]
if not keys:
return "..."
values = [str(args.get(k, ""))[:40] for k in keys if k in args]
if values:
return ", ".join(
f'"{v}"' if " " not in v else f'"{v[:20]}..."' for v in values
)
# Default: show first arg value, abbreviated
if args:
first_key = next(iter(args))
first_val = str(args[first_key])[:30]
return f'{first_key}="{first_val}"' + (
"..." if len(str(args[first_key])) > 30 else ""
)
return ""
class ChallengeInfo(BaseModel):
eval_id: str = ""
name: str
@@ -174,7 +95,7 @@ class BaseChallenge(ABC):
cls.info.task, config, timeout, cls.info.task_artifacts_dir, mock=mock
):
i += 1
print(format_step_output(step, i, cls.info.name))
print(f"[{cls.info.name}] - step {step.name} ({i}. request)")
yield step
logger.debug(f"Finished {cls.info.name} challenge run")
@@ -182,4 +103,5 @@ class BaseChallenge(ABC):
@abstractmethod
async def evaluate_task_state(
cls, agent: AgentApi, task_id: str
) -> list[EvalResult]: ...
) -> list[EvalResult]:
...


@@ -10,16 +10,6 @@ from pathlib import Path
from typing import Annotated, Any, ClassVar, Iterator, Literal, Optional
import pytest
from agbenchmark.agent_api_interface import download_agent_artifacts_into_folder
from agbenchmark.agent_interface import copy_challenge_artifacts_into_workspace
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import Category, DifficultyLevel, EvalResult
from agbenchmark.utils.prompts import (
END_PROMPT,
FEW_SHOT_EXAMPLES,
PROMPT_MAP,
SCORING_MAP,
)
from agent_protocol_client import AgentApi, ApiClient
from agent_protocol_client import Configuration as ClientConfig
from agent_protocol_client import Step
@@ -33,6 +23,17 @@ from pydantic import (
field_validator,
)
from agbenchmark.agent_api_interface import download_agent_artifacts_into_folder
from agbenchmark.agent_interface import copy_challenge_artifacts_into_workspace
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import Category, DifficultyLevel, EvalResult
from agbenchmark.utils.prompts import (
END_PROMPT,
FEW_SHOT_EXAMPLES,
PROMPT_MAP,
SCORING_MAP,
)
from .base import BaseChallenge, ChallengeInfo
logger = logging.getLogger(__name__)
@@ -68,9 +69,9 @@ class BuiltinChallengeSpec(BaseModel):
class Eval(BaseModel):
type: str
scoring: Optional[Literal["percentage", "scale", "binary"]] = None
template: Optional[Literal["rubric", "reference", "question", "custom"]] = (
None
)
template: Optional[
Literal["rubric", "reference", "question", "custom"]
] = None
examples: Optional[str] = None
@field_validator("scoring", "template")
@@ -227,11 +228,9 @@ class BuiltinChallenge(BaseChallenge):
request.node.user_properties.append(
(
"answers",
(
[r.result for r in eval_results]
if request.config.getoption("--keep-answers")
else None
),
[r.result for r in eval_results]
if request.config.getoption("--keep-answers")
else None,
)
)
request.node.user_properties.append(("scores", [r.score for r in eval_results]))

Some files were not shown because too many files have changed in this diff.