Compare commits


5 Commits

Author                SHA1        Message                                                          Date
Nicholas Tindle       57985eb8ed  Merge branch 'master' into security/analysis-workflows-sandbox  2024-06-13 17:31:52 -05:00
Reinier van der Leer  575be818ca  Merge branch 'master' into security/analysis-workflows-sandbox  2024-01-29 18:29:24 +01:00
Reinier van der Leer  58762c7977  Update ossf-scorecard.yml                                        2023-07-10 18:51:28 +02:00
Reinier van der Leer  be69ad2a9f  Apparently this OSSF tool only supports the default branch      2023-07-10 18:49:10 +02:00
Reinier van der Leer  3b9a467deb  Add OSSF Scorecard                                               2023-07-10 18:44:18 +02:00
2848 changed files with 18007 additions and 129255 deletions


@@ -2,11 +2,11 @@
*
# AutoGPT
!original_autogpt/autogpt/
!original_autogpt/pyproject.toml
!original_autogpt/poetry.lock
!original_autogpt/README.md
!original_autogpt/tests/
!autogpt/autogpt/
!autogpt/pyproject.toml
!autogpt/poetry.lock
!autogpt/README.md
!autogpt/tests/
# Benchmark
!benchmark/agbenchmark/
@@ -15,7 +15,7 @@
!benchmark/README.md
# Forge
!forge/
!forge/forge/
!forge/pyproject.toml
!forge/poetry.lock
!forge/README.md

.gitattributes (7 changes, vendored)

@@ -1,10 +1,5 @@
classic/frontend/build/** linguist-generated
frontend/build/** linguist-generated
**/poetry.lock linguist-generated
docs/_javascript/** linguist-vendored
# Exclude VCR cassettes from stats
classic/forge/tests/vcr_cassettes/**/**.y*ml linguist-generated
* text=auto

.github/CODEOWNERS (12 changes, vendored)

@@ -1,7 +1,5 @@
* @Significant-Gravitas/maintainers
.github/workflows/ @Significant-Gravitas/devops
classic/forge/ @Significant-Gravitas/forge-maintainers
classic/benchmark/ @Significant-Gravitas/benchmark-maintainers
classic/frontend/ @Significant-Gravitas/frontend-maintainers
autogpt_platform/infra @Significant-Gravitas/devops
.github/CODEOWNERS @Significant-Gravitas/admins
.github/workflows/ @Significant-Gravitas/devops
autogpt/ @Significant-Gravitas/maintainers
forge/ @Significant-Gravitas/forge-maintainers
benchmark/ @Significant-Gravitas/benchmark-maintainers
frontend/ @Significant-Gravitas/frontend-maintainers


@@ -88,16 +88,14 @@ body:
- type: dropdown
attributes:
label: What LLM Provider do you use?
label: Do you use OpenAI GPT-3 or GPT-4?
description: >
If you are using AutoGPT with `SMART_LLM=gpt-3.5-turbo`, your problems may be caused by
the [limitations](https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
options:
- Azure
- Groq
- Anthropic
- Llamafile
- Other (detail in issue)
- GPT-3.5
- GPT-4
- GPT-4(32k)
validations:
required: true
@@ -128,13 +126,6 @@ body:
label: Specify the area
description: Please specify the area you think is best related to the issue.
- type: input
attributes:
label: What commit or version are you using?
description: It is helpful for us to reproduce to know what version of the software you were using when this happened. Please run `git log -n 1 --pretty=format:"%H"` to output the full commit hash.
validations:
required: true
- type: textarea
attributes:
label: Describe your issue.


@@ -6,18 +6,26 @@
<!-- Concisely describe all of the changes made in this pull request: -->
### Testing 🔍
> [!NOTE]
> Only for the new autogpt platform, currently in autogpt_platform/
### PR Quality Scorecard ✨
<!--
Please make sure your changes have been tested and are in good working condition.
Here is a list of our critical paths, if you need some inspiration on what and how to test:
Check out our contribution guide:
https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing
1. Avoid duplicate work, issues, PRs etc.
2. Also consider contributing something other than code; see the [contribution guide]
for options.
3. Clearly explain your changes.
4. Avoid making unnecessary changes, especially if they're purely based on personal
preferences. Doing so is the maintainers' job. ;-)
-->
- Create from scratch and execute an agent with at least 3 blocks
- Import an agent from file upload, and confirm it executes correctly
- Upload agent to marketplace
- Import an agent from marketplace and confirm it executes correctly
- Edit an agent from monitor, and confirm it executes correctly
- [x] Have you used the PR description template? &ensp; `+2 pts`
- [ ] Is your pull request atomic, focusing on a single change? &ensp; `+5 pts`
- [ ] Have you linked the GitHub issue(s) that this PR addresses? &ensp; `+5 pts`
- [ ] Have you documented your changes clearly and comprehensively? &ensp; `+5 pts`
- [ ] Have you changed or added a feature? &ensp; `-4 pts`
- [ ] Have you added/updated corresponding documentation? &ensp; `+4 pts`
- [ ] Have you added/updated corresponding integration tests? &ensp; `+5 pts`
- [ ] Have you changed the behavior of AutoGPT? &ensp; `-5 pts`
- [ ] Have you also run `agbenchmark` to verify that these changes do not regress performance? &ensp; `+10 pts`

.github/labeler.yml (16 changes, vendored)

@@ -1,27 +1,19 @@
AutoGPT Agent:
- changed-files:
- any-glob-to-any-file: classic/original_autogpt/**
- any-glob-to-any-file: autogpt/**
Forge:
- changed-files:
- any-glob-to-any-file: classic/forge/**
- any-glob-to-any-file: forge/**
Benchmark:
- changed-files:
- any-glob-to-any-file: classic/benchmark/**
- any-glob-to-any-file: benchmark/**
Frontend:
- changed-files:
- any-glob-to-any-file: classic/frontend/**
- any-glob-to-any-file: frontend/**
documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Builder:
- changed-files:
- any-glob-to-any-file: autogpt_platform/autogpt_builder/**
Server:
- changed-files:
- any-glob-to-any-file: autogpt_platform/autogpt_server/**


@@ -1,27 +1,27 @@
name: Classic - Forge CI
name: AutoGPT CI
on:
push:
branches: [ master, development, ci-test* ]
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
- '.github/workflows/autogpt-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
- '.github/workflows/autogpt-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
concurrency:
group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
group: ${{ format('autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: classic/forge
working-directory: autogpt
jobs:
test:
@@ -68,6 +68,11 @@ jobs:
fetch-depth: 0
submodules: true
- name: Configure git user Auto-GPT-Bot
run: |
git config --global user.name "Auto-GPT-Bot"
git config --global user.email "github-bot@agpt.co"
- name: Checkout cassettes
if: ${{ startsWith(github.event_name, 'pull_request') }}
env:
@@ -104,13 +109,17 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/forge/poetry.lock') }}
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
@@ -137,9 +146,9 @@ jobs:
- name: Run pytest with coverage
run: |
poetry run pytest -vv \
--cov=forge --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
forge
--cov=autogpt --cov-branch --cov-report term-missing --cov-report xml \
--numprocesses=logical --durations=10 \
tests/unit tests/integration
env:
CI: true
PLAIN_OUTPUT: True
@@ -152,7 +161,7 @@ jobs:
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: forge,${{ runner.os }}
flags: autogpt-agent,${{ runner.os }}
- id: setup_git_auth
name: Set up git token authentication
@@ -233,4 +242,4 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/forge/logs/
path: autogpt/logs/


@@ -1,4 +1,4 @@
name: Classic - Purge Auto-GPT Docker CI cache
name: Purge Auto-GPT Docker CI cache
on:
schedule:
@@ -25,8 +25,7 @@ jobs:
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: classic/Dockerfile.autogpt
file: Dockerfile.autogpt
build-args: BUILD_TYPE=${{ matrix.build-type }}
load: true # save to docker images
# use GHA cache as read-only


@@ -1,26 +1,26 @@
name: Classic - AutoGPT Docker CI
name: AutoGPT Docker CI
on:
push:
branches: [ master, development ]
paths:
- '.github/workflows/classic-autogpt-docker-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- '.github/workflows/autogpt-docker-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/classic-autogpt-docker-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- '.github/workflows/autogpt-docker-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
concurrency:
group: ${{ format('classic-autogpt-docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
group: ${{ format('autogpt-docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
working-directory: classic/original_autogpt
working-directory: autogpt
env:
IMAGE_NAME: auto-gpt
@@ -49,8 +49,7 @@ jobs:
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: classic/Dockerfile.autogpt
file: Dockerfile.autogpt
build-args: BUILD_TYPE=${{ matrix.build-type }}
tags: ${{ env.IMAGE_NAME }}
labels: GIT_REVISION=${{ github.sha }}
@@ -119,8 +118,7 @@ jobs:
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: classic/Dockerfile.autogpt
file: Dockerfile.autogpt
build-args: BUILD_TYPE=dev # include pytest
tags: >
${{ env.IMAGE_NAME }},


@@ -1,4 +1,4 @@
name: Classic - AutoGPT Docker Release
name: AutoGPT Docker Release
on:
release:
@@ -44,7 +44,6 @@ jobs:
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: Dockerfile.autogpt
build-args: BUILD_TYPE=release
load: true # save to docker images

.github/workflows/autogpt-server-ci.yml (268 changes, new file)

@@ -0,0 +1,268 @@
name: AutoGPT Server CI
on:
push:
branches: [master, development, ci-test*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "rnd/autogpt_server/**"
- "!autogpt/tests/vcr_cassettes"
pull_request:
branches: [master, development, release-*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "rnd/autogpt_server/**"
- "!autogpt/tests/vcr_cassettes"
concurrency:
group: ${{ format('autogpt-server-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: rnd/autogpt_server
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
working-directory: "."
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('rnd/autogpt_server/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
- name: Run pytest with coverage
run: |
poetry run pytest -vv \
test
env:
CI: true
PLAIN_OUTPUT: True
# - name: Upload coverage reports to Codecov
# uses: codecov/codecov-action@v4
# with:
# token: ${{ secrets.CODECOV_TOKEN }}
# flags: autogpt-server,${{ runner.os }}
build:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('rnd/autogpt_server/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
- name: install rpm
if: matrix.platform-os == 'ubuntu'
run: sudo apt-get install -y alien fakeroot rpm
- name: Build distribution
run: |
case "${{ matrix.platform-os }}" in
"macos" | "macos-arm64")
${MAC_COMMAND}
;;
"windows")
${WINDOWS_COMMAND}
;;
*)
${LINUX_COMMAND}
;;
esac
env:
MAC_COMMAND: "poetry run poe dist_dmg"
WINDOWS_COMMAND: "poetry run poe dist_msi"
LINUX_COMMAND: "poetry run poe dist_appimage"
# break this into separate steps, each with its own name that matches the file
- name: Upload App artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-app-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.app
- name: Upload dmg artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-dmg-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/AutoGPTServer.dmg
- name: Upload msi artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-msi-${{ matrix.platform-os }}
path: D:\a\AutoGPT\AutoGPT\rnd\autogpt_server\dist\*.msi
- name: Upload deb artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-deb-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.deb
- name: Upload rpm artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-rpm-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.rpm
- name: Upload tar.gz artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-tar.gz-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.tar.gz
- name: Upload zip artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-zip-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.zip
- name: Upload pkg artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-pkg-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.pkg
- name: Upload AppImage artifact
uses: actions/upload-artifact@v4
with:
name: autogptserver-AppImage-${{ matrix.platform-os }}
path: /Users/runner/work/AutoGPT/AutoGPT/rnd/autogpt_server/build/*.AppImage

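For context on the MinIO steps in the job above: the workflow starts a throwaway object-storage server on port 9000 so storage-dependent tests have something to talk to. A minimal connectivity check in Python, assuming MinIO's documented default `minioadmin` credentials (the actual test code is not part of this diff):

from minio import Minio  # pip install minio

# Connect to the CI MinIO service started above (plain HTTP on port 9000).
client = Minio(
    "localhost:9000",
    access_key="minioadmin",  # assumed default credentials
    secret_key="minioadmin",  # assumed default credentials
    secure=False,
)

# Round-trip check: create a bucket if needed and list what exists.
if not client.bucket_exists("ci-smoke-test"):
    client.make_bucket("ci-smoke-test")
print([bucket.name for bucket in client.list_buckets()])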

@@ -0,0 +1,97 @@
name: AutoGPTs Nightly Benchmark
on:
workflow_dispatch:
schedule:
- cron: '0 2 * * *'
jobs:
benchmark:
permissions:
contents: write
runs-on: ubuntu-latest
strategy:
matrix:
agent-name: [ autogpt ]
fail-fast: false
timeout-minutes: 120
env:
min-python-version: '3.10'
REPORTS_BRANCH: data/benchmark-reports
REPORTS_FOLDER: ${{ format('benchmark/reports/{0}', matrix.agent-name) }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ env.min-python-version }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python -
- name: Prepare reports folder
run: mkdir -p ${{ env.REPORTS_FOLDER }}
- run: poetry -C benchmark install
- name: Benchmark ${{ matrix.agent-name }}
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
set +e # Do not quit on non-zero exit codes
poetry run agbenchmark run -N 3 \
--test=ReadFile \
--test=BasicRetrieval --test=RevenueRetrieval2 \
--test=CombineCsv --test=LabelCsv --test=AnswerQuestionCombineCsv \
--test=UrlShortener --test=TicTacToe --test=Battleship \
--test=WebArenaTask_0 --test=WebArenaTask_21 --test=WebArenaTask_124 \
--test=WebArenaTask_134 --test=WebArenaTask_163
# Convert exit code 1 (some challenges failed) to exit code 0
if [ $? -eq 0 ] || [ $? -eq 1 ]; then
exit 0
else
exit $?
fi
env:
AGENT_NAME: ${{ matrix.agent-name }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
REPORTS_FOLDER: ${{ format('../../{0}', env.REPORTS_FOLDER) }} # account for changed workdir
TELEMETRY_ENVIRONMENT: autogpt-benchmark-ci
TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}
- name: Push reports to data branch
run: |
# BODGE: Remove success_rate.json and regression_tests.json to avoid conflicts on checkout
rm ${{ env.REPORTS_FOLDER }}/*.json
# Find folder with newest (untracked) report in it
report_subfolder=$(find ${{ env.REPORTS_FOLDER }} -type f -name 'report.json' \
| xargs -I {} dirname {} \
| xargs -I {} git ls-files --others --exclude-standard {} \
| xargs -I {} dirname {} \
| sort -u)
json_report_file="$report_subfolder/report.json"
# Convert JSON report to Markdown
markdown_report_file="$report_subfolder/report.md"
poetry -C benchmark run benchmark/reports/format.py "$json_report_file" > "$markdown_report_file"
cat "$markdown_report_file" >> $GITHUB_STEP_SUMMARY
git config --global user.name 'GitHub Actions'
git config --global user.email 'github-actions@agpt.co'
git fetch origin ${{ env.REPORTS_BRANCH }}:${{ env.REPORTS_BRANCH }} \
&& git checkout ${{ env.REPORTS_BRANCH }} \
|| git checkout --orphan ${{ env.REPORTS_BRANCH }}
git reset --hard
git add ${{ env.REPORTS_FOLDER }}
git commit -m "Benchmark report for ${{ matrix.agent-name }} @ $(date +'%Y-%m-%d')" \
&& git push origin ${{ env.REPORTS_BRANCH }}


@@ -1,4 +1,4 @@
name: Classic - Agent smoke tests
name: Agent smoke tests
on:
workflow_dispatch:
@@ -7,37 +7,32 @@ on:
push:
branches: [ master, development, ci-test* ]
paths:
- '.github/workflows/classic-autogpts-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '.github/workflows/autogpts-ci.yml'
- 'autogpt/**'
- 'forge/**'
- 'benchmark/**'
- 'run'
- 'cli.py'
- 'setup.py'
- '!**/*.md'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/classic-autogpts-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '.github/workflows/autogpts-ci.yml'
- 'autogpt/**'
- 'forge/**'
- 'benchmark/**'
- 'run'
- 'cli.py'
- 'setup.py'
- '!**/*.md'
defaults:
run:
shell: bash
working-directory: classic
jobs:
serve-agent-protocol:
runs-on: ubuntu-latest
strategy:
matrix:
agent-name: [ original_autogpt ]
agent-name: [ autogpt ]
fail-fast: false
timeout-minutes: 20
env:
@@ -55,7 +50,7 @@ jobs:
python-version: ${{ env.min-python-version }}
- name: Install Poetry
working-directory: ./classic/${{ matrix.agent-name }}/
working-directory: ./${{ matrix.agent-name }}/
run: |
curl -sSL https://install.python-poetry.org | python -


@@ -1,18 +1,18 @@
name: Classic - AGBenchmark CI
name: AGBenchmark CI
on:
push:
branches: [ master, development, ci-test* ]
paths:
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- .github/workflows/classic-benchmark-ci.yml
- 'benchmark/**'
- .github/workflows/benchmark-ci.yml
- '!benchmark/reports/**'
pull_request:
branches: [ master, development, release-* ]
paths:
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- .github/workflows/classic-benchmark-ci.yml
- 'benchmark/**'
- '!benchmark/reports/**'
- .github/workflows/benchmark-ci.yml
concurrency:
group: ${{ format('benchmark-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -39,7 +39,7 @@ jobs:
defaults:
run:
shell: bash
working-directory: classic/benchmark
working-directory: benchmark
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -58,7 +58,7 @@ jobs:
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/benchmark/poetry.lock') }}
key: poetry-${{ runner.os }}-${{ hashFiles('benchmark/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
@@ -122,7 +122,7 @@ jobs:
curl -sSL https://install.python-poetry.org | python -
- name: Run regression tests
working-directory: classic
working-directory: .
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
@@ -155,7 +155,7 @@ jobs:
poetry run agbenchmark --mock
CHANGED=$(git diff --name-only | grep -E '(agclassic/benchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../frontend/assets)') || echo "No diffs"
if [ ! -z "$CHANGED" ]; then
echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
echo "$CHANGED"


@@ -1,4 +1,4 @@
name: Classic - Publish to PyPI
name: Publish to PyPI
on:
workflow_dispatch:
@@ -21,21 +21,21 @@ jobs:
python-version: 3.8
- name: Install Poetry
working-directory: ./classic/benchmark/
working-directory: ./benchmark/
run: |
curl -sSL https://install.python-poetry.org | python3 -
echo "$HOME/.poetry/bin" >> $GITHUB_PATH
- name: Build project for distribution
working-directory: ./classic/benchmark/
working-directory: ./benchmark/
run: poetry build
- name: Install dependencies
working-directory: ./classic/benchmark/
working-directory: ./benchmark/
run: poetry install
- name: Check Version
working-directory: ./classic/benchmark/
working-directory: ./benchmark/
id: check-version
run: |
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
@@ -43,7 +43,7 @@ jobs:
- name: Create Release
uses: ncipollo/release-action@v1
with:
artifacts: "classic/benchmark/dist/*"
artifacts: "benchmark/dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: false
@@ -51,5 +51,5 @@ jobs:
commit: master
- name: Build and publish
working-directory: ./classic/benchmark/
working-directory: ./benchmark/
run: poetry publish -u __token__ -p ${{ secrets.PYPI_API_TOKEN }}


@@ -1,4 +1,4 @@
name: Repo - Close stale issues
name: 'Close stale issues'
on:
schedule:
- cron: '30 1 * * *'


@@ -1,25 +1,25 @@
name: Classic - AutoGPT CI
name: Forge CI
on:
push:
branches: [ master, development, ci-test* ]
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- '.github/workflows/forge-ci.yml'
- 'forge/**'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- '.github/workflows/forge-ci.yml'
- 'forge/**'
concurrency:
group: ${{ format('classic-autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: classic/original_autogpt
working-directory: forge
jobs:
test:
@@ -66,27 +66,18 @@ jobs:
fetch-depth: 0
submodules: true
- name: Configure git user Auto-GPT-Bot
run: |
git config --global user.name "Auto-GPT-Bot"
git config --global user.email "github-bot@agpt.co"
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/original_autogpt/poetry.lock') }}
key: poetry-${{ runner.os }}-${{ hashFiles('forge/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
@@ -113,9 +104,9 @@ jobs:
- name: Run pytest with coverage
run: |
poetry run pytest -vv \
--cov=autogpt --cov-branch --cov-report term-missing --cov-report xml \
--numprocesses=logical --durations=10 \
tests/unit tests/integration
--cov=forge --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
forge
env:
CI: true
PLAIN_OUTPUT: True
@@ -128,11 +119,11 @@ jobs:
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: autogpt-agent,${{ runner.os }}
flags: forge,${{ runner.os }}
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/original_autogpt/logs/
path: forge/logs/


@@ -1,4 +1,4 @@
name: Classic - Frontend CI/CD
name: Frontend CI/CD
on:
push:
@@ -7,11 +7,11 @@ on:
- development
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
- 'frontend/**'
- '.github/workflows/frontend-ci.yml'
pull_request:
paths:
- 'classic/frontend/**'
- 'frontend/**'
- '.github/workflows/frontend-ci.yml'
jobs:
@@ -34,7 +34,7 @@ jobs:
- name: Build Flutter to Web
run: |
cd classic/frontend
cd frontend
flutter build web --base-href /app/
# - name: Commit and Push to ${{ env.BUILD_BRANCH }}
@@ -42,7 +42,7 @@ jobs:
# run: |
# git config --local user.email "action@github.com"
# git config --local user.name "GitHub Action"
# git add classic/frontend/build/web
# git add frontend/build/web
# git checkout -B ${{ env.BUILD_BRANCH }}
# git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
# git push -f origin ${{ env.BUILD_BRANCH }}
@@ -51,7 +51,7 @@ jobs:
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v6
with:
add-paths: classic/frontend/build/web
add-paths: frontend/build/web
base: ${{ github.ref_name }}
branch: ${{ env.BUILD_BRANCH }}
delete-branch: true

.github/workflows/hackathon.yml (133 changes, new file)

@@ -0,0 +1,133 @@
name: Hackathon
on:
workflow_dispatch:
inputs:
agents:
description: "Agents to run (comma-separated)"
required: false
default: "autogpt" # Default agents if none are specified
jobs:
matrix-setup:
runs-on: ubuntu-latest
# Service containers to run with `matrix-setup`
services:
# Label used to access the service container
postgres:
# Docker Hub image
image: postgres
# Provide the password for postgres
env:
POSTGRES_PASSWORD: postgres
# Set health checks to wait until postgres has started
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
# Maps tcp port 5432 on service container to the host
- 5432:5432
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
env-name: ${{ steps.set-matrix.outputs.env-name }}
steps:
- id: set-matrix
run: |
if [ "${{ github.event_name }}" == "schedule" ]; then
echo "::set-output name=env-name::production"
echo "::set-output name=matrix::[ 'irrelevant']"
elif [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
IFS=',' read -ra matrix_array <<< "${{ github.event.inputs.agents }}"
matrix_string="[ \"$(echo "${matrix_array[@]}" | sed 's/ /", "/g')\" ]"
echo "::set-output name=env-name::production"
echo "::set-output name=matrix::$matrix_string"
else
echo "::set-output name=env-name::testing"
echo "::set-output name=matrix::[ 'irrelevant' ]"
fi
tests:
environment:
name: "${{ needs.matrix-setup.outputs.env-name }}"
needs: matrix-setup
env:
min-python-version: "3.10"
name: "${{ matrix.agent-name }}"
runs-on: ubuntu-latest
services:
# Label used to access the service container
postgres:
# Docker Hub image
image: postgres
# Provide the password for postgres
env:
POSTGRES_PASSWORD: postgres
# Set health checks to wait until postgres has started
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
# Maps tcp port 5432 on service container to the host
- 5432:5432
timeout-minutes: 50
strategy:
fail-fast: false
matrix:
agent-name: ${{fromJson(needs.matrix-setup.outputs.matrix)}}
steps:
- name: Print Environment Name
run: |
echo "Matrix Setup Environment Name: ${{ needs.matrix-setup.outputs.env-name }}"
- name: Check Docker Container
id: check
run: docker ps
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ env.min-python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Install Node.js
uses: actions/setup-node@v4
with:
node-version: v18.15
- name: Run benchmark
run: |
link=$(jq -r '.["github_repo_url"]' arena/$AGENT_NAME.json)
branch=$(jq -r '.["branch_to_benchmark"]' arena/$AGENT_NAME.json)
git clone "$link" -b "$branch" "$AGENT_NAME"
cd $AGENT_NAME
cp ./$AGENT_NAME/.env.example ./$AGENT_NAME/.env || echo "file not found"
./run agent start $AGENT_NAME
cd ../benchmark
poetry install
poetry run agbenchmark --no-dep
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
SERP_API_KEY: ${{ secrets.SERP_API_KEY }}
SERPAPI_API_KEY: ${{ secrets.SERP_API_KEY }}
WEAVIATE_API_KEY: ${{ secrets.WEAVIATE_API_KEY }}
WEAVIATE_URL: ${{ secrets.WEAVIATE_URL }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_CUSTOM_SEARCH_ENGINE_ID: ${{ secrets.GOOGLE_CUSTOM_SEARCH_ENGINE_ID }}
AGENT_NAME: ${{ matrix.agent-name }}

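The "Run benchmark" step above resolves each agent's repository and branch from arena/$AGENT_NAME.json with jq. The same lookup as a Python sketch, using only the two keys the workflow itself reads (the rest of the file's schema is not shown in this diff):

import json
import subprocess

agent_name = "autogpt"
with open(f"arena/{agent_name}.json") as f:
    arena_entry = json.load(f)

# Keys mirrored from the jq calls in the workflow step above.
repo_url = arena_entry["github_repo_url"]
branch = arena_entry["branch_to_benchmark"]
subprocess.run(["git", "clone", repo_url, "-b", branch, agent_name], check=True)
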
.github/workflows/ossf-scorecard.yml (68 changes, new file)

@@ -0,0 +1,68 @@
name: "Security - OSSF Scorecard"
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '28 12 * * 2'
push:
branches: [ "master" ]
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Needed to publish results and get a badge (see publish_results below).
id-token: write
# Uncomment the permissions below if installing in a private repository.
# contents: read
# actions: read
steps:
- name: "Checkout code"
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # v3.1.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@e38b1902ae4f44df626f11ba0734b14fb91f8f86 # v2.1.2
with:
results_file: results.sarif
results_format: sarif
# (Optional) "write" PAT token. Uncomment the `repo_token` line below if:
# - you want to enable the Branch-Protection check on a *public* repository, or
# - you are installing Scorecard on a *private* repository
# To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.
# repo_token: ${{ secrets.SCORECARD_TOKEN }}
# Public repositories:
# - Publish results to OpenSSF REST API for easy access by consumers
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories:
# - `publish_results` will always be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8 # v3.1.0
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@17573ee1cc1b9d061760f3a006fc4aac4f944fd5 # v2.2.4
with:
sarif_file: results.sarif

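With `publish_results: true`, the scores produced by the run above also become retrievable through the OpenSSF Scorecard REST API. A rough sketch of a consumer, assuming the public api.securityscorecards.dev endpoint and this repository's path:

import requests

# Fetch the most recently published Scorecard result for the repository.
url = (
    "https://api.securityscorecards.dev/projects/"
    "github.com/Significant-Gravitas/AutoGPT"
)
response = requests.get(url, timeout=10)
response.raise_for_status()
result = response.json()

print("Aggregate score:", result["score"])
for check in result["checks"]:
    print(f"{check['name']}: {check['score']}")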

@@ -1,41 +0,0 @@
name: Platform - AutoGPT Builder CI
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'autogpt_platform/autogpt_builder/**'
pull_request:
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'autogpt_platform/autogpt_builder/**'
defaults:
run:
shell: bash
working-directory: autogpt_platform/autogpt_builder
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '21'
- name: Install dependencies
run: |
npm install
- name: Check formatting with Prettier
run: |
npx prettier --check .
- name: Run lint
run: |
npm run lint


@@ -1,56 +0,0 @@
name: Platform - AutoGPT Builder Infra
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
pull_request:
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
defaults:
run:
shell: bash
working-directory: autogpt_platform/infra
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: TFLint
uses: pauloconnor/tflint-action@v0.0.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tflint_path: terraform/
tflint_recurse: true
tflint_changed_only: false
- name: Set up Helm
uses: azure/setup-helm@v4.2.0
with:
version: v3.14.4
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --target-branch ${{ github.event.repository.default_branch }})
if [[ -n "$changed" ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Run chart-testing (lint)
if: steps.list-changed.outputs.changed == 'true'
run: ct lint --target-branch ${{ github.event.repository.default_branch }}


@@ -1,155 +0,0 @@
name: Platform - AutoGPT Server CI
on:
push:
branches: [master, development, ci-test*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "autogpt_platform/autogpt_server/**"
pull_request:
branches: [master, development, release-*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "autogpt_platform/autogpt_server/**"
concurrency:
group: ${{ format('autogpt-server-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: autogpt_platform/autogpt_server
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
- name: Setup PostgreSQL
uses: ikalnytskyi/action-setup-postgres@v6
with:
username: ${{ secrets.DB_USER || 'postgres' }}
password: ${{ secrets.DB_PASS || 'postgres' }}
database: postgres
port: 5432
id: postgres
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
working-directory: "."
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/autogpt_server/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
env:
CONNECTION_STR: ${{ steps.postgres.outputs.connection-uri }}
- id: lint
name: Run Linter
run: poetry run lint
- name: Run pytest with coverage
run: |
if [[ "${{ runner.debug }}" == "1" ]]; then
poetry run pytest -vv -o log_cli=true -o log_cli_level=DEBUG test
else
poetry run pytest -vv test
fi
if: success() || (failure() && steps.lint.outcome == 'failure')
env:
LOG_LEVEL: ${{ runner.debug && 'DEBUG' || 'INFO' }}
env:
CI: true
PLAIN_OUTPUT: True
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
DB_USER: ${{ secrets.DB_USER || 'postgres' }}
DB_PASS: ${{ secrets.DB_PASS || 'postgres' }}
DB_NAME: postgres
DB_PORT: 5432
RUN_ENV: local
PORT: 8080
DATABASE_URL: postgresql://${{ secrets.DB_USER || 'postgres' }}:${{ secrets.DB_PASS || 'postgres' }}@localhost:5432/${{ secrets.DB_NAME || 'postgres'}}
# - name: Upload coverage reports to Codecov
# uses: codecov/codecov-action@v4
# with:
# token: ${{ secrets.CODECOV_TOKEN }}
# flags: autogpt-server,${{ runner.os }}


@@ -1,12 +1,12 @@
name: Repo - Pull Request auto-label
name: "Pull Request auto-label"
on:
# So that PRs touching the same files as the push are updated
push:
branches: [ master, development, release-* ]
paths-ignore:
- 'classic/forge/tests/vcr_cassettes'
- 'classic/benchmark/reports/**'
- 'autogpt/tests/vcr_cassettes'
- 'benchmark/reports/**'
# So that the `dirtyLabel` is removed if conflicts are resolved
# We recommend `pull_request_target` so that github secrets are available.
# In `pull_request` we wouldn't be able to change labels of fork PRs


@@ -1,24 +1,24 @@
name: Classic - Python checks
name: Python checks
on:
push:
branches: [ master, development, ci-test* ]
paths:
- '.github/workflows/lint-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'autogpt/**'
- 'forge/**'
- 'benchmark/**'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
- '!autogpt/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/lint-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'autogpt/**'
- 'forge/**'
- 'benchmark/**'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
- '!autogpt/tests/vcr_cassettes'
concurrency:
group: ${{ format('lint-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -40,18 +40,18 @@ jobs:
uses: dorny/paths-filter@v3
with:
filters: |
original_autogpt:
- classic/original_autogpt/autogpt/**
- classic/original_autogpt/tests/**
- classic/original_autogpt/poetry.lock
autogpt:
- autogpt/autogpt/**
- autogpt/tests/**
- autogpt/poetry.lock
forge:
- classic/forge/forge/**
- classic/forge/tests/**
- classic/forge/poetry.lock
- forge/forge/**
- forge/tests/**
- forge/poetry.lock
benchmark:
- classic/benchmark/agbenchmark/**
- classic/benchmark/tests/**
- classic/benchmark/poetry.lock
- benchmark/agbenchmark/**
- benchmark/tests/**
- benchmark/poetry.lock
outputs:
changed-parts: ${{ steps.changes-in.outputs.changes }}
@@ -89,23 +89,23 @@ jobs:
# Install dependencies
- name: Install Python dependencies
run: poetry -C classic/${{ matrix.sub-package }} install
run: poetry -C ${{ matrix.sub-package }} install
# Lint
- name: Lint (isort)
run: poetry run isort --check .
working-directory: classic/${{ matrix.sub-package }}
working-directory: ${{ matrix.sub-package }}
- name: Lint (Black)
if: success() || failure()
run: poetry run black --check .
working-directory: classic/${{ matrix.sub-package }}
working-directory: ${{ matrix.sub-package }}
- name: Lint (Flake8)
if: success() || failure()
run: poetry run flake8 .
working-directory: classic/${{ matrix.sub-package }}
working-directory: ${{ matrix.sub-package }}
types:
needs: get-changed-parts
@@ -141,11 +141,11 @@ jobs:
# Install dependencies
- name: Install Python dependencies
run: poetry -C classic/${{ matrix.sub-package }} install
run: poetry -C ${{ matrix.sub-package }} install
# Typecheck
- name: Typecheck
if: success() || failure()
run: poetry run pyright
working-directory: classic/${{ matrix.sub-package }}
working-directory: ${{ matrix.sub-package }}


@@ -1,4 +1,4 @@
name: Repo - Github Stats
name: github-repo-stats
on:
schedule:


@@ -1,31 +0,0 @@
name: Repo - PR Status Checker
on:
pull_request:
types: [opened, synchronize, reopened]
jobs:
status-check:
name: Check PR Status
runs-on: ubuntu-latest
steps:
# - name: Wait some time for all actions to start
# run: sleep 30
- uses: actions/checkout@v4
# with:
# fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install requests
- name: Check PR Status
run: |
echo "Current directory before running Python script:"
pwd
echo "Attempting to run Python script:"
python .github/workflows/scripts/check_actions_status.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,109 +0,0 @@
import json
import os
import requests
import sys
import time
from typing import Dict, List, Tuple


def get_environment_variables() -> Tuple[str, str, str, str, str]:
    """Retrieve and return necessary environment variables."""
    try:
        with open(os.environ["GITHUB_EVENT_PATH"]) as f:
            event = json.load(f)
        sha = event["pull_request"]["head"]["sha"]
        return (
            os.environ["GITHUB_API_URL"],
            os.environ["GITHUB_REPOSITORY"],
            sha,
            os.environ["GITHUB_TOKEN"],
            os.environ["GITHUB_RUN_ID"],
        )
    except KeyError as e:
        print(f"Error: Missing required environment variable or event data: {e}")
        sys.exit(1)


def make_api_request(url: str, headers: Dict[str, str]) -> Dict:
    """Make an API request and return the JSON response."""
    try:
        print("Making API request to:", url)
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"Error: API request failed. {e}")
        sys.exit(1)


def process_check_runs(check_runs: List[Dict]) -> Tuple[bool, bool]:
    """Process check runs and return their status."""
    runs_in_progress = False
    all_others_passed = True

    for run in check_runs:
        if str(run["name"]) != "Check PR Status":
            status = run["status"]
            conclusion = run["conclusion"]
            if status == "completed":
                if conclusion not in ["success", "skipped", "neutral"]:
                    all_others_passed = False
                    print(
                        f"Check run {run['name']} (ID: {run['id']}) has conclusion: {conclusion}"
                    )
            else:
                runs_in_progress = True
                print(f"Check run {run['name']} (ID: {run['id']}) is still {status}.")
                all_others_passed = False
        else:
            print(
                f"Skipping check run {run['name']} (ID: {run['id']}) as it is the current run."
            )

    return runs_in_progress, all_others_passed


def main():
    api_url, repo, sha, github_token, current_run_id = get_environment_variables()
    endpoint = f"{api_url}/repos/{repo}/commits/{sha}/check-runs"
    headers = {
        "Accept": "application/vnd.github.v3+json",
    }
    if github_token:
        headers["Authorization"] = f"token {github_token}"

    print(f"Current run ID: {current_run_id}")

    while True:
        data = make_api_request(endpoint, headers)
        check_runs = data["check_runs"]

        print("Processing check runs...")
        print(check_runs)

        runs_in_progress, all_others_passed = process_check_runs(check_runs)
        if not runs_in_progress:
            break

        print(
            "Some check runs are still in progress. Waiting 3 minutes before checking again..."
        )
        time.sleep(180)

    if all_others_passed:
        print("All other completed check runs have passed. This check passes.")
        sys.exit(0)
    else:
        print("Some check runs have failed or have not completed. This check fails.")
        sys.exit(1)


if __name__ == "__main__":
    main()

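The deleted script above only works inside a workflow run, where GitHub injects the event payload and run metadata it reads. For local experimentation, that context could be faked; the payload below is a guess at the minimal shape get_environment_variables expects:

import json
import os
import tempfile

# Fabricate the minimal event payload the script reads (hypothetical values).
event = {"pull_request": {"head": {"sha": "0" * 40}}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(event, f)
    event_path = f.name

os.environ.update(
    {
        "GITHUB_EVENT_PATH": event_path,
        "GITHUB_API_URL": "https://api.github.com",
        "GITHUB_REPOSITORY": "Significant-Gravitas/AutoGPT",
        "GITHUB_TOKEN": "<personal access token>",  # placeholder
        "GITHUB_RUN_ID": "0",
    }
)
# check_actions_status.main() would now poll the check runs for that SHA.
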
.gitignore (11 changes, vendored)

@@ -1,7 +1,7 @@
## Original ignores
.github_access_token
classic/original_autogpt/keys.py
classic/original_autogpt/*.json
autogpt/keys.py
autogpt/*.json
auto_gpt_workspace/*
*.mpeg
.env
@@ -32,6 +32,7 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -157,11 +158,11 @@ openai/
CURRENT_BULLETIN.md
# AgBenchmark
agclassic/benchmark/reports/
agbenchmark/reports/
# Nodejs
package-lock.json
package.json
# Allow for locally private items
# private
@@ -169,5 +170,3 @@ pri*
# ignore
ig*
.github_access_token
LICENSE.rtf
autogpt_platform/autogpt_server/settings.py

.gitmodules (7 changes, vendored)

@@ -1,6 +1,3 @@
[submodule "classic/forge/tests/vcr_cassettes"]
path = classic/forge/tests/vcr_cassettes
[submodule "autogpt/tests/vcr_cassettes"]
path = autogpt/tests/vcr_cassettes
url = https://github.com/Significant-Gravitas/Auto-GPT-test-cassettes
[submodule "autogpt_platform/supabase"]
path = autogpt_platform/supabase
url = https://github.com/supabase/supabase.git


@@ -16,22 +16,22 @@ repos:
hooks:
- id: isort-autogpt
name: Lint (isort) - AutoGPT
entry: poetry -C classic/original_autogpt run isort
files: ^classic/original_autogpt/
entry: poetry -C autogpt run isort
files: ^autogpt/
types: [file, python]
language: system
- id: isort-forge
name: Lint (isort) - Forge
entry: poetry -C classic/forge run isort
files: ^classic/forge/
entry: poetry -C forge run isort
files: ^forge/
types: [file, python]
language: system
- id: isort-benchmark
name: Lint (isort) - Benchmark
entry: poetry -C classic/benchmark run isort
files: ^classic/benchmark/
entry: poetry -C benchmark run isort
files: ^benchmark/
types: [file, python]
language: system
@@ -52,20 +52,20 @@ repos:
- id: flake8
name: Lint (Flake8) - AutoGPT
alias: flake8-autogpt
files: ^classic/original_autogpt/(autogpt|scripts|tests)/
args: [--config=classic/original_autogpt/.flake8]
files: ^autogpt/(autogpt|scripts|tests)/
args: [--config=autogpt/.flake8]
- id: flake8
name: Lint (Flake8) - Forge
alias: flake8-forge
files: ^classic/forge/(forge|tests)/
args: [--config=classic/forge/.flake8]
files: ^forge/(forge|tests)/
args: [--config=forge/.flake8]
- id: flake8
name: Lint (Flake8) - Benchmark
alias: flake8-benchmark
files: ^classic/benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
args: [--config=classic/benchmark/.flake8]
files: ^benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
args: [--config=benchmark/.flake8]
- repo: local
# To have watertight type checking, we check *all* the files in an affected
@@ -74,10 +74,10 @@ repos:
- id: pyright
name: Typecheck - AutoGPT
alias: pyright-autogpt
entry: poetry -C classic/original_autogpt run pyright
entry: poetry -C autogpt run pyright
args: [-p, autogpt, autogpt]
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
files: ^(autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
types: [file]
language: system
pass_filenames: false
@@ -85,9 +85,9 @@ repos:
- id: pyright
name: Typecheck - Forge
alias: pyright-forge
entry: poetry -C classic/forge run pyright
entry: poetry -C forge run pyright
args: [-p, forge, forge]
files: ^classic/forge/(classic/forge/|poetry\.lock$)
files: ^forge/(forge/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false
@@ -95,9 +95,9 @@ repos:
- id: pyright
name: Typecheck - Benchmark
alias: pyright-benchmark
entry: poetry -C classic/benchmark run pyright
entry: poetry -C benchmark run pyright
args: [-p, benchmark, benchmark]
files: ^classic/benchmark/(agclassic/benchmark/|tests/|poetry\.lock$)
files: ^benchmark/(agbenchmark|tests)/
types: [file]
language: system
pass_filenames: false
@@ -106,22 +106,22 @@ repos:
hooks:
- id: pytest-autogpt
name: Run tests - AutoGPT (excl. slow tests)
entry: bash -c 'cd classic/original_autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
entry: bash -c 'cd autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
files: ^(autogpt/((autogpt|tests)/|poetry\.lock$)|forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
language: system
pass_filenames: false
- id: pytest-forge
name: Run tests - Forge (excl. slow tests)
entry: bash -c 'cd classic/forge && poetry run pytest --cov=forge -m "not slow"'
files: ^classic/forge/(classic/forge/|tests/|poetry\.lock$)
entry: bash -c 'cd forge && poetry run pytest --cov=forge -m "not slow"'
files: ^forge/(forge/|tests/|poetry\.lock$)
language: system
pass_filenames: false
- id: pytest-benchmark
name: Run tests - Benchmark
entry: bash -c 'cd classic/benchmark && poetry run pytest --cov=benchmark'
files: ^classic/benchmark/(agclassic/benchmark/|tests/|poetry\.lock$)
entry: bash -c 'cd benchmark && poetry run pytest --cov=benchmark'
files: ^benchmark/(agbenchmark/|tests/|poetry\.lock$)
language: system
pass_filenames: false

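For local verification of the renamed hook paths above, individual hooks can be exercised with pre-commit's standard selector, e.g. `pre-commit run pyright-forge --all-files` (hook IDs and aliases as defined in this file); running the whole suite is `pre-commit run --all-files`.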

@@ -1,61 +0,0 @@
{
  "folders": [
    {
      "name": "autogpt_server",
      "path": "../autogpt_platform/autogpt_server"
    },
    {
      "name": "autogpt_builder",
      "path": "../autogpt_platform/autogpt_builder"
    },
    {
      "name": "market",
      "path": "../autogpt_platform/market"
    },
    {
      "name": "lib",
      "path": "../autogpt_platform/autogpt_libs"
    },
    {
      "name": "infra",
      "path": "../autogpt_platform/infra"
    },
    {
      "name": "docs",
      "path": "../docs"
    },
    {
      "name": "[root]",
      "path": ".."
    },
    {
      "name": "classic - autogpt",
      "path": "../classic/original_autogpt"
    },
    {
      "name": "classic - benchmark",
      "path": "../classic/benchmark"
    },
    {
      "name": "classic - forge",
      "path": "../classic/forge"
    },
    {
      "name": "classic - frontend",
      "path": "../classic/frontend"
    },
  ],
  "settings": {
    "python.analysis.typeCheckingMode": "basic"
  },
  "extensions": {
    "recommendations": [
      "charliermarsh.ruff",
      "dart-code.flutter",
      "ms-python.black-formatter",
      "ms-python.vscode-pylance",
      "prisma.prisma",
      "qwtel.sqlite-viewer"
    ]
  }
}


@@ -1,5 +1,5 @@
# AutoGPT Contribution Guide
If you are reading this, you are probably looking for the full **[contribution guide]**,
If you are reading this, you are probably looking for our **[contribution guide]**,
which is part of our [wiki].
Also check out our [🚀 Roadmap][roadmap] for information about our priorities and associated tasks.
@@ -15,17 +15,15 @@ Also check out our [🚀 Roadmap][roadmap] for information about our priorities
2. We encourage you to collaborate with fellow community members on some of our bigger
[todo's][roadmap]!
* We highly recommend posting your idea and discussing it in the [dev channel].
3. Create a draft PR when starting work on bigger changes.
4. Adhere to the [Code Guidelines]
4. Create a draft PR when starting work on bigger changes.
3. Please also consider contributing something other than code; see the
[contribution guide] for options.
5. Clearly explain your changes when submitting a PR.
6. Don't submit broken code: test/validate your changes.
7. Avoid making unnecessary changes, especially if they're purely based on your personal
preferences. Doing so is the maintainers' job. ;-)
8. Please also consider contributing something other than code; see the
[contribution guide] for options.
[dev channel]: https://discord.com/channels/1092243196446249134/1095817829405704305
[code guidelines]: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing#code-guidelines
If you wish to get involved with the project (beyond just contributing PRs), please read the
wiki page about [Catalyzing](https://github.com/Significant-Gravitas/AutoGPT/wiki/Catalyzing).


@@ -29,9 +29,9 @@ ENV PATH="$POETRY_HOME/bin:$PATH"
RUN poetry config installer.max-workers 10
WORKDIR /app/autogpt
COPY original_autogpt/pyproject.toml original_autogpt/poetry.lock ./
COPY autogpt/pyproject.toml autogpt/poetry.lock ./
# Include forge so it can be used AS a path dependency
# Include forge so it can be used as a path dependency
COPY forge/ ../forge
# Include frontend
@@ -42,17 +42,19 @@ ENTRYPOINT ["poetry", "run", "autogpt"]
CMD []
# dev build -> include everything
FROM autogpt-base AS autogpt-dev
FROM autogpt-base as autogpt-dev
RUN poetry install --no-cache --no-root \
&& rm -rf $(poetry env info --path)/src
ONBUILD COPY original_autogpt/ ./
ONBUILD COPY autogpt/ ./
# release build -> include bare minimum
FROM autogpt-base AS autogpt-release
FROM autogpt-base as autogpt-release
RUN poetry install --no-cache --no-root --without dev \
&& rm -rf $(poetry env info --path)/src
ONBUILD COPY original_autogpt/ ./autogpt
ONBUILD COPY original_autogpt/README.md ./README.md
ONBUILD COPY autogpt/autogpt/ ./autogpt
ONBUILD COPY autogpt/scripts/ ./scripts
ONBUILD COPY autogpt/plugins/ ./plugins
ONBUILD COPY autogpt/README.md ./README.md
ONBUILD RUN mkdir ./data
FROM autogpt-${BUILD_TYPE} AS autogpt


@@ -2,11 +2,11 @@
> For the complete getting started [tutorial series](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec) <- click here
Welcome to the Quickstart Guide! This guide will walk you through setting up, building, and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the steps to jumpstart your journey in AI development with AutoGPT.
Welcome to the Quickstart Guide! This guide will walk you through the process of setting up and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the necessary steps to jumpstart your journey in the world of AI development with AutoGPT.
## System Requirements
This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux (WSL). If you use a Windows system, you must install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux (WSL). If you are using a Windows system, you will need to install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
## Getting Setup
@@ -18,11 +18,11 @@ This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux
- In the top-right corner of the page, click Fork.
![Create Fork UI](docs/content/imgs/quickstart/002_fork.png)
- On the next page, select your GitHub account to create the fork.
- On the next page, select your GitHub account to create the fork under.
- Wait for the forking process to complete. You now have a copy of the repository in your GitHub account.
2. **Clone the Repository**
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, you can download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
- Open your terminal.
- Navigate to the directory where you want to clone the repository.
- Run the git clone command for the fork you just created
@@ -34,11 +34,11 @@ This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux
![Open the Project in your IDE](docs/content/imgs/quickstart/004_ide.png)
4. **Setup the Project**
Next, we need to set up the required dependencies. We have a tool to help you perform all the tasks on the repo.
Next we need to setup the required dependencies. We have a tool for helping you do all the tasks you need to on the repo.
It can be accessed by running the `run` command by typing `./run` in the terminal.
The first command you need to use is `./run setup`. This will guide you through setting up your system.
Initially, you will get instructions for installing Flutter and Chrome and setting up your GitHub access token like the following image:
The first command you need to use is `./run setup` This will guide you through the process of setting up your system.
Initially you will get instructions for installing flutter, chrome and setting up your github access token like the following image:
![Setup the Project](docs/content/imgs/quickstart/005_setup.png)
@@ -47,7 +47,7 @@ This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux
If you're a Windows user and experience issues after installing WSL, follow the steps below to resolve them.
#### Update WSL
Run the following command in Powershell or Command Prompt:
Run the following command in Powershell or Command Prompt to:
1. Enable the optional WSL and Virtual Machine Platform components.
2. Download and install the latest Linux kernel.
3. Set WSL 2 as the default.
@@ -73,7 +73,7 @@ dos2unix ./run
After executing the above commands, running `./run setup` should work successfully.
#### Store Project Files within the WSL File System
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids path translations and permissions issues and provides a more consistent development environment.
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids issues related to path translations and permissions and provides a more consistent development environment.
You can keep running the command to get feedback on where you are up to with your setup.
When setup has been completed, the command will return an output like this:
@@ -83,7 +83,7 @@ When setup has been completed, the command will return an output like this:
## Creating Your Agent
After completing the setup, the next step is to create your agent template.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with your chosen name.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with a name of your choosing.
Tips for naming your agent:
* Give it its own unique name, or name it after yourself
@@ -101,21 +101,21 @@ This starts the agent on the URL: `http://localhost:8000/`
![Start the Agent](docs/content/imgs/quickstart/009_start_agent.png)
The front end can be accessed from `http://localhost:8000/`; first, you must log in using either a Google account or your GitHub account.
The frontend can be accessed from `http://localhost:8000/`, you will first need to login using either a google account or your github account.
![Login](docs/content/imgs/quickstart/010_login.png)
Upon logging in, you will get a page that looks something like this: your task history down the left-hand side of the page, and the 'chat' window to send tasks to your agent.
Upon logging in you will get a page that looks something like this. With your task history down the left hand side of the page and the 'chat' window to send tasks to your agent.
![Login](docs/content/imgs/quickstart/011_home.png)
When you have finished with your agent or just need to restart it, use Ctrl-C to end the session. Then, you can re-run the start command.
When you have finished with your agent, or if you just need to restart it, use Ctrl-C to end the session then you can re-run the start command.
If you are having issues and want to ensure the agent has been stopped, there is a `./run agent stop` command, which will kill the process using port 8000, which should be the agent.
If you are having issues and want to ensure the agent has been stopped there is a `./run agent stop` command which will kill the process using port 8000, which should be the agent.
## Benchmarking your Agent
The benchmarking system can also be accessed using the CLI too:
The benchmarking system can also be accessed using the cli too:
```bash
agpt % ./run benchmark
@@ -163,7 +163,7 @@ The benchmark has been split into different categories of skills you can test yo
![Login](docs/content/imgs/quickstart/012_tests.png)
Finally, you can run the benchmark with
Finally you can run the benchmark with
```bash
./run benchmark start YOUR_AGENT_NAME


@@ -1,43 +1,17 @@
# AutoGPT: Build & Use AI Agents
# AutoGPT: build & use AI agents
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**AutoGPT** is a powerful tool that lets you create and run intelligent agents. These agents can perform various tasks automatically, making your life easier.
**AutoGPT** is a generalist LLM based AI agent that can autonomously accomplish minor tasks.
## How to Get Started
**Examples**:
https://github.com/user-attachments/assets/8508f4dc-b362-4cab-900f-644964a96cdf
- Look up and summarize this research paper
- Write marketing copy for food supplements
- Write a blog post detailing the news in AI
### 🧱 AutoGPT Builder
The AutoGPT Builder is the frontend. It allows you to design agents using an easy flowchart style. You build your agent by connecting blocks, where each block performs a single action. It's simple and intuitive!
[Read this guide](https://docs.agpt.co/server/new_blocks/) to learn how to build your own custom blocks.
### 💽 AutoGPT Server
The AutoGPT Server is the backend. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously.
### 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Reddit Marketing Agent**
- This agent reads comments on Reddit.
- It looks for people asking about your product.
- It then automatically responds to them.
2. **YouTube Content Repurposing Agent**
- This agent subscribes to your YouTube channel.
- When you post a new video, it transcribes it.
- It uses AI to write a search engine optimized blog post.
- Then, it publishes this blog post to your Medium account.
These examples show just a glimpse of what you can achieve with AutoGPT!
---
Our mission is to provide the tools, so that you can focus on what matters:
- 🏗️ **Building** - Lay the foundation for something amazing.
@@ -49,18 +23,16 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
**📖 [Documentation](https://docs.agpt.co)**
&ensp;|&ensp;
**🚀 [Contributing](CONTRIBUTING.md)**
&ensp;|&ensp;
**🛠️ [Build your own Agent - Quickstart](QUICKSTART.md)**
## 🧱 Building blocks
---
## 🤖 AutoGPT Classic
> Below is information about the classic version of AutoGPT.
**🛠️ [Build your own Agent - Quickstart](FORGE-QUICKSTART.md)**
### 🏗️ Forge
**Forge your own agent!** &ndash; Forge is a ready-to-go template for your agent application. All the boilerplate code is already handled, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from the [`forge.sdk`](/forge/sdk) can also be used individually to speed up development and reduce boilerplate in your agent project.
**Forge your own agent!** &ndash; Forge is a ready-to-go template for your agent application. All the boilerplate code is already handled, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from the [`forge.sdk`](/forge/forge/sdk) can also be used individually to speed up development and reduce boilerplate in your agent project.
🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/forge/tutorials/001_getting_started.md) &ndash;
🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/forge/tutorials/001_getting_started.md) &ndash;
This guide will walk you through the process of creating your own agent and using the benchmark and user interface.
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/forge) about Forge
@@ -71,7 +43,7 @@ This guide will walk you through the process of creating your own agent and usin
<!-- TODO: insert visual demonstrating the benchmark -->
📦 [`agbenchmark`](https://pypi.org/project/agclassic/benchmark/) on Pypi
📦 [`agbenchmark`](https://pypi.org/project/agbenchmark/) on Pypi
&ensp;|&ensp;
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/blob/master/benchmark) about the Benchmark

(binary image file removed; 49 KiB, contents not shown)


@@ -1,2 +1,2 @@
[run]
[run]
relative_files = true


@@ -11,15 +11,12 @@
## GROQ_API_KEY - Groq API Key (Example: gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
# GROQ_API_KEY=
## LLAMAFILE_API_BASE - Llamafile API base URL
# LLAMAFILE_API_BASE=http://localhost:8080/v1
## TELEMETRY_OPT_IN - Share telemetry on errors and other issues with the AutoGPT team, e.g. through Sentry.
## This helps us to spot and solve problems earlier & faster. (Default: DISABLED)
# TELEMETRY_OPT_IN=true
## COMPONENT_CONFIG_FILE - Path to the json config file (Default: None)
# COMPONENT_CONFIG_FILE=
## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
# EXECUTE_LOCAL_COMMANDS=False
### Workspace ###
@@ -47,6 +44,9 @@
### Miscellaneous ###
## USER_AGENT - Define the user-agent used by the requests library to browse websites (string)
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
@@ -96,21 +96,59 @@
## EMBEDDING_MODEL - Model to use for creating embeddings
# EMBEDDING_MODEL=text-embedding-3-small
################################################################################
### SHELL EXECUTION
################################################################################
## SHELL_COMMAND_CONTROL - Whether to use "allowlist" or "denylist" to determine what shell commands can be executed (Default: denylist)
# SHELL_COMMAND_CONTROL=denylist
## ONLY if SHELL_COMMAND_CONTROL is set to denylist:
## SHELL_DENYLIST - List of shell commands that ARE NOT allowed to be executed by AutoGPT (Default: sudo,su)
# SHELL_DENYLIST=sudo,su
## ONLY if SHELL_COMMAND_CONTROL is set to allowlist:
## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
# SHELL_ALLOWLIST=
################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
### Common
## IMAGE_PROVIDER - Image provider (Default: dalle)
# IMAGE_PROVIDER=dalle
## IMAGE_SIZE - Image size (Default: 256)
# IMAGE_SIZE=256
### Huggingface (IMAGE_PROVIDER=huggingface)
## HUGGINGFACE_IMAGE_MODEL - Text-to-image model from Huggingface (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4
## HUGGINGFACE_API_TOKEN - HuggingFace API token (Default: None)
# HUGGINGFACE_API_TOKEN=
### Stable Diffusion (IMAGE_PROVIDER=sdwebui)
## SD_WEBUI_AUTH - Stable Diffusion Web UI username:password pair (Default: None)
# SD_WEBUI_AUTH=
## SD_WEBUI_URL - Stable Diffusion Web UI API URL (Default: http://localhost:7860)
# SD_WEBUI_URL=http://localhost:7860
################################################################################
### AUDIO TO TEXT PROVIDER
################################################################################
## AUDIO_TO_TEXT_PROVIDER - Audio-to-text provider (Default: huggingface)
# AUDIO_TO_TEXT_PROVIDER=huggingface
## HUGGINGFACE_AUDIO_TO_TEXT_MODEL - The model for HuggingFace to use (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_AUDIO_TO_TEXT_MODEL=CompVis/stable-diffusion-v1-4
################################################################################
### GITHUB
################################################################################
@@ -125,6 +163,18 @@
### WEB BROWSING
################################################################################
## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
# HEADLESS_BROWSER=True
## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome)
# USE_WEB_BROWSER=chrome
## BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunks to summarize (Default: 3000)
# BROWSE_CHUNK_MAX_LENGTH=3000
## BROWSE_SPACY_LANGUAGE_MODEL - [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. (Default: en_core_web_sm)
# BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm
## GOOGLE_API_KEY - Google API key (Default: None)
# GOOGLE_API_KEY=
@@ -148,6 +198,13 @@
## ELEVENLABS_VOICE_ID - Eleven Labs voice ID (Example: None)
# ELEVENLABS_VOICE_ID=
################################################################################
### CHAT MESSAGES
################################################################################
## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False
################################################################################
### LOGGING
################################################################################
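To make the template above concrete, here is a minimal sketch of how such settings are typically read at runtime. The pattern (python-dotenv plus `os.getenv` defaults) is assumed, and `shell_command_allowed` is purely illustrative, not a function from the codebase:

```python
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # load key=value pairs from .env into the environment

SHELL_COMMAND_CONTROL = os.getenv("SHELL_COMMAND_CONTROL", "denylist")
SHELL_DENYLIST = os.getenv("SHELL_DENYLIST", "sudo,su").split(",")
SHELL_ALLOWLIST = [s for s in os.getenv("SHELL_ALLOWLIST", "").split(",") if s]

def shell_command_allowed(command: str) -> bool:
    """Illustrative allowlist/denylist policy matching the settings above."""
    words = command.split()
    executable = words[0] if words else ""
    if SHELL_COMMAND_CONTROL == "allowlist":
        return executable in SHELL_ALLOWLIST
    return executable not in SHELL_DENYLIST

print(shell_command_allowed("sudo rm -rf /"))  # False under the default denylist
```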

autogpt/.gitattributes (new file, 5 lines)

@@ -0,0 +1,5 @@
# Exclude VCR cassettes from stats
tests/vcr_cassettes/**/**.y*ml linguist-generated
# Mark documentation as such
docs/**.md linguist-documentation


@@ -1,6 +1,6 @@
## Original ignores
classic/original_autogpt/keys.py
classic/original_autogpt/*.json
autogpt/keys.py
autogpt/*.json
*.mpeg
.env
azure.yaml


@@ -68,6 +68,10 @@ Options:
continuous mode
--speak Enable Speak Mode
--debug Enable Debug Mode
-b, --browser-name TEXT Specifies which web-browser to use when
using selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--skip-news Specifies whether to suppress the output of
latest news on startup.
--install-plugin-deps Installs external dependencies for 3rd party
@@ -86,7 +90,6 @@ Options:
--override-directives If specified, --constraint, --resource and
--best-practice will override the AI's
directives instead of being appended to them
--component-config-file TEXT Path to the json configuration file.
--help Show this message and exit.
```
</details>
@@ -108,6 +111,10 @@ Usage: python -m autogpt serve [OPTIONS]
Options:
--debug Enable Debug Mode
-b, --browser-name TEXT Specifies which web-browser to use when using
selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--install-plugin-deps Installs external dependencies for 3rd party
plugins.
--help Show this message and exit.
@@ -120,9 +127,9 @@ by default on `http://localhost:8000`.
For more comprehensive instructions, see the [user guide][docs/usage].
[docs]: https://docs.agpt.co/autogpt
[docs/setup]: https://docs.agpt.co/classic/original_autogpt/setup
[docs/usage]: https://docs.agpt.co/classic/original_autogpt/usage
[docs/plugins]: https://docs.agpt.co/classic/original_autogpt/plugins
[docs/setup]: https://docs.agpt.co/autogpt/setup
[docs/usage]: https://docs.agpt.co/autogpt/usage
[docs/plugins]: https://docs.agpt.co/autogpt/plugins
## 📚 Resources
* 📔 AutoGPT [project wiki](https://github.com/Significant-Gravitas/AutoGPT/wiki)


@@ -2,17 +2,17 @@ from typing import Optional
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.file_storage.base import FileStorage
from forge.llm.providers import MultiProvider
from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.config import AppConfig
def create_agent(
agent_id: str,
task: str,
app_config: AppConfig,
app_config: Config,
file_storage: FileStorage,
llm_provider: MultiProvider,
ai_profile: Optional[AIProfile] = None,
@@ -38,7 +38,7 @@ def create_agent(
def configure_agent_with_state(
state: AgentSettings,
app_config: AppConfig,
app_config: Config,
file_storage: FileStorage,
llm_provider: MultiProvider,
) -> Agent:
@@ -51,7 +51,7 @@ def configure_agent_with_state(
def _configure_agent(
app_config: AppConfig,
app_config: Config,
llm_provider: MultiProvider,
file_storage: FileStorage,
agent_id: str = "",
@@ -80,7 +80,7 @@ def _configure_agent(
settings=agent_state,
llm_provider=llm_provider,
file_storage=file_storage,
app_config=app_config,
legacy_config=app_config,
)
@@ -89,7 +89,7 @@ def create_agent_state(
task: str,
ai_profile: AIProfile,
directives: AIDirectives,
app_config: AppConfig,
app_config: Config,
) -> AgentSettings:
return AgentSettings(
agent_id=agent_id,
@@ -104,5 +104,5 @@ def create_agent_state(
allow_fs_access=not app_config.restrict_to_workspace,
use_functions_api=app_config.openai_functions,
),
history=Agent.default_settings.history.model_copy(deep=True),
history=Agent.default_settings.history.copy(deep=True),
)


@@ -6,7 +6,7 @@ from forge.file_storage.base import FileStorage
if TYPE_CHECKING:
from autogpt.agents.agent import Agent
from autogpt.app.config import AppConfig
from forge.config.config import Config
from forge.llm.providers import MultiProvider
from .configurators import _configure_agent
@@ -16,7 +16,7 @@ from .profile_generator import generate_agent_profile_for_task
async def generate_agent_for_task(
agent_id: str,
task: str,
app_config: AppConfig,
app_config: Config,
file_storage: FileStorage,
llm_provider: MultiProvider,
) -> Agent:


@@ -3,6 +3,7 @@ import logging
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.llm.prompting import ChatPrompt, LanguageModelClassification, PromptStrategy
from forge.llm.providers import MultiProvider
from forge.llm.providers.schema import (
@@ -13,13 +14,11 @@ from forge.llm.providers.schema import (
from forge.models.config import SystemConfiguration, UserConfigurable
from forge.models.json_schema import JSONSchema
from autogpt.app.config import AppConfig
logger = logging.getLogger(__name__)
class AgentProfileGeneratorConfiguration(SystemConfiguration):
llm_classification: LanguageModelClassification = UserConfigurable(
model_classification: LanguageModelClassification = UserConfigurable(
default=LanguageModelClassification.SMART_MODEL
)
_example_call: object = {
@@ -137,7 +136,7 @@ class AgentProfileGeneratorConfiguration(SystemConfiguration):
required=True,
),
},
).model_dump()
).dict()
)
@@ -148,21 +147,21 @@ class AgentProfileGenerator(PromptStrategy):
def __init__(
self,
llm_classification: LanguageModelClassification,
model_classification: LanguageModelClassification,
system_prompt: str,
user_prompt_template: str,
create_agent_function: dict,
):
self._llm_classification = llm_classification
self._model_classification = model_classification
self._system_prompt_message = system_prompt
self._user_prompt_template = user_prompt_template
self._create_agent_function = CompletionModelFunction.model_validate(
self._create_agent_function = CompletionModelFunction.parse_obj(
create_agent_function
)
@property
def llm_classification(self) -> LanguageModelClassification:
return self._llm_classification
def model_classification(self) -> LanguageModelClassification:
return self._model_classification
def build_prompt(self, user_objective: str = "", **kwargs) -> ChatPrompt:
system_message = ChatMessage.system(self._system_prompt_message)
@@ -213,7 +212,7 @@ class AgentProfileGenerator(PromptStrategy):
async def generate_agent_profile_for_task(
task: str,
app_config: AppConfig,
app_config: Config,
llm_provider: MultiProvider,
) -> tuple[AIProfile, AIDirectives]:
"""Generates an AIConfig object from the given string.
@@ -222,7 +221,7 @@ async def generate_agent_profile_for_task(
AIConfig: The AIConfig object tailored to the user's input
"""
agent_profile_generator = AgentProfileGenerator(
**AgentProfileGenerator.default_configuration.model_dump() # HACK
**AgentProfileGenerator.default_configuration.dict() # HACK
)
prompt = agent_profile_generator.build_prompt(task)
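Most of the churn in this file tracks the Pydantic v1/v2 method renames. For reference, a toy mapping on an illustrative model (requires Pydantic v2; not project code):

```python
from pydantic import BaseModel

class Profile(BaseModel):
    name: str = "Agent"

p = Profile()

p.model_dump()                         # v1: p.dict()
p.model_copy(deep=True)                # v1: p.copy(deep=True)
Profile.model_validate({"name": "A"})  # v1: Profile.parse_obj({"name": "A"})
Profile.model_json_schema()            # v1: Profile.schema()
```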


@@ -26,12 +26,12 @@ class MyAgent(Agent):
settings: AgentSettings,
llm_provider: MultiProvider
file_storage: FileStorage,
app_config: AppConfig,
legacy_config: Config,
):
# Call the parent constructor to bring in the default components
super().__init__(settings, llm_provider, file_storage, app_config)
super().__init__(settings, llm_provider, file_storage, legacy_config)
# Add your custom component
self.my_component = MyComponent()
```
For more customization, you can override the `propose_action` and `execute` or even subclass `BaseAgent` directly. This way you can have full control over the agent's components and behavior. Have a look at the [implementation of Agent](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/original_autogpt/agents/agent.py) for more details.
For more customization, you can override the `propose_action` and `execute` or even subclass `BaseAgent` directly. This way you can have full control over the agent's components and behavior. Have a look at the [implementation of Agent](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt/autogpt/agents/agent.py) for more details.


@@ -18,11 +18,7 @@ from forge.components.action_history import (
ActionHistoryComponent,
EpisodicActionHistory,
)
from forge.components.action_history.action_history import ActionHistoryConfiguration
from forge.components.code_executor.code_executor import (
CodeExecutorComponent,
CodeExecutorConfiguration,
)
from forge.components.code_executor.code_executor import CodeExecutorComponent
from forge.components.context.context import AgentContext, ContextComponent
from forge.components.file_manager import FileManagerComponent
from forge.components.git_operations import GitOperationsComponent
@@ -62,7 +58,7 @@ from .prompt_strategies.one_shot import (
)
if TYPE_CHECKING:
from autogpt.app.config import AppConfig
from forge.config.config import Config
logger = logging.getLogger(__name__)
@@ -95,14 +91,12 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
settings: AgentSettings,
llm_provider: MultiProvider,
file_storage: FileStorage,
app_config: AppConfig,
legacy_config: Config,
):
super().__init__(settings)
self.llm_provider = llm_provider
prompt_config = OneShotAgentPromptStrategy.default_configuration.model_copy(
deep=True
)
prompt_config = OneShotAgentPromptStrategy.default_configuration.copy(deep=True)
prompt_config.use_functions_api = (
settings.config.use_functions_api
# Anthropic currently doesn't support tools + prefilling :(
@@ -113,41 +107,33 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
# Components
self.system = SystemComponent()
self.history = (
ActionHistoryComponent(
settings.history,
lambda x: self.llm_provider.count_tokens(x, self.llm.name),
llm_provider,
ActionHistoryConfiguration(
llm_name=app_config.fast_llm, max_tokens=self.send_token_limit
),
)
.run_after(WatchdogComponent)
.run_after(SystemComponent)
)
if not app_config.noninteractive_mode:
self.user_interaction = UserInteractionComponent()
self.file_manager = FileManagerComponent(file_storage, settings)
self.history = ActionHistoryComponent(
settings.history,
self.send_token_limit,
lambda x: self.llm_provider.count_tokens(x, self.llm.name),
legacy_config,
llm_provider,
).run_after(WatchdogComponent)
self.user_interaction = UserInteractionComponent(legacy_config)
self.file_manager = FileManagerComponent(settings, file_storage)
self.code_executor = CodeExecutorComponent(
self.file_manager.workspace,
CodeExecutorConfiguration(
docker_container_name=f"{settings.agent_id}_sandbox"
),
settings,
legacy_config,
)
self.git_ops = GitOperationsComponent()
self.image_gen = ImageGeneratorComponent(self.file_manager.workspace)
self.web_search = WebSearchComponent()
self.web_selenium = WebSeleniumComponent(
llm_provider,
app_config.app_data_dir,
self.git_ops = GitOperationsComponent(legacy_config)
self.image_gen = ImageGeneratorComponent(
self.file_manager.workspace, legacy_config
)
self.web_search = WebSearchComponent(legacy_config)
self.web_selenium = WebSeleniumComponent(legacy_config, llm_provider, self.llm)
self.context = ContextComponent(self.file_manager.workspace, settings.context)
self.watchdog = WatchdogComponent(settings.config, settings.history).run_after(
ContextComponent
)
self.event_history = settings.history
self.app_config = app_config
self.legacy_config = legacy_config
async def propose_action(self) -> OneShotAgentActionProposal:
"""Proposes the next action to execute, based on the task and current state.
@@ -162,7 +148,7 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
constraints = await self.run_pipeline(DirectiveProvider.get_constraints)
best_practices = await self.run_pipeline(DirectiveProvider.get_best_practices)
directives = self.state.directives.model_copy(deep=True)
directives = self.state.directives.copy(deep=True)
directives.resources += resources
directives.constraints += constraints
directives.best_practices += best_practices
@@ -174,19 +160,13 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
# Get messages
messages = await self.run_pipeline(MessageProvider.get_messages)
include_os_info = (
self.code_executor.config.execute_local_commands
if hasattr(self, "code_executor")
else False
)
prompt: ChatPrompt = self.prompt_strategy.build_prompt(
messages=messages,
task=self.state.task,
ai_profile=self.state.ai_profile,
ai_directives=directives,
commands=function_specs_from_commands(self.commands),
include_os_info=include_os_info,
include_os_info=self.legacy_config.execute_local_commands,
)
logger.debug(f"Executing prompt:\n{dump_prompt(prompt)}")
@@ -297,7 +277,7 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
command
for command in self.commands
if not any(
name in self.app_config.disabled_commands for name in command.names
name in self.legacy_config.disabled_commands for name in command.names
)
]


@@ -28,13 +28,15 @@ _RESPONSE_INTERFACE_NAME = "AssistantResponse"
class AssistantThoughts(ModelWithSummary):
observations: str = Field(
description="Relevant observations from your last action (if any)"
..., description="Relevant observations from your last action (if any)"
)
text: str = Field(description="Thoughts")
reasoning: str = Field(description="Reasoning behind the thoughts")
self_criticism: str = Field(description="Constructive self-criticism")
plan: list[str] = Field(description="Short list that conveys the long-term plan")
speak: str = Field(description="Summary of thoughts, to say to user")
text: str = Field(..., description="Thoughts")
reasoning: str = Field(..., description="Reasoning behind the thoughts")
self_criticism: str = Field(..., description="Constructive self-criticism")
plan: list[str] = Field(
..., description="Short list that conveys the long-term plan"
)
speak: str = Field(..., description="Summary of thoughts, to say to user")
def summary(self) -> str:
return self.text
@@ -94,13 +96,11 @@ class OneShotAgentPromptStrategy(PromptStrategy):
logger: Logger,
):
self.config = configuration
self.response_schema = JSONSchema.from_dict(
OneShotAgentActionProposal.model_json_schema()
)
self.response_schema = JSONSchema.from_dict(OneShotAgentActionProposal.schema())
self.logger = logger
@property
def llm_classification(self) -> LanguageModelClassification:
def model_classification(self) -> LanguageModelClassification:
return LanguageModelClassification.FAST_MODEL # FIXME: dynamic switching
def build_prompt(
@@ -182,7 +182,7 @@ class OneShotAgentPromptStrategy(PromptStrategy):
)
def response_format_instruction(self, use_functions_api: bool) -> tuple[str, str]:
response_schema = self.response_schema.model_copy(deep=True)
response_schema = self.response_schema.copy(deep=True)
assert response_schema.properties
if use_functions_api and "use_tool" in response_schema.properties:
del response_schema.properties["use_tool"]
@@ -274,8 +274,5 @@ class OneShotAgentPromptStrategy(PromptStrategy):
raise InvalidAgentResponseError("Assistant did not use a tool")
assistant_reply_dict["use_tool"] = response.tool_calls[0].function
parsed_response = OneShotAgentActionProposal.model_validate(
assistant_reply_dict
)
parsed_response.raw_message = response.copy()
parsed_response = OneShotAgentActionProposal.parse_obj(assistant_reply_dict)
return parsed_response


@@ -23,6 +23,7 @@ from forge.agent_protocol.models import (
TaskRequestBody,
TaskStepsListResponse,
)
from forge.config.config import Config
from forge.file_storage import FileStorage
from forge.llm.providers import ModelProviderBudget, MultiProvider
from forge.models.action import ActionErrorResult, ActionSuccessResult
@@ -34,7 +35,6 @@ from sentry_sdk import set_user
from autogpt.agent_factory.configurators import configure_agent_with_state, create_agent
from autogpt.agents.agent_manager import AgentManager
from autogpt.app.config import AppConfig
from autogpt.app.utils import is_port_free
logger = logging.getLogger(__name__)
@@ -45,7 +45,7 @@ class AgentProtocolServer:
def __init__(
self,
app_config: AppConfig,
app_config: Config,
database: AgentDB,
file_storage: FileStorage,
llm_provider: MultiProvider,
@@ -97,9 +97,7 @@ class AgentProtocolServer:
app.include_router(router, prefix="/ap/v1")
script_dir = os.path.dirname(os.path.realpath(__file__))
frontend_path = (
pathlib.Path(script_dir)
.joinpath("../../../classic/frontend/build/web")
.resolve()
pathlib.Path(script_dir).joinpath("../../../frontend/build/web").resolve()
)
if os.path.exists(frontend_path):
@@ -316,7 +314,7 @@ class AgentProtocolServer:
""
if tool_result is None
else (
orjson.loads(tool_result.model_dump_json())
orjson.loads(tool_result.json())
if not isinstance(tool_result, ActionErrorResult)
else {
"error": str(tool_result.error),
@@ -329,7 +327,7 @@ class AgentProtocolServer:
if last_proposal and tool_result
else {}
),
**assistant_response.model_dump(),
**assistant_response.dict(),
}
task_cumulative_cost = agent.llm_provider.get_incurred_cost()
@@ -453,9 +451,7 @@ class AgentProtocolServer:
"""
task_llm_budget = self._task_budgets[task.task_id]
task_llm_provider_config = self.llm_provider._configuration.model_copy(
deep=True
)
task_llm_provider_config = self.llm_provider._configuration.copy(deep=True)
_extra_request_headers = task_llm_provider_config.extra_request_headers
_extra_request_headers["AP-TaskID"] = task.task_id
if step_id:
@@ -463,7 +459,7 @@ class AgentProtocolServer:
if task.additional_input and (user_id := task.additional_input.get("user_id")):
_extra_request_headers["AutoGPT-UserID"] = user_id
settings = self.llm_provider._settings.model_copy()
settings = self.llm_provider._settings.copy()
settings.budget = task_llm_budget
settings.configuration = task_llm_provider_config
task_llm_provider = self.llm_provider.__class__(
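The hunk above forks the shared LLM provider per task: copy the settings, attach a task-scoped header and budget, then construct a fresh provider instance. A schematic sketch with stand-in classes (not the forge types):

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class ProviderSettings:
    extra_request_headers: dict[str, str] = field(default_factory=dict)
    budget: float = 0.0

class Provider:
    def __init__(self, settings: ProviderSettings) -> None:
        self._settings = settings

def provider_for_task(base: Provider, task_id: str, budget: float) -> Provider:
    settings = deepcopy(base._settings)  # never mutate the shared instance
    settings.extra_request_headers["AP-TaskID"] = task_id
    settings.budget = budget
    return base.__class__(settings)
```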


@@ -28,6 +28,24 @@ def cli(ctx: click.Context):
help="Defines the number of times to run in continuous mode",
)
@click.option("--speak", is_flag=True, help="Enable Speak Mode")
@click.option(
"-b",
"--browser-name",
help="Specifies which web-browser to use when using selenium to scrape the web.",
)
@click.option(
"--allow-downloads",
is_flag=True,
help="Dangerous: Allows AutoGPT to download files natively.",
)
@click.option(
# TODO: this is a hidden option for now, necessary for integration testing.
# We should make this public once we're ready to roll out agent specific workspaces.
"--workspace-directory",
"-w",
type=click.Path(file_okay=False),
hidden=True,
)
@click.option(
"--install-plugin-deps",
is_flag=True,
@@ -110,15 +128,13 @@ def cli(ctx: click.Context):
),
type=click.Choice([i.value for i in LogFormatName]),
)
@click.option(
"--component-config-file",
help="Path to a json configuration file",
type=click.Path(exists=True, dir_okay=False, resolve_path=True, path_type=Path),
)
def run(
continuous: bool,
continuous_limit: Optional[int],
speak: bool,
browser_name: Optional[str],
allow_downloads: bool,
workspace_directory: Optional[Path],
install_plugin_deps: bool,
skip_news: bool,
skip_reprompt: bool,
@@ -132,7 +148,6 @@ def run(
log_level: Optional[str],
log_format: Optional[str],
log_file_format: Optional[str],
component_config_file: Optional[Path],
) -> None:
"""
Sets up and runs an agent, based on the task specified by the user, or resumes an
@@ -150,7 +165,10 @@ def run(
log_level=log_level,
log_format=log_format,
log_file_format=log_file_format,
browser_name=browser_name,
allow_downloads=allow_downloads,
skip_news=skip_news,
workspace_directory=workspace_directory,
install_plugin_deps=install_plugin_deps,
override_ai_name=ai_name,
override_ai_role=ai_role,
@@ -158,11 +176,20 @@ def run(
constraints=list(constraint),
best_practices=list(best_practice),
override_directives=override_directives,
component_config_file=component_config_file,
)
@cli.command()
@click.option(
"-b",
"--browser-name",
help="Specifies which web-browser to use when using selenium to scrape the web.",
)
@click.option(
"--allow-downloads",
is_flag=True,
help="Dangerous: Allows AutoGPT to download files natively.",
)
@click.option(
"--install-plugin-deps",
is_flag=True,
@@ -190,6 +217,8 @@ def run(
type=click.Choice([i.value for i in LogFormatName]),
)
def serve(
browser_name: Optional[str],
allow_downloads: bool,
install_plugin_deps: bool,
debug: bool,
log_level: Optional[str],
@@ -208,6 +237,8 @@ def serve(
log_level=log_level,
log_format=log_format,
log_file_format=log_file_format,
browser_name=browser_name,
allow_downloads=allow_downloads,
install_plugin_deps=install_plugin_deps,
)
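The `--browser-name` and `--allow-downloads` flags above use standard click declarations. A self-contained toy command showing the same pattern (demo only, not the real CLI):

```python
from typing import Optional

import click

@click.command()
@click.option("-b", "--browser-name", help="Browser for selenium to drive.")
@click.option("--allow-downloads", is_flag=True, help="Allow native file downloads.")
def demo(browser_name: Optional[str], allow_downloads: bool) -> None:
    click.echo(f"browser={browser_name!r} downloads={allow_downloads}")

if __name__ == "__main__":
    demo()
```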


@@ -5,18 +5,20 @@ import logging
from typing import Literal, Optional
import click
from colorama import Back, Style
from forge.config.config import GPT_3_MODEL, Config
from forge.llm.providers import ModelName, MultiProvider
from autogpt.app.config import GPT_3_MODEL, AppConfig
logger = logging.getLogger(__name__)
async def apply_overrides_to_config(
config: AppConfig,
config: Config,
continuous: bool = False,
continuous_limit: Optional[int] = None,
skip_reprompt: bool = False,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
skip_news: bool = False,
) -> None:
"""Updates the config object with the given arguments.
@@ -31,6 +33,8 @@ async def apply_overrides_to_config(
log_level (int): The global log level for the application.
log_format (str): The format for the log(s).
log_file_format (str): Override the format for the log file.
browser_name (str): The name of the browser to use for scraping the web.
allow_downloads (bool): Whether to allow AutoGPT to download files natively.
skips_news (bool): Whether to suppress the output of latest news on startup.
"""
config.continuous_mode = False
@@ -51,33 +55,44 @@ async def apply_overrides_to_config(
raise click.UsageError("--continuous-limit can only be used with --continuous")
# Check availability of configured LLMs; fallback to other LLM if unavailable
config.fast_llm, config.smart_llm = await check_models(
(config.fast_llm, "fast_llm"), (config.smart_llm, "smart_llm")
)
config.fast_llm = await check_model(config.fast_llm, "fast_llm")
config.smart_llm = await check_model(config.smart_llm, "smart_llm")
if skip_reprompt:
config.skip_reprompt = True
if browser_name:
config.selenium_web_browser = browser_name
if allow_downloads:
logger.warning(
msg=f"{Back.LIGHTYELLOW_EX}"
"AutoGPT will now be able to download and save files to your machine."
f"{Back.RESET}"
" It is recommended that you monitor any files it downloads carefully.",
)
logger.warning(
msg=f"{Back.RED + Style.BRIGHT}"
"NEVER OPEN FILES YOU AREN'T SURE OF!"
f"{Style.RESET_ALL}",
)
config.allow_downloads = True
if skip_news:
config.skip_news = True
async def check_models(
*models: tuple[ModelName, Literal["smart_llm", "fast_llm"]]
) -> tuple[ModelName, ...]:
async def check_model(
model_name: ModelName, model_type: Literal["smart_llm", "fast_llm"]
) -> ModelName:
"""Check if model is available for use. If not, return gpt-3.5-turbo."""
multi_provider = MultiProvider()
available_models = await multi_provider.get_available_chat_models()
models = await multi_provider.get_available_chat_models()
checked_models: list[ModelName] = []
for model, model_type in models:
if any(model == m.name for m in available_models):
checked_models.append(model)
else:
logger.warning(
f"You don't have access to {model}. "
f"Setting {model_type} to {GPT_3_MODEL}."
)
checked_models.append(GPT_3_MODEL)
if any(model_name == m.name for m in models):
return model_name
return tuple(checked_models)
logger.warning(
f"You don't have access to {model_name}. Setting {model_type} to {GPT_3_MODEL}."
)
return GPT_3_MODEL
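Both variants implement the same fallback: verify each configured model against the provider's available list, substituting GPT-3.5 when access is missing. A synchronous toy version of the batched form (stand-in data; the real check queries `MultiProvider` asynchronously):

```python
FALLBACK = "gpt-3.5-turbo"

def check_models(available: set[str], *models: tuple[str, str]) -> tuple[str, ...]:
    checked: list[str] = []
    for name, role in models:
        if name in available:
            checked.append(name)
        else:
            print(f"You don't have access to {name}. Setting {role} to {FALLBACK}.")
            checked.append(FALLBACK)
    return tuple(checked)

fast_llm, smart_llm = check_models(
    {"gpt-3.5-turbo"},
    ("gpt-4", "fast_llm"),
    ("gpt-4", "smart_llm"),
)
```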


@@ -21,6 +21,7 @@ from forge.components.code_executor.code_executor import (
)
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config, ConfigBuilder, assert_config_has_openai_api_key
from forge.file_storage import FileStorageBackendName, get_storage
from forge.llm.providers import MultiProvider
from forge.logging.config import configure_logging
@@ -33,11 +34,6 @@ from forge.utils.exceptions import AgentTerminated, InvalidAgentResponseError
from autogpt.agent_factory.configurators import configure_agent_with_state, create_agent
from autogpt.agents.agent_manager import AgentManager
from autogpt.agents.prompt_strategies.one_shot import AssistantThoughts
from autogpt.app.config import (
AppConfig,
ConfigBuilder,
assert_config_has_required_llm_api_keys,
)
if TYPE_CHECKING:
from autogpt.agents.agent import Agent
@@ -66,7 +62,10 @@ async def run_auto_gpt(
log_level: Optional[str] = None,
log_format: Optional[str] = None,
log_file_format: Optional[str] = None,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
skip_news: bool = False,
workspace_directory: Optional[Path] = None,
install_plugin_deps: bool = False,
override_ai_name: Optional[str] = None,
override_ai_role: Optional[str] = None,
@@ -74,7 +73,6 @@ async def run_auto_gpt(
constraints: Optional[list[str]] = None,
best_practices: Optional[list[str]] = None,
override_directives: bool = False,
component_config_file: Optional[Path] = None,
):
# Set up configuration
config = ConfigBuilder.build_config_from_env()
@@ -100,13 +98,16 @@ async def run_auto_gpt(
tts_config=config.tts_config,
)
await assert_config_has_required_llm_api_keys(config)
# TODO: fill in llm values here
assert_config_has_openai_api_key(config)
await apply_overrides_to_config(
config=config,
continuous=continuous,
continuous_limit=continuous_limit,
skip_reprompt=skip_reprompt,
browser_name=browser_name,
allow_downloads=allow_downloads,
skip_news=skip_news,
)
@@ -131,12 +132,15 @@ async def run_auto_gpt(
print_python_version_info(logger)
print_attribute("Smart LLM", config.smart_llm)
print_attribute("Fast LLM", config.fast_llm)
print_attribute("Browser", config.selenium_web_browser)
if config.continuous_mode:
print_attribute("Continuous Mode", "ENABLED", title_color=Fore.YELLOW)
if continuous_limit:
print_attribute("Continuous Limit", config.continuous_limit)
if config.tts_config.speak_mode:
print_attribute("Speak Mode", "ENABLED")
if config.allow_downloads:
print_attribute("Native Downloading", "ENABLED")
if we_are_running_in_a_docker_container() or is_docker_available():
print_attribute("Code Execution", "ENABLED")
else:
@@ -323,14 +327,6 @@ async def run_auto_gpt(
# )
# ).add_done_callback(update_agent_directives)
# Load component configuration from file
if _config_file := component_config_file or config.component_config_file:
try:
logger.info(f"Loading component configuration from {_config_file}")
agent.load_component_configs(_config_file.read_text())
except Exception as e:
logger.error(f"Could not load component configuration: {e}")
#################
# Run the Agent #
#################
@@ -357,6 +353,8 @@ async def run_auto_gpt_server(
log_level: Optional[str] = None,
log_format: Optional[str] = None,
log_file_format: Optional[str] = None,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
install_plugin_deps: bool = False,
):
from .agent_protocol_server import AgentProtocolServer
@@ -382,10 +380,13 @@ async def run_auto_gpt_server(
tts_config=config.tts_config,
)
await assert_config_has_required_llm_api_keys(config)
# TODO: fill in llm values here
assert_config_has_openai_api_key(config)
await apply_overrides_to_config(
config=config,
browser_name=browser_name,
allow_downloads=allow_downloads,
)
llm_provider = _configure_llm_provider(config)
@@ -410,7 +411,7 @@ async def run_auto_gpt_server(
)
def _configure_llm_provider(config: AppConfig) -> MultiProvider:
def _configure_llm_provider(config: Config) -> MultiProvider:
multi_provider = MultiProvider()
for model in [config.smart_llm, config.fast_llm]:
# Ensure model providers for configured LLMs are available
@@ -450,15 +451,15 @@ async def run_interaction_loop(
None
"""
# These contain both application config and agent config, so grab them here.
app_config = agent.app_config
legacy_config = agent.legacy_config
ai_profile = agent.state.ai_profile
logger = logging.getLogger(__name__)
cycle_budget = cycles_remaining = _get_cycle_budget(
app_config.continuous_mode, app_config.continuous_limit
legacy_config.continuous_mode, legacy_config.continuous_limit
)
spinner = Spinner(
"Thinking...", plain_output=app_config.logging.plain_console_output
"Thinking...", plain_output=legacy_config.logging.plain_console_output
)
stop_reason = None
@@ -507,25 +508,22 @@ async def run_interaction_loop(
########
handle_stop_signal()
# Have the agent determine the next action to take.
if not (_ep := agent.event_history.current_episode) or _ep.result:
with spinner:
try:
action_proposal = await agent.propose_action()
except InvalidAgentResponseError as e:
logger.warning(f"The agent's thoughts could not be parsed: {e}")
consecutive_failures += 1
if consecutive_failures >= 3:
logger.error(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row. Terminating..."
)
raise AgentTerminated(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row."
)
continue
else:
action_proposal = _ep.action
with spinner:
try:
action_proposal = await agent.propose_action()
except InvalidAgentResponseError as e:
logger.warning(f"The agent's thoughts could not be parsed: {e}")
consecutive_failures += 1
if consecutive_failures >= 3:
logger.error(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row. Terminating..."
)
raise AgentTerminated(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row."
)
continue
consecutive_failures = 0
@@ -536,7 +534,7 @@ async def run_interaction_loop(
update_user(
ai_profile,
action_proposal,
speak_mode=app_config.tts_config.speak_mode,
speak_mode=legacy_config.tts_config.speak_mode,
)
##################
@@ -545,7 +543,7 @@ async def run_interaction_loop(
handle_stop_signal()
if cycles_remaining == 1: # Last cycle
feedback_type, feedback, new_cycles_remaining = await get_user_feedback(
app_config,
legacy_config,
ai_profile,
)
@@ -656,7 +654,7 @@ def update_user(
async def get_user_feedback(
config: AppConfig,
config: Config,
ai_profile: AIProfile,
) -> tuple[UserFeedback, str, int | None]:
"""Gets the user's feedback on the assistant's reply.


@@ -4,10 +4,9 @@ from typing import Optional
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.logging.utils import print_attribute
from autogpt.app.config import AppConfig
from .input import clean_input
logger = logging.getLogger(__name__)
@@ -47,7 +46,7 @@ def apply_overrides_to_ai_settings(
async def interactively_revise_ai_settings(
ai_profile: AIProfile,
directives: AIDirectives,
app_config: AppConfig,
app_config: Config,
):
"""Interactively revise the AI settings.


@@ -22,7 +22,7 @@ logger = logging.getLogger(__name__)
def get_bulletin_from_web():
try:
response = requests.get(
"https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/classic/original_autogpt/BULLETIN.md" # noqa: E501
"https://raw.githubusercontent.com/Significant-Gravitas/AutoGPT/master/autogpt/BULLETIN.md" # noqa: E501
)
if response.status_code == 200:
return response.text
@@ -45,7 +45,7 @@ def vcs_state_diverges_from_master() -> bool:
"""
Returns whether a git repo is present and contains changes that are not in `master`.
"""
paths_we_care_about = "classic/original_autogpt/classic/original_autogpt/**/*.py"
paths_we_care_about = "autogpt/autogpt/**/*.py"
try:
repo = Repo(search_parent_directories=True)


@@ -14,7 +14,7 @@ services:
ports:
- "8000:8000"
volumes:
- ./:/app/classic/original_autogpt/
- ./:/app/autogpt/
- ./docker-compose.yml:/app/docker-compose.yml:ro
# - ./Dockerfile:/app/Dockerfile:ro
profiles: ["exclude-from-up"]
@@ -33,8 +33,8 @@ services:
entrypoint: ["poetry", "run"]
command: ["pytest", "-v"]
volumes:
- ./autogpt:/app/classic/original_autogpt/autogpt
- ./tests:/app/classic/original_autogpt/tests
- ./autogpt:/app/autogpt/autogpt
- ./tests:/app/autogpt/tests
depends_on:
- minio
profiles: ["exclude-from-up"]


(binary image file changed; 33 KiB before and after, contents not shown)

File diff suppressed because it is too large.


@@ -20,24 +20,44 @@ serve = "autogpt.app.cli:serve"
[tool.poetry.dependencies]
python = "^3.10"
anthropic = "^0.25.1"
autogpt-forge = { path = "../forge", develop = true }
# autogpt-forge = {git = "https://github.com/Significant-Gravitas/AutoGPT.git", subdirectory = "forge"}
beautifulsoup4 = "^4.12.2"
charset-normalizer = "^3.1.0"
click = "*"
colorama = "^0.4.6"
distro = "^1.8.0"
en-core-web-sm = { url = "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl" }
fastapi = "^0.109.1"
gitpython = "^3.1.32"
ftfy = "^6.1.1"
google-api-python-client = "*"
hypercorn = "^0.14.4"
inflection = "*"
jsonschema = "*"
numpy = "*"
openai = "^1.7.2"
orjson = "^3.8.10"
pydantic = "^2.7.2"
Pillow = "*"
pydantic = "*"
python-docx = "*"
python-dotenv = "^1.0.0"
pyyaml = "^6.0"
readability-lxml = "^0.8.1"
requests = "*"
sentry-sdk = "^1.40.4"
spacy = "^3.7.4"
tenacity = "^8.2.2"
# OpenAI and Generic plugins import
openapi-python-client = "^0.14.0"
# Benchmarking
agbenchmark = { path = "../benchmark", optional = true }
# agbenchmark = {git = "https://github.com/Significant-Gravitas/AutoGPT.git", subdirectory = "benchmark", optional = true}
psycopg2-binary = "^2.9.9"
multidict = "6.0.5"
cx-freeze = "7.0.0"
[tool.poetry.extras]
benchmark = ["agbenchmark"]
@@ -45,28 +65,27 @@ benchmark = ["agbenchmark"]
[tool.poetry.group.dev.dependencies]
black = "^23.12.1"
flake8 = "^7.0.0"
gitpython = "^3.1.32"
isort = "^5.13.1"
pre-commit = "*"
pyright = "^1.1.364"
# Type stubs
types-beautifulsoup4 = "*"
types-colorama = "*"
types-Markdown = "*"
types-Pillow = "*"
# Testing
asynctest = "*"
coverage = "*"
pytest = "*"
pytest-asyncio = "*"
pytest-benchmark = "*"
pytest-cov = "*"
pytest-integration = "*"
pytest-mock = "*"
pytest-recording = "*"
pytest-xdist = "*"
[tool.poetry.group.build]
optional = true
[tool.poetry.group.build.dependencies]
cx-freeze = { git = "https://github.com/ntindle/cx_Freeze.git", rev = "main" }
# HACK: switch to cx-freeze release package after #2442 and #2472 are merged: https://github.com/marcelotduarte/cx_Freeze/pulls?q=is:pr+%232442+OR+%232472+
# cx-freeze = { version = "^7.2.0", optional = true }
vcrpy = { git = "https://github.com/Significant-Gravitas/vcrpy.git", rev = "master" }
[build-system]
@@ -88,4 +107,8 @@ skip_glob = ["data"]
[tool.pyright]
pythonVersion = "3.10"
exclude = ["data/**", "**/node_modules", "**/__pycache__", "**/.*"]
ignore = ["../classic/forge/**"]
ignore = ["../forge/**"]
[tool.pytest.ini_options]
markers = ["slow", "requires_openai_api_key", "requires_huggingface_api_key"]


@@ -19,7 +19,7 @@ from autogpt.app.utils import coroutine
help="Path to the git repository",
)
@coroutine
async def generate_release_notes(repo_path: Optional[str | Path] = None):
async def generate_release_notes(repo_path: Optional[Path] = None):
logger = logging.getLogger(generate_release_notes.name) # pyright: ignore
repo = Repo(repo_path, search_parent_directories=True)
@@ -55,7 +55,7 @@ async def generate_release_notes(repo_path: Optional[str | Path] = None):
git_log = repo.git.log(
f"{last_release_tag.name}...{new_release_ref}",
"classic/original_autogpt/",
"autogpt/",
no_merges=True,
follow=True,
)
@@ -81,7 +81,7 @@ EXAMPLE_RELEASE_NOTES = """
First some important notes w.r.t. using the application:
* `run.sh` has been renamed to `autogpt.sh`
* The project has been restructured. The AutoGPT Agent is now located in `autogpt`.
* The application no longer uses a single workspace for all tasks. Instead, every task that you run the agent on creates a new workspace folder. See the [usage guide](https://docs.agpt.co/classic/original_autogpt/usage/#workspace) for more information.
* The application no longer uses a single workspace for all tasks. Instead, every task that you run the agent on creates a new workspace folder. See the [usage guide](https://docs.agpt.co/autogpt/usage/#workspace) for more information.
## New features ✨
@@ -94,7 +94,7 @@ First some important notes w.r.t. using the application:
* **Resuming agents 🔄**
In TTY mode, the application will now save the agent's state when quitting, and allows resuming where you left off at a later time!
* **GCS and S3 workspace backends 📦**
To further support running the application as part of a larger system, Google Cloud Storage and S3 workspace backends were added. Configuration options for this can be found in [`.env.template`](/classic/original_autogpt/.env.template).
To further support running the application as part of a larger system, Google Cloud Storage and S3 workspace backends were added. Configuration options for this can be found in [`.env.template`](/autogpt/.env.template).
* **Documentation Rewrite 📖**
The [documentation](https://docs.agpt.co) has been restructured and mostly rewritten to clarify and simplify the instructions, and also to accommodate the other subprojects that are now in the repo.
* **New Project CLI 🔧**

autogpt/setup.py (new file, 60 lines)

@@ -0,0 +1,60 @@
from pkgutil import iter_modules
from shutil import which
from cx_Freeze import Executable, setup
packages = [
m.name
for m in iter_modules()
if m.ispkg
and m.module_finder
and ("poetry" in m.module_finder.path) # type: ignore
]
icon = (
"../../assets/gpt_dark_RGB.icns"
if which("sips")
else "../../assets/gpt_dark_RGB.ico"
)
setup(
executables=[
Executable(
"autogpt/__main__.py", target_name="autogpt", base="console", icon=icon
),
],
options={
"build_exe": {
"packages": packages,
"includes": [
"autogpt",
"spacy",
"spacy.lang",
"spacy.vocab",
"spacy.lang.lex_attrs",
"uvicorn.loops.auto",
"srsly.msgpack.util",
"blis",
"uvicorn.protocols.http.auto",
"uvicorn.protocols.websockets.auto",
"uvicorn.lifespan.on",
],
"excludes": ["readability.compat.two"],
},
"bdist_mac": {
"bundle_name": "AutoGPT",
"iconfile": "../assets/gpt_dark_RGB.icns",
"include_resources": [""],
},
"bdist_dmg": {
"applications_shortcut": True,
"volume_label": "AutoGPT",
},
"bdist_msi": {
"target_name": "AutoGPT",
"add_to_path": True,
"install_icon": "../assets/gpt_dark_RGB.ico",
},
},
)
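cx_Freeze hooks into setuptools commands, so the script above is driven through `setup.py`; `bdist_msi` requires Windows, and `bdist_mac`/`bdist_dmg` require macOS. An illustrative driver, equivalent to running the command from a shell:

```python
import subprocess
import sys

# "build" produces a frozen executable on any platform; swap in
# "bdist_msi", "bdist_dmg", or "bdist_mac" for platform installers.
subprocess.run([sys.executable, "setup.py", "build"], check=True)
```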


@@ -6,6 +6,7 @@ from pathlib import Path
import pytest
from forge.config.ai_profile import AIProfile
from forge.config.config import Config, ConfigBuilder
from forge.file_storage.local import (
FileStorage,
FileStorageConfiguration,
@@ -15,11 +16,11 @@ from forge.llm.providers import MultiProvider
from forge.logging.config import configure_logging
from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.config import AppConfig, ConfigBuilder
from autogpt.app.main import _configure_llm_provider
pytest_plugins = [
"tests.integration.agent_factory",
"tests.vcr",
]
@@ -61,7 +62,7 @@ def config(
@pytest.fixture(scope="session")
def setup_logger():
def setup_logger(config: Config):
configure_logging(
debug=True,
log_dir=Path(__file__).parent / "logs",
@@ -70,14 +71,12 @@ def setup_logger():
@pytest.fixture
def llm_provider(config: AppConfig) -> MultiProvider:
def llm_provider(config: Config) -> MultiProvider:
return _configure_llm_provider(config)
@pytest.fixture
def agent(
config: AppConfig, llm_provider: MultiProvider, storage: FileStorage
) -> Agent:
def agent(config: Config, llm_provider: MultiProvider, storage: FileStorage) -> Agent:
ai_profile = AIProfile(
ai_name="Base",
ai_role="A base AI",
@@ -95,13 +94,13 @@ def agent(
allow_fs_access=not config.restrict_to_workspace,
use_functions_api=config.openai_functions,
),
history=Agent.default_settings.history.model_copy(deep=True),
history=Agent.default_settings.history.copy(deep=True),
)
agent = Agent(
settings=agent_settings,
llm_provider=llm_provider,
file_storage=storage,
app_config=config,
legacy_config=config,
)
return agent
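The fixtures above form a dependency graph (`agent` pulls in `config`, `llm_provider`, and `storage`), which pytest resolves by parameter name. A toy sketch of the mechanism, unrelated to the project's real fixtures:

```python
import pytest

@pytest.fixture
def config() -> dict:
    return {"smart_llm": "gpt-4"}

@pytest.fixture
def agent(config: dict) -> str:
    # pytest injects `config` because the parameter name matches the fixture
    return f"agent(model={config['smart_llm']})"

def test_agent(agent: str) -> None:
    assert "gpt-4" in agent
```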

Some files were not shown because too many files have changed in this diff.