refactor: AutoGPT Platform Stealth Launch Repo Re-Org (#8113)

Restructuring the repo to make the distinction between classic AutoGPT and the new AutoGPT Platform clear:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
  * Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
  * `rnd/autogpt_builder` -> `autogpt_platform/frontend`
  * `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
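The moves above amount to the following sketch (illustrative only; the actual commit used `git mv` so history follows the renames, and the scratch directories stand in for the real project trees):

```shell
set -e
cd "$(mktemp -d)"    # scratch workspace so the sketch runs standalone
mkdir -p autogpt forge frontend benchmark rnd/autogpt_builder rnd/autogpt_server

mkdir -p classic autogpt_platform
mv autogpt classic/original_autogpt    # "classic" AutoGPT, renamed for absolute clarity
mv forge frontend benchmark classic/   # remaining classic projects
mv rnd/autogpt_builder autogpt_platform/frontend
mv rnd/autogpt_server autogpt_platform/backend
rmdir rnd                              # rnd/ is now empty
```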
Author: Swifty
Date: 2024-09-20 16:50:43 +02:00
Committed by: GitHub
Parent: 2dfc927f03
Commit: ef7cfbb860
2820 changed files with 77772 additions and 12178 deletions

.gitattributes (4 changes)

@@ -1,10 +1,10 @@
-frontend/build/** linguist-generated
+classic/frontend/build/** linguist-generated
 **/poetry.lock linguist-generated
 docs/_javascript/** linguist-vendored
 # Exclude VCR cassettes from stats
-forge/tests/vcr_cassettes/**/**.y*ml linguist-generated
+classic/forge/tests/vcr_cassettes/**/**.y*ml linguist-generated
 * text=auto

.github/CODEOWNERS (8 changes)

@@ -1,7 +1,7 @@
 * @Significant-Gravitas/maintainers
 .github/workflows/ @Significant-Gravitas/devops
-forge/ @Significant-Gravitas/forge-maintainers
-benchmark/ @Significant-Gravitas/benchmark-maintainers
-frontend/ @Significant-Gravitas/frontend-maintainers
-rnd/infra @Significant-Gravitas/devops
+classic/forge/ @Significant-Gravitas/forge-maintainers
+classic/benchmark/ @Significant-Gravitas/benchmark-maintainers
+classic/frontend/ @Significant-Gravitas/frontend-maintainers
+autogpt_platform/infra @Significant-Gravitas/devops
 .github/CODEOWNERS @Significant-Gravitas/admins


@@ -9,7 +9,7 @@
 ### Testing 🔍
 > [!NOTE]
-Only for the new autogpt platform, currently in rnd/
+Only for the new autogpt platform, currently in autogpt_platform/
 <!--
 Please make sure your changes have been tested and are in good working condition.

.github/labeler.yml (30 changes)

@@ -1,27 +1,27 @@
-AutoGPT Agent:
+Classic AutoGPT Agent:
 - changed-files:
-  - any-glob-to-any-file: autogpt/**
+  - any-glob-to-any-file: classic/original_autogpt/**
+
+Classic Benchmark:
+- changed-files:
+  - any-glob-to-any-file: classic/benchmark/**
+
+Classic Frontend:
+- changed-files:
+  - any-glob-to-any-file: classic/frontend/**

 Forge:
 - changed-files:
-  - any-glob-to-any-file: forge/**
+  - any-glob-to-any-file: classic/forge/**

-Benchmark:
-- changed-files:
-  - any-glob-to-any-file: benchmark/**
-
-Frontend:
-- changed-files:
-  - any-glob-to-any-file: frontend/**
-
 documentation:
 - changed-files:
   - any-glob-to-any-file: docs/**

-Builder:
+platform/frontend:
 - changed-files:
-  - any-glob-to-any-file: rnd/autogpt_builder/**
+  - any-glob-to-any-file: autogpt_platform/frontend/**

-Server:
+platform/backend:
 - changed-files:
-  - any-glob-to-any-file: rnd/autogpt_server/**
+  - any-glob-to-any-file: autogpt_platform/backend/**
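As a sanity check, the updated label mapping can be approximated with plain shell patterns. This is a hypothetical helper mimicking, not using, the actions/labeler glob engine (`case` patterns are a rough stand-in for `any-glob-to-any-file`):

```shell
# Map a changed file path to the label it would receive under the new config.
label_for() {
  case "$1" in
    classic/original_autogpt/*)  echo "Classic AutoGPT Agent" ;;
    classic/benchmark/*)         echo "Classic Benchmark" ;;
    classic/frontend/*)          echo "Classic Frontend" ;;
    classic/forge/*)             echo "Forge" ;;
    docs/*)                      echo "documentation" ;;
    autogpt_platform/frontend/*) echo "platform/frontend" ;;
    autogpt_platform/backend/*)  echo "platform/backend" ;;
    *)                           echo "(no label)" ;;
  esac
}

label_for "autogpt_platform/backend/app.py"   # prints: platform/backend
label_for "classic/forge/forge/llm.py"        # prints: Forge
```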


@@ -1,97 +0,0 @@
name: AutoGPTs Nightly Benchmark

on:
  workflow_dispatch:
  schedule:
    - cron: '0 2 * * *'

jobs:
  benchmark:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    strategy:
      matrix:
        agent-name: [ autogpt ]
      fail-fast: false
    timeout-minutes: 120
    env:
      min-python-version: '3.10'
      REPORTS_BRANCH: data/benchmark-reports
      REPORTS_FOLDER: ${{ format('benchmark/reports/{0}', matrix.agent-name) }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          submodules: true

      - name: Set up Python ${{ env.min-python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.min-python-version }}

      - name: Install Poetry
        run: curl -sSL https://install.python-poetry.org | python -

      - name: Prepare reports folder
        run: mkdir -p ${{ env.REPORTS_FOLDER }}

      - run: poetry -C benchmark install

      - name: Benchmark ${{ matrix.agent-name }}
        run: |
          ./run agent start ${{ matrix.agent-name }}
          cd ${{ matrix.agent-name }}

          set +e  # Do not quit on non-zero exit codes
          poetry run agbenchmark run -N 3 \
            --test=ReadFile \
            --test=BasicRetrieval --test=RevenueRetrieval2 \
            --test=CombineCsv --test=LabelCsv --test=AnswerQuestionCombineCsv \
            --test=UrlShortener --test=TicTacToe --test=Battleship \
            --test=WebArenaTask_0 --test=WebArenaTask_21 --test=WebArenaTask_124 \
            --test=WebArenaTask_134 --test=WebArenaTask_163

          # Convert exit code 1 (some challenges failed) to exit code 0
          if [ $? -eq 0 ] || [ $? -eq 1 ]; then
            exit 0
          else
            exit $?
          fi
        env:
          AGENT_NAME: ${{ matrix.agent-name }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
          REPORTS_FOLDER: ${{ format('../../{0}', env.REPORTS_FOLDER) }}  # account for changed workdir
          TELEMETRY_ENVIRONMENT: autogpt-benchmark-ci
          TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}

      - name: Push reports to data branch
        run: |
          # BODGE: Remove success_rate.json and regression_tests.json to avoid conflicts on checkout
          rm ${{ env.REPORTS_FOLDER }}/*.json

          # Find folder with newest (untracked) report in it
          report_subfolder=$(find ${{ env.REPORTS_FOLDER }} -type f -name 'report.json' \
            | xargs -I {} dirname {} \
            | xargs -I {} git ls-files --others --exclude-standard {} \
            | xargs -I {} dirname {} \
            | sort -u)
          json_report_file="$report_subfolder/report.json"

          # Convert JSON report to Markdown
          markdown_report_file="$report_subfolder/report.md"
          poetry -C benchmark run benchmark/reports/format.py "$json_report_file" > "$markdown_report_file"
          cat "$markdown_report_file" >> $GITHUB_STEP_SUMMARY

          git config --global user.name 'GitHub Actions'
          git config --global user.email 'github-actions@agpt.co'
          git fetch origin ${{ env.REPORTS_BRANCH }}:${{ env.REPORTS_BRANCH }} \
            && git checkout ${{ env.REPORTS_BRANCH }} \
            || git checkout --orphan ${{ env.REPORTS_BRANCH }}
          git reset --hard
          git add ${{ env.REPORTS_FOLDER }}
          git commit -m "Benchmark report for ${{ matrix.agent-name }} @ $(date +'%Y-%m-%d')" \
            && git push origin ${{ env.REPORTS_BRANCH }}
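A note on the removed step's "Convert exit code 1 (some challenges failed) to exit code 0" logic: `[ $? -eq 0 ] || [ $? -eq 1 ]` re-reads `$?` after the first comparison has already overwritten it. A sketch of the intended conversion that captures the code once (the `false` command is a hypothetical stand-in for the `poetry run agbenchmark` invocation):

```shell
set +e
false      # stand-in for `poetry run agbenchmark run ...`; exits with code 1 here
rc=$?      # capture the exit code exactly once, before any other command runs
set -e

# Treat "some challenges failed" (1) the same as full success (0).
if [ "$rc" -le 1 ]; then
  result=0
else
  result=$rc
fi
echo "result=$result"   # prints: result=0
```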


@@ -1,25 +1,25 @@
-name: AutoGPT CI
+name: Classic - AutoGPT CI

 on:
   push:
     branches: [ master, development, ci-test* ]
     paths:
-      - '.github/workflows/autogpt-ci.yml'
-      - 'autogpt/**'
+      - '.github/workflows/classic-autogpt-ci.yml'
+      - 'classic/original_autogpt/**'
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - '.github/workflows/autogpt-ci.yml'
-      - 'autogpt/**'
+      - '.github/workflows/classic-autogpt-ci.yml'
+      - 'classic/original_autogpt/**'

 concurrency:
-  group: ${{ format('autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
+  group: ${{ format('classic-autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
   cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}

 defaults:
   run:
     shell: bash
-    working-directory: autogpt
+    working-directory: classic/original_autogpt

 jobs:
   test:

@@ -86,7 +86,7 @@ jobs:
       uses: actions/cache@v4
       with:
         path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
-        key: poetry-${{ runner.os }}-${{ hashFiles('autogpt/poetry.lock') }}
+        key: poetry-${{ runner.os }}-${{ hashFiles('classic/original_autogpt/poetry.lock') }}

     - name: Install Poetry (Unix)
       if: runner.os != 'Windows'

@@ -135,4 +135,4 @@ jobs:
       uses: actions/upload-artifact@v4
       with:
         name: test-logs
-        path: autogpt/logs/
+        path: classic/original_autogpt/logs/


@@ -1,4 +1,4 @@
-name: Purge Auto-GPT Docker CI cache
+name: Classic - Purge Auto-GPT Docker CI cache
 on:
   schedule:

@@ -25,7 +25,8 @@ jobs:
       name: Build image
       uses: docker/build-push-action@v5
       with:
-        file: Dockerfile.autogpt
+        context: classic/
+        file: classic/Dockerfile.autogpt
         build-args: BUILD_TYPE=${{ matrix.build-type }}
         load: true  # save to docker images
         # use GHA cache as read-only


@@ -1,24 +1,26 @@
-name: AutoGPT Docker CI
+name: Classic - AutoGPT Docker CI

 on:
   push:
     branches: [ master, development ]
     paths:
-      - '.github/workflows/autogpt-docker-ci.yml'
-      - 'autogpt/**'
+      - '.github/workflows/classic-autogpt-docker-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - '.github/workflows/autogpt-docker-ci.yml'
-      - 'autogpt/**'
+      - '.github/workflows/classic-autogpt-docker-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'

 concurrency:
-  group: ${{ format('autogpt-docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
+  group: ${{ format('classic-autogpt-docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
   cancel-in-progress: ${{ github.event_name == 'pull_request' }}

 defaults:
   run:
-    working-directory: autogpt
+    working-directory: classic/original_autogpt

 env:
   IMAGE_NAME: auto-gpt

@@ -47,7 +49,8 @@ jobs:
       name: Build image
       uses: docker/build-push-action@v5
       with:
-        file: Dockerfile.autogpt
+        context: classic/
+        file: classic/Dockerfile.autogpt
         build-args: BUILD_TYPE=${{ matrix.build-type }}
         tags: ${{ env.IMAGE_NAME }}
         labels: GIT_REVISION=${{ github.sha }}

@@ -116,7 +119,8 @@ jobs:
       name: Build image
       uses: docker/build-push-action@v5
       with:
-        file: Dockerfile.autogpt
+        context: classic/
+        file: classic/Dockerfile.autogpt
         build-args: BUILD_TYPE=dev  # include pytest
         tags: >
           ${{ env.IMAGE_NAME }},


@@ -1,4 +1,4 @@
-name: AutoGPT Docker Release
+name: Classic - AutoGPT Docker Release
 on:
   release:

@@ -44,6 +44,7 @@ jobs:
       name: Build image
       uses: docker/build-push-action@v5
       with:
+        context: classic/
         file: Dockerfile.autogpt
         build-args: BUILD_TYPE=release
         load: true  # save to docker images


@@ -1,4 +1,4 @@
-name: Agent smoke tests
+name: Classic - Agent smoke tests
 on:
   workflow_dispatch:

@@ -7,32 +7,37 @@ on:
   push:
     branches: [ master, development, ci-test* ]
     paths:
-      - '.github/workflows/autogpts-ci.yml'
-      - 'autogpt/**'
-      - 'forge/**'
-      - 'benchmark/**'
-      - 'run'
-      - 'cli.py'
-      - 'setup.py'
+      - '.github/workflows/classic-autogpts-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'
+      - 'classic/benchmark/**'
+      - 'classic/run'
+      - 'classic/cli.py'
+      - 'classic/setup.py'
       - '!**/*.md'
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - '.github/workflows/autogpts-ci.yml'
-      - 'autogpt/**'
-      - 'forge/**'
-      - 'benchmark/**'
-      - 'run'
-      - 'cli.py'
-      - 'setup.py'
+      - '.github/workflows/classic-autogpts-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'
+      - 'classic/benchmark/**'
+      - 'classic/run'
+      - 'classic/cli.py'
+      - 'classic/setup.py'
       - '!**/*.md'

+defaults:
+  run:
+    shell: bash
+    working-directory: classic
+
 jobs:
   serve-agent-protocol:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        agent-name: [ autogpt ]
+        agent-name: [ original_autogpt ]
       fail-fast: false
     timeout-minutes: 20
     env:

@@ -50,7 +55,7 @@ jobs:
         python-version: ${{ env.min-python-version }}

     - name: Install Poetry
-      working-directory: ./${{ matrix.agent-name }}/
+      working-directory: ./classic/${{ matrix.agent-name }}/
       run: |
         curl -sSL https://install.python-poetry.org | python -


@@ -1,18 +1,18 @@
-name: AGBenchmark CI
+name: Classic - AGBenchmark CI

 on:
   push:
     branches: [ master, development, ci-test* ]
     paths:
-      - 'benchmark/**'
-      - .github/workflows/benchmark-ci.yml
-      - '!benchmark/reports/**'
+      - 'classic/benchmark/**'
+      - '!classic/benchmark/reports/**'
+      - .github/workflows/classic-benchmark-ci.yml
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - 'benchmark/**'
-      - '!benchmark/reports/**'
-      - .github/workflows/benchmark-ci.yml
+      - 'classic/benchmark/**'
+      - '!classic/benchmark/reports/**'
+      - .github/workflows/classic-benchmark-ci.yml

 concurrency:
   group: ${{ format('benchmark-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}

@@ -39,7 +39,7 @@ jobs:
     defaults:
       run:
         shell: bash
-        working-directory: benchmark
+        working-directory: classic/benchmark
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4

@@ -58,7 +58,7 @@ jobs:
       uses: actions/cache@v4
       with:
         path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
-        key: poetry-${{ runner.os }}-${{ hashFiles('benchmark/poetry.lock') }}
+        key: poetry-${{ runner.os }}-${{ hashFiles('classic/benchmark/poetry.lock') }}

     - name: Install Poetry (Unix)
       if: runner.os != 'Windows'

@@ -122,7 +122,7 @@ jobs:
         curl -sSL https://install.python-poetry.org | python -

     - name: Run regression tests
-      working-directory: .
+      working-directory: classic
       run: |
         ./run agent start ${{ matrix.agent-name }}
         cd ${{ matrix.agent-name }}

@@ -155,7 +155,7 @@ jobs:
         poetry run agbenchmark --mock

-        CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../frontend/assets)') || echo "No diffs"
+        CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
         if [ ! -z "$CHANGED" ]; then
           echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
           echo "$CHANGED"


@@ -1,4 +1,4 @@
-name: Publish to PyPI
+name: Classic - Publish to PyPI
 on:
   workflow_dispatch:

@@ -21,21 +21,21 @@ jobs:
         python-version: 3.8

     - name: Install Poetry
-      working-directory: ./benchmark/
+      working-directory: ./classic/benchmark/
       run: |
         curl -sSL https://install.python-poetry.org | python3 -
         echo "$HOME/.poetry/bin" >> $GITHUB_PATH

     - name: Build project for distribution
-      working-directory: ./benchmark/
+      working-directory: ./classic/benchmark/
       run: poetry build

     - name: Install dependencies
-      working-directory: ./benchmark/
+      working-directory: ./classic/benchmark/
       run: poetry install

     - name: Check Version
-      working-directory: ./benchmark/
+      working-directory: ./classic/benchmark/
       id: check-version
       run: |
         echo version=$(poetry version --short) >> $GITHUB_OUTPUT

@@ -43,7 +43,7 @@ jobs:
     - name: Create Release
       uses: ncipollo/release-action@v1
       with:
-        artifacts: "benchmark/dist/*"
+        artifacts: "classic/benchmark/dist/*"
         token: ${{ secrets.GITHUB_TOKEN }}
         draft: false
         generateReleaseNotes: false

@@ -51,5 +51,5 @@ jobs:
         commit: master

     - name: Build and publish
-      working-directory: ./benchmark/
+      working-directory: ./classic/benchmark/
       run: poetry publish -u __token__ -p ${{ secrets.PYPI_API_TOKEN }}


@@ -1,18 +1,18 @@
-name: Forge CI
+name: Classic - Forge CI

 on:
   push:
     branches: [ master, development, ci-test* ]
     paths:
-      - '.github/workflows/forge-ci.yml'
-      - 'forge/**'
-      - '!forge/tests/vcr_cassettes'
+      - '.github/workflows/classic-forge-ci.yml'
+      - 'classic/forge/**'
+      - '!classic/forge/tests/vcr_cassettes'
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - '.github/workflows/forge-ci.yml'
-      - 'forge/**'
-      - '!forge/tests/vcr_cassettes'
+      - '.github/workflows/classic-forge-ci.yml'
+      - 'classic/forge/**'
+      - '!classic/forge/tests/vcr_cassettes'

 concurrency:
   group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}

@@ -21,7 +21,7 @@ concurrency:
 defaults:
   run:
     shell: bash
-    working-directory: forge
+    working-directory: classic/forge

 jobs:
   test:

@@ -110,7 +110,7 @@ jobs:
       uses: actions/cache@v4
       with:
         path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
-        key: poetry-${{ runner.os }}-${{ hashFiles('forge/poetry.lock') }}
+        key: poetry-${{ runner.os }}-${{ hashFiles('classic/forge/poetry.lock') }}

     - name: Install Poetry (Unix)
       if: runner.os != 'Windows'

@@ -233,4 +233,4 @@ jobs:
       uses: actions/upload-artifact@v4
       with:
         name: test-logs
-        path: forge/logs/
+        path: classic/forge/logs/


@@ -1,4 +1,4 @@
-name: Frontend CI/CD
+name: Classic - Frontend CI/CD
 on:
   push:

@@ -7,12 +7,12 @@ on:
       - development
       - 'ci-test*'  # This will match any branch that starts with "ci-test"
     paths:
-      - 'frontend/**'
-      - '.github/workflows/frontend-ci.yml'
+      - 'classic/frontend/**'
+      - '.github/workflows/classic-frontend-ci.yml'
   pull_request:
     paths:
-      - 'frontend/**'
-      - '.github/workflows/frontend-ci.yml'
+      - 'classic/frontend/**'
+      - '.github/workflows/classic-frontend-ci.yml'

 jobs:
   build:

@@ -21,7 +21,7 @@ jobs:
       pull-requests: write
     runs-on: ubuntu-latest
     env:
-      BUILD_BRANCH: ${{ format('frontend-build/{0}', github.ref_name) }}
+      BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
     steps:
       - name: Checkout Repo

@@ -34,7 +34,7 @@ jobs:
     - name: Build Flutter to Web
       run: |
-        cd frontend
+        cd classic/frontend
         flutter build web --base-href /app/

     # - name: Commit and Push to ${{ env.BUILD_BRANCH }}

@@ -42,7 +42,7 @@ jobs:
     #   run: |
     #     git config --local user.email "action@github.com"
     #     git config --local user.name "GitHub Action"
-    #     git add frontend/build/web
+    #     git add classic/frontend/build/web
     #     git checkout -B ${{ env.BUILD_BRANCH }}
     #     git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
     #     git push -f origin ${{ env.BUILD_BRANCH }}

@@ -51,7 +51,7 @@ jobs:
       if: github.event_name == 'push'
       uses: peter-evans/create-pull-request@v6
       with:
-        add-paths: frontend/build/web
+        add-paths: classic/frontend/build/web
         base: ${{ github.ref_name }}
         branch: ${{ env.BUILD_BRANCH }}
         delete-branch: true


@@ -1,27 +1,27 @@
-name: Python checks
+name: Classic - Python checks

 on:
   push:
     branches: [ master, development, ci-test* ]
     paths:
-      - '.github/workflows/lint-ci.yml'
-      - 'autogpt/**'
-      - 'forge/**'
-      - 'benchmark/**'
+      - '.github/workflows/classic-python-checks-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'
+      - 'classic/benchmark/**'
       - '**.py'
-      - '!forge/tests/vcr_cassettes'
+      - '!classic/forge/tests/vcr_cassettes'
   pull_request:
     branches: [ master, development, release-* ]
     paths:
-      - '.github/workflows/lint-ci.yml'
-      - 'autogpt/**'
-      - 'forge/**'
-      - 'benchmark/**'
+      - '.github/workflows/classic-python-checks-ci.yml'
+      - 'classic/original_autogpt/**'
+      - 'classic/forge/**'
+      - 'classic/benchmark/**'
       - '**.py'
-      - '!forge/tests/vcr_cassettes'
+      - '!classic/forge/tests/vcr_cassettes'

 concurrency:
-  group: ${{ format('lint-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
+  group: ${{ format('classic-python-checks-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
   cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}

 defaults:

@@ -40,18 +40,18 @@ jobs:
       uses: dorny/paths-filter@v3
       with:
         filters: |
-          autogpt:
-            - autogpt/autogpt/**
-            - autogpt/tests/**
-            - autogpt/poetry.lock
+          original_autogpt:
+            - classic/original_autogpt/autogpt/**
+            - classic/original_autogpt/tests/**
+            - classic/original_autogpt/poetry.lock
           forge:
-            - forge/forge/**
-            - forge/tests/**
-            - forge/poetry.lock
+            - classic/forge/forge/**
+            - classic/forge/tests/**
+            - classic/forge/poetry.lock
           benchmark:
-            - benchmark/agbenchmark/**
-            - benchmark/tests/**
-            - benchmark/poetry.lock
+            - classic/benchmark/agbenchmark/**
+            - classic/benchmark/tests/**
+            - classic/benchmark/poetry.lock
     outputs:
       changed-parts: ${{ steps.changes-in.outputs.changes }}

@@ -89,23 +89,23 @@ jobs:
     # Install dependencies
     - name: Install Python dependencies
-      run: poetry -C ${{ matrix.sub-package }} install
+      run: poetry -C classic/${{ matrix.sub-package }} install

     # Lint
     - name: Lint (isort)
       run: poetry run isort --check .
-      working-directory: ${{ matrix.sub-package }}
+      working-directory: classic/${{ matrix.sub-package }}

     - name: Lint (Black)
       if: success() || failure()
       run: poetry run black --check .
-      working-directory: ${{ matrix.sub-package }}
+      working-directory: classic/${{ matrix.sub-package }}

     - name: Lint (Flake8)
       if: success() || failure()
       run: poetry run flake8 .
-      working-directory: ${{ matrix.sub-package }}
+      working-directory: classic/${{ matrix.sub-package }}

   types:
     needs: get-changed-parts

@@ -141,11 +141,11 @@ jobs:
     # Install dependencies
     - name: Install Python dependencies
-      run: poetry -C ${{ matrix.sub-package }} install
+      run: poetry -C classic/${{ matrix.sub-package }} install

     # Typecheck
     - name: Typecheck
       if: success() || failure()
       run: poetry run pyright
-      working-directory: ${{ matrix.sub-package }}
+      working-directory: classic/${{ matrix.sub-package }}


@@ -1,133 +0,0 @@
name: Hackathon

on:
  workflow_dispatch:
    inputs:
      agents:
        description: "Agents to run (comma-separated)"
        required: false
        default: "autogpt"  # Default agents if none are specified

jobs:
  matrix-setup:
    runs-on: ubuntu-latest
    # Service containers to run with `matrix-setup`
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: postgres
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          # Maps tcp port 5432 on service container to the host
          - 5432:5432
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
      env-name: ${{ steps.set-matrix.outputs.env-name }}
    steps:
      - id: set-matrix
        run: |
          if [ "${{ github.event_name }}" == "schedule" ]; then
            echo "::set-output name=env-name::production"
            echo "::set-output name=matrix::[ 'irrelevant']"
          elif [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
            IFS=',' read -ra matrix_array <<< "${{ github.event.inputs.agents }}"
            matrix_string="[ \"$(echo "${matrix_array[@]}" | sed 's/ /", "/g')\" ]"
            echo "::set-output name=env-name::production"
            echo "::set-output name=matrix::$matrix_string"
          else
            echo "::set-output name=env-name::testing"
            echo "::set-output name=matrix::[ 'irrelevant' ]"
          fi

  tests:
    environment:
      name: "${{ needs.matrix-setup.outputs.env-name }}"
    needs: matrix-setup
    env:
      min-python-version: "3.10"
    name: "${{ matrix.agent-name }}"
    runs-on: ubuntu-latest
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: postgres
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          # Maps tcp port 5432 on service container to the host
          - 5432:5432
    timeout-minutes: 50
    strategy:
      fail-fast: false
      matrix:
        agent-name: ${{fromJson(needs.matrix-setup.outputs.matrix)}}
    steps:
      - name: Print Environment Name
        run: |
          echo "Matrix Setup Environment Name: ${{ needs.matrix-setup.outputs.env-name }}"

      - name: Check Docker Container
        id: check
        run: docker ps

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          submodules: true

      - name: Set up Python ${{ env.min-python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.min-python-version }}

      - id: get_date
        name: Get date
        run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT

      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python -

      - name: Install Node.js
        uses: actions/setup-node@v4
        with:
          node-version: v18.15

      - name: Run benchmark
        run: |
          link=$(jq -r '.["github_repo_url"]' arena/$AGENT_NAME.json)
          branch=$(jq -r '.["branch_to_benchmark"]' arena/$AGENT_NAME.json)
          git clone "$link" -b "$branch" "$AGENT_NAME"
          cd $AGENT_NAME
          cp ./$AGENT_NAME/.env.example ./$AGENT_NAME/.env || echo "file not found"
          ./run agent start $AGENT_NAME
          cd ../benchmark
          poetry install
          poetry run agbenchmark --no-dep
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          SERP_API_KEY: ${{ secrets.SERP_API_KEY }}
          SERPAPI_API_KEY: ${{ secrets.SERP_API_KEY }}
          WEAVIATE_API_KEY: ${{ secrets.WEAVIATE_API_KEY }}
          WEAVIATE_URL: ${{ secrets.WEAVIATE_URL }}
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          GOOGLE_CUSTOM_SEARCH_ENGINE_ID: ${{ secrets.GOOGLE_CUSTOM_SEARCH_ENGINE_ID }}
          AGENT_NAME: ${{ matrix.agent-name }}
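The removed workflow's `set-matrix` step builds a JSON-style matrix string from the comma-separated `agents` input. The transformation in isolation, with a hypothetical sample input standing in for `github.event.inputs.agents`:

```shell
# Turn "a,b" into the matrix string the workflow emitted via ::set-output.
agents_input="autogpt,forge"                 # sample value, not from GitHub context
IFS=',' read -ra matrix_array <<< "$agents_input"
matrix_string="[ \"$(echo "${matrix_array[@]}" | sed 's/ /", "/g')\" ]"
echo "$matrix_string"   # prints: [ "autogpt", "forge" ]
```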


@@ -1,20 +1,20 @@
-name: AutoGPT Builder Infra
+name: AutoGPT Platform - Infra

 on:
   push:
     branches: [ master ]
     paths:
-      - '.github/workflows/autogpt-infra-ci.yml'
-      - 'rnd/infra/**'
+      - '.github/workflows/platform-autogpt-infra-ci.yml'
+      - 'autogpt_platform/infra/**'
   pull_request:
     paths:
-      - '.github/workflows/autogpt-infra-ci.yml'
-      - 'rnd/infra/**'
+      - '.github/workflows/platform-autogpt-infra-ci.yml'
+      - 'autogpt_platform/infra/**'

 defaults:
   run:
     shell: bash
-    working-directory: rnd/infra
+    working-directory: autogpt_platform/infra

 jobs:
   lint:

@@ -53,4 +53,4 @@ jobs:
     - name: Run chart-testing (lint)
       if: steps.list-changed.outputs.changed == 'true'
       run: ct lint --target-branch ${{ github.event.repository.default_branch }}


@@ -1,25 +1,25 @@
-name: AutoGPT Server CI
+name: AutoGPT Platform - Backend CI

 on:
   push:
     branches: [master, development, ci-test*]
     paths:
-      - ".github/workflows/autogpt-server-ci.yml"
-      - "rnd/autogpt_server/**"
+      - ".github/workflows/platform-backend-ci.yml"
+      - "autogpt_platform/backend/**"
   pull_request:
     branches: [master, development, release-*]
     paths:
-      - ".github/workflows/autogpt-server-ci.yml"
-      - "rnd/autogpt_server/**"
+      - ".github/workflows/platform-backend-ci.yml"
+      - "autogpt_platform/backend/**"

 concurrency:
-  group: ${{ format('autogpt-server-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
+  group: ${{ format('backend-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
   cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}

 defaults:
   run:
     shell: bash
-    working-directory: rnd/autogpt_server
+    working-directory: autogpt_platform/backend

 jobs:
   test:

@@ -90,7 +90,7 @@ jobs:
       uses: actions/cache@v4
       with:
         path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
-        key: poetry-${{ runner.os }}-${{ hashFiles('rnd/autogpt_server/poetry.lock') }}
+        key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

     - name: Install Poetry (Unix)
       if: runner.os != 'Windows'

@@ -152,4 +152,4 @@ jobs:
     #   uses: codecov/codecov-action@v4
     #   with:
     #     token: ${{ secrets.CODECOV_TOKEN }}
-    #     flags: autogpt-server,${{ runner.os }}
+    #     flags: backend,${{ runner.os }}

@@ -1,20 +1,20 @@
-name: AutoGPT Builder CI
+name: AutoGPT Platform - Frontend CI
 on:
   push:
     branches: [ master ]
     paths:
-      - '.github/workflows/autogpt-builder-ci.yml'
+      - '.github/workflows/platform-frontend-ci.yml'
-      - 'rnd/autogpt_builder/**'
+      - 'autogpt_platform/frontend/**'
   pull_request:
     paths:
-      - '.github/workflows/autogpt-builder-ci.yml'
+      - '.github/workflows/platform-frontend-ci.yml'
-      - 'rnd/autogpt_builder/**'
+      - 'autogpt_platform/frontend/**'
 defaults:
   run:
     shell: bash
-    working-directory: rnd/autogpt_builder
+    working-directory: autogpt_platform/frontend
 jobs:

@@ -1,4 +1,4 @@
-name: 'Close stale issues'
+name: Repo - Close stale issues
 on:
   schedule:
     - cron: '30 1 * * *'

@@ -1,12 +1,12 @@
-name: "Pull Request auto-label"
+name: Repo - Pull Request auto-label
 on:
   # So that PRs touching the same files as the push are updated
   push:
     branches: [ master, development, release-* ]
     paths-ignore:
-      - 'forge/tests/vcr_cassettes'
+      - 'classic/forge/tests/vcr_cassettes'
-      - 'benchmark/reports/**'
+      - 'classic/benchmark/reports/**'
   # So that the `dirtyLabel` is removed if conflicts are resolve
   # We recommend `pull_request_target` so that github secrets are available.
   # In `pull_request` we wouldn't be able to change labels of fork PRs

@@ -1,4 +1,4 @@
-name: github-repo-stats
+name: Repo - Github Stats
 on:
   schedule:

@@ -1,4 +1,4 @@
-name: PR Status Checker
+name: Repo - PR Status Checker
 on:
   pull_request:
     types: [opened, synchronize, reopened]
@@ -26,6 +26,6 @@ jobs:
           echo "Current directory before running Python script:"
           pwd
           echo "Attempting to run Python script:"
-          python check_actions_status.py
+          python .github/workflows/scripts/check_actions_status.py
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore vendored

@@ -1,7 +1,7 @@
 ## Original ignores
 .github_access_token
-autogpt/keys.py
+classic/original_autogpt/keys.py
-autogpt/*.json
+classic/original_autogpt/*.json
 auto_gpt_workspace/*
 *.mpeg
 .env
@@ -157,7 +157,7 @@ openai/
 CURRENT_BULLETIN.md
 # AgBenchmark
-agbenchmark/reports/
+classic/benchmark/agbenchmark/reports/
 # Nodejs
 package-lock.json
@@ -170,4 +170,4 @@ pri*
 ig*
 .github_access_token
 LICENSE.rtf
-rnd/autogpt_server/settings.py
+autogpt_platform/backend/settings.py

.gitmodules vendored

@@ -1,6 +1,6 @@
-[submodule "forge/tests/vcr_cassettes"]
+[submodule "classic/forge/tests/vcr_cassettes"]
-	path = forge/tests/vcr_cassettes
+	path = classic/forge/tests/vcr_cassettes
 	url = https://github.com/Significant-Gravitas/Auto-GPT-test-cassettes
-[submodule "rnd/supabase"]
+[submodule "autogpt_platform/supabase"]
-	path = rnd/supabase
+	path = autogpt_platform/supabase
 	url = https://github.com/supabase/supabase.git

@@ -16,22 +16,22 @@ repos:
     hooks:
       - id: isort-autogpt
         name: Lint (isort) - AutoGPT
-        entry: poetry -C autogpt run isort
+        entry: poetry -C classic/original_autogpt run isort
-        files: ^autogpt/
+        files: ^classic/original_autogpt/
         types: [file, python]
         language: system
       - id: isort-forge
         name: Lint (isort) - Forge
-        entry: poetry -C forge run isort
+        entry: poetry -C classic/forge run isort
-        files: ^forge/
+        files: ^classic/forge/
         types: [file, python]
         language: system
       - id: isort-benchmark
         name: Lint (isort) - Benchmark
-        entry: poetry -C benchmark run isort
+        entry: poetry -C classic/benchmark run isort
-        files: ^benchmark/
+        files: ^classic/benchmark/
         types: [file, python]
         language: system
@@ -52,20 +52,20 @@ repos:
       - id: flake8
         name: Lint (Flake8) - AutoGPT
         alias: flake8-autogpt
-        files: ^autogpt/(autogpt|scripts|tests)/
+        files: ^classic/original_autogpt/(autogpt|scripts|tests)/
-        args: [--config=autogpt/.flake8]
+        args: [--config=classic/original_autogpt/.flake8]
       - id: flake8
         name: Lint (Flake8) - Forge
         alias: flake8-forge
-        files: ^forge/(forge|tests)/
+        files: ^classic/forge/(forge|tests)/
-        args: [--config=forge/.flake8]
+        args: [--config=classic/forge/.flake8]
       - id: flake8
         name: Lint (Flake8) - Benchmark
         alias: flake8-benchmark
-        files: ^benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
+        files: ^classic/benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
-        args: [--config=benchmark/.flake8]
+        args: [--config=classic/benchmark/.flake8]
   - repo: local
     # To have watertight type checking, we check *all* the files in an affected
@@ -74,10 +74,10 @@ repos:
       - id: pyright
        name: Typecheck - AutoGPT
        alias: pyright-autogpt
-        entry: poetry -C autogpt run pyright
+        entry: poetry -C classic/original_autogpt run pyright
        args: [-p, autogpt, autogpt]
        # include forge source (since it's a path dependency) but exclude *_test.py files:
-        files: ^(autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
+        files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
        types: [file]
        language: system
        pass_filenames: false
@@ -85,9 +85,9 @@ repos:
       - id: pyright
        name: Typecheck - Forge
        alias: pyright-forge
-        entry: poetry -C forge run pyright
+        entry: poetry -C classic/forge run pyright
        args: [-p, forge, forge]
-        files: ^forge/(forge/|poetry\.lock$)
+        files: ^classic/forge/(classic/forge/|poetry\.lock$)
        types: [file]
        language: system
        pass_filenames: false
@@ -95,9 +95,9 @@ repos:
       - id: pyright
        name: Typecheck - Benchmark
        alias: pyright-benchmark
-        entry: poetry -C benchmark run pyright
+        entry: poetry -C classic/benchmark run pyright
        args: [-p, benchmark, benchmark]
-        files: ^benchmark/(agbenchmark/|tests/|poetry\.lock$)
+        files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
        types: [file]
        language: system
        pass_filenames: false
@@ -106,22 +106,22 @@ repos:
     hooks:
       - id: pytest-autogpt
         name: Run tests - AutoGPT (excl. slow tests)
-        entry: bash -c 'cd autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
+        entry: bash -c 'cd classic/original_autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
         # include forge source (since it's a path dependency) but exclude *_test.py files:
-        files: ^(autogpt/((autogpt|tests)/|poetry\.lock$)|forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
+        files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
         language: system
         pass_filenames: false
       - id: pytest-forge
         name: Run tests - Forge (excl. slow tests)
-        entry: bash -c 'cd forge && poetry run pytest --cov=forge -m "not slow"'
+        entry: bash -c 'cd classic/forge && poetry run pytest --cov=forge -m "not slow"'
-        files: ^forge/(forge/|tests/|poetry\.lock$)
+        files: ^classic/forge/(classic/forge/|tests/|poetry\.lock$)
         language: system
         pass_filenames: false
       - id: pytest-benchmark
         name: Run tests - Benchmark
-        entry: bash -c 'cd benchmark && poetry run pytest --cov=benchmark'
+        entry: bash -c 'cd classic/benchmark && poetry run pytest --cov=benchmark'
-        files: ^benchmark/(agbenchmark/|tests/|poetry\.lock$)
+        files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
         language: system
         pass_filenames: false

@@ -1,49 +1,49 @@
 {
   "folders": [
     {
-      "name": "autogpt",
-      "path": "../autogpt"
+      "name": "autogpt_server",
+      "path": "../autogpt_platform/autogpt_server"
     },
     {
-      "name": "benchmark",
-      "path": "../benchmark"
+      "name": "autogpt_builder",
+      "path": "../autogpt_platform/autogpt_builder"
     },
+    {
+      "name": "market",
+      "path": "../autogpt_platform/market"
+    },
+    {
+      "name": "lib",
+      "path": "../autogpt_platform/autogpt_libs"
+    },
+    {
+      "name": "infra",
+      "path": "../autogpt_platform/infra"
+    },
     {
       "name": "docs",
       "path": "../docs"
     },
-    {
-      "name": "forge",
-      "path": "../forge"
-    },
-    {
-      "name": "frontend",
-      "path": "../frontend"
-    },
-    {
-      "name": "autogpt_server",
-      "path": "../rnd/autogpt_server"
-    },
-    {
-      "name": "autogpt_builder",
-      "path": "../rnd/autogpt_builder"
-    },
-    {
-      "name": "market",
-      "path": "../rnd/market"
-    },
-    {
-      "name": "lib",
-      "path": "../rnd/autogpt_libs"
-    },
-    {
-      "name": "infra",
-      "path": "../rnd/infra"
-    },
     {
       "name": "[root]",
       "path": ".."
-    }
+    },
+    {
+      "name": "classic - autogpt",
+      "path": "../classic/original_autogpt"
+    },
+    {
+      "name": "classic - benchmark",
+      "path": "../classic/benchmark"
+    },
+    {
+      "name": "classic - forge",
+      "path": "../classic/forge"
+    },
+    {
+      "name": "classic - frontend",
+      "path": "../classic/frontend"
+    },
   ],
   "settings": {
     "python.analysis.typeCheckingMode": "basic"

@@ -55,15 +55,16 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
 ## 🤖 AutoGPT Classic
 > Below is information about the classic version of AutoGPT.
-**🛠️ [Build your own Agent - Quickstart](FORGE-QUICKSTART.md)**
+**🛠️ [Build your own Agent - Quickstart](classic/FORGE-QUICKSTART.md)**
 ### 🏗️ Forge
-**Forge your own agent!** &ndash; Forge is a ready-to-go template for your agent application. All the boilerplate code is already handled, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from the [`forge.sdk`](/forge/forge/sdk) can also be used individually to speed up development and reduce boilerplate in your agent project.
+**Forge your own agent!** &ndash; Forge is a ready-to-go toolkit to build your own agent application. It handles most of the boilerplate code, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from [`forge`](/classic/forge/) can also be used individually to speed up development and reduce boilerplate in your agent project.
-🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/forge/tutorials/001_getting_started.md) &ndash;
+🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/forge/tutorials/001_getting_started.md) &ndash;
 This guide will walk you through the process of creating your own agent and using the benchmark and user interface.
-📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/forge) about Forge
+📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/forge) about Forge
 ### 🎯 Benchmark
@@ -83,7 +84,7 @@ This guide will walk you through the process of creating your own agent and usin
 The frontend works out-of-the-box with all agents in the repo. Just use the [CLI] to run your agent of choice!
-📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/frontend) about the Frontend
+📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/frontend) about the Frontend
 ### ⌨️ CLI

@@ -1,3 +0,0 @@
-{
-  "python.analysis.typeCheckingMode": "basic",
-}

@@ -14,12 +14,12 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
 To run the AutoGPT Platform, follow these steps:
 1. Clone this repository to your local machine.
-2. Navigate to rnd/supabase
+2. Navigate to autogpt_platform/supabase
 3. Run the following command:
    ```
    git submodule update --init --recursive
    ```
-4. Navigate back to rnd (cd ..)
+4. Navigate back to autogpt_platform (cd ..)
 5. Run the following command:
    ```
    cp supabase/docker/.env.example .env
@@ -32,7 +32,7 @@ To run the AutoGPT Platform, follow these steps:
    ```
    This command will start all the necessary backend services defined in the `docker-compose.combined.yml` file in detached mode.
-7. Navigate to rnd/autogpt_builder.
+7. Navigate to autogpt_platform/frontend.
 8. Run the following command:
    ```
    cp .env.example .env.local

@@ -24,14 +24,14 @@ RUN pip3 install --upgrade pip setuptools
 RUN pip3 install poetry
 # Copy and install dependencies
-COPY rnd/autogpt_libs /app/rnd/autogpt_libs
+COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
-COPY rnd/autogpt_server/poetry.lock rnd/autogpt_server/pyproject.toml /app/rnd/autogpt_server/
+COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
-WORKDIR /app/rnd/autogpt_server
+WORKDIR /app/autogpt_platform/backend
 RUN poetry config virtualenvs.create false \
     && poetry install --no-interaction --no-ansi
 # Generate Prisma client
-COPY rnd/autogpt_server/schema.prisma ./
+COPY autogpt_platform/backend/schema.prisma ./
 RUN poetry config virtualenvs.create false \
     && poetry run prisma generate
@@ -59,21 +59,20 @@ COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-pyth
 ENV PATH="/app/.venv/bin:$PATH"
-RUN mkdir -p /app/rnd/autogpt_libs
+RUN mkdir -p /app/autogpt_platform/autogpt_libs
-RUN mkdir -p /app/rnd/autogpt_server
+RUN mkdir -p /app/autogpt_platform/backend
-COPY rnd/autogpt_libs /app/rnd/autogpt_libs
+COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
-COPY rnd/autogpt_server/poetry.lock rnd/autogpt_server/pyproject.toml /app/rnd/autogpt_server/
+COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
-WORKDIR /app/rnd/autogpt_server
+WORKDIR /app/autogpt_platform/backend
 FROM server_dependencies AS server
-COPY rnd/autogpt_server /app/rnd/autogpt_server
+COPY autogpt_platform/backend /app/autogpt_platform/backend
 ENV DATABASE_URL=""
 ENV PORT=8000
 CMD ["poetry", "run", "rest"]

@@ -48,19 +48,19 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
    > ```
    >
    > Then run the generation again. The path *should* look something like this:
-   > `<some path>/pypoetry/virtualenvs/autogpt-server-TQIRSwR6-py3.12/bin/prisma`
+   > `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`
 6. Run the postgres database from the /rnd folder
    ```sh
-   cd rnd/
+   cd autogpt_platform/
    docker compose up -d
    ```
-7. Run the migrations (from the autogpt_server folder)
+7. Run the migrations (from the backend folder)
    ```sh
-   cd ../autogpt_server
+   cd ../backend
    prisma migrate dev --schema postgres/schema.prisma
    ```

@@ -53,7 +53,7 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
    > ```
    >
    > Then run the generation again. The path *should* look something like this:
-   > `<some path>/pypoetry/virtualenvs/autogpt-server-TQIRSwR6-py3.12/bin/prisma`
+   > `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`
 6. Migrate the database. Be careful because this deletes current data in the database.
@@ -193,7 +193,7 @@ Rest Server Daemon: 8004
 ## Adding a New Agent Block
 To add a new agent block, you need to create a new class that inherits from `Block` and provides the following information:
-* All the block code should live in the `blocks` (`autogpt_server.blocks`) module.
+* All the block code should live in the `blocks` (`backend.blocks`) module.
 * `input_schema`: the schema of the input data, represented by a Pydantic object.
 * `output_schema`: the schema of the output data, represented by a Pydantic object.
 * `run` method: the main logic of the block.
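For orientation, the requirements listed above can be sketched as a minimal Python analogue. The `Block` base class and `WordCountBlock` below are simplified stand-ins, not the platform's actual classes; the real `Block` lives in `backend.data.block` and its schemas are Pydantic objects rather than plain dicts.

```python
from abc import ABC, abstractmethod
from typing import Any, Generator

# Simplified stand-in for backend.data.block.Block (assumption: the real
# class is Pydantic-based and adds IDs, categories, and test fixtures).
class Block(ABC):
    input_schema: dict
    output_schema: dict

    @abstractmethod
    def run(self, input_data: dict) -> Generator[tuple[str, Any], None, None]:
        ...

# A hypothetical block (not from the source): counts words in a text input.
class WordCountBlock(Block):
    input_schema = {"text": str}
    output_schema = {"word_count": int}

    def run(self, input_data: dict) -> Generator[tuple[str, Any], None, None]:
        # Blocks yield (output_name, data) tuples rather than returning a value.
        yield "word_count", len(input_data["text"].split())

outputs = dict(WordCountBlock().run({"text": "hello agent world"}))
```

The generator-based `run` lets one block emit multiple named outputs incrementally, which is what lets the executor stream partial results downstream.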

@@ -1,7 +1,7 @@
 from typing import TYPE_CHECKING
 if TYPE_CHECKING:
-    from autogpt_server.util.process import AppProcess
+    from backend.util.process import AppProcess
 def run_processes(*processes: "AppProcess", **kwargs):
@@ -24,8 +24,8 @@ def main(**kwargs):
     Run all the processes required for the AutoGPT-server (REST and WebSocket APIs).
     """
-    from autogpt_server.executor import ExecutionManager, ExecutionScheduler
+    from backend.executor import ExecutionManager, ExecutionScheduler
-    from autogpt_server.server import AgentServer, WebsocketServer
+    from backend.server import AgentServer, WebsocketServer
     run_processes(
         ExecutionManager(),
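The `run_processes(...)` shape above can be sketched as a start-then-join fan-out. This is a portable sketch only: the platform runs each service (REST API, WebSocket API, executor) as a separate OS process via `AppProcess`, while threads are used here purely to keep the example self-contained; `run_services`, `rest_api`, and `ws_api` are hypothetical names.

```python
import queue
import threading
from typing import Callable

def run_services(*services: Callable[[queue.Queue], None]) -> list[str]:
    """Start every service concurrently, wait for all, collect their statuses."""
    results: queue.Queue = queue.Queue()
    workers = [threading.Thread(target=svc, args=(results,)) for svc in services]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results.get() for _ in workers)

def rest_api(out: queue.Queue) -> None:
    out.put("rest_api ready")

def ws_api(out: queue.Queue) -> None:
    out.put("ws_api ready")

statuses = run_services(rest_api, ws_api)
```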

@@ -4,9 +4,9 @@ import os
 import re
 from pathlib import Path
-from autogpt_server.data.block import Block
+from backend.data.block import Block
-# Dynamically load all modules under autogpt_server.blocks
+# Dynamically load all modules under backend.blocks
 AVAILABLE_MODULES = []
 current_dir = os.path.dirname(__file__)
 modules = glob.glob(os.path.join(current_dir, "*.py"))
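The glob-based discovery above can be illustrated in isolation: list every `.py` file in a package directory and derive importable module names from the filenames. The stdlib `json` package directory stands in here for `backend/blocks/`, since the latter is not available outside the repo.

```python
import glob
import os

import json  # any installed package works; json is always present

package_dir = os.path.dirname(json.__file__)
module_files = glob.glob(os.path.join(package_dir, "*.py"))
module_names = [
    os.path.basename(f)[:-3]                      # strip the ".py" suffix
    for f in module_files
    if not os.path.basename(f).startswith("__")   # skip __init__.py etc.
]
```

Each discovered name can then be imported with `importlib.import_module(f"backend.blocks.{name}")`, which is how a new block file becomes available without being registered anywhere by hand.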

@@ -4,15 +4,15 @@ from typing import Any, List
 from jinja2 import BaseLoader, Environment
 from pydantic import Field
-from autogpt_server.data.block import (
+from backend.data.block import (
     Block,
     BlockCategory,
     BlockOutput,
     BlockSchema,
     BlockUIType,
 )
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
-from autogpt_server.util.mock import MockObject
+from backend.util.mock import MockObject
 jinja = Environment(loader=BaseLoader())
@@ -85,7 +85,6 @@ class PrintToConsoleBlock(Block):
 class FindInDictionaryBlock(Block):
     class Input(BlockSchema):
         input: Any = Field(description="Dictionary to lookup from")
         key: str | int = Field(description="Key to lookup in the dictionary")
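Given the `Input` schema above (an `Any` container and a `str | int` key), the lookup logic plausibly looks like the following. This is a hypothetical simplification, not the block's actual body; the real block routes hits and misses to separate named outputs.

```python
from typing import Any

def find_in_collection(obj: Any, key: "str | int") -> tuple[str, Any]:
    """Return ("output", value) on a hit, ("missing", key) otherwise."""
    if isinstance(obj, dict) and key in obj:
        return "output", obj[key]
    # Allow integer keys to index into lists, including negative indices.
    if isinstance(obj, list) and isinstance(key, int) and -len(obj) <= key < len(obj):
        return "output", obj[key]
    return "missing", key
```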

@@ -2,7 +2,7 @@ import os
 import re
 from typing import Type
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
 class BlockInstallationBlock(Block):
@@ -48,7 +48,7 @@ class BlockInstallationBlock(Block):
         block_dir = os.path.dirname(__file__)
         file_path = f"{block_dir}/{file_name}.py"
-        module_name = f"autogpt_server.blocks.{file_name}"
+        module_name = f"backend.blocks.{file_name}"
         with open(file_path, "w") as f:
             f.write(code)
@@ -57,7 +57,7 @@ class BlockInstallationBlock(Block):
             block_class: Type[Block] = getattr(module, class_name)
             block = block_class()
-            from autogpt_server.util.test import execute_block_test
+            from backend.util.test import execute_block_test
             execute_block_test(block)
             yield "success", "Block installed successfully."
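The install step above, write generated code to a file and then import it as a module, can be sketched in isolation with the stdlib. A temp directory and a trivial one-function module stand in for the generated block code; the real block derives `module_name` from the blocks package instead.

```python
import importlib.util
import os
import tempfile

# Hypothetical stand-in for the generated block code written by the block.
code = "def installed() -> str:\n    return 'ok'\n"

with tempfile.TemporaryDirectory() as block_dir:
    file_path = os.path.join(block_dir, "my_block.py")
    with open(file_path, "w") as f:
        f.write(code)

    # Load the freshly written file as a module, mirroring the install flow.
    spec = importlib.util.spec_from_file_location("my_block", file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the module's top-level code
    result = module.installed()
```

Executing arbitrary generated code this way is inherently unsafe, which is presumably why the real block runs `execute_block_test` before reporting success.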

@@ -1,8 +1,8 @@
 from enum import Enum
 from typing import Any
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class ComparisonOperator(Enum):

@@ -1,5 +1,5 @@
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import ContributorDetails
+from backend.data.model import ContributorDetails
 class ReadCsvBlock(Block):

@@ -4,8 +4,8 @@ import aiohttp
 import discord
 from pydantic import Field
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SecretField
+from backend.data.model import BlockSecret, SecretField
 class ReadDiscordMessagesBlock(Block):

@@ -4,8 +4,8 @@ from email.mime.text import MIMEText
 from pydantic import BaseModel, ConfigDict, Field
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
+from backend.data.model import BlockSecret, SchemaField, SecretField
 class EmailCredentials(BaseModel):

@@ -3,7 +3,7 @@ from enum import Enum
 import requests
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
 class HttpMethod(Enum):

@@ -1,7 +1,7 @@
 from typing import Any, List, Tuple
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class ListIteratorBlock(Block):

@@ -8,9 +8,9 @@ import ollama
 import openai
 from groq import Groq
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
+from backend.data.model import BlockSecret, SchemaField, SecretField
-from autogpt_server.util import json
+from backend.util import json
 logger = logging.getLogger(__name__)

@@ -2,8 +2,8 @@ import operator
 from enum import Enum
 from typing import Any
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class Operation(Enum):

@@ -2,8 +2,8 @@ from typing import List
 import requests
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
+from backend.data.model import BlockSecret, SchemaField, SecretField
 class PublishToMediumBlock(Block):

@@ -4,9 +4,9 @@ from typing import Iterator
 import praw
 from pydantic import BaseModel, ConfigDict, Field
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SecretField
+from backend.data.model import BlockSecret, SecretField
-from autogpt_server.util.mock import MockObject
+from backend.util.mock import MockObject
 class RedditCredentials(BaseModel):

@@ -5,8 +5,8 @@ from typing import Any
 import feedparser
 import pydantic
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class RSSEntry(pydantic.BaseModel):

@@ -3,8 +3,8 @@ from collections import defaultdict
 from enum import Enum
 from typing import Any, Dict, List, Optional, Union
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class SamplingMethod(str, Enum):

@@ -3,8 +3,8 @@ from urllib.parse import quote
 import requests
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SecretField
+from backend.data.model import BlockSecret, SecretField
 class GetRequest:

@@ -3,8 +3,8 @@ from typing import Literal
 import requests
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
+from backend.data.model import BlockSecret, SchemaField, SecretField
 class CreateTalkingAvatarVideoBlock(Block):

@@ -4,8 +4,8 @@ from typing import Any
 from jinja2 import BaseLoader, Environment
 from pydantic import Field
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.util import json
+from backend.util import json
 jinja = Environment(loader=BaseLoader())

@@ -2,7 +2,7 @@ import time
 from datetime import datetime, timedelta
 from typing import Any, Union
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
 class GetCurrentTimeBlock(Block):
@@ -130,7 +130,6 @@ class CountdownTimerBlock(Block):
         )
     def run(self, input_data: Input) -> BlockOutput:
         seconds = int(input_data.seconds)
         minutes = int(input_data.minutes)
         hours = int(input_data.hours)
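After the coercions above, the countdown duration would be the usual hours/minutes/seconds sum. The helper name is hypothetical (the real block presumably inlines this before sleeping):

```python
def total_seconds(hours: int, minutes: int, seconds: int) -> int:
    # 1 hour = 3600 s, 1 minute = 60 s
    return hours * 3600 + minutes * 60 + seconds
```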

@@ -3,8 +3,8 @@ from urllib.parse import parse_qs, urlparse
 from youtube_transcript_api import YouTubeTranscriptApi
 from youtube_transcript_api.formatters import TextFormatter
-from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
-from autogpt_server.data.model import SchemaField
+from backend.data.model import SchemaField
 class TranscribeYouTubeVideoBlock(Block):

@@ -8,8 +8,8 @@ import pathlib
 import click
 import psutil
-from autogpt_server import app
+from backend import app
-from autogpt_server.util.process import AppProcess
+from backend.util.process import AppProcess
 def get_pid_path() -> pathlib.Path:
@@ -109,7 +109,7 @@ def reddit(server_address: str):
     """
     import requests
-    from autogpt_server.usecases.reddit_marketing import create_test_graph
+    from backend.usecases.reddit_marketing import create_test_graph
     test_graph = create_test_graph()
     url = f"{server_address}/graphs"
@@ -130,7 +130,7 @@ def populate_db(server_address: str):
     """
     import requests
-    from autogpt_server.usecases.sample import create_test_graph
+    from backend.usecases.sample import create_test_graph
     test_graph = create_test_graph()
     url = f"{server_address}/graphs"
@@ -166,7 +166,7 @@ def graph(server_address: str):
     """
     import requests
-    from autogpt_server.usecases.sample import create_test_graph
+    from backend.usecases.sample import create_test_graph
     url = f"{server_address}/graphs"
     headers = {"Content-Type": "application/json"}
@@ -219,7 +219,7 @@ def websocket(server_address: str, graph_id: str):
     import websockets
-    from autogpt_server.server.ws_api import ExecutionSubscription, Methods, WsMessage
+    from backend.server.ws_api import ExecutionSubscription, Methods, WsMessage
     async def send_message(server_address: str):
         uri = f"ws://{server_address}"

View File

@@ -7,8 +7,8 @@ import jsonschema
 from prisma.models import AgentBlock
 from pydantic import BaseModel
-from autogpt_server.data.model import ContributorDetails
-from autogpt_server.util import json
+from backend.data.model import ContributorDetails
+from backend.util import json
 BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
 BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
@@ -225,7 +225,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
 def get_blocks() -> dict[str, Block]:
-    from autogpt_server.blocks import AVAILABLE_BLOCKS # noqa: E402
+    from backend.blocks import AVAILABLE_BLOCKS # noqa: E402
     return AVAILABLE_BLOCKS
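The `get_blocks()` hunk above resolves the block registry through a local, call-time import — a common way to break the circular dependency between a registry module and the block classes it collects. A minimal, self-contained sketch of the idea (class and function names here are illustrative, not the actual `backend` API):

```python
class Block:
    """Stand-in for the real Block base class."""
    name = "base"

class HelloBlock(Block):
    name = "hello"

def get_blocks() -> dict[str, type[Block]]:
    # The real get_blocks() defers `from backend.blocks import
    # AVAILABLE_BLOCKS` to call time, so the registry and the block
    # modules can import each other's base types without a cycle.
    # Here we enumerate subclasses to keep the sketch runnable.
    return {cls.name: cls for cls in Block.__subclasses__()}
```

Calling `get_blocks()` after all block modules have been imported yields the full name-to-class mapping.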

View File

@@ -9,7 +9,7 @@ from prisma.enums import UserBlockCreditType
 from prisma.models import UserBlockCredit
 from pydantic import BaseModel
-from autogpt_server.blocks.llm import (
+from backend.blocks.llm import (
     MODEL_METADATA,
     AIConversationBlock,
     AIStructuredResponseGeneratorBlock,
@@ -17,9 +17,9 @@ from autogpt_server.blocks.llm import (
     AITextSummarizerBlock,
     LlmModel,
 )
-from autogpt_server.blocks.talking_head import CreateTalkingAvatarVideoBlock
-from autogpt_server.data.block import Block, BlockInput
-from autogpt_server.util.settings import Config
+from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
+from backend.data.block import Block, BlockInput
+from backend.util.settings import Config
 class BlockCostType(str, Enum):

View File

@@ -16,8 +16,8 @@ from prisma.types import (
 )
 from pydantic import BaseModel
-from autogpt_server.data.block import BlockData, BlockInput, CompletedBlockOutput
-from autogpt_server.util import json, mock
+from backend.data.block import BlockData, BlockInput, CompletedBlockOutput
+from backend.util import json, mock
 class GraphExecution(BaseModel):

View File

@@ -9,11 +9,11 @@ from prisma.models import AgentGraph, AgentNode, AgentNodeLink
 from pydantic import BaseModel, PrivateAttr
 from pydantic_core import PydanticUndefinedType
-from autogpt_server.blocks.basic import AgentInputBlock, AgentOutputBlock
-from autogpt_server.data.block import BlockInput, get_block, get_blocks
-from autogpt_server.data.db import BaseDbModel, transaction
-from autogpt_server.data.user import DEFAULT_USER_ID
-from autogpt_server.util import json
+from backend.blocks.basic import AgentInputBlock, AgentOutputBlock
+from backend.data.block import BlockInput, get_block, get_blocks
+from backend.data.db import BaseDbModel, transaction
+from backend.data.user import DEFAULT_USER_ID
+from backend.util import json
 logger = logging.getLogger(__name__)
@@ -274,7 +274,6 @@ class Graph(GraphMeta):
                     PydanticUndefinedType,
                 )
             ):
                 input_schema.append(
                     InputSchemaItem(
                         node_id=node.id,

View File

@@ -11,7 +11,7 @@ from pydantic_core import (
     core_schema,
 )
-from autogpt_server.util.settings import Secrets
+from backend.util.settings import Secrets
 T = TypeVar("T")
 logger = logging.getLogger(__name__)

View File

@@ -6,7 +6,7 @@ from datetime import datetime
 from redis.asyncio import Redis
-from autogpt_server.data.execution import ExecutionResult
+from backend.data.execution import ExecutionResult
 logger = logging.getLogger(__name__)
@@ -37,7 +37,6 @@ class AsyncEventQueue(ABC):
 class AsyncRedisEventQueue(AsyncEventQueue):
     def __init__(self):
         self.host = os.getenv("REDIS_HOST", "localhost")
         self.port = int(os.getenv("REDIS_PORT", "6379"))

View File

@@ -3,9 +3,9 @@ from typing import Optional
 from prisma.models import AgentGraphExecutionSchedule
-from autogpt_server.data.block import BlockInput
-from autogpt_server.data.db import BaseDbModel
-from autogpt_server.util import json
+from backend.data.block import BlockInput
+from backend.data.db import BaseDbModel
+from backend.util import json
 class ExecutionSchedule(BaseDbModel):

View File

@@ -3,14 +3,13 @@ from typing import Optional
 from fastapi import HTTPException
 from prisma.models import User
-from autogpt_server.data.db import prisma
+from backend.data.db import prisma
 DEFAULT_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
 DEFAULT_EMAIL = "default@example.com"
 async def get_or_create_user(user_data: dict) -> User:
     user_id = user_data.get("sub")
     if not user_id:
         raise HTTPException(status_code=401, detail="User ID not found in token")
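The `get_or_create_user` hunk above ties each request to the JWT's standard `sub` claim and rejects tokens without one. The validation step in isolation (the function name and `PermissionError` are illustrative; the real code raises FastAPI's `HTTPException` with status 401):

```python
def extract_user_id(claims: dict) -> str:
    # JWTs carry the user identifier in the standard "sub" claim;
    # a token without it cannot be tied to a user, so reject it.
    user_id = claims.get("sub")
    if not user_id:
        raise PermissionError("User ID not found in token")
    return user_id
```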

View File

@@ -1,5 +1,5 @@
-from autogpt_server.app import run_processes
-from autogpt_server.executor import ExecutionManager
+from backend.app import run_processes
+from backend.executor import ExecutionManager
 def main():

View File

@@ -12,13 +12,13 @@ from multiprocessing.pool import AsyncResult, Pool
 from typing import TYPE_CHECKING, Any, Coroutine, Generator, TypeVar
 if TYPE_CHECKING:
-    from autogpt_server.server.rest_api import AgentServer
-from autogpt_server.blocks.basic import AgentInputBlock
-from autogpt_server.data import db
-from autogpt_server.data.block import Block, BlockData, BlockInput, get_block
-from autogpt_server.data.credit import get_user_credit_model
-from autogpt_server.data.execution import (
+    from backend.server.rest_api import AgentServer
+from backend.blocks.basic import AgentInputBlock
+from backend.data import db
+from backend.data.block import Block, BlockData, BlockInput, get_block
+from backend.data.credit import get_user_credit_model
+from backend.data.execution import (
     ExecutionQueue,
     ExecutionResult,
     ExecutionStatus,
@@ -36,13 +36,13 @@ from autogpt_server.data.execution import (
     upsert_execution_input,
     upsert_execution_output,
 )
-from autogpt_server.data.graph import Graph, Link, Node, get_graph, get_node
-from autogpt_server.util import json
-from autogpt_server.util.decorator import error_logged, time_measured
-from autogpt_server.util.logging import configure_logging
-from autogpt_server.util.service import AppService, expose, get_service_client
-from autogpt_server.util.settings import Config
-from autogpt_server.util.type import convert
+from backend.data.graph import Graph, Link, Node, get_graph, get_node
+from backend.util import json
+from backend.util.decorator import error_logged, time_measured
+from backend.util.logging import configure_logging
+from backend.util.service import AppService, expose, get_service_client
+from backend.util.settings import Config
+from backend.util.type import convert
 logger = logging.getLogger(__name__)
@@ -382,7 +382,7 @@ def validate_exec(
 def get_agent_server_client() -> "AgentServer":
-    from autogpt_server.server.rest_api import AgentServer
+    from backend.server.rest_api import AgentServer
     return get_service_client(AgentServer, Config().agent_server_port)

View File

@@ -5,11 +5,11 @@ from datetime import datetime
 from apscheduler.schedulers.background import BackgroundScheduler
 from apscheduler.triggers.cron import CronTrigger
-from autogpt_server.data import schedule as model
-from autogpt_server.data.block import BlockInput
-from autogpt_server.executor.manager import ExecutionManager
-from autogpt_server.util.service import AppService, expose, get_service_client
-from autogpt_server.util.settings import Config
+from backend.data import schedule as model
+from backend.data.block import BlockInput
+from backend.executor.manager import ExecutionManager
+from backend.util.service import AppService, expose, get_service_client
+from backend.util.settings import Config
 logger = logging.getLogger(__name__)
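Every hunk in this diff is the same mechanical rewrite: the `autogpt_server` package becomes `backend`. A rename of this size is usually scripted rather than edited by hand; a minimal sketch of such an import rewriter (the regex is illustrative — this commit does not show what migration tooling, if any, was used):

```python
import re

# Match `import autogpt_server...` / `from autogpt_server...` statements,
# including indented function-local imports, and swap in `backend`.
# The lookahead keeps names like `autogpt_server_extra` untouched.
IMPORT_RE = re.compile(
    r"^(\s*(?:from|import)\s+)autogpt_server(?=[.\s])",
    re.MULTILINE,
)

def rename_imports(source: str) -> str:
    """Rewrite import statements in one file's source text."""
    return IMPORT_RE.sub(r"\1backend", source)
```

Run over each `.py` file under the old package, a statement such as `from autogpt_server.util import json` becomes `from backend.util import json`, while unrelated imports are left alone.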

Some files were not shown because too many files have changed in this diff.