AutoGPT Agent Server
This is an initial project for the next generation of agent execution: the AutoGPT Agent Server. The agent server will enable the creation of composite multi-agent systems that use AutoGPT agents and other non-agent components as their primitives.
Docs
You can access the docs for the AutoGPT Agent Server here.
Setup
We use Poetry to manage the dependencies. To set up the project, follow these steps inside this directory:
- Install Poetry:
  pip install poetry

- Configure Poetry to use .venv in your project directory:
  poetry config virtualenvs.in-project true

- Enter the Poetry shell:
  poetry shell

- Install dependencies:
  poetry install

- Copy .env.example to .env:
  cp .env.example .env

- Generate the Prisma client:
  poetry run prisma generate

  If Prisma generates the client for the global Python installation instead of the virtual environment, the current mitigation is to uninstall the global Prisma package:
  pip uninstall prisma
  Then run the generation again. The resulting path should look something like this:
  <some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma

- Migrate the database. Be careful: this deletes the current data in the database.
  docker compose up db redis -d
  poetry run prisma migrate deploy
Running The Server
Starting the server without Docker
Run the following command to start the app:
poetry run app
Starting the server with Docker
Run the following command to build the Docker images:
docker compose build
Run the following command to run the app:
docker compose up
In another terminal, run the following to automatically rebuild when code changes:
docker compose watch
Run the following command to shut down:
docker compose down
If you run into issues with dangling orphan containers, try:
docker compose down --volumes --remove-orphans && docker compose up --force-recreate --renew-anon-volumes --remove-orphans
Testing
To run the tests:
poetry run test
Development
Formatting & Linting
An auto-formatter and a linter are set up in the project. To run them:
Install:
poetry install --with dev
Format the code:
poetry run format
Lint the code:
poetry run lint
Project Outline
The current project has the following main modules:
blocks
This module stores all the Agent Blocks, which are reusable components to build a graph that represents the agent's behavior.
data
This module stores the logical model that is persisted in the database. It abstracts the database operations into functions that can be called by the service layer. Any code that interacts with Prisma objects or the database should reside in this module. The main models are:
block: anything related to the blocks used in the graph
execution: anything related to the execution of the graph
graph: anything related to the graph, its nodes, and their relations
execution
This module stores the business logic of executing the graph. It currently has the following main modules:
manager: A service that consumes the queue of graph executions and executes the graph. It contains both pieces of logic.
scheduler: A service that triggers scheduled graph executions based on a cron expression. It pushes an execution request to the manager.
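The manager's consume-and-execute loop can be sketched with a standard-library queue. This is an illustrative toy, not the real service: the `run_manager` helper, the `None` shutdown sentinel, and the request strings are all hypothetical.

```python
from queue import Queue

def run_manager(execution_queue: Queue, results: list) -> None:
    """Toy consumer loop: pull execution requests until a None sentinel."""
    while True:
        request = execution_queue.get()
        if request is None:  # sentinel value: shut the loop down
            break
        # The real manager would execute the graph's nodes here;
        # this sketch just records that the request was handled.
        results.append(f"executed {request}")

# Usage: enqueue two requests, then the shutdown sentinel.
queue: Queue = Queue()
for item in ("graph-1", "graph-2", None):
    queue.put(item)

handled: list = []
run_manager(queue, handled)
```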
server
This module stores the logic for the server API.
It contains all the logic used for the API that allows the client to create, execute, and monitor the graph and its execution.
This API service interacts with other services like those defined in manager and scheduler.
utils
This module stores utility functions that are used across the project. Currently, it has two main modules:
process: A module that contains the logic to spawn a new process.
service: A module that serves as a parent class for all the services in the project.
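The idea behind the process module, running work in a separate OS process, can be illustrated with a standard-library-only sketch. The helper name and approach here are hypothetical; the real module has its own spawning logic.

```python
import subprocess
import sys

def run_in_new_process(expression: str) -> str:
    """Evaluate a Python expression in a freshly spawned interpreter
    and return its printed result as a string."""
    completed = subprocess.run(
        [sys.executable, "-c", f"print({expression})"],
        capture_output=True,
        text=True,
        check=True,  # raise if the child process fails
    )
    return completed.stdout.strip()
```

For example, run_in_new_process("6 * 7") evaluates the expression in a child interpreter and returns "42".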
Service Communication
Currently, there are only 3 active services:
- AgentServer (the API, defined in server.py)
- ExecutionManager (the executor, defined in manager.py)
- ExecutionScheduler (the scheduler, defined in scheduler.py)
The services run in independent Python processes and communicate through IPC.
A communication layer (service.py) is created to decouple the communication library from the implementation.
Currently, the IPC is done using Pyro5 and abstracted in a way that allows a function decorated with @expose to be called from a different process.
By default, the daemons run on the following ports:
- Execution Manager Daemon: 8002
- Execution Scheduler Daemon: 8003
- Rest Server Daemon: 8004
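The effect of the @expose decorator can be illustrated with a toy standard-library version: it marks which methods may be called from another process, while everything else stays process-local. All names below are hypothetical; the real decorator in service.py is backed by Pyro5.

```python
def expose(func):
    """Toy stand-in for the real @expose: tag a method as remotely callable."""
    func._exposed = True
    return func

class SchedulerService:
    @expose
    def add_schedule(self, graph_id: str, cron: str) -> str:
        # Remotely callable: other services may invoke this over IPC.
        return f"scheduled {graph_id} with '{cron}'"

    def _reload_jobs(self) -> None:
        # Not exposed: only callable from within this process.
        ...

def exposed_methods(service) -> list:
    """List the methods a remote caller would be allowed to invoke."""
    return sorted(
        name for name in dir(service)
        if getattr(getattr(service, name), "_exposed", False)
    )
```

The communication layer only needs to inspect such a marker to decide what to publish over IPC, which keeps the service code independent of the underlying library.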
Adding a New Agent Block
To add a new agent block, you need to create a new class that inherits from Block and provides the following information:
- All the block code should live in the blocks (backend.blocks) module.
- input_schema: the schema of the input data, represented by a Pydantic object.
- output_schema: the schema of the output data, represented by a Pydantic object.
- run method: the main logic of the block.
- test_input & test_output: the sample input and output data for the block, which will be used to auto-test the block.
- You can mock the functions declared in the block using the test_mock field for your unit tests.
- Once you finish creating the block, you can test it by running pytest -s test/block/test_block.py.
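To make the required pieces concrete, here is a standard-library-only sketch of the shape a block takes. It deliberately does not inherit from the real Block base class and uses plain dicts instead of Pydantic models; the class and field values are hypothetical, so see backend.blocks for real examples.

```python
from typing import Any, Dict, Generator, Tuple

class GreetingBlock:
    """Illustrative stand-in for a Block subclass (not the real base class)."""

    # In a real block these are Pydantic models describing the data shapes.
    input_schema = {"name": str}
    output_schema = {"greeting": str}

    # Sample data used to auto-test the block.
    test_input = {"name": "Ada"}
    test_output = [("greeting", "Hello, Ada!")]

    def run(self, input_data: Dict[str, Any]) -> Generator[Tuple[str, Any], None, None]:
        # Blocks yield (output_name, value) pairs instead of returning once,
        # so a single run can emit multiple outputs over time.
        yield "greeting", f"Hello, {input_data['name']}!"
```

Running the block over its own test_input and comparing against test_output mirrors what the auto-test does for real blocks.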