- Deleted the InputValidationBlock class from checking_input_validation.py, which was previously used for input validation with required and dependent fields.
- Updated InputValidationBlock to include default values for required fields.
- Implemented validation logic in manager.py to ensure dependent fields are validated based on their dependencies.
- Added 'depends_on' parameter to SchemaField in model.py to specify field dependencies.
- Updated useAgentGraph hook to validate input fields based on their dependencies, ensuring required fields are set when dependent fields are filled (see the sketch after this list).
- Modified BlockIOSubSchemaMeta to include 'depends_on' as an optional property.
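For illustration, a minimal standalone sketch of the dependency rule described above (function and field names are hypothetical; the real logic lives in manager.py and the useAgentGraph hook):
```python
def validate_dependencies(values: dict, depends_on: dict[str, list[str]]) -> list[str]:
    """If a dependent field is filled, every field it depends on must be filled too."""
    errors = []
    for field, deps in depends_on.items():
        if values.get(field) not in (None, ""):  # dependent field is filled
            for dep in deps:
                if values.get(dep) in (None, ""):  # ...but its dependency is not
                    errors.append(f"'{field}' requires '{dep}' to be set")
    return errors

# Example: an 'org_id' field that declares depends_on=["api_key"] in its SchemaField
print(validate_dependencies({"org_id": "o-1", "api_key": ""}, {"org_id": ["api_key"]}))
# ["'org_id' requires 'api_key' to be set"]
```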
When calculating the next month, we were not rolling over the month
number, which caused an error in credits.
### Changes 🏗️
Add a modulo when calculating the next month.
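A minimal sketch of the arithmetic (illustrative only; the actual code lives in the credits logic): without the modulo, December rolls into month 13 instead of wrapping to January of the next year.
```python
def next_month(year: int, month: int) -> tuple[int, int]:
    # Work with a 0-based month index so the modulo wraps 12 -> 1 cleanly.
    total = year * 12 + (month - 1) + 1
    return total // 12, total % 12 + 1

assert next_month(2024, 11) == (2024, 12)
assert next_month(2024, 12) == (2025, 1)  # would break without the modulo
```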
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
- Move `autogpt_libs.supabase_integration_credentials_store` into
`backend`
- `.store` -> `backend.integrations.credentials_store`
- `.types` -> added to `backend.data.model`
- Rename `SupabaseIntegrationCredentialsStore` to
`IntegrationCredentialsStore`
We wanted to get a few security things in quickly in #8403 and had to
make some compromises to do so. This picks those up and fixes them.
- Resolves #8540
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
This fix is motivated by an error observed on DB connection failure on
Supabase:
```
2024-11-28 07:45:24,724 INFO [DatabaseManager] Starting...
2024-11-28 07:45:24,726 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection started...
2024-11-28 07:45:24,726 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection completed successfully.
{"is_panic":false,"message":"Can't reach database server at `...pooler.supabase.com:5432`\n\nPlease make sure your database server is running at `....pooler.supabase.com:5432`.","meta":{"database_host":"...pooler.supabase.com","database_port":5432},"error_code":"P1001"}
2024-11-28 07:45:35,153 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
2024-11-28 07:45:36,155 INFO [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection started...
2024-11-28 07:45:36,181 INFO [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection completed successfully.
2024-11-28 07:45:36,183 INFO [PID-18|DatabaseManager|Pyro-2722cd29-4dbd-4cf9-882f-73842658599d] Starting Pyro Service started...
2024-11-28 07:45:36,189 INFO [DatabaseManager] Connected to Pyro; URI = PYRO:DatabaseManager@0.0.0.0:8005
2024-11-28 07:46:28,241 ERROR Error in get_user_integrations: All connection attempts failed
```
Even though
```
2024-11-28 07:45:35,153 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
```
is logged, the Redis connection still proceeds without waiting for the
retry to complete. This was likely caused by Tenacity not fully
awaiting the DB connection acquisition command.
### Changes 🏗️
* Add special handling for the async function to explicitly await the
function execution result on each retry.
* Explicitly raise an exception in `db.connect()` if the DB is still not
connected after the `prisma.connect()` call.
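A sketch of the first change, assuming Tenacity's `AsyncRetrying` API (`connect` and `is_connected` are hypothetical stand-ins for the Prisma calls): the key point is that the coroutine is awaited inside each attempt, so a failed connection actually triggers a retry.
```python
from tenacity import AsyncRetrying, stop_after_attempt, wait_fixed

async def connect_with_retry(connect, is_connected) -> None:
    # connect/is_connected are async callables standing in for prisma.connect()
    # and a post-connect health check.
    async for attempt in AsyncRetrying(stop=stop_after_attempt(5), wait=wait_fixed(1)):
        with attempt:
            await connect()  # explicitly await the async result on every retry
            if not await is_connected():
                # raise instead of proceeding with a dead connection
                raise ConnectionError("DB not connected even after prisma.connect()")
```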
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
This PR adds the first few Hubspot blocks so we can create _real_ sales
and marketing agents.
### Changes 🏗️
Added HubSpot blocks:
- Added auth for HubSpot
- Added Company block
- Added Contact block
- Added Engagement block
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
We've started applying block costs based on the *partial value* of the
`credentials` field, but this logic was never supported.
### Changes 🏗️
* Add partial object matching on the input data filter for evaluating
the block cost (see the sketch below).
* Add missing credentials for `ExtractWebsiteContentBlock`
* Remove the fallback cost on LLM blocks.
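A minimal sketch of partial object matching (the filter shape is assumed): a cost filter matches when every key it specifies agrees with the input data, comparing nested dicts recursively instead of requiring full equality.
```python
def matches_cost_filter(input_data: dict, cost_filter: dict) -> bool:
    for key, expected in cost_filter.items():
        actual = input_data.get(key)
        if isinstance(expected, dict):
            # Recurse so the filter can match on a partial object,
            # e.g. just the discriminating part of `credentials`.
            if not isinstance(actual, dict) or not matches_cost_filter(actual, expected):
                return False
        elif actual != expected:
            return False
    return True

assert matches_cost_filter(
    {"credentials": {"provider": "openai", "id": "c-1"}, "model": "gpt-4"},
    {"credentials": {"provider": "openai"}},
)
```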
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Blocks should be easy to search; the name is sometimes not
straightforward, but the description usually is.
<img width="576" alt="image"
src="https://github.com/user-attachments/assets/0528b019-0ebc-4e6f-8a3c-40323a671b13">
### Changes 🏗️
Make the block description searchable.
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
* docs(backend): Add `--build` to docker command in Getting Started guide (#8762)
* updated URL on README.md
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Add `/integrations/credentials` endpoint which lists all credentials for the authenticated user
- Amend credential fetching logic in front end to fetch all at once instead of per provider
- Resolves #8770
- Resolves (hopefully) #8613
- feat(blocks): Add GitHub Pull Request Trigger block
## feat(platform): Add support for Webhook-triggered blocks
- ⚠️ Add `PLATFORM_BASE_URL` setting
- Add webhook config option and `BlockType.WEBHOOK` to `Block`
- Add check to `Block.__init__` to enforce type and shape of webhook event filter
- Add check to `Block.__init__` to enforce `payload` input on webhook blocks
- Add check to `Block.__init__` to disable webhook blocks if `PLATFORM_BASE_URL` is not set
- Add `Webhook` model + CRUD functions in `backend.data.integrations` to represent webhooks created by our system
- Add `IntegrationWebhook` to DB schema + reference `AgentGraphNode.webhook_id`
- Add `set_node_webhook(..)` in `backend.data.graph`
- Add webhook-related endpoints:
- `POST /integrations/{provider}/webhooks/{webhook_id}/ingress` endpoint, to receive webhook payloads and create graph executions for all associated nodes (see the sketch after this list)
- Add `Node.is_triggered_by_event_type(..)` helper method
- `POST /integrations/{provider}/webhooks/{webhook_id}/ping` endpoint, to allow testing a webhook
- Add `WebhookEvent` + pub/sub functions in `backend.data.integrations`
- Add `backend.integrations.webhooks` module, including:
- `graph_lifecycle_hooks`, e.g. `on_graph_activate(..)`, to handle corresponding webhook creation etc.
- Add calls to these hooks in the graph create/update endpoints
- `BaseWebhooksManager` + `GithubWebhooksManager` to handle creating + registering, removing + deregistering, and retrieving existing webhooks, and validating incoming payloads
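To make the ingress flow concrete, a self-contained sketch (all names here are hypothetical stand-ins for the real `backend.data.integrations` models and executor calls):
```python
from dataclasses import dataclass, field

@dataclass
class Node:
    graph_id: str
    event_filter: set = field(default_factory=set)

    def is_triggered_by_event_type(self, event_type: str) -> bool:
        return event_type in self.event_filter

@dataclass
class Webhook:
    attached_nodes: list

def ingress(webhook: Webhook, event_type: str, payload: dict) -> list:
    """For every node listening to this event type, start a graph
    execution with the payload as the node's `payload` input."""
    started = []
    for node in webhook.attached_nodes:
        if node.is_triggered_by_event_type(event_type):
            started.append((node.graph_id, payload))  # stand-in for creating an execution
    return started

wh = Webhook([Node("g1", {"pull_request.opened"}), Node("g2", {"issues.closed"})])
print(ingress(wh, "pull_request.opened", {"action": "opened"}))  # [('g1', {...})]
```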
## Other improvements
- fix(blocks): Allow having an input and output pin with the same name
- fix(blocks): Add tooltip with description in places where block inputs are rendered without `NodeHandle`
- feat(blocks): Allow hiding inputs (e.g. `payload`) with `SchemaField(hidden=True)`
- fix(frontend): Fix `MultiSelector` component styling
- feat(frontend): Add `AlertDialog` UI component
- feat(frontend): Add `NodeMultiSelectInput` component
- feat(backend/data): Add `NodeModel` with `graph_id`, `graph_version`; `GraphModel` with `user_id`
- Add `make_graph_model(..)` helper function in `backend.data.graph`
- refactor(backend/data): Make `RedisEventQueue` generic and move to `backend.data.execution`
- refactor(frontend): Deduplicate & clean up code for different block types in `generateInputHandles(..)` in `CustomNode`
- dx(backend): Add `MissingConfigError`, `NeedConfirmation` exception
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* fix: hide content except login when not authenticated to prevent errors
* Remove supabase folder from tracking
* Remove supabase folder from Git tracking
* adding git submodule
* adding git submodule
* Discard changes to .gitignore
* only showing AutoGPT logo if user is not present
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
This PR reduces image size by 4.9GB (93%) and reduces uncached build time from ~7m to ~5m20s.
- Use cache mount to prevent Yarn cache from being included in `yarn install` layer
- Leverage Next.js output tracing to generate minimal application w/ tree-shaken dependencies
- Add non-root user following the Next.js reference Dockerfile
* feat: Add Open Router integration credentials
- Added support for Open Router integration credentials in the Supabase integration credentials store.
- Updated the LLM provider field to include "open_router" as a valid provider option.
- Added Open Router API key field to the backend settings.
- Updated the profile page to display the Open Router integration credentials.
- Updated the credentials input and provider components to include Open Router as a provider option.
- Updated the autogpt-server-api types to include "open_router" as a provider name.
- Updated the LLM provider schema to include "open_router" as a valid provider name.
- Added GEMINI_FLASH_1_5_8B as the first Open Router LLM
* Add type ignore to new llm prompt to match the rest of them.
* Update LlmModel with a selection of new OpenRouter models
* format
- Remove `secrets_dir` and other references to `get_secrets_path()`
- Remove unused `get_config_path()`
Follow-up to #8521, which removed the `secrets` dir but not the references to it.
In #8524, the "llm" credentials provider was replaced. There are still entries with `"provider": "llm"` in the system though, and those break if not migrated.
- SQL migration to fix the obvious ones where we know the provider from `credentials.id`
- Non-SQL migration to fix the rest
* fix(backend): Add execution persistence for execution scheduler service
* scheduler REST API cleanup
* Fix to binary
* Adapt UI with new API
* Remove schedule.py
* Remove unused class
* Fix linting
* ci(frontend,backend,classic): update branch from develop to dev
* ci(frontend, infra): enable ci on other tools
* Update classic-autogpt-docker-ci.yml
* fix: don't error if the folder exists
* fix: drop bad test
* Revert "fix: drop bad test"
This reverts commit c478d3cf4c.
* fix: turn off the correct test 👀
* fix: remove more
* Discard changes to .github/workflows/classic-autogpt-ci.yml
* Update classic-autogpt-docker-ci.yml
* Update classic-autogpt-docker-release.yml
* Update classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-forge-ci.yml
* Discard changes to .github/workflows/classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-python-checks.yml
* Discard changes to .github/workflows/repo-pr-label.yml
* Discard changes to .github/workflows/platform-backend-ci.yml
* Update classic-benchmark-ci.yml
* Update classic-frontend-ci.yml
Reverts c707ee9 (#8646)
The problem analysis that led to #8646 contained some errors, so the migration removed in the PR doesn't seem to have been the cause of the problem we were hunting. Also, this migration is an essential part of the security improvement that we made 2 weeks ago.
* add: api generator functions and endpoints
* Rebase onto dev, refactor API manager location, remove suspended key revoke, and update API code for Prisma compatibility
* add: key_manager
* reverting changes to poetry.lock
* add: changing hash mechanism in API Manager
* fixing some simple bugs
* fix linting and adding better error handling
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* feat(block): Add AIImageGeneratorBlock
This commit adds the AIImageGeneratorBlock class to the backend. The AIImageGeneratorBlock is responsible for generating images using various AI models through a unified interface.
* Remove unsupported inputs and add more styles
* Update autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* run format
* Add test mock
* mock client run
* Refactor AIImageGeneratorBlock to use a separate function for running the client
* Update Credential description
* Rename ModelProvider to ImageGenModel
* Add missing block run function
* fix mock
* .
* Refactor AIImageGeneratorBlock to move run_client function inside class
* Fix broken reference to run client and tidy code.
* Refactor AIImageGeneratorBlock to improve code structure and error handling
* Move client into run client instantiation function.
* Refactor AIImageGeneratorBlock to handle output as FileOutput and improve error handling
* run format
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Add condition to hide `credentials` input title in `CustomNode:generateInputHandles`
- Add `title={schema.description}` to `<CredentialsInput>` title element
* Add support for default credentials to unreal block
* Refactor block cost configuration and add new blocks
This commit refactors the block cost configuration file and adds support for new blocks. The changes include:
- Importing the `AIMusicGeneratorBlock`, `JinaEmbeddingBlock`, and `UnrealTextToSpeechBlock` classes
- Updating the `BLOCK_COSTS` dictionary to include costs for the new blocks
These changes enable the usage of the newly introduced blocks.
- Resolves #8635
- fix(frontend): Fix type mismatch of `CredentialsField` schema between frontend and backend
- Fix usages of `credentialsSchema.credentials_provider`
- refactor(backend): Create `CredentialsFieldSchemaExtra` model in backend so it can be mirrored directly in frontend
- Add check to enforce multi-provider `CredentialsField` always has `discriminator`
- dx: Add type checking shortcut `yarn type-check` / `npm run type-check` for frontend
- Change `provider` of default credentials to actual provider names (e.g. `anthropic`), remove `llm` provider
- Add `discriminator` and `discriminator_mapping` to `CredentialsField`, allowing the `useCredentials` hook to filter the credentials input to only the providers matching the selected model (thanks @ntindle for the idea!); e.g. if the user chooses `GPT4_TURBO`, only OpenAI credentials are allowed (see the sketch after this list)
- Choose credentials automatically and hide credentials input on the node completely if there's only one possible option
- Move `getValue` and `parseKeys` to utils
- Add `ANTHROPIC`, `GROQ` and `OLLAMA` to providers in frontend `types.ts`
- Add `hidden` field to credentials that is used for default system keys to hide them in user profile
- Now `provider` field in `CredentialsField` can accept multiple providers as a list
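A rough sketch of the discriminator mechanism (model names and the mapping here are illustrative, not the actual values):
```python
# The CredentialsField declares which input discriminates the provider
# and how its values map to providers:
discriminator = "model"
discriminator_mapping = {"GPT4_TURBO": "openai", "CLAUDE_3_5_SONNET": "anthropic"}

def allowed_credentials(all_credentials: list[dict], input_values: dict) -> list[dict]:
    provider = discriminator_mapping.get(input_values.get(discriminator))
    return [c for c in all_credentials if c["provider"] == provider]

creds = [{"id": "1", "provider": "openai"}, {"id": "2", "provider": "anthropic"}]
print(allowed_credentials(creds, {"model": "GPT4_TURBO"}))  # only the OpenAI entry
```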
-----------------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat(platform): Add AIMusicGeneratorBlock for music generation
* refactor(platform): Refactor AIMusicGeneratorBlock for improved error handling and logging
* refactor(ui): Refactor ContentRenderer to support audio rendering
* format
* Frontend format and lint
- fix naming of hooks
- fix `pyright` hooks (b0rked by repo restructure)
- fix `forge` path (b0rked by faulty replace-all when the repo was restructured)
- fix `black` hook to work on all Python versions
- add `poetry install` hooks
- add `ruff`, `isort`, `pyright`, `pytest`, and `prisma generate` hooks for `backend/`
- add `ruff` and `pyright` hooks for `autogpt_libs/`
* reseal secrets
* update DB url
* rotate prod db
* rotate prod
* rotate server
* builder values
* public env vars in env files
* public env vars in env files
* add pinecone and jina blocks
* update based on comments
* backend updates
* frontend updates
* type hint
* more type hints
* another type hint
* update run signature
* shared jina provider
* fix linting
* lockfile
* remove noqa
* remove noqa
* remove vector db folder
* line
* update pincone credentials provider
* fix imports
* formatting
* update frontend
* Test (#8425)
* h
* Discard changes to autogpt_platform/backend/poetry.lock
* fix: broken dep
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
* Fix issue where marketplace breaks if no agents are returned
* Remove supabase folder from tracking
* adding supabase submodule
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* fix(backend): Fix error pin output not being propagated into the next nodes
* fix(backend): Reverse pyro config refactor
* Revert "fix(backend): Fix error pin output not being propagated into the next nodes"
This reverts commit 2ff50a94ec.
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* ci with workload identity
* temp update
* update name
* wip
* update auth step
* update provider name
* remove audience
* temp set to false
* update registry naming
* update context
* update login
* revert temp updates
* add prod iam and pool
* add release deploy with approval
* use gha default approval behaviour
* add back in release trigger
* add new line
* add prod migrations
* prod migrations without check
* feat(frontend,backend): testing
* feat: testing
* feat(backend): it works for reading email
* feat(backend): more docs on google
* fix(frontend,backend): formatting
* feat(backend): more logging (I know this should be debug)
* feat(backend): make real the default scopes
* feat(backend): tests and linting
* fix: code review prep
* feat: sheets block
* feat: linting
* Update route.ts
* Update autogpt_platform/backend/backend/integrations/oauth/google.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: revert opener change
* feat(frontend): add back opener
required to work on Mac Edge
* feat(frontend): drop typing list import from gmail
* fix: code review comments
* feat: code review changes
* feat: code review changes
* fix(backend): move from asserts to checks so they don't get optimized away in the future
* fix(backend): code review changes
* fix(backend): remove google specific check
* fix: add typing
* fix: only enable google blocks when oauth is configured for google
* fix: errors are real and valid outputs always when output
* fix(backend): add provider detail for debuging scope declines
* Update autogpt_platform/frontend/src/components/integrations/credentials-input.tsx
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix(frontend): enhance with comment; typeof error isn't known, so this is the best way to ensure the stringification will work
* feat: code review change requests
* fix: linting
* fix: reduce error catching
* fix: doc messages in code
* fix: check the correct scopes object 😄
* fix: remove double (and not needed) try catch
* fix: lint
* fix: scopes
* feat: handle the default scopes better
* feat: better email objectification
* feat: process attachments
turns out an email doesn't need a body
* fix: lint
* Update google.py
* Update autogpt_platform/backend/backend/data/block.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: quit trying and except failure
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat: don't allow expired states
* fix: clarify function name and purpose
* feat: code links updates
* feat: additional docs on adding a block
* fix: type hint missing which means the block won't work
* fix: linting
* fix: docs formatting
* Update issues.py
* fix: improve the naming
* fix: formatting
* Update new_blocks.md
* Update new_blocks.md
* feat: better docs on what the args mean
* feat: more details on yield
* Update new_blocks.md
* fix: remove ignore from docs build
* feat: initial migration
* feat: migration tested with supabase -> prisma data location
* add custom migrations and script
* update migration command
* formatting and linting
* updated migration script
* add direct db url
* add find files
* rename
* use binary instead of source
* temp adding supabase
* remove unused functions
* adding missed merge
* fix: commit hash for lock
* ci: fix lint
* fix: minor bugs that prevented connecting and migrating to dbs and auth
* fix: linting
* fix: missed await
* fix(backend): phase one pr updates
* fix: handle error with returning user object from database_manager
* fix: linting
* Address comments
* Make the migration safe
* Update migration doc
* Move misplaced model functions
* Grammar
* Revert lock
* Remove irrelevant changes
* Remove irrelevant changes
* Avoid adding trigger on public schema
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Aarushi <aarushik93@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* ci: create dependabot
* ci: target the dev branch for dependabot
* ci: group prs
* ci: group updates
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* Create repo-pr-enforce-base-branch.yml
* fix quotes
* test
* fix github token
* fix trigger and CLI config
* change back trigger because otherwise I can't test it
* fix the fix
* fix repo selection
* fix perms?
* fix quotes and newlines escaping in message
* Update repo-pr-enforce-base-branch.yml
* grrr escape sequences in bash
* test
* clean up
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* fix(market): agent pagination and search errors
* fix(frontend): search was not paginated
* fix: linting
* feat(market): linting ci
* fix(ci): branch limit name
* feat(platform): List and revoke credentials in user profile (#8207)
Display existing credentials (OAuth and API keys) for all current providers (Google, GitHub, Notion) and allow the user to remove them. For providers that support it, we also revoke the tokens through the API: of our current providers, Google and GitHub support it; Notion doesn't.
- Add credentials list and `Delete` button in `/profile`
- Add `revoke_tokens` abstract method to `BaseOAuthHandler` and implement it in each provider
- Revoke OAuth tokens for providers on `DELETE` `/{provider}/credentials/{cred_id}`, and return whether tokens could be revoked
- Update `autogpt-server-api/baseClient.ts:deleteCredentials` with `CredentialsDeleteResponse` return type
Bonus:
- Update `autogpt-server-api/baseClient.ts:_request` to properly handle empty server responses
* fix(backend): Lower the number of node workers to save DB connections (#8331)
Change [graph]×[node] worker limit from 10×5 to 10×3
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix(ci,platform): Add dev branch trigger to all ci (#8339)
* update ci for dev
* update classic
* remove duplicate dev
* fix(frontend): Fix styling inconsistencies in input elements (#8337)
- Apply consistent border styling to `Input`, `Select`, and `Textarea`
- Remove `rounded-xl` from node input elements
- Add `whitespace-nowrap` to `CustomNode` header category tags
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(builder): Use configmap for builder (#8343)
use configmap in builder
* fix(platform,infra): Checkin non secret values (#8344)
checkin non secrets
* security(platform): Add sealed secrets (#8342)
* add sealed secrets
* add encrypted secrets
* remove extra space
* Tf public media buckets (#8324)
* fix(infra): Fix sealed secret names (#8350)
* fix sealed secret names
* fix names and add annotation
* feat(backend): Introduce executors shared DB connection (#8340)
* update health check endpoint
---------
Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
- ci(frontend): Ensure CI fails if `yarn.lock` is inconsistent with `package.json`
- dx(frontend): Add Prettier check to `lint` script in `package.json`
- dx(frontend): Add `packageManager` to `package.json` for Corepack support
- build(frontend): Use `yarn` consistently in the Dockerfile
- feat(backend/executor): Change credential injection mechanism to acquire credentials from `AgentServer` just before execution
- Also locks the credentials for the duration of the execution
- feat(backend/server): Add thread-safe `IntegrationCredentialsManager` to handle and synchronize credentials-related operations
- feat(libs): Add mutexes to `SupabaseIntegrationCredentialsStore` to ensure thread-safety
Also:
- feat(backend): Added Pydantic model (de)serialization support to `@expose` decorator
Refactorings:
- refactor(backend, libs): Move `KeyedMutex` to `autogpt_libs.utils.synchronize`
- refactor(backend/server): Make `backend.server.integrations` module with `router`, `creds_manager`, and `utils` in it
* feat(backend): logic to disable enums based on python logic
* feat(backend): add BehaveAs setting and clarify its purpose vs. APP_ENV
APP_ENV denotes not cloud vs. local but the application environment (local/dev/prod), so we need BehaveAs as well
* fix(backend): fix various uses of AppEnvironment that skipped the enum or used it incorrectly
AppEnv in the logging library will never be cloud, due to the restrictions pydantic-settings applies when loading settings. This commit fixes that error; however, the code path for logging may now be incorrect
* feat(backend): use a metaclass to disable ollama in the cloud environment
* fix: formatting
* fix(backend): typing improvements
* fix(backend): more linting 😭
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* add ideogram ai image gen
* fixed revid secret api key being removed
* fixed auto checks errors
* Add AI Upscale option to IdeogramModelBlock
- Introduced an 'Upscale Image' option in the input schema to allow users to upscale generated images.
- Created the 'UpscaleOption' enum with options 'AI Upscale' and 'No Upscale'.
- Implemented the 'upscale_image' method to download the generated image into RAM and send it to the Ideogram AI upscale API without saving it to disk.
- Updated the 'run' method to handle the upscaling process based on the user's input.
- Ensured that the image processing is done entirely in memory (RAM) without writing to disk.
- Updated test inputs and mocks to reflect the new 'Upscale Image' option.
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* updates to tutorial
* updates to get user to save
* Update tutorial.ts
* final updates to end of tutorial
* Prettier
* add back data-id for badge within the blocks
* Prettier
* Updated onOpenChange code style
* modifying how handle text is rendered
* Rounding input boxes
* Modifying layout of nodes
* formatting
* update edge start / end positions
* updated handle rendering
* moved outputs down and disabled toggle
* formatting
* update font
* update key name formatting
* modify layout of input items
* updated the add property button
* feat(platform): Sync on new UI design
* simplify UI
* block list add border and remove padding
* add highlight on navbar button
* Change block header so block costs line up correctly
* fix history type issue
* formatting
* tweaking css to hide white spot
* fixed white spot
* Added context menu
* Changed status badge color
* getting error colors just right
* Added a NodeOutputs component for rendering the outputs
* tidy up
* Change Add Item Button Color
* changed cursor on hover in block control panel
* formatting
* updated formatting of tutoral and tally buttons
* fix(platform): Fix text area input not updating input field
* Address comments
* Add missing color
* fix lint errors
* Cleanup context logic
* Make inputref reliable
* Update coloring
* fix(platform): Fix unexpected closing block list on tutorial
* Add X-scrolling
* Remove excessive shadows
* Remove another excessive shadows
* Another patch patch patch
* Add border on context menu
* Cleanup executions
* Cleanup executions
* Make border darker
* Fix input reset
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
- refactor(blocks): Assign new IDs to 13 blocks
- Create DB migration to update block IDs in existing DB entities
- feat(frontend): Add `updateBlockIDs` "middleware" to `AgentImportForm` loader in front end
* feat(blocks): Add AIShortformVideoCreatorBlock
- Added a new block called AIShortformVideoCreatorBlock to create shortform videos using revid.ai.
- The block takes input parameters such as script, background music, and voice ID.
- It uses the revid.ai API to create the video and waits for completion using a webhook proxy service.
- Once the video is ready, it returns the URL of the created video.
- This block takes anywhere from 5 seconds to several minutes to complete depending on the length of the video.
Add revid.ai API key to Secrets
- Added a new field in the Secrets class to store the revid.ai API key.
* refactor(blocks): Remove unused webhook code in AIShortformVideoCreatorBlock
* Add background music track options.
* Add preset voice options
* Add generation preset and visual style configuration options.
* Remove "morpher" video type due to long generation times and low quality.
Plus extend timeout cut-off.
* Add audio track configuration options.
* refactor AudioTrack selection into single class
* format
* Add test mocks
* run format
* Added Replicate Flux Blocks
* updated poetry lock file for replicate
* Refactor ReplicateFluxAdvancedModelBlock to use an enum for replicate_model_name rather than free strings.
* Refactor ReplicateFluxAdvancedModelBlock to use an enum for output_format instead of free strings
* Refactor ReplicateFluxAdvancedModelBlock to stop requiring people to type a random seed
* run format
* poetry run format
* Delete ReplicateFluxBasicModelBlock
* Mark model name as not advanced
* tweak input order
* Fix test
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
* Feat(Builder): Add Google Maps Search Block
* format
* Updates to google maps search block
* fixes
* format + updates again
* fix for pytest
* format again
* updates based on new comments
* fix for format?
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(platform): Enhance AITextSummarizerBlock with configurable summary style and focus
The AITextSummarizerBlock in the autogpt_platform/backend/backend/blocks/llm.py file has been enhanced to include the following changes:
- Added a new enum class, SummaryStyle, with options for concise, detailed, bullet points, and numbered list styles.
- Added a new input parameter, focus, to specify the topic of the summary.
- Modified the _summarize_chunk method to include the style and focus in the prompt.
- Modified the _combine_summaries method to include the style and focus in the prompt.
These changes allow users to customize the style and focus of the generated summaries, providing more flexibility and control.
* run formatting and linting
## Config
- For Supabase, the back end needs `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `SUPABASE_JWT_SECRET`
- For the GitHub integration to work, the back end needs `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- For integrations OAuth flows to work in local development, the back end needs `FRONTEND_BASE_URL` to generate login URLs with accurate redirect URLs
## REST API
- Tweak output of OAuth `/login` endpoint: add `state_token` separately in response
- Add `POST /integrations/{provider}/credentials` (for API keys)
- Add `DELETE /integrations/{provider}/credentials/{cred_id}`
## Back end
- Add Supabase support to `AppService`
- Add `FRONTEND_BASE_URL` config option, mainly for local development use
### `autogpt_libs.supabase_integration_credentials_store`
- Add `CredentialsType` alias
- Add `.bearer()` helper methods to `APIKeyCredentials` and `OAuth2Credentials`
### Blocks
- Add `CredentialsField(..) -> CredentialsMetaInput`
## Front end
### UI components
- `CredentialsInput` for use on `CustomNode`: allows user to add/select credentials for a service.
- `APIKeyCredentialsModal`: a dialog for creating API keys
- `OAuth2FlowWaitingModal`: a dialog to indicate that the application is waiting for the user to log in to the 3rd party service in the provided pop-up window
- `NodeCredentialsInput`: wrapper for `CredentialsInput` with the "usual" interface of node input components
- New icons: `IconKey`, `IconKeyPlus`, `IconUser`, `IconUserPlus`
### Data model
- `CredentialsProvider`: introduces the app-level `CredentialsProvidersContext`, which acts as an application-wide store and cache for credentials metadata.
- `useCredentials` for use on `CustomNode`: uses `CredentialsProvidersContext` and provides node-specific credential data and provider-specific data/functions
- `/auth/integrations/oauth_callback` route to close the loop to the `CredentialsInput` after a user completes sign-in to the external service
- Add `BlockIOCredentialsSubSchema`
### API client
- Add `isAuthenticated` method
- Add methods for integration OAuth flow: `oAuthLogin`, `oAuthCallback`
- Add CRD methods for credentials: `createAPIKeyCredentials`, `listCredentials`, `getCredentials`, `deleteCredentials`
- Add mirrored types `CredentialsMetaResponse`, `CredentialsMetaInput`, `OAuth2Credentials`, `APIKeyCredentials`
- Add GitHub blocks + "DEVELOPER_TOOLS" category
- Add `**kwargs` to `Block.run(..)` signature to support additional kwargs
- Add support for loading blocks from nested modules (e.g. `blocks/github/issues.py`)
#### Executor
- Add strict support for `credentials` fields on blocks
- Fetch credentials for graph execution and pass them down through to the node execution
* move to supabase pg instance
* remove postgres and bind supabase port
* Updated setup
- Switched db name to postgres to work with prisma studio
- Added platform schema
- Added Market migrations
- bound prisma studio port
* remove studio port
* updated .env
* updated readmes
---------
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Restructuring the Repo to make it clear the difference between classic autogpt and the autogpt platform:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
* Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
* `rnd/autogpt_builder` -> `autogpt_platform/frontend`
* `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
* update pr template wording
* add what and how
* Update .github/PULL_REQUEST_TEMPLATE.md
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
- Add two endpoints to OAuth `integrations.py`:
- `GET /integrations/{provider}/credentials` - list all credentials for a provider, without secrets (metadata only)
- `GET /integrations/{provider}/credentials/{cred_id}` - retrieve a set of credentials (including secrets)
- Add `username` property to `Credentials` types
- Add logic to populate `username` in OAuth handlers
- Expand `CredentialsMetaResponse` and remove `credentials_` prefix from properties
- Fix `autogpt_libs` dependency caching issue
- Remove accidentally duplicated OAuth handler files in `autogpt_server/integrations`
* Feat(Builder): Add Runner input and output screens
* Fix run button not working
* prettier
* prettier again -- forgot flow
* fix input scaling + auto close on run
* removed "Runner Input" button; the runner input now auto-opens when an input block is present + fixed issue with output not showing in the output UI
* replaced runner output icon and added a new icon for it
* replaced IconOutput icon with LogOut from lucide-react
* prettier
* fix type safety issue + add error handling for formatOutput
* Updates based on comments
* prettier for utils
### Background
We need a way to set an execution quota per user for each block execution.
### Changes 🏗️
* Introduced `UserBlockCredit`, a transaction table tracking block usage along with its cost/quota.
* The tracking is toggled by `ENABLE_CREDIT` config, default = false.
* Introduced `BLOCK_COSTS` | `GET /blocks/costs` as a source of information for the cost on each block depending on the input configuration.
Improvements:
* Refactor logging in manager.py to always print a prefix and pass the metadata.
* Make `executionStatus` on AgentNodeExecution a Prisma enum, and add `executionStatus` on AgentGraphExecution.
* Use executionStatus from AgentGraphExecution to improve waiting logic on test_manager.py.
- feat(server): Initial draft of OAuth init and exchange endpoints
- Add `supabase` dependency
- Add Supabase credentials to `Secrets`
- Add `get_supabase` utility to `.server.utils`
- Add `.server.integrations` API segment with initial implementations for OAuth init and exchange endpoints
- Move integration OAuth handlers to `autogpt_server.integrations.oauth`
- Change constructor of `SupabaseIntegrationCredentialsStore` to take a Supabase client
- Fix type issues in `GoogleOAuthHandler`
* feat(builder): Add skeleton loading components for Monitor views
Introduce skeleton components for Agents, Flow Runs List, and Flow Runs Status sections to enhance loading state indication. These components help improve user experience by visually outlining content placeholders while data is being fetched.
* feat(builder): Leveraging NextJS's error boundary with error.tsx
Replace the basic error page with a more detailed and interactive error component. The new component includes a retry option, a link to the homepage, and logs the error details to the console. It also aligns with NextJS standards
---------
- Make process/service startup/shutdown messages consistent
- Configure `uvicorn` to use our logging config instead of its own
- Replace `print(..)` statements in ws_api.py with log statements
- Improve log statements in ws_api.py
- Handle JSON-encoding inside `.data.execution.upsert_execution_output(..)` to ensure it is always encoded the same
- Amend `.executor.manager.execute_node(..)` to pass unencoded data into `upsert_execution_output(..)`
- Add SIGTERM handler and `cleanup()` hook to `AppProcess`
- Implement `cleanup()` on `AppService` to close DB and Redis connections
- Implement `cleanup()` on `ExecutionManager` to shut down worker pool
- Add `atexit` and SIGTERM handlers to node executor to close DB connection and shut down node workers
- Improve logging in `.executor.manager`
- Fix shutdown order of `.util.test:SpinTestServer`
- feat(builder): Add "Stop Run" buttons to monitor and builder
- Implement additional state management in `useAgentGraph` hook
- Add "stop" request mechanism
- Implement execution status tracking using WebSockets
- Add `isSaving`, `isRunning`, `isStopping` outputs
- Add `requestStopRun` method
- Rename `requestSaveRun` to `requestSaveAndRun` for clarity
- Add needed functionality for the above to `AutoGPTServerAPI` client
- Add `stopGraphExecution` method
- Add support for multiple handlers per WebSocket method
- Fix parsing of timestamps in `execution_event` WebSocket messages
- Add `IconSquare` from Lucide to `@/components/ui/icons`
- feat(server): Add `POST /graphs/{graph_id}/executions/{graph_exec_id}/stop` route
- Add `stop_graph_run` method to `AgentServer`
- feat(server): Add `cancel_execution` method to `ExecutionManager`
- Replace node executor `ProcessPoolExecutor` by `multiprocessing.Pool` (which has a `terminate()` method)
- Remove now unnecessary `Executor.wait_future(..)` method
- Add `get_graph_execution(..)` in `.data.execution`
- fix(server): Reduce number of node executors to 5 per graph executor
This is necessary because `multiprocessing.Pool` spawns its workers on init, instead of on demand like `ProcessPoolExecutor` does (see the sketch below)
- dx(server): Improve debug logging in `ExecutionManager`
- ci(server): Add debug logging mode to CI Pytest step
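A minimal illustration of why the pool swap enables cancellation (the worker function is a stand-in): unlike `ProcessPoolExecutor`, a `multiprocessing.Pool` exposes `terminate()`, which `cancel_execution` can use to kill in-flight node executions.
```python
import multiprocessing
import time

def node_worker(i: int) -> int:
    time.sleep(60)  # stand-in for a long-running node execution
    return i

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=5)  # workers are spawned up-front
    pool.map_async(node_worker, range(10))
    time.sleep(1)
    pool.terminate()  # no equivalent exists on ProcessPoolExecutor
    pool.join()
```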
### Other improvements
Server:
- Improve output type of `ExecutionManager.add_execution(..)`
- Renamed a few things in `.server.rest_api` for consistency
Front end:
- Improved typing in `AutoGPTServerAPI` client
In `autogpt_server.util.lock:KeyedMutex`:
- track number of pending requests for each lock
- only remove a lock from `self.locks` when the number of pending lock requests hits 0
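A simplified sketch of that refcounting (an assumed shape, not the actual implementation in `autogpt_server.util.lock`):
```python
import threading
from collections import defaultdict

class KeyedMutex:
    def __init__(self):
        self._guard = threading.Lock()
        self._locks: dict = {}
        self._pending: dict = defaultdict(int)

    def lock(self, key) -> None:
        with self._guard:
            self._pending[key] += 1  # track pending requests per key
            mutex = self._locks.setdefault(key, threading.Lock())
        mutex.acquire()

    def unlock(self, key) -> None:
        with self._guard:
            self._locks[key].release()
            self._pending[key] -= 1
            if self._pending[key] == 0:  # only drop the lock when nobody is waiting
                del self._locks[key]
                del self._pending[key]
```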
* move migrations, update networking and dockignore
* update docs
* remove sqlite from ci
* remove schema linting checks
* fix formatting
* remove schema linting
* add test script
* formatting and linting
* stop pg not down
* separate test db
* diff port
* remove duplicate
### Background
The scope of this change is collecting the required information that will be needed for the execution analytics.
### Changes 🏗️
* Add sentry integration.
* Refactor logging_metadata on manager.py.
* Collect graph-level & node-level instrumentation.
* Introduced `stats` column for `AgentNodeExecution` & `AgentGraphExecution`.
- Add `advanced` to `SchemaField` and pass it to `json_extra`
- Add `advanced` to `BlockIOSubSchemaMeta` type
- Update `CustomNode`, so that:
- non-required advanced inputs are hidden
- non-advanced and required inputs are always shown
- Add minimize/maximize button in the corner of modal to make it significantly larger and centered
- Add copy button to copy all text
- Add optional `title` to display as a modal header
- fix type propagation by `AppService.run_and_wait(..)`
- fix type propagation by `@expose` and add note
- fix type propagation by `wait(..)` in `.executor.manager.execute_node(..)`
- fix type propagation by `wait(..)` in `.executor.manager._enqueue_next_nodes(..)`
- remove unnecessary null checks for `.data.graph.get_node(..)`
- fix type issue in `ExecutionScheduler`
- reduce use of `# type: ignore` in `.data.execution`
- reduce usage of `# type: ignore` in `.executor.manager`
- reduce usage of `# type: ignore` in `.server`
- reduce usage of `# type: ignore` in cli.py
- update `pyright` to v1.1.378
* standalone websocket server
* add websocket url
* wip: talk to ws directly
* rename to api server
* dockerfile and queue
* fix paths
* update poetry lock
* helm charts for websockets
* create separate deployments for websockets and rest server with redis queue for async comms
* delete duplicate queue
* add depends in ws_api
* singleton for conn manager
* update from review
* fix CI
* address feedback
* update readme
* update docker file and add migration step in readme
* add watch
* add step to copy example env file
* put connect back in
- Update styling and use tailwind more
- Add `react-toast` dependency
- Fix output button not changing checked state on execution
- Make status a badge in node's corner
- Rename `output_data` to `executionResults` and store multiple results with execution UUIDs
- Add `DataTable` component that displays execution outputs
- Outputs can be copied and there's a toast displayed as a feedback
Issue 1:
The input text field cursor keeps moving to the end of the text.
Try to type "Hello World!" into the input text, then try to type "some string" in the middle, between "Hello" and "World".
Issue 2:
History should only be tracked on the input box's onBlur/onLeave.
Try to type "longcharacters" and then undo it: the undo removes one character at a time, polluting the history and making the undo pretty much unusable.
Issue 3:
KeyValue & ArrayInput are non-undoable.
Try to add a key-value pair or an entry to the list: it doesn't undo the value; you need to click undo as many times as the number of entries added before undo works again.
* Feat(Builder): Add first guide tutorial
* added more steps + some fixes
* added local storage to fix starting every time going to build
* update copy & paste to support mac
* small fix
* Prettier fixes
* Added "Skip Tutorial" button to first step
* some fixes based on requests
* revert camelCase change
* add ability to use url to reset tutorial
* prettier
* Added Tutorial button next to tally
* prettier
* change pinBlocksPopover to setPinBlocksPopover
* fixes + update + prettier
* made the resetTutorial url dynamic
* force to /build on reset tutorial
* fix renaming
* prettier
Update ReactFlow to version 12 and split `Flow.tsx` into `useAgentGraph` hook that takes care of agent state and API calls to the server.
- Update ReactFlow to v12 ([migration guide](https://reactflow.dev/learn/troubleshooting/migrate-to-v12))
- Move `setIsAnyModalOpen` to `FlowContext`
- Make `setHardcodedValues` and `setErrors` functions of `CustomNode` and utilize new `updateNodeData` ReactFlow API
- Fix type errors
- `useAgentGraph` hook
- Take care of all API calls, websocket, agent state and logic
- Make saving and execution async and thus more consistent and reliable
- Save&run requests are state
- Wait for node ids to sync with backend reactively
- Queue execution updates
- Memoize functions using `useCallback`
### Background
Currently, there is no way to construct the output of nodes into a composite data structure (list/dict/object) using the builder UI.
The backend already supports this feature by connecting the output pin to the input pin using these formats:
* `<pin_name>_$_<list_index>` for constructing a list
* `<pin_name>_#_<dict_key>` for constructing a dict
* `<pin_name>_@_<field_name>` for constructing an object
The scope of this PR is implementing the UX for this in the builder UI.
### Changes 🏗️
<img width="765" alt="image" src="https://github.com/user-attachments/assets/8fc319a4-1350-410f-98cf-24f2aa2bc34b">
This allows you to add more pins in a key value & list input: `_$_` list constructor & `_#_` dict constructor.
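A self-contained sketch of that pin-name convention (the helper name is hypothetical; the backend's actual merge logic may differ):
```python
def merge_pin(data: dict, pin: str, value) -> dict:
    if "_$_" in pin:  # <pin_name>_$_<list_index> -> build a list
        name, index = pin.split("_$_")
        lst = data.setdefault(name, [])
        lst.extend([None] * (int(index) + 1 - len(lst)))
        lst[int(index)] = value
    elif "_#_" in pin:  # <pin_name>_#_<dict_key> -> build a dict
        name, key = pin.split("_#_")
        data.setdefault(name, {})[key] = value
    elif "_@_" in pin:  # <pin_name>_@_<field_name> -> build an object
        name, fld = pin.split("_@_")
        data.setdefault(name, {})[fld] = value
    else:
        data[pin] = value
    return data

inputs: dict = {}
merge_pin(inputs, "items_$_0", "a")
merge_pin(inputs, "items_$_1", "b")
merge_pin(inputs, "config_#_mode", "fast")
print(inputs)  # {'items': ['a', 'b'], 'config': {'mode': 'fast'}}
```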
### Background
A boolean without a default value is a UX problem: it's currently displayed as a toggle, which has no way to represent the `null` value.
So we need to prevent blocks from introducing nullable booleans.
### Changes 🏗️
Add an explicit check to prevent nullable booleans. Fix the existing block field that has a nullable boolean.
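A minimal standalone sketch of such a check (assumed shape; the real check presumably runs against each block's input schema at definition time):
```python
import types
from typing import Optional, Union, get_args, get_origin

def is_nullable_bool(annotation) -> bool:
    """True for Optional[bool] / `bool | None`, which a toggle cannot represent."""
    if get_origin(annotation) is Union or isinstance(annotation, getattr(types, "UnionType", ())):
        return set(get_args(annotation)) == {bool, type(None)}
    return False

assert is_nullable_bool(Optional[bool])
assert not is_nullable_bool(bool)
```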
* talking head
* linting
* remove clip id, not needed
* add more descriptive name
* add min requirement to polling attempts and intervals
* add docs and link to docs
* remove extra space
* force new tab
* fix linting
* add did key to .env.template
### Background
We don't have an ordering guarantee on the node execution.
Let's say we have a node that has to execute different data A, B, and C.
The current implementation limits each node to one execution at a time, but there is no guarantee that A, B, and C will be executed in order.
The initial implementation had no restrictions at all, so A, B, and C used to execute in parallel.
With the current per-node constraint, A, B, and C execute serially, but with no guarantee of ordering.
The scope of this PR is to guarantee that order.
### Changes 🏗️
Guarantee per-node execution ordering by avoiding any re-enqueue mechanism. If two executions run on the same node, the first one executes and the other blocks. The blocking mechanism is admittedly sub-optimal; a performance improvement can be done later (a follow-up issue will be added).
* feat(builder): checkbox for tos on login page
* feat(builder): submit agent page
DOES NOT WORK
* feat(builder): basic upload (not working)
* feat(builder): submit page more working but still not
* fix(builder): working categories, not dynamic
* feat(builder, server): enable submissions (auth error)
* fix(lint): linting
* feat(builder): submit page terms of service
* fix(builder): update lockfile
* lint(builder): lint marketplace files
- Add static link/connection support on the frontend and display them as dashed lines
- Remove queueing for static connections - there'll always be only one bead waiting at the end
- Make beads slightly larger and further from the end arrow
* Add Advanced Chatbot with History using Discord
* Update Discord Chatbot with History_v145.json
update is_active to false and is_template to true
---------
Co-authored-by: Bently <tomnoon9@gmail.com>
The execution graph is supposed to be typed, but there are cases where generic types like `Any` were used, and cases where incompatible data was passed into the wrong type.
If this happens at runtime, we should do a best-effort conversion instead of breaking the run, e.g. try to JSON-stringify an object for a `str` input, or try to parse the number in a string for an `int` input.
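A hedged sketch of the best-effort conversion idea; `best_effort_convert` is illustrative, not the actual executor code:

```python
# Illustrative best-effort conversion: fall back to the original value
# instead of failing the run when conversion is impossible.
import json
from typing import Any

def best_effort_convert(value: Any, target: type) -> Any:
    if isinstance(value, target):
        return value
    try:
        if target is str:
            return json.dumps(value)              # object -> str input
        if target in (int, float):
            return target(str(value).strip())     # "42" -> int input
        if target in (list, dict) and isinstance(value, str):
            return target(json.loads(value))      # '{"a": 1}' -> dict input
    except (TypeError, ValueError):
        pass
    return value  # best effort failed: keep the original value

print(best_effort_convert("42", int))       # 42
print(best_effort_convert({"a": 1}, str))   # {"a": 1}
```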
* Make `userId` required on DB entities `AgentGraph`, `AgentGraphExecution`, and `AgentGraphExecutionSchedule`
* Add SQLite and Postgres migrations to make `userId` required and set `userId` to `3e53486c-cf57-477e-ba2a-cb02dc828e1a` on existing entries without `userId`
* Amend `create_graph` endpoint and `.data.graph`, `.data.execution` methods to handle required `user_id`
* Add `.data.user.DEFAULT_USER_ID` constant to replace hardcoded literals
* refactor(builder): Migrate `FlowEditor` to use ReactFlow's state management system
We have been keeping two copies of node and edge data: one inside ReactFlow and one outside.
It works, but it's accidental and implicit, and there is no reason to use shadow copies rather than a single data source.
- Replace `useNodesState` and `useEdgesState` with `useReactFlow` hook
- Use `addNodes`, `addEdges`, and `deleteElements` where appropriate instead of `setNodes`/`setEdges` to allow use of event hooks
- Consolidate all edge -> node state sync logic into `onEdgesChange` event handler
This replaces `updateNodesOnEdgeChange`, part of `onConnect`, and `onEdgesDelete`.
- Move node deletion logic from `CustomNode` to `FlowEditor:onNodesChange`
* fix(builder): Refactor and fix copy-paste mechanism
- Rename variables for readability
- Use an ID map to correctly set the source and target IDs for the pasted edges
- Move `monitor/page.tsx` to `page.tsx`
- Remove redirect from `/` to `/build`
- Set temporary redirect from `/monitor` to `/` to prevent breaking UX (may be removed after a grace period, e.g. 2024-09-01)
* adding auth to store
* Add ability to submit agents and review them before being added to the market
* Added auth decorator
* Added auth to market api client
* fix(builder): Fix drag-select behavior on `NodeKeyValueInput`
* fix(github): Added in fallback variables for postgres testing (#7715)
Co-authored-by: Leslie Cruz <lelcruz@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
* feat(builder): basic tally feedback form (#7725)
* removed database changes
* moved auth to libs project
* fixed formatting
* cleaned up auth
* Added tests and database migration
* delete decorator
* feat(builder): Add new icons and integrate with ControlPanel
Introduce SVG icons and replace ControlPanel dependencies with new icons. The ControlPanel component now uses the new icon components for improved consistency.
* feat(builder): add additional icon and update icons in NavBar
Introduced the new icon with relevant documentation and examples. Replaced existing icon imports with the newly defined icons in the NavBar component for consistency.
* feat(builder): Add icon for megaphone and replace usage
Introduced the IconMegaphone for reuse across the application. Updated TallyPopup to utilize IconMegaphone instead of the previous Megaphone icon from lucide-react.
* fix(builder): Running prettier to format changed files.
Adjusted various files for consistent code formatting. Ensured proper spacing and alignment of imports, JSX tags, and object properties to enhance readability and maintain coding standards. No functional changes made.
- Amend `AutoGPTServerAPI.sendWebSocketMessage(..)` to automatically (re)connect the websocket if disconnected
- Amend `AutoGPTServerAPI.connectWebSocket()` to prevent race conditions
* adding auth to store
* Added auth decorator
* Added auth to market api client
* removed database changes
* moved auth to libs project
* fixed formatting
* Switched to using fastapi dependencies
* Return a user object
* removed logging of the token
* Added tests
### Background
CurrentDateAndTimeBlock would fail if the test is not complete within 1-second wall-time.
In the case a test started at the second 01:59:59, it becomes flaky.
We can change the test to only assert the type. But this is also a good chance to add more assertion options for Block: a custom function.
### Changes 🏗️
Change assertion for the time block using an additional margin of error.
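A sketch of what a margin-of-error assertion could look like; the helper name, format, and margin are assumptions:

```python
# Hypothetical tolerant assertion for a time-producing block: compare
# against "now" with a margin instead of an exact string match.
from datetime import datetime, timedelta

def assert_time_close(output: str, fmt: str = "%H:%M:%S",
                      margin: timedelta = timedelta(seconds=5)) -> None:
    now = datetime.now()
    produced = datetime.combine(now.date(), datetime.strptime(output, fmt).time())
    delta = abs(produced - now)
    # take the shorter way around midnight so 23:59:59 vs 00:00:01 passes
    assert min(delta, timedelta(days=1) - delta) <= margin

assert_time_close(datetime.now().strftime("%H:%M:%S"))
```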
* Added listing, sorting, filtering and ordering of agents
* feat(market): general upkeep for vscode and small docs
* feat(market): most of search
* fix(market): hinting on the sort was weird + linting
* feat(market): migrations and schema updates
* lint(market): autolint
* feat(market): better search
* feat(market): file download
* feat(market): analytics of downloads
* Added tracking of views
* changed all imports to be fully qualified
* Upgrade sentry sdk
* Added an admin endpoint to submit new agents
* fixes
* Added endpoint that just tracks download
* Starting adding the marketplace page
* Marketplace client
* Create template of the marketplace page
* Updated client
* fix(market): debug port
* feat(market): agents by downloads
* fix(market, builder): hook up frontend and backend
* feat(builder, market): build a "better" market page that loads data
* feat(builder): updated search (working) and page (kinda working)
* feat(builder): add a feature agents ui (not backed yet)
* feat(builder): improve detail page content
* Added run script
* Added pre populate database command
* Add AnalyticsTracker on create agent
* Add download counts for top agents
* Add hb page prometheus metrics
* Added featured agents functionality
* renamed endpoint to health
* Adding download flow
* normalised api routes
* update readme
* feat(market) : default featured
* formatting
* revert changes to autogpt and forge
* Updated Readme
* Error when creating an agent from a template installed from (#7697)
fix creating graph from template
* Add dockerfile
* z level fix
* Updated env vars
* updated populate url
* formatting
* fixed linting error
* Set defaults
* Allow only next.js dev server
* fixed url
* removed graph reassignment as due to change in master
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Background
This change brings the capability to decompose a graph into sub-graphs. The objective of this feature is to let a user build a visually modular, easier-to-understand graph. It also allows importing a graph into your existing graph without cluttering it.
This feature will require more implementation on the UI side, to allow a grouped subgraph to be represented as a node in the builder.
### Changes 🏗️
Introduced subgraph functionality with the following properties:
* A sub-graph is simply a set of nodes that are grouped together, making it representable as a node.
* A sub-graph's input & output pins/schema are the `InputBlock` / `OutputBlock` nodes present in the subgraph.
* The previous point implies that connecting two nodes from different sub-graphs, other than input/output nodes, is not allowed.
* Graphs can be nested, but are defined flatly, i.e. a graph is represented by only three components: nodes, links, and subgraphs (each a set of nodes). A nested subgraph is simply a node inside one subgraph connecting to an `InputBlock` node of another subgraph.
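A minimal data sketch of this flat representation; the field names are illustrative, not the actual schema:

```python
# Flat graph representation: nodes, links, and subgraphs as node sets.
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: list[str] = field(default_factory=list)                # node ids
    links: list[tuple[str, str]] = field(default_factory=list)    # (source, sink)
    subgraphs: dict[str, set[str]] = field(default_factory=dict)  # name -> node ids

g = Graph(
    nodes=["input-1", "llm-1", "output-1"],
    links=[("input-1", "llm-1"), ("llm-1", "output-1")],
    subgraphs={"summarize": {"input-1", "llm-1", "output-1"}},
)
```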
* fix(builder): Adding prettier configuration files and to package.
* fix(builder): Running script "format" added to the package.json
* feat(builder): Adding a job to the yaml file. This job will run "format" which leverages prettier.
* feat(builder): Running script "format" and merging master
* feat(builder): Setting configuration to prettier defaults in .prettierrc.json, and adding a requested newline in the .prettierignore
* feat(builder): Updating the CI to not add a job for prettier but instead add a check to verify prettier was run before committing.
* feat(builder): Confirming CI update fails when user does not run prettier first. Updating with file changes after prettier
* feat(builder): Running prettier write to fix warnings
* fix(builder): Removing .prett
per PR change request
* fix(builder): Running prettier formatter
* fix(builder): Running prettier formatter check found additional files
* fix(builder): Running prettier format
* fix(builder): Removing running "format" command from PR due to a change request.
* fix(builder): Removing running "format" command from PR due to a change request.
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
* feat(Block): Add AdvancedLlmCallBlock
Adds a block for handling advanced LLM calls, enabling messages to be handled within the AutoGPT builder.
* fix linting
* Super early version of the discord bots blocks
* updated to add secrets for token + other fixes
* lint & format
* rename DiscordBot to DiscordReader
* add discord-py
* fix poetry lock
* rm duplicated file
* updated name to add Block to end
* update .env.template to add DISCORD_BOT_TOKEN
* updates to add description Field
* swap channel name and message content + add field description
---------
Co-authored-by: Bently <bently@bentlybro.com>
- Merge `Flow.tsx:CustomNodeData` type definition into `CustomNode.tsx:CustomNodeData`
- Move `lib/types:BlockSchema` type definition into `lib/autogpt-server-api/types` in place of `ObjectSchema`
- Expand and rename `BlockSchema` -> `BlockIOSchema` + `BlockIORootSchema`
- Fix all `BlockIOSchema` related type narrowing checks
- Add warning messages to fallback cases in `NodeInputField` logic
Co-authored-by: Swifty <craigswift13@gmail.com>
* feat(Builder): Implement undo/redo functionality
* updates to work with latest UI
* add CTRL + Z & CTRL + Y support
* fixed undo/redo for inputs in nodes
* fix for deleting node
* fixes to make the undo/redo work better
* small fix
* Updates based on feedback
* add margin right to space out buttons
* added CTRL + SHIFT + Z for redo
This PR adds Supabase Auth (cloud) integration, login and profile UI, configures password login and three OAuth providers: Google, GitHub and Discord.
For the `Account` button to show up and login to work, two env vars need to be set in `.env.local`: `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY`. OAuth providers are configured in Supabase and don't require env vars.
Email confirmation (for email/password signup) is disabled because there's a limit of 3 emails per hour without a custom SMTP server configuration. [Link](https://supabase.com/dashboard/project/adfjtextkuilwuhzdjpf/auth/templates) to configure a custom SMTP server and email template.
### Added dependencies:
- "@supabase/ssr": "^0.4.0"
- "@supabase/supabase-js": "^2.45.0"
- "react-icons": "^5.2.1"
### Added pages/routes:
- `app/auth/auth-code-error/page.tsx`: displayed when login using OAuth provider fails
- `app/auth/callback/route.ts`: route accessed when logging in using OAuth provider; it passes session code to Supabase
- `app/auth/confirm/route.ts`: accessed when confirming email; users are directed here from the email they receive after signing up.
- `app/error/page.tsx`: Generic error page without explanation (any errors should be visible in the console)
- `app/login/page.tsx` and `app/login/actions.ts`: Login page and related login/signup server actions
- `app/profile/page.tsx`: Profile page, displays email address of the user and button to logout
### Changes
- Update `layout.tsx`: add `Log In` button and make icons consistent. The log in button shows up if the user is logged out, the avatar is shown when logged in, and if Supabase is unavailable nothing shows up.
- The login form is validated using `zod` on the frontend (recommended by shadcn), and if login fails, feedback is displayed. On successful login, users are redirected to `/profile`.
- Add `PasswordInput` component, [source](https://gist.github.com/mjbalcueva/b21f39a8787e558d4c536bf68e267398)
- Add `SupabaseProvider` with context for Supabase accessed via hook `useSupabase(): { supabase: SupabaseClient | null, isLoading: boolean }`
- Add `useUser` hook to get `{ user, session, isLoading, error }` on the client
- Add `getServerUser`: async function to get `{ user: User | null, error: string | null }` on the server side
- Add `src/middleware.ts` and `client.ts`, `server.ts`, `middleware.ts` in `src/lib/supabase` which are utility functions and middleware to refresh auth token
### Background
We need an explicit block for providing input & output for the graph.
This will later allow us to build a subgraph with pre-declared input & output schema.
This will also allow us to set input for the node in the middle of the graph, and enable a graph to have output values.
### Changes 🏗️
* Add InputBlock & OutputBlock
* Add graph structure validation on the graph execution step that asserts the following property:
- All mandatory input pins have to be connected or have a default value, except on `InputBlock` nodes.
- All links have to connect valid nodes, and the sink & source names have to reference valid block fields.
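A hedged sketch of these validation rules; the node and link data shapes are assumptions made for the example:

```python
# Illustrative structure validation: mandatory pins must be connected
# or defaulted (except on InputBlock nodes), and links must reference
# existing nodes and valid sink fields.
def validate_graph(nodes: dict[str, dict],
                   links: list[tuple[str, str, str, str]]) -> None:
    connected = {(sink_id, sink_name) for _, _, sink_id, sink_name in links}
    for node_id, node in nodes.items():
        if node["block"] == "InputBlock":
            continue  # InputBlock nodes receive their input at run time
        for pin, spec in node["input_schema"].items():
            if spec.get("required") and "default" not in spec \
                    and (node_id, pin) not in connected:
                raise ValueError(f"{node_id}.{pin} is mandatory but unconnected")
    for source_id, _source_name, sink_id, sink_name in links:
        if source_id not in nodes or sink_id not in nodes:
            raise ValueError("link connects a non-existent node")
        if sink_name not in nodes[sink_id]["input_schema"]:
            raise ValueError(f"unknown sink field {sink_name!r}")

nodes = {
    "in1": {"block": "InputBlock", "input_schema": {}},
    "llm1": {"block": "LlmCallBlock", "input_schema": {"prompt": {"required": True}}},
}
validate_graph(nodes, [("in1", "result", "llm1", "prompt")])  # passes
```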
* Feat(Builder): Clear Status and Output upon graph edit
* close output dropdown on graph edit
* make deleting edge clear block outputs
* make it so deleting nodes clears block output
### Background
Migration not being synced has caused issues and CI is not catching this yet.
### Changes 🏗️
* Added schema check in both sqlite & postgres prisma.schema on linting step.
* Introduced `poetry run schema` for syncing prisma.schema + generating migration in both files.
* Set up helm and tf for backend
* update helm charts and settings
* remove example files
* use latest tag
* delay and timeouts for probes
* env based pyro host
* default backend
* linting
* add helm linting in CI
* read from settings
* fix formatting
* update to use config
* Set up helm and tf for backend
* update helm charts and settings
* remove example files
* use latest tag
* delay and timeouts for probes
* env based pyro host
* default backend
* linting
* read from settings
* fix formatting
* update to use config
* fix(builder): Implementing a basic shadCn theme until color palette is decided upon
* feat(builder): Separating NavBar into its own component and providing a standard UI/UX Approach
* feat(builder): Removing old implementation of logo, removing excessive css implementation, updating styles to better support standard desktop views.
* feature(builder): Addition of UI component Sheet from ShadCn for update
### Background
Pyro for the API Server is not using Prisma, but is still holding a Prisma connection.
The FastAPI thread is also holding a Prisma connection, making Prisma connected in two different loops within a single process.
### Changes 🏗️
Disable the Prisma connection on the Pyro thread for the Server API process.
Fix test flakiness caused by a concurrency issue.
### Background
Input from the input pin is consumed only once. While this is required in most use cases, there are cases where the input can only be produced once, and that input needs to be reused just like an input default value: passively providing input data without triggering any execution. The scope of this change is providing that functionality at the link level; this property is called a **`static link`** in this system.
### Changes 🏗️
Provides a static link feature with the following behaviours:
* A link can be marked `static` to become a static link.
* When a node produces an output, it persists the output data and propagates it to the other nodes through the link. For a static link, instead of queueing the data in the input pin, it overrides the pin's default value.
* Any input executions still waiting for the input will be backfilled with the output produced through the static link.
* Any upcoming executions that use the input will always reuse the output produced through the static link.
See the added test for the expected usage.
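As a rough illustration (not the actual implementation), a static link can be modeled as overriding the sink pin's default value instead of queueing:

```python
# Toy model: a static link persists its value as the pin's default, so
# every later consume() reuses it; a normal link is consumed once.
from collections import deque
from dataclasses import dataclass, field
from typing import Any

@dataclass
class InputPin:
    default: Any = None
    queue: deque = field(default_factory=deque)

    def consume(self) -> Any:
        return self.queue.popleft() if self.queue else self.default

def propagate(pin: InputPin, value: Any, is_static: bool) -> None:
    if is_static:
        pin.default = value      # persists: reused by all later executions
    else:
        pin.queue.append(value)  # queued: consumed exactly once

pin = InputPin()
propagate(pin, "token-abc", is_static=True)
print(pin.consume(), pin.consume())  # token-abc token-abc
```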
- Add `ajv` dependency to check values against json schema
- Add `errors` and `setErrors` to `CustomNodeData`
- Add `validateNodes` run before executing agent
- Add `*` on labels for required fields
- Add `setNestedProperty` and `removeEmptyStringsAndNulls` utility function
- Fix type signatures of `sendWebSocketMessage(..)`, `onWebSocketMessage(..)`, `runGraph(..)` in `autogpt-server-api/client`
- Add `WebsocketMessageTypeMap`
- Fix type signature of `updateNodesWithExecutionData` in `FlowEditor`
### Background
When multiple executors execute the same node within the same graph execution, two node executions can read the same input queue and consume the same value, so data that is supposed to be consumed once is consumed by two executions. The lack of lock & concurrency support for parallel execution within a single graph causes this issue.
Node concurrency also introduces poor UX in the current frontend implementation: when two nodes execute in parallel, the UI does not display the parallel execution updates but instead shows updates that override each other. Until execution observability is improved in the builder UI, this capability will be limited.
### Changes 🏗️
The scope of this change is to solve this issue by:
* Decouple Graph execution & Node execution, each has its own configured process pool.
* Make sure there is only 1 execution per node (we still allow parallel executions on different nodes) in a graph.
* Fixed concurrency issue by adding distributed lock API on agent_server.
* Few cleanups:
- Add more logging with geid & neid prefix on graph/node executions
- Moved execution status update to agent-server for a single source of status update (required by conn-manager/web-socket)
- Configured node parallelism to 10 & graph parallelism to 10 by default, so in the very rare worst-case, there can be 100 node executions.
- Re-use server resource for each integration test run
- Set `node.data.connections` based on `graph.links` in `loadGraph(..)`
- Use reactflow's built-in `useNodesState`, `useEdgesState` to replace some boilerplate functions that do exactly the same
Resolves #7392
* refactor(MathsBlock): Simplify output to return numeric result directly
- Remove MathsResult class and explanation field
- Update Output schema to use float type
- Simplify run method to yield numeric result only
- Adjust error handling to return inf or nan for errors
- Update test cases to reflect new output structure
* run format
* refactor(CounterBlock): Simplify output to return count as integer
- Remove CounterResult class
- Update Output schema to use int type directly
- Simplify run method to yield count without explanation
- Modify error handling to return -1 for any errors
- Update test case to reflect new output structure
* ci(server): add sqlite processing
* ci(server): try setting DATABASE_URL based on db platform
* fix(server): swap default back to sqlite
* ci(server): go back to database url
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* replace SQLite with Postgres
* dockerfiles and optional docker compose set up
* Update rnd/autogpt_builder/Dockerfile
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* address feedback
* Update .dockerignore
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Remove example files folder
* remove backend and frontend from docker compose
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat: Add YouTubeTranscriber block for extracting transcripts from YouTube videos
* feat: Add youtube-transcript-api dependency to pyproject.toml
* feat: Add SchemaField and test_mock to YouTube Transcriber.
* feat: update lock
* fix(server): the agbenchmark was out of date
* fix(server): linting
* fix(server): drop mock
* fix(server): poetry locked in
* fix(server): test had ... at the end?
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Fixes #7508
- Amend `app/configurator.py:check_model(..)` to check multiple models at once and save duplicate API calls
- Amend `MultiProvider.get_available_providers()` to verify availability by fetching models and handle failure
feat: Add support for new Groq models
The commit adds support for new Groq models, including LLAMA3_1_405B, LLAMA3_1_70B, and LLAMA3_1_8B. These models are part of the preview release and offer enhanced reasoning and versatility capabilities.
* feat(blocks): Add MathsBlock for performing mathematical operations
The commit adds a new block called MathsBlock to perform various mathematical operations such as addition, subtraction, multiplication, division, and exponentiation. The block takes input parameters for the operation type, two numbers, and an option to round the result. It returns the result of the calculation along with an explanation of the performed operation.
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
* feat: Add RSSReaderBlock for reading RSS feeds
The commit adds a new `RSSReaderBlock` class in the `rss-reader-block.py` file. This block allows users to read RSS feeds by providing the URL of the feed, start datetime, polling rate, and a flag to run the block continuously. The block fetches the feed using the `feedparser` library and returns the title, link, description, publication date, author, and categories of each RSS item.
This commit also includes the addition of the `feedparser` dependency in the `pyproject.toml` file.
* fix(server): update lock file
* updated poetry lock
* fixed rss reader testing
* Updated error message in test to include check info
* Set starttime as 1 day ago
* Changed start time to time period
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
- Handles:
- Add `NodeHandle` to draw input and output handles
- Position handles relatively
- Make entire handle label clickable/connectable
- Add input/output types below labels
- Change color on hover and when connected
- "Connected" no longer shows up when connected
- Edges:
- Draw edge above node when connecting to the same node
- Add custom `ConnectionLine`; drawn when making a connection
- Add `CustomEdge`; drawn for existing connections
- Add arrow to the edge end
- Colorize depending on type
- Input field modal:
- Select all text when opened
- Disable node dragging
- CSS:
- Remove not needed styling
- Use tailwind classes instead of css for some components
- Minor style changes
- Add shadcn switch
- Change bottom node buttons (for properties and advanced) to switches
- Format code
- fix(builder/monitor): Export `Graph` rather than `GraphMeta`
- Fixes #7557
- refactor(builder): Split up `lib/autogpt_server_api` into multi-file module
- Resolves #7555
- Rename `lib/autogpt_server_api` to `lib/autogpt-server-api`
- Split up `lib/autogpt-server-api` into `/client`, `/types`
- Move `ObjectSchema` from `lib/types` to `lib/autogpt-server-api/types`
- Make definition of `Node['metadata']['position']` independent of `reactflow.XYPosition`
- fix(builder/monitor): Strip secrets from graph on export
- Resolves #7492
- Add `safeCopyGraph` function in `lib/autogpt-server-api/utils`
- Use `safeCopyGraph` to strip secrets from graph on export in `/monitor` > `FlowInfo`
### Background
Input from the input pin is consumed only once, while the default input can always be reused. So when an input pin overrides the default input, the value is used only once and the following runs will always fall back to the default input. This can mislead the user.
Expected behaviour: the node should NOT RUN, making connected pins use only their connection(s) as sources of data.
### Changes 🏗️
* Make the pin connection the mandatory source of input, with no fallback to the default value.
* Fix the type flakiness on block input & output. Unify the typing for `BlockInput` & `BlockOutput` using the right aliases to avoid wrong typing.
* Add a comment on the aliases.
* Add an automated test for the new behaviour.
- Let `GET /graphs` return `GraphMeta[]` instead of `string[]` (list of IDs)
- Rename `AutoGPTServerAPI` method `listGraphIDs` -> `listGraphs` and adjust return type
- Replace all usages of `Graph` with `GraphMeta` in `/monitor`
- Delete `data.graph:get_graph_ids()`
This commit updates the `CreateMediumPostBlock` class in `create_medium_post.py` to use the secret value for the `author_id` parameter. Previously, it was using the plain value, which caused the value to be sent incorrectly to the API.
* feat: Add CreateMediumPostBlock to create Medium posts
* feat: Add medium_api_key to Secrets class in settings.py
* feat: Update medium post block to work with latest system.
* feat: Add medium_author_id field to Secrets class in settings.py
* run isort
* run black
Builder:
* Add download button to agent info view
- Add download button to `FlowInfo`
- Add `exportAsJSONFile(..)` function to lib/utils.ts
* Add Create button + menu to /monitor page
- Add "Create | v" split button to Agent list
- Add list of templates to Create menu
- Add "Import from file" option + dialog
- Create `AgentImportForm` component
- Install `Form`, `Label`, `Switch` components from shad/cn UI
- Install `Dialog` component from shad/cn
* Support saving/editing Templates
- Add `templateID` query parameter to `/build`
- Use `templateID` query parameter in `AgentImportForm` redirect if imported as template
- Make `FlowEditor` suitable for saving/editing templates
- Add `template` (boolean) parameter to `FlowEditor` component
- Add conditions to all `createGraph` or `updateGraph` calls, to use `createTemplate`/`updateTemplate` if applicable
- Add "Save as Template" button, visible if the graph is new (unsaved)
- Hide "Save & Run Agent" button when editing a template
* Add template endpoints to `AutoGPTServerAPI` client
- Add `listTemplates()`
- Add `getTemplate(id, version?)`
- Add `getTemplateAllVersions(id)`
- Add `createTemplate(templateCreateBody)`
- Add `updateTemplate(id, template)`
* fix inner alignment of `<Input type="file">` elements
Server:
* fix(server): Fix return of `create_graph` for templates
- Add prefix `/api` to `APIRouter` in server.py
- Update API client in Builder
- Update default `AGPT_SERVER_URL` in .env.template
- Update default `baseUrl` in `AutoGPTServerAPI` constructor
* Add minimal implementation of `LlamafileProvider`, a new `ChatModelProvider` for llamafiles. It extends `BaseOpenAIProvider` and only overrides methods that are necessary to get the system to work at a basic level.
* Add support for `mistral-7b-instruct-v0.2`. This is the only model currently supported by `LlamafileProvider` because this is the only model I tested anything with.
* Add instructions to use AutoGPT with llamafile in the docs at `autogpt/setup/index.md`
* Add helper script to get it running quickly at `scripts/llamafile/serve.py`
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
### Background
Add formatter & linter command.
Tools: ruff --> isort --> black --> pyright.
### Changes 🏗️
Introduced:
* `poetry run format`
* `poetry run lint`
`poetry run lint` will be executed on CI.
- Renamed `Schema` to `BlockSchema` and moved to `lib/types.ts`
- Add `SchemaTooltip` component that renders markdown tooltip for node fields
- Add `SecretField` function (which uses `BlockSecret` as value) that replaces `BlockFieldSecret` functionality for models
- Rename `get` to `get_secret_value` to make name clearer and inline with pydantic `Secret` types
- Add shadcn tooltip
- Add `react-markdown` dependency
- Add `SchemaField` that works like Pydantic `Field` but allows adding extra JSON-schema values. This PR adds a `placeholder` entry, but it could be extended with other data.
- Render `placeholder` inside input fields if available.
- Restyle placeholders so they are visually distinct from user-entered values
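A sketch of how such a helper can attach extra JSON-schema entries in pydantic v2; the exact `SchemaField` signature here is an assumption:

```python
# Hypothetical SchemaField: a Field wrapper that adds extra JSON-schema
# entries such as "placeholder" for the frontend to render.
from pydantic import BaseModel, Field

def SchemaField(*, placeholder: str | None = None, **kwargs):
    extra = {"placeholder": placeholder} if placeholder else {}
    return Field(json_schema_extra=extra, **kwargs)

class Input(BaseModel):
    prompt: str = SchemaField(placeholder="Ask me anything...",
                              description="User prompt")

# "placeholder" ends up in the JSON schema served to the frontend:
print(Input.model_json_schema()["properties"]["prompt"]["placeholder"])
```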
### Background
The main scope of this change is enhancing the system capability (by fixing bugs and correcting execution behaviour) to allow creating a graph with a loop, enabling the block auto-generation agent use case.
### Changes 🏗️
* Main changes: Add block_autogen.py (block auto-generation agent graph example).
* Refactor of the test boilerplate: introduced `util/test` for initiating a server and waiting for graph executions.
* Remove unnecessary db lookups and duplicated code used for sending execution updates in the agent executor.
* Remove redundant code in the test and CLI code.
* Move the block test execution helper into the main code (so the block installer block can use it).
* Eliminate the need to explicitly add a module to the `AVAILABLE_BLOCKS` list; any block class under the `block` folder is auto-discovered.
* A few patches on the existing blocks.
1. Add graph versioning functionality:
- Add `version`, `isActive` fields in the `AgentGraph` model
- Add `agentGraphVersion` field in related models
- Amend & add API endpoints for graph versioning (see below)
- Amend & add data layer functions (`autogpt_server.data`) to support new operations & data schema
2. Add graph template functionality:
- Add `isTemplate` fields in the `AgentGraph` model
- Add `GraphMeta` model for template/graph metadata
- Add API endpoints for template management (see below)
- Amend & add data layer functions (`autogpt_server.data`) to support new operations & data schema
3. Enhance graph creation:
- Amended `create_graph` route to handle template-based graph creation
4. Code refactoring:
- Improved import statements
- Enhanced error handling in graph creation
5. Minor improvements:
- Add validator to auto-assign `Graph.id` if not set
## API Changes
New endpoints:
- `GET /templates`: Retrieve all templates (metadata only)
- `POST /templates`: Create a new template
- `PUT /graphs/{graph_id}`: Create a new version of a graph
- `GET /templates/{graph_id}`: Get a specific template
- `PUT /templates/{graph_id}`: Create a new version of a graph template
- `GET /graphs/{graph_id}/versions`: Get all versions of a graph
- `GET /templates/{graph_id}/versions`: Get all versions of a graph template
- `GET /graphs/{graph_id}/versions/{version}`: Get a specific graph version
- `PUT /graphs/{graph_id}/versions/active`: Set active graph version
Modified endpoints:
- `POST /graphs`: Now supports creating graphs directly from templates
- `GET /graphs/{graph_id}`: Add `version` query parameter
- `GET /graphs/{graph_id}/executions`: Add `graph_version` query parameters
## UI changes
- Improve `/build` / `FlowEditor` save mechanism
- Implement updating current agent instead of creating a new agent on every save
- Add check to only save a new version if local graph has been edited
- Add `deepEquals` function to lib/utils.ts
- Add version indicators and selector on `/monitor`
- Add shad/cn `DropdownMenu` component
- Update `AutoGPTServerAPI` client
- Update input/output types with added attributes (see above)
- Add parameter `version` to `getFlow`
- Add parameter `flowVersion?` to `listFlowRunIDs`
- Add endpoint `updateFlow(flowID, FlowUpdateable)`
- Add endpoint `createFlow(fromTemplateID, templateVersion)` (overload)
- Add endpoint `getFlowAllVersions(id)`
- Add endpoint `setFlowActiveVersion(flowID, version)`
This commit adds support for the following models:
```python
# OpenAI Models
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# Groq models
LLAMA3_8B = "llama3-8b-8192"
LLAMA3_70B = "llama3-70b-8192"
MIXTRAL_8X7B = "mixtral-8x7b-32768"
GEMMA_7B = "gemma-7b-it"
GEMMA2_9B = "gemma2-9b-it"
```
Every model has been tested with a single LLM block and is confirmed to be working in that setup.
- Add `autogpt` and `forge` dependency to the `autogpt_server`
- Add `AutoGPTAgentBlock` that initializes and runs a single agent loop on execution
- Add `BlockAgent` that inherits from `autogpt` `Agent` and is a thin extension on the agent that allows to disable components
- Add `OutputComponent` that adds `output` command for the agent
- Improve responsive grid layout
- Remove `container` class from `<main>` to utilize full screen width
- Improve detail views & add view for run details
- Make flow run list entries selectable
- Create `FlowInfo` and `FlowRunInfo` components
- Improve layout of `FlowRunsStats`
- Improve ScrollableLegend spacing & styling
- Hide scroll bar of scrollable legend
- Center legend items if there is space left
- Round icons
- Vertically align icons with labels
- FIX: Add condition to not display legend items for series with `legendType="none"`
- Add periodic 5s refresh of non-terminal flow run statuses
- Split off `refreshFlowRuns(flowID)` from `fetchFlowsAndRuns()`
- Add effect to run `refreshFlowRuns` every 5 seconds
- Improve and expand FlowRun info
- Add `FlowRun.totalRunTime`: sum of the individual execution durations of all nodes
- Add `FlowRun.endTime`
- Use `NodeExecutionResult.add_time` instead of `start_time` as `FlowRun.startTime`
- Sort Flows by last executed
- Add icons to navbar items & hide unused items Backtrack and Explore
- Change UI mentions of "(Agent) Flow" to "Agent"
### Background
Credentials for blocks could only be defined through the block input. The scope of this change is providing system-wide credentials that become the default values for these inputs.
### Changes 🏗️
* Add system-wide credential support for agent blocks via `BlockFieldSecret`.
* Update the llmcall & reddit blocks to adopt `BlockFieldSecret`.
* reverts dark theme for now
* change "Show/Hide nodes" button to be "Icon"
* swap over to light mode + fix sizing
* fix color for agent name + description text
* Change navbar to white
* Added darkmode tag for the navbar
* Added dark mode text color
* Changed to tailwind classes
---------
Co-authored-by: Bentlybro <tomnoon9@gmail.com>
* reverts dark theme for now
* change "Show/Hide nodes" button to be "Icon"
* swap over to light mode + fix sizing
* fix color for agent name + description text
* Change navbar to white
* Added darkmode tag for the navbar
* Added dark mode text color
---------
Co-authored-by: Bentlybro <tomnoon9@gmail.com>
* reverts dark theme for now
* change "Show/Hide nodes" button to be "Icon"
* swap over to light mode + fix sizing
* fix color for agent name + description text
* update icon
Sample test input and output on the block can serve as documentation and auto-generated unit-testing code for the agent block.
What's within the scope of this change:
Adding the fields for block tests (input, output, mocks) and their execution.
What's still outside the scope:
Handling of mocks and stubs for blocks that use sensitive credentials, network calls, or 3rd-party connections.
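Illustratively, the test fields and their execution could look like this; the class shape and names are assumed, not the actual interface:

```python
# Toy block with declarative test data and a generic test runner.
class ParrotBlock:
    test_input = {"text": "hello"}
    test_output = [("result", "hello")]
    test_mock: dict = {}  # e.g. patched network calls would go here

    def run(self, input_data: dict):
        yield "result", input_data["text"]

def run_block_test(block) -> None:
    produced = list(block.run(block.test_input))
    assert produced == block.test_output, f"{produced} != {block.test_output}"

run_block_test(ParrotBlock())  # serves as an auto-generated unit test
```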
* Refactor on the link structure and API
* Refactor on the link structure and API
* Cleanup IDS
* Remove run_id
* Update block interface
* Added websockets dependency
* Adding routes
* Adding in websocket code
* Added cli to test the websocket
* Added an outline of the message formats I plan on using
* Added websocket message types
* Updated poetry lock
* Adding subscription logic
* Updating subscription mechanisms
* update cli
* Send updates to server
* Get single execution data
* Fix type hints and renamed function
* add callback function and type hints
* fix type hints
* Updated manager to use property
* Added in websocket updates
* Added connection manager tests
* Added tests for ws_api
* trying to work around process issues
* test formatting
* Added a create and execute command for the cli
* Updated send format
* websockets command working
* cli update
* Added model.py
* feat: Update server.py and manager.py
- Initialize blocks in AgentServer lifespan context
- Remove unnecessary await in AgentServer get_graph_blocks
- Fix type hinting in manager.py
- Validate input data in validate_exec function
* fix tests
* feat: Add autogpt_server.blocks.sample and autogpt_server.blocks.text modules
This commit adds the `autogpt_server.blocks.sample` and `autogpt_server.blocks.text` modules to the project. These modules contain blocks that are used in the execution of the Autogpt server. The `ParrotBlock` and `PrintingBlock` classes are imported from `autogpt_server.blocks.sample`, while the `TextFormatterBlock` class is imported from `autogpt_server.blocks.text`. This addition enhances the functionality of the server by providing additional blocks for text processing and sample operations.
* fixed circular import issue
* Update readme
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(autogpt_builder): Add `AutoGPTServerAPI` client
* migrate API calls in Flow.tsx to new API client
* feat(autogpt_server): Add `/graphs/{graph_id}/executions` endpoint
In `data/execution.py`:
- Add `list_executions` function
- Rename `get_executions` to `get_execution_results`
In `server/server.py`:
- Add route
- Add `AgentServer.list_graph_runs`
- Rename `AgentServer.get_executions` to `get_run_execution_results`
* feat(autogpt_builder): Add `listFlowRunIDs` endpoint to `AutoGPTServerAPI` client
* Move `Schema` to `types.ts` and rename to `ObjectSchema`
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.
* Fix reddit block
* Fix serialization
* Eliminate deprecated class property
* Remove RedditCredentialsBlock
* Cache jsonschema computation, add dictionary construction
* Add dict_split and list_split to output, add more blocks
* Add objc_split for completeness, in both input and output
* Update reddit block
* Add reddit test (untested)
* Resolved json issue on pydantic
* Add creds check on client
* Add dict <--> pydantic object flexibility
* Fix error retry
* Skip reddit test
* Code cleanup
* Change prompt
* Make this work
* Fix linting
* Hide input_links and output_links from Node
* Add docs
* updating UI to handle deeply nested data structures for reddit usecase
* changing expected key in reddit post to comment
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.
* Fix reddit block
* Fix serialization
* Eliminate deprecated class property
* Remove RedditCredentialsBlock
* Cache jsonschema computation, add dictionary construction
* Add dict_split and list_split to output, add more blocks
* Add objc_split for completeness, in both input and output
* Update reddit block
* Add reddit test (untested)
* Resolved json issue on pydantic
* Add creds check on client
* Add dict <--> pydantic object flexibility
* Fix error retry
* Skip reddit test
* Code cleanup
* Change prompt
* Make this work
* Fix linting
* Hide input_links and output_links from Node
* Add docs
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
- Add `google-api-python-client-stubs` dev dependency
- Add version specification to `google-api-python-client` dependency
- Fix type error (by ignoring it) in forge/components/web/search.py
Update Pydantic dependency of `autogpt`, `forge` and `benchmark` to `^2.7`
[Pydantic Migration Guide](https://docs.pydantic.dev/2.7/migration/)
- Migrate usages of now-deprecated functions to their replacements
- Update `Field` definitions
- Ellipsis `...` for required fields is deprecated
- `Field` no longer supports extra `kwargs`, replace use of this feature with field metadata
- Replace `Config` class for specifying model configuration with `model_config = ConfigDict(..)`
- Removed `ModelContainer` in `BaseAgent`, component configuration dict is now directly serialized using Pydantic v2 helper functions
- Forked `agent-protocol` and updated `packages/client/python` for Pydantic v2 support: https://github.com/Significant-Gravitas/agent-protocol
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Implement message based history in `ActionHistoryComponent`
- Make non-summarized message count configurable (`ActionHistoryComponent.full_message_count`)
- Run `ActionHistoryComponent` after `SystemComponent` so that history messages are last in the prompt
- Omit final instruction message if prompt already contains assistant messages
- Filter `raw_message` from `ActionProposal.schema()`
---------
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
* Create optional `build` dependency group
* Move `cx-freeze` dependency to `build` dependency group
To include the `build` group when installing dependencies, run `poetry install --with=build`.
Fixes #7297 (`cx-freeze` dependency install fails after #7271)
On AgentServer, to create a block like StringFormatterBlock or LlmCallBlock, we need some way to dynamically link input pins and aggregate them into a single list input. This gives the user a better experience constructing an input and linking it from the outputs of other nodes. The scope of this change is adding support for that in the least intrusive way.
### Proposal
To differentiate the input list name from its singular entries, we use the `_$_<index>` suffix. For example:
For the input `items: list[int]`, you can set a pin `items` with a value like `[1, 2, 3, 4]`. But you can also add input pins like `items_$_0` or `items_$_4` with values `1` or `2`, which will be appended to the `items` input in alphabetical order.
The execution engine guarantees that execution waits until all the input pin values are produced, so an input pin with a list input will produce a fixed-size list. A toy sketch of the aggregation rule follows.
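The names in this sketch are illustrative:

```python
# Singular items_$_<index> pins are appended to the items list in
# alphabetical pin-name order once every linked pin has a value.
def aggregate_list_pins(name: str, pin_values: dict[str, int]) -> list[int]:
    singular = sorted(k for k in pin_values if k.startswith(f"{name}_$_"))
    return [pin_values[k] for k in singular]

print(aggregate_list_pins("items", {"items_$_0": 1, "items_$_1": 2, "items_$_2": 4}))
# [1, 2, 4]
```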
* Getting started with nextjs
* fix linting
* remove gitignore for package.json
* pulling in reactflow components
* updating css
* use environment variables
* clean up css / ui a lil
* Fixed nodes/run button animation
so they are always visible
---------
Co-authored-by: Bentlybro <tomnoon9@gmail.com>
### Background
The current implementation of AgentServer doesn't allow a single pin to be connected to multiple nodes. This is problematic when a single output node needs to be propagated into many nodes, or when multiple nodes feed data into a single pin (first come, first served).
This infra change is also part of the preparation for changing the `block` interface to return a stream of outputs instead of a single output. Treating blocks as streams requires this capability.
### Changes 🏗️
* Update block run interface from returning `(output_name, output_data)` to `Generator[(output_name, output_data)]`
* Removed the `agent` term in the API, replacing it with `graph` for consistency.
* Reintroduced `AgentNodeExecutionInputOutput`. `AgentNodeExecution` input & output are now lists of `AgentNodeExecutionInputOutput` describing the input & output data of the execution, giving an execution a 1-to-many relation to its input/output data.
* Propagated the relation and block interface changes into the execution engine.
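A minimal sketch of the new interface; the block body is illustrative:

```python
# run() now yields (output_name, output_data) pairs as a stream.
from typing import Any, Generator

BlockOutput = Generator[tuple[str, Any], None, None]

class CountdownBlock:
    def run(self, input_data: dict) -> BlockOutput:
        for i in range(input_data["start"], 0, -1):
            yield "tick", i   # each yielded pair can feed multiple sinks
        yield "done", True

for name, data in CountdownBlock().run({"start": 3}):
    print(name, data)  # tick 3, tick 2, tick 1, done True
```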
### Background
Agent execution should be able to be triggered in a recurring manner.
This PR introduces an `ExecutionScheduler` service, a process responsible for managing execution schedules and triggering executions based on a predefined cron expression.
### Changes 🏗️
* Added `scheduler.py` / `ExecutionScheduler` implementation.
* Added scheduler test.
* Added `AgentExecutionSchedule` table and its logical model & prisma queries.
* Moved `add_execution` from API server to `execution_manager`
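As a rough sketch of the cron-driven trigger idea, using the `croniter` package to compute the next run (the actual `ExecutionScheduler` implementation differs):

```python
# Minimal cron loop: sleep until the next cron tick, then trigger.
import time
from datetime import datetime
from croniter import croniter

def run_schedule(cron_expr: str, graph_id: str, trigger) -> None:
    it = croniter(cron_expr, datetime.now())
    while True:
        next_run = it.get_next(datetime)
        time.sleep(max(0.0, (next_run - datetime.now()).total_seconds()))
        trigger(graph_id)  # e.g. enqueue a graph execution

# run_schedule("*/5 * * * *", "graph-123", lambda gid: print("execute", gid))
```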
### Background
This PR adds support on IPC on autogpt_server.
To make this happen, there are a couple of refactoring efforts being made (will be described in the `Changes` section).
Currently, there are three independent processes:
```
AgentServer ----> ExecutionManager
     |
     +--> ExecutionScheduler
```
### Changes 🏗️
* Added Pyro5 for IPC support.
* Introduced `AppService`: a class to construct an independent process that can expose a method to other running processes (this is analogous to a microservice).
* Introduced `AppProcess`: used by `AppService` a class for creating a child process that can be executed in the background.
* Adapted the existing codebase to use `AppService`.
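A minimal Pyro5 sketch of the `AppService` idea; the service and method names are illustrative, not the actual AutoGPT classes:

```python
# One process exposes a method; other processes call it over IPC.
import Pyro5.api

@Pyro5.api.expose
class ExecutionManager:
    def add_execution(self, graph_id: str) -> str:
        return f"queued {graph_id}"

daemon = Pyro5.api.Daemon()
uri = daemon.register(ExecutionManager())  # share this URI with callers
print("service uri:", uri)
# In another process:  Pyro5.api.Proxy(uri).add_execution("graph-123")
daemon.requestLoop()
```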
Remove many env vars and use component-level configuration that could be loaded from file instead.
### Changed
- `BaseAgent` provides `serialize_configs` and `deserialize_configs` that can save and load all component configuration as json `str`. Deserialized components/values overwrite existing values, so not all values need to be present in the serialized config.
- Decoupled `forge/content_processing/text.py` from `Config`
- Kept `execute_local_commands` in `Config` because it's needed to know if OS info should be included in the prompt
- Updated docs to reflect changes
- Renamed `Config` to `AppConfig`
### Added
- Added `ConfigurableComponent` class for components and following configs:
- `ActionHistoryConfiguration`
- `CodeExecutorConfiguration`
- `FileManagerConfiguration` - the file manager now allows multiple agents to use the same workspace
- `GitOperationsConfiguration`
- `ImageGeneratorConfiguration`
- `WebSearchConfiguration`
- `WebSeleniumConfiguration`
- `BaseConfig` in `forge` and moved `Config` (now inherits from `BaseConfig`) back to `autogpt`
- Required `config_class` attribute for the `ConfigurableComponent` class that should be set to configuration class for a component
- Added `--component-config-file` CLI option, `COMPONENT_CONFIG_FILE` env var, and a corresponding field in `Config`. This option allows loading configuration from a specific file; the CLI option takes precedence over the env var.
- Added comments to config models
### Removed
- Unused `change_agent_id` method from `FileManagerComponent`
- Unused `allow_downloads` from `Config` and CLI options (it should be in web component config if needed)
- CLI option `--browser-name` (the option is inside `WebSeleniumConfiguration`)
- Unused `workspace_directory` from CLI options
- No longer needed variables from `Config` and docs
- Unused fields from `Config`: `image_size`, `audio_to_text_provider`, `huggingface_audio_to_text_model`
- Removed `files` and `workspace` class attributes from `FileManagerComponent`
When an agent is resumed from a mid-cycle state (having made a proposal but not executed it yet), we need to use the previously determined `current_episode.action` proposal instead of calling `agent.propose_action()` again.
* Rename `assert_config_has_openai_api_key` to `assert_config_has_required_llm_api_keys`
* Make OpenAI credential check conditional (only if an OpenAI model is selected in the config)
* Implement checks for Groq and Anthropic credentials
* Use API calls for Groq and OpenAI credential checks to make sure the keys are valid
Revert some changes to fix forge agent and enable components support.
- Rename forge `Agent` to `ProtocolAgent`
- Bring back and update `forge/app.py` and `forge/agent/forge_agent.py`
- `ForgeAgent` inherits from `BaseAgent`, supports component execution and runs the same pipelines as autogpt Agent
- Update forge version from 0.1.0 to 0.2.0
- Update code comments
### Background
This PR implements the main logic of the block execution engine for AutoGPT-Server.
An integration test is added to test the behavior.
*What you can do now with this PR*:
You can manually create a graph using the existing blocks as nodes (or write your own), then execute the graph with an input.
*What you can't do yet*:
Listen to the graph execution result/update (you can follow the `AgentNodeExecution` table result, though).
### Changes 🏗️
* Split `data.py` (model file) into three modules:
* `execution`: a model for node execution.
* `graph`: a model for graph structure.
* `block`: a model for agent block/component.
* Implemented executor main logic
* Simplify db structure:
* Remove `AgentBlockInputOutput` in favor of `inputSchema` & `outputSchema` using serialized json/dict structure.
* Remove `id` on `AgentBlock` in favor of using name (class name of the block) as its identifier.
* Added a `constantInput` column on `AgentNode` for hard-coded input/block configuration, hence removing `executionStateData` on `AgentNodeExecution`.
* Rename `AgentNodeLink` input/output to source/sink to avoid confusion.
* Change multithreading to multiprocessing, to allow the use of multiple `prisma` asynchronous clients.
Frontend broke in #7171 because of changes to the request models in `forge.agent_protocol`. This PR unbreaks it.
Changes:
- Make `input` required on `TaskRequestBody` and `StepRequestBody`
- Amend `toJson()` on `TaskRequestBody` and `StepRequestBody` to omit attributes with `null` value
### Background
Introduced initial database schema for AutoGPT server.
It currently consists of 7 tables:
* `AgentGraph`: This model describes the Agent Graph/Flow (Multi Agent System).
* `AgentNode`: This model describes a single node in the Agent Graph/Flow (Multi Agent System).
* `AgentNodeLink`: This model describes the link between two AgentNodes.
* `AgentNodeExecution`: This model describes the execution of an AgentNode.
* `AgentBlock`: This model describes a component that will be executed by the AgentNode (all the details required, like name, code, input/output).
* `AgentBlockInputOutput`: This model describes the output (produced event) or input (consumed event) of an AgentBlock.
* `FileDefinition`: This model describes a file that can be used as input/output of an AgentNodeExecution.
### Changes 🏗️
* Add Prisma
* Add sqlite3
* Initialize database.
* Update instructions to set up OpenAI / GPT-4 access
* Add instructions to set up Anthropic access
* Add instructions to set up Groq access
* Remove GPT-specific `--gpt3only`, `--gpt4only` CLI flags and related logic
* Remove duplicate config instructions from docker setup page, replace it by a link to the standard setup instructions
### Background
###### Project Outline
Currently, the project mainly consists of these components:
*agent_api*
A component that will expose API endpoints for the creation & execution of agents.
This component will make connections to the database to persist and read the agents.
It will also trigger the agent execution by pushing its execution request to the ExecutionQueue.
*agent_executor*
A component that will execute the agents.
This component will be a pool of processes/threads that will consume the ExecutionQueue and execute the agent accordingly.
The result and progress of its execution will be persisted in the database.
###### How to test
Execute `poetry run app`.
Access the swagger page at `http://localhost:8000/docs`; there is one API to trigger an execution of a dummy slow task. Fire the API a couple of times and watch the `agent_executor` execute multiple slow tasks concurrently via the pool of Python processes.
The pool size is currently set to `5` (hardcoded in app.py, the code entry point).
### Changes 🏗️
* Initialize FastAPI for the AutoGPT server project.
* Reduced number of queues to 1 and abstracted into `ExecutionQueue` class.
* Reduced the number of main components into two `api` and `executor`.
- Add `_BaseOpenAIProvider`, `BaseOpenAIChatProvider`, and `BaseOpenAIEmbeddingProvider`, which implement the shared functionality of OpenAI-like providers, e.g. `GroqProvider` and `OpenAIProvider`
- (Re)move as much code as possible from `GroqProvider` and `OpenAIProvider` by rebasing them on `BaseOpenAI(Chat|Embedding)Provider`
Also:
- Rename `get_available_models()` to `get_available_chat_models()` on `BaseChatModelProvider`
- Add `get_available_models()` to `BaseModelProvider`
- Add `get_available_embedding_models()` to `BaseEmbeddingModelProvider`
- Move common `fix_failed_parse_tries` config attribute into base `ModelProviderConfiguration`
* Add default AutoGPT profile to ai_profile.py & disable profile generator
* Disable custom AI profile generation in agent_protocol_server.py
- Replace `generate_agent_for_task` by `create_agent`
- Make `ai_profile` parameter on `create_agent` optional (use default `AIProfile` if not passed)
* Generalize example call in profile_generator.py
Currently it's specified in an OpenAI-specific format, which might adversely affect performance with other providers.
* Remove dead `AIProfile.api_budget` attribute
* Remove `agent.ai_profile` and `agent.directives` attributes, and replace usages with `agent.state.*`
This prevents potential state inconsistency between `agent` and `agent.state` when other values are assigned to `agent.ai_profile` and `agent.directives`
- **FIX ALL LINT/TYPE ERRORS IN AUTOGPT, FORGE, AND BENCHMARK**
### Linting
- Clean up linter configs for `autogpt`, `forge`, and `benchmark`
- Add type checking with Pyright
- Create unified pre-commit config
- Create unified linting and type checking CI workflow
### Testing
- Synchronize CI test setups for `autogpt`, `forge`, and `benchmark`
- Add missing pytest-cov to benchmark dependencies
- Mark GCS tests as slow to speed up pre-commit test runs
- Repair `forge` test suite
- Add `AgentDB.close()` method for test DB teardown in db_test.py
- Use actual temporary dir instead of forge/test_workspace/
- Move left-behind dependencies for moved `forge` code from `autogpt` to `forge`
### Notable type changes
- Replace uses of `ChatModelProvider` by `MultiProvider`
- Removed unnecessary exports from various `__init__.py` files
- Simplify `FileStorage.open_file` signature by removing `IOBase` from return type union
- Implement `S3BinaryIOWrapper(BinaryIO)` type interposer for `S3FileStorage`
- Expand overloads of `GCSFileStorage.open_file` for improved typing of read and write modes
Had to silence type checking for the extra overloads, because (I think) Pyright is reporting a false-positive:
https://github.com/microsoft/pyright/issues/8007
- Change `count_tokens`, `get_tokenizer`, `count_message_tokens` methods on `ModelProvider`s from class methods to instance methods
- Move `CompletionModelFunction.schema` method -> helper function `format_function_def_for_openai` in `forge.llm.providers.openai`
- Rename `ModelProvider` -> `BaseModelProvider`
- Rename `ChatModelProvider` -> `BaseChatModelProvider`
- Add type `ChatModelProvider` which is a union of all subclasses of `BaseChatModelProvider`
### Removed rather than fixed
- Remove deprecated and broken autogpt/agbenchmark_config/benchmarks.py
- Various base classes and properties on base classes in `forge.llm.providers.schema` and `forge.models.providers`
### Fixes for other issues that came to light
- Clean up `forge.agent_protocol.api_router`, `forge.agent_protocol.database`, and `forge.agent.agent`
- Add fallback behavior to `ImageGeneratorComponent`
- Remove test for deprecated failure behavior
- Fix `agbenchmark.challenges.builtin` challenge exclusion mechanism on Windows
- Fix `_tool_calls_compat_extract_calls` in `forge.llm.providers.openai`
- Add support for `any` (= no type specified) in `JSONSchema.typescript_type`
* Add `FileStorage.mount()` method, which mounts (part of) the workspace to a local path
* Add `watchdog` library to watch file changes in mount
* Amend `CodeExecutorComponent`
* Amend `execute_python_file` to execute Python files in a workspace mount
* Amend `execute_python_code` to create temporary .py file in workspace instead of as a local file
* Add support for `Path` argument to `filename` parameter on `execute_python_file`
* Fix `test_execute_python_code` (by making it async)
- Move `autogpt/Dockerfile` to `Dockerfile.autogpt`
- Write new selective `.dockerignore` (in repo root) to keep build context clean
- Amend `autogpt/docker-compose.yml` and all `autogpt-docker-*.yml` workflows accordingly
- Include `forge/` in docker build context so it can be used as a path dependency
- Include `frontend/` in docker builds
- Moved `autogpt` and `forge` to project root
- Removed `autogpts` directory
- Moved and renamed submodule `autogpts/autogpt/tests/vcr_cassettes` to `autogpt/tests/vcr_cassettes`
- When using CLI agents will be created in `agents` directory (instead of `autogpts`)
- Renamed relevant docs, code and config references from `autogpts/[forge|autogpt]` to `[forge|autogpt]` and from `*../../*` to `*../*`
- Updated `CODEOWNERS`, GitHub Actions and Docker `*.yml` configs
- Updated symbolic links in `docs`
Remove unused `forge` code and improve structure of `forge`.
* Put all Agent Protocol stuff together in `forge.agent_protocol`
* ... including `forge.agent_protocol.database` (was `forge.db`)
* Remove duplicate/unused parts from `forge`
* `forge.actions`, containing old commands; replaced by `forge.components` from `autogpt`
* `forge/agent.py` (the old one, `ForgeAgent`)
* `forge/app.py`, which was used to serve and run the `ForgeAgent`
* `forge/db.py` (`ForgeDatabase`), which was used for `ForgeAgent`
* `forge/llm.py`, which has been replaced by new `forge.llm` module which was ported from `autogpt.core.resource.model_providers`
* `forge.memory`, which is not in use and not being maintained
* `forge.sdk`, much of which was moved into other modules and the rest is deprecated
* `AccessDeniedError`: unused
* `forge_log.py`: replaced with `logging`
* `validate_yaml_file`: not needed
* `ai_settings_file` and associated loading logic and env var `AI_SETTINGS_FILE`: unused
* `prompt_settings_file` and associated loading logic and env var `PROMPT_SETTINGS_FILE`: default directives are now provided by the `SystemComponent`
* `request_user_double_check`, which was only used in `AIDirectives.load`
* `TypingConsoleHandler`: not used
Moved from `autogpt` to `forge`:
- `autogpt.config` -> `forge.config`
- `autogpt.processing` -> `forge.content_processing`
- `autogpt.file_storage` -> `forge.file_storage`
- `autogpt.logs` -> `forge.logging`
- `autogpt.speech` -> `forge.speech`
- `autogpt.agents.(base|components|protocols)` -> `forge.agent.*`
- `autogpt.command_decorator` -> `forge.command.decorator`
- `autogpt.models.(command|command_parameter)` -> `forge.command.(command|parameter)`
- `autogpt.(commands|components|features)` -> `forge.components`
- `autogpt.core.utils.json_utils` -> `forge.json.parsing`
- `autogpt.prompts.utils` -> `forge.llm.prompting.utils`
- `autogpt.core.prompting.(base|schema|utils)` -> `forge.llm.prompting.*`
- `autogpt.core.resource.model_providers` -> `forge.llm.providers`
- `autogpt.llm.providers.openai` + `autogpt.core.resource.model_providers.utils`
-> `forge.llm.providers.utils`
- `autogpt.models.action_history:Action*` -> `forge.models.action`
- `autogpt.core.configuration.schema` -> `forge.models.config`
- `autogpt.core.utils.json_schema` -> `forge.models.json_schema`
- `autogpt.core.resource.schema` -> `forge.models.providers`
- `autogpt.models.utils` -> `forge.models.utils`
- `forge.sdk.(errors|utils)` + `autogpt.utils.(exceptions|file_operations_utils|validators)`
-> `forge.utils.(exceptions|file_operations|url_validator)`
- `autogpt.utils.utils` -> `forge.utils.const` + `forge.utils.yaml_validator`
Moved within `forge`:
- `forge/prompts/*` -> `forge/llm/prompting/*`
The rest are mostly import updates, and some sporadic removals and necessary updates (for example to fix circular deps):
- Changed `CommandOutput = Any` to remove coupling with `ContextItem` (no longer needed)
- Removed unused `Singleton` class
- Reluctantly moved `speech` to forge due to coupling (TTS needs to be changed into a component)
- Moved `function_specs_from_commands` and `core/resource/model_providers` to `llm/providers` (resources were a `core` thing and are no longer relevant)
- Keep tests in `autogpt` to reduce changes in this PR
- Removed unused memory-related code from tests
- Removed duplicated classes: `FancyConsoleFormatter`, `BelowLevelFilter`
- `prompt_settings.yaml` is in both `autogpt` and `forge` because, for some reason, it doesn't work when placed in just one directory (needs to be taken care of)
- Removed `config` param from `clean_input`, it wasn't used and caused circular dependency
- Renamed `BaseAgentActionProposal` to `ActionProposal`
- Updated `pyproject.toml` in `forge` and `autogpt`
- Moved `Action*` models from `forge/components/action_history/model.py` to `forge/models/action.py` as those are relevant to the entire agent and not just `EventHistoryComponent` + to reduce coupling
- Renamed `DEFAULT_ASK_COMMAND` to `ASK_COMMAND` and `DEFAULT_FINISH_COMMAND` to `FINISH_COMMAND`
- Renamed `AutoGptFormatter` to `ForgeFormatter` and moved to `forge`
Includes changes from PR https://github.com/Significant-Gravitas/AutoGPT/pull/7148
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Persist the agent's `AgentContext` so that it works in rehydrated agent instances. This makes context usable in the `AgentProtocolServer`, where the agent instance is loaded and destroyed for every step.
- Make `AgentContext` a Pydantic model
- Add `context` parameter to `ContextComponent.__init__` so we can pass in an existing instance
- Add `context: AgentContext` to `AgentSettings` so it is persisted
- Add `type` attribute to `ContextItem` implementations as a discriminator
- Rename `ContextItem` base class to `BaseContextItem` and make new `ContextItem` type alias (union of the implementation types); a sketch of this pattern follows below
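The persistence and discriminator changes above can be sketched with Pydantic (v2 API shown); the concrete item types here are hypothetical, not the actual `forge` classes:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field


class FileContextItem(BaseModel):
    type: Literal["file"] = "file"
    path: str


class FolderContextItem(BaseModel):
    type: Literal["folder"] = "folder"
    path: str


# `type` acts as the discriminator when deserializing the union.
ContextItem = Annotated[
    Union[FileContextItem, FolderContextItem],
    Field(discriminator="type"),
]


class AgentContext(BaseModel):
    items: list[ContextItem] = []


# The context round-trips through JSON, so it survives agent rehydration.
ctx = AgentContext(items=[FileContextItem(path="notes.md")])
restored = AgentContext.model_validate_json(ctx.model_dump_json())
assert isinstance(restored.items[0], FileContextItem)
```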
[wiki page on Contributing]: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
[wiki page on Contributing]: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing
- type: checkboxes
attributes:
@@ -88,14 +88,16 @@ body:
- type: dropdown
attributes:
label: Do you use OpenAI GPT-3 or GPT-4?
label: What LLM Provider do you use?
description: >
If you are using AutoGPT with `--gpt3only`, your problems may be caused by
If you are using AutoGPT with `SMART_LLM=gpt-3.5-turbo`, your problems may be caused by
the [limitations](https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
options:
- GPT-3.5
- GPT-4
- GPT-4(32k)
- Azure
- Groq
- Anthropic
- Llamafile
- Other (detail in issue)
validations:
required: true
@@ -126,6 +128,13 @@ body:
label: Specify the area
description: Please specify the area you think is best related to the issue.
- type: input
attributes:
label: What commit or version are you using?
description: It is helpful for us, when reproducing the issue, to know what version of the software you were using when this happened. Please run `git log -n 1 --pretty=format:"%H"` to output the full commit hash.
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: poetry run agbenchmark --maintain --mock"
poetry run agbenchmark --maintain --mock
EXIT_CODE=$?
set -e # Stop ignoring non-zero exit codes
# Check if the exit code was 5, and if so, exit with 0 instead
if [ $EXIT_CODE -eq 5 ]; then
echo "regression_tests.json is empty."
fi
echo "Running the following command: poetry run agbenchmark --mock"
poetry run agbenchmark --mock
echo "Running the following command: poetry run agbenchmark --mock --category=data"
poetry run agbenchmark --mock --category=data
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
echo "Running the following command: poetry run agbenchmark --test=WriteFile"
poetry run agbenchmark --test=WriteFile
cd ../../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: poetry run agbenchmark --maintain --mock"
poetry run agbenchmark --maintain --mock
EXIT_CODE=$?
set -e # Stop ignoring non-zero exit codes
# Check if the exit code was 5, and if so, exit with 0 instead
if [ $EXIT_CODE -eq 5 ]; then
echo "regression_tests.json is empty."
fi
echo "Running the following command: poetry run agbenchmark --mock"
poetry run agbenchmark --mock
echo "Running the following command: poetry run agbenchmark --mock --category=data"
poetry run agbenchmark --mock --category=data
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
# echo "Running the following command: poetry run agbenchmark --test=WriteFile"
# poetry run agbenchmark --test=WriteFile
cd ../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: typescript
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
config: |
paths-ignore:
- classic/frontend/build/**
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# ℹ️ Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
All contributions to [the autogpt_platform folder](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform) will be under our [Contribution License Agreement](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/Contributor%20License%20Agreement%20(CLA).md). By making a pull request contributing to this folder, you agree to the terms of our CLA for your contribution. All contributions to other folders will be under the MIT license.
## In short
1. Avoid duplicate work, issues, PRs etc.
2. We encourage you to collaborate with fellow community members on some of our bigger
[todo's][kanban board]!
[todo's][roadmap]!
* We highly recommend posting your idea and discussing it in the [dev channel].
4. Create a draft PR when starting work on bigger changes.
3. Please also consider contributing something other than code; see the
[contribution guide] for options.
3. Create a draft PR when starting work on bigger changes.
4. Adhere to the [Code Guidelines]
5. Clearly explain your changes when submitting a PR.
6. Don't submit stuff that's broken.
6. Don't submit broken code: test/validate your changes.
7. Avoid making unnecessary changes, especially if they're purely based on your personal
preferences. Doing so is the maintainers' job. ;-)
8. Please also consider contributing something other than code; see the
All portions of this repository are under one of two licenses. The majority of the AutoGPT repository is under the MIT License below. The autogpt_platform folder is under the
Polyform Shield License.
MIT License
Copyright (c) 2023 Toran Bruce Richards
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
> For the complete getting started [tutorial series](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec) <- click here
Welcome to the Quickstart Guide! This guide will walk you through the process of setting up and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the necessary steps to jumpstart your journey in the world of AI development with AutoGPT.
## System Requirements
This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux (WSL). If you are using a Windows system, you will need to install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
- On the next page, select your GitHub account to create the fork under.
- Wait for the forking process to complete. You now have a copy of the repository in your GitHub account.
2. **Clone the Repository**
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, you can download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
- Open your terminal.
- Navigate to the directory where you want to clone the repository.
- Run the `git clone` command for the fork you just created.

- Then open the project in your IDE.

4. **Set up the Project**
Next, we need to set up the required dependencies. We provide a tool that helps you with all the tasks you need to perform on the repo.
It can be accessed by running the `run` command, i.e., by typing `./run` in the terminal.
The first command you need to use is `./run setup`. This will guide you through the process of setting up your system.
Initially, you will get instructions for installing Flutter and Chrome, and for setting up your GitHub access token, as shown in the following image:
> Note for advanced users: the GitHub access token is only needed for the `./run arena enter` command, so that the system can automatically create a PR

### For Windows Users
If you're a Windows user and experience issues after installing WSL, follow the steps below to resolve them.
#### Update WSL
Run the following command in PowerShell or Command Prompt to:
1. Enable the optional WSL and Virtual Machine Platform components.
2. Download and install the latest Linux kernel.
3. Set WSL 2 as the default.
4. Download and install the Ubuntu Linux distribution (a reboot may be required).
```shell
wsl --install
```
For more detailed information and additional steps, refer to [Microsoft's WSL Setup Environment Documentation](https://learn.microsoft.com/en-us/windows/wsl/setup/environment).
#### Resolve FileNotFoundError or "No such file or directory" Errors
When you run `./run setup`, if you encounter errors like `No such file or directory` or `FileNotFoundError`, it might be because Windows-style line endings (CRLF - Carriage Return Line Feed) are not compatible with Unix/Linux style line endings (LF - Line Feed).
To resolve this, you can use the `dos2unix` utility to convert the line endings in your script from CRLF to LF. Here’s how to install and run `dos2unix` on the script:
```shell
sudo apt update
sudo apt install dos2unix
dos2unix ./run
```
After executing the above commands, running `./run setup` should work successfully.
#### Store Project Files within the WSL File System
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids issues related to path translations and permissions and provides a more consistent development environment.
You can keep re-running the command to get feedback on how far along you are with your setup.
When setup has been completed, the command will return an output like this:

### Optional: Entering the Arena
Entering the Arena is an optional step intended for those who wish to actively participate in the agent leaderboard. If you decide to participate, you can enter the Arena by running `./run arena enter YOUR_AGENT_NAME`. This step is not mandatory for the development or testing of your agent.
Entries with names like `agent`, `ExampleAgent`, `test_agent` or `MyExampleGPT` will NOT be merged. We also don't accept copycat entries that use the name of other projects, like `AutoGPT` or `evo.ninja`.

> **Note**
> For advanced users: create a new branch and create a file called YOUR_AGENT_NAME.json in the arena directory. Then commit this and create a PR to merge into the main repo. Only single-file entries will be permitted. The JSON file needs the following format (a hypothetical example follows below):
> - `timestamp`: timestamp of the last update of this file
> - `commit_hash_to_benchmark`: the commit hash of your entry. You update it each time you have something ready to be officially entered into the hackathon
> - `branch_to_benchmark`: the branch you are using to develop your agent on, default is master.
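As a purely hypothetical illustration (the agent name and commit hash are made up), such an entry could be generated like this:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

entry = {
    # Timestamp of the last update of this file
    "timestamp": datetime.now(timezone.utc).isoformat(),
    # Commit hash of the entry (made-up value for illustration)
    "commit_hash_to_benchmark": "0123456789abcdef0123456789abcdef01234567",
    # Branch you develop your agent on; default is master
    "branch_to_benchmark": "master",
}

Path("arena").mkdir(exist_ok=True)
Path("arena/YOUR_AGENT_NAME.json").write_text(json.dumps(entry, indent=2))
```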
## Running your Agent
Your agent can be started using `./run agent start YOUR_AGENT_NAME`.
This starts the agent on `http://localhost:8000/`.

The frontend can be accessed at `http://localhost:8000/`; you will first need to log in using either a Google account or your GitHub account.
Upon logging in, you will see a page that looks something like this, with your task history down the left-hand side and a 'chat' window for sending tasks to your agent.
When you have finished with your agent, or if you just need to restart it, use Ctrl+C to end the session; then you can re-run the start command.
If you are having issues and want to ensure the agent has been stopped, there is a `./run agent stop` command that will kill the process using port 8000, which should be the agent.
## Benchmarking your Agent
The benchmarking system can also be accessed using the CLI:
**AutoGPT** is the vision of the power of AI accessible to everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters:
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options
- Download to self-host
- [Join the Waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta
## How to Setup for Self-Hosting
> [!NOTE]
> Setting up and hosting the AutoGPT Platform yourself is a technical process.
> If you'd rather something that just works, we recommend [joining the waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta.
This tutorial assumes you have Docker, VSCode, git and npm installed.
### 🧱 AutoGPT Frontend
The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:
**Agent Builder:** For those who want to customize, our intuitive, low-code interface allows you to design and configure your own AI agents.
**Workflow Management:** Build, modify, and optimize your automation workflows with ease. You build your agent by connecting blocks, where each block performs a single action.
**Deployment Controls:** Manage the lifecycle of your agents, from testing to production.
**Ready-to-Use Agents:** Don't want to build? Simply select from our library of pre-configured agents and put them to work immediately.
**Agent Interaction:** Whether you've built your own or are using pre-configured agents, easily run and interact with them through our user-friendly interface.
**Monitoring and Analytics:** Keep track of your agents' performance and gain insights to continually improve your automation processes.
[Read this guide](https://docs.agpt.co/platform/new_blocks/) to learn how to build your own custom blocks.
### 💽 AutoGPT Server
The AutoGPT Server is the powerhouse of our platform. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously. It contains all the essential components that make AutoGPT run smoothly.
**Source Code:** The core logic that drives our agents and automation processes.
**Infrastructure:** Robust systems that ensure reliable and scalable performance.
**Marketplace:** A comprehensive marketplace where you can find and deploy a wide range of pre-built agents.
### 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Generate Viral Videos from Trending Topics**
- This agent reads topics on Reddit.
- It identifies trending topics.
- It then automatically creates a short-form video based on the content.
2. **Identify Top Quotes from Videos for Social Media**
- This agent subscribes to your YouTube channel.
- When you post a new video, it transcribes it.
- It uses AI to identify the most impactful quotes to generate a summary.
- Then, it writes a post to automatically publish to your social media.
These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.
---
### Mission and Licensing
Our mission is to provide the tools, so that you can focus on what matters:
- 🏗️ **Building** - Lay the foundation for something amazing.
- 🧪 **Testing** - Fine-tune your agent to perfection.
@@ -15,26 +77,29 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
**📖 [Documentation](https://docs.agpt.co)**
 | 
**🚀 [Contributing](CONTRIBUTING.md)**
 | 
**🛠️ [Build your own Agent - Quickstart](QUICKSTART.md)**
## 🥇 Current Best Agent: evo.ninja
[Current Best Agent]: #-current-best-agent-evoninja
**Licensing:**
The AutoGPT Arena Hackathon saw [**evo.ninja**](https://github.com/polywrap/evo.ninja) earn the top spot on our Arena Leaderboard, proving itself as the best open-source generalist agent. Try it now at https://evo.ninja!
MIT License: The majority of the AutoGPT repository is under the MIT License.
📈 To challenge evo.ninja, AutoGPT, and others, submit your benchmark run to the [Leaderboard](#-leaderboard), and maybe your agent will be up here next!
Polyform Shield License: This license applies to the autogpt_platform folder.
## 🧱 Building blocks
For more information, see https://agpt.co/blog/introducing-the-autogpt-platform
---
## 🤖 AutoGPT Classic
> Below is information about the classic version of AutoGPT.
**🛠️ [Build your own Agent - Quickstart](classic/FORGE-QUICKSTART.md)**
### 🏗️ Forge
**Forge your own agent!** – Forge is a ready-to-go template for your agent application. All the boilerplate code is already handled, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from the [`forge.sdk`](/autogpts/forge/forge/sdk) can also be used individually to speed up development and reduce boilerplate in your agent project.
**Forge your own agent!** – Forge is a ready-to-go toolkit to build your own agent application. It handles most of the boilerplate code, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from [`forge`](/classic/forge/) can also be used individually to speed up development and reduce boilerplate in your agent project.
🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpts/forge/tutorials/001_getting_started.md) –
🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/forge/tutorials/001_getting_started.md) –
This guide will walk you through the process of creating your own agent and using the benchmark and user interface.
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpts/forge) about Forge
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/forge) about Forge
### 🎯 Benchmark
@@ -44,24 +109,17 @@ This guide will walk you through the process of creating your own agent and usin
📦 [`agbenchmark`](https://pypi.org/project/agbenchmark/) on PyPI
 | 
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/blob/master/benchmark) about the Benchmark
#### 🏆 [Leaderboard][leaderboard]
[leaderboard]: https://leaderboard.agpt.co
Submit your benchmark run through the UI and claim your place on the AutoGPT Arena Leaderboard! The best scoring general agent earns the title of **[Current Best Agent]**, and will be adopted into our repo so people can easily run it through the [CLI].
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/benchmark) about the Benchmark
### 💻 UI
**Makes agents easy to use!** The `frontend` gives you a user-friendly interface to control and monitor your agents. It connects to agents through the [agent protocol](#-agent-protocol), ensuring compatibility with many agents from both inside and outside of our ecosystem.
<!-- TODO: instert screenshot of front end -->
<!-- TODO: insert screenshot of front end -->
The frontend works out-of-the-box with all agents in the repo. Just use the [CLI] to run your agent of choice!
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/frontend) about the Frontend
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/frontend) about the Frontend
### ⌨️ CLI
@@ -78,7 +136,6 @@ Options:
Commands:
agent Commands to create, start and stop agents
arena Commands to enter the arena
benchmark Commands to start the benchmark and list tests and categories
setup Installs dependencies needed for your system.
```
@@ -101,6 +158,8 @@ To maintain a uniform standard and ensure seamless compatibility with many curre
- [**Reporting a Vulnerability**](#reporting-a-vulnerability)
## Reporting Security Issues
## Using AutoGPT Securely
We take the security of our project seriously. If you believe you have found a security vulnerability, please report it to us privately. **Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
### Restrict Workspace
> **Important Note**: Any code within the `classic/` folder is considered legacy, unsupported, and out of scope for security reports. We will not address security vulnerabilities in this deprecated code.
Since agents can read and write files, it is important to keep them restricted to a specific workspace. This happens by default *unless* `RESTRICT_TO_WORKSPACE` is set to `False`.
- [Huntr.dev](https://huntr.com/repos/significant-gravitas/autogpt) - where you may be eligible for a bounty
Disabling `RESTRICT_TO_WORKSPACE` can increase security risks. However, if you still need to disable it, consider running AutoGPT inside a [sandbox](https://developers.google.com/code-sandboxing) to mitigate some of these risks.
### Reporting Process
1. **Submit Report**: Use one of the above channels to submit your report.
2. **Response Time**: Our team will acknowledge receipt of your report within 14 business days.
3. **Collaboration**: We will collaborate with you to understand and validate the issue.
4. **Resolution**: We will work on a fix and coordinate the release process.
### Untrusted inputs
### Disclosure Policy
- Please provide detailed reports with reproducible steps
- Include the version/commit hash where you discovered the vulnerability
- Allow us a 90-day security fix window before any public disclosure
- Share any potential mitigations or workarounds if known
When handling untrusted inputs, it's crucial to isolate the execution and carefully pre-process inputs to mitigate script injection risks.
## Supported Versions
Only the following versions are eligible for security updates:
For maximum security when handling untrusted inputs, you may need to employ the following (a toy sketch follows this list):
| Version | Supported |
|---------|-----------|
| Latest release on master branch | ✅ |
| Development commits (pre-master) | ✅ |
| Classic folder (deprecated) | ❌ |
| All other versions | ❌ |
* Sandboxing: Isolate the process.
* Updates: Keep your libraries (including AutoGPT) updated with the latest security patches.
* Input Sanitization: Before feeding data to the model, sanitize inputs rigorously. This involves techniques such as:
* Validation: Enforce strict rules on allowed characters and data types.
* Filtering: Remove potentially malicious scripts or code fragments.
* Encoding: Convert special characters into safe representations.
* Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
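Here is the toy sketch referenced above, combining validation, filtering, and encoding; it is a minimal illustration under simple assumptions, not a complete defense:

```python
import html
import re

SCRIPT_FRAGMENTS = re.compile(r"<\s*/?\s*script[^>]*>", re.IGNORECASE)
MAX_LEN = 2000


def sanitize(user_input: str) -> str:
    # Validation: enforce strict rules on type and length.
    if not isinstance(user_input, str) or len(user_input) > MAX_LEN:
        raise ValueError("input must be a string of at most 2000 characters")
    # Filtering: remove potentially malicious script fragments.
    filtered = SCRIPT_FRAGMENTS.sub("", user_input)
    # Encoding: convert special characters into safe representations.
    return html.escape(filtered)


print(sanitize("Summarize this page <script>alert(1)</script> please"))
```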
## Security Best Practices
When using this project:
1. Always use the latest stable version
2. Review security advisories before updating
3. Follow our security documentation and guidelines
4. Keep your dependencies up to date
5. Do not use code from the `classic/` folder as it is deprecated and unsupported
### Data privacy
## Past Security Advisories
For a list of past security advisories, please visit our [Security Advisory Page](https://github.com/Significant-Gravitas/AutoGPT/security/advisories) and [Huntr Disclosures Page](https://huntr.com/repos/significant-gravitas/autogpt).
To protect sensitive data from potential leaks or unauthorized access, it is crucial to sandbox the agent execution. This means running it in a secure, isolated environment, which helps mitigate many attack vectors.
### Untrusted environments or networks
Since AutoGPT performs network calls to the OpenAI API, it is important to always run it in trusted environments and networks. Running it in untrusted environments can expose your API key to attackers.
Additionally, running it on an untrusted network can expose your data to potential network attacks.
However, even when running on trusted networks, it is important to always encrypt sensitive data while sending it over the network.
### Multi-Tenant environments
If you intend to run multiple AutoGPT brains in parallel, it is your responsibility to ensure the models do not interact or access each other's data.
The primary areas of concern are tenant isolation, resource allocation, model sharing and hardware attacks.
- Tenant Isolation: you must make sure that tenants run separately to prevent unwanted access to data from other tenants. Keeping model network traffic separate is also important, because it not only prevents unauthorized access to data but also prevents malicious users or tenants from sending prompts to execute under another tenant's identity.
- Resource Allocation: a denial of service caused by one tenant can affect the overall system health. Implement safeguards like rate limits, access controls, and health monitoring.
- Data Sharing: in a multi-tenant design with data sharing, ensure tenants and users understand the security risks and sandbox agent execution to mitigate risks.
- Hardware Attacks: the hardware (GPUs or TPUs) can also be attacked. [Research](https://scholar.google.com/scholar?q=gpu+side+channel) has shown that side channel attacks on GPUs are possible, which can make data leak from other brains or processes running on the same system at the same time.
## Reporting a Vulnerability
Beware that none of the topics under [Using AutoGPT Securely](#using-autogpt-securely) are considered vulnerabilities in AutoGPT.
However, if you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public exposure, reducing the chance that the exploit will be used before a patch is released.
Please disclose it as a private [security advisory](https://github.com/Significant-Gravitas/AutoGPT/security/advisories/new).
This project is maintained by a team of volunteers on a reasonable-effort basis. As such, please give us at least 90 days to work on a fix before public exposure.