- Deleted the InputValidationBlock class from checking_input_validation.py, which was previously used for input validation with required and dependent fields.
- Updated InputValidationBlock to include default values for required fields.
- Implemented validation logic in manager.py to ensure dependent fields are validated based on their dependencies.
- Added 'depends_on' parameter to SchemaField in model.py to specify field dependencies (see the sketch after this list).
- Updated useAgentGraph hook to validate input fields based on their dependencies, ensuring required fields are set when dependent fields are filled.
- Modified BlockIOSubSchemaMeta to include 'depends_on' as an optional property.
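A minimal sketch of the dependent-field validation referenced above; the function, schema shape, and field names are illustrative, not the actual `manager.py` implementation:
```python
# Hedged sketch: a field with `depends_on` becomes required once any of its
# dependencies is filled in. Names here are hypothetical.
def validate_fields(values: dict, schema: dict) -> list[str]:
    """Return errors for fields whose dependencies are set but which are empty."""
    errors = []
    for field, spec in schema.items():
        depends_on = spec.get("depends_on", [])
        # If any dependency is filled in, this field becomes required.
        if any(values.get(dep) for dep in depends_on) and not values.get(field):
            errors.append(f"'{field}' is required when {depends_on} is set")
    return errors

schema = {"api_key": {}, "api_secret": {"depends_on": ["api_key"]}}
assert validate_fields({"api_key": "abc"}, schema) == [
    "'api_secret' is required when ['api_key'] is set"
]
```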
When calculating the next month, we do not roll the month number over, which causes an error in the credits system.
### Changes 🏗️
Apply a modulo when calculating the next month (sketched below).
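A minimal sketch of the fix, assuming a plain year/month pair (the actual credits code may differ): without the modulo, month 12 + 1 = 13, which is an invalid month.
```python
def next_month(year: int, month: int) -> tuple[int, int]:
    # month % 12 + 1 rolls December (12) over to January (1);
    # the year is bumped when we wrap around.
    return year + (1 if month == 12 else 0), month % 12 + 1

assert next_month(2024, 11) == (2024, 12)
assert next_month(2024, 12) == (2025, 1)  # no more invalid month 13
```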
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
- Move `autogpt_libs.supabase_integration_credentials_store` into
`backend`
- `.store` -> `backend.integrations.credentials_store`
- `.types` -> added to `backend.data.model`
- Rename `SupabaseIntegrationCredentialsStore` to
`IntegrationCredentialsStore`
We wanted to get a few security things in quickly in #8403 and had to
make some compromises to do so. This picks those up and fixes them.
- Resolves #8540
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
This fix was prompted by an error observed on DB connection failure on Supabase:
```
2024-11-28 07:45:24,724 INFO [DatabaseManager] Starting...
2024-11-28 07:45:24,726 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection started...
2024-11-28 07:45:24,726 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection completed successfully.
{"is_panic":false,"message":"Can't reach database server at `...pooler.supabase.com:5432`\n\nPlease make sure your database server is running at `....pooler.supabase.com:5432`.","meta":{"database_host":"...pooler.supabase.com","database_port":5432},"error_code":"P1001"}
2024-11-28 07:45:35,153 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
2024-11-28 07:45:36,155 INFO [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection started...
2024-11-28 07:45:36,181 INFO [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection completed successfully.
2024-11-28 07:45:36,183 INFO [PID-18|DatabaseManager|Pyro-2722cd29-4dbd-4cf9-882f-73842658599d] Starting Pyro Service started...
2024-11-28 07:45:36,189 INFO [DatabaseManager] Connected to Pyro; URI = PYRO:DatabaseManager@0.0.0.0:8005
2024-11-28 07:46:28,241 ERROR Error in get_user_integrations: All connection attempts failed
```
Even when the retry log line
```
2024-11-28 07:45:35,153 INFO [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
```
is present, the Redis connection still proceeds without waiting for the retry to complete. This was likely caused by Tenacity not fully awaiting the DB connection acquisition command.
### Changes 🏗️
* Add special handling for the async function to explicitly await the function execution result on each retry (see the sketch below).
* Explicitly raise an exception in `db.connect()` if the DB is still not connected after the `prisma.connect()` command.
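A hedged sketch of these changes, assuming Tenacity drives the retry loop (as the log suggests); `db.is_connected()` is a stand-in for the actual Prisma client check:
```python
# The crucial part is awaiting the coroutine *inside* the retry wrapper and
# raising if the connection is still not established, so Tenacity actually
# retries instead of "succeeding" on an un-awaited coroutine.
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(5), wait=wait_exponential(max=30))
async def connect(db) -> None:
    await db.connect()           # explicitly await the acquisition
    if not db.is_connected():    # hypothetical check on the Prisma client
        raise ConnectionError("DB still not connected after prisma.connect()")
```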
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
This PR adds the first few HubSpot blocks so we can create _real_ sales
and marketing agents.
### Changes 🏗️
Added HubSpot blocks:
- Added auth for HubSpot
- Added Company block
- Added Contact block
- Added Engagement block
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
We've started applying costs based on the *partial value* of the `credentials` field, but this logic was never actually supported.
### Changes 🏗️
* Add partial object matching on the input data filter for evaluating block cost (sketched after this list).
* Add missing credentials for `ExtractWebsiteContentBlock`.
* Remove the fallback cost on LLM blocks.
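A minimal sketch of partial object matching, with illustrative names (not the actual block-cost code): the filter matches if every key it specifies is present in the input data with an equal value, recursing into nested objects such as a partial `credentials` dict.
```python
def matches_cost_filter(input_data: dict, cost_filter: dict) -> bool:
    for key, expected in cost_filter.items():
        actual = input_data.get(key)
        if isinstance(expected, dict) and isinstance(actual, dict):
            # Recurse so a partial nested object still matches.
            if not matches_cost_filter(actual, expected):
                return False
        elif actual != expected:
            return False
    return True

# A cost entry keyed on a *partial* credentials object still matches:
assert matches_cost_filter(
    {"credentials": {"provider": "openai", "id": "abc"}, "model": "gpt-4"},
    {"credentials": {"provider": "openai"}},
)
```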
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Blocks should be easy to search; the name is sometimes not straightforward, but the description usually is.
<img width="576" alt="image"
src="https://github.com/user-attachments/assets/0528b019-0ebc-4e6f-8a3c-40323a671b13">
### Changes 🏗️
Make the block description searchable (see the sketch below).
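Illustrative matching logic only (the actual change lives in the frontend block-list filter):
```python
# Hedged sketch: match the query against the description as well as the name.
def block_matches(query: str, name: str, description: str) -> bool:
    q = query.lower()
    return q in name.lower() or q in description.lower()

assert block_matches("summarize", "AITextSummarizerBlock", "Summarizes long text with AI")
assert block_matches("shorten", "AITextSummarizerBlock", "Shortens text with AI")  # found via description
```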
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
* docs(backend): Add `--build` to docker command in Getting Started guide (#8762)
* updated URL on README.md
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Add `/integrations/credentials` endpoint which lists all credentials for the authenticated user
- Amend credential fetching logic in front end to fetch all at once instead of per provider
- Resolves #8770
- Resolves (hopefully) #8613
- feat(blocks): Add GitHub Pull Request Trigger block
## feat(platform): Add support for Webhook-triggered blocks
- ⚠️ Add `PLATFORM_BASE_URL` setting
- Add webhook config option and `BlockType.WEBHOOK` to `Block`
- Add check to `Block.__init__` to enforce type and shape of webhook event filter
- Add check to `Block.__init__` to enforce `payload` input on webhook blocks
- Add check to `Block.__init__` to disable webhook blocks if `PLATFORM_BASE_URL` is not set
- Add `Webhook` model + CRUD functions in `backend.data.integrations` to represent webhooks created by our system
- Add `IntegrationWebhook` to DB schema + reference `AgentGraphNode.webhook_id`
- Add `set_node_webhook(..)` in `backend.data.graph`
- Add webhook-related endpoints:
- `POST /integrations/{provider}/webhooks/{webhook_id}/ingress` endpoint, to receive webhook payloads and create graph executions for all associated nodes (sketched after this list)
- Add `Node.is_triggered_by_event_type(..)` helper method
- `POST /integrations/{provider}/webhooks/{webhook_id}/ping` endpoint, to allow testing a webhook
- Add `WebhookEvent` + pub/sub functions in `backend.data.integrations`
- Add `backend.integrations.webhooks` module, including:
- `graph_lifecycle_hooks`, e.g. `on_graph_activate(..)`, to handle corresponding webhook creation etc.
- Add calls to these hooks in the graph create/update endpoints
- `BaseWebhooksManager` + `GithubWebhooksManager` to handle creating + registering, removing + deregistering, and retrieving existing webhooks, and validating incoming payloads
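A hedged sketch of the ingress flow, using FastAPI (which the backend is built on). The endpoint path mirrors the description above; the `Webhook`/`Node` models and the in-memory store are illustrative stand-ins, not the actual `backend.data.integrations` API:
```python
from dataclasses import dataclass, field
from fastapi import APIRouter, Request

router = APIRouter()

@dataclass
class Node:
    graph_id: str
    trigger_events: set[str] = field(default_factory=set)

    def is_triggered_by_event_type(self, event_type: str) -> bool:
        return event_type in self.trigger_events

@dataclass
class Webhook:
    id: str
    attached_nodes: list[Node] = field(default_factory=list)

WEBHOOKS: dict[str, Webhook] = {}  # stand-in for the DB-backed CRUD functions

@router.post("/integrations/{provider}/webhooks/{webhook_id}/ingress")
async def webhook_ingress(provider: str, webhook_id: str, request: Request):
    payload = await request.json()
    webhook = WEBHOOKS[webhook_id]
    event_type = request.headers.get("X-GitHub-Event", "")  # provider-specific
    for node in webhook.attached_nodes:
        if node.is_triggered_by_event_type(event_type):
            ...  # create a graph execution for node.graph_id with `payload` as input
```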
## Other improvements
- fix(blocks): Allow having an input and output pin with the same name
- fix(blocks): Add tooltip with description in places where block inputs are rendered without `NodeHandle`
- feat(blocks): Allow hiding inputs (e.g. `payload`) with `SchemaField(hidden=True)`
- fix(frontend): Fix `MultiSelector` component styling
- feat(frontend): Add `AlertDialog` UI component
- feat(frontend): Add `NodeMultiSelectInput` component
- feat(backend/data): Add `NodeModel` with `graph_id`, `graph_version`; `GraphModel` with `user_id`
- Add `make_graph_model(..)` helper function in `backend.data.graph`
- refactor(backend/data): Make `RedisEventQueue` generic and move to `backend.data.execution`
- refactor(frontend): Deduplicate & clean up code for different block types in `generateInputHandles(..)` in `CustomNode`
- dx(backend): Add `MissingConfigError`, `NeedConfirmation` exception
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* fix: hide content except login when not authenticated to prevent errors
* Remove supabase folder from tracking
* Remove supabase folder from Git tracking
* adding git submodule
* adding git submodule
* Discard changes to .gitignore
* only showing AutoGPT logo if user is not present
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
This PR reduces image size by 4.9GB (93%) and reduces uncached build time from ~7m to ~5m20s.
- Use cache mount to prevent Yarn cache from being included in `yarn install` layer
- Leverage Next.js output tracing to generate minimal application w/ tree-shaken dependencies
- Add non-root user following the Next.js reference Dockerfile
* feat: Add Open Router integration credentials
- Added support for Open Router integration credentials in the Supabase integration credentials store.
- Updated the LLM provider field to include "open_router" as a valid provider option.
- Added Open Router API key field to the backend settings.
- Updated the profile page to display the Open Router integration credentials.
- Updated the credentials input and provider components to include Open Router as a provider option.
- Updated the autogpt-server-api types to include "open_router" as a provider name.
- Updated the LLM provider schema to include "open_router" as a valid provider name.
- Added GEMINI_FLASH_1_5_8B as the first Open Router LLM
* Add type ignore to new llm prompt to match the rest of them.
* Update LlmModel with a selection of new OpenRouter models
* format
- Remove `secrets_dir` and other references to `get_secrets_path()`
- Remove unused `get_config_path()`
Follow-up to #8521, which removed the `secrets` dir but not the references to it.
In #8524, the "llm" credentials provider was replaced. There are still entries with `"provider": "llm"` in the system though, and those break if not migrated.
- SQL migration to fix the obvious ones where we know the provider from `credentials.id`
- Non-SQL migration to fix the rest
* fix(backend): Add execution persistence for execution scheduler service
* scheduler REST API cleanup
* Fix to binary
* Adapt UI with new API
* Remove schedule.py
* Remove unused class
* Fix linting
* ci(frontend,backend,classic): update branch from develop to dev
* ci(frontend, infra): enable ci on other tools
* Update classic-autogpt-docker-ci.yml
* fix: don't error if the folder exists
* fix: drop bad test
* Revert "fix: drop bad test"
This reverts commit c478d3cf4c.
* fix: turn off the correct test 👀
* fix: remove more
* Discard changes to .github/workflows/classic-autogpt-ci.yml
* Update classic-autogpt-docker-ci.yml
* Update classic-autogpt-docker-release.yml
* Update classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-forge-ci.yml
* Discard changes to .github/workflows/classic-autogpts-ci.yml
* Discard changes to .github/workflows/classic-python-checks.yml
* Discard changes to .github/workflows/repo-pr-label.yml
* Discard changes to .github/workflows/platform-backend-ci.yml
* Update classic-benchmark-ci.yml
* Update classic-frontend-ci.yml
Reverts c707ee9 (#8646)
The problem analysis that led to #8646 contained some errors, so the migration removed in the PR doesn't seem to have been the cause of the problem we were hunting. Also, this migration is an essential part of the security improvement that we made 2 weeks ago.
* add: api generator functions and endpoints
* Rebase onto dev, refactor API manager location, remove suspended key revoke, and update API code for Prisma compatibility
* add: key_manager
* reverting changes to poetry.lock
* add: change hashing mechanism in API Manager
* fixing some simple bugs
* fix linting and adding better error handling
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* feat(block): Add AIImageGeneratorBlock
This commit adds the AIImageGeneratorBlock class to the backend. The AIImageGeneratorBlock is responsible for generating images using various AI models through a unified interface.
* Remove unsupported inputs and add more styles
* Update autogpt_platform/backend/backend/blocks/ai_image_generator_block.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* run format
* Add test mock
* mock client run
* Refactor AIImageGeneratorBlock to use a separate function for running the client
* Update Credential description
* Rename ModelProvider to ImageGenModel
* Add missing block run function
* fix mock
* .
* Refactor AIImageGeneratorBlock to move run_client function inside class
* Fix broken reference to run client and tidy code.
* Refactor AIImageGeneratorBlock to improve code structure and error handling
* Move client into run client instantiation function.
* Refactor AIImageGeneratorBlock to handle output as FileOutput and improve error handling
* run format
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Add condition to hide `credentials` input title in `CustomNode:generateInputHandles`
- Add `title={schema.description}` to `<CredentialsInput>` title element
* Add support for default credentials to unreal block
* Refactor block cost configuration and add new blocks
This commit refactors the block cost configuration file and adds support for new blocks. The changes include:
- Importing the `AIMusicGeneratorBlock`, `JinaEmbeddingBlock`, and `UnrealTextToSpeechBlock` classes
- Updating the `BLOCK_COSTS` dictionary to include costs for the new blocks
These changes enable the usage of the newly introduced blocks.
- Resolves #8635
- fix(frontend): Fix type mismatch of `CredentialsField` schema between frontend and backend
- Fix usages of `credentialsSchema.credentials_provider`
- refactor(backend): Create `CredentialsFieldSchemaExtra` model in backend so it can be mirrored directly in frontend
- Add check to enforce multi-provider `CredentialsField` always has `discriminator`
- dx: Add type checking shortcut `yarn type-check` / `npm run type-check` for frontend
- Change `provider` of default credentials to actual provider names (e.g. `anthropic`), remove `llm` provider
- Add `discriminator` and `discriminator_mapping` to `CredentialsField` that allows to filter credentials input to only allow providers for matching models in `useCredentials` hook (thanks @ntindle for the idea!); e.g. the user chooses `GPT4_TURBO`, so only OpenAI credentials are allowed (sketched below)
- Choose credentials automatically and hide credentials input on the node completely if there's only one possible option
- Move `getValue` and `parseKeys` to utils
- Add `ANTHROPIC`, `GROQ` and `OLLAMA` to providers in frontend `types.ts`
- Add `hidden` field to credentials that is used for default system keys to hide them in user profile
- Now `provider` field in `CredentialsField` can accept multiple providers as a list
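Illustrative logic for the discriminator filtering (the real filtering happens in the `useCredentials` hook on the frontend; names here are assumptions):
```python
def allowed_providers(model: str, discriminator_mapping: dict[str, str]) -> set[str]:
    """E.g. the user picks GPT4_TURBO, so only OpenAI credentials are offered."""
    return {discriminator_mapping[model]} if model in discriminator_mapping else set()

mapping = {"GPT4_TURBO": "openai", "CLAUDE_3_5_SONNET": "anthropic", "LLAMA3_70B": "groq"}
assert allowed_providers("GPT4_TURBO", mapping) == {"openai"}
```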
-----------------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat(platform): Add AIMusicGeneratorBlock for music generation
* refactor(platform): Refactor AIMusicGeneratorBlock for improved error handling and logging
* refactor(ui): Refactor ContentRenderer to support audio rendering
* format
* Frontend format and lint
- fix naming of hooks
- fix `pyright` hooks (b0rked by repo restructure)
- fix `forge` path (b0rked by faulty replace-all when the repo was restructured)
- fix `black` hook to work on all Python versions
- add `poetry install` hooks
- add `ruff`, `isort`, `pyright`, `pytest`, and `prisma generate` hooks for `backend/`
- add `ruff` and `pyright` hooks for `autogpt_libs/`
* reseal secrets
* update DB url
* rotate prod db
* rotate prod
* rotate server
* builder values
* public env vars in env files
* public env vars in env files
* add pinecone and jina blocks
* update based on comments
* backend updates
* frontend updates
* type hint
* more type hints
* another type hint
* update run signature
* shared jina provider
* fix linting
* lockfile
* remove noqa
* remove noqa
* remove vector db folder
* line
* update pinecone credentials provider
* fix imports
* formatting
* update frontend
* Test (#8425)
* h
* Discard changes to autogpt_platform/backend/poetry.lock
* fix: broken dep
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
* Fix issue where marketplace breaks if no agents are returned
* Fix issue where marketplace breaks if no agents are returned
* Remove supabase folder from tracking
* adding supabase submodule
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* fix(backend): Fix error pin output not being propagated into the next nodes
* fix(backend): Reverse pyro config refactor
* Revert "fix(backend): Fix error pin output not being propagated into the next nodes"
This reverts commit 2ff50a94ec.
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* ci with workload identity
* temp update
* update name
* wip
* update auth step
* update provider name
* remove audience
* temp set to false
* update registry naming
* update context
* update login
* revert temp updates
* add prod iam and pool
* add release deploy with approval
* use gha default approval behaviour
* add back in release trigger
* add new line
* add prod migrations
* prod migrations without check
* feat: initial migration
* feat: migration tested with supabase-> prisma data location
* add custom migrations and script
* update migration command
* formatting and linting
* updated migration script
* add direct db url
* add find files
* rename
* use binary instead of source
* temp adding supabase
* remove unused functions
* adding missed merge
* fix: commit hash for lock
* ci: fix lint
* fix: minor bugs that prevented connecting and migrating to dbs and auth
* fix: linting
* fix: missed await
* fix(backend): phase one pr updates
* fix: handle error with returning user object from database_manager
* fix: linting
* Address comments
* Make the migration safe
* Update migration doc
* Move misplaced model functions
* Grammar
* Revert lock
* Remove irrelevant changes
* Remove irrelevant changes
* Avoid adding trigger on public schema
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Aarushi <aarushik93@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* ci: create dependabot
* ci: target the dev branch for dependabot
* ci: group prs
* ci: group updates
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* Create repo-pr-enforce-base-branch.yml
* fix quotes
* test
* fix github token
* fix trigger and CLI config
* change back trigger because otherwise I can't test it
* fix the fix
* fix repo selection
* fix perms?
* fix quotes and newlines escaping in message
* Update repo-pr-enforce-base-branch.yml
* grrr escape sequences in bash
* test
* clean up
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* fix(market): agent pagination and search errors
* fix(frontend): search was not paginated
* fix: linting
* feat(market): linting ci
* fix(ci): branch limit name
* feat(platform): List and revoke credentials in user profile (#8207)
Display existing credentials (OAuth and API keys) for all current providers: Google, GitHub, and Notion, and allow the user to remove them. For providers that support it, we also revoke the tokens through the API: of the providers we currently have, Google and GitHub support it; Notion doesn't.
- Add credentials list and `Delete` button in `/profile`
- Add `revoke_tokens` abstract method to `BaseOAuthHandler` and implement it in each provider
- Revoke OAuth tokens for providers on `DELETE` `/{provider}/credentials/{cred_id}`, and return whether tokens could be revoked
- Update `autogpt-server-api/baseClient.ts:deleteCredentials` with `CredentialsDeleteResponse` return type
Bonus:
- Update `autogpt-server-api/baseClient.ts:_request` to properly handle empty server responses
* fix(backend): Lower the number of node workers to save DB connections (#8331)
Change [graph]×[node] worker limit from 10×5 to 10×3
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix(ci,platform): Add dev branch trigger to all ci (#8339)
* update ci for dev
* update classic
* remove duplicate dev
* fix(frontend): Fix styling inconsistencies in input elements (#8337)
- Apply consistent border styling to `Input`, `Select`, and `Textarea`
- Remove `rounded-xl` from node input elements
- Add `whitespace-nowrap` to `CustomNode` header category tags
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(builder): Use configmap for builder (#8343)
use configmap in builder
* fix(platform,infra): Checkin non secret values (#8344)
checkin non secrets
* security(platform): Add sealed secrets (#8342)
* add sealed secrets
* add encrypted secrets
* remove extra space
* Tf public media buckets (#8324)
* fix(infra): Fix sealed secret names (#8350)
* fix sealed secret names
* fix names and add annotation
* feat(backend): Introduce executors shared DB connection (#8340)
* update health check endpoint
---------
Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
- ci(frontend): Ensure CI fails if `yarn.lock` is inconsistent with `package.json`
- dx(frontend): Add Prettier check to `lint` script in `package.json`
- dx(frontend): Add `packageManager` to `package.json` for Corepack support
- build(frontend): Use `yarn` consistently in the Dockerfile
- feat(backend/executor): Change credential injection mechanism to acquire credentials from `AgentServer` just before execution
- Also locks the credentials for the duration of the execution
- feat(backend/server): Add thread-safe `IntegrationCredentialsManager` to handle and synchronize credentials-related operations
- feat(libs): Add mutexes to `SupabaseIntegrationCredentialsStore` to ensure thread-safety
Also:
- feat(backend): Added Pydantic model (de)serialization support to `@expose` decorator
Refactorings:
- refactor(backend, libs): Move `KeyedMutex` to `autogpt_libs.utils.synchronize` (sketched below)
- refactor(backend/server): Make `backend.server.integrations` module with `router`, `creds_manager`, and `utils` in it
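A minimal sketch of the per-key locking that `KeyedMutex` provides; this is an illustrative implementation, not the actual one in `autogpt_libs.utils.synchronize`:
```python
import threading
from collections import defaultdict

class KeyedMutex:
    """Lock independently per key, e.g. per credentials ID."""
    def __init__(self):
        self._locks: dict[str, threading.Lock] = defaultdict(threading.Lock)
        self._guard = threading.Lock()  # protects the dict itself

    def lock(self, key: str) -> None:
        with self._guard:
            lock = self._locks[key]
        lock.acquire()

    def unlock(self, key: str) -> None:
        self._locks[key].release()

mutex = KeyedMutex()
mutex.lock("credentials-123")   # e.g. held for the duration of an execution
try:
    ...  # read-modify-write the credentials safely
finally:
    mutex.unlock("credentials-123")
```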
* feat(backend): add logic to disable enum members based on Python logic
* feat(backend): add BehaveAs setting and clarify its purpose vs. APP_ENV
APP_ENV describes the application environment (local/dev/prod), not cloud vs. local, so we need BehaveAs as well
* fix(backend): fix various uses of AppEnvironment without the Enum, or incorrect ones
AppEnv in the logging library will never be cloud due to the restrictions applied when loading settings via pydantic-settings. This commit fixes this error; however, the code path for logging may now be incorrect
* feat(backend): use a metaclass to disable Ollama in the cloud environment (sketched below)
* fix: formatting
* fix(backend): typing improvements
* fix(backend): more linting 😭
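A hedged sketch of hiding enum members via a metaclass based on runtime settings, in the spirit of the change above (the actual mechanism and names in the codebase may differ):
```python
import os
from enum import Enum, EnumMeta

class BehaveAsFilteredMeta(EnumMeta):
    """Hide members from iteration when running in the cloud environment."""
    def __iter__(cls):
        cloud = os.getenv("BEHAVE_AS", "local") == "cloud"
        return (m for m in EnumMeta.__iter__(cls)
                if not (cloud and m.name.startswith("OLLAMA")))

class LlmModel(Enum, metaclass=BehaveAsFilteredMeta):
    GPT4_TURBO = "gpt-4-turbo"
    OLLAMA_LLAMA3 = "ollama-llama3"  # hidden when BEHAVE_AS=cloud

print([m.name for m in LlmModel])  # excludes OLLAMA_* in the cloud
```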
* feat(frontend,backend): testing
* feat: testing
* feat(backend): it works for reading email
* feat(backend): more docs on google
* fix(frontend,backend): formatting
* feat(backend): more logging (I know this should be debug)
* feat(backend): make real the default scopes
* feat(backend): tests and linting
* fix: code review prep
* feat: sheets block
* feat: linting
* Update route.ts
* Update autogpt_platform/backend/backend/integrations/oauth/google.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: revert opener change
* feat(frontend): add back opener
required to work on mac edge
* feat(frontend): drop typing list import from gmail
* fix: code review comments
* feat: code review changes
* feat: code review changes
* fix(backend): move from asserts to checks so they don't get optimized away in the future
* fix(backend): code review changes
* fix(backend): remove google specific check
* fix: add typing
* fix: only enable google blocks when oauth is configured for google
* fix: errors are real and valid outputs always when output
* fix(backend): add provider detail for debugging scope declines
* Update autogpt_platform/frontend/src/components/integrations/credentials-input.tsx
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix(frontend): enhance with comment; typeof error isn't known, so this is the best way to ensure the stringification will work
* feat: code review change requests
* fix: linting
* fix: reduce error catching
* fix: doc messages in code
* fix: check the correct scopes object 😄
* fix: remove double (and not needed) try catch
* fix: lint
* fix: scopes
* feat: handle the default scopes better
* feat: better email objectification
* feat: process attachments
turns out an email doesn't need a body
* fix: lint
* Update google.py
* Update autogpt_platform/backend/backend/data/block.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* fix: quit trying and except failure
* Update autogpt_platform/backend/backend/server/routers/integrations.py
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
* feat: don't allow expired states
* fix: clarify function name and purpose
* feat: code links updates
* feat: additional docs on adding a block
* fix: type hint missing which means the block won't work
* fix: linting
* fix: docs formatting
* Update issues.py
* fix: improve the naming
* fix: formatting
* Update new_blocks.md
* Update new_blocks.md
* feat: better docs on what the args mean
* feat: more details on yield
* Update new_blocks.md
* fix: remove ignore from docs build
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* add ideogram ai image gen
* fixed revid secret api key being removed
* fixed auto checks errors
* Add AI Upscale option to IdeogramModelBlock
- Introduced an 'Upscale Image' option in the input schema to allow users to upscale generated images.
- Created the 'UpscaleOption' enum with options 'AI Upscale' and 'No Upscale'.
- Implemented the 'upscale_image' method to download the generated image into RAM and send it to the Ideogram AI upscale API without saving it to disk.
- Updated the 'run' method to handle the upscaling process based on the user's input.
- Ensured that the image processing is done entirely in memory (RAM) without writing to disk.
- Updated test inputs and mocks to reflect the new 'Upscale Image' option.
---------
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
* updates to tutorial
* updates to get user to save
* Update tutorial.ts
* final updates to end of tutorial
* Prettier
* add back data-id for badge within the blocks
* Prettier
* Updated onOpenChange code style
* modifying how handle text is rendered
* Rounding input boxes
* Modifying layout of nodes
* formatting
* update edge start / end positions
* updated handle rendering
* moved outputs down and disabled toggle
* formatting
* update font
* update key name formatting
* modify layout of input items
* updated the add property button
* feat(platform): Sync on new UI design
* simplify UI
* block list add border and remove padding
* add highlight on navbar button
* Change block header so block costs line up correctly
* fix history type issue
* formatting
* tweaking css to hide white spot
* fixed white spot
* Added context menu
* Changed status badge color
* getting error colors just right
* Added a NodeOutputs component for rendering the outputs
* tidy up
* Change Add Item Button Color
* changed cursor on hover in block control panel
* formatting
* updated formatting of tutoral and tally buttons
* fix(platform): Fix text area input not updating input field
* Address comments
* Add missing color
* fix lint errors
* Cleanup context logic
* Make inputref reliable
* Update coloring
* fix(platform): Fix unexpected closing block list on tutorial
* Add X-scrolling
* Remove excessive shadows
* Remove another excessive shadows
* Another patch patch patch
* Add border on context menu
* Cleanup executions
* Cleanup executions
* Make border darker
* Make border darker
* Fix input reset
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
- refactor(blocks): Assign new IDs to 13 blocks
- Create DB migration to update block IDs in existing DB entities
- feat(frontend): Add `updateBlockIDs` "middleware" to `AgentImportForm` loader in front end
* feat(blocks): Add AIShortformVideoCreatorBlock
- Added a new block called AIShortformVideoCreatorBlock to create shortform videos using revid.ai.
- The block takes input parameters such as script, background music, and voice ID.
- It uses the revid.ai API to create the video and waits for completion using a webhook proxy service.
- Once the video is ready, it returns the URL of the created video.
- This block takes anywhere from 5 seconds to several minutes to complete depending on the length of the video.
Add revid.ai API key to Secrets
- Added a new field in the Secrets class to store the revid.ai API key.
* refactor(blocks): Remove unused webhook code in AIShortformVideoCreatorBlock
* Add background music track options.
* Add preset voice options
* Add generation preset and visual style configuration options.
* Remove "morpher" video type due to long generation times and low quality.
Plus extend timeout cut-off.
* Add audio track configuration options.
* refactor AudioTrack selection into single class
* format
* Add test mocks
* run format
* Added Replicate Flux Blocks
* updated poetry lock file for replicate
* Refactor ReplicateFluxAdvancedModelBlock to use an enum for replicate_model_name rather than free strings.
* Refactor ReplicateFluxAdvancedModelBlock to use an enum for output_format instead of free strings
* Refactor ReplicateFluxAdvancedModelBlock to stop requiring people to type a random seed
* Refactor ReplicateFluxAdvancedModelBlock to stop requiring people to type a random seed
* run format
* poetry run format
* Delete ReplicateFluxBasicModelBlock
* Mark model name as not advanced
* tweak input order
* Fix test
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
* Feat(Builder): Add Google Maps Search Block
* format
* Updates to google maps search block
* fixes
* format + updates again
* fix for pytest
* format again
* updates based on new comments
* fix for format?
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
* feat(platform): Enhance AITextSummarizerBlock with configurable summary style and focus
The AITextSummarizerBlock in the autogpt_platform/backend/backend/blocks/llm.py file has been enhanced to include the following changes:
- Added a new enum class, SummaryStyle, with options for concise, detailed, bullet points, and numbered list styles.
- Added a new input parameter, focus, to specify the topic of the summary.
- Modified the _summarize_chunk method to include the style and focus in the prompt.
- Modified the _combine_summaries method to include the style and focus in the prompt.
These changes allow users to customize the style and focus of the generated summaries, providing more flexibility and control (a rough sketch follows).
* run formatting and linting
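A rough sketch of the enum-plus-prompt approach described above; the enum values and prompt wording are assumptions, not the exact code in `llm.py`:
```python
from enum import Enum

class SummaryStyle(Enum):
    CONCISE = "concise"
    DETAILED = "detailed"
    BULLET_POINTS = "bullet points"
    NUMBERED_LIST = "numbered list"

def build_summary_prompt(chunk: str, style: SummaryStyle, focus: str) -> str:
    # The chosen style and focus are interpolated into the summarization prompt.
    return (
        f"Summarize the following text in a {style.value} style, "
        f"focusing on {focus}:\n\n{chunk}"
    )

print(build_summary_prompt("Long text...", SummaryStyle.BULLET_POINTS, "key results"))
```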
## Config
- For Supabase, the back end needs `SUPABASE_URL`, `SUPABASE_SERVICE_ROLE_KEY`, and `SUPABASE_JWT_SECRET`
- For the GitHub integration to work, the back end needs `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- For integration OAuth flows to work in local development, the back end needs `FRONTEND_BASE_URL` to generate login URLs with accurate redirect URLs
## REST API
- Tweak output of OAuth `/login` endpoint: add `state_token` separately in response
- Add `POST /integrations/{provider}/credentials` (for API keys)
- Add `DELETE /integrations/{provider}/credentials/{cred_id}`
## Back end
- Add Supabase support to `AppService`
- Add `FRONTEND_BASE_URL` config option, mainly for local development use
### `autogpt_libs.supabase_integration_credentials_store`
- Add `CredentialsType` alias
- Add `.bearer()` helper methods to `APIKeyCredentials` and `OAuth2Credentials`
### Blocks
- Add `CredentialsField(..) -> CredentialsMetaInput`
## Front end
### UI components
- `CredentialsInput` for use on `CustomNode`: allows user to add/select credentials for a service.
- `APIKeyCredentialsModal`: a dialog for creating API keys
- `OAuth2FlowWaitingModal`: a dialog to indicate that the application is waiting for the user to log in to the 3rd party service in the provided pop-up window
- `NodeCredentialsInput`: wrapper for `CredentialsInput` with the "usual" interface of node input components
- New icons: `IconKey`, `IconKeyPlus`, `IconUser`, `IconUserPlus`
### Data model
- `CredentialsProvider`: introduces the app-level `CredentialsProvidersContext`, which acts as an application-wide store and cache for credentials metadata.
- `useCredentials` for use on `CustomNode`: uses `CredentialsProvidersContext` and provides node-specific credential data and provider-specific data/functions
- `/auth/integrations/oauth_callback` route to close the loop to the `CredentialsInput` after a user completes sign-in to the external service
- Add `BlockIOCredentialsSubSchema`
### API client
- Add `isAuthenticated` method
- Add methods for integration OAuth flow: `oAuthLogin`, `oAuthCallback`
- Add CRD methods for credentials: `createAPIKeyCredentials`, `listCredentials`, `getCredentials`, `deleteCredentials`
- Add mirrored types `CredentialsMetaResponse`, `CredentialsMetaInput`, `OAuth2Credentials`, `APIKeyCredentials`
- Add GitHub blocks + "DEVELOPER_TOOLS" category
- Add `**kwargs` to `Block.run(..)` signature to support additional kwargs
- Add support for loading blocks from nested modules (e.g. `blocks/github/issues.py`)
#### Executor
- Add strict support for `credentials` fields on blocks
- Fetch credentials for graph execution and pass them down through to the node execution (see the sketch below)
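A rough, self-contained sketch of how a block might declare a `credentials` input via `CredentialsField`, per the description above; all classes here are stand-ins defined for illustration, not the platform's actual implementations:
```python
from pydantic import BaseModel, Field

class CredentialsMetaInput(BaseModel):
    id: str
    provider: str
    type: str  # "api_key" | "oauth2"

def CredentialsField(provider: str, supported_types: list[str]):
    # Illustrative: attach provider info as JSON-schema extras so the frontend
    # CredentialsInput knows which providers/credential types to offer.
    return Field(
        json_schema_extra={
            "credentials_provider": provider,
            "credentials_types": supported_types,
        }
    )

class GithubCommentBlockInput(BaseModel):  # hypothetical block input schema
    issue_url: str
    credentials: CredentialsMetaInput = CredentialsField(
        provider="github", supported_types=["api_key", "oauth2"]
    )

print(GithubCommentBlockInput.model_json_schema()["properties"]["credentials"])
```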
* move to supabase pg instance
* remove postgres and bind supabase port
* Updated setup
- Switched db name to postgres to work with prisma studio
- Added platform schema
- Added Market migrations
- bound prisma studio port
* remove studio port
* updated .env
* updated readmes
---------
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Restructuring the Repo to make it clear the difference between classic autogpt and the autogpt platform:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
* Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
* `rnd/autogpt_builder` -> `autogpt_platform/frontend`
* `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
* update pr template wording
* add what and how
* Update .github/PULL_REQUEST_TEMPLATE.md
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
- Add two endpoints to OAuth `integrations.py`:
- `GET /integrations/{provider}/credentials` - list all credentials for a provider, without secrets (metadata only)
- `GET /integrations/{provider}/credentials/{cred_id}` - retrieve a set of credentials (including secrets)
- Add `username` property to `Credentials` types
- Add logic to populate `username` in OAuth handlers
- Expand `CredentialsMetaResponse` and remove `credentials_` prefix from properties
- Fix `autogpt_libs` dependency caching issue
- Remove accidentally duplicated OAuth handler files in `autogpt_server/integrations`
* Feat(Builder): Add Runner input and output screens
* Fix run button not working
* prettier
* prettier again -- forgot flow
* fix input scaling + auto close on run
* removed "Runner Input" button to make it auto open runner input if input block is + Fixed issue with output not showing in output UI
* replaced runner output icon and added a new icon for it
* replaced IconOutput icon with LogOut from lucide-react
* prettier
* fix type safety issue + add error handling for formatOutput
* Updates based on comments
* prettier for utils
### Background
We need a way to set an execution quota per user for each block execution.
### Changes 🏗️
* Introduced `UserBlockCredit`, a transaction table tracking block usage along with its cost/quota.
* The tracking is toggled by the `ENABLE_CREDIT` config, default = false.
* Introduced `BLOCK_COSTS` | `GET /blocks/costs` as a source of information for the cost of each block depending on the input configuration (sketched after this list).
Improvements:
* Refactor logging in manager.py to always print a prefix and pass the metadata.
* Make executionStatus on AgentNodeExecution a Prisma enum, and add executionStatus on AgentGraphExecution.
* Use executionStatus from AgentGraphExecution to improve waiting logic on test_manager.py.
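A minimal sketch of a `BLOCK_COSTS`-style lookup keyed on input configuration; block names, filters, and cost values here are illustrative:
```python
BLOCK_COSTS: dict[str, list[tuple[dict, int]]] = {
    # (input filter, cost in credits) pairs per block
    "TextGenBlock": [({"model": "gpt-4-turbo"}, 5), ({"model": "gpt-3.5"}, 1)],
}

def block_cost(block_name: str, input_data: dict) -> int:
    for cost_filter, cost in BLOCK_COSTS.get(block_name, []):
        if all(input_data.get(k) == v for k, v in cost_filter.items()):
            return cost
    return 0  # no matching cost entry -> free

assert block_cost("TextGenBlock", {"model": "gpt-4-turbo", "prompt": "hi"}) == 5
```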
- feat(server): Initial draft of OAuth init and exchange endpoints
- Add `supabase` dependency
- Add Supabase credentials to `Secrets`
- Add `get_supabase` utility to `.server.utils`
- Add `.server.integrations` API segment with initial implementations for OAuth init and exchange endpoints
- Move integration OAuth handlers to `autogpt_server.integrations.oauth`
- Change constructor of `SupabaseIntegrationCredentialsStore` to take a Supabase client
- Fix type issues in `GoogleOAuthHandler`
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
echo "Running the following command: poetry run agbenchmark --test=WriteFile"
poetry run agbenchmark --test=WriteFile
# echo "Running the following command: poetry run agbenchmark --test=WriteFile"
# poetry run agbenchmark --test=WriteFile
cd ../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
  fail-fast: false
  matrix:
    include:
      - language: typescript
        build-mode: none
      - language: python
        build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
  - name: Checkout repository
    uses: actions/checkout@v4
  # Initializes the CodeQL tools for scanning.
  - name: Initialize CodeQL
    uses: github/codeql-action/init@v3
    with:
      languages: ${{ matrix.language }}
      build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
config: |
  paths-ignore:
    - classic/frontend/build/**
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# ℹ️ Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
  shell: bash
  run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
All contributions to [the autogpt_platform folder](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform) will be under our [Contribution License Agreement](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/Contributor%20License%20Agreement%20(CLA).md). By making a pull request contributing to this folder, you agree to the terms of our CLA for your contribution. All contributions to other folders will be under the MIT license.
## In short
1. Avoid duplicate work, issues, PRs etc.
2. We encourage you to collaborate with fellow community members on some of our bigger
All portions of this repository are under one of two licenses. The majority of the AutoGPT repository is under the MIT License below. The autogpt_platform folder is under the
Polyform Shield License.
MIT License
Copyright (c) 2023 Toran Bruce Richards
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options
- Download to self-host
- [Join the Waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta
This tutorial assumes you have Docker, VSCode, git and npm installed.
### 🧱 AutoGPT Frontend
The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:
**Agent Builder:** For those who want to customize, our intuitive, low-code interface allows you to design and configure your own AI agents.
**Workflow Management:** Build, modify, and optimize your automation workflows with ease. You build your agent by connecting blocks, where each block performs a single action.
**Deployment Controls:** Manage the lifecycle of your agents, from testing to production.
**Ready-to-Use Agents:** Don't want to build? Simply select from our library of pre-configured agents and put them to work immediately.
**Agent Interaction:** Whether you've built your own or are using pre-configured agents, easily run and interact with them through our user-friendly interface.
**Monitoring and Analytics:** Keep track of your agents' performance and gain insights to continually improve your automation processes.
[Read this guide](https://docs.agpt.co/platform/new_blocks/) to learn how to build your own custom blocks.
### 💽 AutoGPT Server
The AutoGPT Server is the powerhouse of our platform. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously. It contains all the essential components that make AutoGPT run smoothly.
**Source Code:** The core logic that drives our agents and automation processes.
**Infrastructure:** Robust systems that ensure reliable and scalable performance.
**Marketplace:** A comprehensive marketplace where you can find and deploy a wide range of pre-built agents.
### 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Generate Viral Videos from Trending Topics**
   - This agent reads topics on Reddit.
   - It identifies trending topics.
   - It then automatically creates a short-form video based on the content.
2. **Identify Top Quotes from Videos for Social Media**
   - This agent subscribes to your YouTube channel.
   - When you post a new video, it transcribes it.
   - It uses AI to identify the most impactful quotes to generate a summary.
   - Then, it writes a post to automatically publish to your social media.
These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.
---
### Mission and Licensing
Our mission is to provide the tools, so that you can focus on what matters:
- 🏗️ **Building** - Lay the foundation for something amazing.
**🚀 [Contributing](CONTRIBUTING.md)**
**Licensing:**
MIT License: The majority of the AutoGPT repository is under the MIT License.
Polyform Shield License: This license applies to the autogpt_platform folder.
For more information, see https://agpt.co/blog/introducing-the-autogpt-platform
---
## 🤖 AutoGPT Classic
> Below is information about the classic version of AutoGPT.
**🛠️ [Build your own Agent - Quickstart](classic/FORGE-QUICKSTART.md)**
### 🏗️ Forge
**Forge your own agent!** – Forge is a ready-to-go toolkit to build your own agent application. It handles most of the boilerplate code, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from [`forge`](/classic/forge/) can also be used individually to speed up development and reduce boilerplate in your agent project.
🚀 [**Getting Started with Forge**](https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/forge/tutorials/001_getting_started.md) –
This guide will walk you through the process of creating your own agent and using the benchmark and user interface.
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/forge) about Forge
### 🎯 Benchmark
📦 [`agbenchmark`](https://pypi.org/project/agbenchmark/) on Pypi | 📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/benchmark) about the Benchmark
### 💻 UI
The frontend works out-of-the-box with all agents in the repo. Just use the [CLI] to run your agent of choice!
📘 [Learn More](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/frontend) about the Frontend
### ⌨️ CLI
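A typical first session with the classic CLI looks roughly like this (a sketch of the classic tooling; subcommand names may differ between versions, so check `./run --help`):
```
./run setup               # install the dependencies needed for your system
./run agent list          # list the agents available in the repo
./run agent start forge   # start an agent of your choice
```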
## Reporting Security Issues
We take the security of our project seriously. If you believe you have found a security vulnerability, please report it to us privately. **Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
> **Important Note**: Any code within the `classic/` folder is considered legacy, unsupported, and out of scope for security reports. We will not address security vulnerabilities in this deprecated code.
Instead, please use one of the following channels:
- [GitHub Security Advisory](https://github.com/Significant-Gravitas/AutoGPT/security/advisories/new)
- [Huntr.dev](https://huntr.com/repos/significant-gravitas/autogpt) - where you may be eligible for a bounty
### Reporting Process
1. **Submit Report**: Use one of the above channels to submit your report
2. **Response Time**: Our team will acknowledge receipt of your report within 14 business days.
3. **Collaboration**: We will collaborate with you to understand and validate the issue
4. **Resolution**: We will work on a fix and coordinate the release process
### Disclosure Policy
- Please provide detailed reports with reproducible steps
- Include the version/commit hash where you discovered the vulnerability
- Allow us a 90-day security fix window before any public disclosure
- Share any potential mitigations or workarounds if known
## Supported Versions
Only the following versions are eligible for security updates:
| Version | Supported |
|---------|-----------|
| Latest release on master branch | ✅ |
| Development commits (pre-master) | ✅ |
| Classic folder (deprecated) | ❌ |
| All other versions | ❌ |
## Security Best Practices
When using this project:
1. Always use the latest stable version
2. Review security advisories before updating
3. Follow our security documentation and guidelines
4. Keep your dependencies up to date
5. Do not use code from the `classic/` folder as it is deprecated and unsupported
## Past Security Advisories
For a list of past security advisories, please visit our [Security Advisory Page](https://github.com/Significant-Gravitas/AutoGPT/security/advisories) and [Huntr Disclosures Page](https://huntr.com/repos/significant-gravitas/autogpt).
## Using AutoGPT Securely
### Restrict Workspace
Since agents can read and write files, it is important to keep them restricted to a specific workspace. This happens by default *unless* `RESTRICT_TO_WORKSPACE` is set to False.
Disabling `RESTRICT_TO_WORKSPACE` can increase security risks. However, if you still need to disable it, consider running AutoGPT inside a [sandbox](https://developers.google.com/code-sandboxing) to mitigate some of these risks.
### Untrusted inputs
When handling untrusted inputs, it's crucial to isolate the execution and carefully pre-process inputs to mitigate script injection risks.
For maximum security when handling untrusted inputs, you may need to employ the following:
* Sandboxing: Isolate the process.
* Updates: Keep your libraries (including AutoGPT) updated with the latest security patches.
* Input Sanitation: Before feeding data to the model, sanitize inputs rigorously. This involves techniques such as:
* Validation: Enforce strict rules on allowed characters and data types.
* Filtering: Remove potentially malicious scripts or code fragments.
* Encoding: Convert special characters into safe representations.
* Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
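As a minimal sketch of the validation, filtering, and encoding steps above (the `sanitize_prompt` helper is hypothetical, not part of AutoGPT):
```python
import html
import re

# Validation: whitelist of characters accepted from untrusted input.
ALLOWED_CHARS = re.compile(r"[\w\s.,:;!?'\"()-]*")
# Filtering: a crude pattern for embedded script fragments.
SCRIPT_FRAGMENT = re.compile(r"<script.*?>.*?</script>", re.IGNORECASE | re.DOTALL)

def sanitize_prompt(raw: str, max_length: int = 2000) -> str:
    """Pre-process untrusted text before it reaches the model."""
    text = SCRIPT_FRAGMENT.sub("", raw[:max_length])  # enforce a size limit, strip scripts
    if ALLOWED_CHARS.fullmatch(text) is None:         # reject anything outside the whitelist
        raise ValueError("input contains disallowed characters")
    return html.escape(text)                          # encode special characters safely
```
Note that `<` and `>` are deliberately absent from the whitelist, so any markup that survives the filtering step is rejected outright rather than passed through.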
### Data privacy
To protect sensitive data from potential leaks or unauthorized access, it is crucial to sandbox the agent execution. This means running it in a secure, isolated environment, which helps mitigate many attack vectors.
### Untrusted environments or networks
Since AutoGPT performs network calls to the OpenAI API, it is important to always run it in trusted environments and networks. Running it in untrusted environments can expose your API key to attackers.
Additionally, running it on an untrusted network can expose your data to potential network attacks.
However, even when running on trusted networks, it is important to always encrypt sensitive data while sending it over the network.
### Multi-Tenant environments
If you intend to run multiple AutoGPT brains in parallel, it is your responsibility to ensure the models do not interact or access each other's data.
The primary areas of concern are tenant isolation, resource allocation, model sharing and hardware attacks.
- Tenant Isolation: you must make sure that the tenants run separately to prevent unwanted access to the data from other tenants. Keeping model network traffic separate is also important because it not only prevents unauthorized access to data, but also prevents malicious users or tenants from sending prompts to execute under another tenant's identity.
- Resource Allocation: a denial of service caused by one tenant can affect the overall system health. Implement safeguards like rate limits, access controls, and health monitoring.
- Data Sharing: in a multi-tenant design with data sharing, ensure tenants and users understand the security risks and sandbox agent execution to mitigate risks.
- Hardware Attacks: the hardware (GPUs or TPUs) can also be attacked. [Research](https://scholar.google.com/scholar?q=gpu+side+channel) has shown that side-channel attacks on GPUs are possible, which can leak data from other brains or processes running on the same system at the same time.
## Reporting a Vulnerability
Beware that none of the topics under [Using AutoGPT Securely](#using-autogpt-securely) are considered vulnerabilities in AutoGPT.
However, if you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public exposure, reducing the chance that the exploit will be used before a patch is released.
Please disclose it as a private [security advisory](https://github.com/Significant-Gravitas/AutoGPT/security/advisories/new).
This project is maintained by a team of volunteers on a reasonable-effort basis. As such, please give us at least 90 days to work on a fix before public exposure.
Thank you for your interest in the AutoGPT open source project at [https://github.com/Significant-Gravitas/AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) stewarded by Determinist Ltd (“**Determinist**”), with offices at 3rd Floor 1 Ashley Road, Altrincham, Cheshire, WA14 2DT, United Kingdom. The form of license below is a document that clarifies the terms under which You, the person listed below, may contribute software code described below (the “**Contribution**”) to the project. We appreciate your participation in our project, and your help in improving our products, so we want you to understand what will be done with the Contributions. This license is for your protection as well as the protection of Determinist and its licensees; it does not change your rights to use your own Contributions for any other purpose.
By submitting a Pull Request which modifies the content of the “autogpt\_platform” folder at [https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt\_platform](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt_platform), You hereby agree:
1\. **You grant us the ability to use the Contributions in any way**. You hereby grant to Determinist a non-exclusive, irrevocable, worldwide, royalty-free, sublicenseable, transferable license under all of Your relevant intellectual property rights (including copyright, patent, and any other rights), to use, copy, prepare derivative works of, distribute and publicly perform and display the Contributions on any licensing terms, including without limitation: (a) open source licenses like the GNU General Public License (GPL), the GNU Lesser General Public License (LGPL), the Common Public License, or the Berkeley Science Division license (BSD); and (b) binary, proprietary, or commercial licenses.
2\. **Grant of Patent License**. You hereby grant to Determinist a worldwide, non-exclusive, royalty-free, irrevocable, license, under any rights you may have, now or in the future, in any patents or patent applications, to make, have made, use, offer to sell, sell, and import products containing the Contribution or portions of the Contribution. This license extends to patent claims that are infringed by the Contribution alone or by combination of the Contribution with other inventions.
3\. **The Contributions are your original work**. You represent that the Contributions are Your original works of authorship, and to Your knowledge, no other person claims, or has the right to claim, any right in any invention or patent related to the Contributions. You also represent that You are not legally obligated, whether by entering into an agreement or otherwise, in any way that conflicts with the terms of this license. For example, if you have signed an agreement requiring you to assign the intellectual property rights in the Contributions to an employer or customer, that would conflict with the terms of this license.
4\. **Limitations on Licenses**. The licenses granted in this Agreement will continue for the duration of the applicable patent or intellectual property right under which such license is granted. The licenses granted in this Agreement will include the right to grant and authorize sublicenses, so long as the sublicenses are within the scope of the licenses granted in this Agreement. Except for the licenses granted herein, You reserve all right, title, and interest in and to the Contribution.
5\. **You are able to grant us these rights**. You represent that You are legally entitled to grant the above license. If Your employer has rights to intellectual property that You create, You represent that You are authorized to make the Contributions on behalf of that employer, or that Your employer has waived such rights for the Contributions.
6\. **We determine the code that is in our products**. You understand that the decision to include the Contribution in any product or source repository is entirely that of Determinist, and this agreement does not guarantee that the Contributions will be included in any product.
7\. **No Implied Warranties.** Determinist acknowledges that, except as explicitly described in this Agreement, the Contribution is provided on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
Welcome to the AutoGPT Platform - a powerful system for creating and running AI agents to solve business problems. This platform enables you to harness the power of artificial intelligence to automate tasks, analyze data, and generate insights for your organization.
## Getting Started
### Prerequisites
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
To run the AutoGPT Platform, follow these steps:
1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:
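If you have not cloned the repository yet, the following commands should work (assuming the public repository URL; adjust if you use a fork):
```
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform
```
2. Run the following command:
```
git submodule update --init --recursive
```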
This command will initialize and update the submodules in the repository. The `supabase` folder will be cloned to the root directory.
3. Run the following command:
```
cp supabase/docker/.env.example .env
```
This command will copy the `supabase/docker/.env.example` file to `.env` in the current (`autogpt_platform`) directory. You can modify the `.env` file to add your own environment variables.
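Two values you will usually want to change are the Postgres password and the JWT secret (variable names as they appear in Supabase's example file; check your copy for the full list):
```
POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
```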
4. Run the following command:
```
docker compose up -d
```
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
5. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
6. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` file within this folder to add your own environment variables for the frontend application.
7. Run the following command:
```
npm install
npm run dev
```
This command will install the necessary dependencies and start the frontend application in development mode.
If you are using Yarn, you can run the following commands instead:
```
yarn install && yarn dev
```
8. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands
Here are some useful Docker Compose commands for managing your AutoGPT Platform:
- `docker compose up -d`: Start the services in detached mode.
- `docker compose stop`: Stop the running services without removing them.
- `docker compose rm`: Remove stopped service containers.
- `docker compose build`: Build or rebuild services.
- `docker compose down`: Stop and remove containers, networks, and volumes.
- `docker compose watch`: Watch for changes in your services and automatically update them.
### Sample Scenarios
Here are some common scenarios where you might use multiple Docker Compose commands:
1. Updating and restarting a specific service:
```
docker compose build api_srv
docker compose up -d --no-deps api_srv
```
This rebuilds the `api_srv` service and restarts it without affecting other services.
2. Viewing logs for troubleshooting:
```
docker compose logs -f api_srv ws_srv
```
This shows and follows the logs for both `api_srv` and `ws_srv` services.
3. Scaling a service for increased load:
```
docker compose up -d --scale executor=3
```
This scales the `executor` service to 3 instances to handle increased load.
4. Stopping the entire system for maintenance:
```
docker compose stop
docker compose rm -f
docker compose pull
docker compose up -d
```
This stops all services, removes containers, pulls the latest images, and restarts the system.
5. Developing with live updates:
```
docker compose watch
```
This watches for changes in your code and automatically updates the relevant services.
6. Checking the status of services:
```
docker compose ps
```
This shows the current status of all services defined in your docker-compose.yml file.
These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.
### Persisting Data
To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml` file to add volumes. Here's how:
1. Open the `docker-compose.yml` file in a text editor.
2. Add volume configurations for PostgreSQL and Redis services:
```yaml
services:
  postgres:
    # ... other configurations ...
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    # ... other configurations ...
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```
3. Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
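To verify that the volumes exist, you can inspect them (the exact volume name is prefixed with your Compose project name, typically the directory name, so match it against the `ls` output):
```
docker volume ls
docker volume inspect autogpt_platform_postgres_data
```
Note that `docker compose down -v` removes these named volumes together with their data; use plain `docker compose down` if you want the data to survive.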
description="Enter your Replicate API key to access the image generation API. You can obtain an API key from https://replicate.com/account/api-tokens.",
)
)
prompt:str=SchemaField(
description="Text prompt for image generation",
placeholder="e.g., 'A red panda using a laptop in a snowy forest'",
title="Prompt",
)
model:ImageGenModel=SchemaField(
description="The AI model to use for image generation",
default=ImageGenModel.SD3_5,
title="Model",
)
size:ImageSize=SchemaField(
description=(
"Format of the generated image:\n"
"- Square: Perfect for profile pictures, icons\n"
"- Landscape: Traditional photo format\n"
"- Portrait: Vertical photos, portraits\n"
"- Wide: Cinematic format, desktop wallpapers\n"
"- Tall: Mobile wallpapers, social media stories"
),
default=ImageSize.SQUARE,
title="Image Format",
)
style:ImageStyle=SchemaField(
description="Visual style for the generated image",
default=ImageStyle.ANY,
title="Image Style",
)
classOutput(BlockSchema):
image_url:str=SchemaField(description="URL of the generated image")
error:str=SchemaField(description="Error message if generation failed")
def__init__(self):
super().__init__(
id="ed1ae7a0-b770-4089-b520-1f0005fad19a",
description="Generate images using various AI models through a unified interface",
categories={BlockCategory.AI},
input_schema=AIImageGeneratorBlock.Input,
output_schema=AIImageGeneratorBlock.Output,
test_input={
"credentials":TEST_CREDENTIALS_INPUT,
"prompt":"An octopus using a laptop in a snowy forest with 'AutoGPT' clearly visible on the screen",
description="The revid.ai integration can be used with "
"any API key with sufficient permissions for the blocks it is used on.",
)
)
script:str=SchemaField(
description="""1. Use short and punctuated sentences\n\n2. Use linebreaks to create a new clip\n\n3. Text outside of brackets is spoken by the AI, and [text between brackets] will be used to guide the visual generation. For example, [close-up of a cat] will show a close-up of a cat.""",
placeholder="[close-up of a cat] Meow!",
)
ratio:str=SchemaField(
description="Aspect ratio of the video",default="9 / 16"
)
resolution:str=SchemaField(
description="Resolution of the video",default="720p"
)
frame_rate:int=SchemaField(description="Frame rate of the video",default=60)
generation_preset:GenerationPreset=SchemaField(
description="Generation preset for visual style - only effects AI generated visuals",
default=GenerationPreset.LEONARDO,
placeholder=GenerationPreset.LEONARDO,
)
background_music:AudioTrack=SchemaField(
description="Background music track",
default=AudioTrack.HIGHWAY_NOCTURNE,
placeholder=AudioTrack.HIGHWAY_NOCTURNE,
)
voice:Voice=SchemaField(
description="AI voice to use for narration",
default=Voice.LILY,
placeholder=Voice.LILY,
)
video_style:VisualMediaType=SchemaField(
description="Type of visual media to use for the video",
default=VisualMediaType.STOCK_VIDEOS,
placeholder=VisualMediaType.STOCK_VIDEOS,
)
classOutput(BlockSchema):
video_url:str=SchemaField(description="The URL of the created video")
error:str=SchemaField(description="Error message if the request failed")
def__init__(self):
super().__init__(
id="361697fb-0c4f-4feb-aed3-8320c88c771b",
description="Creates a shortform video using revid.ai",
scope: The authorization scope needed for the block to work. ([list of available scopes](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps#available-scopes))
description="The GitHub integration can be used with OAuth, "
"or any API key with sufficient permissions for the blocks it is used on.",
)
TEST_CREDENTIALS=APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="github",
api_key=SecretStr("mock-github-api-key"),
title="Mock GitHub API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT={
"provider":TEST_CREDENTIALS.provider,
"id":TEST_CREDENTIALS.id,
"type":TEST_CREDENTIALS.type,
"title":TEST_CREDENTIALS.type,
}