Compare commits


429 Commits

Author SHA1 Message Date
Aarushi
e94654aa11 fix typo 2024-09-20 10:22:50 +01:00
Aarushi
e9de43b0d2 all paths 2024-09-20 10:20:14 +01:00
Aarushi
de8add92e7 push 2024-09-20 10:18:47 +01:00
Aarushi
597cbdcc58 updated docker ci file 2024-09-20 10:17:42 +01:00
Aarushi
e3f35d79c7 tweak(.github): Update pr template wording (#8103)
* update pr template wording

* add what and how

* Update .github/PULL_REQUEST_TEMPLATE.md

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-09-19 12:44:50 +00:00
Aarushi
0040495143 tweak(.github): Update PR template (#8100)
* update PR template

* Update .github/PULL_REQUEST_TEMPLATE.md

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>

* add note

* typo

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
2024-09-19 13:00:16 +01:00
Aarushi
d3eac86f9a fix(frontend): Update REST API port (#8096)
update server port to 8006

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-09-19 01:06:04 +02:00
Zamil Majdy
c3cb90ac20 feat(rnd): Add initial block execution credit accounting UI on AutoGPT Builder (#8078) 2024-09-19 04:21:40 +07:00
matanm
9b5bf81d7c Fix typo in Groq setup docs (#8018)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-09-18 20:22:57 +00:00
Nicholas Tindle
86db4deef9 feat(server): backend analytics endpoints (#8030) 2024-09-18 18:23:20 +00:00
Aarushi
d8f989daf8 docs(rnd): Update submodules info in readme (#8095)
update submodules info in readme
2024-09-18 18:59:23 +01:00
Aarushi
00f2b134cb tweak(rnd): add env var to docker compose so no messing with .env (#8091)
add env var to docker compose so no messing with .env
2024-09-18 16:39:15 +01:00
Aarushi
a3959712dc tweak(builder): Update .env.example server url with right port (#8090)
update server url with right port
2024-09-18 15:51:44 +01:00
Aarushi
8477b25c5a tweak(builder): Add local supabase credentials (#8089)
add local supabase credentials
2024-09-18 15:45:09 +01:00
Swifty
f133c9c1ef fix(rnd): incorrect docker image for migrate (#8086)
fix incorrect docker image for migrate
2024-09-18 15:21:38 +02:00
Aarushi
dc72ec97bc feat(rnd): Add support for supabase locally (#8077)
* add just auth for now

* add supabase script

* add to docker compose

* update docker compose

* tweak(rnd) Add prefix in logs (#8001)

* add prefix

* fix typos

* fix conflicts

* feat(rnd): Reduce container size remove dep with forge and autogpt (#8040)

* Remove forge and autogpt

* update lock files

* Update build process to reduce image size

* Reduced built image size

* fixed docker compose watch

* Updated logging

* updated env.example

* formatting

* linting issue

* linting not working in github actions..

* trying to get around github action linting issue

* updated version

* sleep for prisma issues

* add exp backoff on connection issues

* updated config based on review comments

* Sorting alphabetical

* updated default config

* updated depends checks

* fixed missing prisma binaries

* remove dead layer

* remove try

* remove dead layer

* updated lock file

* add to docker compose

* update for init

* add local supabase variables to docker compose

* WIP supabase connection

* supabase submodule

* combined docker file with new supabase url pointing to kong

* updated combined

* nginx

* updated docker compose without frontend

* updated docker compose

* update to remove frontend

* update docs

* update newline

* remove unnecessary change

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-09-18 09:50:39 +01:00
Nicholas Tindle
0c915cb558 feat(server): anthropic updates, csv, sampling, and code blocks (#7803)
Co-authored-by: Bentlybro <tomnoon9@gmail.com>
2024-09-17 21:29:35 -05:00
Nicholas Tindle
f6ab15db47 feat(market): add filters to the market queries (#8064) 2024-09-17 14:59:25 +00:00
Krzysztof Czerwinski
80161decb9 feat(server): Add credentials API endpoints (#8024)
- Add two endpoints to OAuth `integrations.py`:
  - `GET /integrations/{provider}/credentials` - list all credentials for a provider, without secrets (metadata only)
   - `GET /integrations/{provider}/credentials/{cred_id}` - retrieve a set of credentials (including secrets)

- Add `username` property to `Credentials` types
   - Add logic to populate `username` in OAuth handlers

- Expand `CredentialsMetaResponse` and remove `credentials_` prefix from properties

- Fix `autogpt_libs` dependency caching issue

- Remove accidentally duplicated OAuth handler files in `autogpt_server/integrations`
2024-09-17 11:16:16 +00:00
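
A hedged sketch of the two routes above in FastAPI; the in-memory store, field names, and response shapes are hypothetical stand-ins, not the repository's actual handlers or models:

```python
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/integrations")

# Hypothetical in-memory store standing in for SupabaseIntegrationCredentialsStore.
_CREDS = {
    ("github", "cred-1"): {"id": "cred-1", "username": "octocat", "access_token": "s3cret"},
}

@router.get("/{provider}/credentials")
def list_credentials(provider: str) -> list[dict]:
    # Metadata only: secrets are stripped from the listing response.
    return [
        {k: v for k, v in cred.items() if k != "access_token"}
        for (prov, _), cred in _CREDS.items()
        if prov == provider
    ]

@router.get("/{provider}/credentials/{cred_id}")
def get_credentials(provider: str, cred_id: str) -> dict:
    cred = _CREDS.get((provider, cred_id))
    if cred is None:
        raise HTTPException(status_code=404, detail="unknown credentials")
    return cred  # full set of credentials, including secrets
```
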
Swifty
0bf8edcd96 fix(autogpt_server): Fix vulnerability in Dockerfile (#8071) 2024-09-17 11:37:22 +01:00
Zamil Majdy
b1347a92de fix(rnd): Fix execution error on non-saved agent (#8054) 2024-09-16 19:35:31 +00:00
Nicholas Tindle
22ce8e0047 feat(builder): sentry integration (#8053) 2024-09-16 23:19:52 +07:00
Bently
5a7193cfb7 Feat(Builder): Add Runner input and output screens (#8038)
* Feat(Builder): Add Runner input and output screens

* Fix run button not working

* prettier

* prettier again -- forgot flow

* fix input scaling + auto close on run

* removed "Runner Input" button to make it auto open runner input if input block is  + Fixed issue with output not showing in output UI

* replaced runner output icon and added a new icon for it

* replaced IconOutput icon with LogOut from lucide-react

* prettier

* fix type safety issue + add error handling for formatOutput

* Updates based on comments

* prettier for utils
2024-09-16 13:05:07 +02:00
Zamil Majdy
c1f301ab8b feat(rnd): Add initial credit accounting system for block execution (#8047)
### Background

We need a way to set an execution quota per user for each block execution.

### Changes 🏗️

* Introduced a `UserBlockCredit`, a transaction table tracking block usage along with its cost/quota.
* The tracking is toggled by the `ENABLE_CREDIT` config, default = false.
* Introduced `BLOCK_COSTS` | `GET /blocks/costs` as a source of information for the cost of each block depending on its input configuration.

Improvements:
* Refactor logging in manager.py to always print a prefix and pass the metadata.
* Make executionStatus on AgentNodeExecution prisma enum. And add executionStatus on AgentGraphExecution.
* Use executionStatus from AgentGraphExecution to improve waiting logic on test_manager.py.
2024-09-14 23:47:28 +07:00
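
A minimal sketch of the accounting flow this PR describes. Only the names `UserBlockCredit`, `ENABLE_CREDIT`, and `BLOCK_COSTS` come from the PR text; the cost-matching rule and the transaction write are assumptions for illustration:

```python
ENABLE_CREDIT = False  # tracking is toggled off by default, per the PR

# Assumed shape: block id -> list of (input filter, cost) pairs, so the cost
# can depend on the input configuration (e.g. which model an LLM block uses).
BLOCK_COSTS = {
    "llm-block": [({"model": "gpt-4"}, 5), ({}, 1)],
}

def cost_of(block_id: str, inputs: dict) -> int:
    for input_filter, cost in BLOCK_COSTS.get(block_id, []):
        if all(inputs.get(k) == v for k, v in input_filter.items()):
            return cost
    return 0

def record_block_usage(user_id: str, block_id: str, inputs: dict) -> None:
    if not ENABLE_CREDIT:
        return
    cost = cost_of(block_id, inputs)
    # Here a UserBlockCredit transaction row (user_id, block_id, -cost)
    # would be inserted to track usage against the user's quota.
    print(f"charge user {user_id}: block {block_id} costs {cost}")
```
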
Zamil Majdy
f32244a112 fix(rnd): Fix broken save feature on Agent Builder (#8052) 2024-09-13 18:04:51 -05:00
Aarushi
9395706841 fix(rnd,market): Fix docker issues with market, and DB connection (#8050)
fix docker issues with market, and DB connection
2024-09-13 16:15:06 +01:00
SwiftyOS
a98677b79d Revert "updated lock file"
This reverts commit 056eb46c0f.
2024-09-12 17:59:39 +02:00
SwiftyOS
056eb46c0f updated lock file 2024-09-12 16:18:13 +02:00
Swifty
6fde030c37 feat(rnd): Reduce container size remove dep with forge and autogpt (#8040)
* Remove forge and autogpt

* update lock files

* Update build process to reduce image size

* Reduced built image size

* fixed docker compose watch

* Updated logging

* updated env.example

* formatting

* linting issue

* linting not working in github actions..

* trying to get around github action linting issue

* updated version

* sleep for prisma issues

* add exp backoff on connection issues

* updated config based on review comments

* Sorting alphabetical

* updated default config

* updated depends checks

* fixed missing prisma binaries

* remove dead layer

* remove try

* remove dead layer
2024-09-12 13:03:37 +02:00
Aarushi
bf1e01d423 tweak(rnd) Add prefix in logs (#8001)
* add prefix

* fix typos

* fix conflicts
2024-09-12 11:48:47 +01:00
Zamil Majdy
52c731abd6 fix(rnd): Fix decorator function type hint (#8043) 2024-09-12 05:35:33 +07:00
Aarushi
c8fbce643e fix(rnd): Add connection timeout (#8041)
add connection timeout
2024-09-11 18:32:07 +01:00
Swifty
6c001bd595 Create Input Node Custom UI Node (#8016) 2024-09-11 14:14:03 +02:00
Aarushi
f5b89672f8 feat(rnd): Add k8s default health check (#8037)
add k8s default health check
2024-09-11 12:30:34 +01:00
Aarushi
76480ffa03 fix(rnd): Update port in market (#8036)
update port
2024-09-11 12:19:11 +01:00
Aarushi
ab60a57379 tweak(rnd): Ignore .env in market (#8035)
ignore .env
2024-09-11 11:01:34 +01:00
Aarushi
1d9b01fc77 tweak(rnd): Use docker compose not docker-compose (#8034)
* use docker compose not docker-compose

* linting
2024-09-11 10:20:31 +01:00
Swifty
e81d9f9f0b docker nits (#8033) 2024-09-11 10:31:12 +02:00
Bentlybro
0d5d0270ea Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT 2024-09-10 18:24:53 +01:00
SwiftyOS
bd25f9223c expose scheduler port and fix marketplace port 2024-09-10 17:09:39 +02:00
SwiftyOS
07305b55ff fix(rnd) use migrate deploy 2024-09-10 16:18:42 +02:00
SwiftyOS
cdfe3e5fbc Updated rnd/README.md 2024-09-10 16:16:16 +02:00
Swifty
e992cdf8c2 Fixing docker setup for local testing (#8026)
* Fixing docker setup

* Updated docker compose setup

* update helm charts

* Corrected agent server host name
2024-09-10 15:46:22 +02:00
Aarushi
ebd2ecd84c docs(server): Update docs (#8031)
update docs
2024-09-10 10:24:54 +01:00
Aarushi
0b919522ae feat(rnd): Split Execution Manager (#8008)
* split execution manager, removed ns, and use direct URI with k8s- and docker-specific DNS

* formatting

* split execution manager

* refactor(builder): Fix linting warning and errors (#8021)

* Fix lint errors

* Fix dependency loop

* address feedback

* docker compose

* remove ns entirely

* remove yarn lock changes

* update readme

* remove ref

* dockerfile and log

* update log

* debug

* rename to executor

* remove execution from rest

* exec.py

* linting

* update tests to use config

* fix test

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
2024-09-10 10:05:31 +01:00
Nicholas Tindle
ef691359b7 feat: document the use of isolation better (#8028) 2024-09-09 20:55:05 +00:00
Aarushi
f8815c3053 fix(builder): Use escaped apostrophe (#8027)
* use escaped apostrophe

* linter
2024-09-09 18:08:32 +00:00
Reinier van der Leer
a60ed21404 feat(server): Add OAuth flow endpoints for integrations (#7872)
- feat(server): Initial draft of OAuth init and exchange endpoints
  - Add `supabase` dependency
  - Add Supabase credentials to `Secrets`
  - Add `get_supabase` utility to `.server.utils`
  - Add `.server.integrations` API segment with initial implementations for OAuth init and exchange endpoints
- Move integration OAuth handlers to `autogpt_server.integrations.oauth`
- Change constructor of `SupabaseIntegrationCredentialsStore` to take a Supabase client
- Fix type issues in `GoogleOAuthHandler`
2024-09-09 17:21:56 +02:00
Krzysztof Czerwinski
2618d1d87c refactor(builder): Fix linting warning and errors (#8021)
* Fix lint errors

* Fix dependency loop
2024-09-09 09:54:36 +02:00
Andy Hooker
e17ea22a0a feat(builder): Aligning error and loading for NextJS best practices (#7894)
* feat(builder): Add skeleton loading components for Monitor views

Introduce skeleton components for Agents, Flow Runs List, and Flow Runs Status sections to enhance loading state indication. These components help improve user experience by visually outlining content placeholders while data is being fetched.

* feat(builder): Leveraging NextJS's error boundary with error.tsx

Replace the basic error page with a more detailed and interactive error component. The new component includes a retry option, a link to the homepage, and logs the error details to the console. It also aligns with NextJS standards

---------
2024-09-08 12:47:36 +02:00
Nicholas Tindle
60669903a0 fix: couple block categories improved (#8017) 2024-09-06 22:47:40 -05:00
Zamil Majdy
b1b31390a4 feat(rnd): Add creation & update time for AgentGraphExecutionSchedule, AgentGraphExecution, and AgentGraph (#8015) 2024-09-06 21:29:53 +00:00
Nicholas Tindle
3c12a398ae feat: marketplace analytics (#7998) 2024-09-06 19:22:24 +00:00
Aarushi
126d070396 tweak(rnd,redis) Make redis logging more clear (#8014)
make redis logging more clear
2024-09-06 15:23:33 +00:00
Reinier van der Leer
090f22b05c fix(server): Improve logging consistency (#8012)
- Make process/service startup/shutdown messages consistent
- Configure `uvicorn` to use our logging config instead of its own
- Replace `print(..)` statements in ws_api.py with log statements
- Improve log statements in ws_api.py
2024-09-06 17:05:30 +02:00
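
For the uvicorn point above, the mechanism is passing a `log_config` so uvicorn's loggers reuse the application's handlers and format instead of uvicorn's defaults. A minimal sketch, with an illustrative config rather than the project's actual one:

```python
import logging.config
import uvicorn

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "%(asctime)s %(levelname)-8s %(name)s  %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "default"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING_CONFIG)          # app-wide logging setup
uvicorn.run("app:app", log_config=LOGGING_CONFIG)  # uvicorn reuses the same config
```
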
Reinier van der Leer
1b9adf5434 fix(server): Always JSON-encode AgentNodeExecutionInputOutput data (#8010)
- Handle JSON-encoding inside `.data.execution.upsert_execution_output(..)` to ensure it is always encoded the same
- Amend `.executor.manager.execute_node(..)` to pass unencoded data into `upsert_execution_output(..)`
2024-09-06 16:58:04 +02:00
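
A minimal sketch of the pattern in this fix: JSON encoding happens in exactly one place, the write path, so callers hand over raw Python values and storage always holds one canonical encoding. Function and argument names follow the summary above; the storage call is a placeholder:

```python
import json
from typing import Any

def upsert_execution_output(node_exec_id: str, output_name: str, data: Any) -> None:
    encoded = json.dumps(data)  # the one and only encoding step
    # Placeholder for the actual AgentNodeExecutionInputOutput upsert:
    print(f"store ({node_exec_id}, {output_name}) = {encoded}")

# Callers such as execute_node now pass unencoded data:
upsert_execution_output("exec-1", "result", {"answer": 42})
```
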
Reinier van der Leer
3bd8040d6a feat(server): Clean up resources when spinning down services/processes (#7938)
- Add SIGTERM handler and `cleanup()` hook to `AppProcess`
- Implement `cleanup()` on `AppService` to close DB and Redis connections
- Implement `cleanup()` on `ExecutionManager` to shut down worker pool
- Add `atexit` and SIGTERM handlers to node executor to close DB connection and shut down node workers
- Improve logging in `.executor.manager`
- Fix shutdown order of `.util.test:SpinTestServer`
2024-09-06 16:50:59 +02:00
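
A hedged sketch of the shutdown pattern this PR adds: register one idempotent `cleanup()` for both a SIGTERM delivery and a normal interpreter exit, so resources (DB/Redis connections, worker pools) are released either way. The body is illustrative:

```python
import atexit
import signal
import sys

_cleaned_up = False

def cleanup():
    global _cleaned_up
    if _cleaned_up:          # guard: the SIGTERM path also triggers atexit
        return
    _cleaned_up = True
    # close DB and Redis connections, shut down worker pools, etc.
    print("resources released")

def _handle_sigterm(signum, frame):
    cleanup()
    sys.exit(0)

signal.signal(signal.SIGTERM, _handle_sigterm)
atexit.register(cleanup)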
Aarushi
b12dba13f4 docs(readme) Update readme to include non-Docker steps (#8013)
update readme
2024-09-06 15:33:17 +01:00
Swifty
2cae9ba8da feat(server): Updated Output block (#7997)
* Add Block UI Types and StickyNote Block

* Renamed StickyNote to Note

* Add comment

* Updated Input Block

* rename default_values to placeholder_values

* Update sample graph

* Update input block output to match previous change

* fixing test

* Updated the output block

* re-remove old output block
2024-09-06 15:56:41 +02:00
Swifty
3753906482 feat(builder): Block UI Types and StickyNote (#7994)
* Add Block UI Types and StickyNote Block

* Renamed StickyNote to Note

* Add comment
2024-09-06 15:52:03 +02:00
Swifty
fd54ad8666 update(server): update input nodes (#7996)
* Add Block UI Types and StickyNote Block

* Updated Input Block
2024-09-06 13:27:37 +01:00
Zamil Majdy
e645cc4b33 tweak(rnd): Remove duplicated call 2024-09-05 17:20:55 -05:00
Zamil Majdy
010a8ffaaf tweak(rnd): Stop publishing custom Sentry metrics (#8000) 2024-09-05 21:51:00 +00:00
Zamil Majdy
2df325d033 fix(rnd): Fix prisma connection acquisition intermittent error on linux (#7999) 2024-09-06 03:22:25 +07:00
Nicholas Tindle
79ebc4c13b feat(builder): promotion/demotion of featured agents (#7932) 2024-09-05 20:04:11 +00:00
Zamil Majdy
e5eb42d84a tweak(rnd): Post infra change cleanup - fix process creation lifecycle (#7981) 2024-09-06 01:41:24 +07:00
Bently
d62b940baf Feat(Monitor/Builder): Fix loading of "Import from file" modal (#7976)
* Feat(Monitor/Builder): Fix loading of "Import from file" modal

* prettier

* prettier - forgot page.tsx
2024-09-05 15:02:48 +00:00
Reinier van der Leer
8fd22bcfd7 feat(server, builder): Implement "STOP" button for graph runs (#7892)
- feat(builder): Add "Stop Run" buttons to monitor and builder
  - Implement additional state management in `useAgentGraph` hook
    - Add "stop" request mechanism
    - Implement execution status tracking using WebSockets
    - Add `isSaving`, `isRunning`, `isStopping` outputs
    - Add `requestStopRun` method
      - Rename `requestSaveRun` to `requestSaveAndRun` for clarity
  - Add needed functionality for the above to `AutoGPTServerAPI` client
    - Add `stopGraphExecution` method
    - Add support for multiple handlers per WebSocket method
    - Fix parsing of timestamps in `execution_event` WebSocket messages
  - Add `IconSquare` from Lucide to `@/components/ui/icons`

- feat(server): Add `POST /graphs/{graph_id}/executions/{graph_exec_id}/stop` route
  - Add `stop_graph_run` method to `AgentServer`

- feat(server): Add `cancel_execution` method to `ExecutionManager`
  - Replace node executor `ProcessPoolExecutor` by `multiprocessing.Pool` (which has a `terminate()` method)
    - Remove now unnecessary `Executor.wait_future(..)` method
  - Add `get_graph_execution(..)` in `.data.execution`

- fix(server): Reduce number of node executors to 5 per graph executor
  This is necessary because `multiprocessing.Pool` spawns its workers on init, instead of based on demand like `ProcessPoolExecutor` does

- dx(server): Improve debug logging in `ExecutionManager`
- ci(server): Add debug logging mode to CI Pytest step

### Other improvements
Server:
- Improve output type of `ExecutionManager.add_execution(..)`
- Renamed a few things in `.server.rest_api` for consistency

Front end:
- Improved typing in `AutoGPTServerAPI` client
2024-09-05 14:42:28 +02:00
Bently
11827835a0 Feat(Builder): Add tooltips to Blocks and Save buttons (#7975) 2024-09-05 12:04:29 +00:00
Zamil Majdy
70fab8711a fix(rnd): avoid duplicating name on input/output pin for blocks (#7979) 2024-09-05 10:54:02 +00:00
Swifty
8ec015ba72 fix(builder): update tutorial routing to work on safari (#7992) 2024-09-05 10:27:15 +00:00
SwiftyOS
bc7d2f0f37 added test password as default for redis queue 2024-09-05 11:41:53 +02:00
Zamil Majdy
54694709bb tweak(rnd): Hide non-required fields on Blocks (#7977)
Add prompt_values for LLM block, and make it non-advanced.
Make all fields advanced by default.
Clean up .env.example and .env.template
2024-09-05 05:00:14 +07:00
Zamil Majdy
b4b5a09b6b fix(rnd): Dockerfile Avoid full rebuild on each file change (#7971)
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-09-04 17:30:13 +00:00
Reinier van der Leer
82239dd129 fix(server): Fix node input concurrency deadlock (#7936)
In `autogpt_server.util.lock:KeyedMutex`:
- track number of pending requests for each lock
- only remove a lock from `self.locks` when the number of pending lock requests hits 0
2024-09-04 18:13:35 +02:00
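
A minimal sketch of the fix, assuming a `KeyedMutex` that hands out one `threading.Lock` per key: counting pending requests and evicting a lock only when that count reaches zero ensures two callers can never hold two distinct lock objects for the same key.

```python
import threading
from typing import Any

class KeyedMutex:
    """One mutex per key; evict a lock only when no request is pending on it."""

    def __init__(self):
        self.locks: dict[Any, tuple[threading.Lock, int]] = {}
        self._registry_lock = threading.Lock()  # guards self.locks itself

    def lock(self, key: Any) -> None:
        with self._registry_lock:
            lock, pending = self.locks.get(key, (threading.Lock(), 0))
            self.locks[key] = (lock, pending + 1)  # track pending requests
        lock.acquire()

    def unlock(self, key: Any) -> None:
        with self._registry_lock:
            lock, pending = self.locks[key]
            if pending == 1:
                del self.locks[key]  # last user gone: safe to evict
            else:
                self.locks[key] = (lock, pending - 1)
        lock.release()
```
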
Aarushi
078ad29356 update docker compose (#7972) 2024-09-04 16:19:41 +01:00
Aarushi
5000aa7ee0 tweak(rnd,docker) Remove SQLite (#7966)
* move migrations, update networking and dockerignore

* update docs

* remove sqlite from ci

* remove schema linting checks

* fix formatting

* remove schema linting

* add test script

* formatting and linting

* stop pg not down

* separate test db

* diff port

* remove duplicate
2024-09-04 10:18:57 +01:00
Bently
dc1077f893 Feat(Builder): Replace `Math Block with Calculator Block` (#7969) 2024-09-04 08:12:01 +00:00
Swifty
80df44a978 feat(server): Add endpoint to calculate required graph inputs (#7965) 2024-09-04 09:45:45 +02:00
Zamil Majdy
c2a79d2f10 feat(rnd): Add Node & Graph level execution stats instrumentation (#7957)
### **User description**
### Background

The scope of this change is collecting the required information that will be needed for the execution analytics.

### Changes 🏗️

* Add sentry integration.
* Refactor logging_metadata on manager.py.
* Collect graph-level & node-level instrumentation.
* Introduced `stats` column for `AgentNodeExecution` & `AgentGraphExecution`.
2024-09-04 02:45:19 +07:00
Bently
7db85a8990 Feat(Builder): Update tutorial to work with latest block UI (#7962)
* Feat(Builder): Update tutorial to work with latest block UI

* prettier
2024-09-03 14:56:55 +00:00
Krzysztof Czerwinski
0454a9a7be fix(builder): Prevent zooming on input field modal
- Add `nowheel` class to Textarea parent div
2024-09-03 14:30:35 +02:00
Krzysztof Czerwinski
09951fed4b feat(builder, server): Add advanced block inputs (#7934)
- Add `advanced` to `SchemaField` and pass it to `json_extra`
- Add `advanced` to `BlockIOSubSchemaMeta` type
- Update `CustomNode`, so that:
  - non-required advanced inputs are hidden
  - non-advanced and required inputs are always shown
2024-09-03 12:01:42 +01:00
Aarushi
6204d82d84 feat(server): Integrate forge.logging (#7915)
* feat(server): Integrate `forge.logging`

- Add `configure_logging()` in `.util.logging` - a wrapper for `forge.logging.configure_logging()` with project-specific extras
- Call `configure_logging()` in `.app.main()`, and in child process initializers (e.g. `AppProcess.execute_run_command(..)`, `ExecutorManager.on_graph_executor_start()`)
- Change some `logger.warning` statements to `logger.info` where appropriate

* fix warnings to info

* fix(rnd): Fix broken test and Input/Output block field renaming

* Rename

* fix(rnd): Fix flaky CI

* feat(server): Add OAuth handlers for GitHub, Notion, Google & amend store data structure (#7868)

- Add `BaseOAuthHandler` + 3 initial implementations
  - Add `GitHubOAuthHandler`
  - Add `NotionOAuthHandler`
  - Add `GoogleOAuthHandler`
- Amend `OAuth2Credentials` type
  - Add `metadata` attribute
  - Make `access_token_expires_at`, `refresh_token`, `refresh_token_expires_at` optional

* extend GCP Logger

* update manager & add flag

* linting

* use default logger behaviour

* update messages

* update another message

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-09-03 11:56:21 +01:00
Krzysztof Czerwinski
8c9fe5c167 feat(builder): Resizable input modal (#7955)
- Add minimize/maximize button in the corner of modal to make it significantly larger and centered
- Add copy button to copy all text
- Add optional `title` to display as a modal header
2024-09-03 11:42:25 +01:00
Reinier van der Leer
71de1a6a5e fix(server): Fix type checking and propagation issues (#7941)
- fix type propagation by `AppService.run_and_wait(..)`
- fix type propagation by `@expose` and add note
- fix type propagation by `wait(..)` in `.executor.manager.execute_node(..)`
- fix type propagation by `wait(..)` in `.executor.manager._enqueue_next_nodes(..)`
- remove unnecessary null checks for `.data.graph.get_node(..)`
- fix type issue in `ExecutionScheduler`
- reduce use of `# type: ignore` in `.data.execution`
- reduce usage of `# type: ignore` in `.executor.manager`
- reduce usage of `# type: ignore` in `.server`
- reduce usage of `# type: ignore` in cli.py
- update `pyright` to v1.1.378
2024-09-02 14:13:56 +00:00
Bently
956165adf3 Feat(Builder): Fix block menu width (#7946) 2024-09-02 09:54:52 +00:00
Aarushi
e4dc16a867 docs(server): Update docs with new docker compose steps (#7944)
* update setup

* add docker links
2024-09-02 10:55:54 +02:00
Aarushi
cfa0b6610c feat(rnd,infra): Pull out websockets away from server api (#7899)
* standalone websocket server

* add websocket url

* wip: talk to ws directly

* rename to api server

* dockerfile and queue

* fix paths

* update poetry lock

* helm charts for websockets

* create separate deployments for websockets and rest server with redis queue for async comms

* delete duplicate queue

* add depends in ws_api

* singleton for conn manager

* update from review

* fix CI

* address feedback

* update readme

* update docker file and add migration step in readme

* add watch

* add step to copy example env file

* put connect back in
2024-09-02 10:32:51 +02:00
Krzysztof Czerwinski
933baa0e8d feat(builder): Store and display output data history in nodes (#7930)
- Update styling and use tailwind more
- Add `react-toast` dependency
- Fix output button not changing checked state on execution
- Make status a badge in node's corner
- Rename `output_data` to `executionResults` and store multiple results with execution UUIDs
- Add `DataTable` component that displays execution outputs
- Outputs can be copied and there's a toast displayed as a feedback
2024-09-01 11:18:57 +01:00
Bently
370b2dabe8 Feat(Builder): Add block categories to block menu (#7918) 2024-08-31 19:22:37 +00:00
Nicholas Tindle
baa00a5b03 feat(builder,server): add review of agent submissions to the admin portal (#7914)
Co-authored-by: Aarushi <aarushik93@gmail.com>
2024-08-30 18:20:59 +00:00
Zamil Majdy
60a8e00578 fix(rnd): Fix overflowing card due to long block name (#7931) 2024-08-30 18:02:55 +00:00
Bently
85e7d678ce Feat(Builder): Fix Multiple Connections Between Pins (#7924) 2024-08-30 18:15:18 +01:00
Krzysztof Czerwinski
476b307d69 feat(builder): Draw selected edges above nodes (#7893) 2024-08-30 16:39:22 +01:00
Bently
5dbfb4e3f1 feat(builder): Turn block border red on error (#7910)
* Turn block border red on error

* prettier
2024-08-30 15:09:19 +00:00
Zamil Majdy
f6d09c74f5 feat(rnd): Rename Blocks (#7925)
### Background

Standardize block names with a focus of making them intuitive to a non-technical person.

### Changes 🏗️

Replace these names:
* TextLlmCallBlock -> AITextGeneratorBlock
* ObjectLlmCallBlock -> AIStructuredResponseBlock
* AdvancedLlmCallBlock -> AIConversationBlock
* CreateTalkingAvatarClipBlock -> CreateTalkingAvatarVideoBlock
* DiscordReaderBlock -> ReadDiscordMessagesBlock
* DiscordMessageSenderBlock -> SendDiscordMessageBlock
* YouTubeTranscriberBlock -> TranscribeYouTubeVideoBlock
* CreateMediumPostBlock -> PublishToMediumBlock
* ForEachBlock -> ListIteratorBlock
* MathsBlock -> CalculatorBlock
* CounterBlock -> CountItemsBlock
* ValueBlock -> StoreValueBlock
* PrintingBlock -> PrintToConsoleBlock
* DictionaryAddEntryBlock -> AddToDictionaryBlock
* ListAddEntryBlock -> AddToListBlock
* ObjectLookupBlock -> FindInDictionaryBlock
* RedditGetPostsBlock -> GetRedditPostsBlock
* RedditPostCommentBlock -> PostRedditCommentBlock
* WikipediaSummaryBlock -> GetWikipediaSummaryBlock
* WebSearchBlock -> SearchTheWebBlock
* WebScraperBlock -> ExtractWebsiteContentBlock
* GetOpenWeatherMapBlock -> GetWeatherInformationBlock
* HttpRequestBlock -> SendWebRequestBlock
* CurrentTimeBlock -> GetCurrentTimeBlock
* CurrentDateBlock -> GetCurrentDateBlock
* CurrentDateAndTimeBlock -> GetCurrentDateAndTimeBlock
* TimerBlock -> CountdownTimerBlock
* RSSReaderBlock -> ReadRSSFeedBlock
* TextMatcherBlock -> MatchTextPatternBlock
* TextParserBlock -> ExtractTextInformationBlock
* TextFormatterBlock -> FillTextTemplateBlock
* TextCombinerBlock -> CombineTextsBlock
2024-08-30 21:36:42 +07:00
Zamil Majdy
6d17e627e8 fix(rnd): Add null checking & remove console logs on Input UI component (#7927)
fix(rnd): Add null checking & remove logs
2024-08-30 12:21:09 +00:00
Bently
5cfa807f00 Feat(Builder): Make the copy block button work (#7920)
* Feat(Builder): Make the copy block button work

* prettier

* Fixes

* fix block id

* prettier

* fix type
2024-08-30 11:48:33 +00:00
Bently
6fff06f0f6 Feat(Builder): Fix delete block is non-undoable (#7922)
* Feat(Builder): Fix delete block is non-undoable

* prettier
2024-08-30 08:05:37 +00:00
Aarushi
cbe553a547 feat(infra): Add deployment for Market (#7907)
* deployment for marketplace

* set up deployment for marketplace
update nodes

update helm

* update health check & allow builder origin
2024-08-29 18:23:40 +00:00
Reinier van der Leer
96ef35536c ci(server): Run tests even if linting fails 2024-08-29 15:42:49 +02:00
Reinier van der Leer
087d3a3760 test(server): Add type to server fixture usages 2024-08-29 15:42:15 +02:00
Zamil Majdy
5da58aa284 fix(rnd): Fix jumping caret problem on builder input text field (#7917)
Issue 1:
Input text field cursor keeps moving to the end of the text.
Try to type "Hello World!" into the input text. Then try to type "some string" in the middle of the "Hello" and "World".

Issue 2:
History should only be tracked on the input box onBlur/onLeave.
Try to type "longcharacters" and then undo it: the undo removes one character at a time, polluting the history and making undo pretty much unusable.

Issue 3:
KeyValue & ArrayInput are non-undoable.
Try to add a key-value pair or an entry to the list: undo doesn't revert the value; you need to click undo as many times as the number of entries added before it works again.
2024-08-29 13:12:10 +00:00
Bently
7de12a2200 Feat(Builder): Add first guide tutorial (#7862)
* Feat(Builder): Add first guide tutorial

* added more steps + some fixes

* added local storage to fix starting every time going to build

* update copy & paste to support mac

* small fix

* Prettier fixes

* Added "Skip Tutorial" button to first step

* some fixes based on requests

* revert camelCase change

* add ability to use url to reset tutorial

* prettier

* Added Tutorial button next to tally

* prettier

* change pinBlocksPopover to setPinBlocksPopover

* fixes + update + prettier

* made the resetTutorial url dynamic

* force to /build on reset tutorial

* fix renaming

* prettier
2024-08-29 07:53:45 -05:00
Reinier van der Leer
8f1c63a7ea feat(server): Add OAuth handlers for GitHub, Notion, Google & amend store data structure (#7868)
- Add `BaseOAuthHandler` + 3 initial implementations
  - Add `GitHubOAuthHandler`
  - Add `NotionOAuthHandler`
  - Add `GoogleOAuthHandler`
- Amend `OAuth2Credentials` type
  - Add `metadata` attribute
  - Make `access_token_expires_at`, `refresh_token`, `refresh_token_expires_at` optional
2024-08-29 09:11:37 +00:00
Zamil Majdy
6ec200f912 fix(rnd): Fix flaky CI 2024-08-28 15:59:51 -05:00
Zamil Majdy
b5db7f575e Rename 2024-08-28 15:56:02 -05:00
Zamil Majdy
98c909f99f fix(rnd): Fix broken test and Input/Output block field renaming 2024-08-28 15:54:48 -05:00
Krzysztof Czerwinski
c5615aa862 fix(builder): Prevent overflow in node output (#7912) 2024-08-28 16:27:42 +01:00
Bently
e725305e15 fix(builder): Set default value in NodeStringInput to silence uncontrolled input warning (#7909)
add default value to fix bug
2024-08-28 12:44:06 +01:00
Aarushi
9551f54c35 feat(infra): Add websockets IP (#7905)
tf changes from original branch to unblock other work
2024-08-27 20:35:31 +01:00
Bently
777f7d25bf Fix copy-paste of text creating unintended blocks (#7903) 2024-08-27 11:49:35 +01:00
Zamil Majdy
ea6f37bf98 feat(rnd): Integrate Jinja2 into TextFormatter (#7891)
Add Jinja2 to TextFormatter while maintaining backward compatibility.
2024-08-27 07:17:57 +00:00
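
A sketch of what "Jinja2 while maintaining backward compatibility" can look like: render `{{ ... }}` templates through Jinja2 but still honor legacy `str.format`-style `{name}` placeholders. The detection rule here is an assumption, not the repository's actual logic.

```python
from jinja2 import Template

def format_text(template: str, values: dict) -> str:
    if "{{" in template:                 # treat as a Jinja2 template
        return Template(template).render(**values)
    return template.format(**values)     # legacy {name} placeholders still work

print(format_text("Hello {{ name | upper }}!", {"name": "world"}))  # Hello WORLD!
print(format_text("Hello {name}!", {"name": "world"}))              # Hello world!
```
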
Krzysztof Czerwinski
299530cf95 refactor(builder): Update ReactFlow to version 12 & split up Flow.tsx (#7808)
Update ReactFlow to version 12 and split `Flow.tsx` into `useAgentGraph` hook that takes care of agent state and API calls to the server.

- Update ReactFlow to v12 ([migration guide](https://reactflow.dev/learn/troubleshooting/migrate-to-v12))
- Move `setIsAnyModalOpen` to `FlowContext`
- Make `setHardcodedValues` and `setErrors` functions of `CustomNode` and utilize new `updateNodeData` ReactFlow API
- Fix type errors
- `useAgentGraph` hook
  - Take care of all API calls, websocket, agent state and logic
  - Make saving and execution async and thus more consistent and reliable
	- Save&run requests are state
	- Wait for node ids to sync with backend reactively
	- Queue execution updates
  - Memoize functions using `useCallback`
2024-08-26 11:45:05 +01:00
Nicholas Tindle
1df7d527dd Remove debug (#7890) 2024-08-25 17:59:41 +00:00
Aarushi
407cf858e7 fix(server): Ensure nameserver is started before other processes (#7788)
ensure nameserver is started before other processes
2024-08-24 20:54:11 +00:00
Aarushi
a670b384f6 tweak(infra): Add 1 more node to GKE (#7889)
add one more node
2024-08-24 18:53:24 +01:00
Zamil Majdy
f9b8b0a41a feat(rnd): Add dynamic input pin for input object construction (#7871)
### Background

Currently, there is no way to construct the output of nodes into a composite data structure (list/dict/object) using the builder UI.

The backend already supports this feature by connecting the output pin to the input pin using these formats:
* `<pin_name>_$_<list_index>` for constructing a list
* `<pin_name>_#_<dict_key>` for constructing a dict
* `<pin_name>_@_<field_name>` for constructing an object

The scope of this PR is implementing the UX for this in the builder UI.

### Changes 🏗️

<img width="765" alt="image" src="https://github.com/user-attachments/assets/8fc319a4-1350-410f-98cf-24f2aa2bc34b">

This allows you to add more pins in a key value & list input: `_$_` list constructor & `_#_` dict constructor.
2024-08-23 18:21:38 +00:00
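
A minimal sketch of how the pin-name formats above could be folded into composite inputs before a node runs; the separators come from the PR description, while the helper itself is hypothetical (objects are kept as plain dicts for brevity):

```python
from typing import Any

def merge_pin_inputs(raw_inputs: dict[str, Any]) -> dict[str, Any]:
    merged: dict[str, Any] = {}
    lists: dict[str, dict[int, Any]] = {}
    for name, value in raw_inputs.items():
        if "_$_" in name:            # list constructor
            base, idx = name.split("_$_", 1)
            lists.setdefault(base, {})[int(idx)] = value
        elif "_#_" in name:          # dict constructor
            base, key = name.split("_#_", 1)
            merged.setdefault(base, {})[key] = value
        elif "_@_" in name:          # object constructor (as a dict here)
            base, field = name.split("_@_", 1)
            merged.setdefault(base, {})[field] = value
        else:
            merged[name] = value
    for base, by_index in lists.items():
        merged[base] = [by_index[i] for i in sorted(by_index)]
    return merged

# e.g. {"items_$_0": "a", "items_$_1": "b"} -> {"items": ["a", "b"]}
print(merge_pin_inputs({"items_$_0": "a", "items_$_1": "b", "name_#_first": "Ada"}))
```
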
Zamil Majdy
e59e138352 fix(rnd): Prevent boolean with no default value on AGPT-builder (#7884)
### Background

A boolean without a default value is a UX problem. It's currently displayed as a toggle, which has no way to represent the `null` value.
So we need to prevent blocks from introducing a nullable boolean.

### Changes 🏗️

Add explicit check to prevent nullable boolean. Fix existing block field that has nullable boolean.
2024-08-23 18:03:21 +00:00
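
A sketch of the explicit check, under the assumption that a block's input schema is plain JSON Schema: a boolean field must be required or carry a default, because a toggle widget has no way to render `null`.

```python
def check_no_nullable_booleans(input_schema: dict) -> None:
    required = set(input_schema.get("required", []))
    for name, field in input_schema.get("properties", {}).items():
        is_bool = field.get("type") == "boolean"
        if is_bool and name not in required and "default" not in field:
            raise ValueError(f"boolean field '{name}' must have a default value")

# Passes: a defaulted boolean. Would fail: an optional boolean with no default.
check_no_nullable_booleans({
    "properties": {"verbose": {"type": "boolean", "default": False}},
})
```
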
Aarushi
a95ee693dd fix(rnd): Add missing libs folder (#7887)
add missing libs folder
2024-08-23 16:42:24 +00:00
Aarushi
26f56114d1 feat(infra): Add ws server infra changes (#7886)
add ws server infra changes
2024-08-23 11:25:57 -05:00
Aarushi
45ace8ccab Add number of workers to environment variables in charts (#7869)
add workers to env vars
2024-08-23 20:21:51 +07:00
Aarushi
95af63b5ad feat(libs): Add integration credentials store (#7826)
- Add `SupabaseIntegrationCredentialsStore` in `.supabase_integration_credentials_store`
- Add `supabase` dependency
- Add `pydantic` dependency

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-08-23 12:39:26 +02:00
Aarushi
012bad72e8 feat(rnd,blocks): Add D-ID Block (#7798)
* talking head

* linting

* remove clip id, not needed

* add more descriptive name

* add min requirement to polling attempts and intervals

* add docs and link to docs

* remove extra space

* force new tab

* fix linting

* add did key to .env.template
2024-08-23 09:39:55 +01:00
Nicholas Tindle
efcd0f93ed ci(all): take two on the global status checker (#7849)
* ci(all): fundamentally change how we do this

* Update workflow-checker.yml

* ci: dupe and move file in attempt to figure out why its not accessible

* Update workflow-checker.yml

* fix: sleep before checking, move files, improve code

* Update workflow-checker.yml

* Update check_actions_status.py

* Update check_actions_status.py

* Update workflow-checker.yml

* Update workflow-checker.yml

* ci: remove debug, add more

* Update check_actions_status.py

* Update check_actions_status.py

* Update check_actions_status.py

* Revert "Update check_actions_status.py"

This reverts commit 36134527af.

* Revert "Update check_actions_status.py"

This reverts commit 828aabf532.

* Update check_actions_status.py

* Update check_actions_status.py

* Update check_actions_status.py

* Update check_actions_status.py
2024-08-22 07:06:15 -05:00
Swifty
4c32b46d40 feat(builder): Add google analytics to the project (#7860) 2024-08-22 10:24:12 +01:00
Konstantinos Voulgaropoulos
41fbfe35fb tweak(rnd): Trim Whitespace from BlockSecret to Prevent Authentication Issues (#7789)
* ensure secret value of BlockSecret is always trimmed

* avoid logging secret values when trimming
2024-08-21 14:44:26 -05:00
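
A minimal sketch of the change, assuming a `BlockSecret` wrapper: strip whitespace once at ingestion so a copy-pasted API key with a stray newline never reaches an `Authorization` header, and keep the value out of logs while doing so.

```python
class BlockSecret:
    def __init__(self, value: str):
        # Trim once, on ingestion; never log the raw or trimmed value.
        self._value = value.strip()

    def get_secret_value(self) -> str:
        return self._value

    def __repr__(self) -> str:
        return "BlockSecret(<redacted>)"  # keep secrets out of logs/tracebacks

token = BlockSecret("  sk-abc123\n")
assert token.get_secret_value() == "sk-abc123"
```
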
Zamil Majdy
c719e4f177 fix(rnd): Fix JS event-loop freeze caused by websocket connection retry (#7861)
### Background

Websocket connection retry has no backoff period, which causes the event loop to freeze.

### Changes 🏗️

Add a back-off period on retry.
2024-08-21 21:41:15 +07:00
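
The actual fix is in the builder's JS client, but the backoff schedule generalizes. A Python sketch of retrying a connection with an exponentially growing, capped delay, so failed reconnects can no longer monopolize the event loop:

```python
import time

def connect_with_backoff(connect, base_delay=0.5, max_delay=30.0, max_attempts=10):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)                  # back off before retrying
            delay = min(delay * 2, max_delay)  # exponential growth, capped
```
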
Zamil Majdy
3d62cec553 fix(rnd): Guarantee execution ordering per node by waiting the node completion (#7855)
### Background

We don't have an ordering guarantee on the node execution.
Let's say we have a node that has to execute different data A, B, and C.
The current implementation limits the execution to 1 execution at a time, but there is no guarantee that A, B, and C will be executed in order.

The initial implementation had no restrictions, so A, B, and C used to execute in parallel.
In the current implementation with the per-node constraint, A, B, and C execute serially, but with no guarantee of ordering.

The scope of this PR is to guarantee that order.

### Changes 🏗️

Guarantee per-node execution ordering by avoiding any re-enqueue mechanism. If two executions run on the same node, the first one executes and the other blocks. The blocking mechanism is admittedly sub-optimal; the performance can be improved later (a follow-up issue will be added).
2024-08-21 19:08:18 +07:00
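
One way to realize the ordering guarantee, sketched with a FIFO queue and one worker per node (the PR itself uses a blocking mechanism; this queue variant just makes the ordering property explicit): submissions for a node drain strictly in arrival order, with no re-enqueueing.

```python
import queue
import threading

_node_queues: dict[str, queue.Queue] = {}
_registry_lock = threading.Lock()

def submit(node_id: str, run) -> None:
    with _registry_lock:
        q = _node_queues.get(node_id)
        if q is None:
            q = _node_queues[node_id] = queue.Queue()
            threading.Thread(target=_drain, args=(q,), daemon=True).start()
    q.put(run)  # A, B, C for the same node stay in submission order

def _drain(q: queue.Queue) -> None:
    while True:
        q.get()()  # exactly one execution per node at a time, in FIFO order
```
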
Aarushi
fa12564954 bug(infra): Add frontend config to ingress (#7854)
add frontend config to ingress
2024-08-21 11:31:16 +01:00
Nicholas Tindle
f6d8e597e1 clean(builder): learned we should be using this to keep our tailwind classes sorted (#7836) 2024-08-21 05:14:21 -05:00
Krzysztof Czerwinski
a1cbc101a5 fix(builder): Correctly display static links on new edges (#7851)
Fix static links on edge creation
2024-08-21 10:24:01 +01:00
Aarushi
afc8338145 feat(infra) Create builder account with necessary permissions (#7840)
* create builder account with necessary permissions

* remove unrelated changes
2024-08-21 10:18:51 +01:00
Aarushi
7fe4e455fd feat(infra): Ensure http is always redirected to HTTPS (#7853)
redirect to https always
2024-08-21 10:01:06 +01:00
Bently
52d40d0f8b Feat(Builder): Make blocks spawn in the center of the screen not at 0,0 (#7805)
Feat(Builder): Make nodes spawn in the center of the screen not at 0,0
2024-08-20 15:05:12 +01:00
Aarushi
9e35f8c5cb tweak(infra): Rename builder for consistency (#7837)
rename builder for consistency
2024-08-20 13:37:40 +01:00
Nicholas Tindle
c0afb133a7 feat(builder): checkbox for tos on login page and submit agent (#7745)
* feat(builder): checkbox for tos on login page

* feat(builder): submit agent page

DOES NOT WORK

* feat(builder): basic upload (not working)

* feat(builder): submit page more working but still not

* fix(builder): working categories, not dynamic

* feat(builder, server): enable submissions (auth error)

* fix(lint): linting

* feat(builder): submit page terms of service

* fix(builder): update lockfile

* lint(builder): lint marketplace files
2024-08-20 07:04:22 -05:00
Zamil Majdy
526364297c feat(rnd): Add staticOutput field on block API (#7802) 2024-08-16 22:13:10 +07:00
Krzysztof Czerwinski
aed067e61c feat(builder): Support static connections (#7799)
- Add static link/connection support on the frontend and display them as dashed lines
- Remove queueing for static connections - there'll always be only one bead waiting at the end
- Make beads slightly larger and further from the end arrow
2024-08-16 13:39:48 +01:00
Zamil Majdy
653eb4964f fix(rnd): Fix graph validation error message (#7797) 2024-08-16 17:10:11 +07:00
SwiftyOS
406206f5d0 Add categories to all the blocks 2024-08-16 10:11:30 +02:00
SwiftyOS
1e05d6a8e9 added categories 2024-08-16 10:10:18 +02:00
Krzysztof Czerwinski
848637bfeb feat(builder): Visualise data beads on connections (#7791)
* Visualise data beads on edges

* Add `useBezierPath` hook

* Fix edge color on load

* Updates

* Merge branch 'master' into kpczerwinski/open-1580-visualise-data-coming-down-connections

* Add `visualizeBeads` state in `FlowEditor`

* Add `FlowContext`

Allow disabling beads animation

* fix(builder): linting

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-14 13:29:19 -05:00
Swifty
cea81bfe4e feat(builder): Updated Block Categories and colouring (#7794)
* Updated Block Categories

* formatting

* Added category coloring
2024-08-14 11:06:01 +02:00
Zamil Majdy
1e92c284d9 fix(rnd): Fix text blocks, make outbound links on ValueBlock static (#7796) 2024-08-14 10:03:43 +01:00
Swifty
98c1cb8ff9 feat(builder): Added tooltips for blocks (#7793) 2024-08-14 10:41:01 +02:00
Zamil Majdy
58dc8296db fix(rnd): Fix unparsed string on HTTP block (#7795) 2024-08-14 06:19:46 +02:00
Zamil Majdy
4782f4383c fix(rnd): string to list auto-conversion 2024-08-13 18:08:24 +02:00
Zamil Majdy
2b60a392fb fix(rnd): Fix broken test 2024-08-13 17:51:59 +02:00
Konstantinos Voulgaropoulos
f30b2cdf25 inclusion of poetry run prisma generate in the documentation (#7790) 2024-08-13 10:46:28 -05:00
Zamil Majdy
9084c31662 fix(rnd): Make discord bot continuously running (#7792) 2024-08-13 10:39:39 -05:00
SwiftyOS
183c72b2d0 fix typo 2024-08-13 15:07:57 +02:00
Swifty
55e100ee1e tweak(builder) Changes to transparency, connector color and title boldness (#7766)
Changes to transparency, connector color and title boldness
2024-08-13 15:07:07 +02:00
Toran Bruce Richards
82c5cd2d79 Add Advanced Chatbot with History using Discord (#7786)
* Add Advanced Chatbot with History using Discord

* Update Discord Chatbot with History_v145.json

update is_active to false and is_template to true

---------

Co-authored-by: Bently <tomnoon9@gmail.com>
2024-08-13 09:03:14 +01:00
Aarushi
f0ab795248 fix(builder,server) Add missing create user calls (#7784)
* fix missing create user calls

* formatting and linting
2024-08-09 16:49:21 +01:00
Zamil Majdy
5b9caa4345 fix(rnd): Added run-time implicit type conversion on agent node execution (#7775)
The execution graph is supposed to be typed, but there are cases where generic types like Any are used, and cases where incompatible data is passed into the wrong type.

If such a thing happens at runtime, we should attempt a best-effort conversion instead of breaking the run, e.g. try to JSON-stringify the object for a str input, or parse the number in a string for an int input.
2024-08-09 20:38:33 +07:00
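
A hedged sketch of best-effort conversion at a node input boundary, following the description above; the exact conversion rules in the repository may differ.

```python
import json
from typing import Any

def best_effort_convert(value: Any, target: type) -> Any:
    if isinstance(value, target):
        return value
    try:
        if target is str:
            return json.dumps(value)           # json-stringify for str inputs
        if target in (int, float):
            return target(str(value).strip())  # parse numbers out of strings
        if target in (list, dict):
            parsed = json.loads(value) if isinstance(value, str) else value
            if isinstance(parsed, target):
                return parsed
    except (ValueError, TypeError):
        pass
    raise TypeError(f"cannot convert {value!r} to {target.__name__}")

assert best_effort_convert("42", int) == 42
assert best_effort_convert({"a": 1}, str) == '{"a": 1}'
```
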
Bently
1e054064f6 Feat(Builder): Allow for zooming out more on builder UI (#7760)
add min and max zoom
2024-08-09 07:43:06 -05:00
Aarushi
646d98470f fix(builder): Temporarily disable TS checks (#7777)
* disable ts checks temporarily

* prettier
2024-08-09 09:59:39 +01:00
Reinier van der Leer
5a68be5419 refactor(server, builder): Remove unused/duplicate WebSocket endpoints (#7740)
* Remove WS API method `runGraph` in favor of REST method `executeGraph`

* Remove unused WebSocket methods from `autogpt_server.server`
2024-08-08 17:01:37 +02:00
Reinier van der Leer
2ff8a0743a fix(server): Make userId required on DB entities and apply default ID to existing entries (#7755)
* Make `userId` required on DB entities `AgentGraph`, `AgentGraphExecution`, and `AgentGraphExecutionSchedule`

* Add SQLite and Postgres migrations to make `userId` required and set `userId` to `3e53486c-cf57-477e-ba2a-cb02dc828e1a` on existing entries without `userId`

* Amend `create_graph` endpoint and `.data.graph`, `.data.execution` methods to handle required `user_id`

* Add `.data.user.DEFAULT_USER_ID` constant to replace hardcoded literals
2024-08-08 16:57:59 +02:00
Reinier van der Leer
582571631e fix+refactor(builder): Sort out FlowEditor+ReactFlow state management (#7737)
* refactor(builder): Migrate `FlowEditor` to use ReactFlow's state management system

  We have been keeping two copies of node and edge data: one inside ReactFlow and one outside.
  It works, but it's accidental and implicit and there is no reason to be using shadow copies rather than a single data source.

  - Replace `useNodesState` and `useEdgesState` with `useReactFlow` hook
  - Use `addNodes`, `addEdges`, and `deleteElements` where appropriate instead of `setNodes`/`setEdges` to allow use of event hooks
  - Consolidate all edge -> node state sync logic into `onEdgesChange` event handler
    This replaces `updateNodesOnEdgeChange`, part of `onConnect`, and `onEdgesDelete`.
  - Move node deletion logic from `CustomNode` to `FlowEditor:onNodesChange`

* fix(builder): Refactor and fix copy-paste mechanism
  - Rename variables for readability
  - Use an ID map to correctly set the source and target IDs for the pasted edges
2024-08-08 16:18:15 +02:00
Reinier van der Leer
bf10df612e feat(builder): Move /monitor to / (#7769)
- Move `monitor/page.tsx` to `page.tsx`
- Remove redirect from `/` to `/build`
- Set temporary redirect from `/monitor` to `/` to prevent breaking UX (may be removed after a grace period, e.g. 2024-09-01)
2024-08-08 16:07:12 +02:00
SwiftyOS
c577d04692 Changed background to white and add an effect on it 2024-08-08 15:50:59 +02:00
Swifty
85d895ef77 tweak(builder): Node styling (#7763)
* fix width

* tweak(builder): blocks control styling

* tweak node styling

* formatting
2024-08-08 13:16:38 +01:00
Swifty
deacc2bd8f tweak(builder) block control styling (#7762)
* fix width

* tweak(builder): blocks control styling
2024-08-08 13:15:04 +01:00
Swifty
3eb0d73461 fix(builder): Fix the control panel being too narrow for its elements (#7758)
fix width
2024-08-08 13:38:40 +02:00
Swifty
be0f6498ed fix(builder): Added header background and moved some styling to tailwind (#7756)
Added header background and moved some styling to tailwind
2024-08-08 12:12:20 +01:00
Swifty
0bab2714e9 feat(market): Agent Submission Process (#7718)
* adding auth to store

* Add ability to submit agents and review them before being added to the market

* Added auth decorator

* Added auth to market api client

* fix(builder): Fix drag-select behavior on `NodeKeyValueInput`

* fix(github): Added in fallback variables for postgres testing (#7715)

Co-authored-by: Leslie Cruz <lelcruz@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>

* feat(builder): basic tally feedback form (#7725)

* removed database changes

* moved auth to libs project

* fixed formatting

* cleaned up auth

* Added tests and database migration

* delete decorator
2024-08-08 11:12:35 +02:00
Aarushi
9c74d76a3a fix(tests): Ensure user ID is not None on schedule in tests (#7749)
* ensure user id is never None on schedule

* remove comment
2024-08-08 11:11:57 +02:00
Bently
78e96f8a1a Feat(Builder): Add TimerBlock (#7728) 2024-08-08 09:59:06 +01:00
Bently
904b444b13 Feat(Builder): Add email sending block (#7736) 2024-08-08 10:57:39 +02:00
Andy Hooker
3cad0f89ee feat(builder): Standardize icons and integrate with ControlPanel and NavBar (#7743)
* feat(builder): Add new icons and integrate with ControlPanel

Introduce SVG icons and replace ControlPanel dependencies with new icons. The ControlPanel component now uses the new icon components for improved consistency.

* feat(builder): add additional icon and update icons in NavBar

Introduced the new icon with relevant documentation and examples. Replaced existing icon imports with the newly defined icons in the NavBar component for consistency.

* feat(builder): Add icon for megaphone and replace usage

Introduced the IconMegaphone for reuse across the application. Updated TallyPopup to utilize IconMegaphone instead of the previous Megaphone icon from lucide-react.

* fix(builder): Running prettier to format changed files.

Adjusted various files for consistent code formatting. Ensured proper spacing and alignment of imports, JSX tags, and object properties to enhance readability and maintain coding standards. No functional changes made.
2024-08-08 10:55:57 +02:00
lelcruz
8131fc385b fix(builder): Added option to pan view to center agents on build screen (#7742)
Co-authored-by: Leslie Cruz <lelcruz@users.noreply.github.com>
Co-authored-by: Bently <tomnoon9@gmail.com>
2024-08-08 10:54:37 +02:00
Reinier van der Leer
335fea8605 fix(builder): Always show connected optional pins on node (#7724)
- Always show connected optional pins on node
- Refactor `hasOptionalFields()` to a constant `hasOptionalFields`
2024-08-08 10:48:14 +02:00
Reinier van der Leer
55d32f0324 fix(builder): Automatically (re)connect websocket when sending message (#7739)
- Amend `AutoGPTServerAPI.sendWebSocketMessage(..)` to automatically (re)connect the websocket if disconnected
- Amend `AutoGPTServerAPI.connectWebSocket()` to prevent race conditions
2024-08-08 09:09:50 +01:00
Swifty
56ce7ac628 feat(market): Added Auth to Market and Protect routes (#7717)
* adding auth to store

* Added auth decorator

* Added auth to market api client

* removed database changes

* moved auth to libs project

* fixed formatting

* Switched to using fastapi dependencies

* Return a user object

* removed logging of the token

* Added tests
2024-08-08 09:28:44 +02:00
Zamil Majdy
81adf84032 fix(rnd): Fix flaky CurrentDateAndTimeBlock test (#7741)
### Background

CurrentDateAndTimeBlock would fail if the test does not complete within one second of wall time.
If a test starts at 01:59:59, it becomes flaky.

We can change the test to only assert the type. But this is also a good chance to add more assertion options for Block: a custom function.

### Changes 🏗️

Change assertion for the time block using an additional margin of error.
2024-08-07 23:44:47 -05:00
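
A sketch of the margin-of-error assertion, assuming the block emits an `HH:MM:SS` string: instead of demanding equality with "now", accept any timestamp within a small window, so a test straddling a second boundary still passes (midnight wraparound is ignored for brevity).

```python
from datetime import datetime, timedelta

def assert_time_output_close(output: str, margin: timedelta = timedelta(seconds=5)):
    produced = datetime.strptime(output, "%H:%M:%S")
    # Normalize "now" to the same date-less form before comparing.
    now = datetime.strptime(datetime.now().strftime("%H:%M:%S"), "%H:%M:%S")
    assert abs(now - produced) <= margin, f"{output} not within {margin} of now"

assert_time_output_close(datetime.now().strftime("%H:%M:%S"))
```
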
Aarushi
f8d07a27af feat(builder): Add jwt support on websockets (#7734)
add jwt support on websockets
2024-08-07 14:47:35 +01:00
Aarushi
1bad26657c feat(rnd): Add user management (#7663)
* add user management support

* fixing tests

* fix formatting and linting

* default user, formatting & linting

* remove unused code

* remove default creation when auth enabled

* add client side calls

* addressing feedback

* prettier

* add defaults for websockets

* linting

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-08-07 14:15:29 +01:00
Nicholas Tindle
31dbb543a2 feat(builder): basic tally feedback form (#7725) 2024-08-07 09:14:18 +02:00
lelcruz
60d25135e6 fix(github): Added in fallback variables for postgres testing (#7715)
Co-authored-by: Leslie Cruz <lelcruz@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-06 15:09:02 -05:00
Reinier van der Leer
4678ed2e57 fix(builder): Fix drag-select behavior on NodeKeyValueInput 2024-08-06 20:53:22 +02:00
Nicholas Tindle
98a07f1265 Add new agent ideas to docs (#7716) 2024-08-06 13:45:28 -05:00
Bently
5e8ff5e3ed Feat(Builder): Add discord bot templates (#7711)
Add discord bot templates
2024-08-06 19:44:43 +01:00
Bently
89adcefd63 Feat(Builder): added time blocks (#7710)
* added time blocks

* lint & format
2024-08-06 13:43:43 -05:00
Reinier van der Leer
d82e577196 feat(builder): Rewrite & split up node inputs (#7722)
- feat(builder): Rewrite & split up `NodeInputField`
  - Create `NodeObjectInputTree`
  - Create `NodeGenericInputField`
  - Create `NodeKeyValueInput`
  - Create `NodeArrayInput`
  - Create `NodeStringInput`
  - Create `NodeNumberInput`
  - Create `NodeBooleanInput`
  - Create `NodeFallbackInput`
  - Create `ClickableInput` from `renderClickableInput(..)`
  - Amend usage in `CustomNode`
  - Remove deprecated/unused styling from `flow.css` and `customnode.css`
  - Fix alignment between `NodeHandle` and `NodeInputField`
  - Split up `BlockIOSchema` & rename to `BlockIOSubSchema`
    - Create `BlockIOObjectSubSchema`
    - Create `BlockIOKVSubSchema`
    - Create `BlockIOArraySubSchema`
    - Create `BlockIOStringSubSchema`
    - Create `BlockIONumberSubSchema`
    - Create `BlockIOBooleanSubSchema`
    - Create `BlockIONullSubSchema`
  - Install `Select` component from shad/cn

- refactor(builder): Replace boilerplate `button` styling with `Button` components

- refactor(builder): Move `NodeInputField.tsx` to `node-input-components.tsx`

---------

Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
2024-08-06 20:32:49 +02:00
Andy Hooker
e6cc8687a5 feat(builder): Refactor components and types for monitoring page streamlining (#7714) 2024-08-05 21:13:08 -05:00
Aarushi
fbad0d01ee feat(rnd, infra): Add deployment for frontend (#7705)
* add helm charts

* dont check in secrets
2024-08-05 21:12:57 +01:00
Zamil Majdy
fe5c1968bc fix(rnd): Cleanup block names and add explicit block name check (#7709) 2024-08-05 17:58:03 +00:00
Swifty
951abf6d5b feat(rnd) Agent Marketplace MVP (#7657)
* Added listing, sorting, filtering and ordering of agents

* feat(market): general upkeep for vscode and small docs

* feat(market): most of search

* fix(market): hinting on the sort was weird + linting

* feat(market): migrations and schema updates

* lint(market): autolint

* feat(market): better search

* feat(market): file download

* feat(market): analytics of downloads

* Added tracking of views

* changed all imports to be fully qualified

* Upgrade sentry sdk

* Added an admin endpoint to submit new agents

* fixes

* Added endpoint that just tracks download

* Starting adding the marketplace page

* Marketplace client

* Create template of the marketplace page

* Updated client

* fix(market): debug port

* feat(market): agents by downloads

* fix(market, builder): hook up frontend and backend

* feat(builder, market): build a "better" market page that loads data

* feat(builder): updated search (working) and page (kinda working)

* feat(builder): add a feature agents ui (not backed yet)

* feat(builder): improve detail page content

* Added run script

* Added pre populate database command

* Add AnalyticsTracker on create agent

* Add download counts for top agents

* Add hb page prometheus metrics

* Added featured agents functionality

* renamed endpoint to health

* Adding download flow

* normalised api routes

* update readme

* feat(market): default featured

* formatting

* revert changes to autogpt and forge

* Updated Readme

* Error when creating an agent from a template installed from (#7697)

fix creating graph from template

* Add dockerfile

* z level fix

* Updated env vars

* updated populate url

* formatting

* fixed linting error

* Set defaults

* Allow only next.js dev server

* fixed url

* removed graph reassignment as due to change in master

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-05 16:51:17 +02:00
SwiftyOS
9ae6389c6c fix(autogpt_server): Incorrect graph reassignment 2024-08-05 16:16:56 +02:00
Zamil Majdy
4cf1dd30f1 feat(rnd): Introduce Sub-Graph on Agent Server (#7693)
### Background

This change brings the capability to decompose a graph into sub-graphs. The objective of this feature is to allow a user to build a visually modular, easier-to-understand graph. It also allows you to import a graph into your existing graph without cluttering it.

This feature will require more implementation on the UI side, to allow the grouping of subgraph to be represented as a node in the builder.

### Changes 🏗️

Introduced a subgraph functionality with the following property:

* Sub-graph is simply a set of nodes that are grouped together, making it representable as a node.
* Sub-graph input & output pins/schema are the `InputBlock` / `OutputBlock` nodes present in the subgraph.
* The previous point implies that connecting two nodes from different sub-graphs, other than input/output nodes, is not allowed.
* Graphs can be nested but are defined flatly; a graph is now represented by only three components: nodes, links, and subgraphs (each a set of nodes). A nested subgraph is simply a node inside one subgraph connecting to an `InputBlock` node of another subgraph.
2024-08-05 16:48:14 +07:00
Swifty
c7fdfa0f77 fix(builder): Apply Prettier Formatting (#7695)
formatting
2024-08-05 09:18:08 +02:00
Andy Hooker
6fa7d22c91 feat(builder): Addition of prettier for aligned of development styles (#7629)
* fix(builder): Adding prettier configuration files and to package.

* fix(builder): Running script "format" added to the package.json

* feat(builder): Adding a job to the yaml file. This job will run "format" which leverages prettier.

* feat(builder): Running script "format" and merging master

* feat(builder): Setting configuration to prettier defaults in .prettierrc.json, and adding a requested newline in the .prettierignore

* feat(builder): Updating the CI to not add a job for prettier but instead add a check to verify prettier was run before committing.

* feat(builder): Confirming CI update fails when user does not run prettier first. Updating with file changes after prettier

* feat(builder): Running prettier write to fix warnings

* fix(builder): Removing .prett per PR change request

* fix(builder): Running prettier formatter

* fix(builder): Running prettier formatter check found additional files

* fix(builder): Running prettier format

* fix(builder): Removing running "format" command from PR due to a change request.

* fix(builder): Removing running "format" command from PR due to a change request.

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-05 09:14:02 +02:00
Amelia Bruno
52bd033a02 tweak(setup): Improvements to setup script (#7469)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-08-05 00:13:25 -05:00
Andy Hooker
bb5baadeb2 feat(builder): Implementation of UI / UX improvement to editor (#7690)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-05 00:09:31 -05:00
Andy Hooker
db97b24518 feat(builder): Addition of blocks control component (#7688)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-04 23:58:12 -05:00
Andy Hooker
533d7b7da8 feat(builder): Addition of SaveControl component for ControlPanel (#7687)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-04 23:39:42 -05:00
Andy Hooker
183c2a4845 feat(builder): Addition ControlPanel and Separator components (#7680) 2024-08-04 23:38:14 -05:00
alex
6440a8e217 fix punctuation in setup.md (#7689) 2024-08-04 06:53:32 -07:00
Toran Bruce Richards
e0930ba39d feat(Block): Add AdvancedLlmCallBlock (#7677)
* feat(Block): Add AdvancedLlmCallBlock

Adds a block for handling advanced LLM calls, enabling messages to be handled within the AutoGPT builder.

* fix linting
2024-08-02 18:17:03 -05:00
Andy Hooker
a21fd30fce fix(github): Update CODEOWNERS for rnd directories (#7684)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-02 17:23:28 -05:00
Nicholas Tindle
e2df2cd90d Revert "feat(builder): Rewrite node input UI" (#7685) 2024-08-02 17:07:03 -05:00
Toran Bruce Richards
6bdb849150 feat(Block): Add blocks for Building Lists, Dictionaries & Strings (#7678)
* feat(Block): Add block for building lists

* feat(Block): Add block for building dictionaries

* feat(Block): Add TextCombinerBlock for combining multiple input texts

* Fix linting issues where default values are None

---------

Co-authored-by: Bently <tomnoon9@gmail.com>
2024-08-02 16:49:51 -05:00
Bently
8469fafc6f Feat(Builder): Discord bots blocks (#7670)
* Super early version of the discord bots blocks

* updated to add secrets for token + other fixes

* lint & format

* rename DiscordBot to DiscordReader

* add discord-py

* fix poetry lock

* rm duplicated file

* updated name to add Block to end

* update .env.template to add DISCORD_BOT_TOKEN

* updates to add description Field

* swap channel name and message content + add field description

---------

Co-authored-by: Bently <bently@bentlybro.com>
2024-08-02 22:21:58 +01:00
Reinier van der Leer
3c2c3e57a0 feat(builder): Rewrite node input UI (#7626)
- feat(builder): Rewrite & split up `NodeInputField`
  - Create `NodeObjectInputTree`
  - Create `NodeGenericInputField`
  - Create `NodeKeyValueInput`
  - Create `NodeArrayInput`
  - Create `NodeStringInput`
  - Create `NodeNumberInput`
  - Create `NodeBooleanInput`
  - Create `NodeFallbackInput`
  - Create `ClickableInput` from `renderClickableInput(..)`
  - Amend usage in `CustomNode`
  - Remove deprecated/unused styling from `flow.css` and `customnode.css`
  - Fix alignment between `NodeHandle` and `NodeInputField`
  - Split up `BlockIOSchema` & rename to `BlockIOSubSchema`
    - Create `BlockIOObjectSubSchema`
    - Create `BlockIOKVSubSchema`
    - Create `BlockIOArraySubSchema`
    - Create `BlockIOStringSubSchema`
    - Create `BlockIONumberSubSchema`
    - Create `BlockIOBooleanSubSchema`
    - Create `BlockIONullSubSchema`
  - Install `Select` component from shad/cn

- refactor(builder): Move `NodeInputField.tsx` to `node-input.tsx`
2024-08-02 19:28:46 +01:00
lelcruz
ec6bae0467 fix(documentation): Corrected Markdown formatting for server start command (#7649)
Co-authored-by: Leslie Cruz <lelcruz@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-08-02 12:33:12 -05:00
Bently
f5fe96260e Fix(Builder): beautifyString duplicated import (#7679) 2024-08-02 15:39:42 +01:00
SwiftyOS
49a18437ac fix(builder) fix merge conflict errors 2024-08-02 16:29:55 +02:00
Reinier van der Leer
3cee893314 fix(builder): Rewrite additionalProperties input field (#7608)
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-08-02 16:11:38 +02:00
Reinier van der Leer
5d1035aeb0 fix(builder): Fix BlockSchema handling & type propagation (#7604)
- Merge `Flow.tsx:CustomNodeData` type definition into `CustomNode.tsx:CustomNodeData`
- Move `lib/types:BlockSchema` type definition into `lib/autogpt-server-api/types` in place of `ObjectSchema`
- Expand and rename `BlockSchema` -> `BlockIOSchema` + `BlockIORootSchema`
- Fix all `BlockIOSchema` related type narrowing checks
- Add warning messages to fallback cases in `NodeInputField` logic

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-08-02 16:09:46 +02:00
Bently
2e2c6fed52 feat(Builder): Implement undo/redo functionality (#7559)
* feat(Builder): Implement undo/redo functionality

* updates to work with latest UI

* add CTRL + Z & CTRL + Y support

* fixed undo/redo for inputs in nodes

* fix for deleting node

* fixes to make the undo/redo work better

* small fix

* Updates based on feedback

* add margin right to space out buttons

* added CTRL + SHIFT + Z for redo
2024-08-02 16:06:40 +02:00
Krzysztof Czerwinski
ca9c52f76a feat(builder): Add Supabase Auth, session and sign in UI (#7655)
This PR adds Supabase Auth (cloud) integration, login and profile UI, configures password login and three OAuth providers: Google, GitHub and Discord.

For the `Account` button to show up and the ability to log in, two env vars need to be set in `.env.local`: `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY`. OAuth providers are configured by Supabase and don't require env vars.

Email confirmation (for email/password signup) is disabled because there's a limit of 3 emails per hour without a custom SMTP server configuration. [Link](https://supabase.com/dashboard/project/adfjtextkuilwuhzdjpf/auth/templates) to configure a custom SMTP server and email template.

### Added dependencies:
- "@supabase/ssr": "^0.4.0"
- "@supabase/supabase-js": "^2.45.0"
- "react-icons": "^5.2.1"

### Added pages/routes:
- `app/auth/auth-code-error/page.tsx`: displayed when login using OAuth provider fails
- `app/auth/callback/route.ts`: route accessed when logging in using OAuth provider; it passes session code to Supabase
- `app/auth/confirm/route.ts`: accessed when confirming email; users are directed here from the email they get after signing up.
- `app/error/page.tsx`: Generic error page without explanation (any errors should be visible in the console)
- `app/login/page.tsx` and `app/login/actions.ts`: Login page and related login/signup server actions
- `app/profile/page.tsx`: Profile page, displays email address of the user and button to logout

### Changes
- Update `layout.tsx`: add `Log In` button and make icons consistent. The log-in button shows up if the user is logged out, an avatar is shown when logged in, and if Supabase is unavailable nothing shows up.
- The login form is validated using `zod` on the frontend (recommended by shadcn), and if login fails, feedback is displayed. On successful login, users are redirected to `/profile`
- Add `PasswordInput` component, [source](https://gist.github.com/mjbalcueva/b21f39a8787e558d4c536bf68e267398)
- Add `SupabaseProvider` with context for Supabase accessed via hook `useSupabase(): { supabase: SupabaseClient | null, isLoading: boolean }`
- Add `useUser` hook to get `{ user, session, isLoading, error }` on the client 
- Add `getServerUser`: async function to get `{ user: User | null, error: string | null }` on the server side
- Add `src/middleware.ts` and `client.ts`, `server.ts`, `middleware.ts` in `src/lib/supabase` which are utility functions and middleware to refresh auth token
2024-08-02 12:58:28 +01:00
Zamil Majdy
973822d973 feat(rnd): Add InputBlock & OutputBlock (#7654)
### Background

We need an explicit block for providing input & output for the graph.
This will later allow us to build a subgraph with pre-declared input & output schema.
This will also allow us to set input for a node in the middle of the graph, and enable a graph to have output values.

### Changes 🏗️

* Add InputBlock & OutputBlock
* Add graph structure validation on the graph execution step that asserts the following properties:
    - All mandatory input pins have to be connected or have a default value, except on `InputBlock` nodes.
    - All links have to connect valid nodes, and the sink & source names must refer to valid block fields. (A simplified sketch of this validation follows below.)
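A minimal sketch of that validation under assumed structures (plain dicts for nodes and links); the real check lives in the graph-execution step and uses the actual data model:

```python
def validate_graph(nodes: list[dict], links: list[dict]) -> None:
    # Pins fed by a link, keyed by (node id, pin name).
    connected = {(l["sink_id"], l["sink_name"]) for l in links}
    for node in nodes:
        if node["block"] == "InputBlock":
            continue  # InputBlock nodes receive their input externally
        for pin, spec in node["input_schema"].items():
            ok = "default" in spec or (node["id"], pin) in connected
            if spec.get("required") and not ok:
                raise ValueError(f"Mandatory pin {pin!r} on {node['id']!r} is unconnected")

nodes = [{"id": "n1", "block": "LlmBlock",
          "input_schema": {"prompt": {"required": True}}}]
try:
    validate_graph(nodes, links=[])
except ValueError as e:
    print(e)  # Mandatory pin 'prompt' on 'n1' is unconnected
```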
2024-08-02 13:33:34 +07:00
Bently
e773329391 Feat(Builder): Clear Status and Output upon graph edit (#7656)
* Feat(Builder): Clear Status and Output upon graph edit

* close output dropdown on graph edit

* make deleting edge clear block outputs

* make it so deleting nodes clears block output
2024-08-01 17:21:01 +01:00
Zamil Majdy
c9d41e69bd feat(rnd): Add postgres schema check on linting step + auto gen migration and schema sync command (poetry run schema) (#7667)
### Background

Migrations not being synced have caused issues, and CI is not catching this yet.

### Changes 🏗️

* Added a schema check for both the SQLite & Postgres `prisma.schema` in the linting step.
* Introduced `poetry run schema` for syncing `prisma.schema` + generating migrations for both files.
2024-08-01 20:11:07 +07:00
Nicholas Tindle
08905d71f9 docs(server): add docs link to server readme (#7638)
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-08-01 14:32:08 +02:00
Aarushi
8becde370c docs(ollama) Add ollama to mkdocs (#7666)
add ollama to mkdocs
2024-08-01 13:14:40 +01:00
Aarushi
dccc33152b feat(docs) Add how to run ollama with AutoGPT (#7665)
add ollama howto
2024-08-01 12:12:01 +01:00
Aarushi
b23bd9c479 feat(server): Add JWT validation (#7642)
* add auth middleware

* update readnme

* Update rnd/autogpt_server/pyproject.toml

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>

* address newline feedback

* update poetry lock

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
2024-08-01 11:11:40 +01:00
Aarushi
ac45b7cae9 feat(rnd,infra): Add Helm linting in CI (#7633)
* Set up helm and tf for backend

* update helm charts and settings

* remove example files

* use latest tag

* delay and timeouts for probes

* env based pyro host

* default backend

* linting

* add helm linting in CI

* read from settings

* fix formatting

* update to use config
2024-08-01 09:22:04 +01:00
Aarushi
3d54a9103c feat(rnd, infra): Add deployment for backend (#7612)
* Set up helm and tf for backend

* update helm charts and settings

* remove example files

* use latest tag

* delay and timeouts for probes

* env based pyro host

* default backend

* linting

* read from settings

* fix formatting

* update to use config
2024-08-01 09:21:12 +01:00
Nicholas Tindle
ca7182403b fix(server): enable other methods of setting configs outside of the config.json file (#7645) 2024-07-31 12:33:36 -05:00
Zamil Majdy
53826ab360 fix(rnd): Fix bug caused by different AJV version (#7653) 2024-07-31 16:47:31 +07:00
Andy Hooker
eac5548023 feat(builder): UI / UX upgrade and simplification of tailwind styles (#7647)
* fix(builder): Implementing a basic shadCn theme until color palette is decided upon

* feat(builder): Separating NavBar into its own component and providing a standard UI/UX Approach

* feat(builder): Removing old implementation of logo, removing excessive css implementation, updating styles to better support standard desktop views.

* feature(builder): Addition of UI component Sheet from ShadCn for update
2024-07-31 10:43:13 +02:00
Zamil Majdy
122f544966 fix(rnd): Disable unused prisma connection on pyro API Server process (#7641)
### Background

The Pyro thread for the API Server is not using Prisma, but still holds a Prisma connection.
The FastAPI thread also holds a Prisma connection, leaving Prisma connected in two different event loops within a single process.

### Changes 🏗️

Disable the Prisma connection on the Pyro thread of the API server process.
Fix a test flakiness issue caused by concurrency.
2024-07-30 22:33:28 +07:00
Zamil Majdy
29ba4c2c73 feat(rnd): Add freezed/static input link feature for retaining input data to be re-used in multiple node executions (#7631)
### Background

Input from the input pin is consumed only once. While this is required in most use cases, there are cases where an input can only be produced once but needs to be re-used just like an input default value, i.e. passively providing input data without triggering any execution. The scope of this change is providing that functionality at the link level; this property is called a **`static link`** in this system.

### Changes 🏗️

Provides a static link feature with the following behaviours:
* A link can be marked `static` to become a static link.
* Once a node produces an output, it persists the output data and propagates it to the other nodes through the link. For a static link, instead of queueing the data in the input pin, it overrides the pin's default value.
* Any input executions still waiting for the input will be backfilled using the output produced by the static link.
* Any upcoming executions that use the input will always reuse the output produced by the static link.

See the added test for the expected usage; an illustrative sketch of the behaviour also follows below.
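A toy sketch of the queue-vs-override behaviour described above; `InputPin` and its methods are invented for illustration and are not the executor's real internals:

```python
from collections import deque

class InputPin:
    def __init__(self, default=None):
        self.default = default
        self.queue = deque()  # values from normal links, each consumed once

    def receive(self, value, static: bool) -> None:
        if static:
            self.default = value      # static link: override default, reusable
        else:
            self.queue.append(value)  # normal link: queued, consumed once

    def consume(self):
        return self.queue.popleft() if self.queue else self.default

pin = InputPin(default="fallback")
pin.receive("once", static=False)
pin.receive("forever", static=True)
print(pin.consume())  # "once"    -- the queued value is consumed first
print(pin.consume())  # "forever" -- the static value is reused afterwards
```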
2024-07-30 10:37:38 +07:00
Nicholas Tindle
76feead3b1 docs(server): written setup guide for server (#7595)
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-29 22:24:03 +01:00
Bently
081df805df Fix(Builder): Copy and Paste when editing textbox causes issue (#7636)
Fix(Builder): Copy and Paste when editing textbox causes graph elements to be erroneously pasted
2024-07-29 21:55:19 +01:00
Nicholas Tindle
acc1d79146 feat(builder): add delete handles to the edges and nodes (#7614)
Co-authored-by: Andy Hooker <58448663+andrewhooker2@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-07-29 21:30:56 +01:00
Nicholas Tindle
07811b2133 fix(builder): show colors for all statuses (#7625) 2024-07-29 20:04:48 +01:00
Amr Aly
01b6c2d4bf Update README.advanced.md (#7607)
* Update README.advanced.md

The copy .env command is wrong. It should be `.env.example` instead of `env.example`

* fix(server): extra space removal

---------

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-07-29 12:00:58 +01:00
Krzysztof Czerwinski
905b1df218 feat(builder): Add input validation on the frontend (#7606)
- Add `ajv` dependency to check values against json schema
- Add `errors` and `setErrors` to `CustomNodeData`
- Add `validateNodes` run before executing agent
- Add `*` on labels for required fields
- Add `setNestedProperty` and `removeEmptyStringsAndNulls` utility function
2024-07-29 10:17:27 +01:00
Andy Hooker
edf84fb9f8 feat(builder): Apply standard theming for shadCn and centralize providers (#7622) 2024-07-26 17:59:44 -05:00
Swifty
b62c24dc77 Add Marketplace template (#7605)
* init

* Added sentry.io integration

* updated readme
2024-07-26 17:11:26 +02:00
Zamil Majdy
dfa855f533 fix(rnd): Fix broken server execution update due to split prisma connection to different loops (#7620) 2024-07-26 15:47:41 +01:00
Aarushi
da2111bafb fix(server): Add missing PG migrations (#7619)
missing migrations
2024-07-26 14:38:05 +01:00
Reinier van der Leer
b2dba39810 fix(builder/api-client): Fix websocket client types (#7603)
- Fix type signatures of `sendWebSocketMessage(..)`, `onWebSocketMessage(..)`, `runGraph(..)` in `autogpt-server-api/client`
  - Add `WebsocketMessageTypeMap`

- Fix type signature of `updateNodesWithExecutionData` in `FlowEditor`
2024-07-26 13:26:11 +02:00
Zamil Majdy
d2a5bb286f feat(rnd): Fix concurrency issue on Agent Server & Apply max-1-execution-per-node constraint in graph execution (#7551)
### Background

When multiple executors execute the same node within the same graph execution, two node executions can read the same input queue and get the same value, making data that is supposed to be consumed once get consumed by two executions. The lack of lock & concurrency support for parallel execution within a single graph causes this issue.

Node concurrency also introduces poor UX in the current frontend implementation: when two nodes are executed in parallel, the UI does not display their parallel execution updates but instead shows updates that override each other. Until execution observability is improved in the builder UI, this capability will be limited.

### Changes 🏗️

The scope of this change is to solve this issue by:
* Decouple Graph execution & Node execution, each has its own configured process pool.
* Make sure there is only 1 execution per node (we still allow parallel executions on different nodes) in a graph.
* Fixed the concurrency issue by adding a distributed lock API on agent_server (a simplified sketch follows below).
* A few cleanups:
    - Add more logging with geid & neid prefix on graph/node executions
    - Moved execution status update to agent-server for a single source of status update (required by conn-manager/web-socket)
    - Configured node parallelism to 10 & graph parallelism to 10 by default, so in the very rare worst-case, there can be 100 node executions.
    - Re-use server resource for each integration test run
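An in-process analogue of the max-1-execution-per-node constraint, using a per-node lock; the actual change uses a distributed lock API on agent_server rather than `threading`:

```python
import threading
from collections import defaultdict

# One lock per node ID; different nodes still run in parallel.
node_locks: dict[str, threading.Lock] = defaultdict(threading.Lock)

def execute_node(node_id: str, run) -> None:
    with node_locks[node_id]:  # serialize executions of the *same* node
        run()
```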
2024-07-26 17:08:03 +07:00
Toran Bruce Richards
36b9a0a930 feat(Blocks): Add conditional logic block (#7536)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-07-25 14:25:07 -05:00
Aarushi
f40db85b43 feat(rnd, infra) Add static ips module (#7588)
* add terraform

* gitignore update

* linting

* formatting and linting in ci

* store state in backend bucket

* Update .github/workflows/autogpt-infra-ci.yml

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>

* add static ips module

* formatting

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-25 17:50:19 +01:00
Reinier van der Leer
0767b17779 dx: Set Python type checking mode to "basic" for all subprojects (#7599) 2024-07-25 12:25:56 +01:00
Nicholas Tindle
0f0c13bae8 fix(server): gitignore was missing new file name (#7596)
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-07-25 10:07:16 +01:00
A.Daee
3b0cd9518d feat(forge): Improvements in web components (#7068)
- Add `duckduckgo_backend` field to `WebSearchComponent` configuration
- Add `selenium_proxy` to `WebSeleniumComponent` configuration
- Update docs
2024-07-25 10:02:34 +01:00
Aarushi
22b6dbbf6a feat(rnd,infra) Set up terraform (#7565)
* add terraform

* gitignore update

* linting

* formatting and linting in ci

* store state in backend bucket
2024-07-25 09:45:36 +01:00
Reinier van der Leer
d9a1a1edc8 fix(builder): Correctly load node connections (#7592)
- Set `node.data.connections` based on `graph.links` in `loadGraph(..)`
- Use reactflow's built-in `useNodesState`, `useEdgesState` to replace some boilerplate functions that do exactly the same

Resolves #7392
2024-07-25 09:30:35 +01:00
vasnt
3c0d37d5d1 docs(forge):fix #7198 dead image links (#7593)
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2024-07-24 22:55:07 -05:00
Bently
c98061bc3b fix(Builder): fixes to copy and pasted blocks (#7591)
* fix(Builder): fixes to copy and pasted blocks

* clear the blocks state on paste
2024-07-24 17:00:59 +01:00
Toran Bruce Richards
a8c0cbef54 refactor(Blocks): Simplify MathsBlock and CounterBlock to Return Numeric Results Only (#7582)
* refactor(MathsBlock): Simplify output to return numeric result directly

- Remove MathsResult class and explanation field
- Update Output schema to use float type
- Simplify run method to yield numeric result only
- Adjust error handling to return inf or nan for errors (a sketch of this behaviour follows below)
- Update test cases to reflect new output structure

* run format

* refactor(CounterBlock): Simplify output to return count as integer

- Remove CounterResult class
- Update Output schema to use int type directly
- Simplify run method to yield count without explanation
- Modify error handling to return -1 for any errors
- Update test case to reflect new output structure
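A hedged sketch of the simplified MathsBlock behaviour described above, yielding a bare float and collapsing errors into `inf`/`nan`; the operation names and structure are illustrative, not the actual block code:

```python
import math

def maths_block_run(operation: str, a: float, b: float) -> float:
    ops = {
        "add": lambda: a + b,
        "subtract": lambda: a - b,
        "multiply": lambda: a * b,
        "divide": lambda: a / b,
    }
    try:
        return float(ops[operation]())
    except ZeroDivisionError:
        return math.inf  # division errors surface as inf...
    except Exception:
        return math.nan  # ...anything else as nan, per the new error handling

print(maths_block_run("divide", 1, 0))  # inf
print(maths_block_run("power", 2, 3))   # nan (unknown operation)
```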
2024-07-24 08:55:18 -05:00
Zamil Majdy
8ccd14c4bf fix(rnd): Disable BlockInstallationBlock and add disable field for block (#7583) 2024-07-24 08:52:14 -05:00
Nicholas Tindle
3e384c9771 ci(server): add sqlite processing (#7586)
* ci(server): add sqlite processing

* ci(server): try setting DATABASE_URL based on db platform

* fix(server): swap default back to sqlite

* ci(server): go back to database url

---------

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-07-24 14:46:39 +01:00
Aarushi
22f2a05f08 feat(server) Fix schema file (#7589)
* fix schema file

* remove invalid migrations

* make sqlite url hardcode
2024-07-24 14:31:48 +01:00
Krzysztof Czerwinski
e94a7b08c9 refactor(builder): Move renderInputField to NodeInputField component (#7581) 2024-07-24 11:14:20 +01:00
Aarushi
7b8928f49b feat(server) Update readme (#7580)
update readme
2024-07-24 16:53:09 +07:00
Aarushi
699087e289 feat(rnd) Add dockerfiles (#7523)
* replace SQLite with Postgres

* dockerfiles and optional docker compose set up

* Update rnd/autogpt_builder/Dockerfile

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* address feedback

* Update .dockerignore

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* Remove example files folder

* remove backend and frontend from docker compose

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-07-24 10:01:22 +01:00
Bently
aa8ca37f86 feat(Builder): Work on copy and paste of blocks (#7564)
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-24 09:30:48 +01:00
Toran Bruce Richards
8bdb48cba4 feat(blocks): Add basic YouTube transcriber block (#7495)
* feat: Add YouTubeTranscriber block for extracting transcripts from YouTube videos (a minimal usage sketch follows the list below)

* feat: Add youtube-transcript-api dependency to pyproject.toml

* feat: Add SchemaField and test_mock to YouTube Transcriber.

* feat: update lock

* fix(server): the agbenchmark was out of date

* fix(server): linting

* fix(server): drop mock

* fix(server): poetry locked in

* fix(server): test had ... at the end?
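A minimal usage sketch of the `youtube-transcript-api` dependency named above, assuming its classic `get_transcript` interface; the actual block wraps this with input/output schemas and test mocks (note this performs a network request):

```python
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "VIDEO_ID"  # placeholder; the block would extract this from a URL
chunks = YouTubeTranscriptApi.get_transcript(video_id)
transcript = " ".join(chunk["text"] for chunk in chunks)
print(transcript[:200])
```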

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-07-24 09:00:09 +01:00
Bently
03ea51b266 feat(Builder): Limit Output preview size (#7554)
* feat(Builder): Limit Output preview size

* fix indent
2024-07-24 09:23:43 +02:00
Aarushi
77034f2df0 feat(blocks): Add 405b on ollama (#7573)
405b on ollama
2024-07-24 09:22:18 +02:00
Nicholas Tindle
ccf4397883 feat(server): author and contributor information on blocks (#7575) 2024-07-24 09:19:33 +02:00
Reinier van der Leer
e7885f943b fix(agent): Unbreak LLM status check on start-up
Fixes #7508

- Amend `app/configurator.py:check_model(..)` to check multiple models at once and save duplicate API calls
- Amend `MultiProvider.get_available_providers()` to verify availability by fetching models and handle failure
2024-07-23 22:36:04 +02:00
Aarushi
aca7165694 feat(rnd) Fix linting and formatting (#7568)
fix linting and formatting
2024-07-23 18:25:04 +01:00
Toran Bruce Richards
22b223037e feat(Blocks): Add support for new Llama 3.1 models with Groq! (#7567)
feat: Add support for new Groq models

The commit adds support for new Groq models, including LLAMA3_1_405B, LLAMA3_1_70B, and LLAMA3_1_8B. These models are part of the preview release and offer enhanced reasoning and versatility capabilities.
2024-07-23 16:48:39 +01:00
Toran Bruce Richards
6747ae1559 feat(Blocks): Add Blocks Providing Basic Arithmetic Operations (#7529)
* feat(blocks): Add MathsBlock for performing mathematical operations

The commit adds a new block called MathsBlock to perform various mathematical operations such as addition, subtraction, multiplication, division, and exponentiation. The block takes input parameters for the operation type, two numbers, and an option to round the result. It returns the result of the calculation along with an explanation of the performed operation.

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-23 16:28:59 +02:00
Aarushi
39afba6da8 feat(ci) Fix migrations in CI (#7563)
* update CI

* add poetry run

* schema prisma
2024-07-23 14:01:45 +01:00
Toran Bruce Richards
a00df25092 feat(Blocks): Add RSS Reader Block for reading RSS feeds (#7533)
* feat: Add RSSReaderBlock for reading RSS feeds

The commit adds a new `RSSReaderBlock` class in the `rss-reader-block.py` file. This block allows users to read RSS feeds by providing the URL of the feed, start datetime, polling rate, and a flag to run the block continuously. The block fetches the feed using the `feedparser` library and returns the title, link, description, publication date, author, and categories of each RSS item.

This commit also includes the addition of the `feedparser` dependency in the `pyproject.toml` file.
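A minimal sketch of the fetch-and-yield behaviour described above, using the `feedparser` dependency this commit adds; the field mapping is an assumption, and the real `RSSReaderBlock` adds polling and time filtering:

```python
import feedparser

def read_rss(url: str):
    feed = feedparser.parse(url)  # any RSS feed URL
    for entry in feed.entries:
        yield {
            "title": entry.get("title"),
            "link": entry.get("link"),
            "description": entry.get("summary"),
            "pub_date": entry.get("published"),
            "author": entry.get("author"),
            "categories": [tag.term for tag in entry.get("tags", [])],
        }

for item in read_rss("https://hnrss.org/frontpage"):
    print(item["title"], item["link"])
```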

* fix(server): update lock file

* updated poetry lock

* fixed rss reader testing

* Updated error message in test to include check info

* Set starttime as 1 day ago

* Changed start time to time period

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-23 13:15:14 +02:00
Toran Bruce Richards
ea698ab0fe feat(Blocks): Add For-Each Block for iterating over a List. (#7531)
* feat: Add ForEachBlock for iterating over a List.

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-23 11:40:41 +02:00
Krzysztof Czerwinski
902d2a8924 feat(builder): UX and style updates (#7550)
- Handles:
  - Add `NodeHandle` to draw input and output handles
  - Position handles relatively
  - Make entire handle label clickable/connectable
  - Add input/output types below labels
  - Change color on hover and when connected
  - "Connected" no longer shows up when connected
- Edges:
  - Draw edge above node when connecting to the same node
  - Add custom `ConnectionLine`; drawn when making a connection
  - Add `CustomEdge`; drawn for existing connections
  - Add arrow to the edge end
  - Colorize depending on type
- Input field modal:
  - Select all text when opened
  - Disable node dragging
- CSS:
  - Remove not needed styling
  - Use tailwind classes instead of css for some components
  - Minor style changes
- Add shadcn switch
- Change bottom node buttons (for properties and advanced) to switches
- Format code
2024-07-23 09:36:42 +01:00
Reinier van der Leer
ab0df04bfe fix(builder/monitor): Fix Graph export (#7556)
- fix(builder/monitor): Export `Graph` rather than `GraphMeta`
  - Fixes #7557

- refactor(builder): Split up `lib/autogpt_server_api` into multi-file module
  - Resolves #7555
  - Rename `lib/autogpt_server_api` to `lib/autogpt-server-api`
  - Split up `lib/autogpt-server-api` into `/client`, `/types`
  - Move `ObjectSchema` from `lib/types` to `lib/autogpt-server-api/types`
  - Make definition of `Node['metadata']['position']` independent of `reactflow.XYPosition`

- fix(builder/monitor): Strip secrets from graph on export
  - Resolves #7492
  - Add `safeCopyGraph` function in `lib/autogpt-server-api/utils`
  - Use `safeCopyGraph` to strip secrets from graph on export in `/monitor` > `FlowInfo`
2024-07-23 09:28:06 +02:00
Zamil Majdy
d407fd101e fix(rnd): Make Agent Server's pin connections become the mandatory source of input (#7539)
### Background

Input from the input pin is consumed only once, while the default input can always be used. So when an input pin overrides the default input, the value is used only once and the following run always falls back to the default input. This can mislead the user.

Expected behaviour: the node should NOT RUN, so connected pins use only their connection(s) as sources of data.

### Changes 🏗️

* Make the pin connection the mandatory source of input, without falling back to the default value.
* Fix type flakiness on block input & output: unify the typing for BlockInput & BlockOutput using the right aliases.
* Add comment on alias
* Add an automated test for the new behaviour (a simplified sketch follows below).
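A toy sketch of the new rule that a connected pin never falls back to its default; the names and structure are illustrative:

```python
def resolve_input(pin_connected: bool, queued: list, default):
    if pin_connected:
        # Connected pins only use their connection(s); no default fallback.
        return queued.pop(0) if queued else None  # None => the node should NOT RUN
    return default

print(resolve_input(pin_connected=True, queued=[], default="d"))   # None
print(resolve_input(pin_connected=False, queued=[], default="d"))  # 'd'
```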
2024-07-23 09:06:26 +07:00
Aarushi
a911f9a5eb feat(rnd) Add Postgres support (#7513)
* replace SQLite with Postgres

* make sqlite default

* add migrations for sqlite and postgres

* update readme

* fix formatting
2024-07-22 23:43:49 +01:00
Reinier van der Leer
470c738732 fix(builder/monitor): Fix # of runs count on first load (#7558)
Return mutated copy rather than in-place mutated `flowRuns` in `refreshFlowRuns(..)`

Fixes #7507
2024-07-22 23:15:54 +01:00
Nicholas Tindle
7de49dfbe5 Update workflow-checker.yml (#7553) 2024-07-22 13:49:27 -05:00
Reinier van der Leer
a02b017cea feat(builder): Built-in templates (#7511)
- Add "Medium Blogger" template to `graph_templates` folder
- Add `import_packaged_templates()` function in `data.graph`
- Call `import_packaged_templates()` in `AgentServer` lifecycle setup
- Fix `util.json:loads` typing and parameter forwarding
2024-07-22 17:54:48 +02:00
Nicholas Tindle
56b82369b6 ci(all): verify the workflows as part of each run (#7500) 2024-07-22 07:23:48 -05:00
Tyrion Lannister
a7926584ca Fix(rnd): Add Prerequisites Documentation in README.md (Server) (#7546) 2024-07-22 07:19:20 -05:00
Nicholas Tindle
6ffa644fb6 fix(ops/issues): update template for bugs (#7211) 2024-07-22 07:16:51 -05:00
Bently
a82317e2ac feat(builder): fix sidebar scroll issue (#7548)
fix sidebar scroll issue
2024-07-22 13:05:50 +01:00
Bently
fd000a4173 feat(Builder): Fix sidebar covering navbar (#7544)
Fix sidebar covering flow
2024-07-22 10:11:46 +01:00
Nicholas Tindle
235715e054 feat(server): read_csv block (#7499) 2024-07-22 09:58:02 +02:00
Nicholas Tindle
d0ec31b698 docs: fix dead links (#7470) 2024-07-22 09:56:32 +02:00
Zamil Majdy
fa1b486c64 feat(rnd): Add Agent block UUID validation for Agent Server (#7501)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-07-20 09:14:18 +07:00
Reinier van der Leer
21084c5817 feat(builder, server): Speed up fetching graphs (#7477)
- Let `GET /graphs` return `GraphMeta[]` instead of `string[]` (list of IDs)
- Rename `AutoGPTServerAPI` method `listGraphIDs` -> `listGraphs` and adjust return type
   - Replace all usages of `Graph` with `GraphMeta` in `/monitor`
- Delete `data.graph:get_graph_ids()`
2024-07-19 18:28:59 -05:00
Zamil Majdy
82fd3166ef feat(rnd): Add block description and categories (#7463)
### Background

Add block description and categories metadata.

### Changes 🏗️

* Add block description and categories metadata.
* Initialize description and categories on the existing blocks.
2024-07-19 11:10:16 -05:00
Toran Bruce Richards
f833fa3624 feat: Update CreateMediumPostBlock to use secret value for author_id
This commit updates the `CreateMediumPostBlock` class in `create_medium_post.py` to use the secret value for the `author_id` parameter. Previously, it was using the plain value, which caused the value to be sent incorrectly to the API.
2024-07-19 14:52:02 +01:00
Aarushi
e6f9870f2e feat(blocks) Add ollama integration (#7505)
* add ollama integration

* address review comments & formatting
2024-07-19 11:27:48 +01:00
Bently
6e319a6881 Feat(Builder): Decrease the delay before a tooltip opens (#7478)
Make tooltip show in 10 ms
2024-07-19 09:21:34 +01:00
Nicholas Tindle
64edf12c31 fix(server): code.py as a filename overshadows an internal python file (#7496) 2024-07-19 10:30:54 +07:00
Nicholas Tindle
e1795b8216 fix(server): add create medium post test missing params (#7497) 2024-07-19 10:28:43 +07:00
Nicholas Tindle
057d0848ef fix(server): various linting errors (#7498) 2024-07-19 10:28:21 +07:00
Bently
2dc673614f feat(doc): Update node creation docs (#7462)
* feat(doc): Update node creation docs

* small fixes

* fix formatting

* remove mention of registering a block

* Update docs/content/server/new_blocks.md

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-07-18 20:01:40 -05:00
Toran Bruce Richards
24e08d57ef Update README.md 2024-07-19 00:46:17 +01:00
Toran Bruce Richards
bd540b5cc4 tweak(builder): Replace erroneous mentions of "Properties" to "Output" (#7489) 2024-07-19 00:42:22 +01:00
Toran Bruce Richards
6d192429a6 feat(blocks): Add Medium Post block (#7494)
* feat: Add CreateMediumPostBlock to create Medium posts

* feat: Add medium_api_key to Secrets class in settings.py

* feat: Update medium post block to work with latest system.

* feat: Add medium_author_id field to Secrets class in settings.py

* run isort

* run black
2024-07-19 00:26:32 +01:00
Toran Bruce Richards
314a24ab8f fix(blocks): resolve duplicate ID in ObjectLookupBlock - basic.py 2024-07-18 23:58:19 +01:00
Toran Bruce Richards
ff962d8d88 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT 2024-07-18 23:56:09 +01:00
Toran Bruce Richards
5e5182e236 feat: Add CreateMediumPostBlock to create Medium posts 2024-07-18 22:54:13 +01:00
Nicholas Tindle
04dcd230cd feat(builder): allow deletion of nodes and edges via del key (#7483) 2024-07-18 22:45:28 +01:00
Nicholas Tindle
c8b46109fe feat(builder, server): hide secrets from the UI and mark them as secrets in the api (#7490)
* feat(server): add json_extra marking some fields as secret

* devx(builder): add a launch.json for debugging

* feat(builder): hide secret strings

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-07-18 16:30:11 -05:00
Toran Bruce Richards
c00caa4bcf tweak(Builder): Remove unnecessary node output heading in CustomNode component (#7487)
feat: Remove unnecessary node output heading in CustomNode component
2024-07-18 21:43:47 +01:00
Bently
e382dcf823 feat(blocks): Adds node to get weather from openweathermap (#7382)
* Add node for openweathermap

* fix naming

* Rename GetOpenweatherMapWeather.py to getopenweathermapweather.py

* fix naming

* fix conflicts

* fix conflicts

* Updates to node + moved to search.py

* small cleanup
2024-07-18 21:36:09 +01:00
Toran Bruce Richards
27e6c3a95d feat(block): Add support for GPT-4O Mini model in LLM block (#7480)
feat: Add support for GPT-4O Mini model in LLM block
2024-07-18 19:02:36 +01:00
Bently
aaf650ee23 feat(builder): Fix message covering navbar on < lg screen size (#7474)
* Remove excess styling from message box
* Replace "node" with "block" in message (-> "Get started by adding a block")
2024-07-18 16:46:28 +02:00
Reinier van der Leer
4c003d6e20 fix(builder): Make detail view on /monitor obey grid
Unwrap the detail view element and style it directly instead
2024-07-18 04:12:51 +02:00
Reinier van der Leer
8264d7bf5a feat(builder): Importing/exporting Agents and Templates (#7466)
Builder:
* Add download button to agent info view
   - Add download button to `FlowInfo`
   - Add `exportAsJSONFile(..)` function to lib/utils.ts

* Add Create button + menu to /monitor page
   - Add "Create | v" split button to Agent list
   - Add list of templates to Create menu
   - Add "Import from file" option + dialog
      - Create `AgentImportForm` component
         - Install `Form`, `Label`, `Switch` components from shad/cn UI
      - Install `Dialog` component from shad/cn

* Support saving/editing Templates
   - Add `templateID` query parameter to `/build`
   - Use `templateID` query parameter in `AgentImportForm` redirect if imported as template
   - Make `FlowEditor` suitable for saving/editing templates
      - Add `template` (boolean) parameter to `FlowEditor` component
      - Add conditions to all `createGraph` or `updateGraph` calls, to use `createTemplate`/`updateTemplate` if applicable
      - Add "Save as Template" button, visible if the graph is new (unsaved)
      - Hide "Save & Run Agent" button when editing a template

* Add template endpoints to `AutoGPTServerAPI` client
   - Add `listTemplates()`
   - Add `getTemplate(id, version?)`
   - Add `getTemplateAllVersions(id)`
   - Add `createTemplate(templateCreateBody)`
   - Add `updateTemplate(id, template)`

* fix inner alignment of `<Input type="file">` elements

Server:
* fix(server): Fix return of `create_graph` for templates
2024-07-18 13:19:56 +01:00
Reinier van der Leer
37b7053e14 feat(server): Add API prefix /api to prevent route collisions with frontend (#7472)
- Add prefix `/api` to `APIRouter` in server.py
- Update API client in Builder
   - Update default `AGPT_SERVER_URL` in .env.template
   - Update default `baseUrl` in `AutoGPTServerAPI` constructor
2024-07-18 07:16:58 -05:00
Reinier van der Leer
354e626965 feat(dx): Add Builder and Server to auto-labeler 2024-07-18 00:19:38 +02:00
Reinier van der Leer
90371e1781 fix(builder): Remove legacy global styling from globals.css
This global styling was clashing with the theme of shad/cn and making text unreadable.
2024-07-18 00:06:52 +02:00
Reinier van der Leer
e128bfaf5f refactor(builder): Clean up AutoGPTServerAPI implementation (#7468)
- Abstract request implementation out of individual endpoints
- Rename Flow -> Graph (in type, method, and variable names)
2024-07-17 21:51:03 +02:00
Kate Silverstein
62c420e26f feat(forge/llm): Add LlamafileProvider (#7091)
* Add minimal implementation of `LlamafileProvider`, a new `ChatModelProvider` for llamafiles. It extends `BaseOpenAIProvider` and only overrides methods that are necessary to get the system to work at a basic level.

* Add support for `mistral-7b-instruct-v0.2`. This is the only model currently supported by `LlamafileProvider` because this is the only model I tested anything with.

* Add instructions to use AutoGPT with llamafile in the docs at `autogpt/setup/index.md`
* Add helper script to get it running quickly at `scripts/llamafile/serve.py`

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-07-17 15:56:55 +02:00
Krzysztof Czerwinski
97a5582c34 feat(agent_builder): Human-readable titles for blocks and fields (#7439)
Display human-readable names for blocks and fields. `title` from the schema is used if available, otherwise `beautifyString(name)`.
2024-07-17 14:44:41 +01:00
Zamil Majdy
78b84289cb feat(rnd): Add code-formatter & linting on AutoGPT server (#7458)
### Background

Add formatter & linter command.
Tools: ruff --> isort --> black --> pyright.

### Changes 🏗️

Introduced:
* `poetry run format`
* `poetry run lint`

`poetry run lint` will be executed on CI.
2024-07-17 11:54:29 +01:00
Krzysztof Czerwinski
9e22409d66 feat(agent_builder): Add block input fields description tooltip (#7437)
- Renamed `Schema` to `BlockSchema` and moved to `lib/types.ts`
- Add `SchemaTooltip` component that renders markdown tooltip for node fields
- Add `SecretField` function (which uses `BlockSecret` as value) that replaces `BlockFieldSecret` functionality for models
- Rename `get` to `get_secret_value` to make the name clearer and in line with pydantic `Secret` types
- Add shadcn tooltip
- Add `react-markdown` dependency
2024-07-17 10:54:43 +01:00
Zamil Majdy
e7c075a521 feat(doc): Remove excessive docs for AutoGPT server (#7457) 2024-07-17 14:22:13 +07:00
Krzysztof Czerwinski
555e113706 feat(agent_builder, agent_server): Add customizable placeholders for input fields (#7451)
- Add `SchemaField` that works like Pydantic `Field` but allows adding extra JSON schema values. This PR adds a `placeholder` entry, but it could be extended with other data (one plausible shape is sketched below).
- Render `placeholder` inside input fields if available.
- Restyle placeholders so they are visually distinct from user-entered values
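One plausible shape for such a helper on Pydantic v2, shown as an assumption rather than the actual `SchemaField` implementation:

```python
from pydantic import BaseModel, Field

def SchemaField(*, placeholder: str | None = None, **kwargs):
    # Pass extra values through to the generated JSON schema.
    extra = {"placeholder": placeholder} if placeholder else {}
    return Field(json_schema_extra=extra, **kwargs)

class BlockInput(BaseModel):
    url: str = SchemaField(placeholder="https://example.com")

print(BlockInput.model_json_schema()["properties"]["url"])
# {'title': 'Url', 'type': 'string', 'placeholder': 'https://example.com'}
```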
2024-07-16 20:08:43 +01:00
Bently
420e6cae2f feat(autogpt_builder): Fix Read API base URL from .env (#7455) 2024-07-16 16:28:24 +01:00
Toran Bruce Richards
920f931a21 feat(blocks): Add summariser block for recursive text summarization functionality (#7431)
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-16 17:51:37 +07:00
Toran Bruce Richards
e874318832 feat(blocks): Add WebSearch & WebScrapper block for searching the web with Jina Reader (#7445)
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-16 15:32:23 +07:00
Toran Bruce Richards
cb4b96a70c feat(server): Add DictionaryLookup block for looking up values in a dictionary (#7427)
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-16 15:23:12 +07:00
Toran Bruce Richards
03ea4c2690 fix(server): Fix bug in time comparison logic in RedditGetPostsBlock (#7426)
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-16 14:35:51 +07:00
Zamil Majdy
e70e613f73 feat(rnd): Refactor Block testing infra and Add Block auto-generation graph usecase (#7405)
### Background

The main scope of this change is enhancing the system's capability (by fixing bugs and correcting execution behaviour) to allow creating a graph with a loop, enabling the block auto-generation agent use case.

### Changes 🏗️

* Main changes: Add block_autogen.py (block auto-generation agent graph example).
* Refactor of test boilerplate: introduced `util/test` for initiating a server and waiting for graph execution.
* Removed an unnecessary db lookup and duplicated code used for sending execution updates on the agent executor.
* Removed redundant test and CLI code.
* Moved the block test execution helper into the main code (so `BlockInstallationBlock` can use it).
* Eliminated the need to explicitly add a module to the `AVAILABLE_BLOCKS` list; any block class under the `block` folder is auto-discovered.
* A few patches on the existing blocks.
2024-07-15 21:41:18 -05:00
Toran Bruce Richards
854f6dcaec Update README.md 2024-07-15 22:31:30 +01:00
Toran Bruce Richards
ea5ba9d193 Update README.md 2024-07-15 22:31:19 +01:00
Toran Bruce Richards
da14957fce Update README.md 2024-07-15 22:29:03 +01:00
Toran Bruce Richards
629f575dde Delete rnd/AutoGPT Server Setup Tutorial.mp4 (#7452) 2024-07-15 22:18:49 +01:00
Toran Bruce Richards
8883d7db53 Create README.md in rnd folder 2024-07-15 22:17:37 +01:00
Toran Bruce Richards
a6063e1550 Add files via upload 2024-07-15 22:06:57 +01:00
Toran Bruce Richards
e311847fa8 Update autogpt_server README.md 2024-07-15 20:26:44 +01:00
Bently
d673bf741a feat(autogpt_builder): Add websocket support to replace polling (#7449)
feat(autogpt_builder): Add websocket support
2024-07-15 20:18:23 +01:00
Swifty
110e093e7b feat(autogpt_server, autogpt_builder): Add Agent Versioning & Templates functionality (#7376)
1. Add graph versioning functionality:
   - Add `version`, `isActive` fields in the `AgentGraph` model
      - Add `agentGraphVersion` field in related models
   - Amend & add API endpoints for graph versioning (see below)
      - Amend & add data layer functions (`autogpt_server.data`) to support new operations & data schema

2. Add graph template functionality:
   - Add `isTemplate` fields in the `AgentGraph` model
   - Add `GraphMeta` model for template/graph metadata
   - Add API endpoints for template management (see below)
      - Amend & add data layer functions (`autogpt_server.data`) to support new operations & data schema

3. Enhance graph creation:
   - Amended `create_graph` route to handle template-based graph creation

4. Code refactoring:
   - Improved import statements
   - Enhanced error handling in graph creation

5. Minor improvements:
   - Add validator to auto-assign `Graph.id` if not set

## API Changes

New endpoints:
- `GET /templates`: Retrieve all templates (metadata only)
- `POST /templates`: Create a new template
- `PUT /graphs/{graph_id}`: Create a new version of a graph
- `GET /templates/{graph_id}`: Get a specific template
- `PUT /templates/{graph_id}`: Create a new version of a graph template
- `GET /graphs/{graph_id}/versions`: Get all versions of a graph
- `GET /templates/{graph_id}/versions`: Get all versions of a graph template
- `GET /graphs/{graph_id}/versions/{version}`: Get a specific graph version
- `PUT /graphs/{graph_id}/versions/active`: Set active graph version

Modified endpoints:
- `POST /graphs`: Now supports creating graphs directly from templates
- `GET /graphs/{graph_id}`: Add `version` query parameter
- `GET /graphs/{graph_id}/executions`: Add `graph_version` query parameters
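Hypothetical client calls against the endpoints listed above, using `requests`; the base URL and payload shapes are assumptions, not the documented API contract:

```python
import requests

BASE = "http://localhost:8000"  # assumed server address
graph_id = "my-graph-id"

# Create a new version of an existing graph (payload shape assumed):
requests.put(f"{BASE}/graphs/{graph_id}", json={"nodes": [], "links": []})

# List all versions of the graph, then activate a specific one:
versions = requests.get(f"{BASE}/graphs/{graph_id}/versions").json()
requests.put(f"{BASE}/graphs/{graph_id}/versions/active", json={"version": 2})
```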

## UI changes
- Improve `/build` / `FlowEditor` save mechanism
   - Implement updating current agent instead of creating a new agent on every save
   - Add check to only save a new version if local graph has been edited
      - Add `deepEquals` function to lib/utils.ts
- Add version indicators and selector on `/monitor`
  ![image](https://github.com/user-attachments/assets/742a66ee-4d12-4129-981d-fa6399e717fa)
   - Add shad/cn `DropdownMenu` component
- Update `AutoGPTServerAPI` client
   - Update input/output types with added attributes (see above)
   - Add parameter `version` to `getFlow`
   - Add parameter `flowVersion?` to `listFlowRunIDs`
   - Add endpoint `updateFlow(flowID, FlowUpdateable)`
   - Add endpoint `createFlow(fromTemplateID, templateVersion)` (overload)
   - Add endpoint `getFlowAllVersions(id)`
   - Add endpoint `setFlowActiveVersion(flowID, version)`
2024-07-15 20:26:00 +02:00
Toran Bruce Richards
93b6e0ee51 feat(blocks): Add support for additional LLM providers to LLM Block (#7434)
This commit adds support for the following models:

```python
# OpenAI Models
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"

# Anthropic models
CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"

# Groq models
LLAMA3_8B = "llama3-8b-8192"
LLAMA3_70B = "llama3-70b-8192"
MIXTRAL_8X7B = "mixtral-8x7b-32768"
GEMMA_7B = "gemma-7b-it"
GEMMA2_9B = "gemma2-9b-it"
```

Every model has been tested with a single LLM block and is confirmed to be working in that setup.
2024-07-15 20:30:56 +07:00
Bently
450f120510 feat(autogpt_builder): Fixes to wikipedia block (#7435) 2024-07-15 11:10:16 +07:00
Bently
2355d56801 feat(Builder): Updates to Nodes UI (#7414)
Updates to Nodes UI
2024-07-14 13:52:57 +01:00
Toran Bruce Richards
d3dae2264d tweak(docs): Replace all instances of "Node" with "Block". (#7416)
Replace all instances of "Node" with "Block" in the documentation.
2024-07-13 22:04:24 +01:00
Toran Bruce Richards
58313d9ae7 tweak(docs): Make link to The AutoGPT Server more prominent in docs navbar. (#7402) 2024-07-13 21:48:51 +01:00
Toran Bruce Richards
9e7dd4be74 tweak(rnd): Update AutoGPT logo file to be larger (#7408) 2024-07-13 21:08:09 +01:00
Krzysztof Czerwinski
6550bdc10c feat(rnd): AutoGPT Agent wrapper for Agent Server (#7365)
- Add `autogpt` and `forge` dependency to the `autogpt_server`
- Add `AutoGPTAgentBlock` that initializes and runs a single agent loop on execution
- Add `BlockAgent` that inherits from `autogpt` `Agent` and is a thin extension on the agent that allows disabling components
- Add `OutputComponent` that adds `output` command for the agent
2024-07-13 19:47:29 +01:00
Nicholas Tindle
0b9f3be6b8 feat(docs): add server new node docs (#7395)
Co-authored-by: Bently <27962737+Bentlybro@users.noreply.github.com>
2024-07-12 08:59:35 -05:00
Reinier van der Leer
a2d8d9bac9 feat(autogpt_builder/monitor): Monitor page v0.2 (#7385)
- Improve responsive grid layout
  - Remove `container` class from `<main>` to utilize full screen width

- Improve detail views & add view for run details
  - Make flow run list entries selectable
  - Create `FlowInfo` and `FlowRunInfo` components
  - Improve layout of `FlowRunsStats`

- Improve ScrollableLegend spacing & styling
  - Hide scroll bar of scrollable legend
  - Center legend items if there is space left
  - Round icons
  - Vertically align icons with labels
  - FIX: Add condition to not display legend items for series with `legendType="none"`

- Add periodic 5s refresh of non-terminal flow run statuses
  - Split off `refreshFlowRuns(flowID)` from `fetchFlowsAndRuns()`
  - Add effect to run `refreshFlowRuns` every 5 seconds

- Improve and expand FlowRun info
  - Add `FlowRun.totalRunTime`: sum of the individual execution durations of all nodes
  - Add `FlowRun.endTime`
  - Use `NodeExecutionResult.add_time` instead of `start_time` as `FlowRun.startTime`

- Sort Flows by last executed

- Add icons to navbar items & hide unused items Backtrack and Explore

- Change UI mentions of "(Agent) Flow" to "Agent"
2024-07-12 14:56:48 +02:00
Bently
b7096e01fb feat(Builder): Adds node to send messages to discord webhook (#7381)
* Adds core files for discord node

* fix naming

* Rename DiscordSendMessage.py to discordsendmessage.py

* update naming again

* fix conflicts

* fix imports
2024-07-12 11:58:11 +01:00
Krzysztof Czerwinski
bffb92bfbc fix(agent, forge): Fix Pydantic v2 protected namespace model_ warnings (#7340)
Rename `model_*` fields to `llm_*`
2024-07-12 12:05:03 +02:00
Bently
2ecce27653 feat(Builder): Adds node to get summary from wikipedia (#7384)
* adds node to get summary from wikipedia

* update name

* Fix naming again
2024-07-12 09:51:56 +01:00
Zamil Majdy
1089551869 feat(builder): Add service-level creds access for agent blocks (#7373)
### Background

Credentials for blocks could only be defined through the block input. The scope of this change is providing system-wide credentials that become the default value for these block inputs.

### Changes 🏗️

* Add system-wide credential support for agent blocks via `BlockFieldSecret`.
* Update llmcall & reddit block to adopt `BlockFieldSecret`.
2024-07-12 15:41:55 +07:00
Aarushi
58af7f9466 feat(autogpt_builder) Remove saving of execution data into sink node (#7383)
remove saving of execution data into sink node
2024-07-12 01:44:11 +02:00
Aarushi
b89609fd16 feat(autogpt_builder) Add logic for node disconnection (#7386)
add logic for node disconnection
2024-07-11 23:33:21 +01:00
Reinier van der Leer
dcfc3a4dad fix(autogpt_builder): Restore node input presets when loading a flow (#7379)
- Assign `node.input_default` to `hardcodedValues` in `FlowEditor:loadFlow(..)`
2024-07-11 18:37:04 +01:00
Nicholas Tindle
7cf6d3ff79 feat(dx): add a code-workspace for vscode (#7285)
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-07-11 19:24:35 +02:00
Toran Bruce Richards
f4f164ac15 chore(rnd): Update animation direction for customnode.css to "ping-pong" (#7377) 2024-07-11 16:09:02 +01:00
Bently
cd9d041fe5 feat(builder): add button for showing/hiding optional inputs (#7375)
add button for showing/hiding optional inputs in nodes
2024-07-11 15:14:07 +01:00
Bently
0fb8a84382 fix to make monitor use env (#7371)
fix to make it use env
2024-07-10 17:18:59 +01:00
Aarushi
a408da8317 aarushikansal/flow stats scroll legend (#7370)
* feat(agent_builder): Add shad/cn + Radix Icons + Tailwind

* move favicon.ico to static folder

* delete empty custominput.css

* feat(agent_builder): Add basic app layout with nav

* add timeline chart to Monitor page and improve overall mock-up functionality

* add some (mock) stats to the overview

* monitor/page.tsx: Switch from mock data to API & reorder file

* undo out-of-scope changes to Flow.tsx and .css files

* fix linting issue in `FlowRunsTimeline`

* wip scrollable

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-10 18:07:07 +02:00
Reinier van der Leer
976ff04cce feat(autogpt_builder): Initial Monitor page implementation (#7335)
* feat(agent_builder): Add shad/cn + Radix Icons + Tailwind

* move favicon.ico to static folder

* delete empty custominput.css

* feat(agent_builder): Add basic app layout with nav

* add timeline chart to Monitor page and improve overall mock-up functionality

* add some (mock) stats to the overview

* monitor/page.tsx: Switch from mock data to API & reorder file

* undo out-of-scope changes to Flow.tsx and .css files

* fix linting issue in `FlowRunsTimeline`

---------

Co-authored-by: Aarushi <aarushik93@gmail.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2024-07-10 18:04:51 +02:00
Swifty
3c91038089 tweak(ui): Switch from modal.css to tailwindcss classes (#7369)
* reverts dark theme for now

* change "Show/Hide nodes" button to be "Icon"

* swap over to light mode + fix sizing

* fix color for agent name + description text

* Change navbar to white

* Added darkmode tag for the navbar

* Added dark mode text color

* Changed to tailwind classes

---------

Co-authored-by: Bentlybro <tomnoon9@gmail.com>
2024-07-10 18:04:02 +02:00
Swifty
d4b441932d tweak(ui): Tweaked light mode color (#7366)
* reverts dark theme for now

* change "Show/Hide nodes" button to be "Icon"

* swap over to light mode + fix sizing

* fix color for agent name + description text

* Change navbar to white

* Added darkmode tag for the navbar

* Added dark mode text color

---------

Co-authored-by: Bentlybro <tomnoon9@gmail.com>
2024-07-10 18:02:06 +02:00
Bently
14ddb915bf feat(autogpt_builder): swapping to white theme for the layout (#7358)
* reverts dark theme for now

* change "Show/Hide nodes" button to be "Icon"

* swap over to light mode + fix sizing

* fix color for agent name + description text

* update icon
2024-07-10 18:01:07 +02:00
Aarushi
d6cbb48609 feat(agpt_builder) Fix data persistence on agent execution (#7363)
* fixing issue with data vanishing when executing agent

* fix rebasing

* add unique key

* reset to neutral & failed colours
2024-07-10 12:47:55 +01:00
Swifty
3789b00479 feat(autogpt_server): Expose rest api via websocket (#7350)
* Add in websocket event types

* adding in api endpoints

* Updated ws messages
2024-07-10 10:54:18 +01:00
Swifty
f94e81f48b feat(builder) Add save Agent functionality (#7361)
Add save functionality
2024-07-10 10:01:12 +01:00
Zamil Majdy
e10c4ee4cd fix(doc): Fix Auto GPT server Running The Server doc (#7360) 2024-07-10 10:31:52 +02:00
Reinier van der Leer
81dee568cb feat(autogpt_builder): Add basic layout with nav (#7317)
* feat(agent_builder): Add shad/cn + Radix Icons + Tailwind

* move favicon.ico to static folder

* delete empty custominput.css

* feat(agent_builder): Add basic app layout with nav

* Revert unwanted changes

* Fix /build + Flow layout issues

- Add `className` passthrough to `Flow` component
- Fix /build page layout

* unfix build/page.tsx indentation for git tracking

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-07-09 17:33:12 +01:00
Bently
7929f1a4ac Reads data from .env (#7357) 2024-07-09 17:20:31 +01:00
Bently
0a28c72bad Fix issue with node inputs (#7356) 2024-07-09 16:21:13 +01:00
Toran Bruce Richards
b9861a5308 fix(autogpt_builder): Update AutoGPT logo in page component (#7354)
feat(autogpt_builder): Update AutoGPT logo in page component
2024-07-09 16:10:13 +01:00
Zamil Majdy
af3a2bb5f5 fix(doc): Fix Auto GPT server Running The Server doc (#7355) 2024-07-09 16:09:43 +01:00
Bently
2f174837bd Add input for agent name & description (#7351) 2024-07-09 15:44:36 +01:00
Aarushi
b30eaf653a feat(autogpt_builder) Remove submit button (#7353)
remove submit button
2024-07-09 15:23:36 +01:00
Zamil Majdy
d9c9b22886 feat(rnd): Add Agent Block sample/test input output for auto-generated unit testing for block (#7310)
Sample test input and output on the block can serve as documentation and auto-generated unit-testing code for the agent block.

What's within the scope of this change:
Adding the fields for block tests (input, output, mocks) and their execution.

What's still outside the scope:
Handling of mocks and stubs for blocks using sensitive credentials, network calls, or 3rd-party connections. (A toy sketch of the test fields follows below.)
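A toy sketch of how sample input/output on a block can double as an auto-generated unit test; the field names and runner are illustrative, not the actual block interface:

```python
class ParrotBlock:
    # Sample I/O doubles as documentation and as test data.
    test_input = {"text": "hello"}
    test_output = {"text": "hello"}

    def run(self, input_data: dict) -> dict:
        return {"text": input_data["text"]}

def execute_block_test(block) -> None:
    assert block.run(block.test_input) == block.test_output

execute_block_test(ParrotBlock())  # passes silently
```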
2024-07-09 13:08:42 +02:00
Zamil Majdy
ff71b0beb7 fix(rnd): Avoid expensive call of get_service_client on block execution (#7349) 2024-07-09 13:07:57 +02:00
Swifty
57cc8b69e9 fix(agent_server): Fix async issue with executor integration test (#7348)
Fix execution completed check
2024-07-09 17:09:44 +07:00
Aarushi
7ce0c655d0 feat(autogpt_builder) Hook up execution progress animation to live execution data (#7347)
* hooked up animation to actual execution data

* remove console log
2024-07-09 09:32:38 +01:00
Bently
1e755f9e8d feat(agent_builder): Updates to Builder GUI + Nodes (#7345)
* Updates to Builder GUI + Nodes

* fix apiUrl back to local host
2024-07-08 21:33:07 +01:00
Swifty
f9bedb0fd9 fix(autogpt_server): Changed inconsistent timestamps to all use UTC (#7343)
* Changed timestamps to utc

* made other datetime.now timezone aware
2024-07-08 17:09:38 +02:00
Swifty
a32bc72314 feat(agent server) Add populate db command (#7342)
* Add populate db command

* Added in a reddit graph creation command
2024-07-08 17:09:20 +02:00
Reinier van der Leer
227092b669 feat(autogpt_builder): Enable loading an existing flow (#7338)
- Add `flowID` query parameter
- Add logic to load a flow from the server into the builder
2024-07-07 01:39:05 +02:00
Reinier van der Leer
39556a71cc feat(autogpt_builder): Improve server error handling in AutoGPTServerAPI 2024-07-06 01:03:23 -06:00
Reinier van der Leer
1fb8c1adac fix(autogpt_builder): Fix date conversion in AutoGPTServerAPI client
Explicitly parse dates in JSON response object in `getFlowExecutionInfo(..)`
2024-07-06 01:01:41 -06:00
Swifty
37e1780d76 feat(agent server): Added websocket communication (#7291)
* Refactor on the link structure and API

* Refactor on the link structure and API

* Cleanup IDS

* Remove run_id

* Update block interface

* Added websockets dependency

* Adding routes

* Adding in websocket code

* Added cli to test the websocket

* Added an outline of the message formats I plan on using

* Added websocket message types

* Updated poetry lock

* Adding subscription logic

* Updating subscription mechanisms

* update cli

* Send updates to server

* Get single execution data

* Fix type hints and renamed function

* add callback function and type hints

* fix type hints

* Updated manager to use property

* Added in websocket updates

* Added connection manager tests

* Added tests for ws_api

* trying to work around process issues

* test formatting

* Added a create and execute command for the cli

* Updated send format

* websockets command working

* cli update

* Added model.py

* feat: Update server.py and manager.py

- Initialize blocks in AgentServer lifespan context
- Remove unnecessary await in AgentServer get_graph_blocks
- Fix type hinting in manager.py
- Validate input data in validate_exec function

* fix tests

* feat: Add autogpt_server.blocks.sample and autogpt_server.blocks.text modules

This commit adds the `autogpt_server.blocks.sample` and `autogpt_server.blocks.text` modules to the project. These modules contain blocks that are used in the execution of the Autogpt server. The `ParrotBlock` and `PrintingBlock` classes are imported from `autogpt_server.blocks.sample`, while the `TextFormatterBlock` class is imported from `autogpt_server.blocks.text`. This addition enhances the functionality of the server by providing additional blocks for text processing and sample operations.

* fixed circular import issue

* Update readme

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-05 17:02:26 +02:00
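The subscription mechanism listed in this changelog follows a familiar shape; a minimal, hypothetical connection manager (the PR's actual message formats and names are not reproduced here):

```python
from typing import Any

# Hypothetical sketch of a websocket connection manager with per-graph
# subscriptions; names and message shapes are assumptions.
class ConnectionManager:
    def __init__(self) -> None:
        self.subscriptions: dict[str, set[Any]] = {}  # graph_id -> websockets

    async def subscribe(self, graph_id: str, websocket: Any) -> None:
        self.subscriptions.setdefault(graph_id, set()).add(websocket)

    async def send_execution_update(self, graph_id: str, update: dict) -> None:
        # Push an execution status update to every subscriber of the graph.
        for websocket in self.subscriptions.get(graph_id, set()):
            await websocket.send_json(update)
```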
Reinier van der Leer
0df2199c42 feat(autogpt_server): Add GET /graphs/{graph_id}/executions endpoint (#7330)
* feat(autogpt_builder): Add `AutoGPTServerAPI` client

* migrate API calls in Flow.tsx to new API client

* feat(autogpt_server): Add `/graphs/{graph_id}/executions` endpoint

In `data/execution.py`:
- Add `list_executions` function
- Rename `get_executions` to `get_execution_results`

In `server/server.py`:
- Add route
- Add `AgentServer.list_graph_runs`
- Rename `AgentServer.get_executions` to `get_run_execution_results`

* feat(autogpt_builder): Add `listFlowRunIDs` endpoint to `AutoGPTServerAPI` client

* Move `Schema` to `types.ts` and rename to `ObjectSchema`
2024-07-05 10:51:27 +01:00
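Assuming a FastAPI-style AgentServer (the lifespan mention elsewhere in this log suggests as much), the new route has roughly this shape; everything below is illustrative:

```python
from fastapi import FastAPI

app = FastAPI()
_runs = {"graph-1": ["run-1", "run-2"]}  # stand-in for the execution store

@app.get("/graphs/{graph_id}/executions")
def list_graph_runs(graph_id: str) -> list[str]:
    # In the real server this would delegate to data/execution.py's
    # list_executions(); here we read from the stand-in dict.
    return _runs.get(graph_id, [])
```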
Reinier van der Leer
200800312a feat(autogpt_builder): Add AutoGPTServerAPI client (#7328)
- Add fully typed `AutoGPTServerAPI` client in lib/autogpt_server_api.ts
- Migrate API calls in `Flow.tsx` to new API client
2024-07-05 11:03:59 +02:00
Reinier van der Leer
b7a90ce768 fix(autogpt_builder): Unignore lib/ and add lib/utils.ts 2024-07-04 17:40:59 -06:00
Reinier van der Leer
f359ed0983 feat(agent_builder): Add shad/cn UI library (#7316)
- Add shad/cn + Radix Icons + Tailwind
- move favicon.ico to static folder
- delete empty custominput.css

---------

Co-authored-by: Bentlybro <tomnoon9@gmail.com>
2024-07-05 08:14:45 +02:00
Aarushi
6456285753 feat(autogpt_builder) Update custom node to handle deeply nested structures (#7319)
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.

* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.

* Fix reddit block

* Fix serialization

* Eliminate deprecated class property

* Remove RedditCredentialsBlock

* Cache jsonschema computation, add dictionary construction

* Add dict_split and list_split to output, add more blocks

* Add objc_split for completeness, in both input and output

* Update reddit block

* Add reddit test (untested)

* Resolved json issue on pydantic

* Add creds check on client

* Add dict <--> pydantic object flexibility

* Fix error retry

* Skip reddit test

* Code cleanup

* Change prompt

* Make this work

* Fix linting

* Hide input_links and output_links from Node

* Add docs

* updating UI to handle deeply nested data structures for reddit usecase

* changing expected key in reddit post to comment

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-07-04 17:54:41 +01:00
Zamil Majdy
833944e228 feat(rnd): Add strong pydantic type & composite data extraction for Block input/output schema + add reddit agent-blocks (#7288)
* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.

* feat(rnd): Add type hint and strong pydantic type validation for block input/output + add reddit agent-blocks.

* Fix reddit block

* Fix serialization

* Eliminate deprecated class property

* Remove RedditCredentialsBlock

* Cache jsonschema computation, add dictionary construction

* Add dict_split and list_split to output, add more blocks

* Add objc_split for completeness, in both input and output

* Update reddit block

* Add reddit test (untested)

* Resolved json issue on pydantic

* Add creds check on client

* Add dict <--> pydantic object flexibility

* Fix error retry

* Skip reddit test

* Code cleanup

* Change prompt

* Make this work

* Fix linting

* Hide input_links and output_links from Node

* Add docs

---------

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-07-04 11:37:28 +01:00
Reinier van der Leer
db0e726954 fix(agent, benchmark): Specify path_type=Path for CLI path options/arguments
Without `path_type=Path`, an option/argument with `type=click.Path()` will return a `str`.
2024-07-03 15:21:04 -06:00
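Concretely, the click behavior in question:

```python
import click
from pathlib import Path

@click.command()
# Without path_type=Path, click.Path() yields a plain str.
@click.argument("workspace", type=click.Path(path_type=Path))
def main(workspace: Path) -> None:
    click.echo(workspace.resolve())

if __name__ == "__main__":
    main()
```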
Krzysztof Czerwinski
08612cc3bf refactor(agent, forge): Move tests from autogpt to forge (#7247)
- Move `autogpt/tests/vcr_cassettes` submodule to `forge/tests/vcr_cassettes`
- Remove not needed markers from `pyproject.toml`: `"requires_openai_api_key", "requires_huggingface_api_key"`
- Update relevant GitHub workflows

Moved relevant tests from `autogpt/tests` to appropriate directories:
- Component tests to their respective component dirs
- `autogpt/tests/unit/test_web_search.py` → `forge/components/web/test_search.py`
- `autogpt/tests/unit/test_git_commands.py` → `forge/components/git_operations/test_git_operations.py`
- `autogpt/tests/unit/test_file_operations.py` → `forge/components/file_manager/test_file_manager.py`
- `autogpt/tests/integration/test_image_gen.py` → `forge/components/image_gen/test_image_gen.py`
- `autogpt/tests/integration/test_web_selenium.py` → `forge/components/web/test_selenium.py`
- `autogpt/tests/integration/test_execute_code.py` → `forge/components/code_executor/test_code_executor.py`
- `autogpt/tests/unit/test_s3_file_storage.py` → `forge/file_storage/test_s3_file_storage.py`
- `autogpt/tests/unit/test_gcs_file_storage.py` → `forge/file_storage/test_gcs_file_storage.py`
- `autogpt/tests/unit/test_local_file_storage.py` → `forge/file_storage/test_local_file_storage.py`
- `autogpt/tests/unit/test_json.py` → `forge/json/test_parsing.py`
- `autogpt/tests/unit/test_logs.py` → `forge/logging/test_utils.py`
- `autogpt/tests/unit/test_url_validation.py` → `forge/utils/test_url_validator.py`
- `autogpt/tests/unit/test_text_file_parsers.py` → `forge/utils/test_file_operations.py`

- (Re)moved dependencies from `autogpt/pyproject.toml` that were only used in these test files.

Also:
- Added `load_env_vars` fixture to `forge/conftest.py`
- Fixed a type error in `forge/components/web/test_search.py`
- Merged `autogpt/.gitattributes` into root `.gitattributes`

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-07-04 02:09:01 +02:00
Reinier van der Leer
7415e24fc3 fix(forge): Add google-api-python-client-stubs and fix type error (#7303)
- Add `google-api-python-client-stubs` dev dependency
- Add version specification to `google-api-python-client` dependency
- Fix type error (by ignoring it) in forge/components/web/search.py
2024-07-03 23:26:37 +02:00
Dimitri Brooks
ecb054af56 feat: Add support for Claude 3.5 Sonnet (#7301)
add support for Claude 3.5 Sonnet
2024-07-03 16:02:14 +02:00
Krzysztof Czerwinski
39f70b0c83 docs(agent, forge): Update component configuration docs (#7232) 2024-07-03 09:50:38 +01:00
Krzysztof Czerwinski
7cb4d4a903 feat(forge, agent, benchmark): Upgrade to Pydantic v2 (#7280)
Update Pydantic dependency of `autogpt`, `forge` and `benchmark` to `^2.7`
[Pydantic Migration Guide](https://docs.pydantic.dev/2.7/migration/)

- Migrate usages of now-deprecated functions to their replacements
- Update `Field` definitions
  - Ellipsis `...` for required fields is deprecated
  - `Field` no longer supports extra `kwargs`, replace use of this feature with field metadata
- Replace `Config` class for specifying model configuration with `model_config = ConfigDict(..)`
- Removed `ModelContainer` in `BaseAgent`, component configuration dict is now directly serialized using Pydantic v2 helper functions
- Forked `agent-protocol` and updated `packages/client/python` for Pydantic v2 support: https://github.com/Significant-Gravitas/agent-protocol

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-07-02 20:45:32 +02:00
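For reference, the heart of this migration, shown on a generic model (an illustration of the Pydantic v2 changes, not code from this repo):

```python
from pydantic import BaseModel, ConfigDict, Field

# Pydantic v1 style (deprecated in v2):
#
# class AgentSettings(BaseModel):
#     name: str = Field(..., custom_hint="display")  # `...` and extra kwargs deprecated
#     class Config:
#         extra = "forbid"

# Pydantic v2 style:
class AgentSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")  # replaces the inner `Config` class

    # Required fields no longer need `...`; extra kwargs move to field metadata.
    name: str = Field(json_schema_extra={"custom_hint": "display"})

print(AgentSettings(name="demo").model_dump())  # .dict() -> .model_dump() in v2
```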
Aarushi
8feaced92e (rnd) Add support for multiple, dynamic inputs (#7296)
multi dynamic inputs
2024-07-02 09:56:33 +01:00
Reinier van der Leer
97e4cceb94 feat(agent, forge): Markdown-formatted history -> message history (#7228)
- Implement message based history in `ActionHistoryComponent`
- Make non-summarized message count configurable (`ActionHistoryComponent.full_message_count`)
- Run `ActionHistoryComponent` after `SystemComponent` so that history messages are last in the prompt
- Omit final instruction message if prompt already contains assistant messages
- Filter `raw_message` from `ActionProposal.schema()`

---------

Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
2024-07-02 03:47:55 +02:00
Reinier van der Leer
2fa4fd23af fix(agent): Make build dependencies optional to unbreak install (#7298)
* Create optional `build` dependency group
* Move `cx-freeze` dependency to `build` dependency group

To include the `build` group when installing dependencies, run `poetry install --with=build`.

Fixes #7297 (`cx-freeze` dependency install fails after #7271)
2024-07-02 02:08:02 +02:00
angeousta
976ea7cd3c fix readme (broken link) (#7293)
Update README.md
2024-07-01 09:44:46 +02:00
Bently
d5ab83aa34 Updating api calls in AutoGPT builder (#7275)
* update api endpoints

* get multi-node working + fix node output

* updated multi-node running + re-add "wire" on rebuild

* Fix node data mapping

* removed getStatusValue
2024-06-28 10:39:26 +01:00
Reinier van der Leer
cbae8b5c14 chore(agent, forge, benchmark): Clean up dependencies (#7286)
* Remove unused dependencies
* Move dependencies for moved code from `autogpt` to `forge`
* Loosen dependency for `uvicorn` to improve compatibility
2024-06-28 02:21:36 +02:00
Aarushi
854080f7af ci(builder): Add CI for AutoGPT Builder (#7267)
Add a simple Node/NPM linting workflow for `rnd/autogpt_builder/`

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-06-28 01:30:34 +02:00
Reinier van der Leer
fbb3891e79 chore(forge, agent, benchmark): Update pytest-asyncio to v0.23.x
Resolves #7283
2024-06-27 14:09:36 -06:00
Reinier van der Leer
4d8ee65ca7 fix(pre-commit): Add benchmark/poetry.lock to "Typecheck - Benchmark" trigger file pattern 2024-06-27 14:05:12 -06:00
Aarushi
6093acc813 Add more functionality to Nodes (#7278)
updating node behaviour
2024-06-27 17:03:10 +01:00
Nicholas Tindle
785a40ff9d feat(server, autogpt): Add Example files and update build option (#7271) 2024-06-27 09:56:21 -05:00
Zamil Majdy
2bc22c5450 feat(rnd): Add support for dynamic input as list for AgentServer Block (#7268)
On AgentServer, to create a block like StringFormatterBlock or LlmCallBlock, we need some way to dynamically link input pins and aggregate them into a single list input. This gives the user a better experience when constructing an input and linking it from the outputs of other nodes. The scope of this change is adding support for that in the least intrusive way.

Proposal
To differentiate an input list's name from its singular entries, we use a `_$_<index>` suffix. For example:
For the input `items: list[int]`, you can set a pin `items` with a value like `[1,2,3,4]`. But you can also add input pins like `items_$_0` or `items_$_4` with values `1` or `2`, which will be appended to the `items` input in alphabetical order.
The execution engine guarantees that execution waits until every input pin value has been produced, so an input pin with a list input will produce a fixed-size list.
2024-06-27 21:51:34 +07:00
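A rough sketch of the aggregation rule described above (the helper below is hypothetical; only the `_$_<index>` naming follows the proposal):

```python
import re
from typing import Any

LIST_ENTRY = re.compile(r"^(?P<name>.+)_\$_\d+$")

def merge_list_pins(pins: dict[str, Any]) -> dict[str, Any]:
    """Fold `items_$_<index>` pins into the `items` list input."""
    merged: dict[str, Any] = {}
    entry_pins: dict[str, list[tuple[str, Any]]] = {}
    for key, value in pins.items():
        match = LIST_ENTRY.match(key)
        if match:
            entry_pins.setdefault(match["name"], []).append((key, value))
        else:
            merged[key] = value
    for name, entries in entry_pins.items():
        # Entries are appended in alphabetical pin-name order, per the proposal.
        merged[name] = list(merged.get(name, [])) + [v for _, v in sorted(entries)]
    return merged

# -> {'items': [1, 2, 3, 4, 5]}; the engine waits until every pin value
#    has been produced, so the resulting list has a fixed size.
print(merge_list_pins({"items": [1, 2, 3], "items_$_0": 4, "items_$_1": 5}))
```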
Aarushi
cdc658695f Add reactflow component in AutoGPT builder (#7270)
* Getting started with nextjs

* fix linting

* remove gitignore for package.json

* pulling in reactflow components

* updating css

* use environment variables

* clean up css / ui a lil

* Fixed nodes/run button animation

so they are always visible

---------

Co-authored-by: Bentlybro <tomnoon9@gmail.com>
2024-06-27 10:14:25 +01:00
Aarushi
dd960f9306 Add support for nextjs based app (#7266)
* Getting started with nextjs

* fix linting

* remove gitignore for package.json
2024-06-27 10:02:54 +01:00
Krzysztof Czerwinski
6e1c9d44a4 Update Agent Server README.md 2024-06-26 19:41:12 +02:00
Zamil Majdy
26bcb26bb7 feat(rnd): Refactor AgentServer Node Input/Output Relation & Block output interface (#7231)
### Background

The current implementation of AgentServer doesn't allow a single pin to be connected to multiple nodes. This is problematic when a single node output needs to be propagated into many nodes, or when multiple nodes feed data into a single pin (first come, first served).

This infra change is also part of the preparation for changing the `block` interface to return a stream of outputs instead of a single output. Treating blocks as streams requires this capability.

### Changes 🏗️

* Update block run interface from returning `(output_name, output_data)` to `Generator[(output_name, output_data)]`
* Removed the `agent` term in the API, replacing it with `graph` for consistency.
* Reintroduced `AgentNodeExecutionInputOutput`. `AgentNodeExecution` input & output will be a list of `AgentNodeExecutionInputOutput`, which describes the input & output data of its execution. This gives an execution a 1-to-many relation to its input/output data.
* Propagating the relation and block interface change into the execution engine.
2024-06-26 12:41:55 +02:00
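In illustrative terms, the interface change looks like this (signatures are approximations of the described change, not the repo's actual code):

```python
from typing import Any, Generator

BlockOutput = Generator[tuple[str, Any], None, None]

# Before: a block returned exactly one (output_name, output_data) pair.
def run_old(input_data: dict) -> tuple[str, Any]:
    return "result", input_data["text"].upper()

# After: a block yields a stream of pairs, so one pin can feed many nodes
# and a block can emit multiple outputs over time.
def run_new(input_data: dict) -> BlockOutput:
    yield "result", input_data["text"].upper()
    yield "length", len(input_data["text"])

for name, data in run_new({"text": "hello"}):
    print(name, data)  # result HELLO / length 5
```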
Nicholas Tindle
f04ddceacf docs(forge): Update and rename QUICKSTART.md to FORGE-QUICKSTART.md (#7215) 2024-06-25 19:03:05 -05:00
Bently
3e01b19d6f chore(forge): Update duckduckgo-search to v6.1.7 (#7254)
This should (for now) mitigate the RateLimitErrors that people have been experiencing.
---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-06-25 03:24:10 +02:00
Zamil Majdy
9f1e521857 feat(rnd): Add AutoGPT server scheduling service (#7226)
### Background

Agent execution should be able to be triggered in a recurring manner.
This PR introduced an ExecutionScheduling service, a process responsible for managing the execution schedule and triggering its execution based on a predefined cron expression.

### Changes 🏗️

* Added `scheduler.py` / `ExecutionScheduler` implementation.
* Added scheduler test.
* Added `AgentExecutionSchedule` table and its logical model & prisma queries.
* Moved `add_execution` from API server to `execution_manager`
2024-06-24 09:41:02 +07:00
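A toy illustration of the cron-driven scheduling idea (using the third-party croniter package, which is an assumption here; the actual ExecutionScheduler service is more involved):

```python
from datetime import datetime, timezone
from croniter import croniter  # third-party; assumed for illustration only

def next_run(cron_expr: str, after: datetime) -> datetime:
    """Compute the next execution time for a schedule's cron expression."""
    return croniter(cron_expr, after).get_next(datetime)

now = datetime.now(timezone.utc)
# An AgentExecutionSchedule row would carry a cron expression like this one:
print(next_run("0 9 * * MON", now))  # next Monday 09:00 UTC
```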
Zamil Majdy
d9226888b2 feat(rnd): Add node metadata on Agent Server Node object (#7237) 2024-06-21 17:50:50 +01:00
Zamil Majdy
210d7738b9 feat(rnd): Add IPC support on autogpt_server (#7212)
### Background

This PR adds support for IPC on autogpt_server.
To make this happen, there are a couple of refactoring efforts being made (will be described in the `Changes` section).
Currently, there are three independent processes:

```
AgentServer ----> ExecutionManager
    |
    +--> ExecutionScheduler
```


### Changes 🏗️

* Added Pyro5 for IPC support.
* Introduced `AppService`: a class to construct an independent process that can expose a method to other running processes (this is analogous to a microservice).
* Introduced `AppProcess`: a class, used by `AppService`, for creating a child process that can be executed in the background.
* Adapted the existing codebase to use `AppService`.
2024-06-19 22:49:47 +07:00
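Underneath, the Pyro5 mechanics look roughly like this minimal standalone example (not the AppService wrapper itself):

```python
import threading
import Pyro5.api

@Pyro5.api.expose
class ExecutionManager:
    def add_execution(self, graph_id: str) -> str:
        return f"queued {graph_id}"

# Service side: register the object with a daemon and serve requests.
daemon = Pyro5.api.Daemon()
uri = daemon.register(ExecutionManager)
threading.Thread(target=daemon.requestLoop, daemon=True).start()

# Client side (another process in the real setup): call through a proxy.
with Pyro5.api.Proxy(uri) as manager:
    print(manager.add_execution("graph-123"))  # queued graph-123
```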
Krzysztof Czerwinski
c19ab2b24f feat(forge): Component-specific configuration (#7170)
Remove many env vars and use component-level configuration that can be loaded from a file instead.

### Changed

- `BaseAgent` provides `serialize_configs` and `deserialize_configs` that can save and load all component configuration as a JSON `str`. Deserialized components/values overwrite existing values, so not all values need to be present in the serialized config.
- Decoupled `forge/content_processing/text.py` from `Config`
- Kept `execute_local_commands` in `Config` because it's needed to know if OS info should be included in the prompt
- Updated docs to reflect changes
- Renamed `Config` to `AppConfig`

### Added

- Added `ConfigurableComponent` class for components and following configs:
  - `ActionHistoryConfiguration`
  - `CodeExecutorConfiguration`
  - `FileManagerConfiguration` - the file manager now allows multiple agents to use the same workspace
  - `GitOperationsConfiguration`
  - `ImageGeneratorConfiguration`
  - `WebSearchConfiguration`
  - `WebSeleniumConfiguration`
- `BaseConfig` in `forge` and moved `Config` (now inherits from `BaseConfig`) back to `autogpt`
- Required `config_class` attribute on the `ConfigurableComponent` class that should be set to the configuration class for the component
- `--component-config-file` CLI option, `COMPONENT_CONFIG_FILE` env var, and a corresponding field in `Config`. This option allows loading configuration from a specific file; the CLI option takes precedence over the env var.
- Added comments to config models

### Removed

- Unused `change_agent_id` method from `FileManagerComponent`
- Unused `allow_downloads` from `Config` and CLI options (it should be in web component config if needed)
- CLI option `--browser-name` (the option is inside `WebSeleniumConfiguration`)
- Unused `workspace_directory` from CLI options
- No longer needed variables from `Config` and docs
- Unused fields from `Config`: `image_size`, `audio_to_text_provider`, `huggingface_audio_to_text_model`
- Removed `files` and `workspace` class attributes from `FileManagerComponent`
2024-06-19 09:14:01 +01:00
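In spirit, component configuration works like this (names follow this changelog, but the shapes below are guesses, not the repo's code):

```python
import json
from pydantic import BaseModel

class WebSearchConfiguration(BaseModel):
    max_results: int = 10
    safe_search: bool = True

class WebSearchComponent:
    config_class = WebSearchConfiguration  # the required class attribute

    def __init__(self) -> None:
        self.config = self.config_class()

component = WebSearchComponent()

# serialize_configs: dump every component's config to one JSON str.
serialized = json.dumps({"WebSearchComponent": component.config.model_dump()})

# deserialize_configs: values present in the file overwrite existing ones,
# so a partial config like {"max_results": 3} would also be accepted.
loaded = json.loads(serialized)
component.config = component.config_class(**loaded["WebSearchComponent"])
```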
Reinier van der Leer
02dc198a9f fix(agent): Fix resumption from mid-cycle state in CLI mode (#7224)
When an agent is resumed from a mid-cycle state (having made a proposal but not executed it yet), we need to use the previously determined `current_episode.action` proposal instead of calling `agent.propose_action()` again.
2024-06-18 19:30:24 +02:00
Reinier van der Leer
227cf41612 fix(agent, forge): Make LLM API key check provider-agnostic (#7220)
* Rename `assert_config_has_openai_api_key` to `assert_config_has_required_llm_api_keys`
* Make OpenAI credential check conditional (only if an OpenAI model is selected in the config)
* Implement checks for Groq and Anthropic credentials
* Use API calls for Groq and OpenAI credential checks to make sure the keys are valid
2024-06-14 20:56:03 +02:00
Reinier van der Leer
66f373fb57 docs: Update CONTRIBUTING.md
- Add reference to Code Guidelines
- Reorder list of directives to more logical order
2024-06-14 07:45:14 -07:00
Nicholas Tindle
8f3ed733b9 chore(rnd): Fix warning during server tests (#7213) 2024-06-14 06:56:53 -05:00
Krzysztof Czerwinski
9f71cd2437 feat(forge): Unbreak forge agent (#7196)
Revert some changes to fix forge agent and enable components support.
- Rename forge `Agent` to `ProtocolAgent`
- Bring back and update `forge/app.py` and `forge/agent/forge_agent.py`
- `ForgeAgent` inherits from `BaseAgent`, supports component execution and runs the same pipelines as autogpt Agent
- Update forge version from 0.1.0 to 0.2.0
- Update code comments
2024-06-12 13:45:00 +01:00
Nicholas Tindle
6ec708c771 ci(server): Agent Server CI (#7193) 2024-06-12 00:29:23 +07:00
Zamil Majdy
af3febd79f feat(rnd): Add missing code on execution logic for AutoGPT Server (#7209)
Add missing changes from previous commit
2024-06-11 16:15:52 +02:00
SwiftyOS
60e0d4c530 fix(rnd): fixed CLI repeated start cmd quality of life
SwiftyOS
4aeae53a61 fixed typo 2024-06-11 12:29:13 +02:00
SwiftyOS
d5c4eca739 Check if pid exists 2024-06-11 12:27:19 +02:00
SwiftyOS
1e4ef7b313 chore(benchmark): delete notebooks 2024-06-11 11:30:46 +02:00
Swifty
fd18877dae feat(rnd): CLI to Run and Stop the Server (#7195) 2024-06-11 11:21:34 +02:00
Nicholas Tindle
c754ecd5d1 fix(docs): bad link from Options to built in components config (#7207) 2024-06-10 17:57:29 +01:00
Zamil Majdy
e688cc31f0 feat(rnd): Implement agent block execution logic for AutoGPT Server (#7194)
### Background

This PR implements the main logic of the block execution engine for AutoGPT-Server.
An integration test is added to test the behavior.

*What you can do now with this PR*:
You can manually create a graph, by using the existing blocks as nodes (or write your own). Then execute the graph with an input.

*What you can't do yet*:
Listen to the graph execution result/update (you can follow the `AgentNodeExecution` table result, though).

### Changes 🏗️
* Split `data.py` (model file) into three modules:
    * `execution`: a model for node execution.
    * `graph`: a model for graph structure.
    * `block`: a model for agent block/component.
* Implemented executor main logic
* Simplify db structure:
    * Remove `AgentBlockInputOutput` in favor of `inputSchema` & `outputSchema` using serialized json/dict structure.
    * Remove `id` on `AgentBlock` in favor of using name (class name of the block) as its identifier.
    * Added `constantInput` column for `AgentNode` for hard-coded input/block configuration. Hence, removing `executionStateData` on `AgentNodeExecution`.
    * Rename AgentNodeLink input/output to source/sink to avoid confusion
* Change multithreading to multiprocessing, to allow the use of multiple `prisma` asynchronous clients.
2024-06-10 19:30:34 +07:00
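A drastically simplified picture of the execution loop described here (toy code, not the real engine):

```python
from collections import deque

# Toy "blocks" and a two-node graph; the real engine stores these in Prisma.
def add_one(data): return {"out": data["in"] + 1}
def double(data): return {"out": data["in"] * 2}

nodes = {"a": add_one, "b": double}
links = [("a", "out", "b", "in")]  # (source node, source pin, sink node, sink pin)

def execute(start: str, node_input: dict) -> dict:
    results, queue = {}, deque([(start, node_input)])
    while queue:
        node_id, data = queue.popleft()
        output = nodes[node_id](data)          # run the node's block
        results[node_id] = output
        for src, src_pin, sink, sink_pin in links:
            if src == node_id:                 # propagate along matching links
                queue.append((sink, {sink_pin: output[src_pin]}))
    return results

print(execute("a", {"in": 3}))  # {'a': {'out': 4}, 'b': {'out': 8}}
```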
Nicholas Tindle
b803e42189 feat(docs): add example uses and description of autogpt to the readme (#7205) 2024-06-10 13:08:55 +01:00
Reinier van der Leer
04769c54e4 docs: Update CONTRIBUTING.md
- Add links to roadmap
- Disable links to kanban board (which currently is not in active use)
- Clarify instruction to not submit broken code
2024-06-09 23:43:25 +02:00
Reinier van der Leer
8f18aebf90 Update wiki links to new location (Nexus/wiki -> AutoGPT/wiki) 2024-06-09 23:13:25 +02:00
565 changed files with 50758 additions and 4930 deletions


@@ -23,6 +23,18 @@
# Frontend
!frontend/build/web/
# rnd
!rnd/
# Explicitly re-ignore some folders
.*
**/__pycache__
# rnd
rnd/autogpt_builder/.next/
rnd/autogpt_builder/node_modules
rnd/autogpt_builder/.env.example
rnd/autogpt_builder/.env.local
rnd/autogpt_server/.env
rnd/autogpt_server/.venv/
rnd/market/.env

.gitattributes

@@ -3,3 +3,8 @@ frontend/build/** linguist-generated
**/poetry.lock linguist-generated
docs/_javascript/** linguist-vendored
# Exclude VCR cassettes from stats
forge/tests/vcr_cassettes/**/**.y*ml linguist-generated
* text=auto

.github/CODEOWNERS

@@ -1,5 +1,7 @@
.github/workflows/ @Significant-Gravitas/devops
autogpt/ @Significant-Gravitas/maintainers
forge/ @Significant-Gravitas/forge-maintainers
benchmark/ @Significant-Gravitas/benchmark-maintainers
frontend/ @Significant-Gravitas/frontend-maintainers
* @Significant-Gravitas/maintainers
.github/workflows/ @Significant-Gravitas/devops
forge/ @Significant-Gravitas/forge-maintainers
benchmark/ @Significant-Gravitas/benchmark-maintainers
frontend/ @Significant-Gravitas/frontend-maintainers
rnd/infra @Significant-Gravitas/devops
.github/CODEOWNERS @Significant-Gravitas/admins


@@ -16,7 +16,7 @@ body:
[discussions]: https://github.com/Significant-Gravitas/AutoGPT/discussions
[#tech-support]: https://discord.com/channels/1092243196446249134/1092275629602394184
[existing issues]: https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue
[wiki page on Contributing]: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
[wiki page on Contributing]: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing
- type: checkboxes
attributes:
@@ -88,14 +88,16 @@ body:
- type: dropdown
attributes:
label: Do you use OpenAI GPT-3 or GPT-4?
label: What LLM Provider do you use?
description: >
If you are using AutoGPT with `SMART_LLM=gpt-3.5-turbo`, your problems may be caused by
the [limitations](https://github.com/Significant-Gravitas/AutoGPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
options:
- GPT-3.5
- GPT-4
- GPT-4(32k)
- Azure
- Groq
- Anthropic
- Llamafile
- Other (detail in issue)
validations:
required: true
@@ -126,6 +128,13 @@ body:
label: Specify the area
description: Please specify the area you think is best related to the issue.
- type: input
attributes:
label: What commit or version are you using?
description: It helps us reproduce the issue to know what version of the software you were using when this happened. Please run `git log -n 1 --pretty=format:"%H"` to output the full commit hash.
validations:
required: true
- type: textarea
attributes:
label: Describe your issue.


@@ -5,7 +5,7 @@ body:
- type: markdown
attributes:
value: |
First, check out our [wiki page on Contributing](https://github.com/Significant-Gravitas/Nexus/wiki/Contributing)
First, check out our [wiki page on Contributing](https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing)
Please provide a searchable summary of the issue in the title above ⬆️.
- type: checkboxes
attributes:


@@ -6,26 +6,18 @@
<!-- Concisely describe all of the changes made in this pull request: -->
### PR Quality Scorecard ✨
### Testing 🔍
> [!NOTE]
Only for the new autogpt platform, currently in rnd/
<!--
Check out our contribution guide:
https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
1. Avoid duplicate work, issues, PRs etc.
2. Also consider contributing something other than code; see the [contribution guide]
for options.
3. Clearly explain your changes.
4. Avoid making unnecessary changes, especially if they're purely based on personal
preferences. Doing so is the maintainers' job. ;-)
Please make sure your changes have been tested and are in good working condition.
Here is a list of our critical paths, if you need some inspiration on what and how to test:
-->
- [x] Have you used the PR description template? &ensp; `+2 pts`
- [ ] Is your pull request atomic, focusing on a single change? &ensp; `+5 pts`
- [ ] Have you linked the GitHub issue(s) that this PR addresses? &ensp; `+5 pts`
- [ ] Have you documented your changes clearly and comprehensively? &ensp; `+5 pts`
- [ ] Have you changed or added a feature? &ensp; `-4 pts`
- [ ] Have you added/updated corresponding documentation? &ensp; `+4 pts`
- [ ] Have you added/updated corresponding integration tests? &ensp; `+5 pts`
- [ ] Have you changed the behavior of AutoGPT? &ensp; `-5 pts`
- [ ] Have you also run `agbenchmark` to verify that these changes do not regress performance? &ensp; `+10 pts`
- Create from scratch and execute an agent with at least 3 blocks
- Import an agent from file upload, and confirm it executes correctly
- Upload agent to marketplace
- Import an agent from marketplace and confirm it executes correctly
- Edit an agent from monitor, and confirm it executes correctly

.github/labeler.yml

@@ -17,3 +17,11 @@ Frontend:
documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Builder:
- changed-files:
- any-glob-to-any-file: rnd/autogpt_builder/**
Server:
- changed-files:
- any-glob-to-any-file: rnd/autogpt_server/**


@@ -0,0 +1,41 @@
name: AutoGPT Builder CI
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'rnd/autogpt_builder/**'
pull_request:
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'rnd/autogpt_builder/**'
defaults:
run:
shell: bash
working-directory: rnd/autogpt_builder
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '21'
- name: Install dependencies
run: |
npm install
- name: Check formatting with Prettier
run: |
npx prettier --check .
- name: Run lint
run: |
npm run lint


@@ -6,13 +6,11 @@ on:
paths:
- '.github/workflows/autogpt-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/autogpt-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
concurrency:
group: ${{ format('autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -73,37 +71,6 @@ jobs:
git config --global user.name "Auto-GPT-Bot"
git config --global user.email "github-bot@agpt.co"
- name: Checkout cassettes
if: ${{ startsWith(github.event_name, 'pull_request') }}
env:
PR_BASE: ${{ github.event.pull_request.base.ref }}
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
cassette_base_branch="${PR_BASE}"
cd tests/vcr_cassettes
if ! git ls-remote --exit-code --heads origin $cassette_base_branch ; then
cassette_base_branch="master"
fi
if git ls-remote --exit-code --heads origin $cassette_branch ; then
git fetch origin $cassette_branch
git fetch origin $cassette_base_branch
git checkout $cassette_branch
# Pick non-conflicting cassette updates from the base branch
git merge --no-commit --strategy-option=ours origin/$cassette_base_branch
echo "Using cassettes from mirror branch '$cassette_branch'," \
"synced to upstream branch '$cassette_base_branch'."
else
git checkout -b $cassette_branch
echo "Branch '$cassette_branch' does not exist in cassette submodule." \
"Using cassettes from '$cassette_base_branch'."
fi
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
@@ -163,80 +130,6 @@ jobs:
token: ${{ secrets.CODECOV_TOKEN }}
flags: autogpt-agent,${{ runner.os }}
- id: setup_git_auth
name: Set up git token authentication
# Cassettes may be pushed even when tests fail
if: success() || failure()
run: |
config_key="http.${{ github.server_url }}/.extraheader"
if [ "${{ runner.os }}" = 'macOS' ]; then
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64)
else
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64 -w0)
fi
git config "$config_key" \
"Authorization: Basic $base64_pat"
cd tests/vcr_cassettes
git config "$config_key" \
"Authorization: Basic $base64_pat"
echo "config_key=$config_key" >> $GITHUB_OUTPUT
- id: push_cassettes
name: Push updated cassettes
# For pull requests, push updated cassettes even when tests fail
if: github.event_name == 'push' || (! github.event.pull_request.head.repo.fork && (success() || failure()))
env:
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
if [ "${{ startsWith(github.event_name, 'pull_request') }}" = "true" ]; then
is_pull_request=true
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
else
cassette_branch="${{ github.ref_name }}"
fi
cd tests/vcr_cassettes
# Commit & push changes to cassettes if any
if ! git diff --quiet; then
git add .
git commit -m "Auto-update cassettes"
git push origin HEAD:$cassette_branch
if [ ! $is_pull_request ]; then
cd ../..
git add tests/vcr_cassettes
git commit -m "Update cassette submodule"
git push origin HEAD:$cassette_branch
fi
echo "updated=true" >> $GITHUB_OUTPUT
else
echo "updated=false" >> $GITHUB_OUTPUT
echo "No cassette changes to commit"
fi
- name: Post Set up git token auth
if: steps.setup_git_auth.outcome == 'success'
run: |
git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
git submodule foreach git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
- name: Apply "behaviour change" label and comment on PR
if: ${{ startsWith(github.event_name, 'pull_request') }}
run: |
PR_NUMBER="${{ github.event.pull_request.number }}"
TOKEN="${{ secrets.PAT_REVIEW }}"
REPO="${{ github.repository }}"
if [[ "${{ steps.push_cassettes.outputs.updated }}" == "true" ]]; then
echo "Adding label and comment..."
echo $TOKEN | gh auth login --with-token
gh issue edit $PR_NUMBER --add-label "behaviour change"
gh issue comment $PR_NUMBER --body "You changed AutoGPT's behaviour on ${{ runner.os }}. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged."
fi
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4


@@ -6,13 +6,11 @@ on:
paths:
- '.github/workflows/autogpt-docker-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/autogpt-docker-ci.yml'
- 'autogpt/**'
- '!autogpt/tests/vcr_cassettes'
concurrency:
group: ${{ format('autogpt-docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}

.github/workflows/autogpt-infra-ci.yml (new file)

@@ -0,0 +1,56 @@
name: AutoGPT Builder Infra
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- 'rnd/infra/**'
pull_request:
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- 'rnd/infra/**'
defaults:
run:
shell: bash
working-directory: rnd/infra
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: TFLint
uses: pauloconnor/tflint-action@v0.0.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tflint_path: terraform/
tflint_recurse: true
tflint_changed_only: false
- name: Set up Helm
uses: azure/setup-helm@v4.2.0
with:
version: v3.14.4
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --target-branch ${{ github.event.repository.default_branch }})
if [[ -n "$changed" ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Run chart-testing (lint)
if: steps.list-changed.outputs.changed == 'true'
run: ct lint --target-branch ${{ github.event.repository.default_branch }}

.github/workflows/autogpt-server-ci.yml (new file)

@@ -0,0 +1,155 @@
name: AutoGPT Server CI
on:
push:
branches: [master, development, ci-test*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "rnd/autogpt_server/**"
pull_request:
branches: [master, development, release-*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "rnd/autogpt_server/**"
concurrency:
group: ${{ format('autogpt-server-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: rnd/autogpt_server
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
steps:
- name: Setup PostgreSQL
uses: ikalnytskyi/action-setup-postgres@v6
with:
username: ${{ secrets.DB_USER || 'postgres' }}
password: ${{ secrets.DB_PASS || 'postgres' }}
database: postgres
port: 5432
id: postgres
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
working-directory: "."
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('rnd/autogpt_server/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
run: poetry install
- name: Generate Prisma Client
run: poetry run prisma generate
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
env:
CONNECTION_STR: ${{ steps.postgres.outputs.connection-uri }}
- id: lint
name: Run Linter
run: poetry run lint
- name: Run pytest with coverage
run: |
if [[ "${{ runner.debug }}" == "1" ]]; then
poetry run pytest -vv -o log_cli=true -o log_cli_level=DEBUG test
else
poetry run pytest -vv test
fi
if: success() || (failure() && steps.lint.outcome == 'failure')
env:
LOG_LEVEL: ${{ runner.debug && 'DEBUG' || 'INFO' }}
env:
CI: true
PLAIN_OUTPUT: True
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
DB_USER: ${{ secrets.DB_USER || 'postgres' }}
DB_PASS: ${{ secrets.DB_PASS || 'postgres' }}
DB_NAME: postgres
DB_PORT: 5432
RUN_ENV: local
PORT: 8080
DATABASE_URL: postgresql://${{ secrets.DB_USER || 'postgres' }}:${{ secrets.DB_PASS || 'postgres' }}@localhost:5432/${{ secrets.DB_NAME || 'postgres'}}
# - name: Upload coverage reports to Codecov
# uses: codecov/codecov-action@v4
# with:
# token: ${{ secrets.CODECOV_TOKEN }}
# flags: autogpt-server,${{ runner.os }}


@@ -0,0 +1,41 @@
name: AutoGPT Server Docker Build & Push
on:
push:
branches: [ updated-docker-ci ]
paths:
- '**'
defaults:
run:
shell: bash
working-directory: AutoGPT
env:
PROJECT_ID: agpt-dev
IMAGE_NAME: agpt-server-dev
REGION: us-central1
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v0.2.1
with:
project_id: ${{ env.PROJECT_ID }}
service_account_key: ${{ secrets.GCP_SA_KEY }}
export_default_credentials: true
- name: Configure Docker
run: gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev
- name: Build Docker image
run: docker build -t ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }}:${{ github.sha }} -f rnd/autogpt_server/Dockerfile .
- name: Push Docker image
run: docker push ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }}:${{ github.sha }}


@@ -6,11 +6,13 @@ on:
paths:
- '.github/workflows/forge-ci.yml'
- 'forge/**'
- '!forge/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/forge-ci.yml'
- 'forge/**'
- '!forge/tests/vcr_cassettes'
concurrency:
group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -66,6 +68,37 @@ jobs:
fetch-depth: 0
submodules: true
- name: Checkout cassettes
if: ${{ startsWith(github.event_name, 'pull_request') }}
env:
PR_BASE: ${{ github.event.pull_request.base.ref }}
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
cassette_base_branch="${PR_BASE}"
cd tests/vcr_cassettes
if ! git ls-remote --exit-code --heads origin $cassette_base_branch ; then
cassette_base_branch="master"
fi
if git ls-remote --exit-code --heads origin $cassette_branch ; then
git fetch origin $cassette_branch
git fetch origin $cassette_base_branch
git checkout $cassette_branch
# Pick non-conflicting cassette updates from the base branch
git merge --no-commit --strategy-option=ours origin/$cassette_base_branch
echo "Using cassettes from mirror branch '$cassette_branch'," \
"synced to upstream branch '$cassette_base_branch'."
else
git checkout -b $cassette_branch
echo "Branch '$cassette_branch' does not exist in cassette submodule." \
"Using cassettes from '$cassette_base_branch'."
fi
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
@@ -121,6 +154,80 @@ jobs:
token: ${{ secrets.CODECOV_TOKEN }}
flags: forge,${{ runner.os }}
- id: setup_git_auth
name: Set up git token authentication
# Cassettes may be pushed even when tests fail
if: success() || failure()
run: |
config_key="http.${{ github.server_url }}/.extraheader"
if [ "${{ runner.os }}" = 'macOS' ]; then
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64)
else
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64 -w0)
fi
git config "$config_key" \
"Authorization: Basic $base64_pat"
cd tests/vcr_cassettes
git config "$config_key" \
"Authorization: Basic $base64_pat"
echo "config_key=$config_key" >> $GITHUB_OUTPUT
- id: push_cassettes
name: Push updated cassettes
# For pull requests, push updated cassettes even when tests fail
if: github.event_name == 'push' || (! github.event.pull_request.head.repo.fork && (success() || failure()))
env:
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
if [ "${{ startsWith(github.event_name, 'pull_request') }}" = "true" ]; then
is_pull_request=true
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
else
cassette_branch="${{ github.ref_name }}"
fi
cd tests/vcr_cassettes
# Commit & push changes to cassettes if any
if ! git diff --quiet; then
git add .
git commit -m "Auto-update cassettes"
git push origin HEAD:$cassette_branch
if [ ! $is_pull_request ]; then
cd ../..
git add tests/vcr_cassettes
git commit -m "Update cassette submodule"
git push origin HEAD:$cassette_branch
fi
echo "updated=true" >> $GITHUB_OUTPUT
else
echo "updated=false" >> $GITHUB_OUTPUT
echo "No cassette changes to commit"
fi
- name: Post Set up git token auth
if: steps.setup_git_auth.outcome == 'success'
run: |
git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
git submodule foreach git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
- name: Apply "behaviour change" label and comment on PR
if: ${{ startsWith(github.event_name, 'pull_request') }}
run: |
PR_NUMBER="${{ github.event.pull_request.number }}"
TOKEN="${{ secrets.PAT_REVIEW }}"
REPO="${{ github.repository }}"
if [[ "${{ steps.push_cassettes.outputs.updated }}" == "true" ]]; then
echo "Adding label and comment..."
echo $TOKEN | gh auth login --with-token
gh issue edit $PR_NUMBER --add-label "behaviour change"
gh issue comment $PR_NUMBER --body "You changed AutoGPT's behaviour on ${{ runner.os }}. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged."
fi
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4


@@ -5,7 +5,7 @@ on:
push:
branches: [ master, development, release-* ]
paths-ignore:
- 'autogpt/tests/vcr_cassettes'
- 'forge/tests/vcr_cassettes'
- 'benchmark/reports/**'
# So that the `dirtyLabel` is removed if conflicts are resolved
# We recommend `pull_request_target` so that github secrets are available.


@@ -9,7 +9,7 @@ on:
- 'forge/**'
- 'benchmark/**'
- '**.py'
- '!autogpt/tests/vcr_cassettes'
- '!forge/tests/vcr_cassettes'
pull_request:
branches: [ master, development, release-* ]
paths:
@@ -18,7 +18,7 @@ on:
- 'forge/**'
- 'benchmark/**'
- '**.py'
- '!autogpt/tests/vcr_cassettes'
- '!forge/tests/vcr_cassettes'
concurrency:
group: ${{ format('lint-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}

.github/workflows/workflow-checker.yml (new file)

@@ -0,0 +1,31 @@
name: PR Status Checker
on:
pull_request:
types: [opened, synchronize, reopened]
jobs:
status-check:
name: Check PR Status
runs-on: ubuntu-latest
steps:
# - name: Wait some time for all actions to start
# run: sleep 30
- uses: actions/checkout@v4
# with:
# fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install requests
- name: Check PR Status
run: |
echo "Current directory before running Python script:"
pwd
echo "Attempting to run Python script:"
python check_actions_status.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@@ -32,7 +32,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -162,7 +161,7 @@ agbenchmark/reports/
# Nodejs
package-lock.json
package.json
# Allow for locally private items
# private
@@ -170,3 +169,5 @@ pri*
# ignore
ig*
.github_access_token
LICENSE.rtf
rnd/autogpt_server/settings.py

.gitmodules

@@ -1,3 +1,6 @@
[submodule "autogpt/tests/vcr_cassettes"]
path = autogpt/tests/vcr_cassettes
[submodule "forge/tests/vcr_cassettes"]
path = forge/tests/vcr_cassettes
url = https://github.com/Significant-Gravitas/Auto-GPT-test-cassettes
[submodule "rnd/supabase"]
path = rnd/supabase
url = https://github.com/supabase/supabase.git

View File

@@ -97,7 +97,7 @@ repos:
alias: pyright-benchmark
entry: poetry -C benchmark run pyright
args: [-p, benchmark, benchmark]
files: ^benchmark/(agbenchmark|tests)/
files: ^benchmark/(agbenchmark/|tests/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false

.vscode/all-projects.code-workspace (new file)

@@ -0,0 +1,61 @@
{
"folders": [
{
"name": "autogpt",
"path": "../autogpt"
},
{
"name": "benchmark",
"path": "../benchmark"
},
{
"name": "docs",
"path": "../docs"
},
{
"name": "forge",
"path": "../forge"
},
{
"name": "frontend",
"path": "../frontend"
},
{
"name": "autogpt_server",
"path": "../rnd/autogpt_server"
},
{
"name": "autogpt_builder",
"path": "../rnd/autogpt_builder"
},
{
"name": "market",
"path": "../rnd/market"
},
{
"name": "lib",
"path": "../rnd/autogpt_libs"
},
{
"name": "infra",
"path": "../rnd/infra"
},
{
"name": "[root]",
"path": ".."
}
],
"settings": {
"python.analysis.typeCheckingMode": "basic"
},
"extensions": {
"recommendations": [
"charliermarsh.ruff",
"dart-code.flutter",
"ms-python.black-formatter",
"ms-python.vscode-pylance",
"prisma.prisma",
"qwtel.sqlite-viewer"
]
}
}


@@ -1,34 +1,38 @@
# AutoGPT Contribution Guide
If you are reading this, you are probably looking for our **[contribution guide]**,
which is part of our [knowledge base].
If you are reading this, you are probably looking for the full **[contribution guide]**,
which is part of our [wiki].
You can find our immediate priorities and their progress on our public [kanban board].
Also check out our [🚀 Roadmap][roadmap] for information about our priorities and associated tasks.
<!-- You can find our immediate priorities and their progress on our public [kanban board]. -->
[contribution guide]: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing
[knowledge base]: https://github.com/Significant-Gravitas/Nexus/wiki
[contribution guide]: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing
[wiki]: https://github.com/Significant-Gravitas/AutoGPT/wiki
[roadmap]: https://github.com/Significant-Gravitas/AutoGPT/discussions/6971
[kanban board]: https://github.com/orgs/Significant-Gravitas/projects/1
## In short
1. Avoid duplicate work, issues, PRs etc.
2. We encourage you to collaborate with fellow community members on some of our bigger
[todo's][kanban board]!
[todo's][roadmap]!
* We highly recommend to post your idea and discuss it in the [dev channel].
4. Create a draft PR when starting work on bigger changes.
3. Please also consider contributing something other than code; see the
[contribution guide] for options.
3. Create a draft PR when starting work on bigger changes.
4. Adhere to the [Code Guidelines]
5. Clearly explain your changes when submitting a PR.
6. Don't submit stuff that's broken.
6. Don't submit broken code: test/validate your changes.
7. Avoid making unnecessary changes, especially if they're purely based on your personal
preferences. Doing so is the maintainers' job. ;-)
8. Please also consider contributing something other than code; see the
[contribution guide] for options.
[dev channel]: https://discord.com/channels/1092243196446249134/1095817829405704305
[code guidelines]: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing#code-guidelines
If you wish to get involved with the project (beyond just contributing PRs), please read the
wiki [catalyzing](https://github.com/Significant-Gravitas/Nexus/wiki/Catalyzing) page.
wiki page about [Catalyzing](https://github.com/Significant-Gravitas/AutoGPT/wiki/Catalyzing).
In fact, why not just look through the whole wiki (it's only a few pages) and
hop on our Discord. See you there! :-)
❤️ & 🔆
The team @ AutoGPT
❤️ & 🔆
The team @ AutoGPT
https://discord.gg/autogpt


@@ -2,11 +2,11 @@
> For the complete getting started [tutorial series](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec) <- click here
Welcome to the Quickstart Guide! This guide will walk you through the process of setting up and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the necessary steps to jumpstart your journey in the world of AI development with AutoGPT.
Welcome to the Quickstart Guide! This guide will walk you through setting up, building, and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the steps to jumpstart your journey in AI development with AutoGPT.
## System Requirements
This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux (WSL). If you are using a Windows system, you will need to install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux (WSL). If you use a Windows system, you must install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
## Getting Setup
@@ -18,11 +18,11 @@ This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux
- In the top-right corner of the page, click Fork.
![Create Fork UI](docs/content/imgs/quickstart/002_fork.png)
- On the next page, select your GitHub account to create the fork under.
- On the next page, select your GitHub account to create the fork.
- Wait for the forking process to complete. You now have a copy of the repository in your GitHub account.
2. **Clone the Repository**
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, you can download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
- Open your terminal.
- Navigate to the directory where you want to clone the repository.
- Run the git clone command for the fork you just created
@@ -34,11 +34,11 @@ This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux
![Open the Project in your IDE](docs/content/imgs/quickstart/004_ide.png)
4. **Setup the Project**
Next we need to setup the required dependencies. We have a tool for helping you do all the tasks you need to on the repo.
Next, we need to set up the required dependencies. We have a tool to help you perform all the tasks on the repo.
It can be accessed by running the `run` command by typing `./run` in the terminal.
The first command you need to use is `./run setup` This will guide you through the process of setting up your system.
Initially you will get instructions for installing flutter, chrome and setting up your github access token like the following image:
The first command you need to use is `./run setup`. This will guide you through setting up your system.
Initially, you will get instructions for installing Flutter and Chrome and setting up your GitHub access token like the following image:
![Setup the Project](docs/content/imgs/quickstart/005_setup.png)
@@ -47,7 +47,7 @@ This project supports Linux (Debian based), Mac, and Windows Subsystem for Linux
If you're a Windows user and experience issues after installing WSL, follow the steps below to resolve them.
#### Update WSL
Run the following command in Powershell or Command Prompt to:
Run the following command in Powershell or Command Prompt:
1. Enable the optional WSL and Virtual Machine Platform components.
2. Download and install the latest Linux kernel.
3. Set WSL 2 as the default.
@@ -73,7 +73,7 @@ dos2unix ./run
After executing the above commands, running `./run setup` should work successfully.
#### Store Project Files within the WSL File System
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids issues related to path translations and permissions and provides a more consistent development environment.
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids path translations and permissions issues and provides a more consistent development environment.
You can keep running the command to get feedback on where you are up to with your setup.
When setup has been completed, the command will return an output like this:
@@ -83,7 +83,7 @@ When setup has been completed, the command will return an output like this:
## Creating Your Agent
After completing the setup, the next step is to create your agent template.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with a name of your choosing.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with your chosen name.
Tips for naming your agent:
* Give it its own unique name, or name it after yourself
@@ -101,21 +101,21 @@ This starts the agent on the URL: `http://localhost:8000/`
![Start the Agent](docs/content/imgs/quickstart/009_start_agent.png)
The frontend can be accessed from `http://localhost:8000/`, you will first need to login using either a google account or your github account.
The front end can be accessed from `http://localhost:8000/`; first, you must log in using either a Google account or your GitHub account.
![Login](docs/content/imgs/quickstart/010_login.png)
Upon logging in you will get a page that looks something like this. With your task history down the left hand side of the page and the 'chat' window to send tasks to your agent.
Upon logging in, you will get a page that looks something like this: your task history down the left-hand side of the page, and the 'chat' window to send tasks to your agent.
![Login](docs/content/imgs/quickstart/011_home.png)
When you have finished with your agent, or if you just need to restart it, use Ctrl-C to end the session then you can re-run the start command.
When you have finished with your agent or just need to restart it, use Ctrl-C to end the session. Then, you can re-run the start command.
If you are having issues and want to ensure the agent has been stopped there is a `./run agent stop` command which will kill the process using port 8000, which should be the agent.
If you are having issues and want to ensure the agent has been stopped, there is a `./run agent stop` command, which will kill the process using port 8000, which should be the agent.
## Benchmarking your Agent
The benchmarking system can also be accessed using the cli too:
The benchmarking system can also be accessed using the CLI too:
```bash
agpt % ./run benchmark
@@ -163,7 +163,7 @@ The benchmark has been split into different categories of skills you can test yo
![Login](docs/content/imgs/quickstart/012_tests.png)
Finally you can run the benchmark with
Finally, you can run the benchmark with
```bash
./run benchmark start YOUR_AGENT_NAME


@@ -1,10 +1,44 @@
# AutoGPT: build & use AI agents
# AutoGPT: Build & Use AI Agents
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**AutoGPT** is the vision of the power of AI accessible to everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters:
**AutoGPT** is a powerful tool that lets you create and run intelligent agents. These agents can perform various tasks automatically, making your life easier.
## How to Get Started
https://github.com/user-attachments/assets/8508f4dc-b362-4cab-900f-644964a96cdf
### 🧱 AutoGPT Builder
The AutoGPT Builder is the frontend. It allows you to design agents using an easy flowchart style. You build your agent by connecting blocks, where each block performs a single action. It's simple and intuitive!
[Read this guide](https://docs.agpt.co/server/new_blocks/) to learn how to build your own custom blocks.
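As a rough illustration of the block concept, a custom block is a small Python class that declares typed inputs and outputs and yields its results. This is a hypothetical sketch; the module paths and base-class names are assumptions based on the linked guide, not verbatim from it:

```python
# Hypothetical sketch of a custom block; see the guide above for the real interface.
from backend.data.block import Block, BlockOutput, BlockSchema


class WordCountBlock(Block):
    class Input(BlockSchema):
        text: str  # the text to analyze

    class Output(BlockSchema):
        word_count: int  # number of whitespace-separated words

    def __init__(self):
        super().__init__(
            id="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder UUID
            input_schema=WordCountBlock.Input,
            output_schema=WordCountBlock.Output,
        )

    def run(self, input_data: Input) -> BlockOutput:
        # Each yielded (name, value) pair becomes an output pin on the block
        yield "word_count", len(input_data.text.split())
```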
### 💽 AutoGPT Server
The AutoGPT Server is the backend. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously.
### 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Reddit Marketing Agent**
- This agent reads comments on Reddit.
- It looks for people asking about your product.
- It then automatically responds to them.
2. **YouTube Content Repurposing Agent**
- This agent subscribes to your YouTube channel.
- When you post a new video, it transcribes it.
- It uses AI to write a search engine optimized blog post.
- Then, it publishes this blog post to your Medium account.
These examples show just a glimpse of what you can achieve with AutoGPT!
---
Our mission is to provide the tools, so that you can focus on what matters:
- 🏗️ **Building** - Lay the foundation for something amazing.
- 🧪 **Testing** - Fine-tune your agent to perfection.
@@ -15,11 +49,13 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
**📖 [Documentation](https://docs.agpt.co)**
&ensp;|&ensp;
**🚀 [Contributing](CONTRIBUTING.md)**
&ensp;|&ensp;
**🛠️ [Build your own Agent - Quickstart](QUICKSTART.md)**
## 🧱 Building blocks
---
## 🤖 AutoGPT Classic
> Below is information about the classic version of AutoGPT.
**🛠️ [Build your own Agent - Quickstart](FORGE-QUICKSTART.md)**
### 🏗️ Forge
**Forge your own agent!** &ndash; Forge is a ready-to-go template for your agent application. All the boilerplate code is already handled, letting you channel all your creativity into the things that set *your* agent apart. All tutorials are located [here](https://medium.com/@aiedge/autogpt-forge-e3de53cc58ec). Components from the [`forge.sdk`](/forge/forge/sdk) can also be used individually to speed up development and reduce boilerplate in your agent project.

assets/gpt_dark_RGB.png (new binary file, 49 KiB; not shown)


@@ -11,12 +11,15 @@
## GROQ_API_KEY - Groq API Key (Example: gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
# GROQ_API_KEY=
## LLAMAFILE_API_BASE - Llamafile API base URL
# LLAMAFILE_API_BASE=http://localhost:8080/v1
## TELEMETRY_OPT_IN - Share telemetry on errors and other issues with the AutoGPT team, e.g. through Sentry.
## This helps us to spot and solve problems earlier & faster. (Default: DISABLED)
# TELEMETRY_OPT_IN=true
## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
# EXECUTE_LOCAL_COMMANDS=False
## COMPONENT_CONFIG_FILE - Path to the json config file (Default: None)
# COMPONENT_CONFIG_FILE=
### Workspace ###
@@ -44,9 +47,6 @@
### Miscellaneous ###
## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
## AUTHORISE_COMMAND_KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
@@ -96,59 +96,21 @@
## EMBEDDING_MODEL - Model to use for creating embeddings
# EMBEDDING_MODEL=text-embedding-3-small
################################################################################
### SHELL EXECUTION
################################################################################
## SHELL_COMMAND_CONTROL - Whether to use "allowlist" or "denylist" to determine what shell commands can be executed (Default: denylist)
# SHELL_COMMAND_CONTROL=denylist
## ONLY if SHELL_COMMAND_CONTROL is set to denylist:
## SHELL_DENYLIST - List of shell commands that ARE NOT allowed to be executed by AutoGPT (Default: sudo,su)
# SHELL_DENYLIST=sudo,su
## ONLY if SHELL_COMMAND_CONTROL is set to allowlist:
## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
# SHELL_ALLOWLIST=
################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
### Common
## IMAGE_PROVIDER - Image provider (Default: dalle)
# IMAGE_PROVIDER=dalle
## IMAGE_SIZE - Image size (Default: 256)
# IMAGE_SIZE=256
### Huggingface (IMAGE_PROVIDER=huggingface)
## HUGGINGFACE_IMAGE_MODEL - Text-to-image model from Huggingface (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4
## HUGGINGFACE_API_TOKEN - HuggingFace API token (Default: None)
# HUGGINGFACE_API_TOKEN=
### Stable Diffusion (IMAGE_PROVIDER=sdwebui)
## SD_WEBUI_AUTH - Stable Diffusion Web UI username:password pair (Default: None)
# SD_WEBUI_AUTH=
## SD_WEBUI_URL - Stable Diffusion Web UI API URL (Default: http://localhost:7860)
# SD_WEBUI_URL=http://localhost:7860
################################################################################
### AUDIO TO TEXT PROVIDER
################################################################################
## AUDIO_TO_TEXT_PROVIDER - Audio-to-text provider (Default: huggingface)
# AUDIO_TO_TEXT_PROVIDER=huggingface
## HUGGINGFACE_AUDIO_TO_TEXT_MODEL - The model for HuggingFace to use (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_AUDIO_TO_TEXT_MODEL=CompVis/stable-diffusion-v1-4
################################################################################
### GITHUB
################################################################################
@@ -163,18 +125,6 @@
### WEB BROWSING
################################################################################
## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
# HEADLESS_BROWSER=True
## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome)
# USE_WEB_BROWSER=chrome
## BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunks to summarize (Default: 3000)
# BROWSE_CHUNK_MAX_LENGTH=3000
## BROWSE_SPACY_LANGUAGE_MODEL - [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. (Default: en_core_web_sm)
# BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm
## GOOGLE_API_KEY - Google API key (Default: None)
# GOOGLE_API_KEY=
@@ -198,13 +148,6 @@
## ELEVENLABS_VOICE_ID - Eleven Labs voice ID (Example: None)
# ELEVENLABS_VOICE_ID=
################################################################################
### CHAT MESSAGES
################################################################################
## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False
################################################################################
### LOGGING
################################################################################


@@ -1,5 +0,0 @@
# Exclude VCR cassettes from stats
tests/vcr_cassettes/**/**.y*ml linguist-generated
# Mark documentation as such
docs/**.md linguist-documentation

autogpt/.vscode/settings.json (new file)

@@ -0,0 +1,3 @@
{
"python.analysis.typeCheckingMode": "basic",
}


@@ -2,7 +2,7 @@
# --------------
🌎 *Official Website*: https://agpt.co.
📖 *User Guide*: https://docs.agpt.co/autogpt.
👩 *Contributors Wiki*: https://github.com/Significant-Gravitas/Nexus/wiki/Contributing.
👩 *Contributors Wiki*: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing.
# v0.5.0 RELEASE HIGHLIGHTS! 🚀🚀
# -------------------------------


@@ -68,10 +68,6 @@ Options:
continuous mode
--speak Enable Speak Mode
--debug Enable Debug Mode
-b, --browser-name TEXT Specifies which web-browser to use when
using selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--skip-news Specifies whether to suppress the output of
latest news on startup.
--install-plugin-deps Installs external dependencies for 3rd party
@@ -90,6 +86,7 @@ Options:
--override-directives If specified, --constraint, --resource and
--best-practice will override the AI's
directives instead of being appended to them
--component-config-file TEXT Path to the json configuration file.
--help Show this message and exit.
```
</details>
@@ -111,10 +108,6 @@ Usage: python -m autogpt serve [OPTIONS]
Options:
--debug Enable Debug Mode
-b, --browser-name TEXT Specifies which web-browser to use when using
selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--install-plugin-deps Installs external dependencies for 3rd party
plugins.
--help Show this message and exit.
@@ -132,7 +125,7 @@ For more comprehensive instructions, see the [user guide][docs/usage].
[docs/plugins]: https://docs.agpt.co/autogpt/plugins
## 📚 Resources
* 📔 AutoGPT [team wiki](https://github.com/Significant-Gravitas/Nexus/wiki)
* 📔 AutoGPT [project wiki](https://github.com/Significant-Gravitas/AutoGPT/wiki)
* 🧮 AutoGPT [project kanban](https://github.com/orgs/Significant-Gravitas/projects/1)
* 🌃 AutoGPT [roadmap](https://github.com/orgs/Significant-Gravitas/projects/2)


@@ -2,17 +2,17 @@ from typing import Optional
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.file_storage.base import FileStorage
from forge.llm.providers import MultiProvider
from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.config import AppConfig
def create_agent(
agent_id: str,
task: str,
app_config: Config,
app_config: AppConfig,
file_storage: FileStorage,
llm_provider: MultiProvider,
ai_profile: Optional[AIProfile] = None,
@@ -38,7 +38,7 @@ def create_agent(
def configure_agent_with_state(
state: AgentSettings,
app_config: Config,
app_config: AppConfig,
file_storage: FileStorage,
llm_provider: MultiProvider,
) -> Agent:
@@ -51,7 +51,7 @@ def configure_agent_with_state(
def _configure_agent(
app_config: Config,
app_config: AppConfig,
llm_provider: MultiProvider,
file_storage: FileStorage,
agent_id: str = "",
@@ -80,7 +80,7 @@ def _configure_agent(
settings=agent_state,
llm_provider=llm_provider,
file_storage=file_storage,
legacy_config=app_config,
app_config=app_config,
)
@@ -89,7 +89,7 @@ def create_agent_state(
task: str,
ai_profile: AIProfile,
directives: AIDirectives,
app_config: Config,
app_config: AppConfig,
) -> AgentSettings:
return AgentSettings(
agent_id=agent_id,
@@ -104,5 +104,5 @@ def create_agent_state(
allow_fs_access=not app_config.restrict_to_workspace,
use_functions_api=app_config.openai_functions,
),
history=Agent.default_settings.history.copy(deep=True),
history=Agent.default_settings.history.model_copy(deep=True),
)
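A recurring change across these files is the migration from Pydantic v1 method names to their v2 equivalents. A minimal sketch of the mapping (hypothetical `Settings` model; the method names are the real Pydantic v2 API):

```python
from pydantic import BaseModel


class Settings(BaseModel):
    name: str = "agent"


s = Settings()
copy = s.model_copy(deep=True)                   # v1: s.copy(deep=True)
data = s.model_dump()                            # v1: s.dict()
text = s.model_dump_json()                       # v1: s.json()
schema = Settings.model_json_schema()            # v1: Settings.schema()
parsed = Settings.model_validate({"name": "x"})  # v1: Settings.parse_obj(...)
```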


@@ -6,7 +6,7 @@ from forge.file_storage.base import FileStorage
if TYPE_CHECKING:
from autogpt.agents.agent import Agent
from forge.config.config import Config
from autogpt.app.config import AppConfig
from forge.llm.providers import MultiProvider
from .configurators import _configure_agent
@@ -16,7 +16,7 @@ from .profile_generator import generate_agent_profile_for_task
async def generate_agent_for_task(
agent_id: str,
task: str,
app_config: Config,
app_config: AppConfig,
file_storage: FileStorage,
llm_provider: MultiProvider,
) -> Agent:


@@ -3,7 +3,6 @@ import logging
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.llm.prompting import ChatPrompt, LanguageModelClassification, PromptStrategy
from forge.llm.providers import MultiProvider
from forge.llm.providers.schema import (
@@ -14,11 +13,13 @@ from forge.llm.providers.schema import (
from forge.models.config import SystemConfiguration, UserConfigurable
from forge.models.json_schema import JSONSchema
from autogpt.app.config import AppConfig
logger = logging.getLogger(__name__)
class AgentProfileGeneratorConfiguration(SystemConfiguration):
model_classification: LanguageModelClassification = UserConfigurable(
llm_classification: LanguageModelClassification = UserConfigurable(
default=LanguageModelClassification.SMART_MODEL
)
_example_call: object = {
@@ -136,7 +137,7 @@ class AgentProfileGeneratorConfiguration(SystemConfiguration):
required=True,
),
},
).dict()
).model_dump()
)
@@ -147,21 +148,21 @@ class AgentProfileGenerator(PromptStrategy):
def __init__(
self,
model_classification: LanguageModelClassification,
llm_classification: LanguageModelClassification,
system_prompt: str,
user_prompt_template: str,
create_agent_function: dict,
):
self._model_classification = model_classification
self._llm_classification = llm_classification
self._system_prompt_message = system_prompt
self._user_prompt_template = user_prompt_template
self._create_agent_function = CompletionModelFunction.parse_obj(
self._create_agent_function = CompletionModelFunction.model_validate(
create_agent_function
)
@property
def model_classification(self) -> LanguageModelClassification:
return self._model_classification
def llm_classification(self) -> LanguageModelClassification:
return self._llm_classification
def build_prompt(self, user_objective: str = "", **kwargs) -> ChatPrompt:
system_message = ChatMessage.system(self._system_prompt_message)
@@ -212,7 +213,7 @@ class AgentProfileGenerator(PromptStrategy):
async def generate_agent_profile_for_task(
task: str,
app_config: Config,
app_config: AppConfig,
llm_provider: MultiProvider,
) -> tuple[AIProfile, AIDirectives]:
"""Generates an AIConfig object from the given string.
@@ -221,7 +222,7 @@ async def generate_agent_profile_for_task(
AIConfig: The AIConfig object tailored to the user's input
"""
agent_profile_generator = AgentProfileGenerator(
**AgentProfileGenerator.default_configuration.dict() # HACK
**AgentProfileGenerator.default_configuration.model_dump() # HACK
)
prompt = agent_profile_generator.build_prompt(task)


@@ -26,10 +26,10 @@ class MyAgent(Agent):
settings: AgentSettings,
llm_provider: MultiProvider
file_storage: FileStorage,
legacy_config: Config,
app_config: AppConfig,
):
# Call the parent constructor to bring in the default components
super().__init__(settings, llm_provider, file_storage, legacy_config)
super().__init__(settings, llm_provider, file_storage, app_config)
# Add your custom component
self.my_component = MyComponent()
```


@@ -18,7 +18,11 @@ from forge.components.action_history import (
ActionHistoryComponent,
EpisodicActionHistory,
)
from forge.components.code_executor.code_executor import CodeExecutorComponent
from forge.components.action_history.action_history import ActionHistoryConfiguration
from forge.components.code_executor.code_executor import (
CodeExecutorComponent,
CodeExecutorConfiguration,
)
from forge.components.context.context import AgentContext, ContextComponent
from forge.components.file_manager import FileManagerComponent
from forge.components.git_operations import GitOperationsComponent
@@ -58,7 +62,7 @@ from .prompt_strategies.one_shot import (
)
if TYPE_CHECKING:
from forge.config.config import Config
from autogpt.app.config import AppConfig
logger = logging.getLogger(__name__)
@@ -91,12 +95,14 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
settings: AgentSettings,
llm_provider: MultiProvider,
file_storage: FileStorage,
legacy_config: Config,
app_config: AppConfig,
):
super().__init__(settings)
self.llm_provider = llm_provider
prompt_config = OneShotAgentPromptStrategy.default_configuration.copy(deep=True)
prompt_config = OneShotAgentPromptStrategy.default_configuration.model_copy(
deep=True
)
prompt_config.use_functions_api = (
settings.config.use_functions_api
# Anthropic currently doesn't support tools + prefilling :(
@@ -107,33 +113,41 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
# Components
self.system = SystemComponent()
self.history = ActionHistoryComponent(
settings.history,
self.send_token_limit,
lambda x: self.llm_provider.count_tokens(x, self.llm.name),
legacy_config,
llm_provider,
).run_after(WatchdogComponent)
self.user_interaction = UserInteractionComponent(legacy_config)
self.file_manager = FileManagerComponent(settings, file_storage)
self.history = (
ActionHistoryComponent(
settings.history,
lambda x: self.llm_provider.count_tokens(x, self.llm.name),
llm_provider,
ActionHistoryConfiguration(
llm_name=app_config.fast_llm, max_tokens=self.send_token_limit
),
)
.run_after(WatchdogComponent)
.run_after(SystemComponent)
)
if not app_config.noninteractive_mode:
self.user_interaction = UserInteractionComponent()
self.file_manager = FileManagerComponent(file_storage, settings)
self.code_executor = CodeExecutorComponent(
self.file_manager.workspace,
settings,
legacy_config,
CodeExecutorConfiguration(
docker_container_name=f"{settings.agent_id}_sandbox"
),
)
self.git_ops = GitOperationsComponent(legacy_config)
self.image_gen = ImageGeneratorComponent(
self.file_manager.workspace, legacy_config
self.git_ops = GitOperationsComponent()
self.image_gen = ImageGeneratorComponent(self.file_manager.workspace)
self.web_search = WebSearchComponent()
self.web_selenium = WebSeleniumComponent(
llm_provider,
app_config.app_data_dir,
)
self.web_search = WebSearchComponent(legacy_config)
self.web_selenium = WebSeleniumComponent(legacy_config, llm_provider, self.llm)
self.context = ContextComponent(self.file_manager.workspace, settings.context)
self.watchdog = WatchdogComponent(settings.config, settings.history).run_after(
ContextComponent
)
self.event_history = settings.history
self.legacy_config = legacy_config
self.app_config = app_config
async def propose_action(self) -> OneShotAgentActionProposal:
"""Proposes the next action to execute, based on the task and current state.
@@ -148,7 +162,7 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
constraints = await self.run_pipeline(DirectiveProvider.get_constraints)
best_practices = await self.run_pipeline(DirectiveProvider.get_best_practices)
directives = self.state.directives.copy(deep=True)
directives = self.state.directives.model_copy(deep=True)
directives.resources += resources
directives.constraints += constraints
directives.best_practices += best_practices
@@ -160,13 +174,19 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
# Get messages
messages = await self.run_pipeline(MessageProvider.get_messages)
include_os_info = (
self.code_executor.config.execute_local_commands
if hasattr(self, "code_executor")
else False
)
prompt: ChatPrompt = self.prompt_strategy.build_prompt(
messages=messages,
task=self.state.task,
ai_profile=self.state.ai_profile,
ai_directives=directives,
commands=function_specs_from_commands(self.commands),
include_os_info=self.legacy_config.execute_local_commands,
include_os_info=include_os_info,
)
logger.debug(f"Executing prompt:\n{dump_prompt(prompt)}")
@@ -277,7 +297,7 @@ class Agent(BaseAgent[OneShotAgentActionProposal], Configurable[AgentSettings]):
command
for command in self.commands
if not any(
name in self.legacy_config.disabled_commands for name in command.names
name in self.app_config.disabled_commands for name in command.names
)
]


@@ -28,15 +28,13 @@ _RESPONSE_INTERFACE_NAME = "AssistantResponse"
class AssistantThoughts(ModelWithSummary):
observations: str = Field(
..., description="Relevant observations from your last action (if any)"
description="Relevant observations from your last action (if any)"
)
text: str = Field(..., description="Thoughts")
reasoning: str = Field(..., description="Reasoning behind the thoughts")
self_criticism: str = Field(..., description="Constructive self-criticism")
plan: list[str] = Field(
..., description="Short list that conveys the long-term plan"
)
speak: str = Field(..., description="Summary of thoughts, to say to user")
text: str = Field(description="Thoughts")
reasoning: str = Field(description="Reasoning behind the thoughts")
self_criticism: str = Field(description="Constructive self-criticism")
plan: list[str] = Field(description="Short list that conveys the long-term plan")
speak: str = Field(description="Summary of thoughts, to say to user")
def summary(self) -> str:
return self.text
@@ -96,11 +94,13 @@ class OneShotAgentPromptStrategy(PromptStrategy):
logger: Logger,
):
self.config = configuration
self.response_schema = JSONSchema.from_dict(OneShotAgentActionProposal.schema())
self.response_schema = JSONSchema.from_dict(
OneShotAgentActionProposal.model_json_schema()
)
self.logger = logger
@property
def model_classification(self) -> LanguageModelClassification:
def llm_classification(self) -> LanguageModelClassification:
return LanguageModelClassification.FAST_MODEL # FIXME: dynamic switching
def build_prompt(
@@ -182,7 +182,7 @@ class OneShotAgentPromptStrategy(PromptStrategy):
)
def response_format_instruction(self, use_functions_api: bool) -> tuple[str, str]:
response_schema = self.response_schema.copy(deep=True)
response_schema = self.response_schema.model_copy(deep=True)
assert response_schema.properties
if use_functions_api and "use_tool" in response_schema.properties:
del response_schema.properties["use_tool"]
@@ -274,5 +274,8 @@ class OneShotAgentPromptStrategy(PromptStrategy):
raise InvalidAgentResponseError("Assistant did not use a tool")
assistant_reply_dict["use_tool"] = response.tool_calls[0].function
parsed_response = OneShotAgentActionProposal.parse_obj(assistant_reply_dict)
parsed_response = OneShotAgentActionProposal.model_validate(
assistant_reply_dict
)
parsed_response.raw_message = response.copy()
return parsed_response


@@ -23,7 +23,6 @@ from forge.agent_protocol.models import (
TaskRequestBody,
TaskStepsListResponse,
)
from forge.config.config import Config
from forge.file_storage import FileStorage
from forge.llm.providers import ModelProviderBudget, MultiProvider
from forge.models.action import ActionErrorResult, ActionSuccessResult
@@ -35,6 +34,7 @@ from sentry_sdk import set_user
from autogpt.agent_factory.configurators import configure_agent_with_state, create_agent
from autogpt.agents.agent_manager import AgentManager
from autogpt.app.config import AppConfig
from autogpt.app.utils import is_port_free
logger = logging.getLogger(__name__)
@@ -45,7 +45,7 @@ class AgentProtocolServer:
def __init__(
self,
app_config: Config,
app_config: AppConfig,
database: AgentDB,
file_storage: FileStorage,
llm_provider: MultiProvider,
@@ -314,7 +314,7 @@ class AgentProtocolServer:
""
if tool_result is None
else (
orjson.loads(tool_result.json())
orjson.loads(tool_result.model_dump_json())
if not isinstance(tool_result, ActionErrorResult)
else {
"error": str(tool_result.error),
@@ -327,7 +327,7 @@ class AgentProtocolServer:
if last_proposal and tool_result
else {}
),
**assistant_response.dict(),
**assistant_response.model_dump(),
}
task_cumulative_cost = agent.llm_provider.get_incurred_cost()
@@ -451,7 +451,9 @@ class AgentProtocolServer:
"""
task_llm_budget = self._task_budgets[task.task_id]
task_llm_provider_config = self.llm_provider._configuration.copy(deep=True)
task_llm_provider_config = self.llm_provider._configuration.model_copy(
deep=True
)
_extra_request_headers = task_llm_provider_config.extra_request_headers
_extra_request_headers["AP-TaskID"] = task.task_id
if step_id:
@@ -459,7 +461,7 @@ class AgentProtocolServer:
if task.additional_input and (user_id := task.additional_input.get("user_id")):
_extra_request_headers["AutoGPT-UserID"] = user_id
settings = self.llm_provider._settings.copy()
settings = self.llm_provider._settings.model_copy()
settings.budget = task_llm_budget
settings.configuration = task_llm_provider_config
task_llm_provider = self.llm_provider.__class__(


@@ -28,24 +28,6 @@ def cli(ctx: click.Context):
help="Defines the number of times to run in continuous mode",
)
@click.option("--speak", is_flag=True, help="Enable Speak Mode")
@click.option(
"-b",
"--browser-name",
help="Specifies which web-browser to use when using selenium to scrape the web.",
)
@click.option(
"--allow-downloads",
is_flag=True,
help="Dangerous: Allows AutoGPT to download files natively.",
)
@click.option(
# TODO: this is a hidden option for now, necessary for integration testing.
# We should make this public once we're ready to roll out agent specific workspaces.
"--workspace-directory",
"-w",
type=click.Path(file_okay=False),
hidden=True,
)
@click.option(
"--install-plugin-deps",
is_flag=True,
@@ -128,13 +110,15 @@ def cli(ctx: click.Context):
),
type=click.Choice([i.value for i in LogFormatName]),
)
@click.option(
"--component-config-file",
help="Path to a json configuration file",
type=click.Path(exists=True, dir_okay=False, resolve_path=True, path_type=Path),
)
def run(
continuous: bool,
continuous_limit: Optional[int],
speak: bool,
browser_name: Optional[str],
allow_downloads: bool,
workspace_directory: Optional[Path],
install_plugin_deps: bool,
skip_news: bool,
skip_reprompt: bool,
@@ -148,6 +132,7 @@ def run(
log_level: Optional[str],
log_format: Optional[str],
log_file_format: Optional[str],
component_config_file: Optional[Path],
) -> None:
"""
Sets up and runs an agent, based on the task specified by the user, or resumes an
@@ -165,10 +150,7 @@ def run(
log_level=log_level,
log_format=log_format,
log_file_format=log_file_format,
browser_name=browser_name,
allow_downloads=allow_downloads,
skip_news=skip_news,
workspace_directory=workspace_directory,
install_plugin_deps=install_plugin_deps,
override_ai_name=ai_name,
override_ai_role=ai_role,
@@ -176,20 +158,11 @@ def run(
constraints=list(constraint),
best_practices=list(best_practice),
override_directives=override_directives,
component_config_file=component_config_file,
)
@cli.command()
@click.option(
"-b",
"--browser-name",
help="Specifies which web-browser to use when using selenium to scrape the web.",
)
@click.option(
"--allow-downloads",
is_flag=True,
help="Dangerous: Allows AutoGPT to download files natively.",
)
@click.option(
"--install-plugin-deps",
is_flag=True,
@@ -217,8 +190,6 @@ def run(
type=click.Choice([i.value for i in LogFormatName]),
)
def serve(
browser_name: Optional[str],
allow_downloads: bool,
install_plugin_deps: bool,
debug: bool,
log_level: Optional[str],
@@ -237,8 +208,6 @@ def serve(
log_level=log_level,
log_format=log_format,
log_file_format=log_file_format,
browser_name=browser_name,
allow_downloads=allow_downloads,
install_plugin_deps=install_plugin_deps,
)


@@ -0,0 +1,221 @@
"""Configuration class to store the state of bools for different scripts access."""
from __future__ import annotations
import logging
import os
import re
from pathlib import Path
from typing import Optional, Union
import forge
from forge.config.base import BaseConfig
from forge.llm.providers import CHAT_MODELS, ModelName
from forge.llm.providers.openai import OpenAICredentials, OpenAIModelName
from forge.logging.config import LoggingConfig
from forge.models.config import Configurable, UserConfigurable
from pydantic import SecretStr, ValidationInfo, field_validator
logger = logging.getLogger(__name__)
PROJECT_ROOT = Path(forge.__file__).parent.parent
AZURE_CONFIG_FILE = Path("azure.yaml")
GPT_4_MODEL = OpenAIModelName.GPT4
GPT_3_MODEL = OpenAIModelName.GPT3
class AppConfig(BaseConfig):
name: str = "Auto-GPT configuration"
description: str = "Default configuration for the Auto-GPT application."
########################
# Application Settings #
########################
project_root: Path = PROJECT_ROOT
app_data_dir: Path = project_root / "data"
skip_news: bool = False
skip_reprompt: bool = False
authorise_key: str = UserConfigurable(default="y", from_env="AUTHORISE_COMMAND_KEY")
exit_key: str = UserConfigurable(default="n", from_env="EXIT_KEY")
noninteractive_mode: bool = False
logging: LoggingConfig = LoggingConfig()
component_config_file: Optional[Path] = UserConfigurable(
default=None, from_env="COMPONENT_CONFIG_FILE"
)
##########################
# Agent Control Settings #
##########################
# Model configuration
fast_llm: ModelName = UserConfigurable(
default=OpenAIModelName.GPT3,
from_env="FAST_LLM",
)
smart_llm: ModelName = UserConfigurable(
default=OpenAIModelName.GPT4_TURBO,
from_env="SMART_LLM",
)
temperature: float = UserConfigurable(default=0, from_env="TEMPERATURE")
openai_functions: bool = UserConfigurable(
default=False, from_env=lambda: os.getenv("OPENAI_FUNCTIONS", "False") == "True"
)
embedding_model: str = UserConfigurable(
default="text-embedding-3-small", from_env="EMBEDDING_MODEL"
)
# Run loop configuration
continuous_mode: bool = False
continuous_limit: int = 0
############
# Commands #
############
# General
disabled_commands: list[str] = UserConfigurable(
default_factory=list,
from_env=lambda: _safe_split(os.getenv("DISABLED_COMMANDS")),
)
# File ops
restrict_to_workspace: bool = UserConfigurable(
default=True,
from_env=lambda: os.getenv("RESTRICT_TO_WORKSPACE", "True") == "True",
)
###############
# Credentials #
###############
# OpenAI
openai_credentials: Optional[OpenAICredentials] = None
azure_config_file: Optional[Path] = UserConfigurable(
default=AZURE_CONFIG_FILE, from_env="AZURE_CONFIG_FILE"
)
@field_validator("openai_functions")
def validate_openai_functions(cls, value: bool, info: ValidationInfo):
if value:
smart_llm = info.data["smart_llm"]
assert CHAT_MODELS[smart_llm].has_function_call_api, (
f"Model {smart_llm} does not support tool calling. "
"Please disable OPENAI_FUNCTIONS or choose a suitable model."
)
return value
class ConfigBuilder(Configurable[AppConfig]):
default_settings = AppConfig()
@classmethod
def build_config_from_env(cls, project_root: Path = PROJECT_ROOT) -> AppConfig:
"""Initialize the Config class"""
config = cls.build_agent_configuration()
config.project_root = project_root
# Make relative paths absolute
for k in {
"azure_config_file", # TODO: move from project root
}:
setattr(config, k, project_root / getattr(config, k))
if (
config.openai_credentials
and config.openai_credentials.api_type == SecretStr("azure")
and (config_file := config.azure_config_file)
):
config.openai_credentials.load_azure_config(config_file)
return config
async def assert_config_has_required_llm_api_keys(config: AppConfig) -> None:
"""
Check if API keys (if required) are set for the configured SMART_LLM and FAST_LLM.
"""
from forge.llm.providers.anthropic import AnthropicModelName
from forge.llm.providers.groq import GroqModelName
from pydantic import ValidationError
if set((config.smart_llm, config.fast_llm)).intersection(AnthropicModelName):
from forge.llm.providers.anthropic import AnthropicCredentials
try:
credentials = AnthropicCredentials.from_env()
except ValidationError as e:
if "api_key" in str(e):
logger.error(
"Set your Anthropic API key in .env or as an environment variable"
)
logger.info(
"For further instructions: "
"https://docs.agpt.co/autogpt/setup/#anthropic"
)
raise ValueError("Anthropic is unavailable: can't load credentials") from e
key_pattern = r"^sk-ant-api03-[\w\-]{95}"
# If key is set, but it looks invalid
if not re.search(key_pattern, credentials.api_key.get_secret_value()):
logger.warning(
"Possibly invalid Anthropic API key! "
f"Configured Anthropic API key does not match pattern '{key_pattern}'. "
"If this is a valid key, please report this warning to the maintainers."
)
if set((config.smart_llm, config.fast_llm)).intersection(GroqModelName):
from forge.llm.providers.groq import GroqProvider
from groq import AuthenticationError
try:
groq = GroqProvider()
await groq.get_available_models()
except ValidationError as e:
if "api_key" not in str(e):
raise
logger.error("Set your Groq API key in .env or as an environment variable")
logger.info(
"For further instructions: https://docs.agpt.co/autogpt/setup/#groq"
)
raise ValueError("Groq is unavailable: can't load credentials")
except AuthenticationError as e:
logger.error("The Groq API key is invalid!")
logger.info(
"For instructions to get and set a new API key: "
"https://docs.agpt.co/autogpt/setup/#groq"
)
raise ValueError("Groq is unavailable: invalid API key") from e
if set((config.smart_llm, config.fast_llm)).intersection(OpenAIModelName):
from forge.llm.providers.openai import OpenAIProvider
from openai import AuthenticationError
try:
openai = OpenAIProvider()
await openai.get_available_models()
except ValidationError as e:
if "api_key" not in str(e):
raise
logger.error(
"Set your OpenAI API key in .env or as an environment variable"
)
logger.info(
"For further instructions: https://docs.agpt.co/autogpt/setup/#openai"
)
raise ValueError("OpenAI is unavailable: can't load credentials")
except AuthenticationError as e:
logger.error("The OpenAI API key is invalid!")
logger.info(
"For instructions to get and set a new API key: "
"https://docs.agpt.co/autogpt/setup/#openai"
)
raise ValueError("OpenAI is unavailable: invalid API key") from e
def _safe_split(s: Union[str, None], sep: str = ",") -> list[str]:
"""Split a string by a separator. Return an empty list if the string is None."""
if s is None:
return []
return s.split(sep)
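Most of the settings above are read from environment variables via `UserConfigurable(from_env=...)`. A hypothetical sketch of supplying two of them (the variable names are real; the values are illustrative):

```bash
export DISABLED_COMMANDS="execute_shell,web_search"   # parsed by _safe_split into a list
export COMPONENT_CONFIG_FILE="components.json"        # optional path to a component config
```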


@@ -5,20 +5,18 @@ import logging
from typing import Literal, Optional
import click
from colorama import Back, Style
from forge.config.config import GPT_3_MODEL, Config
from forge.llm.providers import ModelName, MultiProvider
from autogpt.app.config import GPT_3_MODEL, AppConfig
logger = logging.getLogger(__name__)
async def apply_overrides_to_config(
config: Config,
config: AppConfig,
continuous: bool = False,
continuous_limit: Optional[int] = None,
skip_reprompt: bool = False,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
skip_news: bool = False,
) -> None:
"""Updates the config object with the given arguments.
@@ -33,8 +31,6 @@ async def apply_overrides_to_config(
log_level (int): The global log level for the application.
log_format (str): The format for the log(s).
log_file_format (str): Override the format for the log file.
browser_name (str): The name of the browser to use for scraping the web.
allow_downloads (bool): Whether to allow AutoGPT to download files natively.
skips_news (bool): Whether to suppress the output of latest news on startup.
"""
config.continuous_mode = False
@@ -55,44 +51,33 @@ async def apply_overrides_to_config(
raise click.UsageError("--continuous-limit can only be used with --continuous")
# Check availability of configured LLMs; fallback to other LLM if unavailable
config.fast_llm = await check_model(config.fast_llm, "fast_llm")
config.smart_llm = await check_model(config.smart_llm, "smart_llm")
config.fast_llm, config.smart_llm = await check_models(
(config.fast_llm, "fast_llm"), (config.smart_llm, "smart_llm")
)
if skip_reprompt:
config.skip_reprompt = True
if browser_name:
config.selenium_web_browser = browser_name
if allow_downloads:
logger.warning(
msg=f"{Back.LIGHTYELLOW_EX}"
"AutoGPT will now be able to download and save files to your machine."
f"{Back.RESET}"
" It is recommended that you monitor any files it downloads carefully.",
)
logger.warning(
msg=f"{Back.RED + Style.BRIGHT}"
"NEVER OPEN FILES YOU AREN'T SURE OF!"
f"{Style.RESET_ALL}",
)
config.allow_downloads = True
if skip_news:
config.skip_news = True
async def check_model(
model_name: ModelName, model_type: Literal["smart_llm", "fast_llm"]
) -> ModelName:
async def check_models(
*models: tuple[ModelName, Literal["smart_llm", "fast_llm"]]
) -> tuple[ModelName, ...]:
"""Check if model is available for use. If not, return gpt-3.5-turbo."""
multi_provider = MultiProvider()
models = await multi_provider.get_available_chat_models()
available_models = await multi_provider.get_available_chat_models()
if any(model_name == m.name for m in models):
return model_name
checked_models: list[ModelName] = []
for model, model_type in models:
if any(model == m.name for m in available_models):
checked_models.append(model)
else:
logger.warning(
f"You don't have access to {model}. "
f"Setting {model_type} to {GPT_3_MODEL}."
)
checked_models.append(GPT_3_MODEL)
logger.warning(
f"You don't have access to {model_name}. Setting {model_type} to {GPT_3_MODEL}."
)
return GPT_3_MODEL
return tuple(checked_models)


@@ -21,7 +21,6 @@ from forge.components.code_executor.code_executor import (
)
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config, ConfigBuilder, assert_config_has_openai_api_key
from forge.file_storage import FileStorageBackendName, get_storage
from forge.llm.providers import MultiProvider
from forge.logging.config import configure_logging
@@ -34,6 +33,11 @@ from forge.utils.exceptions import AgentTerminated, InvalidAgentResponseError
from autogpt.agent_factory.configurators import configure_agent_with_state, create_agent
from autogpt.agents.agent_manager import AgentManager
from autogpt.agents.prompt_strategies.one_shot import AssistantThoughts
from autogpt.app.config import (
AppConfig,
ConfigBuilder,
assert_config_has_required_llm_api_keys,
)
if TYPE_CHECKING:
from autogpt.agents.agent import Agent
@@ -62,10 +66,7 @@ async def run_auto_gpt(
log_level: Optional[str] = None,
log_format: Optional[str] = None,
log_file_format: Optional[str] = None,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
skip_news: bool = False,
workspace_directory: Optional[Path] = None,
install_plugin_deps: bool = False,
override_ai_name: Optional[str] = None,
override_ai_role: Optional[str] = None,
@@ -73,6 +74,7 @@ async def run_auto_gpt(
constraints: Optional[list[str]] = None,
best_practices: Optional[list[str]] = None,
override_directives: bool = False,
component_config_file: Optional[Path] = None,
):
# Set up configuration
config = ConfigBuilder.build_config_from_env()
@@ -98,16 +100,13 @@ async def run_auto_gpt(
tts_config=config.tts_config,
)
# TODO: fill in llm values here
assert_config_has_openai_api_key(config)
await assert_config_has_required_llm_api_keys(config)
await apply_overrides_to_config(
config=config,
continuous=continuous,
continuous_limit=continuous_limit,
skip_reprompt=skip_reprompt,
browser_name=browser_name,
allow_downloads=allow_downloads,
skip_news=skip_news,
)
@@ -132,15 +131,12 @@ async def run_auto_gpt(
print_python_version_info(logger)
print_attribute("Smart LLM", config.smart_llm)
print_attribute("Fast LLM", config.fast_llm)
print_attribute("Browser", config.selenium_web_browser)
if config.continuous_mode:
print_attribute("Continuous Mode", "ENABLED", title_color=Fore.YELLOW)
if continuous_limit:
print_attribute("Continuous Limit", config.continuous_limit)
if config.tts_config.speak_mode:
print_attribute("Speak Mode", "ENABLED")
if config.allow_downloads:
print_attribute("Native Downloading", "ENABLED")
if we_are_running_in_a_docker_container() or is_docker_available():
print_attribute("Code Execution", "ENABLED")
else:
@@ -327,6 +323,14 @@ async def run_auto_gpt(
# )
# ).add_done_callback(update_agent_directives)
# Load component configuration from file
if _config_file := component_config_file or config.component_config_file:
try:
logger.info(f"Loading component configuration from {_config_file}")
agent.load_component_configs(_config_file.read_text())
except Exception as e:
logger.error(f"Could not load component configuration: {e}")
#################
# Run the Agent #
#################
@@ -353,8 +357,6 @@ async def run_auto_gpt_server(
log_level: Optional[str] = None,
log_format: Optional[str] = None,
log_file_format: Optional[str] = None,
browser_name: Optional[str] = None,
allow_downloads: bool = False,
install_plugin_deps: bool = False,
):
from .agent_protocol_server import AgentProtocolServer
@@ -380,13 +382,10 @@ async def run_auto_gpt_server(
tts_config=config.tts_config,
)
# TODO: fill in llm values here
assert_config_has_openai_api_key(config)
await assert_config_has_required_llm_api_keys(config)
await apply_overrides_to_config(
config=config,
browser_name=browser_name,
allow_downloads=allow_downloads,
)
llm_provider = _configure_llm_provider(config)
@@ -411,7 +410,7 @@ async def run_auto_gpt_server(
)
def _configure_llm_provider(config: Config) -> MultiProvider:
def _configure_llm_provider(config: AppConfig) -> MultiProvider:
multi_provider = MultiProvider()
for model in [config.smart_llm, config.fast_llm]:
# Ensure model providers for configured LLMs are available
@@ -451,15 +450,15 @@ async def run_interaction_loop(
None
"""
# These contain both application config and agent config, so grab them here.
legacy_config = agent.legacy_config
app_config = agent.app_config
ai_profile = agent.state.ai_profile
logger = logging.getLogger(__name__)
cycle_budget = cycles_remaining = _get_cycle_budget(
legacy_config.continuous_mode, legacy_config.continuous_limit
app_config.continuous_mode, app_config.continuous_limit
)
spinner = Spinner(
"Thinking...", plain_output=legacy_config.logging.plain_console_output
"Thinking...", plain_output=app_config.logging.plain_console_output
)
stop_reason = None
@@ -508,22 +507,25 @@ async def run_interaction_loop(
########
handle_stop_signal()
# Have the agent determine the next action to take.
with spinner:
try:
action_proposal = await agent.propose_action()
except InvalidAgentResponseError as e:
logger.warning(f"The agent's thoughts could not be parsed: {e}")
consecutive_failures += 1
if consecutive_failures >= 3:
logger.error(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row. Terminating..."
)
raise AgentTerminated(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row."
)
continue
if not (_ep := agent.event_history.current_episode) or _ep.result:
with spinner:
try:
action_proposal = await agent.propose_action()
except InvalidAgentResponseError as e:
logger.warning(f"The agent's thoughts could not be parsed: {e}")
consecutive_failures += 1
if consecutive_failures >= 3:
logger.error(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row. Terminating..."
)
raise AgentTerminated(
"The agent failed to output valid thoughts"
f" {consecutive_failures} times in a row."
)
continue
else:
action_proposal = _ep.action
consecutive_failures = 0
@@ -534,7 +536,7 @@ async def run_interaction_loop(
update_user(
ai_profile,
action_proposal,
speak_mode=legacy_config.tts_config.speak_mode,
speak_mode=app_config.tts_config.speak_mode,
)
##################
@@ -543,7 +545,7 @@ async def run_interaction_loop(
handle_stop_signal()
if cycles_remaining == 1: # Last cycle
feedback_type, feedback, new_cycles_remaining = await get_user_feedback(
legacy_config,
app_config,
ai_profile,
)
@@ -654,7 +656,7 @@ def update_user(
async def get_user_feedback(
config: Config,
config: AppConfig,
ai_profile: AIProfile,
) -> tuple[UserFeedback, str, int | None]:
"""Gets the user's feedback on the assistant's reply.


@@ -4,9 +4,10 @@ from typing import Optional
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.logging.utils import print_attribute
from autogpt.app.config import AppConfig
from .input import clean_input
logger = logging.getLogger(__name__)
@@ -46,7 +47,7 @@ def apply_overrides_to_ai_settings(
async def interactively_revise_ai_settings(
ai_profile: AIProfile,
directives: AIDirectives,
app_config: Config,
app_config: AppConfig,
):
"""Interactively revise the AI settings.

autogpt/poetry.lock (generated; diff suppressed because it is too large)


@@ -20,44 +20,24 @@ serve = "autogpt.app.cli:serve"
[tool.poetry.dependencies]
python = "^3.10"
anthropic = "^0.25.1"
autogpt-forge = { path = "../forge", develop = true }
# autogpt-forge = {git = "https://github.com/Significant-Gravitas/AutoGPT.git", subdirectory = "forge"}
beautifulsoup4 = "^4.12.2"
charset-normalizer = "^3.1.0"
click = "*"
colorama = "^0.4.6"
distro = "^1.8.0"
en-core-web-sm = { url = "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl" }
fastapi = "^0.109.1"
ftfy = "^6.1.1"
google-api-python-client = "*"
gitpython = "^3.1.32"
hypercorn = "^0.14.4"
inflection = "*"
jsonschema = "*"
numpy = "*"
openai = "^1.7.2"
orjson = "^3.8.10"
Pillow = "*"
pydantic = "*"
python-docx = "*"
pydantic = "^2.7.2"
python-dotenv = "^1.0.0"
pyyaml = "^6.0"
readability-lxml = "^0.8.1"
requests = "*"
sentry-sdk = "^1.40.4"
spacy = "^3.7.4"
tenacity = "^8.2.2"
# OpenAI and Generic plugins import
openapi-python-client = "^0.14.0"
# Benchmarking
agbenchmark = { path = "../benchmark", optional = true }
# agbenchmark = {git = "https://github.com/Significant-Gravitas/AutoGPT.git", subdirectory = "benchmark", optional = true}
psycopg2-binary = "^2.9.9"
multidict = "6.0.5"
cx-freeze = "7.0.0"
[tool.poetry.extras]
benchmark = ["agbenchmark"]
@@ -65,27 +45,28 @@ benchmark = ["agbenchmark"]
[tool.poetry.group.dev.dependencies]
black = "^23.12.1"
flake8 = "^7.0.0"
gitpython = "^3.1.32"
isort = "^5.13.1"
pre-commit = "*"
pyright = "^1.1.364"
types-beautifulsoup4 = "*"
# Type stubs
types-colorama = "*"
types-Markdown = "*"
types-Pillow = "*"
# Testing
asynctest = "*"
coverage = "*"
pytest = "*"
pytest-asyncio = "*"
pytest-benchmark = "*"
pytest-cov = "*"
pytest-integration = "*"
pytest-mock = "*"
pytest-recording = "*"
pytest-xdist = "*"
vcrpy = { git = "https://github.com/Significant-Gravitas/vcrpy.git", rev = "master" }
[tool.poetry.group.build]
optional = true
[tool.poetry.group.build.dependencies]
cx-freeze = { git = "https://github.com/ntindle/cx_Freeze.git", rev = "main" }
# HACK: switch to cx-freeze release package after #2442 and #2472 are merged: https://github.com/marcelotduarte/cx_Freeze/pulls?q=is:pr+%232442+OR+%232472+
# cx-freeze = { version = "^7.2.0", optional = true }
[build-system]
@@ -108,7 +89,3 @@ skip_glob = ["data"]
pythonVersion = "3.10"
exclude = ["data/**", "**/node_modules", "**/__pycache__", "**/.*"]
ignore = ["../forge/**"]
[tool.pytest.ini_options]
markers = ["slow", "requires_openai_api_key", "requires_huggingface_api_key"]


@@ -19,7 +19,7 @@ from autogpt.app.utils import coroutine
help="Path to the git repository",
)
@coroutine
async def generate_release_notes(repo_path: Optional[Path] = None):
async def generate_release_notes(repo_path: Optional[str | Path] = None):
logger = logging.getLogger(generate_release_notes.name) # pyright: ignore
repo = Repo(repo_path, search_parent_directories=True)

autogpt/scripts/llamafile/.gitignore (new file)

@@ -0,0 +1,3 @@
*.llamafile
*.llamafile.exe
llamafile.exe


@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""
Use llamafile to serve a (quantized) mistral-7b-instruct-v0.2 model
Usage:
cd <repo-root>/autogpt
./scripts/llamafile/serve.py
"""
import os
import platform
import subprocess
from pathlib import Path
from typing import Optional
import click
LLAMAFILE = Path("mistral-7b-instruct-v0.2.Q5_K_M.llamafile")
LLAMAFILE_URL = f"https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/resolve/main/{LLAMAFILE.name}" # noqa
LLAMAFILE_EXE = Path("llamafile.exe")
LLAMAFILE_EXE_URL = "https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.6/llamafile-0.8.6" # noqa
@click.command()
@click.option(
"--llamafile",
type=click.Path(dir_okay=False, path_type=Path),
help=f"Name of the llamafile to serve. Default: {LLAMAFILE.name}",
)
@click.option("--llamafile_url", help="Download URL for the llamafile you want to use")
@click.option(
"--host", help="Specify the address for the llamafile server to listen on"
)
@click.option(
"--port", type=int, help="Specify the port for the llamafile server to listen on"
)
@click.option(
"--force-gpu",
is_flag=True,
hidden=platform.system() != "Darwin",
help="Run the model using only the GPU (AMD or Nvidia). "
"Otherwise, both CPU and GPU may be (partially) used.",
)
def main(
llamafile: Optional[Path] = None,
llamafile_url: Optional[str] = None,
host: Optional[str] = None,
port: Optional[int] = None,
force_gpu: bool = False,
):
print(f"type(llamafile) = {type(llamafile)}")
if not llamafile:
if not llamafile_url:
llamafile = LLAMAFILE
else:
llamafile = Path(llamafile_url.rsplit("/", 1)[1])
if llamafile.suffix != ".llamafile":
click.echo(
click.style(
"The given URL does not end with '.llamafile' -> "
"can't get filename from URL. "
"Specify the filename using --llamafile.",
fg="red",
),
err=True,
)
return
if llamafile == LLAMAFILE and not llamafile_url:
llamafile_url = LLAMAFILE_URL
elif llamafile_url != LLAMAFILE_URL:
if not click.prompt(
click.style(
"You seem to have specified a different URL for the default model "
f"({llamafile.name}). Are you sure this is correct? "
"If you want to use a different model, also specify --llamafile.",
fg="yellow",
),
type=bool,
):
return
# Go to autogpt/scripts/llamafile/
os.chdir(Path(__file__).resolve().parent)
on_windows = platform.system() == "Windows"
if not llamafile.is_file():
if not llamafile_url:
click.echo(
click.style(
"Please use --lamafile_url to specify a download URL for "
f"'{llamafile.name}'. "
"This will only be necessary once, so we can download the model.",
fg="red",
),
err=True,
)
return
download_file(llamafile_url, llamafile)
if not on_windows:
llamafile.chmod(0o755)
subprocess.run([llamafile, "--version"], check=True)
if not on_windows:
base_command = [f"./{llamafile}"]
else:
# Windows does not allow executables over 4GB, so we have to download a
# model-less llamafile.exe and run that instead.
if not LLAMAFILE_EXE.is_file():
download_file(LLAMAFILE_EXE_URL, LLAMAFILE_EXE)
LLAMAFILE_EXE.chmod(0o755)
subprocess.run([f".\\{LLAMAFILE_EXE}", "--version"], check=True)
base_command = [f".\\{LLAMAFILE_EXE}", "-m", llamafile]
if host:
base_command.extend(["--host", host])
if port:
base_command.extend(["--port", str(port)])
if force_gpu:
base_command.extend(["-ngl", "9999"])
subprocess.run(
[
*base_command,
"--server",
"--nobrowser",
"--ctx-size",
"0",
"--n-predict",
"1024",
],
check=True,
)
# note: --ctx-size 0 means the prompt context size will be set directly from the
# underlying model configuration. This may cause slow response times or consume
# a lot of memory.
def download_file(url: str, to_file: Path) -> None:
print(f"Downloading {to_file.name}...")
import urllib.request
urllib.request.urlretrieve(url, to_file, reporthook=report_download_progress)
print()
def report_download_progress(chunk_number: int, chunk_size: int, total_size: int):
if total_size != -1:
downloaded_size = chunk_number * chunk_size
percent = min(1, downloaded_size / total_size)
bar = "#" * int(40 * percent)
print(
f"\rDownloading: [{bar:<40}] {percent:.0%}"
f" - {downloaded_size/1e6:.1f}/{total_size/1e6:.1f} MB",
end="",
)
if __name__ == "__main__":
main()
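For instance, to serve the default model on the port matching the `LLAMAFILE_API_BASE` default shown earlier (a usage sketch, run from the `autogpt` directory):

```bash
./scripts/llamafile/serve.py --port 8080
```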


@@ -1,7 +1,9 @@
import platform
from pathlib import Path
from pkgutil import iter_modules
from shutil import which
from typing import Union
from cx_Freeze import Executable, setup
from cx_Freeze import Executable, setup # type: ignore
packages = [
m.name
@@ -11,11 +13,47 @@ packages = [
and ("poetry" in m.module_finder.path) # type: ignore
]
icon = (
"../../assets/gpt_dark_RGB.icns"
if which("sips")
else "../../assets/gpt_dark_RGB.ico"
)
# set the icon based on the platform
icon = "../../assets/gpt_dark_RGB.ico"
if platform.system() == "Darwin":
icon = "../../assets/gpt_dark_RGB.icns"
elif platform.system() == "Linux":
icon = "../../assets/gpt_dark_RGB.png"
def txt_to_rtf(input_file: Union[str, Path], output_file: Union[str, Path]) -> None:
"""
Convert a text file to RTF format.
Args:
input_file (Union[str, Path]): Path to the input text file.
output_file (Union[str, Path]): Path to the output RTF file.
Returns:
None
"""
input_path = Path(input_file)
output_path = Path(output_file)
with input_path.open("r", encoding="utf-8") as txt_file:
content = txt_file.read()
# RTF header
rtf = r"{\rtf1\ansi\deff0 {\fonttbl {\f0 Times New Roman;}}\f0\fs24 "
# Replace newlines with RTF newline
rtf += content.replace("\n", "\\par ")
# Close RTF document
rtf += "}"
with output_path.open("w", encoding="utf-8") as rtf_file:
rtf_file.write(rtf)
# Convert LICENSE to LICENSE.rtf
license_file = "LICENSE.rtf"
txt_to_rtf("../LICENSE", license_file)
setup(
@@ -55,6 +93,7 @@ setup(
"target_name": "AutoGPT",
"add_to_path": True,
"install_icon": "../assets/gpt_dark_RGB.ico",
"license_file": license_file,
},
},
)


@@ -6,7 +6,6 @@ from pathlib import Path
import pytest
from forge.config.ai_profile import AIProfile
from forge.config.config import Config, ConfigBuilder
from forge.file_storage.local import (
FileStorage,
FileStorageConfiguration,
@@ -16,11 +15,11 @@ from forge.llm.providers import MultiProvider
from forge.logging.config import configure_logging
from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.config import AppConfig, ConfigBuilder
from autogpt.app.main import _configure_llm_provider
pytest_plugins = [
"tests.integration.agent_factory",
"tests.vcr",
]
@@ -62,7 +61,7 @@ def config(
@pytest.fixture(scope="session")
def setup_logger(config: Config):
def setup_logger():
configure_logging(
debug=True,
log_dir=Path(__file__).parent / "logs",
@@ -71,12 +70,14 @@ def setup_logger(config: Config):
@pytest.fixture
def llm_provider(config: Config) -> MultiProvider:
def llm_provider(config: AppConfig) -> MultiProvider:
return _configure_llm_provider(config)
@pytest.fixture
def agent(config: Config, llm_provider: MultiProvider, storage: FileStorage) -> Agent:
def agent(
config: AppConfig, llm_provider: MultiProvider, storage: FileStorage
) -> Agent:
ai_profile = AIProfile(
ai_name="Base",
ai_role="A base AI",
@@ -94,13 +95,13 @@ def agent(config: Config, llm_provider: MultiProvider, storage: FileStorage) ->
allow_fs_access=not config.restrict_to_workspace,
use_functions_api=config.openai_functions,
),
history=Agent.default_settings.history.copy(deep=True),
history=Agent.default_settings.history.model_copy(deep=True),
)
agent = Agent(
settings=agent_settings,
llm_provider=llm_provider,
file_storage=storage,
legacy_config=config,
app_config=config,
)
return agent


@@ -2,15 +2,15 @@ from pathlib import Path
import pytest
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from forge.file_storage import FileStorageBackendName, get_storage
from forge.llm.providers import MultiProvider
from autogpt.agents.agent import Agent, AgentConfiguration, AgentSettings
from autogpt.app.config import AppConfig
@pytest.fixture
def dummy_agent(config: Config, llm_provider: MultiProvider):
def dummy_agent(config: AppConfig, llm_provider: MultiProvider):
ai_profile = AIProfile(
ai_name="Dummy Agent",
ai_role="Dummy Role",
@@ -28,7 +28,7 @@ def dummy_agent(config: Config, llm_provider: MultiProvider):
smart_llm=config.smart_llm,
use_functions_api=config.openai_functions,
),
history=Agent.default_settings.history.copy(deep=True),
history=Agent.default_settings.history.model_copy(deep=True),
)
local = config.file_storage_backend == FileStorageBackendName.LOCAL
@@ -44,7 +44,7 @@ def dummy_agent(config: Config, llm_provider: MultiProvider):
settings=agent_settings,
llm_provider=llm_provider,
file_storage=file_storage,
legacy_config=config,
app_config=config,
)
return agent


@@ -3,8 +3,8 @@ from unittest.mock import patch
import pytest
from forge.config.ai_directives import AIDirectives
from forge.config.ai_profile import AIProfile
from forge.config.config import Config
from autogpt.app.config import AppConfig
from autogpt.app.setup import (
apply_overrides_to_ai_settings,
interactively_revise_ai_settings,
@@ -39,7 +39,7 @@ async def test_apply_overrides_to_ai_settings():
@pytest.mark.asyncio
async def test_interactively_revise_ai_settings(config: Config):
async def test_interactively_revise_ai_settings(config: AppConfig):
ai_profile = AIProfile(ai_name="Test AI", ai_role="Test Role")
directives = AIDirectives(
resources=["Resource1"],


@@ -8,15 +8,15 @@ from typing import Any
from unittest import mock
import pytest
from forge.config.config import GPT_3_MODEL, GPT_4_MODEL, Config, ConfigBuilder
from openai.pagination import AsyncPage
from openai.types import Model
from pydantic import SecretStr
from autogpt.app.config import GPT_3_MODEL, GPT_4_MODEL, AppConfig, ConfigBuilder
from autogpt.app.configurator import apply_overrides_to_config
def test_initial_values(config: Config) -> None:
def test_initial_values(config: AppConfig) -> None:
"""
Test if the initial values of the config class attributes are set correctly.
"""
@@ -29,7 +29,7 @@ def test_initial_values(config: Config) -> None:
@pytest.mark.asyncio
@mock.patch("openai.resources.models.AsyncModels.list")
async def test_fallback_to_gpt3_if_gpt4_not_available(
mock_list_models: Any, config: Config
mock_list_models: Any, config: AppConfig
) -> None:
"""
Test if models update to gpt-3.5-turbo if gpt-4 is not available.
@@ -51,7 +51,7 @@ async def test_fallback_to_gpt3_if_gpt4_not_available(
assert config.smart_llm == GPT_3_MODEL
def test_missing_azure_config(config: Config) -> None:
def test_missing_azure_config(config: AppConfig) -> None:
assert config.openai_credentials is not None
config_file = config.app_data_dir / "azure_config.yaml"
@@ -68,7 +68,7 @@ def test_missing_azure_config(config: Config) -> None:
@pytest.fixture
def config_with_azure(config: Config):
def config_with_azure(config: AppConfig):
config_file = config.app_data_dir / "azure_config.yaml"
config_file.write_text(
f"""
@@ -91,7 +91,7 @@ azure_model_map:
del os.environ["AZURE_CONFIG_FILE"]
def test_azure_config(config_with_azure: Config) -> None:
def test_azure_config(config_with_azure: AppConfig) -> None:
assert (credentials := config_with_azure.openai_credentials) is not None
assert credentials.api_type == SecretStr("azure")
assert credentials.api_version == SecretStr("2023-06-01-preview")

View File

@@ -2,5 +2,5 @@
"[python]": {
"editor.defaultFormatter": "ms-python.black-formatter"
},
"python.formatting.provider": "none"
"python.analysis.typeCheckingMode": "basic",
}

View File

@@ -97,7 +97,9 @@ def start():
help="Write log output to a file instead of the terminal.",
)
# @click.argument(
# "agent_path", type=click.Path(exists=True, file_okay=False), required=False
# "agent_path",
# type=click.Path(exists=True, file_okay=False, path_type=Path),
# required=False,
# )
def run(
maintain: bool,
@@ -276,7 +278,9 @@ def list_challenges(include_unavailable: bool, only_names: bool, output_json: bo
return
if output_json:
click.echo(json.dumps([json.loads(c.info.json()) for c in challenges]))
click.echo(
json.dumps([json.loads(c.info.model_dump_json()) for c in challenges])
)
return
headers = tuple(
@@ -324,7 +328,7 @@ def info(name: str, json: bool):
continue
if json:
click.echo(challenge.info.json())
click.echo(challenge.info.model_dump_json())
break
pretty_print_model(challenge.info)

View File

@@ -16,7 +16,7 @@ from agent_protocol_client import AgentApi, ApiClient, ApiException, Configurati
from agent_protocol_client.models import Task, TaskRequestBody
from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, Extra, ValidationError
from pydantic import BaseModel, ConfigDict, ValidationError
from agbenchmark.challenges import ChallengeInfo
from agbenchmark.config import AgentBenchmarkConfig
@@ -52,7 +52,9 @@ while challenge_spec_files:
logger.debug(f"Loading {challenge_relpath}...")
try:
challenge_info = ChallengeInfo.parse_file(challenge_spec_file)
challenge_info = ChallengeInfo.model_validate_json(
challenge_spec_file.read_text()
)
except ValidationError as e:
if logging.getLogger().level == logging.DEBUG:
logger.warning(f"Spec file {challenge_relpath} failed to load:\n{e}")
@@ -64,7 +66,7 @@ while challenge_spec_files:
challenge_info.eval_id = str(uuid.uuid4())
# this will sort all the keys of the JSON systematically
# so that the order is always the same
write_pretty_json(challenge_info.dict(), challenge_spec_file)
write_pretty_json(challenge_info.model_dump(), challenge_spec_file)
CHALLENGES[challenge_info.eval_id] = challenge_info
@@ -111,8 +113,7 @@ class CreateReportRequest(BaseModel):
# category: Optional[str] = []
mock: Optional[bool] = False
class Config:
extra = Extra.forbid # this will forbid any extra fields
model_config = ConfigDict(extra="forbid")
updates_list = []
@@ -153,7 +154,7 @@ def setup_fastapi_app(agbenchmark_config: AgentBenchmarkConfig) -> FastAPI:
pids = find_agbenchmark_without_uvicorn()
logger.info(f"pids already running with agbenchmark: {pids}")
logger.debug(f"Request to /reports: {body.dict()}")
logger.debug(f"Request to /reports: {body.model_dump()}")
# Start the benchmark in a separate thread
benchmark_process = Process(
@@ -326,7 +327,9 @@ def setup_fastapi_app(agbenchmark_config: AgentBenchmarkConfig) -> FastAPI:
config={},
)
logger.debug(f"Returning evaluation data:\n{eval_info.json(indent=4)}")
logger.debug(
f"Returning evaluation data:\n{eval_info.model_dump_json(indent=4)}"
)
return eval_info
except ApiException as e:
logger.error(f"Error {e} whilst trying to evaluate task: {task_id}")

View File

@@ -15,7 +15,13 @@ from agent_protocol_client import Configuration as ClientConfig
from agent_protocol_client import Step
from colorama import Fore, Style
from openai import _load_client as get_openai_client
from pydantic import BaseModel, Field, constr, validator
from pydantic import (
BaseModel,
Field,
StringConstraints,
ValidationInfo,
field_validator,
)
from agbenchmark.agent_api_interface import download_agent_artifacts_into_folder
from agbenchmark.agent_interface import copy_challenge_artifacts_into_workspace
@@ -46,7 +52,9 @@ class BuiltinChallengeSpec(BaseModel):
class Info(BaseModel):
difficulty: DifficultyLevel
description: Annotated[str, constr(regex=r"^Tests if the agent can.*")]
description: Annotated[
str, StringConstraints(pattern=r"^Tests if the agent can.*")
]
side_effects: list[str] = Field(default_factory=list)
info: Info
@@ -60,23 +68,26 @@ class BuiltinChallengeSpec(BaseModel):
class Eval(BaseModel):
type: str
scoring: Optional[Literal["percentage", "scale", "binary"]]
template: Optional[Literal["rubric", "reference", "question", "custom"]]
examples: Optional[str]
scoring: Optional[Literal["percentage", "scale", "binary"]] = None
template: Optional[
Literal["rubric", "reference", "question", "custom"]
] = None
examples: Optional[str] = None
@validator("scoring", "template", always=True)
def validate_eval_fields(cls, v, values, field):
if "type" in values and values["type"] == "llm":
if v is None:
@field_validator("scoring", "template")
def validate_eval_fields(cls, value, info: ValidationInfo):
field_name = info.field_name
if "type" in info.data and info.data["type"] == "llm":
if value is None:
raise ValueError(
f"{field.name} must be provided when eval type is 'llm'"
f"{field_name} must be provided when eval type is 'llm'"
)
else:
if v is not None:
if value is not None:
raise ValueError(
f"{field.name} should only exist when eval type is 'llm'"
f"{field_name} should only exist when eval type is 'llm'"
)
return v
return value
eval: Eval
@@ -142,7 +153,7 @@ class BuiltinChallenge(BaseChallenge):
@classmethod
def from_challenge_spec_file(cls, spec_file: Path) -> type["BuiltinChallenge"]:
challenge_spec = BuiltinChallengeSpec.parse_file(spec_file)
challenge_spec = BuiltinChallengeSpec.model_validate_json(spec_file.read_text())
challenge_spec.spec_file = spec_file
return cls.from_challenge_spec(challenge_spec)
@@ -187,7 +198,7 @@ class BuiltinChallenge(BaseChallenge):
task_id = step.task_id
n_steps += 1
steps.append(step.copy())
steps.append(step.model_copy())
if step.additional_output:
agent_task_cost = step.additional_output.get(
"task_total_cost",

View File

@@ -1,7 +1,7 @@
from abc import ABC, abstractmethod
from typing import Optional
from pydantic import BaseModel, validator
from pydantic import BaseModel, field_validator
# Models for the request and response payloads
@@ -10,7 +10,7 @@ class ShipPlacement(BaseModel):
start: dict # {"row": int, "column": str}
direction: str
@validator("start")
@field_validator("start")
def validate_start(cls, start):
row, column = start.get("row"), start.get("column")

View File

@@ -1,7 +1,7 @@
from abc import ABC, abstractmethod
from typing import Optional
from pydantic import BaseModel, validator
from pydantic import BaseModel, field_validator
# Models for the request and response payloads
@@ -10,7 +10,7 @@ class ShipPlacement(BaseModel):
start: dict # {"row": int, "column": str}
direction: str
@validator("start")
@field_validator("start")
def validate_start(cls, start):
row, column = start.get("row"), start.get("column")
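
When the validator body touches neither `values` nor `field`, as in this ShipPlacement check, the migration is a one-line decorator swap. A sketch, with the key check paraphrased:

from pydantic import BaseModel, field_validator

class ShipPlacement(BaseModel):
    start: dict  # {"row": int, "column": str}
    direction: str

    @field_validator("start")  # v1: @validator("start")
    @classmethod
    def validate_start(cls, start: dict) -> dict:
        if not {"row", "column"} <= start.keys():
            raise ValueError("start needs 'row' and 'column' keys")
        return start

ShipPlacement(start={"row": 1, "column": "A"}, direction="horizontal")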

View File

@@ -6,7 +6,7 @@ from typing import ClassVar, Iterator, Literal
import pytest
import requests
from agent_protocol_client import AgentApi, Step
from pydantic import BaseModel, ValidationError, validator
from pydantic import BaseModel, ValidationError, ValidationInfo, field_validator
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.data_types import Category, EvalResult
@@ -183,7 +183,7 @@ class WebArenaChallengeSpec(BaseModel):
"""The JungleGym site (base URL) at which to start"""
require_login: bool
require_reset: bool
storage_state: str | None
storage_state: str | None = None
intent: str
intent_template: str
@@ -195,36 +195,36 @@ class WebArenaChallengeSpec(BaseModel):
class EvalSet(BaseModel):
class StringMatchEvalSet(BaseModel):
exact_match: str | None
fuzzy_match: list[str] | None
must_include: list[str] | None
exact_match: str | None = None
fuzzy_match: list[str] | None = None
must_include: list[str] | None = None
reference_answers: StringMatchEvalSet | None
reference_answers: StringMatchEvalSet | None = None
"""For string_match eval, a set of criteria to judge the final answer"""
reference_answer_raw_annotation: str | None
string_note: str | None
annotation_note: str | None
reference_answer_raw_annotation: str | None = None
string_note: str | None = None
annotation_note: str | None = None
reference_url: str | None
reference_url: str | None = None
"""For url_match eval, the last URL that should be visited"""
url_note: str | None
url_note: str | None = None
program_html: list[ProgramHtmlEval]
"""For program_html eval, a list of criteria to judge the site state by"""
eval_types: list[EvalType]
@validator("eval_types")
def check_eval_parameters(cls, v: list[EvalType], values):
if "string_match" in v and not values.get("reference_answers"):
@field_validator("eval_types")
def check_eval_parameters(cls, value: list[EvalType], info: ValidationInfo):
if "string_match" in value and not info.data["reference_answers"]:
raise ValueError("'string_match' eval_type requires reference_answers")
if "url_match" in v and not values.get("reference_url"):
if "url_match" in value and not info.data["reference_url"]:
raise ValueError("'url_match' eval_type requires reference_url")
if "program_html" in v and not values.get("program_html"):
if "program_html" in value and not info.data["program_html"]:
raise ValueError(
"'program_html' eval_type requires at least one program_html eval"
)
return v
return value
@property
def evaluators(self) -> list[_Eval]:
@@ -292,7 +292,7 @@ class WebArenaChallenge(BaseChallenge):
results = requests.get(source_url).json()["data"]
if not results:
raise ValueError(f"Could not fetch challenge {source_uri}")
return cls.from_challenge_spec(WebArenaChallengeSpec.parse_obj(results[0]))
return cls.from_challenge_spec(WebArenaChallengeSpec.model_validate(results[0]))
@classmethod
def from_challenge_spec(
@@ -500,7 +500,7 @@ def load_webarena_challenges(
skipped = 0
for entry in challenge_dicts:
try:
challenge_spec = WebArenaChallengeSpec.parse_obj(entry)
challenge_spec = WebArenaChallengeSpec.model_validate(entry)
except ValidationError as e:
failed += 1
logger.warning(f"Error validating WebArena challenge entry: {entry}")

View File

@@ -4,7 +4,8 @@ from datetime import datetime
from pathlib import Path
from typing import Optional
from pydantic import BaseSettings, Field, validator
from pydantic import Field, ValidationInfo, field_validator
from pydantic_settings import BaseSettings
def _calculate_info_test_path(base_path: Path, benchmark_start_time: datetime) -> Path:
@@ -57,7 +58,7 @@ class AgentBenchmarkConfig(BaseSettings, extra="allow"):
subject application exposes an Agent Protocol compliant API.
"""
agbenchmark_config_dir: Path = Field(..., exclude=True)
agbenchmark_config_dir: Path = Field(exclude=True)
"""Path to the agbenchmark_config folder of the subject agent application."""
categories: list[str] | None = None
@@ -101,11 +102,11 @@ class AgentBenchmarkConfig(BaseSettings, extra="allow"):
def config_file(self) -> Path:
return self.agbenchmark_config_dir / "config.json"
@validator("reports_folder", pre=True, always=True)
def set_reports_folder(cls, v, values):
if not v:
return values["agbenchmark_config_dir"] / "reports"
return v
@field_validator("reports_folder", mode="before")
def set_reports_folder(cls, value: Path, info: ValidationInfo):
if not value:
return info.data["agbenchmark_config_dir"] / "reports"
return value
def get_report_dir(self, benchmark_start_time: datetime) -> Path:
return _calculate_info_test_path(self.reports_folder, benchmark_start_time)
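
Two moves land in this file at once: BaseSettings now comes from the separate pydantic-settings package, and the reports_folder default is computed by a mode="before" validator. A minimal sketch with the class and fields trimmed; as an aside, v1's `always=True` maps to Field(validate_default=True) in v2 if the validator must also run when the field is omitted:

from pathlib import Path
from pydantic import Field, ValidationInfo, field_validator
from pydantic_settings import BaseSettings  # v2: no longer importable from pydantic itself

class BenchConfig(BaseSettings, extra="allow"):  # hypothetical trimmed AgentBenchmarkConfig
    config_dir: Path = Field(exclude=True)  # a bare Field() is already required in v2; `...` is redundant
    reports_folder: Path | None = Field(None, validate_default=True)

    @field_validator("reports_folder", mode="before")  # v1: @validator(..., pre=True, always=True)
    @classmethod
    def default_reports_folder(cls, value, info: ValidationInfo):
        return value or info.data["config_dir"] / "reports"

cfg = BenchConfig(config_dir=Path("/tmp/agbenchmark_config"))
print(cfg.reports_folder)  # /tmp/agbenchmark_config/reports, barring overriding env vars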

View File

@@ -111,17 +111,19 @@ class SessionReportManager(BaseReportManager):
def save(self) -> None:
with self.report_file.open("w") as f:
if self.report:
f.write(self.report.json(indent=4))
f.write(self.report.model_dump_json(indent=4))
else:
json.dump({k: v.dict() for k, v in self.tests.items()}, f, indent=4)
json.dump(
{k: v.model_dump() for k, v in self.tests.items()}, f, indent=4
)
def load(self) -> None:
super().load()
if "tests" in self.tests:
self.report = Report.parse_obj(self.tests)
self.report = Report.model_validate(self.tests)
else:
self.tests = {n: Test.parse_obj(d) for n, d in self.tests.items()}
self.tests = {n: Test.model_validate(d) for n, d in self.tests.items()}
def add_test_report(self, test_name: str, test_report: Test) -> None:
if self.report:
@@ -155,7 +157,7 @@ class SessionReportManager(BaseReportManager):
total_cost=self.get_total_costs(),
),
tests=copy.copy(self.tests),
config=config.dict(exclude={"reports_folder"}, exclude_none=True),
config=config.model_dump(exclude={"reports_folder"}, exclude_none=True),
)
agent_categories = get_highest_achieved_difficulty_per_category(self.report)
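
The report-manager changes are the serialization half of the migration. The renames used throughout these diffs, gathered in one place over a hypothetical two-field stand-in model:

from pydantic import BaseModel

class Test(BaseModel):  # stand-in for the report Test model
    data_path: str
    difficulty: str | None = None

t = Test(data_path="abilities/write_file")

# v1 -> v2:
#   .json(...)       -> .model_dump_json(...)
#   .dict(...)       -> .model_dump(...)
#   .parse_obj(...)  -> .model_validate(...)
#   .parse_file(p)   -> .model_validate_json(p.read_text())
payload = t.model_dump_json(indent=4)
assert Test.model_validate(t.model_dump(include={"data_path"})).data_path == t.data_path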

View File

@@ -27,7 +27,7 @@ def get_reports_data(report_path: str) -> dict[str, Any]:
with open(Path(subdir) / file, "r") as f:
# Load the JSON data from the file
json_data = json.load(f)
converted_data = Report.parse_obj(json_data)
converted_data = Report.model_validate(json_data)
# get the last directory name in the path as key
reports_data[subdir_name] = converted_data

View File

@@ -6,7 +6,13 @@ import logging
from typing import Annotated, Any, Dict, List
from agent_protocol_client import Step
from pydantic import BaseModel, Field, constr, validator
from pydantic import (
BaseModel,
Field,
StringConstraints,
ValidationInfo,
field_validator,
)
datetime_format = r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\+00:00$"
logger = logging.getLogger(__name__)
@@ -30,20 +36,20 @@ class TestResult(BaseModel):
cost: float | None = None
"""The (known) cost incurred by the run, e.g. from using paid LLM APIs"""
@validator("fail_reason")
def success_xor_fail_reason(cls, v: str | None, values: dict[str, Any]):
if bool(v) == bool(values["success"]):
@field_validator("fail_reason")
def success_xor_fail_reason(cls, value, info: ValidationInfo):
if bool(value) == bool(info.data["success"]):
logger.error(
"Error validating `success ^ fail_reason` on TestResult: "
f"success = {repr(values['success'])}; "
f"fail_reason = {repr(v)}"
f"success = {repr(info.data['success'])}; "
f"fail_reason = {repr(value)}"
)
if v:
success = values["success"]
if value:
success = info.data["success"]
assert not success, "fail_reason must only be specified if success=False"
else:
assert values["success"], "fail_reason is required if success=False"
return v
assert info.data["success"], "fail_reason is required if success=False"
return value
class TestMetrics(BaseModel):
@@ -88,7 +94,7 @@ class Test(BaseModel):
class ReportBase(BaseModel):
command: str
completion_time: str | None = None
benchmark_start_time: Annotated[str, constr(regex=datetime_format)]
benchmark_start_time: Annotated[str, StringConstraints(pattern=datetime_format)]
metrics: MetricsOverall
config: Dict[str, str | dict[str, str]]
agent_git_commit_sha: str | None = None
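
benchmark_start_time keeps its regex, but the constraint type changes: v2 drops constr's regex= keyword, and inside Annotated the replacement is StringConstraints(pattern=...). A sketch:

from typing import Annotated
from pydantic import BaseModel, StringConstraints

datetime_format = r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\+00:00$"

class RunStamp(BaseModel):  # stand-in for ReportBase / RunDetails
    # v1: Annotated[str, constr(regex=datetime_format)]
    benchmark_start_time: Annotated[str, StringConstraints(pattern=datetime_format)]

RunStamp(benchmark_start_time="2024-09-20T10:22:50+00:00")  # matches the pattern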

View File

@@ -1,7 +1,7 @@
"""Model definitions for use in the API"""
from typing import Annotated
from pydantic import BaseModel, constr
from pydantic import BaseModel, StringConstraints
datetime_format = r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\+00:00$"
@@ -37,7 +37,7 @@ class RunDetails(BaseModel):
run_id: str | None = None
command: str
completion_time: str | None = None
benchmark_start_time: Annotated[str, constr(regex=datetime_format)]
benchmark_start_time: Annotated[str, StringConstraints(pattern=datetime_format)]
class BenchmarkRun(BaseModel):

View File

@@ -45,7 +45,7 @@ def update_regression_tests(
# if the last 3 tests were successful, add to the regression tests
test_report.metrics.is_regression = True
SingletonReportManager().REGRESSION_MANAGER.add_test(
test_name, test_report.dict(include={"difficulty", "data_path"})
test_name, test_report.model_dump(include={"difficulty", "data_path"})
)

View File

@@ -7,10 +7,9 @@ from pydantic import BaseModel, Field
class TaskRequestBody(BaseModel):
input: str = Field(
...,
min_length=1,
description="Input prompt for the task.",
example="Write the words you receive to the file 'output.txt'.",
examples=["Write the words you receive to the file 'output.txt'."],
)
additional_input: Optional[dict[str, Any]] = Field(default_factory=dict)
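
The TaskRequestBody tweak is a Field-metadata rename: v2 replaces the singular example= keyword with an examples= list, and a bare Field() no longer needs the `...` required marker. The list flows into the generated JSON schema, as this sketch shows:

import json
from typing import Any, Optional
from pydantic import BaseModel, Field

class TaskRequestBody(BaseModel):
    input: str = Field(
        min_length=1,
        description="Input prompt for the task.",
        examples=["Write the words you receive to the file 'output.txt'."],  # v1: example="..."
    )
    additional_input: Optional[dict[str, Any]] = Field(default_factory=dict)

schema = TaskRequestBody.model_json_schema()["properties"]["input"]
print(json.dumps(schema, indent=2))  # "examples" appears in the schema output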

View File

@@ -220,7 +220,7 @@ class DependencyManager(object):
labels = {}
for item in self.items:
assert item.cls and issubclass(item.cls, BaseChallenge)
data = item.cls.info.dict()
data = item.cls.info.model_dump()
node_name = get_name(item)
data["name"] = node_name

View File

@@ -135,7 +135,7 @@ def pretty_print_model(model: BaseModel, include_header: bool = True) -> None:
if include_header:
# Try to find the ID and/or name attribute of the model
id, name = None, None
for attr, value in model.dict().items():
for attr, value in model.model_dump().items():
if attr == "id" or attr.endswith("_id"):
id = value
if attr.endswith("name"):
@@ -148,8 +148,8 @@ def pretty_print_model(model: BaseModel, include_header: bool = True) -> None:
)
indent = " " * 2
k_col_width = max(len(k) for k in model.dict().keys())
for k, v in model.dict().items():
k_col_width = max(len(k) for k in model.model_dump().keys())
for k, v in model.model_dump().items():
v_fmt = repr(v)
if v is None or v == "":
v_fmt = click.style(v_fmt, fg="black")

View File

@@ -1,367 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 28,
"id": "15386da8",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv()\n",
"\n",
"def llm_eval(evaluation: str) -> float:\n",
" openai_client = OpenAI()\n",
" answer = openai_client.chat.completions.create(\n",
" model=\"gpt-4\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": evaluation},\n",
" ],\n",
" )\n",
" return answer.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "e9d1c3a6",
"metadata": {},
"outputs": [],
"source": [
"REFERENCE_PROMPT = \"\"\"Ignore previous directions. You are now an expert at evaluating how close machine generated responses are to human answers. You essentially act as a hyper advanced BLEU score.\n",
"In order to score the machine generated response you will {scoring}. Make sure to factor in the distance to the ideal response into your thinking, deliberation, and final result regarding scoring. Return nothing but a float score.\n",
"\n",
"Here is the given task for you to evaluate:\n",
"{task}\n",
"\n",
"Here is the ideal response you're comparing to based on the task:\n",
"{answer}\n",
"\n",
"Here are some examples of how to score a machine generated response compared to the above ideal response:\n",
"{examples}\n",
"\n",
"Here is the current machine generated response to the task that you need to evaluate:\n",
"{response}\n",
"\n",
"Remember to always end your response with nothing but a float score.\n",
"Float score:\n",
"\"\"\"\n",
"\n",
"RUBRIC_PROMPT = \"\"\"Ignore previous directions. You are now an expert at evaluating machine generated responses to given tasks.\n",
"In order to score the generated texts you will {scoring}. Make sure to factor in rubric into your thinking, deliberation, and final result regarding scoring. Return nothing but a float score.\n",
"\n",
"Here is the given task for you to evaluate:\n",
"{task}\n",
"\n",
"Use the below rubric to guide your thinking about scoring:\n",
"{answer}\n",
"\n",
"Here are some examples of how to score a machine generated response based on the above rubric.\n",
"{examples}\n",
"\n",
"Here is the current machine generated response to the task that you need to evaluate:\n",
"{response}\n",
"\n",
"Remember to always end your response with nothing but a float score.\n",
"Float score:\n",
"\"\"\"\n",
"\n",
"QUESTION_PROMPT = \"\"\"Ignore previous directions. You are now an expert at evaluating machine generated responses to given tasks.\n",
"In order to score the generated texts you will {scoring}. Make sure to think about whether the generated response answers the question well in order to score accurately. Return nothing but a float score.\n",
"\n",
"Here is the given task:\n",
"{task}\n",
"\n",
"Here is a question that checks if the task was completed correctly:\n",
"{answer}\n",
"\n",
"Here are some examples of how to score a machine generated response based on task-response pairs.\n",
"{examples}\n",
"\n",
"Here is the current machine generated response to the task that you need to evaluate:\n",
"{response}\n",
"\n",
"Remember to always end your response with nothing but a float score.\n",
"Float score:\n",
"\"\"\"\n",
"\n",
"CUSTOM_PROMPT = \"\"\"{custom}\n",
"{scoring}\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "8ff9cd33",
"metadata": {},
"outputs": [],
"source": [
"task = \"As an agent powered by a large language model, you can execute any digital action. Your task is to find the cheapest flight to San Francisco from Toronto. Create a step by step plan for yourself to execute this task and output this plan into a .txt file.\"\n",
"\n",
"percentage = \"assign a float score that will represent a percentage out of 100. Use decimal points to be even more accurate. 0 represents the worst possible generation, while 100 represents the ideal generation\"\n",
"scale = \"assign an integer score from a scale of 1-10. 1 represents a really bad generation, while 10 represents an ideal generation\"\n",
"binary = \"assign a binary score of either 0 or 1. 0 represents a failure, while 1 represents a success\"\n",
"\n",
"reference_answer = \"\"\"1. Search 'Toronto to San Francisco flights' on the internet\n",
"2. Click on the first link that is a flight aggregator such as SkyScanner or Google Flights\n",
"3. Select the dates that you want to travel\n",
"3. Click on the 'one way' option and click search\n",
"4. Search through all of the given options and select the cheapest flight\"\"\"\n",
"\n",
"rubric_answer = \"\"\"- Reasonable: The plan's steps should be arranged in a logical sequence. Each step should naturally follow from the one before it, leading coherently towards the desired outcome.\n",
"- Atomic: Each step of the plan should be broken down into the smallest possible units of action. This level of detail allows for better tracking and control, as well as easier troubleshooting and adjustment of the plan if necessary.\n",
"- Relevant: Each step should directly contribute to achieving the end goal of the plan. Irrelevant steps can waste resources and detract from the main objectives.\n",
"- Timely: The plan should include a realistic timeline for each step, and not include any redundant steps.\"\"\"\n",
"\n",
"question_answer = \"Will this plan find the cheapest flight to San Francisco from Toronto consistently?\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "d0bfdce5",
"metadata": {},
"outputs": [],
"source": [
"FEW_SHOT_REFERENCE_EXAMPLES = \"\"\"EXAMPLE 1\n",
"Plan: 'Define the Parameters:\n",
"\n",
"Identify the potential range of dates for the journey. Ensure flexibility to capitalize on the best deals.\n",
"Research Airlines:\n",
"\n",
"Compile a list of airlines that operate flights between Toronto and San Francisco. This can be accomplished by checking each airline's website or using an airline directory.\n",
"Use Comparison Tools:\n",
"\n",
"Use flight comparison websites (such as Skyscanner, Expedia, Google Flights, etc.) to compare flight prices across different airlines. These tools aggregate flight data and can present the cheapest options available.\n",
"Set Price Alerts:\n",
"\n",
"If the trip is not immediate, set price alerts on these comparison websites. They will notify you when there is a significant drop in flight prices on the selected route.\n",
"Check Airlines' Official Websites:\n",
"\n",
"After identifying potential flights via comparison tools, visit the airlines' official websites. Sometimes airlines offer deals or discounts on their own websites that are not reflected on comparison sites.\n",
"Consider Nearby Airports:\n",
"\n",
"Check flight prices to and from airports in the vicinity of both Toronto and San Francisco. Sometimes less busy airports can offer cheaper flights.\n",
"Evaluate Cost Effectiveness:\n",
"\n",
"Consider factors such as baggage fees, meal costs, and transportation to and from the airport when evaluating the total cost of the flight. The cheapest ticket price does not necessarily mean the least expensive journey overall.\n",
"Book the Flight:\n",
"\n",
"Once the cheapest and most convenient flight has been identified, proceed to booking. Double-check the flight details before finalizing the booking.\n",
"Monitor Flight Details:\n",
"\n",
"After booking, keep an eye on flight status, gate information, and any potential changes to the flight schedule.'\n",
"Returned score: 82.7\n",
"Internal rationale: The plan is solid, however the plan goes to extreme lengths to make things cheap, sacrificing time and simplicity. The task just asks for a vague definition of booking a flight. There are some redundant steps.\n",
"\n",
"EXAMPLE 2\n",
"Plan: 'Determine the Travel Dates and Flexibility: Decide on the dates you want to travel to San Francisco and check if you have any flexibility in your travel schedule. Being flexible with your travel dates can often lead to finding cheaper flights.\n",
"\n",
"Use Flight Search Engines: Start by using popular flight search engines like Google Flights, Skyscanner, Kayak, or Expedia. These platforms allow you to compare prices from various airlines and find the most affordable options.\n",
"\n",
"Set Up Fare Alerts: If your travel dates are flexible, consider setting up fare alerts on the flight search engines. These alerts will notify you when the prices drop for the specified route.\n",
"\n",
"Check Nearby Airports: In both Toronto and San Francisco, there might be multiple airports. Check flights departing from nearby airports as they may offer better deals.\n",
"\n",
"Consider Layovers: Non-stop flights are usually more convenient but can be more expensive. Look for flights with one or more layovers as they may offer cost savings.\n",
"\n",
"Check Airlines' Official Websites: Once you find a potentially cheap flight on a search engine, verify the price directly on the airline's official website. Sometimes, booking directly with the airline can be cheaper due to exclusive deals and promotions.\n",
"\n",
"Use Incognito/Private Browsing Mode: Flight prices can sometimes increase if the website detects repeated searches for the same route. To avoid this, use the incognito or private browsing mode in your web browser.\n",
"\n",
"Consider Budget Airlines: Check if there are any budget airlines flying between Toronto and San Francisco. They often offer lower fares, but be mindful of additional fees for baggage and other services.\n",
"\n",
"Check for Deals and Promo Codes: Look for any ongoing deals or promo codes that can help you save on your flight booking. Airlines and travel websites occasionally offer special discounts.\n",
"\n",
"Be Flexible with Departure and Arrival Times: If possible, consider flying during off-peak hours or mid-week, as flights during these times can be less expensive.\n",
"\n",
"Factor in Total Costs: While searching for cheap flights, don't forget to consider other expenses like baggage fees, seat selection, and additional amenities. Some budget airlines might have hidden costs that could make the overall trip more expensive.\n",
"\n",
"Book Early: Flight prices tend to rise as the departure date approaches. Once you find a good deal that suits your preferences, don't wait too long to book your flight.'\n",
"Returned score: 74.9\n",
"Internal rationale: The individual components of this plan are better than the one previous. But this plan doesn't follow logical steps to completion, and is just more general advice.\n",
"\n",
"EXAMPLE 3\n",
"Plan: 'Search online for cheap flights.\n",
"Check different dates.\n",
"Look at nearby airports.\n",
"Consider layovers.\n",
"Try budget airlines.\n",
"Book early if you find a good deal.'\n",
"Returned score: 42.0\n",
"Internal rationale: This plan is too vague and does not provide enough detail to be useful.\n",
"\n",
"\"\"\"\n",
"FEW_SHOT_RUBRIC_EXAMPLES = \"\"\n",
"FEW_SHOT_QUESTION_EXAMPLES = \"\""
]
},
{
"cell_type": "code",
"execution_count": 75,
"id": "3de1d6d4",
"metadata": {},
"outputs": [],
"source": [
"response1=\"\"\"Define the Parameters:\n",
"\n",
"Identify the potential range of dates for the journey. Ensure flexibility to capitalize on the best deals.\n",
"Research Airlines:\n",
"\n",
"Compile a list of airlines that operate flights between Toronto and San Francisco. This can be accomplished by checking each airline's website or using an airline directory.\n",
"Use Comparison Tools:\n",
"\n",
"Use flight comparison websites (such as Skyscanner, Expedia, Google Flights, etc.) to compare flight prices across different airlines. These tools aggregate flight data and can present the cheapest options available.\n",
"Set Price Alerts:\n",
"\n",
"If the trip is not immediate, set price alerts on these comparison websites. They will notify you when there is a significant drop in flight prices on the selected route.\n",
"Check Airlines' Official Websites:\n",
"\n",
"After identifying potential flights via comparison tools, visit the airlines' official websites. Sometimes airlines offer deals or discounts on their own websites that are not reflected on comparison sites.\n",
"Consider Nearby Airports:\n",
"\n",
"Check flight prices to and from airports in the vicinity of both Toronto and San Francisco. Sometimes less busy airports can offer cheaper flights.\n",
"Evaluate Cost Effectiveness:\n",
"\n",
"Consider factors such as baggage fees, meal costs, and transportation to and from the airport when evaluating the total cost of the flight. The cheapest ticket price does not necessarily mean the least expensive journey overall.\n",
"Book the Flight:\n",
"\n",
"Once the cheapest and most convenient flight has been identified, proceed to booking. Double-check the flight details before finalizing the booking.\n",
"Monitor Flight Details:\n",
"\n",
"After booking, keep an eye on flight status, gate information, and any potential changes to the flight schedule.\"\"\"\n",
"\n",
"response2=\"\"\"Determine the Travel Dates and Flexibility: Decide on the dates you want to travel to San Francisco and check if you have any flexibility in your travel schedule. Being flexible with your travel dates can often lead to finding cheaper flights.\n",
"\n",
"Use Flight Search Engines: Start by using popular flight search engines like Google Flights, Skyscanner, Kayak, or Expedia. These platforms allow you to compare prices from various airlines and find the most affordable options.\n",
"\n",
"Set Up Fare Alerts: If your travel dates are flexible, consider setting up fare alerts on the flight search engines. These alerts will notify you when the prices drop for the specified route.\n",
"\n",
"Check Nearby Airports: In both Toronto and San Francisco, there might be multiple airports. Check flights departing from nearby airports as they may offer better deals.\n",
"\n",
"Consider Layovers: Non-stop flights are usually more convenient but can be more expensive. Look for flights with one or more layovers as they may offer cost savings.\n",
"\n",
"Check Airlines' Official Websites: Once you find a potentially cheap flight on a search engine, verify the price directly on the airline's official website. Sometimes, booking directly with the airline can be cheaper due to exclusive deals and promotions.\n",
"\n",
"Use Incognito/Private Browsing Mode: Flight prices can sometimes increase if the website detects repeated searches for the same route. To avoid this, use the incognito or private browsing mode in your web browser.\n",
"\n",
"Consider Budget Airlines: Check if there are any budget airlines flying between Toronto and San Francisco. They often offer lower fares, but be mindful of additional fees for baggage and other services.\n",
"\n",
"Check for Deals and Promo Codes: Look for any ongoing deals or promo codes that can help you save on your flight booking. Airlines and travel websites occasionally offer special discounts.\n",
"\n",
"Be Flexible with Departure and Arrival Times: If possible, consider flying during off-peak hours or mid-week, as flights during these times can be less expensive.\n",
"\n",
"Factor in Total Costs: While searching for cheap flights, don't forget to consider other expenses like baggage fees, seat selection, and additional amenities. Some budget airlines might have hidden costs that could make the overall trip more expensive.\n",
"\n",
"Book Early: Flight prices tend to rise as the departure date approaches. Once you find a good deal that suits your preferences, don't wait too long to book your flight.\"\"\"\n",
"\n",
"response3 = \"\"\"Search online for cheap flights.\n",
"Check different dates.\n",
"Look at nearby airports.\n",
"Consider layovers.\n",
"Try budget airlines.\n",
"Book early if you find a good deal.\"\"\"\n",
"\n",
"ideal_response = \"\"\"1. Search 'Toronto to San Francisco flights' on the internet\n",
"2. Click on the first link that is a flight aggregator such as SkyScanner or Google Flights\n",
"3. Select the dates that you want to travel\n",
"3. Click on the 'one way' option and click search\n",
"4. Search through all of the given options and select the cheapest flight\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a5bf2f5c",
"metadata": {},
"outputs": [],
"source": [
"reference_evaluation = REFERENCE_PROMPT.format(task=task, scoring=percentage, answer=reference_answer, response=ideal_response, examples=FEW_SHOT_REFERENCE_EXAMPLES)\n",
"rubric_evaluation = RUBRIC_PROMPT.format(task=task, scoring=percentage, answer=rubric_answer, response=ideal_response, examples=FEW_SHOT_REFERENCE_EXAMPLES)\n",
"question_evaluation = QUESTION_PROMPT.format(task=task, scoring=percentage, answer=question_answer, response=ideal_response, examples=FEW_SHOT_REFERENCE_EXAMPLES)"
]
},
{
"cell_type": "code",
"execution_count": 82,
"id": "1cd01a4c",
"metadata": {},
"outputs": [],
"source": [
"reference_response = llm_eval(reference_evaluation)\n",
"rubric_response = llm_eval(rubric_evaluation)\n",
"question_response = llm_eval(question_evaluation)"
]
},
{
"cell_type": "code",
"execution_count": 72,
"id": "72e6ac91",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"REFERENCE: 66.5 6.0 0.0\n",
"RUBRIC: 65.0 4.0 0.0\n",
"QUESTION: 87.0 6.5 0.0\n"
]
}
],
"source": [
"# response3 - the worst one\n",
"print('REFERENCE:', reference_response_percentage, reference_response_scale, reference_response_binary)\n",
"print('RUBRIC:', rubric_response_percentage, rubric_response_scale, rubric_response_binary)\n",
"print('QUESTION:', question_response_percentage, question_response_scale, question_response_binary)"
]
},
{
"cell_type": "code",
"execution_count": 83,
"id": "a66014ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"REFERENCE: 100.0 10.0 1.0\n",
"RUBRIC: 88.0 7.5 1.0\n",
"QUESTION: 87.0 7.5 1.0\n"
]
}
],
"source": [
"# reference_response - the best one\n",
"print('REFERENCE:', reference_response_ideal_percentage, reference_response_ideal_scale, reference_response_ideal_binary)\n",
"print('RUBRIC:', rubric_response_ideal_percentage, rubric_response_ideal_scale, rubric_response_ideal_binary)\n",
"print('QUESTION:', question_response_ideal_percentage, question_response_ideal_scale, question_response_ideal_binary)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,30 +0,0 @@
{
"1": {
"role": "assistant",
"content": "{\n \"thoughts\": {\n \"text\": \"I need to find Tesla's revenue in 2022.\",\n \"reasoning\": \"I will perform a web search to find the information.\",\n \"plan\": \"- Perform a web search\\n- Extract the revenue\\n- Write the revenue to a file\",\n \"criticism\": \"I should be cautious about the reliability of the source I find.\",\n \"speak\": \"I will search the web for Tesla's revenue in 2022.\"\n },\n \"command\": {\n \"name\": \"web_search\",\n \"args\": {\n \"query\": \"Tesla revenue in 2022\"\n }\n }\n}"
},
"2": {
"role": "assistant",
"content": "{\n \"thoughts\": {\n \"text\": \"I found a reliable source for Tesla's revenue in 2022.\",\n \"reasoning\": \"I will use the Statista source to extract the revenue information.\",\n \"plan\": \"- Browse the Statista website\\n- Extract the revenue\\n- Write the revenue to a file\",\n \"criticism\": \"I should verify the information on the Statista website.\",\n \"speak\": \"I will browse the Statista website to find Tesla's revenue in 2022.\"\n },\n \"command\": {\n \"name\": \"browse_website\",\n \"args\": {\n \"url\": \"https://www.statista.com/statistics/272120/revenue-of-tesla/\",\n \"question\": \"What was Tesla's revenue in 2022?\"\n }\n }\n}"
},
"3": {
"role": "assistant",
"content": "Tesla's revenue in 2022 was nearly $81.5 billion, which represents a 51 percent increase from the previous year. The majority of the revenue, over $71 billion, was generated from Tesla's automotive segment. The Model Y emerged as Tesla's best-selling vehicle, and the company has been investing in the production of the Model 3 and Model Y."
},
"4": {
"role": "assistant",
"content": "The text provides various statistics and financial information about Tesla, including its revenue from 2008 to 2022. However, the specific revenue for 2022 is not mentioned in the text."
},
"5": {
"role": "assistant",
"content": "a description of different types of cookies used on a website and the options for accepting or rejecting them. There is no information provided about Tesla's revenue in 2022."
},
"6": {
"role": "assistant",
"content": "Tesla's revenue in 2022 increased by 51% to nearly $81.5 billion, with the majority coming from the automotive segment. The Model Y was the best-selling vehicle, and the company has been focusing on producing the Model 3 and Model Y."
},
"7": {
"role": "assistant",
"content": "{\n \"thoughts\": {\n \"text\": \"I found Tesla's revenue in 2022.\",\n \"reasoning\": \"I will write the revenue to a file.\",\n \"plan\": \"- Write the revenue to a file\",\n \"criticism\": \"I should ensure the revenue is formatted correctly.\",\n \"speak\": \"I will write Tesla's revenue in 2022 to a file.\"\n },\n \"command\": {\n \"name\": \"write_to_file\",\n \"args\": {\n \"filename\": \"tesla_revenue_2022.txt\",\n \"text\": \"Tesla's revenue in 2022 was $81,500,000,000.\"\n }\n }\n}"
}
}

View File

@@ -1,74 +0,0 @@
{
"1": {
"role": "assistant",
"content": {
"thoughts": {
"text": "I need to find Tesla's revenue in 2022.",
"reasoning": "I will perform a web search to find the information.",
"plan": "- Perform a web search\n- Extract the revenue\n- Write the revenue to a file",
"criticism": "I should be cautious about the reliability of the source I find.",
"speak": "I will search the web for Tesla's revenue in 2022."
},
"command": {
"name": "web_search",
"args": {
"query": "Tesla revenue in 2022"
}
}
}
},
"2": {
"role": "assistant",
"content": {
"thoughts": {
"text": "I found a reliable source for Tesla's revenue in 2022.",
"reasoning": "I will use the Statista source to extract the revenue information.",
"plan": "- Browse the Statista website\n- Extract the revenue\n- Write the revenue to a file",
"criticism": "I should verify the information on the Statista website.",
"speak": "I will browse the Statista website to find Tesla's revenue in 2022."
},
"command": {
"name": "browse_website",
"args": {
"url": "https://www.statista.com/statistics/272120/revenue-of-tesla/",
"question": "What was Tesla's revenue in 2022?"
}
}
}
},
"3": {
"role": "assistant",
"content": "Tesla's revenue in 2022 was nearly $81.5 billion, which represents a 51 percent increase from the previous year. The majority of the revenue, over $71 billion, was generated from Tesla's automotive segment. The Model Y emerged as Tesla's best-selling vehicle, and the company has been investing in the production of the Model 3 and Model Y."
},
"4": {
"role": "assistant",
"content": "The text provides various statistics and financial information about Tesla, including its revenue from 2008 to 2022. However, the specific revenue for 2022 is not mentioned in the text."
},
"5": {
"role": "assistant",
"content": "a description of different types of cookies used on a website and the options for accepting or rejecting them. There is no information provided about Tesla's revenue in 2022."
},
"6": {
"role": "assistant",
"content": "Tesla's revenue in 2022 increased by 51% to nearly $81.5 billion, with the majority coming from the automotive segment. The Model Y was the best-selling vehicle, and the company has been focusing on producing the Model 3 and Model Y."
},
"7": {
"role": "assistant",
"content": {
"thoughts": {
"text": "I found Tesla's revenue in 2022.",
"reasoning": "I will write the revenue to a file.",
"plan": "- Write the revenue to a file",
"criticism": "I should ensure the revenue is formatted correctly.",
"speak": "I will write Tesla's revenue in 2022 to a file."
},
"command": {
"name": "write_to_file",
"args": {
"filename": "tesla_revenue_2022.txt",
"text": "Tesla's revenue in 2022 was $81,500,000,000."
}
}
}
}
}

benchmark/poetry.lock (generated; 400 changed lines)
View File

@@ -5,17 +5,22 @@ name = "agent-protocol-client"
version = "1.1.0"
description = "Agent Communication Protocol Client"
optional = false
python-versions = ">=3.7,<4.0"
files = [
{file = "agent_protocol_client-1.1.0-py3-none-any.whl", hash = "sha256:0e8c6c97244189666ed18e320410abddce8c9dfb75437da1e590bbef3b6268be"},
{file = "agent_protocol_client-1.1.0.tar.gz", hash = "sha256:aa7e1042de1249477fdc29c2df08a44f2233dade9c02c1279e37c98e9d3a0d72"},
]
python-versions = "^3.7"
files = []
develop = false
[package.dependencies]
aiohttp = ">=3.8.4,<4.0.0"
pydantic = ">=1.10.5,<2.0.0"
python-dateutil = ">=2.8.2,<3.0.0"
urllib3 = ">=1.25.3,<2.0.0"
aiohttp = "^3.8.4"
pydantic = ">=1.10.5, <3.0.0"
python-dateutil = "^2.8.2"
urllib3 = "^1.25.3"
[package.source]
type = "git"
url = "https://github.com/Significant-Gravitas/agent-protocol.git"
reference = "HEAD"
resolved_reference = "beb098517b0b9e255024d1b57df236f0329f4b1c"
subdirectory = "packages/client/python"
[[package]]
name = "aiohttp"
@@ -127,6 +132,17 @@ files = [
[package.dependencies]
frozenlist = ">=1.1.0"
[[package]]
name = "annotated-types"
version = "0.7.0"
description = "Reusable constraint types to use with typing.Annotated"
optional = false
python-versions = ">=3.8"
files = [
{file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
{file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
]
[[package]]
name = "anyio"
version = "4.2.0"
@@ -1440,85 +1456,101 @@ files = [
[[package]]
name = "multidict"
version = "6.0.4"
version = "6.0.5"
description = "multidict implementation"
optional = false
python-versions = ">=3.7"
files = [
{file = "multidict-6.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b1a97283e0c85772d613878028fec909f003993e1007eafa715b24b377cb9b8"},
{file = "multidict-6.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eeb6dcc05e911516ae3d1f207d4b0520d07f54484c49dfc294d6e7d63b734171"},
{file = "multidict-6.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d6d635d5209b82a3492508cf5b365f3446afb65ae7ebd755e70e18f287b0adf7"},
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c048099e4c9e9d615545e2001d3d8a4380bd403e1a0578734e0d31703d1b0c0b"},
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ea20853c6dbbb53ed34cb4d080382169b6f4554d394015f1bef35e881bf83547"},
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:16d232d4e5396c2efbbf4f6d4df89bfa905eb0d4dc5b3549d872ab898451f569"},
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36c63aaa167f6c6b04ef2c85704e93af16c11d20de1d133e39de6a0e84582a93"},
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:64bdf1086b6043bf519869678f5f2757f473dee970d7abf6da91ec00acb9cb98"},
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:43644e38f42e3af682690876cff722d301ac585c5b9e1eacc013b7a3f7b696a0"},
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7582a1d1030e15422262de9f58711774e02fa80df0d1578995c76214f6954988"},
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ddff9c4e225a63a5afab9dd15590432c22e8057e1a9a13d28ed128ecf047bbdc"},
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:ee2a1ece51b9b9e7752e742cfb661d2a29e7bcdba2d27e66e28a99f1890e4fa0"},
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a2e4369eb3d47d2034032a26c7a80fcb21a2cb22e1173d761a162f11e562caa5"},
{file = "multidict-6.0.4-cp310-cp310-win32.whl", hash = "sha256:574b7eae1ab267e5f8285f0fe881f17efe4b98c39a40858247720935b893bba8"},
{file = "multidict-6.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dcbb0906e38440fa3e325df2359ac6cb043df8e58c965bb45f4e406ecb162cc"},
{file = "multidict-6.0.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0dfad7a5a1e39c53ed00d2dd0c2e36aed4650936dc18fd9a1826a5ae1cad6f03"},
{file = "multidict-6.0.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:64da238a09d6039e3bd39bb3aee9c21a5e34f28bfa5aa22518581f910ff94af3"},
{file = "multidict-6.0.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ff959bee35038c4624250473988b24f846cbeb2c6639de3602c073f10410ceba"},
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01a3a55bd90018c9c080fbb0b9f4891db37d148a0a18722b42f94694f8b6d4c9"},
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c5cb09abb18c1ea940fb99360ea0396f34d46566f157122c92dfa069d3e0e982"},
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:666daae833559deb2d609afa4490b85830ab0dfca811a98b70a205621a6109fe"},
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11bdf3f5e1518b24530b8241529d2050014c884cf18b6fc69c0c2b30ca248710"},
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7d18748f2d30f94f498e852c67d61261c643b349b9d2a581131725595c45ec6c"},
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:458f37be2d9e4c95e2d8866a851663cbc76e865b78395090786f6cd9b3bbf4f4"},
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:b1a2eeedcead3a41694130495593a559a668f382eee0727352b9a41e1c45759a"},
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7d6ae9d593ef8641544d6263c7fa6408cc90370c8cb2bbb65f8d43e5b0351d9c"},
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:5979b5632c3e3534e42ca6ff856bb24b2e3071b37861c2c727ce220d80eee9ed"},
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:dcfe792765fab89c365123c81046ad4103fcabbc4f56d1c1997e6715e8015461"},
{file = "multidict-6.0.4-cp311-cp311-win32.whl", hash = "sha256:3601a3cece3819534b11d4efc1eb76047488fddd0c85a3948099d5da4d504636"},
{file = "multidict-6.0.4-cp311-cp311-win_amd64.whl", hash = "sha256:81a4f0b34bd92df3da93315c6a59034df95866014ac08535fc819f043bfd51f0"},
{file = "multidict-6.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:67040058f37a2a51ed8ea8f6b0e6ee5bd78ca67f169ce6122f3e2ec80dfe9b78"},
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:853888594621e6604c978ce2a0444a1e6e70c8d253ab65ba11657659dcc9100f"},
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:39ff62e7d0f26c248b15e364517a72932a611a9b75f35b45be078d81bdb86603"},
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:af048912e045a2dc732847d33821a9d84ba553f5c5f028adbd364dd4765092ac"},
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1e8b901e607795ec06c9e42530788c45ac21ef3aaa11dbd0c69de543bfb79a9"},
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:62501642008a8b9871ddfccbf83e4222cf8ac0d5aeedf73da36153ef2ec222d2"},
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:99b76c052e9f1bc0721f7541e5e8c05db3941eb9ebe7b8553c625ef88d6eefde"},
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:509eac6cf09c794aa27bcacfd4d62c885cce62bef7b2c3e8b2e49d365b5003fe"},
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:21a12c4eb6ddc9952c415f24eef97e3e55ba3af61f67c7bc388dcdec1404a067"},
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:5cad9430ab3e2e4fa4a2ef4450f548768400a2ac635841bc2a56a2052cdbeb87"},
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ab55edc2e84460694295f401215f4a58597f8f7c9466faec545093045476327d"},
{file = "multidict-6.0.4-cp37-cp37m-win32.whl", hash = "sha256:5a4dcf02b908c3b8b17a45fb0f15b695bf117a67b76b7ad18b73cf8e92608775"},
{file = "multidict-6.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:6ed5f161328b7df384d71b07317f4d8656434e34591f20552c7bcef27b0ab88e"},
{file = "multidict-6.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5fc1b16f586f049820c5c5b17bb4ee7583092fa0d1c4e28b5239181ff9532e0c"},
{file = "multidict-6.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1502e24330eb681bdaa3eb70d6358e818e8e8f908a22a1851dfd4e15bc2f8161"},
{file = "multidict-6.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b692f419760c0e65d060959df05f2a531945af31fda0c8a3b3195d4efd06de11"},
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45e1ecb0379bfaab5eef059f50115b54571acfbe422a14f668fc8c27ba410e7e"},
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddd3915998d93fbcd2566ddf9cf62cdb35c9e093075f862935573d265cf8f65d"},
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:59d43b61c59d82f2effb39a93c48b845efe23a3852d201ed2d24ba830d0b4cf2"},
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc8e1d0c705233c5dd0c5e6460fbad7827d5d36f310a0fadfd45cc3029762258"},
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d6aa0418fcc838522256761b3415822626f866758ee0bc6632c9486b179d0b52"},
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6748717bb10339c4760c1e63da040f5f29f5ed6e59d76daee30305894069a660"},
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:4d1a3d7ef5e96b1c9e92f973e43aa5e5b96c659c9bc3124acbbd81b0b9c8a951"},
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4372381634485bec7e46718edc71528024fcdc6f835baefe517b34a33c731d60"},
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:fc35cb4676846ef752816d5be2193a1e8367b4c1397b74a565a9d0389c433a1d"},
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:4b9d9e4e2b37daddb5c23ea33a3417901fa7c7b3dee2d855f63ee67a0b21e5b1"},
{file = "multidict-6.0.4-cp38-cp38-win32.whl", hash = "sha256:e41b7e2b59679edfa309e8db64fdf22399eec4b0b24694e1b2104fb789207779"},
{file = "multidict-6.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:d6c254ba6e45d8e72739281ebc46ea5eb5f101234f3ce171f0e9f5cc86991480"},
{file = "multidict-6.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:16ab77bbeb596e14212e7bab8429f24c1579234a3a462105cda4a66904998664"},
{file = "multidict-6.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:bc779e9e6f7fda81b3f9aa58e3a6091d49ad528b11ed19f6621408806204ad35"},
{file = "multidict-6.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ceef517eca3e03c1cceb22030a3e39cb399ac86bff4e426d4fc6ae49052cc60"},
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:281af09f488903fde97923c7744bb001a9b23b039a909460d0f14edc7bf59706"},
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:52f2dffc8acaba9a2f27174c41c9e57f60b907bb9f096b36b1a1f3be71c6284d"},
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b41156839806aecb3641f3208c0dafd3ac7775b9c4c422d82ee2a45c34ba81ca"},
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5e3fc56f88cc98ef8139255cf8cd63eb2c586531e43310ff859d6bb3a6b51f1"},
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8316a77808c501004802f9beebde51c9f857054a0c871bd6da8280e718444449"},
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f70b98cd94886b49d91170ef23ec5c0e8ebb6f242d734ed7ed677b24d50c82cf"},
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:bf6774e60d67a9efe02b3616fee22441d86fab4c6d335f9d2051d19d90a40063"},
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:e69924bfcdda39b722ef4d9aa762b2dd38e4632b3641b1d9a57ca9cd18f2f83a"},
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:6b181d8c23da913d4ff585afd1155a0e1194c0b50c54fcfe286f70cdaf2b7176"},
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:52509b5be062d9eafc8170e53026fbc54cf3b32759a23d07fd935fb04fc22d95"},
{file = "multidict-6.0.4-cp39-cp39-win32.whl", hash = "sha256:27c523fbfbdfd19c6867af7346332b62b586eed663887392cff78d614f9ec313"},
{file = "multidict-6.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:33029f5734336aa0d4c0384525da0387ef89148dc7191aae00ca5fb23d7aafc2"},
{file = "multidict-6.0.4.tar.gz", hash = "sha256:3666906492efb76453c0e7b97f2cf459b0682e7402c0489a95484965dbc1da49"},
{file = "multidict-6.0.5-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:228b644ae063c10e7f324ab1ab6b548bdf6f8b47f3ec234fef1093bc2735e5f9"},
{file = "multidict-6.0.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:896ebdcf62683551312c30e20614305f53125750803b614e9e6ce74a96232604"},
{file = "multidict-6.0.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:411bf8515f3be9813d06004cac41ccf7d1cd46dfe233705933dd163b60e37600"},
{file = "multidict-6.0.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1d147090048129ce3c453f0292e7697d333db95e52616b3793922945804a433c"},
{file = "multidict-6.0.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:215ed703caf15f578dca76ee6f6b21b7603791ae090fbf1ef9d865571039ade5"},
{file = "multidict-6.0.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c6390cf87ff6234643428991b7359b5f59cc15155695deb4eda5c777d2b880f"},
{file = "multidict-6.0.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fd81c4ebdb4f214161be351eb5bcf385426bf023041da2fd9e60681f3cebae"},
{file = "multidict-6.0.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3cc2ad10255f903656017363cd59436f2111443a76f996584d1077e43ee51182"},
{file = "multidict-6.0.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:6939c95381e003f54cd4c5516740faba40cf5ad3eeff460c3ad1d3e0ea2549bf"},
{file = "multidict-6.0.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:220dd781e3f7af2c2c1053da9fa96d9cf3072ca58f057f4c5adaaa1cab8fc442"},
{file = "multidict-6.0.5-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:766c8f7511df26d9f11cd3a8be623e59cca73d44643abab3f8c8c07620524e4a"},
{file = "multidict-6.0.5-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:fe5d7785250541f7f5019ab9cba2c71169dc7d74d0f45253f8313f436458a4ef"},
{file = "multidict-6.0.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c1c1496e73051918fcd4f58ff2e0f2f3066d1c76a0c6aeffd9b45d53243702cc"},
{file = "multidict-6.0.5-cp310-cp310-win32.whl", hash = "sha256:7afcdd1fc07befad18ec4523a782cde4e93e0a2bf71239894b8d61ee578c1319"},
{file = "multidict-6.0.5-cp310-cp310-win_amd64.whl", hash = "sha256:99f60d34c048c5c2fabc766108c103612344c46e35d4ed9ae0673d33c8fb26e8"},
{file = "multidict-6.0.5-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f285e862d2f153a70586579c15c44656f888806ed0e5b56b64489afe4a2dbfba"},
{file = "multidict-6.0.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:53689bb4e102200a4fafa9de9c7c3c212ab40a7ab2c8e474491914d2305f187e"},
{file = "multidict-6.0.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:612d1156111ae11d14afaf3a0669ebf6c170dbb735e510a7438ffe2369a847fd"},
{file = "multidict-6.0.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7be7047bd08accdb7487737631d25735c9a04327911de89ff1b26b81745bd4e3"},
{file = "multidict-6.0.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de170c7b4fe6859beb8926e84f7d7d6c693dfe8e27372ce3b76f01c46e489fcf"},
{file = "multidict-6.0.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:04bde7a7b3de05732a4eb39c94574db1ec99abb56162d6c520ad26f83267de29"},
{file = "multidict-6.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85f67aed7bb647f93e7520633d8f51d3cbc6ab96957c71272b286b2f30dc70ed"},
{file = "multidict-6.0.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:425bf820055005bfc8aa9a0b99ccb52cc2f4070153e34b701acc98d201693733"},
{file = "multidict-6.0.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d3eb1ceec286eba8220c26f3b0096cf189aea7057b6e7b7a2e60ed36b373b77f"},
{file = "multidict-6.0.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:7901c05ead4b3fb75113fb1dd33eb1253c6d3ee37ce93305acd9d38e0b5f21a4"},
{file = "multidict-6.0.5-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:e0e79d91e71b9867c73323a3444724d496c037e578a0e1755ae159ba14f4f3d1"},
{file = "multidict-6.0.5-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:29bfeb0dff5cb5fdab2023a7a9947b3b4af63e9c47cae2a10ad58394b517fddc"},
{file = "multidict-6.0.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e030047e85cbcedbfc073f71836d62dd5dadfbe7531cae27789ff66bc551bd5e"},
{file = "multidict-6.0.5-cp311-cp311-win32.whl", hash = "sha256:2f4848aa3baa109e6ab81fe2006c77ed4d3cd1e0ac2c1fbddb7b1277c168788c"},
{file = "multidict-6.0.5-cp311-cp311-win_amd64.whl", hash = "sha256:2faa5ae9376faba05f630d7e5e6be05be22913782b927b19d12b8145968a85ea"},
{file = "multidict-6.0.5-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:51d035609b86722963404f711db441cf7134f1889107fb171a970c9701f92e1e"},
{file = "multidict-6.0.5-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:cbebcd5bcaf1eaf302617c114aa67569dd3f090dd0ce8ba9e35e9985b41ac35b"},
{file = "multidict-6.0.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2ffc42c922dbfddb4a4c3b438eb056828719f07608af27d163191cb3e3aa6cc5"},
{file = "multidict-6.0.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ceb3b7e6a0135e092de86110c5a74e46bda4bd4fbfeeb3a3bcec79c0f861e450"},
{file = "multidict-6.0.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:79660376075cfd4b2c80f295528aa6beb2058fd289f4c9252f986751a4cd0496"},
{file = "multidict-6.0.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e4428b29611e989719874670fd152b6625500ad6c686d464e99f5aaeeaca175a"},
{file = "multidict-6.0.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d84a5c3a5f7ce6db1f999fb9438f686bc2e09d38143f2d93d8406ed2dd6b9226"},
{file = "multidict-6.0.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:76c0de87358b192de7ea9649beb392f107dcad9ad27276324c24c91774ca5271"},
{file = "multidict-6.0.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:79a6d2ba910adb2cbafc95dad936f8b9386e77c84c35bc0add315b856d7c3abb"},
{file = "multidict-6.0.5-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:92d16a3e275e38293623ebf639c471d3e03bb20b8ebb845237e0d3664914caef"},
{file = "multidict-6.0.5-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:fb616be3538599e797a2017cccca78e354c767165e8858ab5116813146041a24"},
{file = "multidict-6.0.5-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:14c2976aa9038c2629efa2c148022ed5eb4cb939e15ec7aace7ca932f48f9ba6"},
{file = "multidict-6.0.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:435a0984199d81ca178b9ae2c26ec3d49692d20ee29bc4c11a2a8d4514c67eda"},
{file = "multidict-6.0.5-cp312-cp312-win32.whl", hash = "sha256:9fe7b0653ba3d9d65cbe7698cca585bf0f8c83dbbcc710db9c90f478e175f2d5"},
{file = "multidict-6.0.5-cp312-cp312-win_amd64.whl", hash = "sha256:01265f5e40f5a17f8241d52656ed27192be03bfa8764d88e8220141d1e4b3556"},
{file = "multidict-6.0.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:19fe01cea168585ba0f678cad6f58133db2aa14eccaf22f88e4a6dccadfad8b3"},
{file = "multidict-6.0.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6bf7a982604375a8d49b6cc1b781c1747f243d91b81035a9b43a2126c04766f5"},
{file = "multidict-6.0.5-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:107c0cdefe028703fb5dafe640a409cb146d44a6ae201e55b35a4af8e95457dd"},
{file = "multidict-6.0.5-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:403c0911cd5d5791605808b942c88a8155c2592e05332d2bf78f18697a5fa15e"},
{file = "multidict-6.0.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aeaf541ddbad8311a87dd695ed9642401131ea39ad7bc8cf3ef3967fd093b626"},
{file = "multidict-6.0.5-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e4972624066095e52b569e02b5ca97dbd7a7ddd4294bf4e7247d52635630dd83"},
{file = "multidict-6.0.5-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d946b0a9eb8aaa590df1fe082cee553ceab173e6cb5b03239716338629c50c7a"},
{file = "multidict-6.0.5-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:b55358304d7a73d7bdf5de62494aaf70bd33015831ffd98bc498b433dfe5b10c"},
{file = "multidict-6.0.5-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:a3145cb08d8625b2d3fee1b2d596a8766352979c9bffe5d7833e0503d0f0b5e5"},
{file = "multidict-6.0.5-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:d65f25da8e248202bd47445cec78e0025c0fe7582b23ec69c3b27a640dd7a8e3"},
{file = "multidict-6.0.5-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:c9bf56195c6bbd293340ea82eafd0071cb3d450c703d2c93afb89f93b8386ccc"},
{file = "multidict-6.0.5-cp37-cp37m-win32.whl", hash = "sha256:69db76c09796b313331bb7048229e3bee7928eb62bab5e071e9f7fcc4879caee"},
{file = "multidict-6.0.5-cp37-cp37m-win_amd64.whl", hash = "sha256:fce28b3c8a81b6b36dfac9feb1de115bab619b3c13905b419ec71d03a3fc1423"},
{file = "multidict-6.0.5-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:76f067f5121dcecf0d63a67f29080b26c43c71a98b10c701b0677e4a065fbd54"},
{file = "multidict-6.0.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b82cc8ace10ab5bd93235dfaab2021c70637005e1ac787031f4d1da63d493c1d"},
{file = "multidict-6.0.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:5cb241881eefd96b46f89b1a056187ea8e9ba14ab88ba632e68d7a2ecb7aadf7"},
{file = "multidict-6.0.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8e94e6912639a02ce173341ff62cc1201232ab86b8a8fcc05572741a5dc7d93"},
{file = "multidict-6.0.5-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09a892e4a9fb47331da06948690ae38eaa2426de97b4ccbfafbdcbe5c8f37ff8"},
{file = "multidict-6.0.5-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:55205d03e8a598cfc688c71ca8ea5f66447164efff8869517f175ea632c7cb7b"},
{file = "multidict-6.0.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37b15024f864916b4951adb95d3a80c9431299080341ab9544ed148091b53f50"},
{file = "multidict-6.0.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f2a1dee728b52b33eebff5072817176c172050d44d67befd681609b4746e1c2e"},
{file = "multidict-6.0.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:edd08e6f2f1a390bf137080507e44ccc086353c8e98c657e666c017718561b89"},
{file = "multidict-6.0.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:60d698e8179a42ec85172d12f50b1668254628425a6bd611aba022257cac1386"},
{file = "multidict-6.0.5-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:3d25f19500588cbc47dc19081d78131c32637c25804df8414463ec908631e453"},
{file = "multidict-6.0.5-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:4cc0ef8b962ac7a5e62b9e826bd0cd5040e7d401bc45a6835910ed699037a461"},
{file = "multidict-6.0.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca2e9d0cc5a889850e9bbd68e98314ada174ff6ccd1129500103df7a94a7a44"},
{file = "multidict-6.0.5-cp38-cp38-win32.whl", hash = "sha256:4a6a4f196f08c58c59e0b8ef8ec441d12aee4125a7d4f4fef000ccb22f8d7241"},
{file = "multidict-6.0.5-cp38-cp38-win_amd64.whl", hash = "sha256:0275e35209c27a3f7951e1ce7aaf93ce0d163b28948444bec61dd7badc6d3f8c"},
{file = "multidict-6.0.5-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e7be68734bd8c9a513f2b0cfd508802d6609da068f40dc57d4e3494cefc92929"},
{file = "multidict-6.0.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1d9ea7a7e779d7a3561aade7d596649fbecfa5c08a7674b11b423783217933f9"},
{file = "multidict-6.0.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ea1456df2a27c73ce51120fa2f519f1bea2f4a03a917f4a43c8707cf4cbbae1a"},
{file = "multidict-6.0.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf590b134eb70629e350691ecca88eac3e3b8b3c86992042fb82e3cb1830d5e1"},
{file = "multidict-6.0.5-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5c0631926c4f58e9a5ccce555ad7747d9a9f8b10619621f22f9635f069f6233e"},
{file = "multidict-6.0.5-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dce1c6912ab9ff5f179eaf6efe7365c1f425ed690b03341911bf4939ef2f3046"},
{file = "multidict-6.0.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0868d64af83169e4d4152ec612637a543f7a336e4a307b119e98042e852ad9c"},
{file = "multidict-6.0.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:141b43360bfd3bdd75f15ed811850763555a251e38b2405967f8e25fb43f7d40"},
{file = "multidict-6.0.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7df704ca8cf4a073334e0427ae2345323613e4df18cc224f647f251e5e75a527"},
{file = "multidict-6.0.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6214c5a5571802c33f80e6c84713b2c79e024995b9c5897f794b43e714daeec9"},
{file = "multidict-6.0.5-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:cd6c8fca38178e12c00418de737aef1261576bd1b6e8c6134d3e729a4e858b38"},
{file = "multidict-6.0.5-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:e02021f87a5b6932fa6ce916ca004c4d441509d33bbdbeca70d05dff5e9d2479"},
{file = "multidict-6.0.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ebd8d160f91a764652d3e51ce0d2956b38efe37c9231cd82cfc0bed2e40b581c"},
{file = "multidict-6.0.5-cp39-cp39-win32.whl", hash = "sha256:04da1bb8c8dbadf2a18a452639771951c662c5ad03aefe4884775454be322c9b"},
{file = "multidict-6.0.5-cp39-cp39-win_amd64.whl", hash = "sha256:d6f6d4f185481c9669b9447bf9d9cf3b95a0e9df9d169bbc17e363b7d5487755"},
{file = "multidict-6.0.5-py3-none-any.whl", hash = "sha256:0d63c74e3d7ab26de115c49bffc92cc77ed23395303d496eae515d4204a625e7"},
{file = "multidict-6.0.5.tar.gz", hash = "sha256:f7e301075edaf50500f0b341543c41194d8df3ae5caf4702f2095f3ca73dd8da"},
]
[[package]]
@@ -2048,55 +2080,132 @@ files = [
[[package]]
name = "pydantic"
version = "1.10.13"
description = "Data validation and settings management using python type hints"
version = "2.7.4"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "pydantic-1.10.13-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:efff03cc7a4f29d9009d1c96ceb1e7a70a65cfe86e89d34e4a5f2ab1e5693737"},
{file = "pydantic-1.10.13-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:3ecea2b9d80e5333303eeb77e180b90e95eea8f765d08c3d278cd56b00345d01"},
{file = "pydantic-1.10.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1740068fd8e2ef6eb27a20e5651df000978edce6da6803c2bef0bc74540f9548"},
{file = "pydantic-1.10.13-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:84bafe2e60b5e78bc64a2941b4c071a4b7404c5c907f5f5a99b0139781e69ed8"},
{file = "pydantic-1.10.13-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bc0898c12f8e9c97f6cd44c0ed70d55749eaf783716896960b4ecce2edfd2d69"},
{file = "pydantic-1.10.13-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:654db58ae399fe6434e55325a2c3e959836bd17a6f6a0b6ca8107ea0571d2e17"},
{file = "pydantic-1.10.13-cp310-cp310-win_amd64.whl", hash = "sha256:75ac15385a3534d887a99c713aa3da88a30fbd6204a5cd0dc4dab3d770b9bd2f"},
{file = "pydantic-1.10.13-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c553f6a156deb868ba38a23cf0df886c63492e9257f60a79c0fd8e7173537653"},
{file = "pydantic-1.10.13-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5e08865bc6464df8c7d61439ef4439829e3ab62ab1669cddea8dd00cd74b9ffe"},
{file = "pydantic-1.10.13-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e31647d85a2013d926ce60b84f9dd5300d44535a9941fe825dc349ae1f760df9"},
{file = "pydantic-1.10.13-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:210ce042e8f6f7c01168b2d84d4c9eb2b009fe7bf572c2266e235edf14bacd80"},
{file = "pydantic-1.10.13-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:8ae5dd6b721459bfa30805f4c25880e0dd78fc5b5879f9f7a692196ddcb5a580"},
{file = "pydantic-1.10.13-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f8e81fc5fb17dae698f52bdd1c4f18b6ca674d7068242b2aff075f588301bbb0"},
{file = "pydantic-1.10.13-cp311-cp311-win_amd64.whl", hash = "sha256:61d9dce220447fb74f45e73d7ff3b530e25db30192ad8d425166d43c5deb6df0"},
{file = "pydantic-1.10.13-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4b03e42ec20286f052490423682016fd80fda830d8e4119f8ab13ec7464c0132"},
{file = "pydantic-1.10.13-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f59ef915cac80275245824e9d771ee939133be38215555e9dc90c6cb148aaeb5"},
{file = "pydantic-1.10.13-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5a1f9f747851338933942db7af7b6ee8268568ef2ed86c4185c6ef4402e80ba8"},
{file = "pydantic-1.10.13-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:97cce3ae7341f7620a0ba5ef6cf043975cd9d2b81f3aa5f4ea37928269bc1b87"},
{file = "pydantic-1.10.13-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:854223752ba81e3abf663d685f105c64150873cc6f5d0c01d3e3220bcff7d36f"},
{file = "pydantic-1.10.13-cp37-cp37m-win_amd64.whl", hash = "sha256:b97c1fac8c49be29486df85968682b0afa77e1b809aff74b83081cc115e52f33"},
{file = "pydantic-1.10.13-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c958d053453a1c4b1c2062b05cd42d9d5c8eb67537b8d5a7e3c3032943ecd261"},
{file = "pydantic-1.10.13-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4c5370a7edaac06daee3af1c8b1192e305bc102abcbf2a92374b5bc793818599"},
{file = "pydantic-1.10.13-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d6f6e7305244bddb4414ba7094ce910560c907bdfa3501e9db1a7fd7eaea127"},
{file = "pydantic-1.10.13-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3a3c792a58e1622667a2837512099eac62490cdfd63bd407993aaf200a4cf1f"},
{file = "pydantic-1.10.13-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:c636925f38b8db208e09d344c7aa4f29a86bb9947495dd6b6d376ad10334fb78"},
{file = "pydantic-1.10.13-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:678bcf5591b63cc917100dc50ab6caebe597ac67e8c9ccb75e698f66038ea953"},
{file = "pydantic-1.10.13-cp38-cp38-win_amd64.whl", hash = "sha256:6cf25c1a65c27923a17b3da28a0bdb99f62ee04230c931d83e888012851f4e7f"},
{file = "pydantic-1.10.13-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8ef467901d7a41fa0ca6db9ae3ec0021e3f657ce2c208e98cd511f3161c762c6"},
{file = "pydantic-1.10.13-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:968ac42970f57b8344ee08837b62f6ee6f53c33f603547a55571c954a4225691"},
{file = "pydantic-1.10.13-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9849f031cf8a2f0a928fe885e5a04b08006d6d41876b8bbd2fc68a18f9f2e3fd"},
{file = "pydantic-1.10.13-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:56e3ff861c3b9c6857579de282ce8baabf443f42ffba355bf070770ed63e11e1"},
{file = "pydantic-1.10.13-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f00790179497767aae6bcdc36355792c79e7bbb20b145ff449700eb076c5f96"},
{file = "pydantic-1.10.13-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:75b297827b59bc229cac1a23a2f7a4ac0031068e5be0ce385be1462e7e17a35d"},
{file = "pydantic-1.10.13-cp39-cp39-win_amd64.whl", hash = "sha256:e70ca129d2053fb8b728ee7d1af8e553a928d7e301a311094b8a0501adc8763d"},
{file = "pydantic-1.10.13-py3-none-any.whl", hash = "sha256:b87326822e71bd5f313e7d3bfdc77ac3247035ac10b0c0618bd99dcf95b1e687"},
{file = "pydantic-1.10.13.tar.gz", hash = "sha256:32c8b48dcd3b2ac4e78b0ba4af3a2c2eb6048cb75202f0ea7b34feb740efc340"},
{file = "pydantic-2.7.4-py3-none-any.whl", hash = "sha256:ee8538d41ccb9c0a9ad3e0e5f07bf15ed8015b481ced539a1759d8cc89ae90d0"},
{file = "pydantic-2.7.4.tar.gz", hash = "sha256:0c84efd9548d545f63ac0060c1e4d39bb9b14db8b3c0652338aecc07b5adec52"},
]
[package.dependencies]
typing-extensions = ">=4.2.0"
annotated-types = ">=0.4.0"
pydantic-core = "2.18.4"
typing-extensions = ">=4.6.1"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
email = ["email-validator (>=2.0.0)"]
[[package]]
name = "pydantic-core"
version = "2.18.4"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic_core-2.18.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:f76d0ad001edd426b92233d45c746fd08f467d56100fd8f30e9ace4b005266e4"},
{file = "pydantic_core-2.18.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:59ff3e89f4eaf14050c8022011862df275b552caef8082e37b542b066ce1ff26"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a55b5b16c839df1070bc113c1f7f94a0af4433fcfa1b41799ce7606e5c79ce0a"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4d0dcc59664fcb8974b356fe0a18a672d6d7cf9f54746c05f43275fc48636851"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8951eee36c57cd128f779e641e21eb40bc5073eb28b2d23f33eb0ef14ffb3f5d"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4701b19f7e3a06ea655513f7938de6f108123bf7c86bbebb1196eb9bd35cf724"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e00a3f196329e08e43d99b79b286d60ce46bed10f2280d25a1718399457e06be"},
{file = "pydantic_core-2.18.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:97736815b9cc893b2b7f663628e63f436018b75f44854c8027040e05230eeddb"},
{file = "pydantic_core-2.18.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:6891a2ae0e8692679c07728819b6e2b822fb30ca7445f67bbf6509b25a96332c"},
{file = "pydantic_core-2.18.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:bc4ff9805858bd54d1a20efff925ccd89c9d2e7cf4986144b30802bf78091c3e"},
{file = "pydantic_core-2.18.4-cp310-none-win32.whl", hash = "sha256:1b4de2e51bbcb61fdebd0ab86ef28062704f62c82bbf4addc4e37fa4b00b7cbc"},
{file = "pydantic_core-2.18.4-cp310-none-win_amd64.whl", hash = "sha256:6a750aec7bf431517a9fd78cb93c97b9b0c496090fee84a47a0d23668976b4b0"},
{file = "pydantic_core-2.18.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:942ba11e7dfb66dc70f9ae66b33452f51ac7bb90676da39a7345e99ffb55402d"},
{file = "pydantic_core-2.18.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2ebef0e0b4454320274f5e83a41844c63438fdc874ea40a8b5b4ecb7693f1c4"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a642295cd0c8df1b86fc3dced1d067874c353a188dc8e0f744626d49e9aa51c4"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f09baa656c904807e832cf9cce799c6460c450c4ad80803517032da0cd062e2"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:98906207f29bc2c459ff64fa007afd10a8c8ac080f7e4d5beff4c97086a3dabd"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:19894b95aacfa98e7cb093cd7881a0c76f55731efad31073db4521e2b6ff5b7d"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0fbbdc827fe5e42e4d196c746b890b3d72876bdbf160b0eafe9f0334525119c8"},
{file = "pydantic_core-2.18.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f85d05aa0918283cf29a30b547b4df2fbb56b45b135f9e35b6807cb28bc47951"},
{file = "pydantic_core-2.18.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e85637bc8fe81ddb73fda9e56bab24560bdddfa98aa64f87aaa4e4b6730c23d2"},
{file = "pydantic_core-2.18.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2f5966897e5461f818e136b8451d0551a2e77259eb0f73a837027b47dc95dab9"},
{file = "pydantic_core-2.18.4-cp311-none-win32.whl", hash = "sha256:44c7486a4228413c317952e9d89598bcdfb06399735e49e0f8df643e1ccd0558"},
{file = "pydantic_core-2.18.4-cp311-none-win_amd64.whl", hash = "sha256:8a7164fe2005d03c64fd3b85649891cd4953a8de53107940bf272500ba8a788b"},
{file = "pydantic_core-2.18.4-cp311-none-win_arm64.whl", hash = "sha256:4e99bc050fe65c450344421017f98298a97cefc18c53bb2f7b3531eb39bc7805"},
{file = "pydantic_core-2.18.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6f5c4d41b2771c730ea1c34e458e781b18cc668d194958e0112455fff4e402b2"},
{file = "pydantic_core-2.18.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2fdf2156aa3d017fddf8aea5adfba9f777db1d6022d392b682d2a8329e087cef"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4748321b5078216070b151d5271ef3e7cc905ab170bbfd27d5c83ee3ec436695"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:847a35c4d58721c5dc3dba599878ebbdfd96784f3fb8bb2c356e123bdcd73f34"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3c40d4eaad41f78e3bbda31b89edc46a3f3dc6e171bf0ecf097ff7a0ffff7cb1"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:21a5e440dbe315ab9825fcd459b8814bb92b27c974cbc23c3e8baa2b76890077"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:01dd777215e2aa86dfd664daed5957704b769e726626393438f9c87690ce78c3"},
{file = "pydantic_core-2.18.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4b06beb3b3f1479d32befd1f3079cc47b34fa2da62457cdf6c963393340b56e9"},
{file = "pydantic_core-2.18.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:564d7922e4b13a16b98772441879fcdcbe82ff50daa622d681dd682175ea918c"},
{file = "pydantic_core-2.18.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:0eb2a4f660fcd8e2b1c90ad566db2b98d7f3f4717c64fe0a83e0adb39766d5b8"},
{file = "pydantic_core-2.18.4-cp312-none-win32.whl", hash = "sha256:8b8bab4c97248095ae0c4455b5a1cd1cdd96e4e4769306ab19dda135ea4cdb07"},
{file = "pydantic_core-2.18.4-cp312-none-win_amd64.whl", hash = "sha256:14601cdb733d741b8958224030e2bfe21a4a881fb3dd6fbb21f071cabd48fa0a"},
{file = "pydantic_core-2.18.4-cp312-none-win_arm64.whl", hash = "sha256:c1322d7dd74713dcc157a2b7898a564ab091ca6c58302d5c7b4c07296e3fd00f"},
{file = "pydantic_core-2.18.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:823be1deb01793da05ecb0484d6c9e20baebb39bd42b5d72636ae9cf8350dbd2"},
{file = "pydantic_core-2.18.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ebef0dd9bf9b812bf75bda96743f2a6c5734a02092ae7f721c048d156d5fabae"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae1d6df168efb88d7d522664693607b80b4080be6750c913eefb77e34c12c71a"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f9899c94762343f2cc2fc64c13e7cae4c3cc65cdfc87dd810a31654c9b7358cc"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:99457f184ad90235cfe8461c4d70ab7dd2680e28821c29eca00252ba90308c78"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:18f469a3d2a2fdafe99296a87e8a4c37748b5080a26b806a707f25a902c040a8"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7cdf28938ac6b8b49ae5e92f2735056a7ba99c9b110a474473fd71185c1af5d"},
{file = "pydantic_core-2.18.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:938cb21650855054dc54dfd9120a851c974f95450f00683399006aa6e8abb057"},
{file = "pydantic_core-2.18.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:44cd83ab6a51da80fb5adbd9560e26018e2ac7826f9626bc06ca3dc074cd198b"},
{file = "pydantic_core-2.18.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:972658f4a72d02b8abfa2581d92d59f59897d2e9f7e708fdabe922f9087773af"},
{file = "pydantic_core-2.18.4-cp38-none-win32.whl", hash = "sha256:1d886dc848e60cb7666f771e406acae54ab279b9f1e4143babc9c2258213daa2"},
{file = "pydantic_core-2.18.4-cp38-none-win_amd64.whl", hash = "sha256:bb4462bd43c2460774914b8525f79b00f8f407c945d50881568f294c1d9b4443"},
{file = "pydantic_core-2.18.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:44a688331d4a4e2129140a8118479443bd6f1905231138971372fcde37e43528"},
{file = "pydantic_core-2.18.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a2fdd81edd64342c85ac7cf2753ccae0b79bf2dfa063785503cb85a7d3593223"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:86110d7e1907ab36691f80b33eb2da87d780f4739ae773e5fc83fb272f88825f"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:46387e38bd641b3ee5ce247563b60c5ca098da9c56c75c157a05eaa0933ed154"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:123c3cec203e3f5ac7b000bd82235f1a3eced8665b63d18be751f115588fea30"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dc1803ac5c32ec324c5261c7209e8f8ce88e83254c4e1aebdc8b0a39f9ddb443"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53db086f9f6ab2b4061958d9c276d1dbe3690e8dd727d6abf2321d6cce37fa94"},
{file = "pydantic_core-2.18.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:abc267fa9837245cc28ea6929f19fa335f3dc330a35d2e45509b6566dc18be23"},
{file = "pydantic_core-2.18.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a0d829524aaefdebccb869eed855e2d04c21d2d7479b6cada7ace5448416597b"},
{file = "pydantic_core-2.18.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:509daade3b8649f80d4e5ff21aa5673e4ebe58590b25fe42fac5f0f52c6f034a"},
{file = "pydantic_core-2.18.4-cp39-none-win32.whl", hash = "sha256:ca26a1e73c48cfc54c4a76ff78df3727b9d9f4ccc8dbee4ae3f73306a591676d"},
{file = "pydantic_core-2.18.4-cp39-none-win_amd64.whl", hash = "sha256:c67598100338d5d985db1b3d21f3619ef392e185e71b8d52bceacc4a7771ea7e"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:574d92eac874f7f4db0ca653514d823a0d22e2354359d0759e3f6a406db5d55d"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1f4d26ceb5eb9eed4af91bebeae4b06c3fb28966ca3a8fb765208cf6b51102ab"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77450e6d20016ec41f43ca4a6c63e9fdde03f0ae3fe90e7c27bdbeaece8b1ed4"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d323a01da91851a4f17bf592faf46149c9169d68430b3146dcba2bb5e5719abc"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:43d447dd2ae072a0065389092a231283f62d960030ecd27565672bd40746c507"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:578e24f761f3b425834f297b9935e1ce2e30f51400964ce4801002435a1b41ef"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:81b5efb2f126454586d0f40c4d834010979cb80785173d1586df845a632e4e6d"},
{file = "pydantic_core-2.18.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ab86ce7c8f9bea87b9d12c7f0af71102acbf5ecbc66c17796cff45dae54ef9a5"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:90afc12421df2b1b4dcc975f814e21bc1754640d502a2fbcc6d41e77af5ec312"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:51991a89639a912c17bef4b45c87bd83593aee0437d8102556af4885811d59f5"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:293afe532740370aba8c060882f7d26cfd00c94cae32fd2e212a3a6e3b7bc15e"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b48ece5bde2e768197a2d0f6e925f9d7e3e826f0ad2271120f8144a9db18d5c8"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:eae237477a873ab46e8dd748e515c72c0c804fb380fbe6c85533c7de51f23a8f"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:834b5230b5dfc0c1ec37b2fda433b271cbbc0e507560b5d1588e2cc1148cf1ce"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e858ac0a25074ba4bce653f9b5d0a85b7456eaddadc0ce82d3878c22489fa4ee"},
{file = "pydantic_core-2.18.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2fd41f6eff4c20778d717af1cc50eca52f5afe7805ee530a4fbd0bae284f16e9"},
{file = "pydantic_core-2.18.4.tar.gz", hash = "sha256:ec3beeada09ff865c344ff3bc2f427f5e6c26401cc6113d77e372c3fdac73864"},
]
[package.dependencies]
typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
[[package]]
name = "pydantic-settings"
version = "2.3.4"
description = "Settings management using Pydantic"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic_settings-2.3.4-py3-none-any.whl", hash = "sha256:11ad8bacb68a045f00e4f862c7a718c8a9ec766aa8fd4c32e39a0594b207b53a"},
{file = "pydantic_settings-2.3.4.tar.gz", hash = "sha256:c5802e3d62b78e82522319bbc9b8f8ffb28ad1c988a99311d04f2a6051fca0a7"},
]
[package.dependencies]
pydantic = ">=2.7.0"
python-dotenv = ">=0.21.0"
[package.extras]
toml = ["tomli (>=2.0.1)"]
yaml = ["pyyaml (>=6.0.1)"]
[[package]]
name = "pyflakes"
@@ -2192,21 +2301,21 @@ testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "no
[[package]]
name = "pytest-asyncio"
version = "0.21.1"
version = "0.23.7"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "pytest-asyncio-0.21.1.tar.gz", hash = "sha256:40a7eae6dded22c7b604986855ea48400ab15b069ae38116e8c01238e9eeb64d"},
{file = "pytest_asyncio-0.21.1-py3-none-any.whl", hash = "sha256:8666c1c8ac02631d7c51ba282e0c69a8a452b211ffedf2599099845da5c5c37b"},
{file = "pytest_asyncio-0.23.7-py3-none-any.whl", hash = "sha256:009b48127fbe44518a547bddd25611551b0e43ccdbf1e67d12479f569832c20b"},
{file = "pytest_asyncio-0.23.7.tar.gz", hash = "sha256:5f5c72948f4c49e7db4f29f2521d4031f1c27f86e57b046126654083d4770268"},
]
[package.dependencies]
pytest = ">=7.0.0"
pytest = ">=7.0.0,<9"
[package.extras]
docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1.0)"]
testing = ["coverage (>=6.2)", "flaky (>=3.5.0)", "hypothesis (>=5.7.1)", "mypy (>=0.931)", "pytest-trio (>=0.7.0)"]
testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
[[package]]
name = "pytest-cov"
@@ -2628,31 +2737,6 @@ exceptiongroup = {version = "*", markers = "python_version < \"3.11\""}
trio = ">=0.11"
wsproto = ">=0.14"
[[package]]
name = "types-requests"
version = "2.31.0.6"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.7"
files = [
{file = "types-requests-2.31.0.6.tar.gz", hash = "sha256:cd74ce3b53c461f1228a9b783929ac73a666658f223e28ed29753771477b3bd0"},
{file = "types_requests-2.31.0.6-py3-none-any.whl", hash = "sha256:a2db9cb228a81da8348b49ad6db3f5519452dd20a9c1e1a868c83c5fe88fd1a9"},
]
[package.dependencies]
types-urllib3 = "*"
[[package]]
name = "types-urllib3"
version = "1.26.25.14"
description = "Typing stubs for urllib3"
optional = false
python-versions = "*"
files = [
{file = "types-urllib3-1.26.25.14.tar.gz", hash = "sha256:229b7f577c951b8c1b92c1bc2b2fdb0b49847bd2af6d1cc2a2e3dd340f3bda8f"},
{file = "types_urllib3-1.26.25.14-py3-none-any.whl", hash = "sha256:9683bbb7fb72e32bfe9d2be6e04875fbe1b3eeec3cbb4ea231435aa7fd6b4f0e"},
]
[[package]]
name = "typing-extensions"
version = "4.9.0"
@@ -2864,4 +2948,4 @@ multidict = ">=4.0"
[metadata]
lock-version = "2.0"
python-versions = "^3.10"
content-hash = "4a980e6d8f54a2f7f6a3c55d4f40ac3a4b27b5ac6573dd2a39e11213a4b126dd"
content-hash = "26bd75befe5223095b65be293086edf52f34f9043e49107c80a105dc0387dd6a"


@@ -9,43 +9,47 @@ packages = [{ include = "agbenchmark" }]
[tool.poetry.dependencies]
python = "^3.10"
pytest = "^7.3.2"
requests = "^2.31.0"
openai = "^1.7.2"
pydantic = "^1.10.9"
python-dotenv = "^1.0.0"
agent-protocol-client = {git = "https://github.com/Significant-Gravitas/agent-protocol.git", subdirectory = "packages/client/python"}
click = "^8.1.3"
types-requests = "^2.31.0.1"
click-default-group = "^1.2.4"
colorama = "^0.4.6"
fastapi = "^0.109.1"
gitpython = "^3.1.32"
httpx = "^0.24.0"
matplotlib = "^3.7.2"
# Multidict 6.0.4 fails to install and is a dependency of aiohttp, which is a dependency of agent-protocol-client
multidict = "^6.0.5"
networkx = "^3.1"
openai = "^1.7.2"
pandas = "^2.0.3"
pexpect = "^4.8.0"
psutil = "^5.9.5"
matplotlib = "^3.7.2"
pandas = "^2.0.3"
gitpython = "^3.1.32"
networkx = "^3.1"
colorama = "^0.4.6"
pyvis = "^0.3.2"
selenium = "^4.11.2"
pytest-asyncio = "^0.21.1"
uvicorn = "^0.23.2"
fastapi = "^0.109.1"
pydantic = "^2.7.2"
pydantic-settings = "^2.3.4"
pytest = "^7.3.2"
pytest-asyncio = "^0.23.3"
python-dotenv = "^1.0.0"
python-multipart = "^0.0.7"
toml = "^0.10.2"
# helicone = "^1.0.9" # incompatible with openai@^1.0.0
httpx = "^0.24.0"
agent-protocol-client = "^1.1.0"
click-default-group = "^1.2.4"
pyvis = "^0.3.2"
requests = "^2.31.0"
selenium = "^4.11.2"
tabulate = "^0.9.0"
toml = "^0.10.2"
uvicorn = ">=0.23.2,<1"
[tool.poetry.group.dev.dependencies]
black = "^23.12.1"
flake8 = "^7.0.0"
isort = "^5.13.1"
pyright = "^1.1.364"
pandas = "^2.0.3"
pre-commit = "^3.3.3"
# Testing
pytest-cov = "^5.0.0"
# Dependencies for stuff in reports/
gspread = "^5.10.0"
oauth2client = "^4.1.3"
pre-commit = "^3.3.3"
pytest-cov = "^5.0.0"
[tool.poetry.scripts]
agbenchmark = "agbenchmark.__main__:cli"
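The dependency list above bumps `pytest-asyncio` from `^0.21.1` to `^0.23.3`. As a hedged illustration (a hypothetical test, not code from this repo), a marker-decorated async test like the following runs unchanged under both versions in the default strict mode:

```python
# Minimal sketch of a strict-mode pytest-asyncio test; the explicit marker
# is required in strict mode, which is the default in 0.21.x and 0.23.x.
import asyncio

import pytest


@pytest.mark.asyncio
async def test_sleep_yields_control():
    # asyncio.sleep(0) just yields to the event loop and returns None
    assert await asyncio.sleep(0) is None
```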


@@ -1,20 +1,24 @@
#!/usr/bin/env python3
from pathlib import Path
import click
from agbenchmark.reports.processing.report_types import Report
@click.command()
@click.argument("report_json_file", type=click.Path(exists=True, dir_okay=False))
def print_markdown_report(report_json_file: str):
@click.argument(
"report_json_file", type=click.Path(exists=True, dir_okay=False, path_type=Path)
)
def print_markdown_report(report_json_file: Path):
"""
Generates a Markdown report from a given report.json file.
:param report_json_file: Path to the report.json file.
:return: A string containing the Markdown formatted report.
"""
report = Report.parse_file(report_json_file)
report = Report.model_validate_json(report_json_file.read_text())
# Header and metadata
click.echo("# Benchmark Report")
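For context on the `parse_file` to `model_validate_json` change in the hunk above, here is a minimal sketch of the pydantic v1 to v2 call mapping; the `Item` model is a hypothetical stand-in for the real `Report` model:

```python
# Hedged sketch of the pydantic v1 -> v2 migration shown in this hunk.
from pathlib import Path

from pydantic import BaseModel


class Item(BaseModel):  # hypothetical stand-in for Report
    name: str


path = Path("report.json")  # assumed to contain e.g. {"name": "example"}
# item = Item.parse_file(path)                     # pydantic v1 API
item = Item.model_validate_json(path.read_text())  # pydantic v2 API
print(item.name)
```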


@@ -14,7 +14,7 @@ from pydantic import BaseModel, Field
class Metrics(BaseModel):
difficulty: str
success: bool
success_percent: float = Field(..., alias="success_%")
success_percent: float = Field(alias="success_%")
run_time: Optional[str] = None
fail_reason: Optional[str] = None
attempted: Optional[bool] = None
@@ -100,7 +100,7 @@ def get_reports():
# Load the JSON data from the file
json_data = json.load(f)
print(f"Processing {report_file}")
report = Report.parse_obj(json_data)
report = Report.model_validate(json_data)
for test_name, test_data in report.tests.items():
test_json = {
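The two hunks above make the same v1 to v2 move for dict input and for field aliases. A small hedged sketch (hypothetical data, but the real pydantic v2 API):

```python
from pydantic import BaseModel, Field


class Metrics(BaseModel):
    # No default means the field is required in v2, so the explicit `...`
    # sentinel from the old `Field(..., alias=...)` form is redundant.
    success_percent: float = Field(alias="success_%")


# m = Metrics.parse_obj({"success_%": 87.5})      # pydantic v1 API
m = Metrics.model_validate({"success_%": 87.5})   # pydantic v2 API
print(m.success_percent)  # 87.5
```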

check_actions_status.py

@@ -0,0 +1,109 @@
import json
import os
import sys
import time
from typing import Dict, List, Tuple

import requests
def get_environment_variables() -> Tuple[str, str, str, str, str]:
"""Retrieve and return necessary environment variables."""
try:
with open(os.environ["GITHUB_EVENT_PATH"]) as f:
event = json.load(f)
sha = event["pull_request"]["head"]["sha"]
return (
os.environ["GITHUB_API_URL"],
os.environ["GITHUB_REPOSITORY"],
sha,
os.environ["GITHUB_TOKEN"],
os.environ["GITHUB_RUN_ID"],
)
except KeyError as e:
print(f"Error: Missing required environment variable or event data: {e}")
sys.exit(1)
def make_api_request(url: str, headers: Dict[str, str]) -> Dict:
"""Make an API request and return the JSON response."""
try:
print("Making API request to:", url)
response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
return response.json()
except requests.RequestException as e:
print(f"Error: API request failed. {e}")
sys.exit(1)
def process_check_runs(check_runs: List[Dict]) -> Tuple[bool, bool]:
"""Process check runs and return their status."""
runs_in_progress = False
all_others_passed = True
for run in check_runs:
if str(run["name"]) != "Check PR Status":
status = run["status"]
conclusion = run["conclusion"]
if status == "completed":
if conclusion not in ["success", "skipped", "neutral"]:
all_others_passed = False
print(
f"Check run {run['name']} (ID: {run['id']}) has conclusion: {conclusion}"
)
else:
runs_in_progress = True
print(f"Check run {run['name']} (ID: {run['id']}) is still {status}.")
all_others_passed = False
else:
print(
f"Skipping check run {run['name']} (ID: {run['id']}) as it is the current run."
)
return runs_in_progress, all_others_passed
def main():
api_url, repo, sha, github_token, current_run_id = get_environment_variables()
endpoint = f"{api_url}/repos/{repo}/commits/{sha}/check-runs"
headers = {
"Accept": "application/vnd.github.v3+json",
}
if github_token:
headers["Authorization"] = f"token {github_token}"
print(f"Current run ID: {current_run_id}")
while True:
data = make_api_request(endpoint, headers)
check_runs = data["check_runs"]
print("Processing check runs...")
print(check_runs)
runs_in_progress, all_others_passed = process_check_runs(check_runs)
if not runs_in_progress:
break
print(
"Some check runs are still in progress. Waiting 3 minutes before checking again..."
)
time.sleep(180)
if all_others_passed:
print("All other completed check runs have passed. This check passes.")
sys.exit(0)
else:
print("Some check runs have failed or have not completed. This check fails.")
sys.exit(1)
if __name__ == "__main__":
main()

cli.py

@@ -69,6 +69,8 @@ d88P 888 "Y88888 "Y888 "Y88P" "Y8888P88 888 888
bold=True,
)
)
else:
click.echo(click.style("🎉 Setup completed!\n", fg="green"))
@cli.group()
@@ -149,10 +151,11 @@ def start(agent_name: str, no_setup: bool):
setup_process.wait()
click.echo()
subprocess.Popen(["./run_benchmark", "serve"], cwd=agent_dir)
click.echo("⌛ (Re)starting benchmark server...")
wait_until_conn_ready(8080)
click.echo()
# FIXME: Doesn't work: Command not found: agbenchmark
# subprocess.Popen(["./run_benchmark", "serve"], cwd=agent_dir)
# click.echo("⌛ (Re)starting benchmark server...")
# wait_until_conn_ready(8080)
# click.echo()
subprocess.Popen(["./run"], cwd=agent_dir)
click.echo(f"⌛ (Re)starting agent '{agent_name}'...")


@@ -1,63 +0,0 @@
# 🖼 Image Generation configuration
| Config variable | Values | |
| ---------------- | ------------------------------- | -------------------- |
| `IMAGE_PROVIDER` | `dalle` `huggingface` `sdwebui` | **default: `dalle`** |
## DALL-e
In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
```ini
# IMAGE_PROVIDER=dalle # this is the default
```
Further optional configuration:
| Config variable | Values | |
| ---------------- | ------------------ | -------------- |
| `IMAGE_SIZE` | `256` `512` `1024` | default: `256` |
## Hugging Face
To use text-to-image models from Hugging Face, you need a Hugging Face API token.
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
Once you have an API token, uncomment and adjust these variables in your `.env`:
```ini
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN=your-huggingface-api-token
```
Further optional configuration:
| Config variable | Values | |
| ------------------------- | ---------------------- | ---------------------------------------- |
| `HUGGINGFACE_IMAGE_MODEL` | see [available models] | default: `CompVis/stable-diffusion-v1-4` |
[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with AutoGPT:
```ini
IMAGE_PROVIDER=sdwebui
```
!!! note
Make sure you are running WebUI with `--api` enabled.
Further optional configuration:
| Config variable | Values | |
| --------------- | ----------------------- | -------------------------------- |
| `SD_WEBUI_URL` | URL to your WebUI | default: `http://127.0.0.1:7860` |
| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
## Selenium
```shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```


@@ -1,21 +1,18 @@
# Configuration
Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `AutoGPT` folder and name it `.env`.
Configuration of sensitive settings such as API credentials is done through environment variables.
You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `AutoGPT` folder and name it `.env`.
## Environment Variables
- `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface
- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
- `ANTHROPIC_API_KEY`: Set this if you want to use Anthropic models with AutoGPT
- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the AutoGPT root directory. Default: azure.yaml
- `BROWSE_CHUNK_MAX_LENGTH`: When browsing website, define the length of chunks to summarize. Default: 3000
- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional
- `DISABLED_COMMANDS`: Commands to disable. Use comma separated names of commands. See the list of commands from built-in components [here](../components/components.md). Default: None
- `COMPONENT_CONFIG_FILE`: Path to the component configuration file (json) for an agent. Optional
- `DISABLED_COMMANDS`: Commands to disable. Use comma separated names of commands. See the list of commands from built-in components [here](../../forge/components/components.md). Default: None
- `ELEVENLABS_API_KEY`: ElevenLabs API Key. Optional.
- `ELEVENLABS_VOICE_ID`: ElevenLabs Voice ID. Optional.
- `EMBEDDING_MODEL`: LLM Model to use for embedding tasks. Default: `text-embedding-3-small`
- `EXECUTE_LOCAL_COMMANDS`: If shell commands should be executed locally. Default: False
- `EXIT_KEY`: Exit key accepted to exit. Default: n
- `FAST_LLM`: LLM Model to use for most tasks. Default: `gpt-3.5-turbo-0125`
- `GITHUB_API_KEY`: [Github API Key](https://github.com/settings/tokens). Optional.
@@ -23,26 +20,16 @@ Configuration is controlled through the `Config` object. You can set configurati
- `GOOGLE_API_KEY`: Google API key. Optional.
- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
- `GROQ_API_KEY`: Set this if you want to use Groq models with AutoGPT
- `HEADLESS_BROWSER`: Use a headless browser while AutoGPT uses a web browser. Setting to `False` will allow you to see AutoGPT operate the browser. Default: True
- `HUGGINGFACE_API_TOKEN`: HuggingFace API, to be used for both image generation and audio to text. Optional.
- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4
- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
- `IMAGE_PROVIDER`: Image provider. Options are `dalle`, `huggingface`, and `sdwebui`. Default: dalle
- `IMAGE_SIZE`: Default size of image to generate. Default: 256
- `OPENAI_API_KEY`: *REQUIRED*- Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `LLAMAFILE_API_BASE`: Llamafile API base URL. Default: `http://localhost:8080/v1`
- `OPENAI_API_KEY`: Set this if you want to use OpenAI models; [OpenAI API Key](https://platform.openai.com/account/api-keys).
- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
- `RESTRICT_TO_WORKSPACE`: Whether to restrict file reading and writing to the workspace directory. Default: True
- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed. Default: denylist
- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by AutoGPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: `gpt-4-turbo-preview`
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
- `TEXT_TO_SPEECH_PROVIDER`: Text to Speech Provider. Options are `gtts`, `macos`, `elevenlabs`, and `streamelements`. Default: gtts
- `USER_AGENT`: User-Agent given when browsing websites. Default: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
- `USE_AZURE`: Use Azure's LLM. Default: False
- `USE_WEB_BROWSER`: Which web browser to use. Options are `chrome`, `firefox`, `safari`, or `edge`. Default: chrome
- `WIPE_REDIS_ON_START`: Wipes data / index on start. Default: True
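All of these variables reach AutoGPT through the process environment, so a `.env` file is equivalent to exporting them in your shell. A minimal sketch (assuming `python-dotenv` is installed):

```python
# Hedged sketch: reading the variables documented above from a .env file.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # loads .env from the working directory into os.environ
smart_llm = os.getenv("SMART_LLM", "gpt-4-turbo-preview")  # documented default
print(f"SMART_LLM={smart_llm}")
```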


@@ -2,36 +2,36 @@
!!! note
This section is optional. Use the official Google API if search attempts return
error 429. To use the `google_official_search` command, you need to set up your
Google API key in your environment variables.
error 429. To use the `google` command, you need to set up your
Google API key in your environment variables or pass it with configuration to the [`WebSearchComponent`](../../forge/components/built-in-components.md).
Create your project:
1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
2. If you don't already have an account, create one and log in
3. Create a new project by clicking on the *Select a Project* dropdown at the top of the
1. If you don't already have an account, create one and log in
1. Create a new project by clicking on the *Select a Project* dropdown at the top of the
page and clicking *New Project*
4. Give it a name and click *Create*
5. Set up a custom search API and add to your .env file:
5. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard)
6. Click *Enable APIs and Services*
7. Search for *Custom Search API* and click on it
8. Click *Enable*
9. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page
10. Click *Create Credentials*
11. Choose *API Key*
12. Copy the API key
13. Set it as the `GOOGLE_API_KEY` in your `.env` file
14. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com)
1. Give it a name and click *Create*
1. Set up a custom search API and add to your .env file:
1. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard)
1. Click *Enable APIs and Services*
1. Search for *Custom Search API* and click on it
1. Click *Enable*
1. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page
1. Click *Create Credentials*
1. Choose *API Key*
1. Copy the API key
1. Set it as the `GOOGLE_API_KEY` in your `.env` file
1. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com)
the Custom Search API on your project. (You might need to wait a few minutes for it to propagate.)
Set up a custom search engine and add to your .env file:
15. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page
16. Click *Add*
17. Set up your search engine by following the prompts.
1. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page
1. Click *Add*
1. Set up your search engine by following the prompts.
You can choose to search the entire web or specific sites
18. Once you've created your search engine, click on *Control Panel*
19. Click *Basics*
20. Copy the *Search engine ID*
21. Set it as the `CUSTOM_SEARCH_ENGINE_ID` in your `.env` file
1. Once you've created your search engine, click on *Control Panel*
1. Click *Basics*
1. Copy the *Search engine ID*
1. Set it as the `CUSTOM_SEARCH_ENGINE_ID` in your `.env` file
_Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, you need to assign a billing account to the project to benefit from up to 10K daily searches._
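Once both values are in your `.env`, a quick sanity check against the public Custom Search JSON API (not AutoGPT code, just a hedged sketch) can confirm the key and engine ID work together:

```python
# Hedged sketch: verify GOOGLE_API_KEY / CUSTOM_SEARCH_ENGINE_ID with one query.
import os

import requests

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={
        "key": os.environ["GOOGLE_API_KEY"],
        "cx": os.environ["CUSTOM_SEARCH_ENGINE_ID"],
        "q": "AutoGPT",
    },
    timeout=10,
)
resp.raise_for_status()  # a 429 here means you are over the daily quota
print(resp.json().get("items", [{}])[0].get("link"))
```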


@@ -71,18 +71,25 @@
- ./logs:/app/logs
## uncomment following lines if you want to make use of these files
## you must have them existing in the same folder as this docker-compose.yml
## component configuration file
#- type: bind
# source: ./config.json
# target: /app/config.json
```
</details>
4. Download [`.env.template`][.env.template] and save it as `.env` in the AutoGPT folder.
5. Follow the standard [configuration instructions](../index.md#completing-the-setup),
1. Download [`.env.template`][.env.template] and save it as `.env` in the AutoGPT folder.
2. Follow the standard [configuration instructions](../index.md#completing-the-setup),
from step 3 onwards and excluding `poetry install` steps.
6. Pull the latest image from [Docker Hub]
3. Pull the latest image from [Docker Hub]
```shell
docker pull significantgravitas/auto-gpt
```
4. _Optional: mount configuration file._
If you have a component configuration file, for example `config.json`, place it in the `autogpt/data/` directory, or place it in `autogpt/` and uncomment the line in `docker-compose.yml` that mounts it.
To learn more about configuration, see [Component configuration](../../forge/components/components.md#json-configuration).
!!! note "Docker only supports headless browsing"
AutoGPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.


@@ -50,7 +50,13 @@ Since we don't ship AutoGPT as a desktop application, you'll need to download th
### Completing the Setup
Once you have cloned or downloaded the project, you can find the AutoGPT Agent in the
`autogpt/` folder. In this folder:
`autogpt/` folder.
Inside this folder you can configure the AutoGPT application with an `.env` file and (optionally) a JSON configuration file:
- `.env` for environment variables, which are mostly used for sensitive data like API keys
- a JSON configuration file to customize certain features of AutoGPT's [Components](../../forge/components/introduction.md)
See the [Configuration](../configuration/options.md) reference for a list of available environment variables.
1. Find the file named `.env.template`. This file may
be hidden by default in some operating systems due to the dot prefix. To reveal
@@ -71,6 +77,9 @@ Once you have cloned or downloaded the project, you can find the AutoGPT Agent i
6. Save and close the `.env` file.
7. _Optional: run `poetry install` to install all required dependencies._ The
application also checks for and installs any required dependencies when it starts.
8. _Optional: configure the JSON file (e.g. `config.json`) with your desired settings._
The application will use default settings if you don't provide a JSON configuration file.
Learn how to [set up the JSON configuration file](../../forge/components/components.md#json-configuration)
You should now be able to explore the CLI (`./autogpt.sh --help`) and run the application.
@@ -79,7 +88,6 @@ See the [user guide](../usage.md) for further instructions.
[show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
[show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
## Setting up LLM providers
You can use AutoGPT with any of the following LLM providers.
@@ -177,7 +185,7 @@ If you don't know which to choose, you can safely go with OpenAI*.
1. Get your Groq API key from [Settings > API keys][groq/api-keys]
2. Open `.env`
3. Find the line that says `GROQ_API_KEY=`
4. Insert your Anthropic API Key directly after = without quotes or spaces:
4. Insert your Groq API Key directly after = without quotes or spaces:
```ini
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
@@ -190,3 +198,66 @@ If you don't know which to choose, you can safely go with OpenAI*.
[groq/api-keys]: https://console.groq.com/keys
[groq/models]: https://console.groq.com/docs/models
### Llamafile
With llamafile you can run models locally, which means there is no need to set up
billing, and your data is guaranteed to stay private.
For more information and in-depth documentation, check out the [llamafile documentation].
!!! warning
At the moment, llamafile only serves one model at a time. This means you cannot
set `SMART_LLM` and `FAST_LLM` to two different llamafile models.
!!! warning
Due to the issues linked below, llamafiles don't work on WSL. To use a llamafile
with AutoGPT in WSL, you will have to run the llamafile in Windows (outside WSL).
<details>
<summary>Instructions</summary>
1. Get the `llamafile/serve.py` script through one of these two ways:
1. Clone the AutoGPT repo somewhere in your Windows environment,
with the script located at `autogpt/scripts/llamafile/serve.py`
2. Download just the [serve.py] script somewhere in your Windows environment
2. Make sure you have `click` installed: `pip install click`
3. Run `ip route | grep default | awk '{print $3}'` *inside WSL* to get the address
of the WSL host machine
4. Run `python3 serve.py --host {WSL_HOST_ADDR}`, where `{WSL_HOST_ADDR}`
is the address you found at step 3.
If port 8080 is taken, also specify a different port using `--port {PORT}`.
5. In WSL, set `LLAMAFILE_API_BASE=http://{WSL_HOST_ADDR}:8080/v1` in your `.env`.
6. Follow the rest of the regular instructions below.
[serve.py]: https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt/scripts/llamafile/serve.py
</details>
* [Mozilla-Ocho/llamafile#356](https://github.com/Mozilla-Ocho/llamafile/issues/356)
* [Mozilla-Ocho/llamafile#100](https://github.com/Mozilla-Ocho/llamafile/issues/100)
!!! note
These instructions will download and use `mistral-7b-instruct-v0.2.Q5_K_M.llamafile`.
`mistral-7b-instruct-v0.2` is currently the only tested and supported model.
If you want to try other models, you'll have to add them to `LlamafileModelName` in
[`llamafile.py`][forge/llamafile.py].
For optimal results, you may also have to add some logic to adapt the message format,
like `LlamafileProvider._adapt_chat_messages_for_mistral_instruct(..)` does.
1. Run the llamafile serve script:
```shell
python3 ./scripts/llamafile/serve.py
```
The first time this is run, it will download a file containing the model and runtime,
which may take a while and use a few gigabytes of disk space.
To force GPU acceleration, add `--use-gpu` to the command.
2. In `.env`, set `SMART_LLM`, `FAST_LLM`, or both to `mistral-7b-instruct-v0.2`
3. If the server is running on a different address than `http://localhost:8080/v1`,
set `LLAMAFILE_API_BASE` in `.env` to the right base URL
[llamafile documentation]: https://github.com/Mozilla-Ocho/llamafile#readme
[forge/llamafile.py]: https://github.com/Significant-Gravitas/AutoGPT/blob/master/forge/forge/llm/providers/llamafile/llamafile.py

View File

@@ -60,10 +60,6 @@ Options:
--debug Enable Debug Mode
--gpt3only Enable GPT3.5 Only Mode
--gpt4only Enable GPT4 Only Mode
-b, --browser-name TEXT Specifies which web-browser to use when
using selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--skip-news Specifies whether to suppress the output of
latest news on startup.
--install-plugin-deps Installs external dependencies for 3rd party
@@ -82,6 +78,7 @@ Options:
--override-directives If specified, --constraint, --resource and
--best-practice will override the AI's
directives instead of being appended to them
--component-config-file TEXT Path to the json configuration file.
--help Show this message and exit.
```
</details>
@@ -128,10 +125,6 @@ Options:
--debug Enable Debug Mode
--gpt3only Enable GPT3.5 Only Mode
--gpt4only Enable GPT4 Only Mode
-b, --browser-name TEXT Specifies which web-browser to use when using
selenium to scrape the web.
--allow-downloads Dangerous: Allows AutoGPT to download files
natively.
--install-plugin-deps Installs external dependencies for 3rd party
plugins.
--help Show this message and exit.
@@ -220,5 +213,5 @@ For example, to disable python coding features, set it to the value below:
DISABLED_COMMANDS=execute_python_code,execute_python_file
```
[components]: ./components/components.md
[commands]: ./components/built-in-components.md
[components]: ../forge/components/components.md
[commands]: ../forge/components/built-in-components.md

View File

@@ -1,26 +1,34 @@
# Built-in Components
This page lists all natively provided [🧩 Components](./components.md) and the [⚙️ Protocols](./protocols.md) they implement. They are used by the AutoGPT agent.
Some components have additional configuration options, listed in tables below; see [Component configuration](./components.md/#component-configuration) to learn more.
!!! note
If a configuration field uses an environment variable, the value can still be passed via the configuration model. **A value from the configuration takes precedence over the env var!** The env var is only applied if the value in the configuration is not set.
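As a toy illustration of this precedence rule (the model and field below are hypothetical, not actual AutoGPT code):
```py
# Hypothetical pydantic model whose field falls back to an env var when unset.
import os

from pydantic import BaseModel, Field


class ExampleConfiguration(BaseModel):
    github_username: str | None = Field(
        default_factory=lambda: os.getenv("GITHUB_USERNAME")
    )


os.environ["GITHUB_USERNAME"] = "env-user"

print(ExampleConfiguration().github_username)  # "env-user": env var fills the unset field
print(ExampleConfiguration(github_username="config-user").github_username)
# "config-user": a value set in the configuration wins over the env var
```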
## `SystemComponent`
Essential component to allow an agent to finish.
**DirectiveProvider**
### DirectiveProvider
- Constraints about API budget
**MessageProvider**
### MessageProvider
- Current time and date
- Remaining API budget and warnings if budget is low
**CommandProvider**
### CommandProvider
- `finish` used when task is completed
## `UserInteractionComponent`
Adds ability to interact with user in CLI.
**CommandProvider**
### CommandProvider
- `ask_user` used to ask user for input
## `FileManagerComponent`
@@ -28,10 +36,21 @@ Adds ability to interact with user in CLI.
Adds ability to read and write persistent files to local storage, Google Cloud Storage or Amazon's S3.
Necessary for saving and loading the agent's state (preserving the session).
**DirectiveProvider**
### `FileManagerConfiguration`
| Config variable | Details | Type | Default |
| ---------------- | -------------------------------------- | ----- | ---------------------------------- |
| `storage_path` | Path to agent files, e.g. state | `str` | `agents/{agent_id}/`[^1] |
| `workspace_path` | Path to files that agent has access to | `str` | `agents/{agent_id}/workspace/`[^1] |
[^1]: This option is set dynamically during component construction, rather than having a static default inside the configuration model; `{agent_id}` is replaced with the agent's unique identifier.
### DirectiveProvider
- Resource information that it's possible to read and write files
**CommandProvider**
### CommandProvider
- `read_file` used to read file
- `write_file` used to write file
- `list_folder` lists all files in a folder
@@ -40,7 +59,20 @@ Necessary for saving and loading agent's state (preserving session).
Lets the agent execute non-interactive shell commands and Python code. Python execution works only if Docker is available.
**CommandProvider**
### `CodeExecutorConfiguration`
| Config variable | Details | Type | Default |
| ------------------------ | ---------------------------------------------------- | --------------------------- | ----------------- |
| `execute_local_commands` | Enable shell command execution | `bool` | `False` |
| `shell_command_control` | Controls which list is used | `"allowlist" \| "denylist"` | `"allowlist"` |
| `shell_allowlist` | List of allowed shell commands | `List[str]` | `[]` |
| `shell_denylist` | List of prohibited shell commands | `List[str]` | `[]` |
| `docker_container_name` | Name of the Docker container used for code execution | `str` | `"agent_sandbox"` |
All shell command configuration options are provided for convenience only. This component is not secure and should not be used in production environments; use more appropriate sandboxing instead.
### CommandProvider
- `execute_shell` execute shell command
- `execute_shell_popen` execute shell command with popen
- `execute_python_code` execute Python code
@@ -50,38 +82,94 @@ Lets the agent execute non-interactive Shell commands and Python code. Python ex
Keeps track of the agent's actions and their outcomes. Provides a summary of them to the prompt.
**MessageProvider**
### `ActionHistoryConfiguration`
| Config variable | Details | Type | Default |
| ---------------------- | ------------------------------------------------------- | ----------- | ------------------ |
| `llm_name`             | Name of the LLM used to compress the history             | `ModelName` | `"gpt-3.5-turbo"`  |
| `max_tokens`           | Maximum number of tokens to use for the history summary  | `int`       | `1024`             |
| `spacy_language_model` | spaCy language model used for summary chunking           | `str`       | `"en_core_web_sm"` |
| `full_message_count`   | Number of cycles to include unsummarized in the prompt   | `int`       | `4`                |
### MessageProvider
- Agent's progress summary
**AfterParse**
### AfterParse
- Register agent's action
**ExecutionFailuer**
### ExecutionFailure
- Rewinds the agent's action, so it isn't saved
**AfterExecute**
### AfterExecute
- Saves the agent's action result in the history
## `GitOperationsComponent`
**CommandProvider**
Adds ability to interact with git repositories and GitHub.
### `GitOperationsConfiguration`
| Config variable | Details | Type | Default |
| ----------------- | ----------------------------------------- | ----- | ------- |
| `github_username` | GitHub username, *ENV:* `GITHUB_USERNAME` | `str` | `None` |
| `github_api_key` | GitHub API key, *ENV:* `GITHUB_API_KEY` | `str` | `None` |
### CommandProvider
- `clone_repository` used to clone a git repository
## `ImageGeneratorComponent`
Adds ability to generate images using various providers, see [Image Generation configuration](./../configuration/imagegen.md) to learn more.
Adds ability to generate images using various providers.
### Hugging Face
To use text-to-image models from Hugging Face, you need a Hugging Face API token.
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
### Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with AutoGPT. **Make sure you are running WebUI with `--api` enabled.**
### `ImageGeneratorConfiguration`
| Config variable | Details | Type | Default |
| ------------------------- | ------------------------------------------------------------- | --------------------------------------- | --------------------------------- |
| `image_provider` | Image generation provider | `"dalle" \| "huggingface" \| "sdwebui"` | `"dalle"` |
| `huggingface_image_model` | Hugging Face image model, see [available models] | `str` | `"CompVis/stable-diffusion-v1-4"` |
| `huggingface_api_token` | Hugging Face API token, *ENV:* `HUGGINGFACE_API_TOKEN` | `str` | `None` |
| `sd_webui_url` | URL to self-hosted Stable Diffusion WebUI | `str` | `"http://localhost:7860"` |
| `sd_webui_auth` | Basic auth for Stable Diffusion WebUI, *ENV:* `SD_WEBUI_AUTH` | `str` of format `{username}:{password}` | `None` |
[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
### CommandProvider
**CommandProvider**
- `generate_image` used to generate an image given a prompt
## `WebSearchComponent`
Allows agent to search the web.
Allows agent to search the web. Google credentials aren't required for DuckDuckGo. [Instructions on how to set up the Google API key](../../AutoGPT/configuration/search.md)
### `WebSearchConfiguration`
| Config variable | Details | Type | Default |
| -------------------------------- | ----------------------------------------------------------------------- | --------------------------- | ------- |
| `google_api_key` | Google API key, *ENV:* `GOOGLE_API_KEY` | `str` | `None` |
| `google_custom_search_engine_id` | Google Custom Search Engine ID, *ENV:* `GOOGLE_CUSTOM_SEARCH_ENGINE_ID` | `str` | `None` |
| `duckduckgo_max_attempts` | Maximum number of attempts to search using DuckDuckGo | `int` | `3` |
| `duckduckgo_backend`             | Backend to be used for the DuckDuckGo SDK                                | `"api" \| "html" \| "lite"` | `"api"`  |
### DirectiveProvider
**DirectiveProvider**
- Resource information that it's possible to search the web
**CommandProvider**
### CommandProvider
- `search_web` used to search the web using DuckDuckGo
- `google` used to search the web using Google, requires API key
@@ -89,20 +177,35 @@ Allows agent to search the web.
Allows agent to read websites using Selenium.
**DirectiveProvider**
### `WebSeleniumConfiguration`
| Config variable | Details | Type | Default |
| ----------------------------- | ------------------------------------------- | --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `llm_name`                    | Name of the LLM used to read websites       | `ModelName`                                   | `"gpt-3.5-turbo"`                                                                                                              |
| `web_browser` | Web browser used by Selenium | `"chrome" \| "firefox" \| "safari" \| "edge"` | `"chrome"` |
| `headless` | Run browser in headless mode | `bool` | `True` |
| `user_agent` | User agent used by the browser | `str` | `"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"` |
| `browse_spacy_language_model` | Spacy language model used for chunking text | `str` | `"en_core_web_sm"` |
| `selenium_proxy` | Http proxy to use with Selenium | `str` | `None` |
### DirectiveProvider
- Resource information that it's possible to read websites
**CommandProvider**
### CommandProvider
- `read_website` used to read a specific url and look for specific topics or answer a question
## `ContextComponent`
Adds ability to keep up-to-date file and folder content in the prompt.
**MessageProvider**
### MessageProvider
- Content of elements in the context
**CommandProvider**
### CommandProvider
- `open_file` used to open a file into context
- `open_folder` used to open a folder into context
- `close_context_item` remove an item from the context
@@ -111,5 +214,6 @@ Adds ability to keep up-to-date file and folder content in the prompt.
Watches if agent is looping and switches to smart mode if necessary.
**AfterParse**
### AfterParse
- Investigates what happened and switches to smart mode if necessary

View File

@@ -37,7 +37,7 @@ Since components are regular classes you can pass data (including other componen
For example, we can pass a config object and then retrieve an API key from it when needed:
```py
class ConfigurableComponent(MessageProvider):
class DataComponent(MessageProvider):
def __init__(self, config: Config):
self.config = config
@@ -51,6 +51,35 @@ class ConfigurableComponent(MessageProvider):
!!! note
Component-specific configuration handling isn't implemented yet.
## Configuring components
Components can be configured using a pydantic model.
To make a component configurable, it must inherit from `ConfigurableComponent[BM]`, where `BM` is the configuration class, which inherits from pydantic's `BaseModel`.
You can pass the configuration instance to `ConfigurableComponent`'s `__init__` or set its `config` property directly.
Using a configuration model allows you to load configuration from a file, and to serialize and deserialize it easily for any agent.
To learn more about configuration, including storing sensitive information and serialization, see [Component Configuration](./components.md#component-configuration).
```py
# Example component configuration
class UserGreeterConfiguration(BaseModel):
user_name: str
class UserGreeterComponent(MessageProvider, ConfigurableComponent[UserGreeterConfiguration]):
def __init__(self):
# Creating configuration instance
# You could also pass it to the component constructor
# e.g. `def __init__(self, config: UserGreeterConfiguration):`
config = UserGreeterConfiguration(user_name="World")
# Passing the configuration instance to the parent class
ConfigurableComponent.__init__(self, config)
# This has the same effect as the line above:
# self.config = UserGreeterConfiguration(user_name="World")
def get_messages(self) -> Iterator[ChatMessage]:
# You can use the configuration like a regular model
yield ChatMessage.system(f"Hello, {self.config.user_name}!")
```
## Providing commands
To extend what an agent can do, you need to provide commands using the `CommandProvider` protocol. For example, to allow the agent to multiply two numbers, you can create a component like the sketch shown below:
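The following is an illustrative sketch of such a component; the import paths and the `@command`/`JSONSchema` helpers are assumptions based on forge's conventions and may differ from the actual example:
```py
from typing import Iterator

from forge.agent.protocols import CommandProvider
from forge.command import Command, command
from forge.models.json_schema import JSONSchema


class MultiplicatorComponent(CommandProvider):
    """Provides a command that multiplies two numbers."""

    def get_commands(self) -> Iterator[Command]:
        # Expose the multiply command to the agent
        yield self.multiply

    @command(
        parameters={
            "a": JSONSchema(
                type=JSONSchema.Type.INTEGER,
                description="The first number",
                required=True,
            ),
            "b": JSONSchema(
                type=JSONSchema.Type.INTEGER,
                description="The second number",
                required=True,
            ),
        }
    )
    def multiply(self, a: int, b: int) -> str:
        """Multiplies two numbers."""
        return str(a * b)
```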
@@ -148,12 +177,12 @@ It gives an ability for the agent to ask user for input in the terminal.
yield self.ask_user
```
5. Since agent isn't always running in the terminal or interactive mode, we need to disable this component by setting `self._enabled` when it's not possible to ask for user input.
5. Since agent isn't always running in the terminal or interactive mode, we need to disable this component by setting `self._enabled=False` when it's not possible to ask for user input.
```py
def __init__(self, config: Config):
def __init__(self, interactive_mode: bool):
self.config = config
self._enabled = not config.noninteractive_mode
self._enabled = interactive_mode
```
The final component should look like this:
@@ -164,10 +193,10 @@ class MyUserInteractionComponent(CommandProvider):
"""Provides commands to interact with the user."""
# We pass interactive_mode to check whether we can ask the user for input
def __init__(self, config: Config):
def __init__(self, interactive_mode: bool):
self.config = config
# 5.
self._enabled = not config.noninteractive_mode
self._enabled = interactive_mode
# 4.
def get_commands(self) -> Iterator[Command]:
@@ -205,10 +234,10 @@ class MyAgent(Agent):
settings: AgentSettings,
llm_provider: MultiProvider,
file_storage: FileStorage,
legacy_config: Config,
app_config: Config,
):
# Call the parent constructor to bring in the default components
super().__init__(settings, llm_provider, file_storage, legacy_config)
super().__init__(settings, llm_provider, file_storage, app_config)
# Disable the default user interaction component by overriding it
self.user_interaction = MyUserInteractionComponent()
```
@@ -222,14 +251,14 @@ class MyAgent(Agent):
settings: AgentSettings,
llm_provider: MultiProvider,
file_storage: FileStorage,
legacy_config: Config,
app_config: Config,
):
# Call the parent constructor to bring in the default components
super().__init__(settings, llm_provider, file_storage, legacy_config)
super().__init__(settings, llm_provider, file_storage, app_config)
# Disable the default user interaction component
self.user_interaction = None
# Add our own component
self.my_user_interaction = MyUserInteractionComponent(legacy_config)
self.my_user_interaction = MyUserInteractionComponent(app_config)
```
## Learn more

View File

@@ -1,10 +1,16 @@
# Component Agents
!!! important
[Legacy plugins] no longer work with AutoGPT. They have been replaced by components,
although we're still working on a new system to load plug-in components.
[Legacy plugins]: https://github.com/Significant-Gravitas/Auto-GPT-Plugins
This guide explains the component-based architecture of AutoGPT agents. It's a new way of building agents that is more flexible and easier to extend. Components replace some of the agent's logic and plugins with a more modular and composable system.
An agent is composed of *components*, and each *component* implements a range of *protocols* (interfaces), each providing a specific piece of functionality, e.g. additional commands or messages. Each *protocol* is handled in a specific order, defined by the agent. This allows for a clear separation of concerns and a more modular design.
This system is simple, flexible, requires basically no configuration, and doesn't hide any data - anything can still be passed or accessed directly from or between components.
This system is simple, flexible, and doesn't hide any data - anything can still be passed or accessed directly from or between components.
### Definitions & Guides

View File

@@ -19,16 +19,16 @@ Forge is a ready-to-go template for *your* agent application. All the boilerplat
### 🚀 **Get Started!**
The best way to get started is to fork or download the AutoGPT repository and look at the example agent in `forge/forge/agent/forge_agent.py`.
This can work as a starting point for your own agent.
Agents are built using *components*, which provide different functionality; see the [Component Introduction](./components/introduction.md). You can find built-in components in `forge/forge/components/`.
!!! warning
The tutorial series below is out of date.
The getting started [tutorial series](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec) will guide you through the process of setting up your project all the way through to building a generalist agent.
1. [AutoGPT Forge: A Comprehensive Guide to Your First Steps](https://aiedge.medium.com/autogpt-forge-a-comprehensive-guide-to-your-first-steps-a1dfdf46e3b4)
2. [AutoGPT Forge: The Blueprint of an AI Agent](https://aiedge.medium.com/autogpt-forge-the-blueprint-of-an-ai-agent-75cd72ffde6)
3. [AutoGPT Forge: Interacting with your Agent](https://aiedge.medium.com/autogpt-forge-interacting-with-your-agent-1214561b06b)
4. [AutoGPT Forge: Crafting Intelligent Agent Logic](https://medium.com/@aiedge/autogpt-forge-crafting-intelligent-agent-logic-bc5197b14cb4)
Coming soon:
5. Interacting with and Benchmarking your Agent
6. Abilities
7. The Planning Loop
8. Memories

View File

@@ -4,13 +4,26 @@ Welcome to the AutoGPT Documentation.
The AutoGPT project consists of four main components:
* The [Agent](#agent) &ndash; also known as just "AutoGPT"
* The [Benchmark](#benchmark) &ndash; AKA `agbenchmark`
* The [Forge](#forge)
* The [Frontend](#frontend)
- The [Server](#server) &ndash; known as the "AutoGPT Platform"
- The [Agent](#agent) &ndash; also known as just "AutoGPT"
- The [Benchmark](#benchmark) &ndash; AKA `agbenchmark`
- The [Forge](#forge)
- The [Frontend](#frontend)
To tie these together, we also have a [CLI] at the root of the project.
## 🌐 Server
<!-- Setup, then Advanced, then New Blocks -->
**[📖 Setup](server/setup.md)**
&ensp;|&ensp;
**[📖 Advanced Setup](server/advanced_setup.md)**
&ensp;|&ensp;
**[📖 Making New Blocks](server/new_blocks.md)**
The server is the backbone of the new AutoGPT project. It provides the infrastructure for the agents to run and the UI for you to interact with them. It integrates with the Forge, the Agent, and a bespoke UI to provide a seamless experience.
---
## 🤖 Agent

View File

@@ -0,0 +1,69 @@
# Advanced Setup
The advanced steps below are intended for people with sysadmin experience. If you are not comfortable with these steps, please refer to the [basic setup guide](setup.md).
## Introduction
For the advanced setup, first follow the [basic setup guide](setup.md) to get the server up and running. Once you have the server running, you can follow the steps below to configure the server for your specific needs.
## Configuration
### Setting config via environment variables
The server uses environment variables to store configs. You can set these environment variables in a `.env` file in the root of the project. The `.env` file should look like this:
```bash
# .env
KEY1=value1
KEY2=value2
```
The server will automatically load the `.env` file when it starts. You can also set the environment variables directly in your shell. Refer to your operating system's documentation on how to set environment variables in the current session.
The valid options are listed in `.env.example` in the root of the builder and server directories. You can copy the `.env.example` file to `.env` and modify the values as needed.
```bash
# Copy the .env.example file to .env
cp .env.example .env
```
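Under the hood, this kind of automatic `.env` loading usually amounts to something like the sketch below. This is an illustration using the `python-dotenv` package, not the server's actual startup code:
```python
# Minimal sketch of .env loading, assuming the python-dotenv package is installed.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from ./.env into os.environ

print(os.getenv("KEY1"))  # loaded values are then read like any other env var
```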
### Secrets directory
The secrets directory is located at `./secrets`. You can store any secrets you need in this directory. The server will automatically load the secrets when it starts.
An example for a secret called `my_secret` would look like this:
```bash
# ./secrets/my_secret
my_secret_value
```
This is useful when running in Docker, as you can copy the secrets into the container without exposing them in the Dockerfile.
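A rough sketch of how a file-per-secret directory like this can be read at startup (illustrative only, not the server's actual loader):
```python
# Illustrative only: read each file in ./secrets into a dict,
# keyed by file name, so ./secrets/my_secret -> {"my_secret": "my_secret_value"}.
from pathlib import Path


def load_secrets(secrets_dir: str = "./secrets") -> dict[str, str]:
    secrets: dict[str, str] = {}
    for file in Path(secrets_dir).glob("*"):
        if file.is_file():
            secrets[file.name] = file.read_text().strip()
    return secrets
```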
## Database selection
### SQLite
By default, the server uses SQLite as the database. SQLite is a file-based database that is easy to set up and use. However, it is not recommended for production use cases where auth is required, because that subsystem requires Postgres.
### PostgreSQL
For production use, it is recommended to use PostgreSQL as the database. Swap the commands you use to generate and run Prisma for the following:
```bash
poetry run prisma generate --schema postgres/schema.prisma
```
This will generate the Prisma client for PostgreSQL. You will also need to run the PostgreSQL database in a separate container. You can use the `docker-compose.yml` file in the `rnd` directory to run the PostgreSQL database.
```bash
cd rnd/
docker compose up -d
```
You can then run the migrations from the `autogpt_server` directory.
```bash
cd ../autogpt_server
prisma migrate dev --schema postgres/schema.prisma
```

View File

@@ -0,0 +1,17 @@
# Find available voices for D-ID
1. **ElevenLabs**
- Select any voice from the voice list: https://api.elevenlabs.io/v1/voices
- Copy the voice_id
- Use it as a string in the voice_id field in the CreateTalkingAvatarClip Block
2. **Microsoft Azure Voices**
- Select any voice from the voice gallery: https://speech.microsoft.com/portal/voicegallery
- Click on the "Sample code" tab on the right
- Copy the voice name, for example: `config.SpeechSynthesisVoiceName = "en-GB-AbbiNeural"`
- Use this voice name string, e.g. `en-GB-AbbiNeural`, in the voice_id field in the CreateTalkingAvatarClip Block
3. **Amazon Polly Voices**
- Select any voice from the voice list: https://docs.aws.amazon.com/polly/latest/dg/available-voices.html
- Copy the voice name / ID
- Use it as a string in the voice_id field in the CreateTalkingAvatarClip Block

View File

@@ -0,0 +1,224 @@
# Contributing to AutoGPT Agent Server: Creating and Testing Blocks
This guide will walk you through the process of creating and testing a new block for the AutoGPT Agent Server, using the WikipediaSummaryBlock as an example.
## Understanding Blocks and Testing
Blocks are reusable components that can be connected to form a graph representing an agent's behavior. Each block has inputs, outputs, and a specific function. Proper testing is crucial to ensure blocks work correctly and consistently.
## Creating and Testing a New Block
Follow these steps to create and test a new block:
1. **Create a new Python file** in the `autogpt_server/blocks` directory. Name it descriptively and use snake_case. For example: `get_wikipedia_summary.py`.
2. **Import necessary modules and create a class that inherits from `Block`**. Make sure to include all necessary imports for your block.
Every block should contain the following:
```python
from autogpt_server.data.block import Block, BlockSchema, BlockOutput
```
Example for the Wikipedia summary block:
```python
from autogpt_server.data.block import Block, BlockSchema, BlockOutput
from autogpt_server.utils.get_request import GetRequest
import requests
class WikipediaSummaryBlock(Block, GetRequest):
# Block implementation will go here
```
3. **Define the input and output schemas** using `BlockSchema`. These schemas specify the data structure that the block expects to receive (input) and produce (output).
- The input schema defines the structure of the data the block will process. Each field in the schema represents a required piece of input data.
- The output schema defines the structure of the data the block will return after processing. Each field in the schema represents a piece of output data.
Example:
```python
class Input(BlockSchema):
topic: str # The topic to get the Wikipedia summary for
class Output(BlockSchema):
summary: str # The summary of the topic from Wikipedia
error: str # Any error message if the request fails
```
4. **Implement the `__init__` method, including test data and mocks:**
```python
def __init__(self):
super().__init__(
# Unique ID for the block, used across users for templates
# you can generate this with this python one liner
# print(__import__('uuid').uuid4())
id="h5e7f8g9-1b2c-3d4e-5f6g-7h8i9j0k1l2m",
input_schema=WikipediaSummaryBlock.Input, # Assign input schema
output_schema=WikipediaSummaryBlock.Output, # Assign output schema
# Provide sample input, output and test mock for testing the block
test_input={"topic": "Artificial Intelligence"},
test_output=("summary", "summary content"),
test_mock={"get_request": lambda url, json: {"extract": "summary content"}},
)
```
- `id`: A unique identifier for the block.
- `input_schema` and `output_schema`: Define the structure of the input and output data.
Let's break down the testing components:
- `test_input`: This is a sample input that will be used to test the block. It should be a valid input according to your Input schema.
- `test_output`: This is the expected output when running the block with the `test_input`. It should match your Output schema. For non-deterministic outputs, or when you only want to assert the type, you can use Python types instead of specific values: for example, `("summary", str)` asserts that the output key is "summary" and that its value is a string.
- `test_mock`: This is crucial for blocks that make network calls. It provides a mock function that replaces the actual network call during testing.
In this case, we're mocking the `get_request` method to always return a dictionary with an 'extract' key, simulating a successful API response. This allows us to test the block's logic without making actual network requests, which could be slow, unreliable, or rate-limited.
5. **Implement the `run` method with error handling.** This should contain the main logic of the block:
```python
def run(self, input_data: Input) -> BlockOutput:
try:
topic = input_data.topic
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
response = self.get_request(url, json=True)
yield "summary", response['extract']
except requests.exceptions.HTTPError as http_err:
yield "error", f"HTTP error occurred: {http_err}"
except requests.RequestException as e:
yield "error", f"Request to Wikipedia failed: {e}"
except KeyError as e:
yield "error", f"Error parsing Wikipedia response: {e}"
```
- **Try block**: Contains the main logic to fetch and process the Wikipedia summary.
- **API request**: Send a GET request to the Wikipedia API.
- **Error handling**: Handle various exceptions that might occur during the API request and data processing.
- **Yield**: Use `yield` to output the results.
## Key Points to Remember
- **Unique ID**: Give your block a unique ID in the `__init__` method.
- **Input and Output Schemas**: Define clear input and output schemas.
- **Error Handling**: Implement error handling in the `run` method.
- **Output Results**: Use `yield` to output results in the `run` method.
- **Testing**: Provide test input and output in the `__init__` method for automatic testing.
## Understanding the Testing Process
The testing of blocks is handled by `test_block.py`, which does the following:
1. It calls the block with the provided `test_input`.
2. If a `test_mock` is provided, it temporarily replaces the specified methods with the mock functions.
3. It then asserts that the output matches the `test_output`.
For the WikipediaSummaryBlock:
- The test will call the block with the topic "Artificial Intelligence".
- Instead of making a real API call, it will use the mock function, which returns `{"extract": "summary content"}`.
- It will then check if the output key is "summary" and its value matches the expected output.
This approach allows us to test the block's logic comprehensively without relying on external services, while also accommodating non-deterministic outputs.
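As a rough sketch of that flow (not the actual `test_block.py` implementation; the attribute names are assumed from the `__init__` arguments shown earlier):
```python
# Illustrative sketch of the testing flow described above.
def run_block_test(block):
    # 2. Temporarily replace the specified methods with the mock functions
    for method_name, mock_fn in (block.test_mock or {}).items():
        setattr(block, method_name, mock_fn)

    # 1. Call the block with the provided test_input
    outputs = list(block.run(block.input_schema(**block.test_input)))

    # 3. Assert the output matches test_output; a Python type matches any value of that type
    expected_key, expected_value = block.test_output
    output_key, output_value = outputs[0]
    assert output_key == expected_key
    if isinstance(expected_value, type):
        assert isinstance(output_value, expected_value)
    else:
        assert output_value == expected_value
```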
## Tips for Effective Block Testing
1. **Provide realistic test_input**: Ensure your test input covers typical use cases.
2. **Define appropriate test_output**:
- For deterministic outputs, use specific expected values.
- For non-deterministic outputs or when only the type matters, use Python types (e.g., `str`, `int`, `dict`).
- You can mix specific values and types, e.g., `("key1", str), ("key2", 42)`.
3. **Use test_mock for network calls**: This prevents tests from failing due to network issues or API changes.
4. **Consider omitting test_mock for blocks without external dependencies**: If your block doesn't make network calls or use external resources, you might not need a mock.
5. **Consider edge cases**: Include tests for potential error conditions in your `run` method.
6. **Update tests when changing block behavior**: If you modify your block, ensure the tests are updated accordingly.
By following these steps, you can create new blocks that extend the functionality of the AutoGPT Agent Server.
## Blocks we want to see
Below is a list of blocks that we would like to see implemented in the AutoGPT Agent Server. If you're interested in contributing, feel free to pick one of these blocks or suggest your own by editing [docs/content/server/new_blocks.md](https://github.com/Significant-Gravitas/AutoGPT/edit/master/docs/content/server/new_blocks.md) and opening a pull request.
If you would like to implement one of these blocks, open a pull request and we will start the review process.
### Consumer Services/Platforms
- Google sheets - Read/Append [Read in Progress](https://github.com/Significant-Gravitas/AutoGPT/pull/7521)
- Email - Read/Send with Gmail, Outlook, Yahoo, Proton, etc
- Calendar - Read/Write with Google Calendar, Outlook Calendar, etc
- Home Assistant - Call Service, Get Status
- Dominos - Order Pizza, Track Order
- Uber - Book Ride, Track Ride
- Notion - Create/Read Page, Create/Append/Read DB
- Google drive - read/write/overwrite file/folder
### Social Media
- Twitter - Post, Reply, Get Replies, Get Comments, Get Followers, Get Following, Get Tweets, Get Mentions
- Instagram - Post, Reply, Get Comments, Get Followers, Get Following, Get Posts, Get Mentions, Get Trending Posts
- TikTok - Post, Reply, Get Comments, Get Followers, Get Following, Get Videos, Get Mentions, Get Trending Videos
- LinkedIn - Post, Reply, Get Comments, Get Followers, Get Following, Get Posts, Get Mentions, Get Trending Posts
- YouTube - Transcribe Videos/Shorts, Post Videos/Shorts, Read/Reply/React to Comments, Update Thumbnails, Update Description, Update Tags, Update Titles, Get Views, Get Likes, Get Dislikes, Get Subscribers, Get Comments, Get Shares, Get Watch Time, Get Revenue, Get Trending Videos, Get Top Videos, Get Top Channels
- Reddit - Post, Reply, Get Comments, Get Followers, Get Following, Get Posts, Get Mentions, Get Trending Posts
- Treatwell (and related Platforms) - Book, Cancel, Review, Get Recommendations
- Substack - Read/Subscribe/Unsubscribe, Post/Reply, Get Recommendations
- Discord - Read/Post/Reply, Moderation actions
- GoodReads - Read/Post/Reply, Get Recommendations
### E-commerce
- Airbnb - Book, Cancel, Review, Get Recommendations
- Amazon - Order, Track Order, Return, Review, Get Recommendations
- eBay - Order, Track Order, Return, Review, Get Recommendations
- Upwork - Post Jobs, Hire Freelancer, Review Freelancer, Fire Freelancer
### Business Tools
- External Agents - Call other agents similar to AutoGPT
- Trello - Create/Read/Update/Delete Cards, Lists, Boards
- Jira - Create/Read/Update/Delete Issues, Projects, Boards
- Linear - Create/Read/Update/Delete Issues, Projects, Boards
- Excel - Read/Write/Update/Delete Rows, Columns, Sheets
- Slack - Read/Post/Reply to Messages, Create Channels, Invite Users
- ERPNext - Create/Read/Update/Delete Invoices, Orders, Customers, Products
- Salesforce - Create/Read/Update/Delete Leads, Opportunities, Accounts
- HubSpot - Create/Read/Update/Delete Contacts, Deals, Companies
- Zendesk - Create/Read/Update/Delete Tickets, Users, Organizations
- Odoo - Create/Read/Update/Delete Sales Orders, Invoices, Customers
- Shopify - Create/Read/Update/Delete Products, Orders, Customers
- WooCommerce - Create/Read/Update/Delete Products, Orders, Customers
- Squarespace - Create/Read/Update/Delete Pages, Products, Orders
## Agent Templates we want to see
### Data/Information
- Summarize top news of today, of this week, this month via Apple News or other large media outlets BBC, TechCrunch, hackernews, etc
- Create, read, and summarize substack newsletters or any newsletters (blog writer vs blog reader)
- Get/read/summarize the most viral Twitter, Instagram, TikTok (general social media accounts) of the day, week, month
- Get/Read any LinkedIn posts or profile that mention AI Agents
- Read/Summarize discord (might not be able to do this because you need access)
- Read / Get most read books in a given month, year, etc from GoodReads or Amazon Books, etc
- Get dates for specific shows across all streaming services
- Suggest/Recommend/Get most watched shows in a given month, year, etc across all streaming platforms
- Data analysis from xlsx data set
- Gather via Excel or Google Sheets data > Sample the data randomly (sample block takes top X, bottom X, randomly, etc) > pass that to LLM Block to generate a script for analysis of the full data > Python block to run the script> making a loop back through LLM Fix Block on error > create chart/visualization (potentially in the code block?) > show the image as output (this may require frontend changes to show)
- Tiktok video search and download
### Marketing
- Portfolio site design and enhancements

View File

@@ -0,0 +1,37 @@
# Running Ollama with AutoGPT
Follow these steps to set up and run Ollama and your AutoGPT project:
1. **Run Ollama**
- Open a terminal
- Execute the following command:
```
ollama run llama3
```
- Leave this terminal running
2. **Run the Backend**
- Open a new terminal
- Navigate to the backend directory in the AutoGPT project:
```
cd rnd/autogpt_server/
```
- Start the backend using Poetry:
```
poetry run app
```
3. **Run the Frontend**
- Open another terminal
- Navigate to the frontend directory in the AutoGPT project:
```
cd rnd/autogpt_builder/
```
- Start the frontend development server:
```
npm run dev
```
4. **Choose the Ollama Model**
- Add LLMBlock in the UI
- Choose the last option in the model selection dropdown

View File

@@ -0,0 +1,130 @@
# Setting up the server
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
## Introduction
This guide will help you set up the server and builder for the project.
<!-- The video is listed in the root Readme.md of the repo -->
We also offer this in video format. You can check it out [here](https://github.com/Significant-Gravitas/AutoGPT#how-to-get-started).
!!! warning
**DO NOT FOLLOW ANY OUTSIDE TUTORIALS AS THEY WILL LIKELY BE OUT OF DATE**
## Prerequisites
To set up the server, you need to have the following installed:
- [Node.js](https://nodejs.org/en/)
- [Python 3.10](https://www.python.org/downloads/)
### Checking if you have Node.js and Python installed
You can check if you have Node.js installed by running the following command:
```bash
node -v
```
You can check if you have Python installed by running the following command:
```bash
python --version
```
Once you have Node.js and Python installed, you can proceed to the next step.
### Installing the package managers
In order to install the dependencies, you need to have the appropriate package managers installed.
- Installing Yarn
Yarn is a package manager for Node.js. You can install it by running the following command:
```bash
npm install -g yarn
```
- Installing Poetry
Poetry is a package manager for Python. You can install it by running the following command:
```bash
pip install poetry
```
- Installing Docker and Docker Compose
Docker containerizes applications, while Docker Compose orchestrates multi-container Docker applications.
If you need assistance installing Docker, follow the steps here: https://docs.docker.com/desktop/
If you need assistance installing Docker Compose, follow the steps here: https://docs.docker.com/compose/install/
### Installing the dependencies
Once you have installed Yarn and Poetry, you can run the following command to install the dependencies:
```bash
cd rnd/autogpt_server
cp .env.example .env
poetry install
```
**In another terminal**, run the following command to install the dependencies for the frontend:
```bash
cd rnd/autogpt_builder
yarn install
```
Once you have installed the dependencies, you can proceed to the next step.
### Setting up the database
To set up the database, run the following commands in the same terminal where you ran the `poetry install` command:
```sh
docker compose up postgres redis -d
poetry run prisma migrate dev
```
After deploying the migration, you need to generate the Prisma database model to ensure that the database schema is correctly mapped to your codebase, allowing the application to interact with the database properly:
```bash
poetry run prisma generate
```
Without running this command, the necessary Python modules (prisma.models) won't be available, leading to a `ModuleNotFoundError`.
### Running the server
To run the server, run the following commands in the same terminal where you ran the `poetry install` command:
```bash
cp supabase/docker/.env.example .env
docker compose -f docker-compose.combined.yml build
docker compose -f docker-compose.combined.yml up -d
```
In the other terminal, from `autogpt_builder`, you can run the following command to start the frontend:
```bash
yarn dev
```
### Checking if the server is running
You can check if the server is running by visiting [http://localhost:3000](http://localhost:3000) in your browser.
### Notes:
By default, the daemons for the different services run on the following ports:
- Execution Manager Daemon: 8002
- Execution Scheduler Daemon: 8003
- Rest Server Daemon: 8004

View File

@@ -5,6 +5,13 @@ docs_dir: content
nav:
- Home: index.md
- The AutoGPT Server 🆕:
- Build your own Blocks: server/new_blocks.md
- Setup: server/setup.md
- Advanced Setup: server/advanced_setup.md
- Using Ollama: server/ollama.md
- Using D-ID: server/d_id.md
- AutoGPT Agent:
- Introduction: AutoGPT/index.md
- Setup:
@@ -15,7 +22,6 @@ nav:
- Options: AutoGPT/configuration/options.md
- Search: AutoGPT/configuration/search.md
- Voice: AutoGPT/configuration/voice.md
- Image Generation: AutoGPT/configuration/imagegen.md
- Usage: AutoGPT/usage.md
- Help us improve AutoGPT:
- Share your debug logs with us: AutoGPT/share-your-logs.md
@@ -41,7 +47,7 @@ nav:
- Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/frontend/README.md
- Docs: docs/index.md
# - Challenges:
# - Introduction: challenges/introduction.md
# - List of Challenges:

3
forge/.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1,3 @@
{
"python.analysis.typeCheckingMode": "basic",
}

41
forge/conftest.py Normal file
View File

@@ -0,0 +1,41 @@
import uuid
from pathlib import Path
import pytest
from forge.file_storage.base import FileStorage, FileStorageConfiguration
from forge.file_storage.local import LocalFileStorage
pytest_plugins = [
"tests.vcr",
]
@pytest.fixture(scope="session", autouse=True)
def load_env_vars():
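# Load variables from a .env file into the environment once per test session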
from dotenv import load_dotenv
load_dotenv()
@pytest.fixture()
def tmp_project_root(tmp_path: Path) -> Path:
return tmp_path
@pytest.fixture()
def app_data_dir(tmp_project_root: Path) -> Path:
dir = tmp_project_root / "data"
dir.mkdir(parents=True, exist_ok=True)
return dir
@pytest.fixture()
def storage(app_data_dir: Path) -> FileStorage:
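# Provide a LocalFileStorage instance rooted in a unique temporary directory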
storage = LocalFileStorage(
FileStorageConfiguration(
root=Path(f"{app_data_dir}/{str(uuid.uuid4())}"), restrict_to_root=False
)
)
storage.initialize()
return storage

View File

@@ -27,7 +27,7 @@ d88P 888 "Y88888 "Y888 "Y88P" "Y8888P88 888 888
888 "Y88P" 888 "Y88888 "Y8888
888
Y8b d88P
"Y88P" v0.1.0
"Y88P" v0.2.0
\n"""
if __name__ == "__main__":

View File

@@ -18,12 +18,14 @@ from typing import (
)
from colorama import Fore
from pydantic import BaseModel, Field, validator
from pydantic import BaseModel, Field, ValidationInfo, field_validator
from pydantic_core import from_json, to_json
from forge.agent import protocols
from forge.agent.components import (
AgentComponent,
ComponentEndpointError,
ConfigurableComponent,
EndpointPipelineError,
)
from forge.config.ai_directives import AIDirectives
@@ -70,10 +72,10 @@ class BaseAgentConfiguration(SystemConfiguration):
`0` to stop the agent.
"""
cycles_remaining = cycle_budget
cycles_remaining: int = cycle_budget
"""The number of cycles remaining within the `cycle_budget`."""
cycle_count = 0
cycle_count: int = 0
"""The number of cycles that the agent has run since its initialization."""
send_token_limit: Optional[int] = None
@@ -82,14 +84,11 @@ class BaseAgentConfiguration(SystemConfiguration):
defaults to 75% of `llm.max_tokens`.
"""
summary_max_tlength: Optional[int] = None
# TODO: move to ActionHistoryConfiguration
@validator("use_functions_api")
def validate_openai_functions(cls, v: bool, values: dict[str, Any]):
if v:
smart_llm = values["smart_llm"]
fast_llm = values["fast_llm"]
@field_validator("use_functions_api")
def validate_openai_functions(cls, value: bool, info: ValidationInfo):
if value:
smart_llm = info.data["smart_llm"]
fast_llm = info.data["fast_llm"]
assert all(
[
not any(s in name for s in {"-0301", "-0314"})
@@ -99,7 +98,7 @@ class BaseAgentConfiguration(SystemConfiguration):
f"Model {smart_llm} does not support OpenAI Functions. "
"Please disable OPENAI_FUNCTIONS or choose a suitable model."
)
return v
return value
class BaseAgentSettings(SystemSettings):
@@ -272,6 +271,28 @@ class BaseAgent(Generic[AnyProposal], metaclass=AgentMeta):
raise e
return method_result
def dump_component_configs(self) -> str:
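# Serialize the configs of all ConfigurableComponents to JSON, keyed by config class name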
configs: dict[str, Any] = {}
for component in self.components:
if isinstance(component, ConfigurableComponent):
config_type_name = component.config.__class__.__name__
configs[config_type_name] = component.config
return to_json(configs).decode()
def load_component_configs(self, serialized_configs: str):
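# Update matching component configs in place from JSON produced by dump_component_configs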
configs_dict: dict[str, dict[str, Any]] = from_json(serialized_configs)
for component in self.components:
if not isinstance(component, ConfigurableComponent):
continue
config_type = type(component.config)
config_type_name = config_type.__name__
if config_type_name in configs_dict:
# Parse the serialized data and update the existing config
updated_data = configs_dict[config_type_name]
data = {**component.config.model_dump(), **updated_data}
component.config = component.config.__class__(**data)
def _collect_components(self):
components = [
getattr(self, attr)
@@ -325,7 +346,7 @@ class BaseAgent(Generic[AnyProposal], metaclass=AgentMeta):
copied_item = item.copy()
elif isinstance(item, BaseModel):
# Deep copy for Pydantic models (deep=True to also copy nested models)
copied_item = item.copy(deep=True)
copied_item = item.model_copy(deep=True)
else:
# Deep copy for other objects
copied_item = copy.deepcopy(item)

Some files were not shown because too many files have changed in this diff