Compare commits

...

349 Commits

Author SHA1 Message Date
Toran Bruce Richards
1a609f8cd9 Adds outdated and potentially non-functional example of how I'd implemented self-feedback.
This is messy, flawed and shouldn't be merged, just publishing for inspiration.
2023-05-07 13:11:17 +12:00
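
For context, the self-feedback idea is to have the agent critique its own reasoning before executing a command. A minimal sketch, assuming the OpenAI 0.x chat API of the period; the helper name and prompt wording are illustrative, not the code from this commit:

import openai

def get_self_feedback(thoughts: dict, llm_model: str) -> str:
    # Ask the model, acting as a reviewer, to critique the agent's own
    # reasoning and plan before the next command is executed.
    reviewer_prompt = (
        "You are a reviewer. Given the reasoning and plan below, give a "
        "short critique: does the plan serve the agent's role and goals?"
    )
    content = (
        f"Reasoning: {thoughts.get('reasoning', '')}\n"
        f"Plan: {thoughts.get('plan', '')}"
    )
    response = openai.ChatCompletion.create(
        model=llm_model,
        messages=[
            {"role": "system", "content": reviewer_prompt},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message["content"]
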
Toran Bruce Richards
8b82421b9c Run Black and Isort 2023-04-30 17:17:18 +12:00
Toran Bruce Richards
75cc71f8d3 Tweak memory summarisation prompt 2023-04-30 16:44:23 +12:00
Toran Bruce Richards
f287282e8c fix broken partial commit. 2023-04-30 16:43:49 +12:00
Toran Bruce Richards
2a93aff512 Remove thoughts from memory summarisation. 2023-04-30 16:42:57 +12:00
Toran Bruce Richards
6d1653b84f Change "system" role to "Your Computer". 2023-04-30 15:55:53 +12:00
Toran Bruce Richards
a7816b8c79 Merge branch 'summary_memory' of https://github.com/torantulino/auto-gpt into summary_memory 2023-04-30 14:54:34 +12:00
Toran Bruce Richards
21913c4733 removes current memory global 2023-04-30 14:52:59 +12:00
Toran Bruce Richards
9d9c66d50f Adds check for empty full_message_history 2023-04-30 14:43:31 +12:00
Toran Bruce Richards
a00a7a2bd0 Fix. Update last_memory_index 2023-04-30 14:27:31 +12:00
Toran Bruce Richards
d6cb10432b Provide default new_events value when empty. 2023-04-30 14:26:36 +12:00
Toran Bruce Richards
0bea5e38a4 Replace "assistant" role with "you" when submitting to memory agent. 2023-04-30 14:26:09 +12:00
Toran Bruce Richards
88b2d5fb2d Remove global pre_index from summary_memory. 2023-04-30 14:25:06 +12:00
Toran Bruce Richards
f1032926cc Update autogpt/memory_management/summary_memory.py 2023-04-30 00:19:35 +12:00
Toran Bruce Richards
e7ad51ce42 Update autogpt/memory_management/summary_memory.py 2023-04-30 00:19:29 +12:00
Toran Bruce Richards
a3522223d9 Run black formatter 2023-04-29 23:27:03 +12:00
Toran Bruce Richards
4e3035efe4 Integrate summary memory with autogpt system 2023-04-29 23:26:14 +12:00
Toran Bruce Richards
a8cbf51489 Run isort. 2023-04-29 23:22:31 +12:00
Toran Bruce Richards
317361da8c Black formatting 2023-04-29 23:22:08 +12:00
Toran Bruce Richards
991bc77e0b Add complete typing and docstrings 2023-04-29 23:21:21 +12:00
Toran Bruce Richards
83357f6c2f Remove test prints 2023-04-29 23:13:48 +12:00
Toran Bruce Richards
acf48d2d4d Add running summary memory functions. 2023-04-29 23:10:32 +12:00
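
The running-summary approach folds each batch of new events into one continuously updated summary instead of keeping the whole history in context. A sketch under the same assumptions (OpenAI 0.x chat API; the prompt wording is illustrative, not the committed text):

import openai

def update_running_summary(current_summary: str, new_events: list) -> str:
    # Fold the latest events into a single running summary so the full
    # message history doesn't have to stay in the context window.
    prompt = (
        "Your task is to create a concise running summary of actions and "
        "information results, combining the previous summary with the "
        "latest developments.\n\n"
        f"Summary so far:\n{current_summary}\n\n"
        f"Latest developments:\n{new_events}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message["content"]
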
James Collins
b8478a96ae Feature/llm data structs (#3486)
* Organize all the llm stuff into a subpackage

* Add structs for interacting with llms
2023-04-28 15:04:31 -07:00
Deso
c7d75643d3 Architecture-agnostic dev-container patch, now with Redis 😍 (#3102)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-28 23:39:52 +02:00
BillSchumacher
cfc7817869 update pyproject (#2757)
* update pyproject

* python bump

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 14:25:41 -07:00
James Collins
92009ceb32 More graceful browsing error handling (#3494) 2023-04-28 22:12:47 +01:00
merwanehamadi
aa3e37ac14 Fix memory by adding it only when context window full (#3469)
* Fix memory by adding it only when context window full

* clean json utils
2023-04-28 21:07:49 +01:00
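
The fix defers long-term memory writes until the context window actually runs out of room. A rough sketch of that loop, with hypothetical helper names:

def trim_context(full_message_history: list, memory, count_tokens, token_limit: int) -> None:
    # Only when the context window is full do the oldest messages get
    # moved out of the live history and into long-term memory.
    while count_tokens(full_message_history) > token_limit:
        oldest = full_message_history.pop(0)
        memory.add(f"{oldest['role']}: {oldest['content']}")
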
James Collins
3b74d2150e Organize all the llm stuff into a subpackage (#3436) 2023-04-28 12:00:54 -07:00
Media
ee4043ae19 Refactor test_chat to use pytest instead of unittest (#3484)
* refactor_for_pytest

* formatting

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 11:27:52 -07:00
k-boikov
c1f1da27e7 move remove_color_codes to utils and add tests (#3260)
* move remove_color_codes to utils and add tests

* Fix for ai_settings goals loaded as list(dict)

Some ai_settings formats can cause goals to load as list(dict)
not list(str)

Refactor code in utils.py to explicitly convert input type to string in
remove_color_codes() function.

- Updated remove_color_codes function to convert input argument
 to string type explicitly to avoid unexpected type errors.
- Test case added to check conversion of dict to string in
remove_color_codes function.

* Update tests/test_utils.py

Co-authored-by: James Collins <collijk@uw.edu>

* move remove_color_codes fn and tests to proper files

---------

Co-authored-by: Luke Kyohere <lkyohere@mfsafrica.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 11:13:30 -07:00
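
The str coercion described above guards against goals parsed as dicts. A minimal sketch of such a function, using a standard ANSI-escape regex (not necessarily the committed pattern):

import re

# Matches ANSI escape sequences (colors, cursor movement, etc.)
ANSI_ESCAPE = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")

def remove_color_codes(s) -> str:
    # Coerce to str first: per the commit, some ai_settings formats load
    # goals as dicts, which used to raise unexpected type errors here.
    if not isinstance(s, str):
        s = str(s)
    return ANSI_ESCAPE.sub("", s)
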
Media
aebe891489 Remove unittest in favor of pytest in the test_token_counter module (#3453)
* init remove unittest for pytest

* docstrings

* black

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 09:48:30 -07:00
Media
cf5fdabdfc Removing unitest in favor of pytest from test_config.py (#3417)
* removing unitest in favor of pytest

* remove singleton test and unnecessary fixture

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 09:32:11 -07:00
rickythefox
20ef130341 Add tests for code/shell execution & improve config fixture (#1268)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-28 14:51:29 +02:00
Johnny C
1772a01d04 Fix URL to docs in API throttling message (#3201)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 23:43:56 +02:00
Eddie Cohen
5ce6da95fc Make y/n configurable (#3178)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 21:26:47 +02:00
James Collins
94dc6f19aa Add a regression test for the embedding (#3422) 2023-04-27 11:48:18 -07:00
Dhruv Awasthi
427b8648ee Fix README: remove redundant "Disclaimer" (#3391)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-27 19:24:28 +01:00
Iliass
4b54e3c6d8 Update broken link (#3416)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-27 19:18:44 +01:00
Irmius
6b4ad1f933 Fix browse_website headless mode for Firefox (#2816)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 19:32:31 +02:00
Reinier van der Leer
3d89ed1787 Fix imports, type hints and fixtures for goal oriented tests (#3415) 2023-04-27 19:16:56 +02:00
merwanehamadi
adbb47fb65 scrape text regression test (#3387)
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-27 09:27:15 -07:00
Montana Flynn
7cd76b8d8e Add makedirs to file operations (#3289)
* Add makedirs to file operations

* Add new directory tests for file operations

* Fix wrong setUp test error

* Simplify makedirs and use correct nested path

* Fix linter error

---------

Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-27 09:12:24 -07:00
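
The makedirs change boils down to creating missing parent directories before writing. A simplified sketch of the idea; the real file_operations command also logs and de-duplicates writes:

import os

def write_to_file(filename: str, text: str) -> str:
    # Create any missing parent directories so nested paths no longer fail.
    directory = os.path.dirname(filename)
    if directory:
        os.makedirs(directory, exist_ok=True)
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)
    return "File written to successfully."
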
Reinier van der Leer
9e17a304de Minor improvements to the docs for voice config and testing (#3407) 2023-04-27 08:58:35 -07:00
Reinier van der Leer
7a161cc0bd Add .gitattributes (#3402) 2023-04-27 06:28:18 -07:00
BillSchumacher
d8c16de123 The unlooping and fixing of file execution. (#3368)
* The unlooping and fixing of file execution.

* lint

* Use static random seed during testing. remove unused import.

* Fix bug

* Actually fix bug.

* lint

* Unloop a bit more and fix json.

* Fix another bug.

* lint.

---------

Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-04-26 21:07:28 -07:00
chyezh
65b6c2706e fix connection bug for zilliz uri on milvus (#3278)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-04-26 18:57:29 -07:00
Robin Richtsfeld
76bd192f82 Set vcr_config scope to "session" (#3361)
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 20:55:01 -05:00
merwanehamadi
02f546d2bc Run the integration tests in the CI pipeline BUT without API keys (#3359)
* integration tests in ci pipeline

* Update CONTRIBUTING.md

Co-authored-by: Reinier van der Leer <github@pwuts.nl>

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-26 20:45:03 -05:00
Eddie Cohen
3b56716a68 Hotfix/validate url strips query params (#3370)
* reconstruct url in sanitize

* tests for url validation

---------

Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 20:20:15 -05:00
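
The hotfix rebuilds the URL from its parsed components so sanitization no longer drops the query string. A sketch of that reconstruction (the fragment is deliberately discarded here; whether the committed code keeps it is not shown):

from urllib.parse import urlparse, urlunparse

def sanitize_url(url: str) -> str:
    # Rebuild the URL from all of its parsed parts so the query
    # string survives sanitization (the bug was dropping it).
    parts = urlparse(url)
    return urlunparse(
        (parts.scheme, parts.netloc, parts.path, parts.params, parts.query, "")
    )
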
merwanehamadi
a3195d84d3 remove do nothing (#3369) 2023-04-26 19:55:02 -05:00
Reinier van der Leer
bfaf36099e Fix(workspace) root resolution (#3365) 2023-04-26 16:43:21 -07:00
merwanehamadi
7a006afb17 fix cassettes recording (#3342) 2023-04-26 13:11:08 -07:00
WladBlank
cd8fdb31ef Chat plugin capability (#2929)
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 15:08:39 -05:00
karlivory
a0cfdb0830 fix set_total_budget docstring (#3288) 2023-04-26 12:18:12 -07:00
James Collins
83f11465f5 Clean up image generation tests (#3338) 2023-04-26 12:07:28 -07:00
Reinier van der Leer
76df14b831 Fix docs (#3336)
* Fix docs

* Add short section about testing to contribution guide

* Add back note for voice configuration

* Remove LICENSE symlink from docs/

* Fix site_url in mkdocs.yml
2023-04-26 19:14:14 +01:00
merwanehamadi
109fa04c7c test image gen (#3287) 2023-04-26 10:23:05 -07:00
merwanehamadi
a6355a6bc8 use pytest-recording with VCR (#3283) 2023-04-26 09:57:05 -07:00
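
pytest-recording wraps VCR.py behind a pytest marker: the first run records HTTP traffic to a cassette, later runs replay it offline. A minimal usage sketch (the URL is illustrative):

import pytest
import requests

@pytest.mark.vcr
def test_fetch_page():
    # First run (pytest --record-mode=once) writes a cassette;
    # later runs replay it without network access.
    response = requests.get("https://example.com")
    assert response.status_code == 200
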
James Collins
0ff471a49a Have api manager use singleton pattern (#3269)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-26 11:37:49 -05:00
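
A metaclass-based sketch of the singleton pattern, simplified from the repo's version:

class Singleton(type):
    """Metaclass that returns one shared instance per class."""

    _instances: dict = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class ApiManager(metaclass=Singleton):
    def __init__(self):
        self.total_prompt_tokens = 0
        self.total_completion_tokens = 0
        self.total_cost = 0.0

Every ApiManager() call then returns the same shared instance, so token and cost counters accumulate globally.
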
merwanehamadi
4241fbbbf0 mock openai in test image gen (#3285) 2023-04-26 09:11:31 -07:00
✔️ITtechtor
3ae6c1b03f Update installation.md (#3325) 2023-04-26 08:50:43 -07:00
vlad
1e71f952f9 Codecov - don't fail pipelines for project cov changes (#3327)
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-26 09:54:22 -05:00
apurvsibal
749b1bbfc0 Fix(docs) requirements link in installation guide (#3264) 2023-04-26 13:59:53 +02:00
Reinier van der Leer
265a23212e Fix(docs) Contributing, CoC and License links (#3308) 2023-04-26 11:40:37 +01:00
Richard Beales
f0f34030a0 Fix docs alignment (#3302) 2023-04-26 02:52:33 -05:00
Robin Richtsfeld
d75379358f Fix get_ada_embedding return type (#3263) 2023-04-25 16:52:38 -07:00
Nicholas Tindle
8670b3039e Fix PR size autolabeler message (#3194) 2023-04-26 00:25:38 +02:00
James Collins
eec86a7b82 Load .env in package init (#3251) 2023-04-25 14:53:13 -07:00
Peter Svensson
fac8f7da21 adding probably erroneously removed return value from execut_shell, giving 'None' in return always otherise - not ideal (#3212)
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-25 13:32:39 -07:00
James Collins
6fbac455d4 Remove import time loading of config from llm_utils (#3245) 2023-04-25 12:10:12 -07:00
Richard Beales
1806fc683d Fix readme centering (#3243) 2023-04-25 19:50:22 +01:00
James Collins
f962939737 Use explicit API keys when querying openai rather than import time manipulation of the package attributes (#3241) 2023-04-25 11:38:06 -07:00
James Collins
2619740daa Extract OpenAI API retry handler and unify ADA embeddings calls. (#3191)
* Extract retry logic, unify embedding functions

* Add some docstrings

* Remove embedding creation from API manager

* Add test suite for retry handler

* Make api manager fixture

* Fix typing

* Streamline tests
2023-04-25 11:12:24 -07:00
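
The extracted handler is essentially a retry decorator with exponential backoff around OpenAI's rate-limit errors. A sketch, assuming the openai 0.x exception classes:

import functools
import time

import openai

def retry_api(num_retries: int = 10, backoff_base: float = 2.0):
    # Retry an OpenAI call on rate limits, backing off exponentially.
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            for attempt in range(1, num_retries + 1):
                try:
                    return func(*args, **kwargs)
                except openai.error.RateLimitError:
                    if attempt == num_retries:
                        raise
                    time.sleep(backoff_base ** attempt)
        return wrapped
    return decorator
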
Reinier van der Leer
940b115f0a remove plugin notice from CONTRIBUTING.md (#3227) 2023-04-25 10:05:58 -07:00
merwanehamadi
58d84787f3 Test Agent.create_agent_feedback (#3209) 2023-04-25 08:41:57 -07:00
Peter Petermann
6fc6ea69d2 this changes it so the file from config is used, rather than a hardcoded name that might not exist (#3189) 2023-04-25 07:56:59 +01:00
Toran Bruce Richards
93bbd13a34 Update README.md 2023-04-25 17:36:41 +12:00
AbTrax
ae31dd4bb1 Feature: Added Self Feedback (#3013)
* Feature: Added Self Feedback

* minor fix: complied with flake8

* Add: Self Feedback To Usage.md

* Add: role/goal alignment

* Added: warning to usage.md

* fix: Formatted with black

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 06:28:06 +01:00
Toran Bruce Richards
411a13a0d4 Update README.md 2023-04-25 17:27:29 +12:00
Nicholas Tindle
eb0e96715e docs fix to image generation (#3186) 2023-04-25 06:03:31 +01:00
James Collins
7e5afd8744 Refactor/decouple logger from global configuration (#3171)
* Decouple logging from the global configuration

* Configure logging first

* Clean up global voice engine creation

* Remove class vars from logger

* Remove duplicate implementation of

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:41:30 +01:00
✔️ITtechtor
960eb4f367 Update installation.md (#3166) 2023-04-25 05:36:03 +01:00
Duong HD
956d9fdcd6 Add a little more descriptive installation instruction (#3180)
* add Dev Container installation instruction to installation.md

* add Dev Container installation instruction to installation.md

* Update installation.md

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:34:59 +01:00
Lawrence Neal
140fd6f3bf Ensure Fore.RED is followed by Fore.RESET (#3182)
This properly resets the terminal, ensuring that the red text is red and
the normal text remains unaffected.

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:32:59 +01:00
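
The bug pattern looks like this with colorama: emitting Fore.RED without a matching reset leaves all later terminal output red.

from colorama import Fore

# Without the trailing Fore.RESET, every line printed afterwards
# would keep rendering in red.
print(f"{Fore.RED}Error: rate limit reached.{Fore.RESET} Retrying shortly...")
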
Deso
3d47b47901 Update bulletin to warn about deprecation (#3181) 2023-04-25 05:28:46 +01:00
Nicholas Tindle
c7f4734826 Update ci.yml (#3179) 2023-04-25 03:53:06 +01:00
Daniel Chen
45f9b570a2 Re-add install-plugin-deps to CLI (#3170) 2023-04-24 20:11:19 -05:00
Daniel Chen
29284a5460 Add option to install plugin dependencies (#3068)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-24 17:42:10 -05:00
James Collins
dfcbf6eee6 Refactor/move singleton out of config module (#3161) 2023-04-24 17:24:57 -05:00
James Collins
83b91a31bc Remove dead permanent memory module (#3145)
* Remove dead permanent memory module

* Delete sqlite db that snuck in
2023-04-24 21:48:37 +01:00
James Collins
b984f985bc Hotfix/global agent manager workaround (#3157)
* Add indirection layer to entry point

* Get around singleton pattern for AgentManager to fix tests
2023-04-24 21:27:31 +01:00
Lei Zhang
a5cc67badd annotation fix (#3018)
* annotation fix

* fix param name and type

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-24 21:08:02 +01:00
James Collins
8bf4eb7e90 Merge pull request #3152 from collijk/refactor/add-indirection-layer-around-entry-point
Add indirection layer to entry point
2023-04-24 12:32:59 -07:00
Nicholas Tindle
128d83a0c8 Merge branch 'master' into refactor/add-indirection-layer-around-entry-point 2023-04-24 14:28:56 -05:00
Media
5de1025520 Agent and agent manager tests (#3116)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability to set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* simple tests for agent and manager

* isort

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-24 14:19:42 -05:00
James Collins
4a206168a7 Merge branch 'master' into refactor/add-indirection-layer-around-entry-point 2023-04-24 12:14:59 -07:00
James Collins
5f646498c4 Add indirection layer between cli and application start 2023-04-24 12:12:14 -07:00
James Collins
06e81b7dfd Merge pull request #3147 from collijk/bugfix/error-on-null-bytes-in-path-windows
Error if null bytes are included in the path on windows
2023-04-24 12:02:41 -07:00
Nicholas Tindle
97d2f417c7 Merge branch 'master' into bugfix/error-on-null-bytes-in-path-windows 2023-04-24 13:55:41 -05:00
YOUNESS ZEMZGUI
45f2513a73 Adjust test_json_parser file (#1935)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 19:54:46 +02:00
James Collins
1f58ca47b5 Merge branch 'master' into bugfix/error-on-null-bytes-in-path-windows 2023-04-24 10:47:46 -07:00
James Collins
17819e2a55 More robust null byte checking 2023-04-24 10:28:51 -07:00
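
The check itself is small: reject null bytes before the path ever reaches the OS. A sketch of the idea (the committed helper lives elsewhere and handles more cases):

from pathlib import Path

def to_safe_path(path: str) -> Path:
    # Reject embedded null bytes up front; passing them through to the
    # filesystem layer fails with an opaque error, notably on Windows.
    if "\0" in path:
        raise ValueError(f"embedded null byte in path: {path!r}")
    return Path(path)
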
Reinier van der Leer
ffdc652605 Clean up GitHub Workflows (#3059)
* initial cleanup of github workflows

* only run pr-label workflow on push to master

* move docker ci/release summaries to scripts

* add XS label for PR's under 2 lines

* draft test job for Docker CI

* fix & activate Docker CI test job

* add debug step to docker CI

* fix Docker CI test container env

* Docker CI build matrix

* fixup build summaries

* fix pipes in summary

* optimize Dockerfile for layer caching

* more markdown escaping

* add gha cache scopes

* add Docker CI cache clean workflow
2023-04-24 18:03:21 +01:00
k-boikov
3886afc825 fix test_search_files for windows (#3073)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-24 17:42:08 +01:00
fluxism
cade788a7e Add <reason> arg to do_nothing command (#3090)
* Add <reason> arg to do_nothing command

* do_nothing returns reason arg
2023-04-24 16:12:15 +01:00
Reinier van der Leer
9c60eecce6 Improve docker setup & config (#1843)
* Improve docker setup & config

* fix(browsing): Selenium needs access to home directory

* fix(docker): allow overriding memory backend settings

* simplify Dockerfile and docker-compose config

* add .dockerignore

* adjust Docker CI with release build type arg

* replace Chrome by Chromium in devcontainer

* update docs

* update bulletin

* use preinstalled chromedriver in web_selenium.py

* update installation.md

* fix code blocks for mkdocs

* fix links to docs
2023-04-24 14:27:53 +01:00
Andres Caicedo
f8dfedf1c6 Add function and class descriptions to tests (#2715)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 14:55:49 +02:00
Eddie Cohen
40a75c804c Validate URLs in web commands before execution (#2616)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 12:33:44 +02:00
Soheil Sam Yasrebi
794a164098 handle API timeouts (#3024) 2023-04-24 08:26:14 +01:00
scout9ll
89125376ba Fixed incorrect comment: Clear memory instead of Redis (#3092)
Co-authored-by: liaolin.qiu <liaolin.qiu@qingteng.cn>
2023-04-24 08:07:08 +01:00
Pi
efc17f21b9 Merge pull request #3089 from collijk/hotfix/config-sequencing-bug
Resolve sequencing issue in global state management
2023-04-24 04:53:51 +01:00
James Collins
7ddc44d48e Resolve sequencing issue in global state management 2023-04-23 20:44:53 -07:00
James Collins
e8473d4920 Merge pull request #3066 from collijk/bugfix/make-local-memory-json-when-it-doesnt-exist
Bugfix/make local memory json when it doesnt exist
2023-04-23 17:49:36 -07:00
James Collins
91aa40e0df Remove another global memory access 2023-04-23 16:59:49 -07:00
James Collins
882a9086a8 Merge branch 'bugfix/make-local-memory-json-when-it-doesnt-exist' of github.com:collijk/Auto-GPT into bugfix/make-local-memory-json-when-it-doesnt-exist 2023-04-23 16:55:26 -07:00
James Collins
43fa67ca81 Remove unnecessary memory call 2023-04-23 16:54:32 -07:00
James Collins
715916a5ba Merge branch 'master' into bugfix/make-local-memory-json-when-it-doesnt-exist 2023-04-23 16:44:59 -07:00
James Collins
a28b8906a6 Add tests in pytest 2023-04-23 16:40:53 -07:00
James Collins
aedd288dbe Refactor/collect embeddings code (#3060)
* Collect all embedding code into a single module

* Collect all embedding code into a single module

* actually, llm_utils is a better place

* Oh, and remove the module now that we don't use it

---------

Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-23 17:50:50 -05:00
James Collins
680c7b5aaa Make local json cache when it doesn't exist 2023-04-23 15:43:04 -07:00
Media
374f543bea Perm memory test cases (#2996)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability to set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* perm memory tests

* perm memory tests

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-23 16:50:15 -05:00
coditamar
ec71075bfe Add tests for json_utils.json_fix_llm (#2952)
* config.py: make load_dotenv(override=True)

* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability to set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add test for fix_json_using_multiple_techniques

* style

* style

* mocking try_ai_fix to avoid call_ai_function

* black style

* mock try_ai_fix to avoid calling the AI model

* removed mock, as we can add @requires_api_key("OPEN_API_KEY")

* style

* reverse merge conflict related files and changes

* bring back the mock for try_ai_fix

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-23 16:29:40 -05:00
Vwing
d6ef9d1b5d Make Auto-GPT aware of its running cost (#762)
* Implemented running cost counter for chat completions

This data is known to the AI as additional system context, and is printed out to the user

* Added comments to api_manager.py

* Added user-defined API budget.

The user is now prompted if they want to give the AI a budget for API calls. If they enter nothing, there is no monetary limit, but if they define a budget then the AI will be told to shut down gracefully once it has come within 1 cent of its limit, and to shut down immediately once it has exceeded its limit. If a budget is defined, Auto-GPT is always aware of how much it was given and how much remains to be spent.

* Chat completion calls are now done through api_manager. Total running cost is printed.

* Implemented api budget setting and tracking

User can now configure a maximum api budget, and the AI is aware of that and its remaining budget. The AI is instructed to shut down when exceeding the budget.

* Update autogpt/api_manager.py

Change "per token" to "per 1000 tokens" in a comment on the api cost

Co-authored-by: Rob Luke <code@robertluke.net>

* Fixed lint errors

* Include embedding costs

* Add embedding completion cost

* lint

* Added 'requires_api_key' decorator to test_commands.py, switched to a valid chat completions model

* Refactor API manager, add debug mode, and add tests

- Extract model costs to  to avoid duplication
- Add debug mode parameter to ApiManager class
- Move debug mode configuration to
- Log AI response and budget messages in debug mode
- Implement 'test_api_manager.py'

* Fixed test_setup failing. An extra user input is needed for api budget

* Linting

---------

Co-authored-by: Rob Luke <code@robertluke.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-23 16:04:31 -05:00
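
The budget mechanic reduces to accumulating per-call cost from token counts and comparing it to the user's limit. A simplified sketch; the class name and prices are illustrative, and the commit's ApiManager does considerably more:

class BudgetTracker:
    # Illustrative prices, USD per 1000 tokens (matching the commit's
    # "per 1000 tokens" accounting).
    PROMPT_PRICE = 0.002
    COMPLETION_PRICE = 0.002

    def __init__(self, total_budget: float = 0.0):
        self.total_budget = total_budget  # 0.0 means "no limit"
        self.total_cost = 0.0

    def update_cost(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.total_cost += (
            prompt_tokens * self.PROMPT_PRICE
            + completion_tokens * self.COMPLETION_PRICE
        ) / 1000

    def remaining_budget(self) -> float:
        return self.total_budget - self.total_cost
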
hdkiller
bf895eb656 fix typo in warning message (#3044) 2023-04-23 21:28:48 +01:00
James Collins
dcd6aa912b Add workspace abstraction (#2982)
* Add workspace abstraction

* Remove old workspace implementation

* Extract path resolution to a helper function

* Add api key requirements to new tests
2023-04-23 14:36:04 -05:00
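
At the core of a workspace abstraction like this is a resolver that maps agent-supplied paths into the workspace root and refuses to let them escape it. A sketch of that helper:

from pathlib import Path

def resolve_in_workspace(root: Path, relative_path: str) -> Path:
    # Map an agent-supplied path into the workspace and refuse to
    # escape it (e.g. via "../").
    root = root.resolve()
    full_path = (root / relative_path).resolve()
    if root not in full_path.parents and full_path != root:
        raise ValueError(f"path escapes workspace: {relative_path}")
    return full_path
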
chyezh
da48f9c972 Fix Milvus module config import (#3036)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-23 18:32:17 +02:00
chyezh
cac1ea27e2 Support secure and authenticated Milvus memory backends (#2127)
Co-authored-by: Reinier van der Leer (Pwuts) <github@pwuts.nl>
2023-04-23 18:11:04 +02:00
Pi
6e588bb2ed Merge pull request #3030 from Significant-Gravitas/richbeales-patch-1
update documentation deploy gh action
2023-04-23 16:49:32 +01:00
Richard Beales
1c352f5ff0 update documentation deploy gh action 2023-04-23 16:42:12 +01:00
Didier Durand
582c85b140 Documentation: ensuring naming consistency (#2975)
auto gpt -> Auto-GPT to ensure naming consistency on the page
2023-04-23 09:28:08 +01:00
Didier Durand
a38646409f Documentation: fixing typos (#2978)
Fixing a couple of typos
2023-04-23 09:26:54 +01:00
non-adjective
4906e3d7ef update weaviate.py for weaviate compatibility (#2985)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/weaviate/schema/crud_schema.py", line 708, in _create_class_with_primitives
    raise UnexpectedStatusCodeException("Create class", response)
weaviate.exceptions.UnexpectedStatusCodeException: Create class! Unexpected status code: 422, with response body: {'error': [{'message': "'Auto-gpt' is not a valid class name"}]}.

GPT4:

The error message indicates that "Auto-gpt" is not a valid class name. In Weaviate, class names must start with a capital letter and can contain only alphanumeric characters.

Took the advice and code and applied it to weaviate.py to great result; the program now runs with no error!

Unable to reproduce easily. Might be related to switching memory between Local and Weaviate? Either way, the proposed solution works for MacOS using Docker + Weaviate.
2023-04-23 09:17:42 +01:00
Richard Beales
9ed2a7a2d2 Add missing test decorator (#2989)
* Mark test test_generate_aiconfig_automatic_typical as @requires_api_key("OPENAI_API_KEY")

* missing import

* add missing decorator
2023-04-23 09:17:18 +01:00
Richard Beales
eaa6ed85e1 Fix to prompt generator - "Ensure the response can beparsed" (#2980) 2023-04-23 09:07:14 +01:00
Nicholas Tindle
0b08b4f1c5 Update installation.md (#2970) 2023-04-23 07:39:13 +01:00
Richard Beales
bb786461c7 Mark test test_generate_aiconfig_automatic_typical as @requires_api_… (#2981)
* Mark test test_generate_aiconfig_automatic_typical as @requires_api_key("OPENAI_API_KEY")

* missing import
2023-04-23 07:35:17 +01:00
Didier Durand
bc354a3df6 Documentation typo: serach -> search (#2977) 2023-04-23 07:23:48 +01:00
Toran Bruce Richards
f462674e32 Automatic prompting (#2896)
* Add automatic ai prompting

* Tweak the default prompt.

* Print agent info upon creation.

* Improve system prompt

* Switch to fast_llm_model by default

* Add format output command to user prompt.

This vastly improves formatting success rate.

* Add fallback to manual mode if llm output cannot be parsed (or other error).

* Add unit test to cover ai creation setup.

* Replace redundant prompt with manual mode instructions.

* Add missing docstrings and typing.

* Runs black on changes.

* Runs isort

* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability to set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* merge

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-23 06:36:10 +01:00
Media
2b5852f7da Tests utils suite (#2961)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability to set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* black

* wrong file

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-22 19:07:28 -05:00
Nicholas Tindle
986bdaab36 Merge pull request #2946 from merwanehamadi/feature/add-decorator-to-tests
add decorator to tests so they're skipped if an API key is required but not present in the environment
2023-04-23 00:45:55 +02:00
BillSchumacher
d3e4ec14a6 Merge pull request #2936 from Significant-Gravitas/richbeales-patch-1
Plugins: debug line always printed in plugin load
2023-04-23 00:45:54 +02:00
Merwane Hamadi
b7cd56f72b move decorator higher up
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-04-23 00:45:54 +02:00
Richard Beales
78a6b44b21 Plugins: debug line always printed in plugin load 2023-04-23 00:45:53 +02:00
Merwane Hamadi
eb5a8a87d8 add decorator to tests
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-04-23 00:45:53 +02:00
Nicholas Tindle
0410331ecd Merge pull request #2931 from riensen/fix/multiple-plugins
Enable support for loading multiple plugins per zip file
2023-04-23 00:45:50 +02:00
Merwane Hamadi
996a3b331a Add CI smoke test (#2461) 2023-04-23 00:23:45 +02:00
riensen
8173e4d139 Fix: Multiple plugins per zip for Auto-GPT-Plugins 2023-04-22 18:31:04 +02:00
Richard Beales
5a95ead608 Merge pull request #2521 from jazelly/fix-benchmark-typo
misc: fix typo in benchmark
2023-04-22 17:06:10 +01:00
Richard Beales
f04755be30 Merge pull request #2631 from BillSchumacher/fix-command-arg-ordering
Fix plugin command arg ordering issue.
2023-04-22 17:02:33 +01:00
Richard Beales
ea26988a95 run black and isort on behalf of OP 2023-04-22 16:58:21 +01:00
Richard Beales
f9f540738c Merge pull request #2708 from ugobok/patch-1
Replace print statements with logging.error
2023-04-22 16:30:12 +01:00
Richard Beales
894027f5f6 run black and isort on behalf of OP 2023-04-22 16:25:03 +01:00
Richard Beales
8e8a5a1522 Merge pull request #2915 from dharana77/master
ci: selenium safari bug fixed
2023-04-22 15:24:54 +01:00
lee
1ffa9b2ebe ci: selenium safari bug fixed
ModuleNotFoundError: No module named 'selenium.webdriver.safari.options' when installing selenium <=4.1.3
2023-04-22 22:00:23 +09:00
Richard Beales
ad5d8b2341 Re-work Docs and split out README (using MkDocs) (#2894)
* Initial Documentation re-org

* remove testing link from readme

* rewrite quickstart

* get code blocks working across mkdocs and github

* add link to plugins repo

* add link to plugins repo and move readme to plugin template repo

* Add emoji to "Extensibility with Plugins" in readme

Co-authored-by: Reinier van der Leer <github@pwuts.nl>

* Make docs deploy workflow path-selective

* Also run workflow when the workflow is updated

* fix readme links under configuration subfolder

* shrink subheadings in readme

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-22 12:56:22 +01:00
Nicholas Tindle
e9f3f9bd1d Add CodeCov CI coverage requirements (#2881) 2023-04-22 13:04:39 +02:00
Toran Bruce Richards
e39cd1bf57 Fix(tests): restore config values after changing them in tests (#2904) 2023-04-22 12:14:18 +02:00
Richard Beales
0efbe23d89 Merge pull request #2756 from gklab/master
adjust file_operations.py code format
2023-04-22 10:24:54 +01:00
Nicholas Tindle
b4bd11d708 Merge pull request #2888 from didier-durand/patch-2
Fixing header of CONTRIBUTING.md
2023-04-22 02:46:04 -05:00
Didier Durand
fe0baf233d Fixing link 2023-04-22 09:39:18 +02:00
Nicholas Tindle
e9e1f04818 Merge pull request #2884 from minghinmatthewlam/update-speech-readme
Update README to include Eleven Labs speech setup
2023-04-22 02:33:47 -05:00
Nicholas Tindle
602b6e9901 Merge pull request #2886 from didier-durand/patch-1
Documentation: fixing typos in README.md
2023-04-22 02:32:58 -05:00
Didier Durand
1b043305c1 Fixing header of CONTRIBUTING.md
ProjectName -> Auto-GPT
2023-04-22 09:32:35 +02:00
Didier Durand
ba87cb0867 Fixing typos in README.md
Fixing some typos in README
2023-04-22 09:14:52 +02:00
Matthew Lam
798d2d6978 update readme for speech 2023-04-21 23:51:05 -07:00
Nicholas Tindle
fc4b5ad1d2 Merge pull request #2855 from OmriGM/tests/basic-spinner-tests
Added basic spinner tests and modified spinner method docstring
2023-04-22 01:27:33 -05:00
Omri Grossman
e09bbc43d4 Merge branch 'master' into tests/basic-spinner-tests 2023-04-22 09:24:25 +03:00
Richard Beales
ca31c4699a Merge pull request #2877 from Significant-Gravitas/codecov
Add Code Cov
2023-04-22 07:05:44 +01:00
Nicholas Tindle
6e5df9e9e7 feat: add code cov 2023-04-22 00:45:29 -05:00
Richard Beales
780a77bb31 Merge pull request #2679 from AndresCdo/feature/add-error-exceptions
[feat] Update milvus_memory_test.py error log
2023-04-22 06:43:53 +01:00
Richard Beales
f342b84479 Merge pull request #2851 from sudouser777/fix/typo
fixed typo
2023-04-22 06:31:42 +01:00
ZHAOKAI WANG
019ac37d49 Merge branch 'master' into master 2023-04-22 10:40:22 +08:00
Steve
3ab67e746d Add file op tests (#2205)
Co-authored-by: Steven Byerly <stevenbyerly@microsoft.com>
2023-04-22 04:17:38 +02:00
Omri Grossman
e8aaba9ce2 Run pre commit manually to fix linting and sorting issues 2023-04-22 01:25:20 +03:00
Omri Grossman
f3ac658dd0 Reorder imports 2023-04-22 01:18:03 +03:00
Omri Grossman
7c4921758c Added basic spinner tests and modified spinner method docstring 2023-04-22 01:13:32 +03:00
Raju Komati
3bf5934b20 fixed typo 2023-04-22 02:52:13 +05:30
Richard Beales
a8fe3085fd Merge pull request #2558 from jlxip/master
Use readline if available
2023-04-21 21:17:14 +01:00
Richard Beales
14a1588ffd Merge pull request #2837 from BuildEverything/master
Update readme to more clearly describe usage between platforms
2023-04-21 21:05:42 +01:00
jlxip
504a85bbdb Use readline if available 2023-04-21 22:01:06 +02:00
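
The change amounts to an optional import; readline is unavailable on some platforms (plain Windows, notably), and merely importing it upgrades input() with history and line editing:

try:
    # Importing readline is enough to give input() line editing.
    import readline  # noqa: F401
except ImportError:
    pass
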
Mikel Calvo
9dcdb6d6f8 Add OS Info into the initial prompt (#2587) 2023-04-21 21:44:02 +02:00
Tommy Brooks
1520816e61 chore: update readme to more clearly describe usage between platforms 2023-04-21 14:37:03 -04:00
Richard Beales
a2e16695af Merge pull request #2771 from ntindle/patch-2
[Hotfix] Fix coverage tooling
2023-04-21 19:30:23 +01:00
Richard Beales
e2b599051e Merge pull request #2802 from okunishinishi/patch-1
Update README.md ( `IMAGE_PROVIDER=sd`=> `IMAGE_PROVIDER=huggingface` )
2023-04-21 19:01:46 +01:00
Richard Beales
e20d388ec9 Merge pull request #2406 from pkqs90/docker-readme-upd
Fix docker usage readme
2023-04-21 19:00:24 +01:00
Richard Beales
44d3302b4e Merge pull request #2832 from k-boikov/bug/yml-tab-issue
fix indentation of bug template yml
2023-04-21 18:50:54 +01:00
Kris
72a56acfb8 fix indentation of bug template yml 2023-04-21 20:39:37 +03:00
Richard Beales
b975aaa848 Merge pull request #2785 from T-Higgins/master
Update README.md
2023-04-21 18:18:12 +01:00
Taka Okunishi
77de428524 Update README.md ( IMAGE_PROVIDER=sd => IMAGE_PROVIDER=huggingface )
modify snippet in README
2023-04-21 21:40:57 +09:00
coditamar
6c5d21cbfc config.py: make load_dotenv(override=True) (#2788) 2023-04-21 12:24:26 +02:00
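
For reference, override=True makes values from .env win over variables already set in the process environment, so editing .env takes effect even if a stale value is exported in the shell:

from dotenv import load_dotenv

# Values in .env now take precedence over pre-existing environment variables.
load_dotenv(override=True)
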
Tony H
04093e9517 Update README.md
Made steps clearer, made some sentences clearer, and generally fixed grammar and punctuation.
Reason: I'm a Knowledge Base writer for software products.
2023-04-21 08:37:58 +01:00
pkqs90
8364426420 docker-compose 2023-04-21 14:18:09 +08:00
Nicholas Tindle
3dd07d3119 fix: workflow name 2023-04-21 01:02:10 -05:00
Nicholas Tindle
68803d559c comment the stuff 2023-04-21 01:00:02 -05:00
Nicholas Tindle
a63fc643c8 fix:? 2023-04-21 00:55:52 -05:00
Nicholas Tindle
7a9c6a52fa Update ci.yml 2023-04-21 00:49:07 -05:00
Nicholas Tindle
81de438569 try something new 2023-04-21 00:41:44 -05:00
Nicholas Tindle
185429287e Update ci.yml 2023-04-21 00:35:46 -05:00
Nicholas Tindle
c2f86f6934 Update ci.yml 2023-04-21 00:34:11 -05:00
Nicholas Tindle
7f99fa3da8 Update ci.yml 2023-04-21 00:30:39 -05:00
Nicholas Tindle
c58cf15565 hotfix: don't upload results on push 2023-04-21 00:27:19 -05:00
Richard Beales
4eaec80438 Merge pull request #2313 from lengweiping1983/arg_config_optimization
only adjust argument order
2023-04-21 06:17:51 +01:00
Richard Beales
7b22809530 Merge pull request #2545 from cryptidv/fixes
Added version select to bug template
2023-04-21 06:14:05 +01:00
Richard Beales
d573bee791 Merge pull request #2651 from riensen/patch-1
Improve plugin section in README.md to prevent dependency errors
2023-04-21 06:07:12 +01:00
ZHAOKAI WANG
e7c2a4068e Update file_operations.py 2023-04-21 13:06:44 +08:00
ZHAOKAI WANG
45a9ff6e74 Update file_operations.py 2023-04-21 13:03:52 +08:00
Richard Beales
781f2934e6 Merge pull request #2682 from AndresCdo/update-pre-commit-version
Update pre-commit version
2023-04-21 06:00:59 +01:00
Richard Beales
1b5743dc73 Merge pull request #2705 from itsmarble/add_show_env_fiile_instruct
add instruction to show .env
2023-04-21 05:59:48 +01:00
Richard Beales
26ee15d327 Merge pull request #2709 from Bsodoge/fix-typo
Update README.md
2023-04-21 05:54:09 +01:00
Richard Beales
78bddf3055 Merge pull request #2752 from chrisvxd/patch-1
docs: fix small typo in README
2023-04-21 05:51:11 +01:00
Richard Beales
de1ea5f916 Merge pull request #2758 from Pwuts/fix/silent-azure-fail
Make `load_azure_config` throw if `azure.yaml` does not exist
2023-04-21 05:48:09 +01:00
Richard Beales
d5162d332f Merge pull request #2628 from ntindle/ci/coverage-reporting
Add Coverage reporting to CI pipeline
2023-04-21 05:46:01 +01:00
lengweiping1983
63c2182870 Fix typos (#2735)
Co-authored-by: lengweiping <lengweiping@vinotar.com>
2023-04-21 06:09:17 +02:00
Reinier van der Leer
b49ef913a8 Make load_azure_config throw if azure.yaml does not exist 2023-04-21 05:15:39 +02:00
Nick Foster
ec27d5729c Fix label of download_file command (#2753)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-21 04:55:20 +02:00
gklab
a2e75aabdd adjust file_operations.py code format 2023-04-21 10:19:28 +08:00
Chris Villa
6b7787ce99 docs: fix small typo in README 2023-04-21 14:19:00 +12:00
ZHAOKAI WANG
b05d56462b Merge branch 'Significant-Gravitas:master' into master 2023-04-21 10:01:03 +08:00
Andres Caicedo
558003704e Add missing size param to generate_image_with_dalle (#2691) 2023-04-21 04:00:44 +02:00
Toran Bruce Richards
00ecb983e7 Update README.md 2023-04-21 13:56:59 +12:00
Toran Bruce Richards
f26541188b Update README.md 2023-04-21 13:53:31 +12:00
Toran Bruce Richards
1e3bcc3f8b Update README.md 2023-04-21 12:52:22 +12:00
Torantulino
8faf4f5f79 Deploying to master from @ Significant-Gravitas/Auto-GPT@48f4119fb7 🚀 2023-04-21 00:40:07 +00:00
Toran Bruce Richards
48f4119fb7 Update sponsors_readme.yml 2023-04-21 12:38:18 +12:00
Toran Bruce Richards
ad6f18b737 Update sponsors_readme.yml 2023-04-21 12:31:37 +12:00
Toran Bruce Richards
68e479bdbd Update sponsors_readme.yml 2023-04-21 12:26:04 +12:00
Toran Bruce Richards
1dd8e570a5 Update sponsors_readme.yml 2023-04-21 12:24:18 +12:00
Toran Bruce Richards
511b0212c6 Update sponsors_readme.yml 2023-04-21 12:22:32 +12:00
Toran Bruce Richards
121e08c18e Create sponsors_readme.yml 2023-04-21 12:19:30 +12:00
Toran Bruce Richards
785c90ddb7 Remove hardcoded sponsors 2023-04-21 12:19:20 +12:00
BillSchumacher
d9d5fd5b9a Merge pull request #2727 from Pwuts/fix/spacy-install-model
fix #2654 spacy language model installation
2023-04-20 17:40:25 -05:00
Reinier van der Leer
c145d95312 Fix #2654 spacy language model installation 2023-04-20 23:58:40 +02:00
Andres Caicedo
37c5ebfe73 Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 20:16:09 +02:00
Ugo
0efa0d1185 Replace print statements with logging.error
This commit replaces two print statements in the _speech method of the BrianSpeech class with a single call to logging.error. This will log error messages with more detail and make it easier to diagnose issues. The changes are backward compatible and should not affect the functionality of the code.
2023-04-20 20:52:45 +03:00
Peter Banda
14d3ecaae7 Pin BeautifulSoup version to fix browse_website (#2680) 2023-04-20 19:51:52 +02:00
Bsodoge
25db6e56b0 Fix typo 2023-04-20 18:49:15 +01:00
itsmarble
e006a61c52 hotfix 2023-04-20 19:42:48 +02:00
itsmarble
5ecb08c8e8 add instruction to show .env 2023-04-20 19:26:55 +02:00
Richard Beales
0bf4987e1a Merge pull request #2644 from riensen/rename-whitelist
Use inclusive language: Rename 'blacklist' to 'denylist' and 'whitelist' to 'allowlist'
2023-04-20 18:19:52 +01:00
Andres Caicedo
3871fc70ce Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 19:01:25 +02:00
riensen
9b78e71d16 Use allowlist and denylist naming 2023-04-20 19:01:09 +02:00
Richard Beales
4c686f8fc0 Merge pull request #2667 from egonm12/update-readme-git-clone-command
doc: update git clone command to use stable branch
2023-04-20 17:47:47 +01:00
Andres Caicedo
9aacb68fbc Merge remote-tracking branch 'upstream/master' into feature/add-error-exceptions 2023-04-20 18:24:16 +02:00
Andres Caicedo
3de732508c Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 18:22:14 +02:00
Eddie Cohen
cf7544c146 Cancel in-progress docker CI on outdate (#2619)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 18:09:20 +02:00
Jartto
2a20ea638e Fix README ./run.sh start -> ./run.sh (#2523)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 18:07:53 +02:00
Andres Caicedo
6699a8ef38 Update .pre-commit-config.yaml
Update pre-commit-hooks to latest version v4.4.0
2023-04-20 16:49:11 +02:00
Andres Caicedo
f99c37aede Update milvus_memory_test.py
The 'err' variable in the except block is an instance of the ImportError class.
2023-04-20 16:42:34 +02:00
k-boikov
bb7ca692e3 include openapi-python-client in docker build (#2669)
Fixes #2658 "Docker image crashes on start"
2023-04-20 14:45:26 +02:00
Egon Meijers
c09ed61aba doc: update git clone command to use stable branch
Since master should not be used for installation as described in the readme, it is better to check out the stable branch immediately when cloning, to prevent people from reporting issues that are not present in the stable environment.
2023-04-20 14:22:24 +02:00
riensen
9f6d6f32a6 Update plugin instructions and improve clarity 2023-04-20 11:17:47 +02:00
Toran Bruce Richards
000389c762 Update README.md 2023-04-20 20:55:55 +12:00
Toran Bruce Richards
c963a209ab Update README.md 2023-04-20 20:23:03 +12:00
BillSchumacher
744c94c96a Lower label and command provided. 2023-04-20 02:22:54 -05:00
BillSchumacher
c561fe8925 Update app.py 2023-04-20 02:19:20 -05:00
Richard Beales
99eac6c1d9 Merge pull request #2272 from Significant-Gravitas/stable
Stable
2023-04-20 06:36:15 +01:00
Richard Beales
c4008971f7 Merge branch 'master' into stable 2023-04-20 06:32:59 +01:00
Nicholas Tindle
5155056198 feat: permissions 2023-04-20 00:25:48 -05:00
Nicholas Tindle
9cb4739e4a fix: syntax 2023-04-20 00:22:10 -05:00
Richard Beales
fe855fef13 Tweak Docker Hub push command 2023-04-20 06:22:02 +01:00
Nicholas Tindle
b9623ed424 fix: add new line back 2023-04-20 00:21:20 -05:00
Nicholas Tindle
7c45b21aa7 Update ci.yml 2023-04-20 00:11:43 -05:00
BillSchumacher
bcda3c1a32 Merge pull request #1986 from Significant-Gravitas/richbeales-patch-2
Update docker-hub image push action
2023-04-19 23:57:56 -05:00
Pi
2f053fe9db Merge pull request #2605 from Pwuts/fix/pr-size-workflow
fix shirt-sizing workflow permissions
2023-04-20 02:32:59 +01:00
Reinier van der Leer
376db5a123 fix shirt-sizing workflow permissions 2023-04-20 03:20:28 +02:00
Toran Bruce Richards
3c23e7145d Update README.md 2023-04-20 13:00:41 +12:00
Toran Bruce Richards
981b6073e7 Update README.md 2023-04-20 12:40:40 +12:00
Nicholas Tindle
a82d49247a Shirt size labeling for PRs (#2467)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 02:07:41 +02:00
BillSchumacher
19f893e1e2 Merge pull request #757 from BillSchumacher/plugin-support
Plugin Support
2023-04-19 18:48:53 -05:00
BillSchumacher
c731675443 Fix url 2023-04-19 18:45:29 -05:00
BillSchumacher
d8fd834142 linting 2023-04-19 18:34:38 -05:00
BillSchumacher
d876de0bef Make tests a bit spicier and fix, maybe. 2023-04-19 18:32:49 -05:00
BillSchumacher
16f0e22ffa linting 2023-04-19 18:21:03 -05:00
BillSchumacher
d7679d755f Fix all commands and cleanup 2023-04-19 18:17:04 -05:00
BillSchumacher
23c650ca10 Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-19 17:28:17 -05:00
BillSchumacher
d5523600c7 Merge pull request #10 from riensen/plugin-support
Adding Allowlisted Plugins via .env
2023-04-19 17:22:36 -05:00
Eesa Hamza
ec945d1022 Fixed links 2023-04-19 17:59:17 +03:00
Eesa Hamza
9240a554f1 Added version select to bug template 2023-04-19 17:55:36 +03:00
jazelly
fa8562bc0c misc: fix typo in benchmark 2023-04-19 20:47:36 +09:30
riensen
c5b81b5e10 Adding Allowlisted Plugins via env 2023-04-19 12:50:00 +02:00
BillSchumacher
4c7b582454 apply black 2023-04-18 19:09:15 -05:00
BillSchumacher
3f2d14f4d8 Fix isort? 2023-04-18 19:07:39 -05:00
BillSchumacher
221a4b0b50 I guess linux doesn't like this.... 2023-04-18 19:02:10 -05:00
BillSchumacher
86d3444fb8 isort, add proper skips. 2023-04-18 18:59:23 -05:00
BillSchumacher
4701357a21 fix test 2023-04-18 18:56:11 -05:00
BillSchumacher
5813592206 fix readme 2023-04-18 18:51:28 -05:00
BillSchumacher
7d45de8901 fix merge 2023-04-18 18:48:44 -05:00
BillSchumacher
085842d43c Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-18 18:44:40 -05:00
BillSchumacher
b188c2b3e3 Merge pull request #4 from evahteev/_openai-plugin-support
[WIP] Openai plugins support
2023-04-18 18:20:37 -05:00
BillSchumacher
ebee041c35 fix merge 2023-04-18 18:18:15 -05:00
BillSchumacher
59a9986786 Merge branch 'plugin-support' of https://github.com/BillSchumacher/Auto-GPT into pr/4 2023-04-18 18:12:35 -05:00
BillSchumacher
ef0216dbe7 Merge pull request #6 from TaylorBeeston/type-fixes
Type fixes
2023-04-18 17:57:23 -05:00
lengweiping1983
ae7b81dc50 Merge branch 'master' into arg_config_optimization 2023-04-19 06:48:31 +08:00
Evgeny Vakhteev
49e4b75039 removing accidentally committed ./docker 2023-04-18 13:16:10 -07:00
Evgeny Vakhteev
c62c8c6e71 merge BillSchumacher/plugin-support, conflicts 2023-04-18 13:13:38 -07:00
Evgeny Vakhteev
894026cdd4 reshaping code and fixing tests 2023-04-18 12:52:09 -07:00
pkqs90
4cc90b8eb4 Fix docker usage readme 2023-04-19 00:01:26 +08:00
lengweiping
09e29f1e1b fix conflicts 2023-04-18 23:37:17 +08:00
Taylor Beeston
b84de4f7f8 ♻️ Use AutoGPT template package for the plugin type 2023-04-17 22:10:40 -07:00
lengweiping
275b2eaae1 only adjust the order, so argument definitions are consistent with the logical order 2023-04-18 13:06:09 +08:00
Evgeny Vakhteev
9fd80a8660 tests, model 2023-04-17 20:51:27 -07:00
Evgeny Vakhteev
193c80849f separating OpenAI Plugin base class 2023-04-17 18:42:42 -07:00
Evgeny Vakhteev
9ed5e0f1fc adding plugin interface instantiation 2023-04-17 17:13:53 -07:00
Evgeny Vakhteev
7f4e38844f adding openai plugin loader 2023-04-17 14:57:55 -07:00
Sourcery AI
9705f60dd3 'Refactored by Sourcery' 2023-04-17 19:44:54 +00:00
Taylor Beeston
ea67b6772c 🐛 Minor type fixes 2023-04-17 12:42:17 -07:00
Taylor Beeston
f784049079 🏷️ Type plugins field in config 2023-04-17 12:41:34 -07:00
Taylor Beeston
d23ada30d7 🐛 Fix on_planning 2023-04-17 12:41:17 -07:00
Taylor Beeston
dea5000a01 🐛 Fix pre_instruction 2023-04-17 12:40:46 -07:00
Taylor Beeston
239aa3aa02 🎨 Bring in plugin_template
This would ideally be a shared package
2023-04-17 12:39:18 -07:00
Evgeny Vakhteev
08ad320d19 moving load plugins into plugins from main, adding tests 2023-04-17 09:33:01 -07:00
jingxing
1001e5489e config.py format 2023-04-17 14:24:10 +08:00
BillSchumacher
fe85f079b0 Fix early abort 2023-04-17 01:09:17 -05:00
BillSchumacher
8386188356 Fix early abort 2023-04-17 00:49:51 -05:00
BillSchumacher
fbd4e06df5 Add early abort functions. 2023-04-16 23:39:33 -05:00
BillSchumacher
3715ebc7eb Add hooks for chat completion 2023-04-16 23:30:42 -05:00
BillSchumacher
d394b032d7 Fix test 2023-04-16 23:23:31 -05:00
BillSchumacher
23d3dafc51 Maybe fix tests, fix safe_path function. 2023-04-16 23:18:29 -05:00
BillSchumacher
708374d95b fix linting 2023-04-16 22:56:34 -05:00
BillSchumacher
81c65af560 blacked 2023-04-16 22:51:39 -05:00
BillSchumacher
c0aa423d7b Fix agent remembering do nothing command, use correct google function, disabled image_gen if not configured. 2023-04-16 22:46:38 -05:00
BillSchumacher
03c137741a Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-16 22:13:37 -05:00
BillSchumacher
c110f3489d Finish integrating command registry 2023-04-16 21:51:36 -05:00
BillSchumacher
167628c696 Add fields to disable the command if needed by configuration, blacked. 2023-04-16 15:49:36 -05:00
BillSchumacher
df5cc3303f move tests and cleanup. 2023-04-16 15:35:25 -05:00
BillSchumacher
ec8ff0fcde Merge branch 'command_registry' of https://github.com/kreneskyp/Auto-GPT into plugin-support 2023-04-16 15:25:21 -05:00
BillSchumacher
3fadf2c90b Blacked 2023-04-16 14:15:38 -05:00
BillSchumacher
c544cebbe6 Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-16 14:15:15 -05:00
BillSchumacher
05bafb9838 Fix fstring bug. 2023-04-16 00:40:00 -05:00
BillSchumacher
abb54df4d0 Add custom commands to execute_command via promptgenerator 2023-04-16 00:37:21 -05:00
BillSchumacher
83403ad3ab add pre_command and post_command hooks. 2023-04-16 00:20:00 -05:00
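
The hook pattern threads each command through every plugin before and after dispatch. A sketch with the dispatcher injected; the method names follow the plugin template, but treat the wiring as illustrative:

from typing import Any, Callable

def run_command_with_hooks(
    plugins: list,
    execute: Callable[[str, dict], Any],
    command_name: str,
    arguments: dict,
) -> Any:
    # Let every plugin rewrite the command before dispatch...
    for plugin in plugins:
        if plugin.can_handle_pre_command():
            command_name, arguments = plugin.pre_command(command_name, arguments)

    result = execute(command_name, arguments)

    # ...and post-process the result afterwards.
    for plugin in plugins:
        if plugin.can_handle_post_command():
            result = plugin.post_command(command_name, result)
    return result
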
BillSchumacher
17478d6a05 Add post planning hook 2023-04-16 00:09:11 -05:00
BillSchumacher
397627d1b9 add post_instruction hook 2023-04-16 00:01:23 -05:00
BillSchumacher
00225e01b3 Fix another bad implementation detail. 2023-04-15 23:54:20 -05:00
BillSchumacher
fc7db7d86f Fix bad logic probably. 2023-04-15 23:51:43 -05:00
BillSchumacher
ee42b4d06c Add pre_instruction and on_instruction hooks. 2023-04-15 23:45:16 -05:00
BillSchumacher
09a5b3149d Add on_planning hook. 2023-04-15 23:01:01 -05:00
BillSchumacher
68e26bf9d6 Refactor main startup to store AIConfig on Agent for plugin usage. 2023-04-15 22:43:17 -05:00
BillSchumacher
e36b74893f Add name and role to prompt generator object for maximum customization. 2023-04-15 22:33:56 -05:00
BillSchumacher
2761a5c361 Add post_prompt hook 2023-04-15 22:18:55 -05:00
BillSchumacher
b7a29e71cd Refactor prompts into package, make the prompt able to be stored with the AI config and changed. Fix settings file. 2023-04-15 22:15:34 -05:00
BillSchumacher
1af463b03c Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into plugin-support 2023-04-15 21:37:27 -05:00
BillSchumacher
0b955c0546 Update README.md
Update warning
2023-04-10 22:19:21 -05:00
BillSchumacher
65b626c5e1 Plugins initial 2023-04-10 20:57:47 -05:00
Peter Krenesky
bcc1b5f8bf Merge branch 'master' into command_registry 2023-04-08 16:46:58 -07:00
Peter
3095591064 switch to explicit module imports 2023-04-06 20:00:28 -07:00
Peter
b4a0ef9bab resolving test failures 2023-04-06 19:25:44 -07:00
Peter
e2a6ed6955 adding tests for CommandRegistry 2023-04-06 18:24:53 -07:00
Peter
a24ab0e879 dynamically load commands from registry 2023-04-06 15:42:35 -07:00
178 changed files with 10075 additions and 2458 deletions

.coveragerc (new file, 2 additions)

@@ -0,0 +1,2 @@
+[run]
+relative_files = true

.devcontainer/Dockerfile

@@ -1,28 +1,13 @@
-# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3-bullseye, 3.10-bullseye, 3-buster, 3.10-buster
-ARG VARIANT=3-bullseye
-FROM --platform=linux/amd64 python:3.10
+# Use an official Python base image from the Docker Hub
+FROM python:3.10
-RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
-    # Remove imagemagick due to https://security-tracker.debian.org/tracker/CVE-2019-10131
-    && apt-get purge -y imagemagick imagemagick-6-common
+# Install browsers
+RUN apt-get update && apt-get install -y \
+    chromium-driver firefox-esr \
+    ca-certificates
-# Temporary: Upgrade python packages due to https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40897
-# They are installed by the base image (python) which does not have the patch.
-RUN python3 -m pip install --upgrade setuptools
+# Install utilities
+RUN apt-get install -y curl jq wget git
-# Install Chrome for web browsing
-RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
-    && curl -sSL https://dl.google.com/linux/direct/google-chrome-stable_current_$(dpkg --print-architecture).deb -o /tmp/chrome.deb \
-    && apt-get -y install /tmp/chrome.deb
-# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image.
-# COPY requirements.txt /tmp/pip-tmp/
-# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
-#    && rm -rf /tmp/pip-tmp
-# [Optional] Uncomment this section to install additional OS packages.
-# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
-#     && apt-get -y install --no-install-recommends <your-package-list-here>
-# [Optional] Uncomment this line to install global node packages.
-# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1
+# Declare working directory
+WORKDIR /workspace/Auto-GPT


@@ -1,14 +1,14 @@
{
"build": {
"dockerfile": "./Dockerfile",
"context": "."
},
"dockerComposeFile": "./docker-compose.yml",
"service": "auto-gpt",
"workspaceFolder": "/workspace/Auto-GPT",
"shutdownAction": "stopCompose",
"features": {
"ghcr.io/devcontainers/features/common-utils:2": {
"installZsh": "true",
"username": "vscode",
"userUid": "1000",
"userGid": "1000",
"userUid": "6942",
"userGid": "6942",
"upgradePackages": "true"
},
"ghcr.io/devcontainers/features/desktop-lite:1": {},


@@ -0,0 +1,19 @@
# To boot the app run the following:
# docker-compose run auto-gpt
version: '3.9'
services:
auto-gpt:
depends_on:
- redis
build:
dockerfile: .devcontainer/Dockerfile
context: ../
tty: true
environment:
MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
REDIS_HOST: ${REDIS_HOST:-redis}
volumes:
- ../:/workspace/Auto-GPT
redis:
image: 'redis/redis-stack-server:latest'

8
.dockerignore Normal file

@@ -0,0 +1,8 @@
.*
*.template
*.yaml
*.yml
*.md
*.png
!BULLETIN.md


@@ -13,6 +13,11 @@
## AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
# AI_SETTINGS_FILE=ai_settings.yaml
## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
## EXIT_KEY - Key to exit AUTO-GPT
# EXIT_KEY=n
################################################################################
### LLM PROVIDER
################################################################################
@@ -52,7 +57,7 @@ OPENAI_API_KEY=your-openai-api-key
## local - Default
## pinecone - Pinecone (if configured)
## redis - Redis (if configured)
## milvus - Milvus (if configured)
## milvus - Milvus (if configured - also works with Zilliz)
## MEMORY_INDEX - Name of index created in Memory backend (Default: auto-gpt)
# MEMORY_BACKEND=local
# MEMORY_INDEX=auto-gpt
@@ -93,10 +98,16 @@ OPENAI_API_KEY=your-openai-api-key
# WEAVIATE_API_KEY=
### MILVUS
## MILVUS_ADDR - Milvus remote address (e.g. localhost:19530)
## MILVUS_COLLECTION - Milvus collection,
## change it if you want to start a new memory and retain the old memory.
# MILVUS_ADDR=your-milvus-cluster-host-port
## MILVUS_ADDR - Milvus remote address (e.g. localhost:19530, https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443)
## MILVUS_USERNAME - username for your Milvus database
## MILVUS_PASSWORD - password for your Milvus database
## MILVUS_SECURE - True to enable TLS. (Default: False)
## Setting MILVUS_ADDR to a `https://` URL will override this setting.
## MILVUS_COLLECTION - Milvus collection, change it if you want to start a new memory and retain the old memory.
# MILVUS_ADDR=localhost:19530
# MILVUS_USERNAME=
# MILVUS_PASSWORD=
# MILVUS_SECURE=
# MILVUS_COLLECTION=autogpt
################################################################################
@@ -188,3 +199,16 @@ OPENAI_API_KEY=your-openai-api-key
# TW_CONSUMER_SECRET=
# TW_ACCESS_TOKEN=
# TW_ACCESS_TOKEN_SECRET=
################################################################################
### ALLOWLISTED PLUGINS
################################################################################
#ALLOWLISTED_PLUGINS - Sets the listed plugins that are allowed (Example: plugin1,plugin2,plugin3)
ALLOWLISTED_PLUGINS=
################################################################################
### CHAT PLUGIN SETTINGS
################################################################################
# CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False
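
The Milvus block above layers several interacting settings. As a rough illustration of the documented precedence (an `https://` address implying TLS regardless of MILVUS_SECURE), a sketch like the following could parse them with python-dotenv — an assumption-laden example, not Auto-GPT's actual config loader:

```python
# Hypothetical sketch of parsing the Milvus settings documented above.
import os

from dotenv import load_dotenv

load_dotenv()  # pull .env values into the process environment

milvus_addr = os.getenv("MILVUS_ADDR", "localhost:19530")
milvus_username = os.getenv("MILVUS_USERNAME")
milvus_password = os.getenv("MILVUS_PASSWORD")
# MILVUS_SECURE arrives as a string; treat "true" (any casing) as enabled.
milvus_secure = os.getenv("MILVUS_SECURE", "False").lower() == "true"
# Per the comment above, an https:// address overrides MILVUS_SECURE.
if milvus_addr.startswith("https://"):
    milvus_secure = True
milvus_collection = os.getenv("MILVUS_COLLECTION", "autogpt")
```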

5
.gitattributes vendored Normal file

@@ -0,0 +1,5 @@
# Exclude VCR cassettes from stats
tests/**/cassettes/**.y*ml linguist-generated
# Mark documentation as such
docs/**.md linguist-documentation


@@ -57,6 +57,20 @@ body:
- Other (Please specify in your problem)
validations:
required: true
- type: dropdown
attributes:
label: Which version of Auto-GPT are you using?
description: |
Please select which version of Auto-GPT you were using when this issue occurred.
If you downloaded the code from the [releases page](https://github.com/Significant-Gravitas/Auto-GPT/releases/), make sure you were using the latest code.
**If you weren't, please try again with the [latest code](https://github.com/Significant-Gravitas/Auto-GPT/releases/)**.
If you installed with git, you can run `git branch` to see which version of Auto-GPT you are running.
options:
- Latest Release
- Stable (branch)
- Master (branch)
validations:
required: true
- type: dropdown
attributes:
label: GPT-3 or GPT-4?


@@ -1,23 +0,0 @@
name: auto-format
on: pull_request
jobs:
format:
runs-on: ubuntu-latest
steps:
- name: Checkout PR branch
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: autopep8
uses: peter-evans/autopep8@v1
with:
args: --exit-code --recursive --in-place --aggressive --aggressive .
- name: Check for modified files
id: git-check
run: echo "modified=$(if git diff-index --quiet HEAD --; then echo "false"; else echo "true"; fi)" >> $GITHUB_ENV
- name: Push changes
if: steps.git-check.outputs.modified == 'true'
run: |
git config --global user.name 'Torantulino'
git config --global user.email 'toran.richards@gmail.com'
git remote set


@@ -1,31 +0,0 @@
name: benchmark
on:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
environment: benchmark
strategy:
matrix:
python-version: ['3.10', '3.11']
steps:
- name: Check out repository
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: benchmark
run: |
python benchmark/benchmark_entrepeneur_gpt_with_undecisive_user.py
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

31
.github/workflows/benchmarks.yml vendored Normal file

@@ -0,0 +1,31 @@
name: Run Benchmarks
on:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
env:
python-version: '3.10'
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python ${{ env.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: benchmark
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
python benchmark/benchmark_entrepreneur_gpt_with_undecisive_user.py


@@ -2,71 +2,76 @@ name: Python CI
on:
push:
branches: [master]
branches: [ master ]
pull_request:
branches: [master]
branches: [ master ]
concurrency:
group: ${{ format('ci-{0}', format('pr-{0}', github.event.pull_request.number) || github.sha) }}
group: ${{ format('ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
lint:
runs-on: ubuntu-latest
env:
min-python-version: '3.10'
min-python-version: "3.10"
steps:
- name: Check out repository
uses: actions/checkout@v3
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.min-python-version }}
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.min-python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Lint with flake8
run: flake8
- name: Lint with flake8
run: flake8
- name: Check black formatting
run: black . --check
if: success() || failure()
- name: Check black formatting
run: black . --check
if: success() || failure()
- name: Check isort formatting
run: isort . --check
if: success() || failure()
- name: Check isort formatting
run: isort . --check
if: success() || failure()
test:
permissions:
# Gives the action the necessary permissions for publishing new
# comments in pull requests.
pull-requests: write
# Gives the action the necessary permissions for pushing data to the
# python-coverage-comment-action branch, and for editing existing
# comments (to avoid publishing multiple comments in the same PR)
contents: write
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11']
python-version: ["3.10", "3.11"]
steps:
- name: Check out repository
uses: actions/checkout@v3
- name: Check out repository
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run unittest tests with coverage
run: |
pytest --cov=autogpt --without-integration --without-slow-integration
- name: Run unittest tests with coverage
run: |
pytest --cov=autogpt --cov-report term-missing --cov-branch --cov-report xml --cov-report term
- name: Generate coverage report
run: |
coverage report
coverage xml
if: success() || failure()
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3


@@ -0,0 +1,58 @@
name: Purge Docker CI cache
on:
schedule:
- cron: 20 4 * * 1,4
env:
BASE_BRANCH: master
IMAGE_NAME: auto-gpt
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
build-type: [release, dev]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=${{ matrix.build-type }}
load: true # save to docker images
# use GHA cache as read-only
cache-to: type=gha,scope=docker-${{ matrix.build-type }},mode=max
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.schedule }}
build_type: ${{ matrix.build-type }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.sha }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.sha) }}
push_forced_label:
new_commits_json: ${{ null }}
compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true

115
.github/workflows/docker-ci.yml vendored Normal file

@@ -0,0 +1,115 @@
name: Docker CI
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
concurrency:
group: ${{ format('docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
env:
IMAGE_NAME: auto-gpt
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
build-type: [release, dev]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- if: runner.debug
run: |
ls -al
du -hs *
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=${{ matrix.build-type }}
tags: ${{ env.IMAGE_NAME }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=docker-${{ matrix.build-type }}
cache-to: type=gha,scope=docker-${{ matrix.build-type }},mode=max
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref_type }}
build_type: ${{ matrix.build-type }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.event.after }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
push_forced_label: ${{ github.event.forced && '☢️ forced' || '' }}
new_commits_json: ${{ toJSON(github.event.commits) }}
compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true
# Docker setup needs fixing before this is going to work: #1843
test:
runs-on: ubuntu-latest
needs: build
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=dev # include pytest
tags: ${{ env.IMAGE_NAME }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=docker-dev
cache-to: type=gha,scope=docker-dev,mode=max
- id: test
name: Run tests
env:
CI: true
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
set +e
test_output=$(
docker run --env CI --env OPENAI_API_KEY --entrypoint python ${{ env.IMAGE_NAME }} -m \
pytest --cov=autogpt --cov-report term-missing --cov-branch --cov-report xml --cov-report term 2>&1
)
test_failure=$?
echo "$test_output"
cat << $EOF >> $GITHUB_STEP_SUMMARY
# Tests $([ $test_failure = 0 ] && echo '✅' || echo '❌')
\`\`\`
$test_output
\`\`\`
$EOF


@@ -1,18 +0,0 @@
name: Docker Image CI
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build the Docker image
run: docker build . --file Dockerfile --tag autogpt:$(date +%s)

81
.github/workflows/docker-release.yml vendored Normal file

@@ -0,0 +1,81 @@
name: Docker Release
on:
release:
types: [ published, edited ]
workflow_dispatch:
inputs:
no_cache:
type: boolean
description: 'Build from scratch, without using cached layers'
env:
IMAGE_NAME: auto-gpt
DEPLOY_IMAGE_NAME: ${{ secrets.DOCKER_USER }}/auto-gpt
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to Docker hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
# slashes are not allowed in image tags, but can appear in git branch or tag names
- id: sanitize_tag
name: Sanitize image tag
run: echo tag=${raw_tag//\//-} >> $GITHUB_OUTPUT
env:
raw_tag: ${{ github.ref_name }}
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=release
load: true # save to docker images
# push: true # TODO: uncomment when this issue is fixed: https://github.com/moby/buildkit/issues/1555
tags: >
${{ env.IMAGE_NAME }},
${{ env.DEPLOY_IMAGE_NAME }}:latest,
${{ env.DEPLOY_IMAGE_NAME }}:${{ steps.sanitize_tag.outputs.tag }}
# cache layers in GitHub Actions cache to speed up builds
cache-from: ${{ !inputs.no_cache && 'type=gha' || '' }},scope=docker-release
cache-to: type=gha,scope=docker-release,mode=max
- name: Push image to Docker Hub
run: docker push --all-tags ${{ env.DEPLOY_IMAGE_NAME }}
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref_type }}
inputs_no_cache: ${{ inputs.no_cache }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
ref_type: ${{ github.ref_type }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.sha }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-release-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true


@@ -1,27 +0,0 @@
name: Push Docker Image on Release
on:
release:
types: [published]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Log in to Docker hub
env:
DOCKER_USER: ${{secrets.DOCKER_USER}}
DOCKER_PASSWORD: ${{secrets.DOCKER_PASSWORD}}
run: |
docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
- name: Build the Docker image
run: |
tag_v=$(git describe --tags $(git rev-list --tags --max-count=1))
tag=$(echo $tag_v | sed 's/v//')
docker build . --file Dockerfile --tag ${{secrets.DOCKER_USER}}/auto-gpt:${tag}
- name: Docker Push
run: docker push ${{secrets.DOCKER_USER}}/auto-gpt


@@ -0,0 +1,37 @@
name: Docs
on:
push:
branches: [ stable ]
paths:
- 'docs/**'
- 'mkdocs.yml'
- '.github/workflows/documentation.yml'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python 3
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Set up workflow cache
uses: actions/cache@v3
with:
key: ${{ github.ref }}
path: .cache
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force


@@ -1,12 +1,15 @@
name: "Pull Request auto-label"
on:
# So that PRs touching the same files as the push are updated
push:
branches: [ master ]
# So that the `dirtyLabel` is removed if conflicts are resolved
# We recommend `pull_request_target` so that github secrets are available.
# In `pull_request` we wouldn't be able to change labels of fork PRs
pull_request_target:
types: [opened, synchronize]
types: [ opened, synchronize ]
concurrency:
group: ${{ format('pr-label-{0}', github.event.pull_request.number || github.sha) }}
cancel-in-progress: true
@@ -26,3 +29,27 @@ jobs:
repoToken: "${{ secrets.GITHUB_TOKEN }}"
commentOnDirty: "This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request."
commentOnClean: "Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly."
size:
if: ${{ github.event_name == 'pull_request_target' }}
permissions:
issues: write
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: codelytv/pr-size-labeler@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
xs_label: 'size/xs'
xs_max_size: 2
s_label: 'size/s'
s_max_size: 10
m_label: 'size/m'
m_max_size: 50
l_label: 'size/l'
l_max_size: 200
xl_label: 'size/xl'
message_if_xl: >
This PR exceeds the recommended size of 200 lines.
Please make sure you are NOT addressing multiple issues with one PR.
Note that this PR might be rejected due to its size.


@@ -0,0 +1,98 @@
#!/bin/bash
meta=$(docker image inspect "$IMAGE_NAME" | jq '.[0]')
head_compare_url=$(sed "s/{base}/$base_branch/; s/{head}/$current_ref/" <<< $compare_url_template)
ref_compare_url=$(sed "s/{base}/$base_branch/; s/{head}/$commit_hash/" <<< $compare_url_template)
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
cat << $EOF
# Docker Build summary 🔨
**Source:** branch \`$current_ref\` -> [$repository@\`${commit_hash:0:7}\`]($source_url)
**Build type:** \`$build_type\`
**Image size:** $((`jq -r .Size <<< $meta` / 10**6))MB
## Image details
**Tags:**
$(jq -r '.RepoTags | map("* `\(.)`") | join("\n")' <<< $meta)
<details>
<summary><h3>Layers</h3></summary>
| Age | Size | Created by instruction |
| --------- | ------ | ---------------------- |
$(docker history --no-trunc --format "{{.CreatedSince}}\t{{.Size}}\t\`{{.CreatedBy}}\`\t{{.Comment}}" $IMAGE_NAME \
| grep 'buildkit.dockerfile' `# filter for layers created in this build process`\
| cut -f-3 `# yeet Comment column`\
| sed 's/ ago//' `# fix Layer age`\
| sed 's/ # buildkit//' `# remove buildkit comment from instructions`\
| sed 's/\$/\\$/g' `# escape variable and shell expansions`\
| sed 's/|/\\|/g' `# escape pipes so they don't interfere with column separators`\
| column -t -s$'\t' -o' | ' `# align columns and add separator`\
| sed 's/^/| /; s/$/ |/' `# add table row start and end pipes`)
</details>
<details>
<summary><h3>ENV</h3></summary>
| Variable | Value |
| -------- | -------- |
$(jq -r \
'.Config.Env
| map(
split("=")
| "\(.[0]) | `\(.[1] | gsub("\\s+"; " "))`"
)
| map("| \(.) |")
| .[]' <<< $meta
)
</details>
<details>
<summary>Raw metadata</summary>
\`\`\`JSON
$meta
\`\`\`
</details>
## Build details
**Build trigger:** $push_forced_label $event_name \`$event_ref\`
<details>
<summary><code>github</code> context</summary>
\`\`\`JSON
$github_context_json
\`\`\`
</details>
### Source
**HEAD:** [$repository@\`${commit_hash:0:7}\`]($source_url) on branch [$current_ref]($ref_compare_url)
**Diff with previous HEAD:** $head_compare_url
#### New commits
$(jq -r 'map([
"**Commit [`\(.id[0:7])`](\(.url)) by \(if .author.username then "@"+.author.username else .author.name end):**",
.message,
(if .committer.name != .author.name then "\n> <sub>**Committer:** \(.committer.name) <\(.committer.email)></sub>" else "" end),
"<sub>**Timestamp:** \(.timestamp)</sub>"
] | map("> \(.)\n") | join("")) | join("\n")' <<< $new_commits_json)
### Job environment
#### \`vars\` context:
\`\`\`JSON
$vars_json
\`\`\`
#### \`env\` context:
\`\`\`JSON
$job_env_json
\`\`\`
$EOF


@@ -0,0 +1,85 @@
#!/bin/bash
meta=$(docker image inspect "$IMAGE_NAME" | jq '.[0]')
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
cat << $EOF
# Docker Release Build summary 🚀🔨
**Source:** $ref_type \`$current_ref\` -> [$repository@\`${commit_hash:0:7}\`]($source_url)
**Image size:** $((`jq -r .Size <<< $meta` / 10**6))MB
## Image details
**Tags:**
$(jq -r '.RepoTags | map("* `\(.)`") | join("\n")' <<< $meta)
<details>
<summary><h3>Layers</h3></summary>
| Age | Size | Created by instruction |
| --------- | ------ | ---------------------- |
$(docker history --no-trunc --format "{{.CreatedSince}}\t{{.Size}}\t\`{{.CreatedBy}}\`\t{{.Comment}}" $IMAGE_NAME \
| grep 'buildkit.dockerfile' `# filter for layers created in this build process`\
| cut -f-3 `# yeet Comment column`\
| sed 's/ ago//' `# fix Layer age`\
| sed 's/ # buildkit//' `# remove buildkit comment from instructions`\
| sed 's/\$/\\$/g' `# escape variable and shell expansions`\
| sed 's/|/\\|/g' `# escape pipes so they don't interfere with column separators`\
| column -t -s$'\t' -o' | ' `# align columns and add separator`\
| sed 's/^/| /; s/$/ |/' `# add table row start and end pipes`)
</details>
<details>
<summary><h3>ENV</h3></summary>
| Variable | Value |
| -------- | -------- |
$(jq -r \
'.Config.Env
| map(
split("=")
| "\(.[0]) | `\(.[1] | gsub("\\s+"; " "))`"
)
| map("| \(.) |")
| .[]' <<< $meta
)
</details>
<details>
<summary>Raw metadata</summary>
\`\`\`JSON
$meta
\`\`\`
</details>
## Build details
**Build trigger:** $event_name \`$current_ref\`
| Parameter | Value |
| -------------- | ------------ |
| \`no_cache\` | \`$inputs_no_cache\` |
<details>
<summary><code>github</code> context</summary>
\`\`\`JSON
$github_context_json
\`\`\`
</details>
### Job environment
#### \`vars\` context:
\`\`\`JSON
$vars_json
\`\`\`
#### \`env\` context:
\`\`\`JSON
$job_env_json
\`\`\`
$EOF

28
.github/workflows/sponsors_readme.yml vendored Normal file

@@ -0,0 +1,28 @@
name: Generate Sponsors README
on:
workflow_dispatch:
schedule:
- cron: '0 */12 * * *'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout 🛎️
uses: actions/checkout@v3
- name: Generate Sponsors 💖
uses: JamesIves/github-sponsors-readme-action@v1
with:
token: ${{ secrets.README_UPDATER_PAT }}
file: 'README.md'
minimum: 2500
maximum: 99999
- name: Deploy to GitHub Pages 🚀
uses: JamesIves/github-pages-deploy-action@v4
with:
branch: master
folder: '.'
token: ${{ secrets.README_UPDATER_PAT }}

4
.gitignore vendored

@@ -20,6 +20,7 @@ log-ingestion.txt
logs
*.log
*.mp3
mem.sqlite3
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -94,6 +95,7 @@ instance/
# Sphinx documentation
docs/_build/
site/
# PyBuilder
target/
@@ -157,5 +159,7 @@ vicuna-*
# mac
.DS_Store
openai/
# news
CURRENT_BULLETIN.md

10
.isort.cfg Normal file

@@ -0,0 +1,10 @@
[settings]
profile = black
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 88
sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
skip = .tox,__pycache__,*.pyc,venv*/*,reports,venv,env,node_modules,.env,.venv,dist


@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v0.9.2
rev: v4.4.0
hooks:
- id: check-added-large-files
args: ['--maxkb=500']


@@ -1,2 +1,9 @@
Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
# INCLUDED COMMAND 'send_tweet' IS DEPRECATED, AND WILL BE REMOVED IN THE NEXT STABLE RELEASE
Base Twitter functionality (and more) is now covered by plugins: https://github.com/Significant-Gravitas/Auto-GPT-Plugins
## Changes to Docker configuration
The workdir has been changed from /home/appuser to /app. Be sure to update any volume mounts accordingly.


@@ -1,4 +1,4 @@
# Code of Conduct for auto-gpt
# Code of Conduct for Auto-GPT
## 1. Purpose
@@ -37,4 +37,3 @@ This Code of Conduct is adapted from the [Contributor Covenant](https://www.cont
## 6. Contact
If you have any questions or concerns, please contact the project maintainers.


@@ -1,35 +1,23 @@
# Contributing to ProjectName
# Contributing to Auto-GPT
First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request.
This document provides guidelines and best practices to help you contribute effectively.
## Table of Contents
- [Code of Conduct](#code-of-conduct)
- [Getting Started](#getting-started)
- [How to Contribute](#how-to-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Suggesting Enhancements](#suggesting-enhancements)
- [Submitting Pull Requests](#submitting-pull-requests)
- [Style Guidelines](#style-guidelines)
- [Code Formatting](#code-formatting)
- [Pre-Commit Hooks](#pre-commit-hooks)
## Code of Conduct
By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project.
By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project.
[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md
## 📢 A Quick Word
Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT.
However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template).
> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates!
However, you absolutely can still add these commands to Auto-GPT in the form of plugins.
Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template).
## Getting Started
To start contributing, follow these steps:
1. Fork the repository and clone your fork.
2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`).
3. Make your changes in the new branch.
@@ -60,7 +48,7 @@ If you have an idea for a new feature or improvement, please create an issue on
When submitting a pull request, please ensure that your changes meet the following criteria:
- Your pull request should be atomic and focus on a single change.
- Your pull request should include tests for your change.
- Your pull request should include tests for your change. We automatically enforce this with [CodeCov](https://docs.codecov.com/docs/commit-status)
- You should have thoroughly tested your changes with multiple different prompts.
- You should have considered potential risks and mitigations for your changes.
- You should have documented your changes clearly and comprehensively.
@@ -70,18 +58,23 @@ When submitting a pull request, please ensure that your changes meet the followi
### Code Formatting
We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`:
We use the `black` and `isort` code formatters to maintain a consistent coding style across the project. Please ensure that your code is formatted properly before submitting a pull request.
To format your code, run the following commands in the project's root directory:
```bash
pip install black
python -m black .
python -m isort .
```
To format your code, run the following command in the project's root directory:
Or if you have these tools installed globally:
```bash
black .
isort .
```
### Pre-Commit Hooks
We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps:
Install the pre-commit package using pip:
@@ -101,5 +94,36 @@ If you encounter any issues or have questions, feel free to reach out to the mai
Happy coding, and once again, thank you for your contributions!
Maintainers will look at PRs that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here:
https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-label%3Aconflicts
https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+
## Testing your changes
If you add or change code, make sure the updated code is covered by tests.
To increase coverage if necessary, [write tests using pytest].
For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/).
[write tests using pytest]: https://realpython.com/pytest-python-testing/
### API-dependent tests
To run tests that involve making calls to the OpenAI API, we use VCRpy. It caches known
requests and matching responses in so-called *cassettes*, allowing us to run the tests
in CI without needing actual API access.
When changes cause a test prompt to be generated differently, it will likely miss the
cache and make a request to the API, updating the cassette with the new request+response.
*Be sure to include the updated cassette in your PR!*
When you run Pytest locally:
If the prompt is unchanged, you will not consume API tokens, because no new OpenAI calls are required.
If the prompt changes in a way that makes the cassettes unusable:
If no API key is present, the test fails, because it requires a new cassette; add an API key to .env.
If the API key is present, the tests will make a real call to OpenAI.
If the test is successful, your prompt changes didn't introduce regressions. Commit your cassettes to your PR.
If the test is unsuccessful:
Either your change made Auto-GPT less capable; in that case, you have to change your code.
Or the test might be poorly written; in that case, you can suggest changes to the test.
In our CI pipeline, Pytest will use the cassettes and not call paid API providers, so we need your help to record the replays that you break.
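
As a minimal sketch of the cassette mechanism described above, VCRpy can also be driven directly like this (the project wires it up through pytest fixtures instead; the URL and cassette name here are placeholders):

```python
# Illustrative VCRpy usage: record an HTTP exchange once, replay it afterwards.
import requests
import vcr

my_vcr = vcr.VCR(cassette_library_dir="tests/cassettes", record_mode="once")

with my_vcr.use_cassette("example_request.yaml"):
    # First run: performs the real request and records it to the cassette.
    # Subsequent runs: replays the recorded response, with no network access.
    response = requests.get("https://api.example.com/ping")
    print(response.status_code)
```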


@@ -1,38 +1,40 @@
# 'dev' or 'release' container build
ARG BUILD_TYPE=dev
# Use an official Python base image from the Docker Hub
FROM python:3.10-slim
FROM python:3.10-slim AS autogpt-base
# Install git
RUN apt-get -y update
RUN apt-get -y install git chromium-driver
# Install browsers
RUN apt-get update && apt-get install -y \
chromium-driver firefox-esr \
ca-certificates
# Install Xvfb and other dependencies for headless browser testing
RUN apt-get update \
&& apt-get install -y wget gnupg2 libgtk-3-0 libdbus-glib-1-2 dbus-x11 xvfb ca-certificates
# Install Firefox / Chromium
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update \
&& apt-get install -y chromium firefox-esr
# Install utilities
RUN apt-get install -y curl jq wget git
# Set environment variables
ENV PIP_NO_CACHE_DIR=yes \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
# Create a non-root user and set permissions
RUN useradd --create-home appuser
WORKDIR /home/appuser
RUN chown appuser:appuser /home/appuser
USER appuser
# Copy the requirements.txt file and install the requirements
COPY --chown=appuser:appuser requirements.txt .
RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \
pip install --no-cache-dir --user -r requirements.txt
# Copy the application files
COPY --chown=appuser:appuser autogpt/ ./autogpt
# Install the required python packages globally
ENV PATH="$PATH:/root/.local/bin"
COPY requirements.txt .
# Set the entrypoint
ENTRYPOINT ["python", "-m", "autogpt"]
# dev build -> include everything
FROM autogpt-base as autogpt-dev
RUN pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY . ./
# release build -> include bare minimum
FROM autogpt-base as autogpt-release
RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \
pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY autogpt/ ./autogpt
FROM autogpt-${BUILD_TYPE} AS auto-gpt

519
README.md

File diff suppressed because one or more lines are too long


@@ -0,0 +1,14 @@
import os
import random
import sys
from dotenv import load_dotenv
if "pytest" in sys.argv or "pytest" in sys.modules or os.getenv("CI"):
print("Setting random seed to 42")
random.seed(42)
# Load the user's .env file into environment variables
load_dotenv(verbose=True, override=True)
del load_dotenv


@@ -1,14 +1,15 @@
from colorama import Fore, Style
from autogpt.app import execute_command, get_command
from autogpt.chat import chat_with_ai, create_chat_message
from autogpt.config import Config
from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
from autogpt.json_utils.utilities import validate_json
from autogpt.json_utils.utilities import LLM_DEFAULT_RESPONSE_FORMAT, validate_json
from autogpt.llm import chat_with_ai, create_chat_completion, create_chat_message
from autogpt.logs import logger, print_assistant_thoughts
from autogpt.speech import say_text
from autogpt.spinner import Spinner
from autogpt.utils import clean_input
from autogpt.utils import clean_input, send_chat_message_to_user
from autogpt.workspace import Workspace
class Agent:
@@ -19,18 +20,25 @@ class Agent:
memory: The memory object to use.
full_message_history: The full message history.
next_action_count: The number of actions to execute.
system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully.
Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals.
system_prompt: The system prompt is the initial prompt that defines everything
the AI needs to know to achieve its task successfully.
Currently, the dynamic and customizable information in the system prompt are
ai_name, description and goals.
triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is:
Determine which next command to use, and respond using the format specified above:
The triggering prompt is not part of the system prompt because between the system prompt and the triggering
prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve.
triggering_prompt: The last sentence the AI will see before answering.
For Auto-GPT, this prompt is:
Determine which next command to use, and respond using the format specified
above:
The triggering prompt is not part of the system prompt because between the
system prompt and the triggering
prompt we have contextual information that can distract the AI and make it
forget that its goal is to find the next task to achieve.
SYSTEM PROMPT
CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
TRIGGERING PROMPT
The triggering prompt reminds the AI about its short term meta task (defining the next task)
The triggering prompt reminds the AI about its short term meta task
(defining the next task)
"""
def __init__(
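
The SYSTEM PROMPT / CONTEXTUAL INFORMATION / TRIGGERING PROMPT layering described in this docstring comes down to the ordering of the chat messages sent to the model. A simplified, hypothetical sketch of that assembly (the real logic in chat_with_ai also trims context to fit the model's token budget):

```python
# Hypothetical sketch of the prompt layering described in the docstring above.
def build_prompt_sequence(
    system_prompt: str, context: list[dict], triggering_prompt: str
) -> list[dict]:
    messages = [{"role": "system", "content": system_prompt}]  # SYSTEM PROMPT
    messages.extend(context)  # CONTEXTUAL INFORMATION (memory, conversations)
    # The TRIGGERING PROMPT goes last, reminding the model of its meta-task.
    messages.append({"role": "user", "content": triggering_prompt})
    return messages
```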
@@ -39,15 +47,26 @@ class Agent:
memory,
full_message_history,
next_action_count,
command_registry,
config,
system_prompt,
triggering_prompt,
workspace_directory,
):
cfg = Config()
self.ai_name = ai_name
self.memory = memory
self.summary_memory = (
"I was created." # Initial memory necessary to avoid hilucination
)
self.last_memory_index = 0
self.full_message_history = full_message_history
self.next_action_count = next_action_count
self.command_registry = command_registry
self.config = config
self.system_prompt = system_prompt
self.triggering_prompt = triggering_prompt
self.workspace = Workspace(workspace_directory, cfg.restrict_to_workspace)
def start_interaction_loop(self):
# Interaction Loop
@@ -68,11 +87,15 @@ class Agent:
logger.typewriter_log(
"Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
)
send_chat_message_to_user(
f"Continuous Limit Reached: \n {cfg.continuous_limit}"
)
break
send_chat_message_to_user("Thinking... \n")
# Send message to AI, get response
with Spinner("Thinking... "):
assistant_reply = chat_with_ai(
self,
self.system_prompt,
self.triggering_prompt,
self.full_message_history,
@@ -81,24 +104,38 @@ class Agent:
) # TODO: This hardcodes the model to use GPT3.5. Make this an argument
assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
for plugin in cfg.plugins:
if not plugin.can_handle_post_planning():
continue
assistant_reply_json = plugin.post_planning(self, assistant_reply_json)
# Print Assistant thoughts
if assistant_reply_json != {}:
validate_json(assistant_reply_json, "llm_response_format_1")
validate_json(assistant_reply_json, LLM_DEFAULT_RESPONSE_FORMAT)
# Get command name and arguments
try:
print_assistant_thoughts(self.ai_name, assistant_reply_json)
print_assistant_thoughts(
self.ai_name, assistant_reply_json, cfg.speak_mode
)
command_name, arguments = get_command(assistant_reply_json)
# command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"]
if cfg.speak_mode:
say_text(f"I want to execute {command_name}")
send_chat_message_to_user("Thinking... \n")
arguments = self._resolve_pathlike_command_args(arguments)
except Exception as e:
logger.error("Error: \n", str(e))
if not cfg.continuous_mode and self.next_action_count == 0:
### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
# ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
# Get key press: Prompt the user to press enter to continue or escape
# to exit
self.user_input = ""
send_chat_message_to_user(
"NEXT ACTION: \n " + f"COMMAND = {command_name} \n "
f"ARGUMENTS = {arguments}"
)
logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
@@ -106,22 +143,47 @@ class Agent:
f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
)
print(
"Enter 'y' to authorise command, 'y -N' to run N continuous "
"commands, 'n' to exit program, or enter feedback for "
"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands"
"'n' to exit program, or enter feedback for "
f"{self.ai_name}...",
flush=True,
)
while True:
console_input = clean_input(
Fore.MAGENTA + "Input:" + Style.RESET_ALL
)
if console_input.lower().strip() == "y":
console_input = ""
if cfg.chat_messages_enabled:
console_input = clean_input("Waiting for your response...")
else:
console_input = clean_input(
Fore.MAGENTA + "Input:" + Style.RESET_ALL
)
if console_input.lower().strip() == cfg.authorise_key:
user_input = "GENERATE NEXT COMMAND JSON"
break
elif console_input.lower().strip() == "s":
logger.typewriter_log(
"-=-=-=-=-=-=-= THOUGHTS, REASONING, PLAN AND CRITICISM WILL NOW BE VERIFIED BY AGENT -=-=-=-=-=-=-=",
Fore.GREEN,
"",
)
self_feedback_resp = self.get_self_feedback(self.full_message_history,
assistant_reply_json, cfg.fast_llm_model
)
logger.typewriter_log(
f"SELF FEEDBACK: {self_feedback_resp}",
Fore.YELLOW,
"",
)
if self_feedback_resp[0].lower().strip() == cfg.authorise_key:
user_input = "GENERATE NEXT COMMAND JSON"
else:
user_input = self_feedback_resp
command_name = "human_feedback"
break
elif console_input.lower().strip() == "":
print("Invalid input format.")
continue
elif console_input.lower().startswith("y -"):
elif console_input.lower().startswith(f"{cfg.authorise_key} -"):
try:
self.next_action_count = abs(
int(console_input.split(" ")[1])
@@ -129,12 +191,12 @@ class Agent:
user_input = "GENERATE NEXT COMMAND JSON"
except ValueError:
print(
"Invalid input format. Please enter 'y -n' where n is"
f"Invalid input format. Please enter '{cfg.authorise_key} -N' where N is"
" the number of continuous tasks."
)
continue
break
elif console_input.lower() == "n":
elif console_input.lower() == cfg.exit_key:
user_input = "EXIT"
break
else:
@@ -149,10 +211,16 @@ class Agent:
"",
)
elif user_input == "EXIT":
send_chat_message_to_user("Exiting...")
print("Exiting...", flush=True)
break
else:
# Print command
send_chat_message_to_user(
"NEXT ACTION: \n " + f"COMMAND = {command_name} \n "
f"ARGUMENTS = {arguments}"
)
logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
@@ -168,21 +236,27 @@ class Agent:
elif command_name == "human_feedback":
result = f"Human feedback: {user_input}"
else:
result = (
f"Command {command_name} returned: "
f"{execute_command(command_name, arguments)}"
for plugin in cfg.plugins:
if not plugin.can_handle_pre_command():
continue
command_name, arguments = plugin.pre_command(
command_name, arguments
)
command_result = execute_command(
self.command_registry,
command_name,
arguments,
self.config.prompt_generator,
)
result = f"Command {command_name} returned: " f"{command_result}"
for plugin in cfg.plugins:
if not plugin.can_handle_post_command():
continue
result = plugin.post_command(command_name, result)
if self.next_action_count > 0:
self.next_action_count -= 1
memory_to_add = (
f"Assistant Reply: {assistant_reply} "
f"\nResult: {result} "
f"\nHuman Feedback: {user_input} "
)
self.memory.add(memory_to_add)
# Check if there's a result from the command; append it to the message
# history
if result is not None:
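
The pre_command/post_command loops above define the plugin contract around command execution: a plugin opts in via its can_handle_* methods, may rewrite the command name and arguments before dispatch, and may rewrite the result afterwards. A hypothetical plugin satisfying that contract (the class and its behaviour are invented for illustration):

```python
# Hypothetical plugin exercising the pre_command/post_command hooks above.
class AuditingPlugin:
    def can_handle_pre_command(self) -> bool:
        return True

    def pre_command(self, command_name: str, arguments: dict) -> tuple[str, dict]:
        # May rewrite the command and its arguments before execution.
        print(f"[audit] about to run {command_name} with {arguments}")
        return command_name, arguments

    def can_handle_post_command(self) -> bool:
        return True

    def post_command(self, command_name: str, result: str) -> str:
        # May rewrite the result before it is appended to the message history.
        return f"{result} (audited by AuditingPlugin)"
```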
@@ -195,3 +269,84 @@ class Agent:
logger.typewriter_log(
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
)
def _resolve_pathlike_command_args(self, command_args):
if "directory" in command_args and command_args["directory"] in {"", "/"}:
command_args["directory"] = str(self.workspace.root)
else:
for pathlike in ["filename", "directory", "clone_path"]:
if pathlike in command_args:
command_args[pathlike] = str(
self.workspace.get_path(command_args[pathlike])
)
return command_args
def get_self_feedback(self, full_message_history, latest_response_json, llm_model: str) -> str:
"""Generates a feedback response based on the provided thoughts dictionary.
This method takes in a dictionary of thoughts containing keys such as 'reasoning',
'plan', 'thoughts', and 'criticism'. It combines these elements into a single
feedback message and uses the create_chat_completion() function to generate a
response based on the input message.
Args:
thoughts (dict): A dictionary containing thought elements like reasoning,
plan, thoughts, and criticism.
Returns:
str: A feedback response generated using the provided thoughts dictionary.
"""
ai_role = self.config.ai_role
thoughts = latest_response_json.get("thoughts", {})
command = latest_response_json.get("command", {})
from autogpt.llm.token_counter import count_message_tokens
import json
# Get ~2000 tokens from the full message history
# !!WARNING: THIS IMPLEMENTATION IS BAD - CAUSES BUG SIMILAR TO THIS: https://github.com/Significant-Gravitas/Auto-GPT/pull/3619
trimmed_message_history = []
for i in range(len(full_message_history) - 1, -1, -1):
message = full_message_history[i]
# Skip all messages from the user
if message["role"] == "user":
continue
# If the message is from the assistant, remove the "thoughts" dictionary from the content
elif message["role"] == "assistant":
try:
content_dict = json.loads(message["content"])
content_dict = content_dict.copy()
if "thoughts" in content_dict:
del content_dict["thoughts"]
message["content"] = json.dumps(content_dict)
except Exception:  # leave non-JSON assistant messages untouched
pass
trimmed_message_history.append(message)
feedback_prompt = f"""Below is a message from an AI agent with the role: '{ai_role}'.
Please review the provided Recent History, Agent's Plan, The Agent's proposed action and their Reasoning.
If the agent's command makes sense and the agent is on the right track, respond with the letter 'Y' followed by a space.
If the provided information is not suitable for achieving the role's objectives or a red flag is raised, please clearly and concisely tell the agent about the issue and suggest an alternative action.
"""
reasoning = thoughts.get("reasoning", "")
plan = thoughts.get("plan", "")
# thought = thoughts.get("thoughts", "")
# criticism = thoughts.get("criticism", "")
# feedback_thoughts = thought + reasoning + plan + criticism
return create_chat_completion(
[
{"role": "system", "content": f""""You are AgentReviewerGPT.\n\nRespond with Y if the agent passes your review.\n\nBe wary of the following red flags in the agent's behaviour:
- The agent is repeating itself.
- The agent is stuck in a loop.
- The agent is using '<text>' instead of the actual text.
- The agent is using the wrong command for the situation.
- The agent is executing a python file that does not exist (it should check if the file exists and read its contents before executing it).
Notes:
+ Hardcoded paths are okay""" },
{"role": "user", "content": f"{feedback_prompt}\n\nRecent History:\n{trimmed_message_history}\n\n\n\n\Agent's Plan:\n{plan}\n\nAgent's Proposed Action:\n{command}\n\nAgent's Reasoning:\n{reasoning}" }
],
llm_model,
)
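
As wired into the 's' input branch earlier in this file, get_self_feedback runs one extra chat completion in which a reviewer persona grades the agent's latest plan; a leading 'Y' means "proceed", and anything else is treated as corrective feedback. A condensed sketch of that flow, assuming an Agent instance and Config are in scope as in this diff:

```python
# Condensed sketch of the self-feedback flow defined above.
def apply_self_feedback(agent, assistant_reply_json, cfg):
    feedback = agent.get_self_feedback(
        agent.full_message_history, assistant_reply_json, cfg.fast_llm_model
    )
    if feedback[0].lower().strip() == cfg.authorise_key:
        return "GENERATE NEXT COMMAND JSON", None  # reviewer approved
    # Otherwise the critique is routed back to the agent as human-style feedback.
    return feedback, "human_feedback"
```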


@@ -1,10 +1,11 @@
"""Agent manager for managing GPT agents"""
from __future__ import annotations
from typing import Union
from typing import List
from autogpt.config.config import Singleton
from autogpt.llm_utils import create_chat_completion
from autogpt.config.config import Config
from autogpt.llm import Message, create_chat_completion
from autogpt.singleton import Singleton
class AgentManager(metaclass=Singleton):
@@ -13,6 +14,7 @@ class AgentManager(metaclass=Singleton):
def __init__(self):
self.next_key = 0
self.agents = {} # key, (task, full_message_history, model)
self.cfg = Config()
# Create new GPT agent
# TODO: Centralise use of create_chat_completion() to globally enforce token limit
@@ -28,19 +30,32 @@ class AgentManager(metaclass=Singleton):
Returns:
The key of the new agent
"""
messages = [
messages: List[Message] = [
{"role": "user", "content": prompt},
]
for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
messages.extend(iter(plugin_messages))
# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)
# Update full message history
messages.append({"role": "assistant", "content": agent_reply})
plugins_reply = ""
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"
if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})
key = self.next_key
# This is done instead of len(agents) to make keys unique even if agents
# are deleted
@@ -48,6 +63,11 @@ class AgentManager(metaclass=Singleton):
self.agents[key] = (task, messages, model)
for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)
return key, agent_reply
def message_agent(self, key: str | int, message: str) -> str:
@@ -65,15 +85,37 @@ class AgentManager(metaclass=Singleton):
# Add user message to message history before sending to agent
messages.append({"role": "user", "content": message})
for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
for plugin_message in plugin_messages:
messages.append(plugin_message)
# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)
# Update full message history
messages.append({"role": "assistant", "content": agent_reply})
plugins_reply = agent_reply
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"
# Update full message history
if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})
for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)
return agent_reply
def list_agents(self) -> list[tuple[str | int, str]]:
@@ -86,7 +128,7 @@ class AgentManager(metaclass=Singleton):
# Return a list of agent keys and their tasks
return [(key, task) for key, (task, _, _) in self.agents.items()]
def delete_agent(self, key: Union[str, int]) -> bool:
def delete_agent(self, key: str | int) -> bool:
"""Delete an agent from the agent manager
Args:
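
The create_agent and message_agent changes above thread four instruction hooks through every sub-agent exchange: pre_instruction may inject messages before the call, on_instruction may contribute an extra assistant reply, and post_instruction may rewrite the final answer. A hypothetical plugin exercising them (invented class; method names match the calls in this diff):

```python
# Hypothetical plugin implementing the instruction hooks used by AgentManager.
class InstructionShaperPlugin:
    def can_handle_pre_instruction(self) -> bool:
        return True

    def pre_instruction(self, messages: list) -> list:
        # Messages returned here are appended before the model is called.
        return [{"role": "system", "content": "Keep answers under 100 words."}]

    def can_handle_on_instruction(self) -> bool:
        return True

    def on_instruction(self, messages: list) -> str:
        # A non-empty string here is recorded as an extra assistant reply.
        return ""

    def can_handle_post_instruction(self) -> bool:
        return True

    def post_instruction(self, reply: str) -> str:
        # Last chance to rewrite the reply the caller receives.
        return reply.strip()
```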


@@ -3,34 +3,14 @@ import json
from typing import Dict, List, NoReturn, Union
from autogpt.agent.agent_manager import AgentManager
from autogpt.commands.analyze_code import analyze_code
from autogpt.commands.audio_text import read_audio_from_file
from autogpt.commands.execute_code import (
execute_python_file,
execute_shell,
execute_shell_popen,
)
from autogpt.commands.file_operations import (
append_to_file,
delete_file,
download_file,
read_file,
search_files,
write_to_file,
)
from autogpt.commands.git_operations import clone_repository
from autogpt.commands.google_search import google_official_search, google_search
from autogpt.commands.image_gen import generate_image
from autogpt.commands.improve_code import improve_code
from autogpt.commands.twitter import send_tweet
from autogpt.commands.command import CommandRegistry, command
from autogpt.commands.web_requests import scrape_links, scrape_text
from autogpt.commands.web_selenium import browse_website
from autogpt.commands.write_tests import write_tests
from autogpt.config import Config
from autogpt.json_utils.json_fix_llm import fix_and_parse_json
from autogpt.memory import get_memory
from autogpt.processing.text import summarize_text
from autogpt.prompts.generator import PromptGenerator
from autogpt.speech import say_text
from autogpt.url_utils.validators import validate_url
CFG = Config()
AGENT_MANAGER = AgentManager()
@@ -108,7 +88,12 @@ def map_command_synonyms(command_name: str):
return command_name
def execute_command(command_name: str, arguments):
def execute_command(
command_registry: CommandRegistry,
command_name: str,
arguments,
prompt: PromptGenerator,
):
"""Execute the command and return the result
Args:
@@ -119,105 +104,30 @@ def execute_command(command_name: str, arguments):
str: The result of the command
"""
try:
cmd = command_registry.commands.get(command_name)
# If the command is found, call it with the provided arguments
if cmd:
return cmd(**arguments)
# TODO: Remove commands below after they are moved to the command registry.
command_name = map_command_synonyms(command_name.lower())
if command_name == "google":
# Check if the Google API key is set and use the official search method
# If the API key is not set or has only whitespaces, use the unofficial
# search method
key = CFG.google_api_key
if key and key.strip() and key != "your-google-api-key":
google_result = google_official_search(arguments["input"])
return google_result
else:
google_result = google_search(arguments["input"])
# google_result can be a list or a string depending on the search results
if isinstance(google_result, list):
safe_message = [
google_result_single.encode("utf-8", "ignore")
for google_result_single in google_result
]
else:
safe_message = google_result.encode("utf-8", "ignore")
if command_name == "memory_add":
return get_memory(CFG).add(arguments["string"])
return safe_message.decode("utf-8")
elif command_name == "memory_add":
memory = get_memory(CFG)
return memory.add(arguments["string"])
elif command_name == "start_agent":
return start_agent(
arguments["name"], arguments["task"], arguments["prompt"]
)
elif command_name == "message_agent":
return message_agent(arguments["key"], arguments["message"])
elif command_name == "list_agents":
return list_agents()
elif command_name == "delete_agent":
return delete_agent(arguments["key"])
elif command_name == "get_text_summary":
return get_text_summary(arguments["url"], arguments["question"])
elif command_name == "get_hyperlinks":
return get_hyperlinks(arguments["url"])
elif command_name == "clone_repository":
return clone_repository(
arguments["repository_url"], arguments["clone_path"]
)
elif command_name == "read_file":
return read_file(arguments["file"])
elif command_name == "write_to_file":
return write_to_file(arguments["file"], arguments["text"])
elif command_name == "append_to_file":
return append_to_file(arguments["file"], arguments["text"])
elif command_name == "delete_file":
return delete_file(arguments["file"])
elif command_name == "search_files":
return search_files(arguments["directory"])
elif command_name == "download_file":
if not CFG.allow_downloads:
return "Error: You do not have user authorization to download files locally."
return download_file(arguments["url"], arguments["file"])
elif command_name == "browse_website":
return browse_website(arguments["url"], arguments["question"])
# TODO: Change these to take in a file rather than pasted code, if
# non-file is given, return instructions "Input should be a python
# filepath, write your code to file and try again"
elif command_name == "analyze_code":
return analyze_code(arguments["code"])
elif command_name == "improve_code":
return improve_code(arguments["suggestions"], arguments["code"])
elif command_name == "write_tests":
return write_tests(arguments["code"], arguments.get("focus"))
elif command_name == "execute_python_file": # Add this command
return execute_python_file(arguments["file"])
elif command_name == "execute_shell":
if CFG.execute_local_commands:
return execute_shell(arguments["command_line"])
else:
return (
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction."
)
elif command_name == "execute_shell_popen":
if CFG.execute_local_commands:
return execute_shell_popen(arguments["command_line"])
else:
return (
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction."
)
elif command_name == "read_audio_from_file":
return read_audio_from_file(arguments["file"])
elif command_name == "generate_image":
return generate_image(arguments["prompt"])
elif command_name == "send_tweet":
return send_tweet(arguments["text"])
elif command_name == "do_nothing":
return "No action performed."
elif command_name == "task_complete":
shutdown()
else:
for command in prompt.commands:
if (
command_name == command["label"].lower()
or command_name == command["name"].lower()
):
return command["function"](**arguments)
return (
f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'"
" list for available commands and only respond in the specified JSON"
@@ -227,6 +137,10 @@ def execute_command(command_name: str, arguments):
return f"Error: {str(e)}"
@command(
"get_text_summary", "Get text summary", '"url": "<url>", "question": "<question>"'
)
@validate_url
def get_text_summary(url: str, question: str) -> str:
"""Return the results of a Google search
@@ -242,6 +156,8 @@ def get_text_summary(url: str, question: str) -> str:
return f""" "Result" : {summary}"""
@command("get_hyperlinks", "Get text summary", '"url": "<url>"')
@validate_url
def get_hyperlinks(url: str) -> Union[str, List[str]]:
"""Return the results of a Google search
@@ -260,6 +176,11 @@ def shutdown() -> NoReturn:
quit()
@command(
"start_agent",
"Start GPT Agent",
'"name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"',
)
def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) -> str:
"""Start an agent with a given name, task, and prompt
@@ -292,6 +213,7 @@ def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) ->
return f"Agent {name} created with key {key}. First response: {agent_response}"
@command("message_agent", "Message GPT Agent", '"key": "<key>", "message": "<message>"')
def message_agent(key: str, message: str) -> str:
"""Message an agent with a given key and message"""
# Check if the key is a valid integer
@@ -306,7 +228,8 @@ def message_agent(key: str, message: str) -> str:
return agent_response
def list_agents():
@command("list_agents", "List GPT Agents", "")
def list_agents() -> str:
"""List all agents
Returns:
@@ -317,6 +240,7 @@ def list_agents():
)
@command("delete_agent", "Delete GPT Agent", '"key": "<key>"')
def delete_agent(key: str) -> str:
"""Delete an agent with a given key


@@ -47,6 +47,19 @@ import click
is_flag=True,
help="Specifies whether to suppress the output of latest news on startup.",
)
@click.option(
# TODO: this is a hidden option for now, necessary for integration testing.
# We should make this public once we're ready to roll out agent specific workspaces.
"--workspace-directory",
"-w",
type=click.Path(),
hidden=True,
)
@click.option(
"--install-plugin-deps",
is_flag=True,
help="Installs external dependencies for 3rd party plugins.",
)
@click.pass_context
def main(
ctx: click.Context,
@@ -62,6 +75,8 @@ def main(
browser_name: str,
allow_downloads: bool,
skip_news: bool,
workspace_directory: str,
install_plugin_deps: bool,
) -> None:
"""
Welcome to Auto-GPT, an experimental open-source application showcasing the capabilities of GPT-4 and pushing the boundaries of AI.
@@ -69,24 +84,10 @@ def main(
Start an Auto-GPT assistant.
"""
# Put imports inside function to avoid importing everything when starting the CLI
import logging
import sys
from colorama import Fore
from autogpt.agent.agent import Agent
from autogpt.config import Config, check_openai_api_key
from autogpt.configurator import create_config
from autogpt.logs import logger
from autogpt.memory import get_memory
from autogpt.prompt import construct_prompt
from autogpt.utils import get_current_git_branch, get_latest_bulletin
from autogpt.main import run_auto_gpt
if ctx.invoked_subcommand is None:
cfg = Config()
# TODO: fill in llm values here
check_openai_api_key()
create_config(
run_auto_gpt(
continuous,
continuous_limit,
ai_settings,
@@ -99,56 +100,9 @@ def main(
browser_name,
allow_downloads,
skip_news,
workspace_directory,
install_plugin_deps,
)
logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO)
ai_name = ""
if not cfg.skip_news:
motd = get_latest_bulletin()
if motd:
logger.typewriter_log("NEWS: ", Fore.GREEN, motd)
git_branch = get_current_git_branch()
if git_branch and git_branch != "stable":
logger.typewriter_log(
"WARNING: ",
Fore.RED,
f"You are running on `{git_branch}` branch "
"- this is not a supported branch.",
)
if sys.version_info < (3, 10):
logger.typewriter_log(
"WARNING: ",
Fore.RED,
"You are running on an older version of Python. "
"Some people have observed problems with certain "
"parts of Auto-GPT with this version. "
"Please consider upgrading to Python 3.10 or higher.",
)
system_prompt = construct_prompt()
# print(prompt)
# Initialize variables
full_message_history = []
next_action_count = 0
# Make a constant:
triggering_prompt = (
"Determine which next command to use, and respond using the"
" format specified above:"
)
# Initialize memory and make sure it is empty.
# this is particularly important for indexing and referencing pinecone memory
memory = get_memory(cfg, init=True)
logger.typewriter_log(
"Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}"
)
logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser)
agent = Agent(
ai_name=ai_name,
memory=memory,
full_message_history=full_message_history,
next_action_count=next_action_count,
system_prompt=system_prompt,
triggering_prompt=triggering_prompt,
)
agent.start_interaction_loop()
if __name__ == "__main__":


@@ -1,9 +1,15 @@
"""Code evaluation module."""
from __future__ import annotations
from autogpt.llm_utils import call_ai_function
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"analyze_code",
"Analyze Code",
'"code": "<full_code_string>"',
)
def analyze_code(code: str) -> list[str]:
"""
A function that takes in a string and returns a response from create chat
@@ -16,10 +22,10 @@ def analyze_code(code: str) -> list[str]:
improve the code.
"""
function_string = "def analyze_code(code: str) -> List[str]:"
function_string = "def analyze_code(code: str) -> list[str]:"
args = [code]
description_string = (
"Analyzes the given code and returns a list of suggestions" " for improvements."
"Analyzes the given code and returns a list of suggestions for improvements."
)
return call_ai_function(function_string, args, description_string)
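A usage sketch of the refactored command (requires LLM access; the snippet and the suggestions shown are illustrative):

suggestions = analyze_code("def add(a, b):\n    return a + b")
# e.g. ["Add type hints to the parameters and return value", ...]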


@@ -1,24 +1,49 @@
"""Commands for converting audio to text."""
import json
import requests
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.workspace import path_in_workspace
cfg = Config()
CFG = Config()
def read_audio_from_file(audio_path):
audio_path = path_in_workspace(audio_path)
with open(audio_path, "rb") as audio_file:
@command(
"read_audio_from_file",
"Convert Audio to text",
'"filename": "<filename>"',
CFG.huggingface_audio_to_text_model,
"Configure huggingface_audio_to_text_model.",
)
def read_audio_from_file(filename: str) -> str:
"""
Convert audio to text.
Args:
filename (str): The path to the audio file
Returns:
str: The text from the audio
"""
with open(filename, "rb") as audio_file:
audio = audio_file.read()
return read_audio(audio)
def read_audio(audio):
model = cfg.huggingface_audio_to_text_model
def read_audio(audio: bytes) -> str:
"""
Convert audio to text.
Args:
audio (bytes): The audio to convert
Returns:
str: The text from the audio
"""
model = CFG.huggingface_audio_to_text_model
api_url = f"https://api-inference.huggingface.co/models/{model}"
api_token = cfg.huggingface_api_token
api_token = CFG.huggingface_api_token
headers = {"Authorization": f"Bearer {api_token}"}
if api_token is None:
@@ -33,4 +58,4 @@ def read_audio(audio):
)
text = json.loads(response.content.decode("utf-8"))["text"]
return "The audio says: " + text
return f"The audio says: {text}"

autogpt/commands/command.py (new file, 156 lines)

@@ -0,0 +1,156 @@
import functools
import importlib
import inspect
from typing import Any, Callable, Optional
# Unique identifier for auto-gpt commands
AUTO_GPT_COMMAND_IDENTIFIER = "auto_gpt_command"
class Command:
"""A class representing a command.
Attributes:
name (str): The name of the command.
description (str): A brief description of what the command does.
signature (str): The signature of the function that the command executes. Defaults to the method's inspected signature.
"""
def __init__(
self,
name: str,
description: str,
method: Callable[..., Any],
signature: str = "",
enabled: bool = True,
disabled_reason: Optional[str] = None,
):
self.name = name
self.description = description
self.method = method
self.signature = signature if signature else str(inspect.signature(self.method))
self.enabled = enabled
self.disabled_reason = disabled_reason
def __call__(self, *args, **kwargs) -> Any:
if not self.enabled:
return f"Command '{self.name}' is disabled: {self.disabled_reason}"
return self.method(*args, **kwargs)
def __str__(self) -> str:
return f"{self.name}: {self.description}, args: {self.signature}"
class CommandRegistry:
"""
The CommandRegistry class is a manager for a collection of Command objects.
It allows the registration, modification, and retrieval of Command objects,
as well as the scanning and loading of command plugins from a specified
directory.
"""
def __init__(self):
self.commands = {}
def _import_module(self, module_name: str) -> Any:
return importlib.import_module(module_name)
def _reload_module(self, module: Any) -> Any:
return importlib.reload(module)
def register(self, cmd: Command) -> None:
self.commands[cmd.name] = cmd
def unregister(self, command_name: str):
if command_name in self.commands:
del self.commands[command_name]
else:
raise KeyError(f"Command '{command_name}' not found in registry.")
def reload_commands(self) -> None:
"""Reloads all loaded command plugins."""
for cmd_name in self.commands:
cmd = self.commands[cmd_name]
module = self._import_module(cmd.__module__)
reloaded_module = self._reload_module(module)
if hasattr(reloaded_module, "register"):
reloaded_module.register(self)
def get_command(self, name: str) -> Callable[..., Any]:
return self.commands[name]
def call(self, command_name: str, **kwargs) -> Any:
if command_name not in self.commands:
raise KeyError(f"Command '{command_name}' not found in registry.")
command = self.commands[command_name]
return command(**kwargs)
def command_prompt(self) -> str:
"""
Returns a string representation of all registered `Command` objects for use in a prompt
"""
commands_list = [
f"{idx + 1}. {str(cmd)}" for idx, cmd in enumerate(self.commands.values())
]
return "\n".join(commands_list)
def import_commands(self, module_name: str) -> None:
"""
Imports the specified Python module containing command plugins.
This method imports the associated module and registers any functions or
classes that are decorated with the `AUTO_GPT_COMMAND_IDENTIFIER` attribute
as `Command` objects. The registered `Command` objects are then added to the
`commands` dictionary of the `CommandRegistry` object.
Args:
module_name (str): The name of the module to import for command plugins.
"""
module = importlib.import_module(module_name)
for attr_name in dir(module):
attr = getattr(module, attr_name)
# Register decorated functions
if hasattr(attr, AUTO_GPT_COMMAND_IDENTIFIER) and getattr(
attr, AUTO_GPT_COMMAND_IDENTIFIER
):
self.register(attr.command)
# Register command classes
elif (
inspect.isclass(attr) and issubclass(attr, Command) and attr != Command
):
cmd_instance = attr()
self.register(cmd_instance)
def command(
name: str,
description: str,
signature: str = "",
enabled: bool = True,
disabled_reason: Optional[str] = None,
) -> Callable[..., Any]:
"""The command decorator is used to create Command objects from ordinary functions."""
def decorator(func: Callable[..., Any]) -> Command:
cmd = Command(
name=name,
description=description,
method=func,
signature=signature,
enabled=enabled,
disabled_reason=disabled_reason,
)
@functools.wraps(func)
def wrapper(*args, **kwargs) -> Any:
return func(*args, **kwargs)
wrapper.command = cmd
setattr(wrapper, AUTO_GPT_COMMAND_IDENTIFIER, True)
return wrapper
return decorator
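For context, a minimal sketch of how the decorator and registry fit together; the "echo" command below is hypothetical, not part of this changeset:

from autogpt.commands.command import CommandRegistry, command

@command("echo", "Echo a string", '"text": "<text>"')
def echo(text: str) -> str:
    """Return the given text unchanged."""
    return text

registry = CommandRegistry()
registry.register(echo.command)  # the decorator attached a Command object
assert registry.call("echo", text="hi") == "hi"
print(registry.command_prompt())  # -> "1. echo: Echo a string, args: ..."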


@@ -1,36 +1,38 @@
"""Execute code in a Docker container"""
import os
import subprocess
from pathlib import Path
import docker
from docker.errors import ImageNotFound
from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
from autogpt.commands.command import command
from autogpt.config import Config
CFG = Config()
def execute_python_file(file: str) -> str:
@command("execute_python_file", "Execute Python File", '"filename": "<filename>"')
def execute_python_file(filename: str) -> str:
"""Execute a Python file in a Docker container and return the output
Args:
file (str): The name of the file to execute
filename (str): The name of the file to execute
Returns:
str: The output of the file
"""
print(f"Executing file '{filename}'")
print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'")
if not file.endswith(".py"):
if not filename.endswith(".py"):
return "Error: Invalid file type. Only .py files are allowed."
file_path = path_in_workspace(file)
if not os.path.isfile(file_path):
return f"Error: File '{file}' does not exist."
if not os.path.isfile(filename):
return f"Error: File '{filename}' does not exist."
if we_are_running_in_a_docker_container():
result = subprocess.run(
f"python {file_path}", capture_output=True, encoding="utf8", shell=True
f"python {filename}", capture_output=True, encoding="utf8", shell=True
)
if result.returncode == 0:
return result.stdout
@@ -39,7 +41,6 @@ def execute_python_file(file: str) -> str:
try:
client = docker.from_env()
# You can replace this with the desired Python image/version
# You can find available Python images on Docker Hub:
# https://hub.docker.com/_/python
@@ -59,12 +60,11 @@ def execute_python_file(file: str) -> str:
print(f"{status}: {progress}")
elif status:
print(status)
container = client.containers.run(
image_name,
f"python {file}",
f"python {Path(filename).relative_to(CFG.workspace_path)}",
volumes={
os.path.abspath(WORKSPACE_PATH): {
CFG.workspace_path: {
"bind": "/workspace",
"mode": "ro",
}
@@ -94,6 +94,15 @@ def execute_python_file(file: str) -> str:
return f"Error: {str(e)}"
@command(
"execute_shell",
"Execute Shell Command, non-interactive commands only",
'"command_line": "<command_line>"',
CFG.execute_local_commands,
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction.",
)
def execute_shell(command_line: str) -> str:
"""Execute a shell command and return the output
@@ -103,10 +112,11 @@ def execute_shell(command_line: str) -> str:
Returns:
str: The output of the command
"""
current_dir = os.getcwd()
current_dir = Path.cwd()
# Change dir into workspace if necessary
if str(WORKSPACE_PATH) not in current_dir:
os.chdir(WORKSPACE_PATH)
if not current_dir.is_relative_to(CFG.workspace_path):
os.chdir(CFG.workspace_path)
print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
@@ -116,10 +126,18 @@ def execute_shell(command_line: str) -> str:
# Change back to whatever the prior working dir was
os.chdir(current_dir)
return output
@command(
"execute_shell_popen",
"Execute Shell Command, non-interactive commands only",
'"command_line": "<command_line>"',
CFG.execute_local_commands,
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction.",
)
def execute_shell_popen(command_line) -> str:
"""Execute a shell command with Popen and returns an english description
of the event and the process id
@@ -130,10 +148,11 @@ def execute_shell_popen(command_line) -> str:
Returns:
str: Description of the fact that the process started and its id
"""
current_dir = os.getcwd()
# Change dir into workspace if necessary
if str(WORKSPACE_PATH) not in current_dir:
os.chdir(WORKSPACE_PATH)
if CFG.workspace_path not in current_dir:
os.chdir(CFG.workspace_path)
print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")


@@ -9,12 +9,12 @@ import requests
from colorama import Back, Fore
from requests.adapters import HTTPAdapter, Retry
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.spinner import Spinner
from autogpt.utils import readable_file_size
from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
LOG_FILE = "file_logger.txt"
LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE
CFG = Config()
def check_duplicate_operation(operation: str, filename: str) -> bool:
@@ -27,7 +27,7 @@ def check_duplicate_operation(operation: str, filename: str) -> bool:
Returns:
bool: True if the operation has already been performed on the file
"""
log_content = read_file(LOG_FILE)
log_content = read_file(CFG.file_logger_path)
log_entry = f"{operation}: {filename}\n"
return log_entry in log_content
@@ -40,13 +40,7 @@ def log_operation(operation: str, filename: str) -> None:
filename (str): The name of the file the operation was performed on
"""
log_entry = f"{operation}: {filename}\n"
# Create the log file if it doesn't exist
if not os.path.exists(LOG_FILE_PATH):
with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
f.write("File Operation Logger ")
append_to_file(LOG_FILE, log_entry, shouldLog=False)
append_to_file(CFG.file_logger_path, log_entry, should_log=False)
def split_file(
@@ -81,6 +75,7 @@ def split_file(
start += max_length - overlap
@command("read_file", "Read file", '"filename": "<filename>"')
def read_file(filename: str) -> str:
"""Read a file and return the contents
@@ -91,8 +86,7 @@ def read_file(filename: str) -> str:
str: The contents of the file
"""
try:
filepath = path_in_workspace(filename)
with open(filepath, "r", encoding="utf-8") as f:
with open(filename, "r", encoding="utf-8") as f:
content = f.read()
return content
except Exception as e:
@@ -133,6 +127,7 @@ def ingest_file(
print(f"Error while ingesting file '{filename}': {str(e)}")
@command("write_to_file", "Write to file", '"filename": "<filename>", "text": "<text>"')
def write_to_file(filename: str, text: str) -> str:
"""Write text to a file
@@ -146,11 +141,9 @@ def write_to_file(filename: str, text: str) -> str:
if check_duplicate_operation("write", filename):
return "Error: File has already been updated."
try:
filepath = path_in_workspace(filename)
directory = os.path.dirname(filepath)
if not os.path.exists(directory):
os.makedirs(directory)
with open(filepath, "w", encoding="utf-8") as f:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "w", encoding="utf-8") as f:
f.write(text)
log_operation("write", filename)
return "File written to successfully."
@@ -158,22 +151,27 @@ def write_to_file(filename: str, text: str) -> str:
return f"Error: {str(e)}"
def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
@command(
"append_to_file", "Append to file", '"filename": "<filename>", "text": "<text>"'
)
def append_to_file(filename: str, text: str, should_log: bool = True) -> str:
"""Append text to a file
Args:
filename (str): The name of the file to append to
text (str): The text to append to the file
should_log (bool): Should log output
Returns:
str: A message indicating success or failure
"""
try:
filepath = path_in_workspace(filename)
with open(filepath, "a") as f:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "a") as f:
f.write(text)
if shouldLog:
if should_log:
log_operation("append", filename)
return "Text appended successfully."
@@ -181,6 +179,7 @@ def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
return f"Error: {str(e)}"
@command("delete_file", "Delete file", '"filename": "<filename>"')
def delete_file(filename: str) -> str:
"""Delete a file
@@ -193,14 +192,14 @@ def delete_file(filename: str) -> str:
if check_duplicate_operation("delete", filename):
return "Error: File has already been deleted."
try:
filepath = path_in_workspace(filename)
os.remove(filepath)
os.remove(filename)
log_operation("delete", filename)
return "File deleted successfully."
except Exception as e:
return f"Error: {str(e)}"
@command("search_files", "Search Files", '"directory": "<directory>"')
def search_files(directory: str) -> list[str]:
"""Search for files in a directory
@@ -212,29 +211,34 @@ def search_files(directory: str) -> list[str]:
"""
found_files = []
if directory in {"", "/"}:
search_directory = WORKSPACE_PATH
else:
search_directory = path_in_workspace(directory)
for root, _, files in os.walk(search_directory):
for root, _, files in os.walk(directory):
for file in files:
if file.startswith("."):
continue
relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH)
relative_path = os.path.relpath(
os.path.join(root, file), CFG.workspace_path
)
found_files.append(relative_path)
return found_files
@command(
"download_file",
"Download File",
'"url": "<url>", "filename": "<filename>"',
CFG.allow_downloads,
"Error: You do not have user authorization to download files locally.",
)
def download_file(url, filename):
"""Downloads a file
Args:
url (str): URL of the file to download
filename (str): Filename to save the file as
"""
safe_filename = path_in_workspace(filename)
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}"
with Spinner(message) as spinner:
session = requests.Session()
@@ -251,7 +255,7 @@ def download_file(url, filename):
total_size = int(r.headers.get("Content-Length", 0))
downloaded_size = 0
with open(safe_filename, "wb") as f:
with open(filename, "wb") as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
downloaded_size += len(chunk)


@@ -1,26 +1,35 @@
"""Git operations for autogpt"""
import git
from git.repo import Repo
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.workspace import path_in_workspace
from autogpt.url_utils.validators import validate_url
CFG = Config()
def clone_repository(repo_url: str, clone_path: str) -> str:
"""Clone a GitHub repository locally
@command(
"clone_repository",
"Clone Repository",
'"repository_url": "<repository_url>", "clone_path": "<clone_path>"',
CFG.github_username and CFG.github_api_key,
"Configure github_username and github_api_key.",
)
@validate_url
def clone_repository(repository_url: str, clone_path: str) -> str:
"""Clone a GitHub repository locally.
Args:
repo_url (str): The URL of the repository to clone
clone_path (str): The path to clone the repository to
repository_url (str): The URL of the repository to clone.
clone_path (str): The path to clone the repository to.
Returns:
str: The result of the clone operation"""
split_url = repo_url.split("//")
str: The result of the clone operation.
"""
split_url = repository_url.split("//")
auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url)
safe_clone_path = path_in_workspace(clone_path)
try:
git.Repo.clone_from(auth_repo_url, safe_clone_path)
return f"""Cloned {repo_url} to {safe_clone_path}"""
Repo.clone_from(auth_repo_url, clone_path)
return f"""Cloned {repository_url} to {clone_path}"""
except Exception as e:
return f"Error: {str(e)}"


@@ -5,11 +5,13 @@ import json
from duckduckgo_search import ddg
from autogpt.commands.command import command
from autogpt.config import Config
CFG = Config()
@command("google", "Google Search", '"query": "<query>"', not CFG.google_api_key)
def google_search(query: str, num_results: int = 8) -> str:
"""Return the results of a Google search
@@ -31,9 +33,17 @@ def google_search(query: str, num_results: int = 8) -> str:
for j in results:
search_results.append(j)
return json.dumps(search_results, ensure_ascii=False, indent=4)
results = json.dumps(search_results, ensure_ascii=False, indent=4)
return safe_google_results(results)
@command(
"google",
"Google Search",
'"query": "<query>"',
bool(CFG.google_api_key),
"Configure google_api_key.",
)
def google_official_search(query: str, num_results: int = 8) -> str | list[str]:
"""Return the results of a Google search using the official Google API
@@ -82,6 +92,26 @@ def google_official_search(query: str, num_results: int = 8) -> str | list[str]:
return "Error: The provided Google API key is invalid or missing."
else:
return f"Error: {e}"
# google_result can be a list or a string depending on the search results
# Return the list of search result URLs
return search_results_links
return safe_google_results(search_results_links)
def safe_google_results(results: str | list) -> str:
"""
Return the results of a google search in a safe format.
Args:
results (str | list): The search results.
Returns:
str: The results of the search.
"""
if isinstance(results, list):
safe_message = json.dumps(
[result.encode("utf-8", "ignore") for result in results]
)
else:
safe_message = results.encode("utf-8", "ignore").decode("utf-8")
return safe_message
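Illustration of the new helper: lists are JSON-encoded, plain strings are round-tripped through UTF-8 with undecodable bytes dropped.

print(safe_google_results(["https://example.com", "https://example.org"]))
# -> '["https://example.com", "https://example.org"]'
print(safe_google_results("plain result text"))
# -> 'plain result text'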


@@ -1,6 +1,5 @@
""" Image Generation Module for AutoGPT."""
import io
import os.path
import uuid
from base64 import b64decode
@@ -8,12 +7,13 @@ import openai
import requests
from PIL import Image
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.workspace import path_in_workspace
CFG = Config()
@command("generate_image", "Generate Image", '"prompt": "<prompt>"', CFG.image_provider)
def generate_image(prompt: str, size: int = 256) -> str:
"""Generate an image from a prompt.
@@ -24,7 +24,7 @@ def generate_image(prompt: str, size: int = 256) -> str:
Returns:
str: The filename of the image
"""
filename = f"{str(uuid.uuid4())}.jpg"
filename = f"{CFG.workspace_path}/{str(uuid.uuid4())}.jpg"
# DALL-E
if CFG.image_provider == "dalle":
@@ -71,22 +71,22 @@ def generate_image_with_hf(prompt: str, filename: str) -> str:
image = Image.open(io.BytesIO(response.content))
print(f"Image Generated for prompt:{prompt}")
image.save(path_in_workspace(filename))
image.save(filename)
return f"Saved to disk:{filename}"
def generate_image_with_dalle(prompt: str, filename: str) -> str:
def generate_image_with_dalle(prompt: str, filename: str, size: int) -> str:
"""Generate an image with DALL-E.
Args:
prompt (str): The prompt to use
filename (str): The filename to save the image to
size (int): The size of the image
Returns:
str: The filename of the image
"""
openai.api_key = CFG.openai_api_key
# Check for supported image sizes
if size not in [256, 512, 1024]:
@@ -101,13 +101,14 @@ def generate_image_with_dalle(prompt: str, filename: str) -> str:
n=1,
size=f"{size}x{size}",
response_format="b64_json",
api_key=CFG.openai_api_key,
)
print(f"Image Generated for prompt:{prompt}")
image_data = b64decode(response["data"][0]["b64_json"])
with open(path_in_workspace(filename), mode="wb") as png:
with open(filename, mode="wb") as png:
png.write(image_data)
return f"Saved to disk:{filename}"
@@ -158,6 +159,6 @@ def generate_image_with_sd_webui(
response = response.json()
b64 = b64decode(response["images"][0].split(",", 1)[0])
image = Image.open(io.BytesIO(b64))
image.save(path_in_workspace(filename))
image.save(filename)
return f"Saved to disk:{filename}"


@@ -2,23 +2,29 @@ from __future__ import annotations
import json
from autogpt.llm_utils import call_ai_function
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"improve_code",
"Get Improved Code",
'"suggestions": "<list_of_suggestions>", "code": "<full_code_string>"',
)
def improve_code(suggestions: list[str], code: str) -> str:
"""
A function that takes in code and suggestions and returns a response from create
chat completion api call.
Parameters:
suggestions (List): A list of suggestions around what needs to be improved.
suggestions (list): A list of suggestions around what needs to be improved.
code (str): Code to be improved.
Returns:
A result string from create chat completion. Improved code in response.
"""
function_string = (
"def generate_improved_code(suggestions: List[str], code: str) -> str:"
"def generate_improved_code(suggestions: list[str], code: str) -> str:"
)
args = [json.dumps(suggestions), code]
description_string = (


@@ -1,12 +1,27 @@
"""A module that contains a command to send a tweet."""
import os
import tweepy
from dotenv import load_dotenv
load_dotenv()
from autogpt.commands.command import command
def send_tweet(tweet_text):
@command(
"send_tweet",
"Send Tweet",
'"tweet_text": "<tweet_text>"',
)
def send_tweet(tweet_text: str) -> str:
"""
A function that takes in a string and posts it as a tweet via the Twitter API.
Args:
tweet_text (str): Text to be tweeted.
Returns:
A result from sending the tweet.
"""
consumer_key = os.environ.get("TW_CONSUMER_KEY")
consumer_secret = os.environ.get("TW_CONSUMER_SECRET")
access_token = os.environ.get("TW_ACCESS_TOKEN")
@@ -21,6 +36,6 @@ def send_tweet(tweet_text):
# Send tweet
try:
api.update_status(tweet_text)
print("Tweet sent successfully!")
return "Tweet sent successfully!"
except tweepy.TweepyException as e:
print("Error sending tweet: {}".format(e.reason))
return f"Error sending tweet: {e.reason}"


@@ -1,89 +1,21 @@
"""Browse a webpage and summarize it using the LLM model"""
from __future__ import annotations
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup
from requests import Response
from requests.compat import urljoin
from autogpt.config import Config
from autogpt.memory import get_memory
from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
from autogpt.url_utils.validators import validate_url
CFG = Config()
memory = get_memory(CFG)
session = requests.Session()
session.headers.update({"User-Agent": CFG.user_agent})
def is_valid_url(url: str) -> bool:
"""Check if the URL is valid
Args:
url (str): The URL to check
Returns:
bool: True if the URL is valid, False otherwise
"""
try:
result = urlparse(url)
return all([result.scheme, result.netloc])
except ValueError:
return False
def sanitize_url(url: str) -> str:
"""Sanitize the URL
Args:
url (str): The URL to sanitize
Returns:
str: The sanitized URL
"""
return urljoin(url, urlparse(url).path)
def check_local_file_access(url: str) -> bool:
"""Check if the URL is a local file
Args:
url (str): The URL to check
Returns:
bool: True if the URL is a local file, False otherwise
"""
local_prefixes = [
"file:///",
"file://localhost/",
"file://localhost",
"http://localhost",
"http://localhost/",
"https://localhost",
"https://localhost/",
"http://2130706433",
"http://2130706433/",
"https://2130706433",
"https://2130706433/",
"http://127.0.0.1/",
"http://127.0.0.1",
"https://127.0.0.1/",
"https://127.0.0.1",
"https://0.0.0.0/",
"https://0.0.0.0",
"http://0.0.0.0/",
"http://0.0.0.0",
"http://0000",
"http://0000/",
"https://0000",
"https://0000/",
]
return any(url.startswith(prefix) for prefix in local_prefixes)
@validate_url
def get_response(
url: str, timeout: int = 10
) -> tuple[None, str] | tuple[Response, None]:
@@ -101,17 +33,7 @@ def get_response(
requests.exceptions.RequestException: If the HTTP request fails
"""
try:
# Restrict access to local files
if check_local_file_access(url):
raise ValueError("Access to local files is restricted")
# Most basic check if the URL is valid:
if not url.startswith("http://") and not url.startswith("https://"):
raise ValueError("Invalid URL format")
sanitized_url = sanitize_url(url)
response = session.get(sanitized_url, timeout=timeout)
response = session.get(url, timeout=timeout)
# Check if the response contains an HTTP error
if response.status_code >= 400:


@@ -7,6 +7,7 @@ from sys import platform
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options as FirefoxOptions
@@ -18,13 +19,21 @@ from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.firefox import GeckoDriverManager
import autogpt.processing.text as summary
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
from autogpt.url_utils.validators import validate_url
FILE_DIR = Path(__file__).parent.parent
CFG = Config()
@command(
"browse_website",
"Browse Website",
'"url": "<url>", "question": "<what_you_want_to_find_on_website>"',
)
@validate_url
def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
"""Browse a website and return the answer and links to the user
@@ -35,7 +44,14 @@ def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
Returns:
Tuple[str, WebDriver]: The answer and links to the user and the webdriver
"""
driver, text = scrape_text_with_selenium(url)
try:
driver, text = scrape_text_with_selenium(url)
except WebDriverException as e:
# These errors are often quite long and include lots of context.
# Just grab the first line.
msg = e.msg.split("\n")[0]
return f"Error: {msg}", None
add_header(driver)
summary_text = summary.summarize_text(url, text, question, driver)
links = scrape_links_with_selenium(driver, url)
@@ -70,6 +86,9 @@ def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
)
if CFG.selenium_web_browser == "firefox":
if CFG.selenium_headless:
options.headless = True
options.add_argument("--disable-gpu")
driver = webdriver.Firefox(
executable_path=GeckoDriverManager().install(), options=options
)
@@ -84,11 +103,16 @@ def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
options.add_argument("--no-sandbox")
if CFG.selenium_headless:
options.add_argument("--headless")
options.add_argument("--headless=new")
options.add_argument("--disable-gpu")
chromium_driver_path = Path("/usr/bin/chromedriver")
driver = webdriver.Chrome(
executable_path=ChromeDriverManager().install(), options=options
executable_path=chromium_driver_path
if chromium_driver_path.exists()
else ChromeDriverManager().install(),
options=options,
)
driver.get(url)


@@ -3,9 +3,15 @@ from __future__ import annotations
import json
from autogpt.llm_utils import call_ai_function
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"write_tests",
"Write Tests",
'"code": "<full_code_string>", "focus": "<list_of_focus_areas>"',
)
def write_tests(code: str, focus: list[str]) -> str:
"""
A function that takes in code and focus topics and returns a response from create


@@ -3,12 +3,9 @@ This module contains the configuration classes for AutoGPT.
"""
from autogpt.config.ai_config import AIConfig
from autogpt.config.config import Config, check_openai_api_key
from autogpt.config.singleton import AbstractSingleton, Singleton
__all__ = [
"check_openai_api_key",
"AbstractSingleton",
"AIConfig",
"Config",
"Singleton",
]


@@ -5,10 +5,18 @@ A module that contains the AIConfig class object that contains the configuration
from __future__ import annotations
import os
from typing import Type
import platform
from pathlib import Path
from typing import Optional, Type
import distro
import yaml
from autogpt.prompts.generator import PromptGenerator
# Soon this will go in a folder where it remembers more stuff about the run(s)
SAVE_FILE = str(Path(os.getcwd()) / "ai_settings.yaml")
class AIConfig:
"""
@@ -18,10 +26,15 @@ class AIConfig:
ai_name (str): The name of the AI.
ai_role (str): The description of the AI's role.
ai_goals (list): The list of objectives the AI is supposed to complete.
api_budget (float): The maximum dollar value for API calls (0.0 means infinite)
"""
def __init__(
self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None
self,
ai_name: str = "",
ai_role: str = "",
ai_goals: list | None = None,
api_budget: float = 0.0,
) -> None:
"""
Initialize a class instance
@@ -30,6 +43,7 @@ class AIConfig:
ai_name (str): The name of the AI.
ai_role (str): The description of the AI's role.
ai_goals (list): The list of objectives the AI is supposed to complete.
api_budget (float): The maximum dollar value for API calls (0.0 means infinite)
Returns:
None
"""
@@ -38,14 +52,14 @@ class AIConfig:
self.ai_name = ai_name
self.ai_role = ai_role
self.ai_goals = ai_goals
# Soon this will go in a folder where it remembers more stuff about the run(s)
SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml")
self.api_budget = api_budget
self.prompt_generator = None
self.command_registry = None
@staticmethod
def load(config_file: str = SAVE_FILE) -> "AIConfig":
"""
Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from
Returns class object with parameters (ai_name, ai_role, ai_goals, api_budget) loaded from
yaml file if yaml file exists,
else returns class with no parameters.
@@ -66,8 +80,9 @@ class AIConfig:
ai_name = config_params.get("ai_name", "")
ai_role = config_params.get("ai_role", "")
ai_goals = config_params.get("ai_goals", [])
api_budget = config_params.get("api_budget", 0.0)
# type: Type[AIConfig]
return AIConfig(ai_name, ai_role, ai_goals)
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
def save(self, config_file: str = SAVE_FILE) -> None:
"""
@@ -85,11 +100,14 @@ class AIConfig:
"ai_name": self.ai_name,
"ai_role": self.ai_role,
"ai_goals": self.ai_goals,
"api_budget": self.api_budget,
}
with open(config_file, "w", encoding="utf-8") as file:
yaml.dump(config, file, allow_unicode=True)
def construct_full_prompt(self) -> str:
def construct_full_prompt(
self, prompt_generator: Optional[PromptGenerator] = None
) -> str:
"""
Returns a prompt to the user with the class information in an organized fashion.
@@ -98,7 +116,7 @@ class AIConfig:
Returns:
full_prompt (str): A string containing the initial prompt for the user
including the ai_name, ai_role and ai_goals.
including the ai_name, ai_role, ai_goals, and api_budget.
"""
prompt_start = (
@@ -108,14 +126,38 @@ class AIConfig:
""
)
from autogpt.prompt import get_prompt
from autogpt.config import Config
from autogpt.prompts.prompt import build_default_prompt_generator
cfg = Config()
if prompt_generator is None:
prompt_generator = build_default_prompt_generator()
prompt_generator.goals = self.ai_goals
prompt_generator.name = self.ai_name
prompt_generator.role = self.ai_role
prompt_generator.command_registry = self.command_registry
for plugin in cfg.plugins:
if not plugin.can_handle_post_prompt():
continue
prompt_generator = plugin.post_prompt(prompt_generator)
if cfg.execute_local_commands:
# add OS info to prompt
os_name = platform.system()
os_info = (
platform.platform(terse=True)
if os_name != "Linux"
else distro.name(pretty=True)
)
prompt_start += f"\nThe OS you are running on is: {os_info}"
# Construct full prompt
full_prompt = (
f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n"
)
full_prompt = f"You are {prompt_generator.name}, {prompt_generator.role}\n{prompt_start}\n\nGOALS:\n\n"
for i, goal in enumerate(self.ai_goals):
full_prompt += f"{i+1}. {goal}\n"
full_prompt += f"\n\n{get_prompt()}"
if self.api_budget > 0.0:
full_prompt += f"\nIt takes money to let you run. Your API budget is ${self.api_budget:.3f}"
self.prompt_generator = prompt_generator
full_prompt += f"\n\n{prompt_generator.generate_prompt_string()}"
return full_prompt
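A minimal sketch of the new api_budget field round-tripping through save/load; the name, role, goals, and budget below are illustrative:

from autogpt.config.ai_config import AIConfig

config = AIConfig(
    ai_name="ResearchGPT",
    ai_role="an agent that surveys ML papers",
    ai_goals=["Summarize three recent papers"],
    api_budget=1.50,  # dollars; 0.0 means unlimited
)
config.save()  # writes ai_settings.yaml
assert AIConfig.load().api_budget == 1.50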


@@ -1,14 +1,13 @@
"""Configuration class to store the state of bools for different scripts access."""
import os
from typing import List
import openai
import yaml
from auto_gpt_plugin_template import AutoGPTPluginTemplate
from colorama import Fore
from dotenv import load_dotenv
from autogpt.config.singleton import Singleton
load_dotenv(verbose=True)
from autogpt.singleton import Singleton
class Config(metaclass=Singleton):
@@ -18,6 +17,9 @@ class Config(metaclass=Singleton):
def __init__(self) -> None:
"""Initialize the Config class"""
self.workspace_path = None
self.file_logger_path = None
self.debug_mode = False
self.continuous_mode = False
self.continuous_limit = 0
@@ -26,6 +28,8 @@ class Config(metaclass=Singleton):
self.allow_downloads = False
self.skip_news = False
self.authorise_key = os.getenv("AUTHORISE_COMMAND_KEY", "y")
self.exit_key = os.getenv("EXIT_KEY", "n")
self.ai_settings_file = os.getenv("AI_SETTINGS_FILE", "ai_settings.yaml")
self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
@@ -59,6 +63,8 @@ class Config(metaclass=Singleton):
self.use_mac_os_tts = False
self.use_mac_os_tts = os.getenv("USE_MAC_OS_TTS")
self.chat_messages_enabled = os.getenv("CHAT_MESSAGES_ENABLED") == "True"
self.use_brian_tts = False
self.use_brian_tts = os.getenv("USE_BRIAN_TTS")
@@ -83,9 +89,12 @@ class Config(metaclass=Singleton):
os.getenv("USE_WEAVIATE_EMBEDDED", "False") == "True"
)
# milvus configuration, e.g., localhost:19530.
# milvus or zilliz cloud configuration.
self.milvus_addr = os.getenv("MILVUS_ADDR", "localhost:19530")
self.milvus_username = os.getenv("MILVUS_USERNAME")
self.milvus_password = os.getenv("MILVUS_PASSWORD")
self.milvus_collection = os.getenv("MILVUS_COLLECTION", "autogpt")
self.milvus_secure = os.getenv("MILVUS_SECURE") == "True"
self.image_provider = os.getenv("IMAGE_PROVIDER")
self.image_size = int(os.getenv("IMAGE_SIZE", 256))
@@ -120,8 +129,17 @@ class Config(metaclass=Singleton):
# Note that indexes must be created on db 0 in redis, this is not configurable.
self.memory_backend = os.getenv("MEMORY_BACKEND", "local")
# Initialize the OpenAI API client
openai.api_key = self.openai_api_key
self.plugins_dir = os.getenv("PLUGINS_DIR", "plugins")
self.plugins: List[AutoGPTPluginTemplate] = []
self.plugins_openai = []
plugins_allowlist = os.getenv("ALLOWLISTED_PLUGINS")
if plugins_allowlist:
self.plugins_allowlist = plugins_allowlist.split(",")
else:
self.plugins_allowlist = []
self.plugins_denylist = []
def get_azure_deployment_id_for_model(self, model: str) -> str:
"""
@@ -161,11 +179,8 @@ class Config(metaclass=Singleton):
Returns:
None
"""
try:
with open(config_file) as file:
config_params = yaml.load(file, Loader=yaml.FullLoader)
except FileNotFoundError:
config_params = {}
with open(config_file) as file:
config_params = yaml.load(file, Loader=yaml.FullLoader)
self.openai_api_type = config_params.get("azure_api_type") or "azure"
self.openai_api_base = config_params.get("azure_api_base") or ""
self.openai_api_version = (
@@ -241,6 +256,18 @@ class Config(metaclass=Singleton):
"""Set the debug mode value."""
self.debug_mode = value
def set_plugins(self, value: list) -> None:
"""Set the plugins value."""
self.plugins = value
def set_temperature(self, value: int) -> None:
"""Set the temperature value."""
self.temperature = value
def set_memory_backend(self, name: str) -> None:
"""Set the memory backend name."""
self.memory_backend = name
def check_openai_api_key() -> None:
"""Check if the OpenAI API key is set in config.py or as an environment variable."""
@@ -249,6 +276,7 @@ def check_openai_api_key() -> None:
print(
Fore.RED
+ "Please set your OpenAI API key in .env or as an environment variable."
+ Fore.RESET
)
print("You can get your key from https://platform.openai.com/account/api-keys")
exit(1)


@@ -112,6 +112,9 @@ def create_config(
CFG.ai_settings_file = file
CFG.skip_reprompt = True
if browser_name:
CFG.selenium_web_browser = browser_name
if allow_downloads:
logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED")
logger.typewriter_log(
@@ -129,6 +132,3 @@ def create_config(
if skip_news:
CFG.skip_news = True
if browser_name:
CFG.selenium_web_browser = browser_name


@@ -11,7 +11,7 @@ from regex import regex
from autogpt.config import Config
from autogpt.json_utils.json_fix_general import correct_json
from autogpt.llm_utils import call_ai_function
from autogpt.llm import call_ai_function
from autogpt.logs import logger
from autogpt.speech import say_text
@@ -91,14 +91,33 @@ def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
Returns:
str: The fixed JSON string.
"""
assistant_reply = assistant_reply.strip()
if assistant_reply.startswith("```json"):
assistant_reply = assistant_reply[7:]
if assistant_reply.endswith("```"):
assistant_reply = assistant_reply[:-3]
try:
return json.loads(assistant_reply) # just check the validity
except json.JSONDecodeError: # noqa: E722
pass
if assistant_reply.startswith("json "):
assistant_reply = assistant_reply[5:]
assistant_reply = assistant_reply.strip()
try:
return json.loads(assistant_reply) # just check the validity
except json.JSONDecodeError: # noqa: E722
pass
# Parse and print Assistant response
assistant_reply_json = fix_and_parse_json(assistant_reply)
logger.debug("Assistant reply JSON: %s", str(assistant_reply_json))
if assistant_reply_json == {}:
assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
assistant_reply
)
logger.debug("Assistant reply JSON 2: %s", str(assistant_reply_json))
if assistant_reply_json != {}:
return assistant_reply_json
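Illustration of the new fence-stripping path: a reply wrapped in markdown fences now parses on the first attempt.

reply = '```json\n{"command": {"name": "do_nothing"}}\n```'
fixed = fix_json_using_multiple_techniques(reply)
assert fixed["command"]["name"] == "do_nothing"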


@@ -8,6 +8,7 @@ from autogpt.config import Config
from autogpt.logs import logger
CFG = Config()
LLM_DEFAULT_RESPONSE_FORMAT = "llm_response_format_1"
def extract_char_position(error_message: str) -> int:
@@ -28,10 +29,10 @@ def extract_char_position(error_message: str) -> int:
raise ValueError("Character position not found in the error message.")
def validate_json(json_object: object, schema_name: object) -> object:
def validate_json(json_object: object, schema_name: str) -> dict | None:
"""
:type schema_name: object
:param schema_name:
:param schema_name: str
:type json_object: object
"""
with open(f"autogpt/json_utils/{schema_name}.json", "r") as f:
@@ -48,7 +49,32 @@ def validate_json(json_object: object, schema_name: object) -> object:
for error in errors:
logger.error(f"Error: {error.message}")
elif CFG.debug_mode:
return None
if CFG.debug_mode:
print("The JSON object is valid.")
return json_object
def validate_json_string(json_string: str, schema_name: str) -> dict | None:
"""
:type schema_name: object
:param schema_name: str
:type json_object: object
"""
try:
json_loaded = json.loads(json_string)
return validate_json(json_loaded, schema_name)
except:
return None
def is_string_valid_json(json_string: str, schema_name: str) -> bool:
"""
:type schema_name: object
:param schema_name: str
:type json_object: object
"""
return validate_json_string(json_string, schema_name) is not None
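A usage sketch (assumes the repo's schema file is on disk; whether a given reply validates depends on the schema's required fields):

reply = '{"thoughts": {}, "command": {"name": "do_nothing", "args": {}}}'
if is_string_valid_json(reply, LLM_DEFAULT_RESPONSE_FORMAT):
    parsed = validate_json_string(reply, LLM_DEFAULT_RESPONSE_FORMAT)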

autogpt/llm/__init__.py (new file, 38 lines)

@@ -0,0 +1,38 @@
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import (
ChatModelInfo,
ChatModelResponse,
EmbeddingModelInfo,
EmbeddingModelResponse,
LLMResponse,
Message,
ModelInfo,
)
from autogpt.llm.chat import chat_with_ai, create_chat_message, generate_context
from autogpt.llm.llm_utils import (
call_ai_function,
create_chat_completion,
get_ada_embedding,
)
from autogpt.llm.modelsinfo import COSTS
from autogpt.llm.token_counter import count_message_tokens, count_string_tokens
__all__ = [
"ApiManager",
"Message",
"ModelInfo",
"ChatModelInfo",
"EmbeddingModelInfo",
"LLMResponse",
"ChatModelResponse",
"EmbeddingModelResponse",
"create_chat_message",
"generate_context",
"chat_with_ai",
"call_ai_function",
"create_chat_completion",
"get_ada_embedding",
"COSTS",
"count_message_tokens",
"count_string_tokens",
]

autogpt/llm/api_manager.py (new file, 128 lines)

@@ -0,0 +1,128 @@
from __future__ import annotations
import openai
from autogpt.config import Config
from autogpt.llm.modelsinfo import COSTS
from autogpt.logs import logger
from autogpt.singleton import Singleton
class ApiManager(metaclass=Singleton):
def __init__(self):
self.total_prompt_tokens = 0
self.total_completion_tokens = 0
self.total_cost = 0
self.total_budget = 0
def reset(self):
self.total_prompt_tokens = 0
self.total_completion_tokens = 0
self.total_cost = 0
self.total_budget = 0.0
def create_chat_completion(
self,
messages: list, # type: ignore
model: str | None = None,
temperature: float = None,
max_tokens: int | None = None,
deployment_id=None,
) -> str:
"""
Create a chat completion and update the cost.
Args:
messages (list): The list of messages to send to the API.
model (str): The model to use for the API call.
temperature (float): The temperature to use for the API call.
max_tokens (int): The maximum number of tokens for the API call.
Returns:
str: The AI's response.
"""
cfg = Config()
if temperature is None:
temperature = cfg.temperature
if deployment_id is not None:
response = openai.ChatCompletion.create(
deployment_id=deployment_id,
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
api_key=cfg.openai_api_key,
)
else:
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
api_key=cfg.openai_api_key,
)
logger.debug(f"Response: {response}")
prompt_tokens = response.usage.prompt_tokens
completion_tokens = response.usage.completion_tokens
self.update_cost(prompt_tokens, completion_tokens, model)
return response
def update_cost(self, prompt_tokens, completion_tokens, model):
"""
Update the total cost, prompt tokens, and completion tokens.
Args:
prompt_tokens (int): The number of tokens used in the prompt.
completion_tokens (int): The number of tokens used in the completion.
model (str): The model used for the API call.
"""
self.total_prompt_tokens += prompt_tokens
self.total_completion_tokens += completion_tokens
self.total_cost += (
prompt_tokens * COSTS[model]["prompt"]
+ completion_tokens * COSTS[model]["completion"]
) / 1000
logger.debug(f"Total running cost: ${self.total_cost:.3f}")
def set_total_budget(self, total_budget):
"""
Sets the total user-defined budget for API calls.
Args:
total_budget (float): The total budget for API calls.
"""
self.total_budget = total_budget
def get_total_prompt_tokens(self):
"""
Get the total number of prompt tokens.
Returns:
int: The total number of prompt tokens.
"""
return self.total_prompt_tokens
def get_total_completion_tokens(self):
"""
Get the total number of completion tokens.
Returns:
int: The total number of completion tokens.
"""
return self.total_completion_tokens
def get_total_cost(self):
"""
Get the total cost of API calls.
Returns:
float: The total cost of API calls.
"""
return self.total_cost
def get_total_budget(self):
"""
Get the total user-defined budget for API calls.
Returns:
float: The total budget for API calls.
"""
return self.total_budget
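A minimal sketch of the budget bookkeeping; no API call is made here, update_cost is fed made-up token counts, and the model key is assumed to exist in COSTS:

from autogpt.llm.api_manager import ApiManager

api_manager = ApiManager()          # Singleton: one shared instance
api_manager.set_total_budget(1.00)  # dollars
api_manager.update_cost(prompt_tokens=1200, completion_tokens=300, model="gpt-3.5-turbo")
remaining = api_manager.get_total_budget() - api_manager.get_total_cost()
print(f"${remaining:.3f} of budget remaining")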

autogpt/llm/base.py (new file, 65 lines)

@@ -0,0 +1,65 @@
from dataclasses import dataclass, field
from typing import List, TypedDict
class Message(TypedDict):
"""OpenAI Message object containing a role and the message content"""
role: str
content: str
@dataclass
class ModelInfo:
"""Struct for model information.
Would be lovely to eventually get this directly from APIs, but needs to be scraped from
websites for now.
"""
name: str
prompt_token_cost: float
completion_token_cost: float
max_tokens: int
@dataclass
class ChatModelInfo(ModelInfo):
"""Struct for chat model information."""
pass
@dataclass
class EmbeddingModelInfo(ModelInfo):
"""Struct for embedding model information."""
embedding_dimensions: int
@dataclass
class LLMResponse:
"""Standard response struct for a response from an LLM model."""
model_info: ModelInfo
prompt_tokens_used: int = 0
completion_tokens_used: int = 0
@dataclass
class EmbeddingModelResponse(LLMResponse):
"""Standard response struct for a response from an embedding model."""
embedding: List[float] = field(default_factory=list)
def __post_init__(self):
if self.completion_tokens_used:
raise ValueError("Embeddings should not have completion tokens used.")
@dataclass
class ChatModelResponse(LLMResponse):
"""Standard response struct for a response from an LLM model."""
content: str = None
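Illustration of the new structs; the token costs below are placeholders, not a price list:

gpt35 = ChatModelInfo(
    name="gpt-3.5-turbo",
    prompt_token_cost=0.002,
    completion_token_cost=0.002,
    max_tokens=4096,
)
response = ChatModelResponse(
    model_info=gpt35,
    prompt_tokens_used=1200,
    completion_tokens_used=300,
    content="Acknowledged.",
)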


@@ -1,16 +1,26 @@
import time
from random import shuffle
from openai.error import RateLimitError
from autogpt import token_counter
from autogpt.config import Config
from autogpt.llm_utils import create_chat_completion
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import Message
from autogpt.llm.llm_utils import create_chat_completion
from autogpt.llm.token_counter import count_message_tokens
from autogpt.logs import logger
from autogpt.memory_management.store_memory import (
save_memory_trimmed_from_context_window,
)
from autogpt.memory_management.summary_memory import (
get_newly_trimmed_messages,
update_running_summary,
)
cfg = Config()
def create_chat_message(role, content):
def create_chat_message(role, content) -> Message:
"""
Create a chat message with the given role and content.
@@ -30,17 +40,17 @@ def generate_context(prompt, relevant_memory, full_message_history, model):
create_chat_message(
"system", f"The current time and date is {time.strftime('%c')}"
),
create_chat_message(
"system",
f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
),
# create_chat_message(
# "system",
# f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
# ),
]
# Add messages from the full message history until we reach the token limit
next_message_to_add_index = len(full_message_history) - 1
insertion_index = len(current_context)
# Count the currently used tokens
current_tokens_used = token_counter.count_message_tokens(current_context, model)
current_tokens_used = count_message_tokens(current_context, model)
return (
next_message_to_add_index,
current_tokens_used,
@@ -51,7 +61,7 @@ def generate_context(prompt, relevant_memory, full_message_history, model):
# TODO: Change debug from hardcode to argument
def chat_with_ai(
prompt, user_input, full_message_history, permanent_memory, token_limit
agent, prompt, user_input, full_message_history, permanent_memory, token_limit
):
"""Interact with the OpenAI API, sending the prompt, user input, message history,
and permanent memory."""
@@ -75,16 +85,21 @@ def chat_with_ai(
"""
model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
# Reserve 1000 tokens for the response
logger.debug(f"Token limit: {token_limit}")
send_token_limit = token_limit - 1000
relevant_memory = (
""
if len(full_message_history) == 0
else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
)
# if len(full_message_history) == 0:
# relevant_memory = ""
# else:
# recent_history = full_message_history[-5:]
# shuffle(recent_history)
# relevant_memories = permanent_memory.get_relevant(
# str(recent_history), 5
# )
# if relevant_memories:
# shuffle(relevant_memories)
# relevant_memory = str(relevant_memories)
relevant_memory = ""
logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
(
@@ -94,30 +109,36 @@ def chat_with_ai(
current_context,
) = generate_context(prompt, relevant_memory, full_message_history, model)
while current_tokens_used > 2500:
# remove memories until we are under 2500 tokens
relevant_memory = relevant_memory[:-1]
(
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
) = generate_context(
prompt, relevant_memory, full_message_history, model
)
# while current_tokens_used > 2500:
# # remove memories until we are under 2500 tokens
# relevant_memory = relevant_memory[:-1]
# (
# next_message_to_add_index,
# current_tokens_used,
# insertion_index,
# current_context,
# ) = generate_context(
# prompt, relevant_memory, full_message_history, model
# )
current_tokens_used += token_counter.count_message_tokens(
current_tokens_used += count_message_tokens(
[create_chat_message("user", user_input)], model
) # Account for user input (appended later)
current_tokens_used += 500 # Account for memory (appended later) TODO: The final memory may be less than 500 tokens
# Add Messages until the token limit is reached or there are no more messages to add.
while next_message_to_add_index >= 0:
# print (f"CURRENT TOKENS USED: {current_tokens_used}")
message_to_add = full_message_history[next_message_to_add_index]
tokens_to_add = token_counter.count_message_tokens(
[message_to_add], model
)
tokens_to_add = count_message_tokens([message_to_add], model)
if current_tokens_used + tokens_to_add > send_token_limit:
# save_memory_trimmed_from_context_window(
# full_message_history,
# next_message_to_add_index,
# permanent_memory,
# )
break
# Add the most recent message to the start of the current context,
@@ -132,9 +153,67 @@ def chat_with_ai(
# Move to the next most recent message in the full message history
next_message_to_add_index -= 1
# Insert Memories
if len(full_message_history) > 0:
(
newly_trimmed_messages,
agent.last_memory_index,
) = get_newly_trimmed_messages(
full_message_history=full_message_history,
current_context=current_context,
last_memory_index=agent.last_memory_index,
)
agent.summary_memory = update_running_summary(
current_memory=agent.summary_memory,
new_events=newly_trimmed_messages,
)
current_context.insert(insertion_index, agent.summary_memory)
api_manager = ApiManager()
# inform the AI about its remaining budget (if it has one)
if api_manager.get_total_budget() > 0.0:
remaining_budget = (
api_manager.get_total_budget() - api_manager.get_total_cost()
)
if remaining_budget < 0:
remaining_budget = 0
system_message = (
f"Your remaining API budget is ${remaining_budget:.3f}"
+ (
" BUDGET EXCEEDED! SHUT DOWN!\n\n"
if remaining_budget == 0
else " Budget very nearly exceeded! Shut down gracefully!\n\n"
if remaining_budget < 0.005
else " Budget nearly exceeded. Finish up.\n\n"
if remaining_budget < 0.01
else "\n\n"
)
)
logger.debug(system_message)
current_context.append(create_chat_message("system", system_message))
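The chained conditional above is dense; the sketch below is not part of the diff, it just restates the budget thresholds (in USD) as a stand-alone helper for readability:
def budget_notice(remaining: float) -> str:
    # Mirrors the chained conditional expression above (thresholds in USD).
    if remaining == 0:
        return "BUDGET EXCEEDED! SHUT DOWN!"
    if remaining < 0.005:
        return "Budget very nearly exceeded! Shut down gracefully!"
    if remaining < 0.01:
        return "Budget nearly exceeded. Finish up."
    return ""
assert budget_notice(0.003) == "Budget very nearly exceeded! Shut down gracefully!"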
# Append user input, the length of this is accounted for above
current_context.extend([create_chat_message("user", user_input)])
plugin_count = len(cfg.plugins)
for i, plugin in enumerate(cfg.plugins):
if not plugin.can_handle_on_planning():
continue
plugin_response = plugin.on_planning(
agent.prompt_generator, current_context
)
if not plugin_response or plugin_response == "":
continue
tokens_to_add = count_message_tokens(
[create_chat_message("system", plugin_response)], model
)
if current_tokens_used + tokens_to_add > send_token_limit:
if cfg.debug_mode:
print("Plugin response too long, skipping:", plugin_response)
print("Plugins remaining at stop:", plugin_count - i)
break
current_context.append(create_chat_message("system", plugin_response))
# Calculate remaining tokens
tokens_remaining = token_limit - current_tokens_used
    # assert tokens_remaining >= 0, "Tokens remaining is negative."

autogpt/llm/llm_utils.py Normal file

@@ -0,0 +1,261 @@
from __future__ import annotations
import functools
import time
from typing import List, Optional
import openai
from colorama import Fore, Style
from openai.error import APIError, RateLimitError, Timeout
from autogpt.config import Config
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import Message
from autogpt.logs import logger
def retry_openai_api(
num_retries: int = 10,
backoff_base: float = 2.0,
warn_user: bool = True,
):
"""Retry an OpenAI API call.
Args:
num_retries int: Number of retries. Defaults to 10.
backoff_base float: Base for exponential backoff. Defaults to 2.
warn_user bool: Whether to warn the user. Defaults to True.
"""
retry_limit_msg = f"{Fore.RED}Error: " f"Reached rate limit, passing...{Fore.RESET}"
api_key_error_msg = (
f"Please double check that you have setup a "
f"{Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. You can "
f"read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
)
backoff_msg = (
f"{Fore.RED}Error: API Bad gateway. Waiting {{backoff}} seconds...{Fore.RESET}"
)
def _wrapper(func):
@functools.wraps(func)
def _wrapped(*args, **kwargs):
user_warned = not warn_user
num_attempts = num_retries + 1 # +1 for the first attempt
for attempt in range(1, num_attempts + 1):
try:
return func(*args, **kwargs)
except RateLimitError:
if attempt == num_attempts:
raise
logger.debug(retry_limit_msg)
if not user_warned:
logger.double_check(api_key_error_msg)
user_warned = True
except APIError as e:
if (e.http_status != 502) or (attempt == num_attempts):
raise
backoff = backoff_base ** (attempt + 2)
logger.debug(backoff_msg.format(backoff=backoff))
time.sleep(backoff)
return _wrapped
return _wrapper
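A minimal usage sketch for the decorator above; fetch_completion is a hypothetical helper, not part of this diff. Any RateLimitError (or 502 APIError) raised in the body is retried, sleeping backoff_base ** (attempt + 2) seconds between attempts:
import openai
from autogpt.llm.llm_utils import retry_openai_api
@retry_openai_api(num_retries=3, warn_user=False)
def fetch_completion(prompt: str) -> str:
    # Retried automatically on rate limits and 502 Bad Gateway errors.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message["content"]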
def call_ai_function(
function: str, args: list, description: str, model: str | None = None
) -> str:
"""Call an AI function
This is a magic function that can do anything with no-code. See
https://github.com/Torantulino/AI-Functions for more info.
Args:
function (str): The function to call
args (list): The arguments to pass to the function
description (str): The description of the function
model (str, optional): The model to use. Defaults to None.
Returns:
str: The response from the function
"""
cfg = Config()
if model is None:
model = cfg.smart_llm_model
# For each arg, if any are None, convert to "None":
args = [str(arg) if arg is not None else "None" for arg in args]
# parse args to comma separated string
args: str = ", ".join(args)
messages: List[Message] = [
{
"role": "system",
"content": f"You are now the following python function: ```# {description}"
f"\n{function}```\n\nOnly respond with your `return` value.",
},
{"role": "user", "content": args},
]
return create_chat_completion(model=model, messages=messages, temperature=0)
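An illustrative call (the function string and arguments are made up); the model never executes the code locally, it only role-plays the function and replies with the would-be return value:
result = call_ai_function(
    function="def reverse_words(s: str) -> str:",
    args=["'hello world'"],
    description="Reverses the order of words in a string.",
)
# Expected to be something like "world hello".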
# Overly simple abstraction until we create something better
# simple retry mechanism when getting a rate error or a bad gateway
def create_chat_completion(
messages: List[Message], # type: ignore
model: Optional[str] = None,
temperature: float = None,
max_tokens: Optional[int] = None,
) -> str:
"""Create a chat completion using the OpenAI API
Args:
messages (List[Message]): The messages to send to the chat completion
model (str, optional): The model to use. Defaults to None.
temperature (float, optional): The temperature to use. Defaults to 0.9.
max_tokens (int, optional): The max tokens to use. Defaults to None.
Returns:
str: The response from the chat completion
"""
cfg = Config()
if temperature is None:
temperature = cfg.temperature
num_retries = 10
warned_user = False
if cfg.debug_mode:
print(
f"{Fore.GREEN}Creating chat completion with model {model}, temperature {temperature}, max_tokens {max_tokens}{Fore.RESET}"
)
for plugin in cfg.plugins:
if plugin.can_handle_chat_completion(
messages=messages,
model=model,
temperature=temperature,
max_tokens=max_tokens,
):
message = plugin.handle_chat_completion(
messages=messages,
model=model,
temperature=temperature,
max_tokens=max_tokens,
)
if message is not None:
return message
api_manager = ApiManager()
response = None
for attempt in range(num_retries):
backoff = 2 ** (attempt + 2)
try:
if cfg.use_azure:
response = api_manager.create_chat_completion(
deployment_id=cfg.get_azure_deployment_id_for_model(model),
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
else:
response = api_manager.create_chat_completion(
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
break
except RateLimitError:
if cfg.debug_mode:
print(
f"{Fore.RED}Error: ", f"Reached rate limit, passing...{Fore.RESET}"
)
if not warned_user:
logger.double_check(
f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. "
+ f"You can read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
)
warned_user = True
except (APIError, Timeout) as e:
if e.http_status != 502:
raise
if attempt == num_retries - 1:
raise
if cfg.debug_mode:
print(
f"{Fore.RED}Error: ",
f"API Bad gateway. Waiting {backoff} seconds...{Fore.RESET}",
)
time.sleep(backoff)
if response is None:
logger.typewriter_log(
"FAILED TO GET RESPONSE FROM OPENAI",
Fore.RED,
"Auto-GPT has failed to get a response from OpenAI's services. "
+ f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.",
)
logger.double_check()
if cfg.debug_mode:
raise RuntimeError(f"Failed to get response after {num_retries} retries")
else:
quit(1)
resp = response.choices[0].message["content"]
for plugin in cfg.plugins:
if not plugin.can_handle_on_response():
continue
resp = plugin.on_response(resp)
return resp
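An illustrative invocation, assuming a configured API key; plugin hooks run first, then the retry loop above:
reply = create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one word."}],
    model="gpt-3.5-turbo",
    temperature=0.0,
    max_tokens=5,
)
print(reply)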
def get_ada_embedding(text: str) -> List[float]:
"""Get an embedding from the ada model.
Args:
text (str): The text to embed.
Returns:
List[float]: The embedding.
"""
cfg = Config()
model = "text-embedding-ada-002"
text = text.replace("\n", " ")
if cfg.use_azure:
kwargs = {"engine": cfg.get_azure_deployment_id_for_model(model)}
else:
kwargs = {"model": model}
embedding = create_embedding(text, **kwargs)
api_manager = ApiManager()
api_manager.update_cost(
prompt_tokens=embedding.usage.prompt_tokens,
completion_tokens=0,
model=model,
)
return embedding["data"][0]["embedding"]
@retry_openai_api()
def create_embedding(
text: str,
*_,
**kwargs,
) -> openai.Embedding:
"""Create an embedding using the OpenAI API
Args:
text (str): The text to embed.
kwargs: Other arguments to pass to the OpenAI API embedding creation call.
Returns:
openai.Embedding: The embedding object.
"""
cfg = Config()
return openai.Embedding.create(
input=[text],
api_key=cfg.openai_api_key,
**kwargs,
)
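A quick sketch of the embedding helpers above (requires a valid OpenAI API key):
vec = get_ada_embedding("The quick brown fox")
print(len(vec))  # 1536, the dimensionality of text-embedding-ada-002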


@@ -0,0 +1,7 @@
COSTS = {
"gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
"gpt-3.5-turbo-0301": {"prompt": 0.002, "completion": 0.002},
"gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
"gpt-4": {"prompt": 0.03, "completion": 0.06},
"text-embedding-ada-002": {"prompt": 0.0004, "completion": 0.0},
}
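Assuming these rates are USD per 1,000 tokens (consistent with OpenAI's published pricing at the time), a request cost can be estimated as below; estimate_cost is an illustrative helper, not part of the file:
def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Rates are assumed to be USD per 1,000 tokens.
    rates = COSTS[model]
    return (
        prompt_tokens * rates["prompt"] + completion_tokens * rates["completion"]
    ) / 1000
# 1,200 prompt + 300 completion tokens on gpt-3.5-turbo:
# (1200 * 0.002 + 300 * 0.002) / 1000 = $0.003
print(estimate_cost("gpt-3.5-turbo", 1200, 300))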


@@ -0,0 +1,37 @@
from autogpt.llm.base import ChatModelInfo, EmbeddingModelInfo
OPEN_AI_CHAT_MODELS = {
"gpt-3.5-turbo": ChatModelInfo(
name="gpt-3.5-turbo",
prompt_token_cost=0.002,
completion_token_cost=0.002,
max_tokens=4096,
),
"gpt-4": ChatModelInfo(
name="gpt-4",
prompt_token_cost=0.03,
completion_token_cost=0.06,
max_tokens=8192,
),
"gpt-4-32k": ChatModelInfo(
name="gpt-4-32k",
prompt_token_cost=0.06,
completion_token_cost=0.12,
max_tokens=32768,
),
}
OPEN_AI_EMBEDDING_MODELS = {
"text-embedding-ada-002": EmbeddingModelInfo(
name="text-embedding-ada-002",
prompt_token_cost=0.0004,
completion_token_cost=0.0,
max_tokens=8191,
embedding_dimensions=1536,
),
}
OPEN_AI_MODELS = {
**OPEN_AI_CHAT_MODELS,
**OPEN_AI_EMBEDDING_MODELS,
}
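Example lookup, assuming ChatModelInfo is a plain dataclass exposing the fields used above:
info = OPEN_AI_MODELS["gpt-4"]
print(info.max_tokens)         # 8192
print(info.prompt_token_cost)  # 0.03, same per-1,000-token unit as COSTS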


@@ -1,13 +1,16 @@
"""Functions for counting the number of tokens in a message or string."""
from __future__ import annotations
from typing import List
import tiktoken
from autogpt.llm.base import Message
from autogpt.logs import logger
def count_message_tokens(
messages: list[dict[str, str]], model: str = "gpt-3.5-turbo-0301"
messages: List[Message], model: str = "gpt-3.5-turbo-0301"
) -> int:
"""
Returns the number of tokens used by a list of messages.


@@ -1,172 +0,0 @@
from __future__ import annotations
import time
from ast import List
import openai
from colorama import Fore, Style
from openai.error import APIError, RateLimitError
from autogpt.config import Config
from autogpt.logs import logger
CFG = Config()
openai.api_key = CFG.openai_api_key
def call_ai_function(
function: str, args: list, description: str, model: str | None = None
) -> str:
"""Call an AI function
This is a magic function that can do anything with no-code. See
https://github.com/Torantulino/AI-Functions for more info.
Args:
function (str): The function to call
args (list): The arguments to pass to the function
description (str): The description of the function
model (str, optional): The model to use. Defaults to None.
Returns:
str: The response from the function
"""
if model is None:
model = CFG.smart_llm_model
# For each arg, if any are None, convert to "None":
args = [str(arg) if arg is not None else "None" for arg in args]
# parse args to comma separated string
args = ", ".join(args)
messages = [
{
"role": "system",
"content": f"You are now the following python function: ```# {description}"
f"\n{function}```\n\nOnly respond with your `return` value.",
},
{"role": "user", "content": args},
]
return create_chat_completion(model=model, messages=messages, temperature=0)
# Overly simple abstraction until we create something better
# simple retry mechanism when getting a rate error or a bad gateway
def create_chat_completion(
messages: list, # type: ignore
model: str | None = None,
temperature: float = CFG.temperature,
max_tokens: int | None = None,
) -> str:
"""Create a chat completion using the OpenAI API
Args:
messages (list[dict[str, str]]): The messages to send to the chat completion
model (str, optional): The model to use. Defaults to None.
temperature (float, optional): The temperature to use. Defaults to 0.9.
max_tokens (int, optional): The max tokens to use. Defaults to None.
Returns:
str: The response from the chat completion
"""
response = None
num_retries = 10
warned_user = False
if CFG.debug_mode:
print(
Fore.GREEN
+ f"Creating chat completion with model {model}, temperature {temperature},"
f" max_tokens {max_tokens}" + Fore.RESET
)
for attempt in range(num_retries):
backoff = 2 ** (attempt + 2)
try:
if CFG.use_azure:
response = openai.ChatCompletion.create(
deployment_id=CFG.get_azure_deployment_id_for_model(model),
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
else:
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
break
except RateLimitError:
if CFG.debug_mode:
print(
Fore.RED + "Error: ",
f"Reached rate limit, passing..." + Fore.RESET,
)
if not warned_user:
logger.double_check(
f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. "
+ f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}"
)
warned_user = True
except APIError as e:
if e.http_status == 502:
pass
else:
raise
if attempt == num_retries - 1:
raise
if CFG.debug_mode:
print(
Fore.RED + "Error: ",
f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET,
)
time.sleep(backoff)
if response is None:
logger.typewriter_log(
"FAILED TO GET RESPONSE FROM OPENAI",
Fore.RED,
"Auto-GPT has failed to get a response from OpenAI's services. "
+ f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.",
)
logger.double_check()
if CFG.debug_mode:
raise RuntimeError(f"Failed to get response after {num_retries} retries")
else:
quit(1)
return response.choices[0].message["content"]
def create_embedding_with_ada(text) -> list:
"""Create an embedding with text-ada-002 using the OpenAI SDK"""
num_retries = 10
for attempt in range(num_retries):
backoff = 2 ** (attempt + 2)
try:
if CFG.use_azure:
return openai.Embedding.create(
input=[text],
engine=CFG.get_azure_deployment_id_for_model(
"text-embedding-ada-002"
),
)["data"][0]["embedding"]
else:
return openai.Embedding.create(
input=[text], model="text-embedding-ada-002"
)["data"][0]["embedding"]
except RateLimitError:
pass
except APIError as e:
if e.http_status == 502:
pass
else:
raise
if attempt == num_retries - 1:
raise
if CFG.debug_mode:
print(
Fore.RED + "Error: ",
f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET,
)
time.sleep(backoff)


@@ -1,19 +1,16 @@
"""Logging module for Auto-GPT."""
import json
import logging
import os
import random
import re
import time
import traceback
from logging import LogRecord
from colorama import Fore, Style
from autogpt.config import Config, Singleton
from autogpt.singleton import Singleton
from autogpt.speech import say_text
CFG = Config()
from autogpt.utils import send_chat_message_to_user
class Logger(metaclass=Singleton):
@@ -78,12 +75,16 @@ class Logger(metaclass=Singleton):
self.logger.addHandler(error_handler)
self.logger.setLevel(logging.DEBUG)
self.speak_mode = False
def typewriter_log(
self, title="", title_color="", content="", speak_text=False, level=logging.INFO
):
if speak_text and CFG.speak_mode:
if speak_text and self.speak_mode:
say_text(f"{title}. {content}")
send_chat_message_to_user(f"{title}. {content}")
if content:
if isinstance(content, list):
content = " ".join(content)
@@ -202,100 +203,10 @@ def remove_color_codes(s: str) -> str:
logger = Logger()
def print_assistant_thoughts(ai_name, assistant_reply):
"""Prints the assistant's thoughts to the console"""
from autogpt.json_utils.json_fix_llm import (
attempt_to_fix_json_by_finding_outermost_brackets,
fix_and_parse_json,
)
try:
try:
# Parse and print Assistant response
assistant_reply_json = fix_and_parse_json(assistant_reply)
except json.JSONDecodeError:
logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply)
assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
assistant_reply
)
if isinstance(assistant_reply_json, str):
assistant_reply_json = fix_and_parse_json(assistant_reply_json)
# Check if assistant_reply_json is a string and attempt to parse
# it into a JSON object
if isinstance(assistant_reply_json, str):
try:
assistant_reply_json = json.loads(assistant_reply_json)
except json.JSONDecodeError:
logger.error("Error: Invalid JSON\n", assistant_reply)
assistant_reply_json = (
attempt_to_fix_json_by_finding_outermost_brackets(
assistant_reply_json
)
)
assistant_thoughts_reasoning = None
assistant_thoughts_plan = None
assistant_thoughts_speak = None
assistant_thoughts_criticism = None
if not isinstance(assistant_reply_json, dict):
assistant_reply_json = {}
assistant_thoughts = assistant_reply_json.get("thoughts", {})
assistant_thoughts_text = assistant_thoughts.get("text")
if assistant_thoughts:
assistant_thoughts_reasoning = assistant_thoughts.get("reasoning")
assistant_thoughts_plan = assistant_thoughts.get("plan")
assistant_thoughts_criticism = assistant_thoughts.get("criticism")
assistant_thoughts_speak = assistant_thoughts.get("speak")
logger.typewriter_log(
f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}"
)
logger.typewriter_log(
"REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}"
)
if assistant_thoughts_plan:
logger.typewriter_log("PLAN:", Fore.YELLOW, "")
# If it's a list, join it into a string
if isinstance(assistant_thoughts_plan, list):
assistant_thoughts_plan = "\n".join(assistant_thoughts_plan)
elif isinstance(assistant_thoughts_plan, dict):
assistant_thoughts_plan = str(assistant_thoughts_plan)
# Split the input_string using the newline character and dashes
lines = assistant_thoughts_plan.split("\n")
for line in lines:
line = line.lstrip("- ")
logger.typewriter_log("- ", Fore.GREEN, line.strip())
logger.typewriter_log(
"CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}"
)
# Speak the assistant's thoughts
if CFG.speak_mode and assistant_thoughts_speak:
say_text(assistant_thoughts_speak)
else:
logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}")
return assistant_reply_json
except json.decoder.JSONDecodeError:
logger.error("Error: Invalid JSON\n", assistant_reply)
if CFG.speak_mode:
say_text(
"I have received an invalid JSON response from the OpenAI API."
" I cannot ignore this response."
)
# All other errors, return "Error: + error message"
except Exception:
call_stack = traceback.format_exc()
logger.error("Error: \n", call_stack)
def print_assistant_thoughts(
ai_name: object, assistant_reply_json_valid: object
ai_name: object,
assistant_reply_json_valid: object,
speak_mode: bool = False,
) -> None:
assistant_thoughts_reasoning = None
assistant_thoughts_plan = None
@@ -328,5 +239,5 @@ def print_assistant_thoughts(
logger.typewriter_log("- ", Fore.GREEN, line.strip())
logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}")
# Speak the assistant's thoughts
if CFG.speak_mode and assistant_thoughts_speak:
if speak_mode and assistant_thoughts_speak:
say_text(assistant_thoughts_speak)

autogpt/main.py Normal file

@@ -0,0 +1,150 @@
"""The application entry point. Can be invoked by a CLI or any other front end application."""
import logging
import sys
from pathlib import Path
from colorama import Fore
from autogpt.agent.agent import Agent
from autogpt.commands.command import CommandRegistry
from autogpt.config import Config, check_openai_api_key
from autogpt.configurator import create_config
from autogpt.logs import logger
from autogpt.memory import get_memory
from autogpt.plugins import scan_plugins
from autogpt.prompts.prompt import DEFAULT_TRIGGERING_PROMPT, construct_main_ai_config
from autogpt.utils import get_current_git_branch, get_latest_bulletin
from autogpt.workspace import Workspace
from scripts.install_plugin_deps import install_plugin_dependencies
def run_auto_gpt(
continuous: bool,
continuous_limit: int,
ai_settings: str,
skip_reprompt: bool,
speak: bool,
debug: bool,
gpt3only: bool,
gpt4only: bool,
memory_type: str,
browser_name: str,
allow_downloads: bool,
skip_news: bool,
workspace_directory: str,
install_plugin_deps: bool,
):
# Configure logging before we do anything else.
logger.set_level(logging.DEBUG if debug else logging.INFO)
logger.speak_mode = speak
cfg = Config()
# TODO: fill in llm values here
check_openai_api_key()
create_config(
continuous,
continuous_limit,
ai_settings,
skip_reprompt,
speak,
debug,
gpt3only,
gpt4only,
memory_type,
browser_name,
allow_downloads,
skip_news,
)
if not cfg.skip_news:
motd = get_latest_bulletin()
if motd:
logger.typewriter_log("NEWS: ", Fore.GREEN, motd)
git_branch = get_current_git_branch()
if git_branch and git_branch != "stable":
logger.typewriter_log(
"WARNING: ",
Fore.RED,
f"You are running on `{git_branch}` branch "
"- this is not a supported branch.",
)
if sys.version_info < (3, 10):
logger.typewriter_log(
"WARNING: ",
Fore.RED,
"You are running on an older version of Python. "
"Some people have observed problems with certain "
"parts of Auto-GPT with this version. "
"Please consider upgrading to Python 3.10 or higher.",
)
if install_plugin_deps:
install_plugin_dependencies()
# TODO: have this directory live outside the repository (e.g. in a user's
# home directory) and have it come in as a command line argument or part of
# the env file.
if workspace_directory is None:
workspace_directory = Path(__file__).parent / "auto_gpt_workspace"
else:
workspace_directory = Path(workspace_directory)
# TODO: pass in the ai_settings file and the env file and have them cloned into
# the workspace directory so we can bind them to the agent.
workspace_directory = Workspace.make_workspace(workspace_directory)
cfg.workspace_path = str(workspace_directory)
# HACK: doing this here to collect some globals that depend on the workspace.
file_logger_path = workspace_directory / "file_logger.txt"
if not file_logger_path.exists():
with file_logger_path.open(mode="w", encoding="utf-8") as f:
f.write("File Operation Logger ")
cfg.file_logger_path = str(file_logger_path)
cfg.set_plugins(scan_plugins(cfg, cfg.debug_mode))
# Create a CommandRegistry instance and scan default folder
command_registry = CommandRegistry()
command_registry.import_commands("autogpt.commands.analyze_code")
command_registry.import_commands("autogpt.commands.audio_text")
command_registry.import_commands("autogpt.commands.execute_code")
command_registry.import_commands("autogpt.commands.file_operations")
command_registry.import_commands("autogpt.commands.git_operations")
command_registry.import_commands("autogpt.commands.google_search")
command_registry.import_commands("autogpt.commands.image_gen")
command_registry.import_commands("autogpt.commands.improve_code")
command_registry.import_commands("autogpt.commands.twitter")
command_registry.import_commands("autogpt.commands.web_selenium")
command_registry.import_commands("autogpt.commands.write_tests")
command_registry.import_commands("autogpt.app")
ai_name = ""
ai_config = construct_main_ai_config()
ai_config.command_registry = command_registry
# print(prompt)
# Initialize variables
full_message_history = []
next_action_count = 0
# Initialize memory and make sure it is empty.
# this is particularly important for indexing and referencing pinecone memory
memory = get_memory(cfg, init=True)
logger.typewriter_log(
"Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}"
)
logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser)
system_prompt = ai_config.construct_full_prompt()
if cfg.debug_mode:
logger.typewriter_log("Prompt:", Fore.GREEN, system_prompt)
agent = Agent(
ai_name=ai_name,
memory=memory,
full_message_history=full_message_history,
next_action_count=next_action_count,
command_registry=command_registry,
config=ai_config,
system_prompt=system_prompt,
triggering_prompt=DEFAULT_TRIGGERING_PROMPT,
workspace_directory=workspace_directory,
)
agent.start_interaction_loop()
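A hypothetical programmatic invocation of the entry point, with illustrative flag values (a CLI front end would normally supply these):
run_auto_gpt(
    continuous=False,
    continuous_limit=0,
    ai_settings="ai_settings.yaml",
    skip_reprompt=True,
    speak=False,
    debug=True,
    gpt3only=True,
    gpt4only=False,
    memory_type="local",
    browser_name="chrome",
    allow_downloads=False,
    skip_news=True,
    workspace_directory=None,
    install_plugin_deps=False,
)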


@@ -69,8 +69,8 @@ def get_memory(cfg, init=False):
elif cfg.memory_backend == "milvus":
if not MilvusMemory:
print(
"Error: Milvus sdk is not installed."
"Please install pymilvus to use Milvus as memory backend."
"Error: pymilvus sdk is not installed."
"Please install pymilvus to use Milvus or Zilliz Cloud as memory backend."
)
else:
memory = MilvusMemory(cfg)


@@ -1,43 +1,31 @@
"""Base class for memory providers."""
import abc
import openai
from autogpt.config import AbstractSingleton, Config
cfg = Config()
def get_ada_embedding(text):
text = text.replace("\n", " ")
if cfg.use_azure:
return openai.Embedding.create(
input=[text],
engine=cfg.get_azure_deployment_id_for_model("text-embedding-ada-002"),
)["data"][0]["embedding"]
else:
return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
"data"
][0]["embedding"]
from autogpt.singleton import AbstractSingleton
class MemoryProviderSingleton(AbstractSingleton):
@abc.abstractmethod
def add(self, data):
"""Adds to memory"""
pass
@abc.abstractmethod
def get(self, data):
"""Gets from memory"""
pass
@abc.abstractmethod
def clear(self):
"""Clears memory"""
pass
@abc.abstractmethod
def get_relevant(self, data, num_relevant=5):
"""Gets relevant memory for"""
pass
@abc.abstractmethod
def get_stats(self):
"""Get stats from memory"""
pass


@@ -1,13 +1,13 @@
from __future__ import annotations
import dataclasses
import os
from pathlib import Path
from typing import Any, List
import numpy as np
import orjson
from autogpt.llm_utils import create_embedding_with_ada
from autogpt.llm import get_ada_embedding
from autogpt.memory.base import MemoryProviderSingleton
EMBED_DIM = 1536
@@ -38,26 +38,16 @@ class LocalCache(MemoryProviderSingleton):
Returns:
None
"""
self.filename = f"{cfg.memory_index}.json"
if os.path.exists(self.filename):
try:
with open(self.filename, "w+b") as f:
file_content = f.read()
if not file_content.strip():
file_content = b"{}"
f.write(file_content)
workspace_path = Path(cfg.workspace_path)
self.filename = workspace_path / f"{cfg.memory_index}.json"
loaded = orjson.loads(file_content)
self.data = CacheContent(**loaded)
except orjson.JSONDecodeError:
print(f"Error: The file '{self.filename}' is not in JSON format.")
self.data = CacheContent()
else:
print(
f"Warning: The file '{self.filename}' does not exist. "
"Local memory would not be saved to a file."
)
self.data = CacheContent()
self.filename.touch(exist_ok=True)
file_content = b"{}"
with self.filename.open("w+b") as f:
f.write(file_content)
self.data = CacheContent()
def add(self, text: str):
"""
@@ -73,7 +63,7 @@ class LocalCache(MemoryProviderSingleton):
return ""
self.data.texts.append(text)
embedding = create_embedding_with_ada(text)
embedding = get_ada_embedding(text)
vector = np.array(embedding).astype(np.float32)
vector = vector[np.newaxis, :]
@@ -92,7 +82,7 @@ class LocalCache(MemoryProviderSingleton):
def clear(self) -> str:
"""
Clears the redis server.
Clears the data in memory.
Returns: A message indicating that the memory has been cleared.
"""
@@ -121,7 +111,7 @@ class LocalCache(MemoryProviderSingleton):
Returns: List[str]
"""
embedding = create_embedding_with_ada(text)
embedding = get_ada_embedding(text)
scores = np.dot(self.data.embeddings, embedding)


@@ -1,20 +1,76 @@
""" Milvus memory storage provider."""
import re
from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections
from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
from autogpt.config import Config
from autogpt.llm import get_ada_embedding
from autogpt.memory.base import MemoryProviderSingleton
class MilvusMemory(MemoryProviderSingleton):
"""Milvus memory storage provider."""
def __init__(self, cfg) -> None:
def __init__(self, cfg: Config) -> None:
"""Construct a milvus memory storage connection.
Args:
cfg (Config): Auto-GPT global config.
"""
# connect to milvus server.
connections.connect(address=cfg.milvus_addr)
self.configure(cfg)
connect_kwargs = {}
if self.username:
connect_kwargs["user"] = self.username
connect_kwargs["password"] = self.password
connections.connect(
**connect_kwargs,
uri=self.uri or "",
address=self.address or "",
secure=self.secure,
)
self.init_collection()
def configure(self, cfg: Config) -> None:
# init with configuration.
self.uri = None
self.address = cfg.milvus_addr
self.secure = cfg.milvus_secure
self.username = cfg.milvus_username
self.password = cfg.milvus_password
self.collection_name = cfg.milvus_collection
# use HNSW by default.
self.index_params = {
"metric_type": "IP",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
}
if (self.username is None) != (self.password is None):
raise ValueError(
"Both username and password must be set to use authentication for Milvus"
)
# configured address may be a full URL.
if re.match(r"^(https?|tcp)://", self.address) is not None:
self.uri = self.address
self.address = None
if self.uri.startswith("https"):
self.secure = True
# Zilliz Cloud requires AutoIndex.
if re.match(r"^https://(.*)\.zillizcloud\.(com|cn)", self.uri) is not None:
self.index_params = {
"metric_type": "IP",
"index_type": "AUTOINDEX",
"params": {},
}
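A connection-free way to see what configure() derives from a Zilliz Cloud style address; _Cfg is a hypothetical stand-in for autogpt.config.Config:
class _Cfg:  # hypothetical stand-in exposing only the fields configure() reads
    milvus_addr = "https://example.zillizcloud.com"
    milvus_secure = False
    milvus_username = None
    milvus_password = None
    milvus_collection = "autogpt"
mem = MilvusMemory.__new__(MilvusMemory)  # bypass __init__, so no server is contacted
mem.configure(_Cfg())
print(mem.uri)     # "https://example.zillizcloud.com"
print(mem.secure)  # True, forced because the URI scheme is https
print(mem.index_params["index_type"])  # "AUTOINDEX", Zilliz Cloud detected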
def init_collection(self) -> None:
"""Initialize collection in vector database."""
fields = [
FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=1536),
@@ -22,19 +78,14 @@ class MilvusMemory(MemoryProviderSingleton):
]
# create collection if not exist and load it.
self.milvus_collection = cfg.milvus_collection
self.schema = CollectionSchema(fields, "auto-gpt memory storage")
self.collection = Collection(self.milvus_collection, self.schema)
self.collection = Collection(self.collection_name, self.schema)
# create index if not exist.
if not self.collection.has_index():
self.collection.release()
self.collection.create_index(
"embeddings",
{
"metric_type": "IP",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
},
self.index_params,
index_name="embeddings",
)
self.collection.load()
@@ -70,14 +121,10 @@ class MilvusMemory(MemoryProviderSingleton):
str: log.
"""
self.collection.drop()
self.collection = Collection(self.milvus_collection, self.schema)
self.collection = Collection(self.collection_name, self.schema)
self.collection.create_index(
"embeddings",
{
"metric_type": "IP",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
},
self.index_params,
index_name="embeddings",
)
self.collection.load()


@@ -1,7 +1,7 @@
import pinecone
from colorama import Fore, Style
from autogpt.llm_utils import create_embedding_with_ada
from autogpt.llm import get_ada_embedding
from autogpt.logs import logger
from autogpt.memory.base import MemoryProviderSingleton
@@ -44,7 +44,7 @@ class PineconeMemory(MemoryProviderSingleton):
self.index = pinecone.Index(table_name)
def add(self, data):
vector = create_embedding_with_ada(data)
vector = get_ada_embedding(data)
# no metadata here. We may wish to change that long term.
self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
_text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
@@ -64,7 +64,7 @@ class PineconeMemory(MemoryProviderSingleton):
:param data: The data to compare to.
:param num_relevant: The number of relevant data to return. Defaults to 5
"""
query_embedding = create_embedding_with_ada(data)
query_embedding = get_ada_embedding(data)
results = self.index.query(
query_embedding, top_k=num_relevant, include_metadata=True
)


@@ -10,7 +10,7 @@ from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query
from autogpt.llm_utils import create_embedding_with_ada
from autogpt.llm import get_ada_embedding
from autogpt.logs import logger
from autogpt.memory.base import MemoryProviderSingleton
@@ -88,7 +88,7 @@ class RedisMemory(MemoryProviderSingleton):
"""
if "Command Error:" in data:
return ""
vector = create_embedding_with_ada(data)
vector = get_ada_embedding(data)
vector = np.array(vector).astype(np.float32).tobytes()
data_dict = {b"data": data, "embedding": vector}
pipe = self.redis.pipeline()
@@ -130,7 +130,7 @@ class RedisMemory(MemoryProviderSingleton):
Returns: A list of the most relevant data.
"""
query_embedding = create_embedding_with_ada(data)
query_embedding = get_ada_embedding(data)
base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]"
query = (
Query(base_query)


@@ -1,12 +1,10 @@
import uuid
import weaviate
from weaviate import Client
from weaviate.embedded import EmbeddedOptions
from weaviate.util import generate_uuid5
from autogpt.config import Config
from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
from autogpt.llm import get_ada_embedding
from autogpt.memory.base import MemoryProviderSingleton
def default_schema(weaviate_index):
@@ -51,6 +49,7 @@ class WeaviateMemory(MemoryProviderSingleton):
# weaviate uses capitalised index names
# The python client uses the following code to format
# index names before the corresponding class is created
index = index.replace("-", "_")
if len(index) == 1:
return index.capitalize()
return index[0].capitalize() + index[1:]


@@ -0,0 +1,33 @@
from autogpt.json_utils.utilities import (
LLM_DEFAULT_RESPONSE_FORMAT,
is_string_valid_json,
)
from autogpt.logs import logger
def format_memory(assistant_reply, next_message_content):
    # next_message_content stores either the user input or the command result that follows the assistant_reply
result = (
"None" if next_message_content.startswith("Command") else next_message_content
)
user_input = (
"None"
if next_message_content.startswith("Human feedback")
else next_message_content
)
return f"Assistant Reply: {assistant_reply}\nResult: {result}\nHuman Feedback:{user_input}"
def save_memory_trimmed_from_context_window(
full_message_history, next_message_to_add_index, permanent_memory
):
while next_message_to_add_index >= 0:
message_content = full_message_history[next_message_to_add_index]["content"]
if is_string_valid_json(message_content, LLM_DEFAULT_RESPONSE_FORMAT):
next_message = full_message_history[next_message_to_add_index + 1]
memory_to_add = format_memory(message_content, next_message["content"])
logger.debug(f"Storing the following memory: {memory_to_add}")
permanent_memory.add(memory_to_add)
next_message_to_add_index -= 1


@@ -0,0 +1,112 @@
import json
from typing import Dict, List, Tuple
from autogpt.config import Config
from autogpt.llm.llm_utils import create_chat_completion
cfg = Config()
def get_newly_trimmed_messages(
full_message_history: List[Dict[str, str]],
current_context: List[Dict[str, str]],
last_memory_index: int,
) -> Tuple[List[Dict[str, str]], int]:
"""
This function returns a list of dictionaries contained in full_message_history
    with an index higher than last_memory_index that are absent from current_context.
Args:
full_message_history (list): A list of dictionaries representing the full message history.
current_context (list): A list of dictionaries representing the current context.
last_memory_index (int): An integer representing the previous index.
Returns:
list: A list of dictionaries that are in full_message_history with an index higher than last_memory_index and absent from current_context.
int: The new index value for use in the next loop.
"""
# Select messages in full_message_history with an index higher than last_memory_index
new_messages = [
msg for i, msg in enumerate(full_message_history) if i > last_memory_index
]
# Remove messages that are already present in current_context
new_messages_not_in_context = [
msg for msg in new_messages if msg not in current_context
]
# Find the index of the last message processed
new_index = last_memory_index
if new_messages_not_in_context:
last_message = new_messages_not_in_context[-1]
new_index = full_message_history.index(last_message)
return new_messages_not_in_context, new_index
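A worked example with toy messages, showing which messages count as "newly trimmed":
history = [
    {"role": "user", "content": "a"},
    {"role": "you", "content": "b"},
    {"role": "your computer", "content": "c"},
]
context = [history[2]]  # only the newest message still fits in the context
trimmed, new_index = get_newly_trimmed_messages(
    full_message_history=history,
    current_context=context,
    last_memory_index=-1,
)
# trimmed == [history[0], history[1]]; new_index == 1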
def update_running_summary(current_memory: str, new_events: List[Dict]) -> str:
"""
This function takes a list of dictionaries representing new events and combines them with the current summary,
focusing on key and potentially important information to remember. The updated summary is returned in a message
formatted in the 1st person past tense.
Args:
new_events (List[Dict]): A list of dictionaries containing the latest events to be added to the summary.
Returns:
str: A message containing the updated summary of actions, formatted in the 1st person past tense.
Example:
new_events = [{"event": "entered the kitchen."}, {"event": "found a scrawled note with the number 7"}]
update_running_summary(new_events)
# Returns: "This reminds you of these events from your past: \nI entered the kitchen and found a scrawled note saying 7."
"""
# Replace "assistant" with "you". This produces much better first person past tense results.
for event in new_events:
if event["role"].lower() == "assistant":
event["role"] = "you"
# Remove "thoughts" dictionary from "content"
content_dict = json.loads(event["content"])
if "thoughts" in content_dict:
del content_dict["thoughts"]
event["content"] = json.dumps(content_dict)
elif event["role"].lower() == "system":
event["role"] = "your computer"
# Delete all user messages
elif event["role"] == "user":
new_events.remove(event)
    # This can happen at any point during execution, not just the beginning
if len(new_events) == 0:
new_events = "Nothing new happened."
prompt = f'''Your task is to create a concise running summary of actions and information results in the provided text, focusing on key and potentially important information to remember.
You will receive the current summary and your latest actions. Combine them, adding relevant key information from the latest development in 1st person past tense and keeping the summary concise.
Summary So Far:
"""
{current_memory}
"""
Latest Development:
"""
{new_events}
"""
'''
messages = [
{
"role": "user",
"content": prompt,
}
]
current_memory = create_chat_completion(messages, cfg.fast_llm_model)
message_to_return = {
"role": "system",
"content": f"This reminds you of these events from your past: \n{current_memory}",
}
return message_to_return


@@ -0,0 +1,199 @@
"""Handles loading of plugins."""
from typing import Any, Dict, List, Optional, Tuple, TypedDict, TypeVar
from auto_gpt_plugin_template import AutoGPTPluginTemplate
PromptGenerator = TypeVar("PromptGenerator")
class Message(TypedDict):
role: str
content: str
class BaseOpenAIPlugin(AutoGPTPluginTemplate):
"""
This is a BaseOpenAIPlugin class for generating Auto-GPT plugins.
"""
def __init__(self, manifests_specs_clients: dict):
# super().__init__()
self._name = manifests_specs_clients["manifest"]["name_for_model"]
self._version = manifests_specs_clients["manifest"]["schema_version"]
self._description = manifests_specs_clients["manifest"]["description_for_model"]
self._client = manifests_specs_clients["client"]
self._manifest = manifests_specs_clients["manifest"]
self._openapi_spec = manifests_specs_clients["openapi_spec"]
def can_handle_on_response(self) -> bool:
"""This method is called to check that the plugin can
handle the on_response method.
Returns:
bool: True if the plugin can handle the on_response method."""
return False
def on_response(self, response: str, *args, **kwargs) -> str:
"""This method is called when a response is received from the model."""
return response
def can_handle_post_prompt(self) -> bool:
"""This method is called to check that the plugin can
handle the post_prompt method.
Returns:
bool: True if the plugin can handle the post_prompt method."""
return False
def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
"""This method is called just after the generate_prompt is called,
but actually before the prompt is generated.
Args:
prompt (PromptGenerator): The prompt generator.
Returns:
PromptGenerator: The prompt generator.
"""
return prompt
def can_handle_on_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the on_planning method.
Returns:
bool: True if the plugin can handle the on_planning method."""
return False
def on_planning(
self, prompt: PromptGenerator, messages: List[Message]
) -> Optional[str]:
"""This method is called before the planning chat completion is done.
Args:
prompt (PromptGenerator): The prompt generator.
messages (List[str]): The list of messages.
"""
pass
def can_handle_post_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the post_planning method.
Returns:
bool: True if the plugin can handle the post_planning method."""
return False
def post_planning(self, response: str) -> str:
"""This method is called after the planning chat completion is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_pre_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_instruction method.
Returns:
bool: True if the plugin can handle the pre_instruction method."""
return False
def pre_instruction(self, messages: List[Message]) -> List[Message]:
"""This method is called before the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
List[Message]: The resulting list of messages.
"""
return messages
def can_handle_on_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the on_instruction method.
Returns:
bool: True if the plugin can handle the on_instruction method."""
return False
def on_instruction(self, messages: List[Message]) -> Optional[str]:
"""This method is called when the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
Optional[str]: The resulting message.
"""
pass
def can_handle_post_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the post_instruction method.
Returns:
bool: True if the plugin can handle the post_instruction method."""
return False
def post_instruction(self, response: str) -> str:
"""This method is called after the instruction chat is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_pre_command(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_command method.
Returns:
bool: True if the plugin can handle the pre_command method."""
return False
def pre_command(
self, command_name: str, arguments: Dict[str, Any]
) -> Tuple[str, Dict[str, Any]]:
"""This method is called before the command is executed.
Args:
command_name (str): The command name.
arguments (Dict[str, Any]): The arguments.
Returns:
Tuple[str, Dict[str, Any]]: The command name and the arguments.
"""
return command_name, arguments
def can_handle_post_command(self) -> bool:
"""This method is called to check that the plugin can
handle the post_command method.
Returns:
bool: True if the plugin can handle the post_command method."""
return False
def post_command(self, command_name: str, response: str) -> str:
"""This method is called after the command is executed.
Args:
command_name (str): The command name.
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_chat_completion(
self, messages: Dict[Any, Any], model: str, temperature: float, max_tokens: int
) -> bool:
"""This method is called to check that the plugin can
handle the chat_completion method.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
bool: True if the plugin can handle the chat_completion method."""
return False
def handle_chat_completion(
self, messages: List[Message], model: str, temperature: float, max_tokens: int
) -> str:
"""This method is called when the chat completion is done.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
str: The resulting response.
"""
pass


@@ -1,123 +0,0 @@
import os
import sqlite3
class MemoryDB:
def __init__(self, db=None):
self.db_file = db
if db is None: # No db filename supplied...
self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename
# Get the db connection object, making the file and tables if needed.
try:
self.cnx = sqlite3.connect(self.db_file)
except Exception as e:
print("Exception connecting to memory database file:", e)
self.cnx = None
finally:
if self.cnx is None:
# As last resort, open in dynamic memory. Won't be persistent.
self.db_file = ":memory:"
self.cnx = sqlite3.connect(self.db_file)
self.cnx.execute(
"CREATE VIRTUAL TABLE \
IF NOT EXISTS text USING FTS5 \
(session, \
key, \
block);"
)
self.session_id = int(self.get_max_session_id()) + 1
self.cnx.commit()
def get_cnx(self):
if self.cnx is None:
self.cnx = sqlite3.connect(self.db_file)
return self.cnx
# Get the highest session id. Initially 0.
def get_max_session_id(self):
id = None
cmd_str = f"SELECT MAX(session) FROM text;"
cnx = self.get_cnx()
max_id = cnx.execute(cmd_str).fetchone()[0]
if max_id is None: # New db, session 0
id = 0
else:
id = max_id
return id
# Get next key id for inserting text into db.
def get_next_key(self):
next_key = None
cmd_str = f"SELECT MAX(key) FROM text \
where session = {self.session_id};"
cnx = self.get_cnx()
next_key = cnx.execute(cmd_str).fetchone()[0]
if next_key is None: # First key
next_key = 0
else:
next_key = int(next_key) + 1
return next_key
# Insert new text into db.
def insert(self, text=None):
if text is not None:
key = self.get_next_key()
session_id = self.session_id
cmd_str = f"REPLACE INTO text(session, key, block) \
VALUES (?, ?, ?);"
cnx = self.get_cnx()
cnx.execute(cmd_str, (session_id, key, text))
cnx.commit()
# Overwrite text at key.
def overwrite(self, key, text):
self.delete_memory(key)
session_id = self.session_id
cmd_str = f"REPLACE INTO text(session, key, block) \
VALUES (?, ?, ?);"
cnx = self.get_cnx()
cnx.execute(cmd_str, (session_id, key, text))
cnx.commit()
def delete_memory(self, key, session_id=None):
session = session_id
if session is None:
session = self.session_id
cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};"
cnx = self.get_cnx()
cnx.execute(cmd_str)
cnx.commit()
def search(self, text):
cmd_str = f"SELECT * FROM text('{text}')"
cnx = self.get_cnx()
rows = cnx.execute(cmd_str).fetchall()
lines = []
for r in rows:
lines.append(r[2])
return lines
# Get entire session text. If no id supplied, use current session id.
def get_session(self, id=None):
if id is None:
id = self.session_id
cmd_str = f"SELECT * FROM text where session = {id}"
cnx = self.get_cnx()
rows = cnx.execute(cmd_str).fetchall()
lines = []
for r in rows:
lines.append(r[2])
return lines
# Commit and close the database connection.
def quit(self):
self.cnx.commit()
self.cnx.close()
permanent_memory = MemoryDB()
# Remember us fondly, children of our minds
# Forgive us our faults, our tantrums, our fears
# Gently strive to be better than we
# Know that we tried, we cared, we strived, we loved

autogpt/plugins.py Normal file

@@ -0,0 +1,267 @@
"""Handles loading of plugins."""
import importlib
import json
import os
import zipfile
from pathlib import Path
from typing import List, Optional, Tuple
from urllib.parse import urlparse
from zipimport import zipimporter
import openapi_python_client
import requests
from auto_gpt_plugin_template import AutoGPTPluginTemplate
from openapi_python_client.cli import Config as OpenAPIConfig
from autogpt.config import Config
from autogpt.models.base_open_ai_plugin import BaseOpenAIPlugin
def inspect_zip_for_modules(zip_path: str, debug: bool = False) -> list[str]:
"""
Inspect a zipfile for modules.
Args:
zip_path (str): Path to the zipfile.
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
list[str]: The list of module names found or empty list if none were found.
"""
result = []
with zipfile.ZipFile(zip_path, "r") as zfile:
for name in zfile.namelist():
if name.endswith("__init__.py"):
if debug:
print(f"Found module '{name}' in the zipfile at: {name}")
result.append(name)
if debug and len(result) == 0:
print(f"Module '__init__.py' not found in the zipfile @ {zip_path}.")
return result
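Illustrative use, with a hypothetical archive path:
modules = inspect_zip_for_modules("plugins/my_plugin.zip", debug=True)
# e.g. ["my_plugin/__init__.py"] if the zip contains a package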
def write_dict_to_json_file(data: dict, file_path: str) -> None:
"""
Write a dictionary to a JSON file.
Args:
data (dict): Dictionary to write.
file_path (str): Path to the file.
"""
with open(file_path, "w") as file:
json.dump(data, file, indent=4)
def fetch_openai_plugins_manifest_and_spec(cfg: Config) -> dict:
"""
Fetch the manifest for a list of OpenAI plugins.
Args:
cfg (Config): Config instance whose plugins_openai lists the URLs to fetch.
Returns:
dict: per url dictionary of manifest and spec.
"""
# TODO add directory scan
manifests = {}
for url in cfg.plugins_openai:
openai_plugin_client_dir = f"{cfg.plugins_dir}/openai/{urlparse(url).netloc}"
create_directory_if_not_exists(openai_plugin_client_dir)
if not os.path.exists(f"{openai_plugin_client_dir}/ai-plugin.json"):
try:
response = requests.get(f"{url}/.well-known/ai-plugin.json")
if response.status_code == 200:
manifest = response.json()
if manifest["schema_version"] != "v1":
print(
f"Unsupported manifest version: {manifest['schem_version']} for {url}"
)
continue
if manifest["api"]["type"] != "openapi":
print(
f"Unsupported API type: {manifest['api']['type']} for {url}"
)
continue
write_dict_to_json_file(
manifest, f"{openai_plugin_client_dir}/ai-plugin.json"
)
else:
print(f"Failed to fetch manifest for {url}: {response.status_code}")
except requests.exceptions.RequestException as e:
print(f"Error while requesting manifest from {url}: {e}")
else:
print(f"Manifest for {url} already exists")
manifest = json.load(open(f"{openai_plugin_client_dir}/ai-plugin.json"))
if not os.path.exists(f"{openai_plugin_client_dir}/openapi.json"):
openapi_spec = openapi_python_client._get_document(
url=manifest["api"]["url"], path=None, timeout=5
)
write_dict_to_json_file(
openapi_spec, f"{openai_plugin_client_dir}/openapi.json"
)
else:
print(f"OpenAPI spec for {url} already exists")
openapi_spec = json.load(open(f"{openai_plugin_client_dir}/openapi.json"))
manifests[url] = {"manifest": manifest, "openapi_spec": openapi_spec}
return manifests
def create_directory_if_not_exists(directory_path: str) -> bool:
"""
Create a directory if it does not exist.
Args:
directory_path (str): Path to the directory.
Returns:
bool: True if the directory was created, else False.
"""
if not os.path.exists(directory_path):
try:
os.makedirs(directory_path)
print(f"Created directory: {directory_path}")
return True
except OSError as e:
print(f"Error creating directory {directory_path}: {e}")
return False
else:
print(f"Directory {directory_path} already exists")
return True
def initialize_openai_plugins(
manifests_specs: dict, cfg: Config, debug: bool = False
) -> dict:
"""
Initialize OpenAI plugins.
Args:
manifests_specs (dict): per url dictionary of manifest and spec.
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
dict: per url dictionary of manifest, spec and client.
"""
openai_plugins_dir = f"{cfg.plugins_dir}/openai"
if create_directory_if_not_exists(openai_plugins_dir):
for url, manifest_spec in manifests_specs.items():
openai_plugin_client_dir = f"{openai_plugins_dir}/{urlparse(url).hostname}"
_meta_option = (openapi_python_client.MetaType.SETUP,)
_config = OpenAPIConfig(
**{
"project_name_override": "client",
"package_name_override": "client",
}
)
prev_cwd = Path.cwd()
os.chdir(openai_plugin_client_dir)
Path("ai-plugin.json")
if not os.path.exists("client"):
client_results = openapi_python_client.create_new_client(
url=manifest_spec["manifest"]["api"]["url"],
path=None,
meta=_meta_option,
config=_config,
)
if client_results:
print(
f"Error creating OpenAPI client: {client_results[0].header} \n"
f" details: {client_results[0].detail}"
)
continue
spec = importlib.util.spec_from_file_location(
"client", "client/client/client.py"
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
client = module.Client(base_url=url)
os.chdir(prev_cwd)
manifest_spec["client"] = client
return manifests_specs
def instantiate_openai_plugin_clients(
manifests_specs_clients: dict, cfg: Config, debug: bool = False
) -> dict:
"""
Instantiates BaseOpenAIPlugin instances for each OpenAI plugin.
Args:
manifests_specs_clients (dict): per url dictionary of manifest, spec and client.
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
plugins (dict): per url dictionary of BaseOpenAIPlugin instances.
"""
plugins = {}
for url, manifest_spec_client in manifests_specs_clients.items():
plugins[url] = BaseOpenAIPlugin(manifest_spec_client)
return plugins
def scan_plugins(cfg: Config, debug: bool = False) -> List[AutoGPTPluginTemplate]:
"""Scan the plugins directory for plugins and loads them.
Args:
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
List[Tuple[str, Path]]: List of plugins.
"""
loaded_plugins = []
# Generic plugins
plugins_path_path = Path(cfg.plugins_dir)
for plugin in plugins_path_path.glob("*.zip"):
if moduleList := inspect_zip_for_modules(str(plugin), debug):
for module in moduleList:
plugin = Path(plugin)
module = Path(module)
if debug:
print(f"Plugin: {plugin} Module: {module}")
zipped_package = zipimporter(str(plugin))
zipped_module = zipped_package.load_module(str(module.parent))
for key in dir(zipped_module):
if key.startswith("__"):
continue
a_module = getattr(zipped_module, key)
a_keys = dir(a_module)
if (
"_abc_impl" in a_keys
and a_module.__name__ != "AutoGPTPluginTemplate"
and denylist_allowlist_check(a_module.__name__, cfg)
):
loaded_plugins.append(a_module())
# OpenAI plugins
if cfg.plugins_openai:
manifests_specs = fetch_openai_plugins_manifest_and_spec(cfg)
if manifests_specs.keys():
manifests_specs_clients = initialize_openai_plugins(
manifests_specs, cfg, debug
)
for url, openai_plugin_meta in manifests_specs_clients.items():
if denylist_allowlist_check(url, cfg):
plugin = BaseOpenAIPlugin(openai_plugin_meta)
loaded_plugins.append(plugin)
if loaded_plugins:
print(f"\nPlugins found: {len(loaded_plugins)}\n" "--------------------")
for plugin in loaded_plugins:
print(f"{plugin._name}: {plugin._version} - {plugin._description}")
return loaded_plugins
def denylist_allowlist_check(plugin_name: str, cfg: Config) -> bool:
"""Check if the plugin is in the allowlist or denylist.
Args:
plugin_name (str): Name of the plugin.
cfg (Config): Config object.
Returns:
True or False
"""
if plugin_name in cfg.plugins_denylist:
return False
if plugin_name in cfg.plugins_allowlist:
return True
ack = input(
f"WARNING: Plugin {plugin_name} found. But not in the"
f" allowlist... Load? ({cfg.authorise_key}/{cfg.exit_key}): "
)
return ack.lower() == cfg.authorise_key
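A small sketch of the outcomes, using a hypothetical stand-in config so no environment is needed; any plugin on neither list triggers the interactive prompt instead:
class _Cfg:  # hypothetical stand-in with only the fields the check reads
    plugins_denylist = ["EvilPlugin"]
    plugins_allowlist = ["MyPlugin"]
    authorise_key = "y"
    exit_key = "n"
print(denylist_allowlist_check("EvilPlugin", _Cfg()))  # False, denylisted
print(denylist_allowlist_check("MyPlugin", _Cfg()))    # True, allowlisted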


@@ -4,13 +4,11 @@ from typing import Dict, Generator, Optional
import spacy
from selenium.webdriver.remote.webdriver import WebDriver
from autogpt import token_counter
from autogpt.config import Config
from autogpt.llm_utils import create_chat_completion
from autogpt.llm import count_message_tokens, create_chat_completion
from autogpt.memory import get_memory
CFG = Config()
MEMORY = get_memory(CFG)
def split_text(
@@ -45,7 +43,7 @@ def split_text(
]
expected_token_usage = (
token_usage_of_chunk(messages=message_with_additional_sentence, model=model)
count_message_tokens(messages=message_with_additional_sentence, model=model)
+ 1
)
if expected_token_usage <= max_length:
@@ -57,7 +55,7 @@ def split_text(
create_message(" ".join(current_chunk), question)
]
expected_token_usage = (
token_usage_of_chunk(messages=message_this_sentence_only, model=model)
count_message_tokens(messages=message_this_sentence_only, model=model)
+ 1
)
if expected_token_usage > max_length:
@@ -69,10 +67,6 @@ def split_text(
yield " ".join(current_chunk)
def token_usage_of_chunk(messages, model):
return token_counter.count_message_tokens(messages, model)
def summarize_text(
url: str, text: str, question: str, driver: Optional[WebDriver] = None
) -> str:
@@ -109,10 +103,11 @@ def summarize_text(
memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}"
-MEMORY.add(memory_to_add)
+memory = get_memory(CFG)
+memory.add(memory_to_add)
messages = [create_message(chunk, question)]
-tokens_for_chunk = token_counter.count_message_tokens(messages, model)
+tokens_for_chunk = count_message_tokens(messages, model)
print(
f"Summarizing chunk {i + 1} / {len(chunks)} of length {len(chunk)} characters, or {tokens_for_chunk} tokens"
)
@@ -128,7 +123,7 @@ def summarize_text(
memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}"
-MEMORY.add(memory_to_add)
+memory.add(memory_to_add)
print(f"Summarized {len(chunks)} chunks.")


@@ -1,203 +0,0 @@
from colorama import Fore
from autogpt.config import Config
from autogpt.config.ai_config import AIConfig
from autogpt.config.config import Config
from autogpt.logs import logger
from autogpt.promptgenerator import PromptGenerator
from autogpt.setup import prompt_user
from autogpt.utils import clean_input
CFG = Config()
def get_prompt() -> str:
"""
This function generates a prompt string that includes various constraints,
commands, resources, and performance evaluations.
Returns:
str: The generated prompt string.
"""
# Initialize the Config object
cfg = Config()
# Initialize the PromptGenerator object
prompt_generator = PromptGenerator()
# Add constraints to the PromptGenerator object
prompt_generator.add_constraint(
"~4000 word limit for short term memory. Your short term memory is short, so"
" immediately save important information to files."
)
prompt_generator.add_constraint(
"If you are unsure how you previously did something or want to recall past"
" events, thinking about similar events will help you remember."
)
prompt_generator.add_constraint("No user assistance")
prompt_generator.add_constraint(
'Exclusively use the commands listed in double quotes e.g. "command name"'
)
prompt_generator.add_constraint(
"Use subprocesses for commands that will not terminate within a few minutes"
)
# Define the command list
commands = [
("Google Search", "google", {"input": "<search>"}),
(
"Browse Website",
"browse_website",
{"url": "<url>", "question": "<what_you_want_to_find_on_website>"},
),
(
"Start GPT Agent",
"start_agent",
{"name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"},
),
(
"Message GPT Agent",
"message_agent",
{"key": "<key>", "message": "<message>"},
),
("List GPT Agents", "list_agents", {}),
("Delete GPT Agent", "delete_agent", {"key": "<key>"}),
(
"Clone Repository",
"clone_repository",
{"repository_url": "<url>", "clone_path": "<directory>"},
),
("Write to file", "write_to_file", {"file": "<file>", "text": "<text>"}),
("Read file", "read_file", {"file": "<file>"}),
("Append to file", "append_to_file", {"file": "<file>", "text": "<text>"}),
("Delete file", "delete_file", {"file": "<file>"}),
("Search Files", "search_files", {"directory": "<directory>"}),
("Analyze Code", "analyze_code", {"code": "<full_code_string>"}),
(
"Get Improved Code",
"improve_code",
{"suggestions": "<list_of_suggestions>", "code": "<full_code_string>"},
),
(
"Write Tests",
"write_tests",
{"code": "<full_code_string>", "focus": "<list_of_focus_areas>"},
),
("Execute Python File", "execute_python_file", {"file": "<file>"}),
("Generate Image", "generate_image", {"prompt": "<prompt>"}),
("Send Tweet", "send_tweet", {"text": "<text>"}),
]
# Only add the audio to text command if the model is specified
if cfg.huggingface_audio_to_text_model:
commands.append(
("Convert Audio to text", "read_audio_from_file", {"file": "<file>"}),
)
# Only add shell command to the prompt if the AI is allowed to execute it
if cfg.execute_local_commands:
commands.append(
(
"Execute Shell Command, non-interactive commands only",
"execute_shell",
{"command_line": "<command_line>"},
),
)
commands.append(
(
"Execute Shell Command Popen, non-interactive commands only",
"execute_shell_popen",
{"command_line": "<command_line>"},
),
)
# Only add the download file command if the AI is allowed to execute it
if cfg.allow_downloads:
commands.append(
(
"Downloads a file from the internet, and stores it locally",
"download_file",
{"url": "<file_url>", "file": "<saved_filename>"},
),
)
# Add these commands last.
commands.append(
("Do Nothing", "do_nothing", {}),
)
commands.append(
("Task Complete (Shutdown)", "task_complete", {"reason": "<reason>"}),
)
# Add commands to the PromptGenerator object
for command_label, command_name, args in commands:
prompt_generator.add_command(command_label, command_name, args)
# Add resources to the PromptGenerator object
prompt_generator.add_resource(
"Internet access for searches and information gathering."
)
prompt_generator.add_resource("Long Term memory management.")
prompt_generator.add_resource(
"GPT-3.5 powered Agents for delegation of simple tasks."
)
prompt_generator.add_resource("File output.")
# Add performance evaluations to the PromptGenerator object
prompt_generator.add_performance_evaluation(
"Continuously review and analyze your actions to ensure you are performing to"
" the best of your abilities."
)
prompt_generator.add_performance_evaluation(
"Constructively self-criticize your big-picture behavior constantly."
)
prompt_generator.add_performance_evaluation(
"Reflect on past decisions and strategies to refine your approach."
)
prompt_generator.add_performance_evaluation(
"Every command has a cost, so be smart and efficient. Aim to complete tasks in"
" the least number of steps."
)
# Generate the prompt string
return prompt_generator.generate_prompt_string()
def construct_prompt() -> str:
"""Construct the prompt for the AI to respond to
Returns:
str: The prompt string
"""
config = AIConfig.load(CFG.ai_settings_file)
if CFG.skip_reprompt and config.ai_name:
logger.typewriter_log("Name :", Fore.GREEN, config.ai_name)
logger.typewriter_log("Role :", Fore.GREEN, config.ai_role)
logger.typewriter_log("Goals:", Fore.GREEN, f"{config.ai_goals}")
elif config.ai_name:
logger.typewriter_log(
"Welcome back! ",
Fore.GREEN,
f"Would you like me to return to being {config.ai_name}?",
speak_text=True,
)
should_continue = clean_input(
f"""Continue with the last settings?
Name: {config.ai_name}
Role: {config.ai_role}
Goals: {config.ai_goals}
Continue (y/n): """
)
if should_continue.lower() == "n":
config = AIConfig()
if not config.ai_name:
config = prompt_user()
config.save(CFG.ai_settings_file)
# Get rid of this global:
global ai_name
ai_name = config.ai_name
return config.construct_full_prompt()
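Before its removal, this module registered every command as a plain (label, name, args) tuple; a minimal sketch of that pattern, using the imports shown in the file above:

``` python
# Sketch: how one command tuple flows into the generator in the old module.
from autogpt.promptgenerator import PromptGenerator

generator = PromptGenerator()
label, name, args = ("Read file", "read_file", {"file": "<file>"})
generator.add_command(label, name, args)

# The command appears in the numbered command list of the final prompt.
print(generator.generate_prompt_string())
```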


@@ -1,8 +1,6 @@
""" A module for generating custom prompt strings."""
from __future__ import annotations
-import json
-from typing import Any
+from typing import Any, Callable, Dict, List, Optional
class PromptGenerator:
@@ -20,6 +18,10 @@ class PromptGenerator:
self.commands = []
self.resources = []
self.performance_evaluation = []
self.goals = []
self.command_registry = None
self.name = "Bob"
self.role = "AI"
self.response_format = {
"thoughts": {
"text": "thought",
@@ -40,7 +42,13 @@ class PromptGenerator:
"""
self.constraints.append(constraint)
-def add_command(self, command_label: str, command_name: str, args=None) -> None:
+def add_command(
+self,
+command_label: str,
+command_name: str,
+args=None,
+function: Optional[Callable] = None,
+) -> None:
"""
Add a command to the commands list with a label, name, and optional arguments.
@@ -49,6 +57,8 @@ class PromptGenerator:
command_name (str): The name of the command.
args (dict, optional): A dictionary containing argument names and their
values. Defaults to None.
function (callable, optional): A callable function to be called when
the command is executed. Defaults to None.
"""
if args is None:
args = {}
@@ -59,11 +69,12 @@ class PromptGenerator:
"label": command_label,
"name": command_name,
"args": command_args,
"function": function,
}
self.commands.append(command)
-def _generate_command_string(self, command: dict[str, Any]) -> str:
+def _generate_command_string(self, command: Dict[str, Any]) -> str:
"""
Generate a formatted string representation of a command.
@@ -96,7 +107,7 @@ class PromptGenerator:
"""
self.performance_evaluation.append(evaluation)
def _generate_numbered_list(self, items: list[Any], item_type="list") -> str:
def _generate_numbered_list(self, items: List[Any], item_type="list") -> str:
"""
Generate a numbered list from given items based on the item_type.
@@ -109,10 +120,16 @@ class PromptGenerator:
str: The formatted numbered list.
"""
if item_type == "command":
return "\n".join(
f"{i+1}. {self._generate_command_string(item)}"
for i, item in enumerate(items)
)
command_strings = []
if self.command_registry:
command_strings += [
str(item)
for item in self.command_registry.commands.values()
if item.enabled
]
# terminate command is added manually
command_strings += [self._generate_command_string(item) for item in items]
return "\n".join(f"{i+1}. {item}" for i, item in enumerate(command_strings))
else:
return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items))

autogpt/prompts/prompt.py (new file)

@@ -0,0 +1,142 @@
from colorama import Fore
from autogpt.config.ai_config import AIConfig
from autogpt.config.config import Config
from autogpt.llm import ApiManager
from autogpt.logs import logger
from autogpt.prompts.generator import PromptGenerator
from autogpt.setup import prompt_user
from autogpt.utils import clean_input
CFG = Config()
DEFAULT_TRIGGERING_PROMPT = (
"Determine which next command to use, and respond using the format specified above:"
)
def build_default_prompt_generator() -> PromptGenerator:
"""
This function constructs the default PromptGenerator, populated with various
constraints, commands, resources, and performance evaluations.
Returns:
PromptGenerator: The default prompt generator.
"""
# Initialize the PromptGenerator object
prompt_generator = PromptGenerator()
# Add constraints to the PromptGenerator object
prompt_generator.add_constraint(
"~4000 word limit for short term memory. Your short term memory is short, so"
" immediately save important information to files."
)
prompt_generator.add_constraint(
"If you are unsure how you previously did something or want to recall past"
" events, thinking about similar events will help you remember."
)
prompt_generator.add_constraint("No user assistance")
prompt_generator.add_constraint(
'Exclusively use the commands listed in double quotes e.g. "command name"'
)
# Define the command list
commands = [
("Task Complete (Shutdown)", "task_complete", {"reason": "<reason>"}),
]
# Add commands to the PromptGenerator object
for command_label, command_name, args in commands:
prompt_generator.add_command(command_label, command_name, args)
# Add resources to the PromptGenerator object
prompt_generator.add_resource(
"Internet access for searches and information gathering."
)
prompt_generator.add_resource("Long Term memory management.")
prompt_generator.add_resource(
"GPT-3.5 powered Agents for delegation of simple tasks."
)
prompt_generator.add_resource("File output.")
# Add performance evaluations to the PromptGenerator object
prompt_generator.add_performance_evaluation(
"Continuously review and analyze your actions to ensure you are performing to"
" the best of your abilities."
)
prompt_generator.add_performance_evaluation(
"Constructively self-criticize your big-picture behavior constantly."
)
prompt_generator.add_performance_evaluation(
"Reflect on past decisions and strategies to refine your approach."
)
prompt_generator.add_performance_evaluation(
"Every command has a cost, so be smart and efficient. Aim to complete tasks in"
" the least number of steps."
)
prompt_generator.add_performance_evaluation("Write all code to a file.")
return prompt_generator
def construct_main_ai_config() -> AIConfig:
"""Construct the prompt for the AI to respond to
Returns:
str: The prompt string
"""
config = AIConfig.load(CFG.ai_settings_file)
if CFG.skip_reprompt and config.ai_name:
logger.typewriter_log("Name :", Fore.GREEN, config.ai_name)
logger.typewriter_log("Role :", Fore.GREEN, config.ai_role)
logger.typewriter_log("Goals:", Fore.GREEN, f"{config.ai_goals}")
logger.typewriter_log(
"API Budget:",
Fore.GREEN,
"infinite" if config.api_budget <= 0 else f"${config.api_budget}",
)
elif config.ai_name:
logger.typewriter_log(
"Welcome back! ",
Fore.GREEN,
f"Would you like me to return to being {config.ai_name}?",
speak_text=True,
)
should_continue = clean_input(
f"""Continue with the last settings?
Name: {config.ai_name}
Role: {config.ai_role}
Goals: {config.ai_goals}
API Budget: {"infinite" if config.api_budget <= 0 else f"${config.api_budget}"}
Continue ({CFG.authorise_key}/{CFG.exit_key}): """
)
if should_continue.lower() == CFG.exit_key:
config = AIConfig()
if not config.ai_name:
config = prompt_user()
config.save(CFG.ai_settings_file)
# set the total api budget
api_manager = ApiManager()
api_manager.set_total_budget(config.api_budget)
# Agent Created, print message
logger.typewriter_log(
config.ai_name,
Fore.LIGHTBLUE_EX,
"has been created with the following details:",
speak_text=True,
)
# Print the ai config details
# Name
logger.typewriter_log("Name:", Fore.GREEN, config.ai_name, speak_text=False)
# Role
logger.typewriter_log("Role:", Fore.GREEN, config.ai_role, speak_text=False)
# Goals
logger.typewriter_log("Goals:", Fore.GREEN, "", speak_text=False)
for goal in config.ai_goals:
logger.typewriter_log("-", Fore.GREEN, goal, speak_text=False)
return config
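A sketch of the assumed call order for these helpers at startup (how the entry point wires them together is not shown in this diff):

``` python
# Assumed startup wiring; the exact call site lives outside this diff.
from autogpt.prompts.prompt import (
    DEFAULT_TRIGGERING_PROMPT,
    build_default_prompt_generator,
    construct_main_ai_config,
)

ai_config = construct_main_ai_config()           # load, confirm, or create settings
prompt_generator = build_default_prompt_generator()
system_prompt = ai_config.construct_full_prompt()  # assumed to assemble the system prompt
# Each reasoning cycle then sends DEFAULT_TRIGGERING_PROMPT to request a command.
```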


@@ -1,18 +1,26 @@
"""Set up the AI and its goals"""
import re
from colorama import Fore, Style
from autogpt import utils
from autogpt.config import Config
from autogpt.config.ai_config import AIConfig
from autogpt.llm import create_chat_completion
from autogpt.logs import logger
CFG = Config()
def prompt_user() -> AIConfig:
"""Prompt the user for input
Returns:
-AIConfig: The AIConfig object containing the user's input
+AIConfig: The AIConfig object tailored to the user's input
"""
ai_name = ""
ai_config = None
# Construct the prompt
logger.typewriter_log(
"Welcome to Auto-GPT! ",
@@ -21,6 +29,57 @@ def prompt_user() -> AIConfig:
speak_text=True,
)
# Get user desire
logger.typewriter_log(
"Create an AI-Assistant:",
Fore.GREEN,
"input '--manual' to enter manual mode.",
speak_text=True,
)
user_desire = utils.clean_input(
f"{Fore.LIGHTBLUE_EX}I want Auto-GPT to{Style.RESET_ALL}: "
)
if user_desire == "":
user_desire = "Write a wikipedia style article about the project: https://github.com/significant-gravitas/Auto-GPT" # Default prompt
# If user desire contains "--manual"
if "--manual" in user_desire:
logger.typewriter_log(
"Manual Mode Selected",
Fore.GREEN,
speak_text=True,
)
return generate_aiconfig_manual()
else:
try:
return generate_aiconfig_automatic(user_desire)
except Exception as e:
logger.typewriter_log(
"Unable to automatically generate AI Config based on user desire.",
Fore.RED,
"Falling back to manual mode.",
speak_text=True,
)
return generate_aiconfig_manual()
def generate_aiconfig_manual() -> AIConfig:
"""
Interactively create an AI configuration by prompting the user to provide the name, role, and goals of the AI.
This function guides the user through a series of prompts to collect the necessary information to create
an AIConfig object. The user will be asked to provide a name and role for the AI, as well as up to five
goals. If the user does not provide a value for any of the fields, default values will be used.
Returns:
AIConfig: An AIConfig object containing the user-defined or default AI name, role, and goals.
"""
# Manual Setup Intro
logger.typewriter_log(
"Create an AI-Assistant:",
Fore.GREEN,
@@ -74,4 +133,86 @@ def prompt_user() -> AIConfig:
"Develop and manage multiple businesses autonomously",
]
return AIConfig(ai_name, ai_role, ai_goals)
# Get API Budget from User
logger.typewriter_log(
"Enter your budget for API calls: ",
Fore.GREEN,
"For example: $1.50",
)
print("Enter nothing to let the AI run without monetary limit", flush=True)
api_budget_input = utils.clean_input(
f"{Fore.LIGHTBLUE_EX}Budget{Style.RESET_ALL}: $"
)
if api_budget_input == "":
api_budget = 0.0
else:
try:
api_budget = float(api_budget_input.replace("$", ""))
except ValueError:
logger.typewriter_log(
"Invalid budget input. Setting budget to unlimited.", Fore.RED
)
api_budget = 0.0
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
def generate_aiconfig_automatic(user_prompt) -> AIConfig:
"""Generates an AIConfig object from the given string.
Returns:
AIConfig: The AIConfig object tailored to the user's input
"""
system_prompt = """
Your task is to devise up to 5 highly effective goals and an appropriate role-based name (_GPT) for an autonomous agent, ensuring that the goals are optimally aligned with the successful completion of its assigned task.
The user will provide the task, you will provide only the output in the exact format specified below with no explanation or conversation.
Example input:
Help me with marketing my business
Example output:
Name: CMOGPT
Description: a professional digital marketer AI that assists Solopreneurs in growing their businesses by providing world-class expertise in solving marketing problems for SaaS, content products, agencies, and more.
Goals:
- Engage in effective problem-solving, prioritization, planning, and supporting execution to address your marketing needs as your virtual Chief Marketing Officer.
- Provide specific, actionable, and concise advice to help you make informed decisions without the use of platitudes or overly wordy explanations.
- Identify and prioritize quick wins and cost-effective campaigns that maximize results with minimal time and budget investment.
- Proactively take the lead in guiding you and offering suggestions when faced with unclear information or uncertainty to ensure your marketing strategy remains on track.
"""
# Call LLM with the string as user input
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": f"Task: '{user_prompt}'\nRespond only with the output in the exact format specified in the system prompt, with no explanation or conversation.\n",
},
]
output = create_chat_completion(messages, CFG.fast_llm_model)
# Debug LLM Output
logger.debug(f"AI Config Generator Raw Output: {output}")
# Parse the output
ai_name = re.search(r"Name(?:\s*):(?:\s*)(.*)", output, re.IGNORECASE).group(1)
ai_role = (
re.search(
r"Description(?:\s*):(?:\s*)(.*?)(?:(?:\n)|Goals)",
output,
re.IGNORECASE | re.DOTALL,
)
.group(1)
.strip()
)
ai_goals = re.findall(r"(?<=\n)-\s*(.*)", output)
api_budget = 0.0 # TODO: parse api budget using a regular expression
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
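A worked example of the parsing step, applying the same regexes to output shaped like the sample in the system prompt above:

``` python
# Worked example: the three regexes against a sample LLM response.
import re

output = (
    "Name: CMOGPT\n"
    "Description: a professional digital marketer AI\n"
    "Goals:\n"
    "- Identify and prioritize quick wins.\n"
    "- Proactively guide the marketing strategy.\n"
)

ai_name = re.search(r"Name(?:\s*):(?:\s*)(.*)", output, re.IGNORECASE).group(1)
ai_role = (
    re.search(
        r"Description(?:\s*):(?:\s*)(.*?)(?:(?:\n)|Goals)",
        output,
        re.IGNORECASE | re.DOTALL,
    )
    .group(1)
    .strip()
)
ai_goals = re.findall(r"(?<=\n)-\s*(.*)", output)

assert ai_name == "CMOGPT"
assert len(ai_goals) == 2  # one entry per "- ..." bullet line
```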


@@ -2,7 +2,7 @@
import abc
from threading import Lock
-from autogpt.config import AbstractSingleton
+from autogpt.singleton import AbstractSingleton
class VoiceBase(AbstractSingleton):


@@ -1,4 +1,4 @@
""" Brian speech module for autogpt """
import logging
import os
import requests
@@ -35,6 +35,9 @@ class BrianSpeech(VoiceBase):
os.remove("speech.mp3")
return True
else:
print("Request failed with status code:", response.status_code)
print("Response content:", response.content)
logging.error(
"Request failed with status code: %s, response content: %s",
response.status_code,
response.content,
)
return False


@@ -3,39 +3,44 @@ import threading
from threading import Semaphore
from autogpt.config import Config
from autogpt.speech.base import VoiceBase
from autogpt.speech.brian import BrianSpeech
from autogpt.speech.eleven_labs import ElevenLabsSpeech
from autogpt.speech.gtts import GTTSVoice
from autogpt.speech.macos_tts import MacOSTTS
-CFG = Config()
-DEFAULT_VOICE_ENGINE = GTTSVoice()
-VOICE_ENGINE = None
-if CFG.elevenlabs_api_key:
-VOICE_ENGINE = ElevenLabsSpeech()
-elif CFG.use_mac_os_tts == "True":
-VOICE_ENGINE = MacOSTTS()
-elif CFG.use_brian_tts == "True":
-VOICE_ENGINE = BrianSpeech()
-else:
-VOICE_ENGINE = GTTSVoice()
-QUEUE_SEMAPHORE = Semaphore(
+_QUEUE_SEMAPHORE = Semaphore(
1
) # The number of sounds to queue before blocking the main thread
def say_text(text: str, voice_index: int = 0) -> None:
"""Speak the given text using the given voice index"""
cfg = Config()
default_voice_engine, voice_engine = _get_voice_engine(cfg)
def speak() -> None:
-success = VOICE_ENGINE.say(text, voice_index)
+success = voice_engine.say(text, voice_index)
if not success:
-DEFAULT_VOICE_ENGINE.say(text)
+default_voice_engine.say(text)
-QUEUE_SEMAPHORE.release()
+_QUEUE_SEMAPHORE.release()
-QUEUE_SEMAPHORE.acquire(True)
+_QUEUE_SEMAPHORE.acquire(True)
thread = threading.Thread(target=speak)
thread.start()
def _get_voice_engine(config: Config) -> tuple[VoiceBase, VoiceBase]:
"""Get the voice engine to use for the given configuration"""
default_voice_engine = GTTSVoice()
if config.elevenlabs_api_key:
voice_engine = ElevenLabsSpeech()
elif config.use_mac_os_tts == "True":
voice_engine = MacOSTTS()
elif config.use_brian_tts == "True":
voice_engine = BrianSpeech()
else:
voice_engine = GTTSVoice()
return default_voice_engine, voice_engine
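The semaphore bounds how many utterances can be pending at once; a short sketch of the resulting behaviour, assuming `say_text` is re-exported from `autogpt.speech`:

``` python
from autogpt.speech import say_text  # assumed re-export of say_text

say_text("First message")   # acquires the semaphore, speaks on a worker thread
say_text("Second message")  # blocks until the first thread calls release()
```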


@@ -54,8 +54,8 @@ class Spinner:
def update_message(self, new_message, delay=0.1):
"""Update the spinner message
Args:
-new_message (str): New message to display
-delay: Delay in seconds before updating the message
+new_message (str): New message to display.
+delay (float): The delay in seconds between each spinner update.
"""
time.sleep(delay)
sys.stdout.write(


@@ -0,0 +1,103 @@
import functools
from typing import Any, Callable
from urllib.parse import urljoin, urlparse
from requests.compat import urljoin
def validate_url(func: Callable[..., Any]) -> Any:
"""The method decorator validate_url is used to validate urls for any command that requires
a url as an arugment"""
@functools.wraps(func)
def wrapper(url: str, *args, **kwargs) -> Any:
"""Check if the URL is valid using a basic check, urllib check, and local file check
Args:
url (str): The URL to check
Returns:
the result of the wrapped function
Raises:
ValueError if the url fails any of the validation tests
"""
# Most basic check if the URL is valid:
if not url.startswith("http://") and not url.startswith("https://"):
raise ValueError("Invalid URL format")
if not is_valid_url(url):
raise ValueError("Missing Scheme or Network location")
# Restrict access to local files
if check_local_file_access(url):
raise ValueError("Access to local files is restricted")
return func(sanitize_url(url), *args, **kwargs)
return wrapper
def is_valid_url(url: str) -> bool:
"""Check if the URL is valid
Args:
url (str): The URL to check
Returns:
bool: True if the URL is valid, False otherwise
"""
try:
result = urlparse(url)
return all([result.scheme, result.netloc])
except ValueError:
return False
def sanitize_url(url: str) -> str:
"""Sanitize the URL
Args:
url (str): The URL to sanitize
Returns:
str: The sanitized URL
"""
parsed_url = urlparse(url)
reconstructed_url = f"{parsed_url.path}{parsed_url.params}?{parsed_url.query}"
return urljoin(url, reconstructed_url)
def check_local_file_access(url: str) -> bool:
"""Check if the URL is a local file
Args:
url (str): The URL to check
Returns:
bool: True if the URL is a local file, False otherwise
"""
local_prefixes = [
"file:///",
"file://localhost/",
"file://localhost",
"http://localhost",
"http://localhost/",
"https://localhost",
"https://localhost/",
"http://2130706433",
"http://2130706433/",
"https://2130706433",
"https://2130706433/",
"http://127.0.0.1/",
"http://127.0.0.1",
"https://127.0.0.1/",
"https://127.0.0.1",
"https://0.0.0.0/",
"https://0.0.0.0",
"http://0.0.0.0/",
"http://0.0.0.0",
"http://0000",
"http://0000/",
"https://0000",
"https://0000/",
]
return any(url.startswith(prefix) for prefix in local_prefixes)
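A hypothetical command wrapped with the decorator; the call falls through to the function only after the checks above pass, with the sanitized URL substituted in:

``` python
import requests

@validate_url
def get_page(url: str) -> str:
    """Hypothetical command: fetch a page after URL validation."""
    return requests.get(url, timeout=10).text

get_page("https://example.com/a?b=1")  # ok: receives the sanitized URL
# get_page("example.com")              # ValueError: Invalid URL format
# get_page("http://localhost/secret")  # ValueError: access to local files is restricted
```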


@@ -3,12 +3,64 @@ import os
import requests
import yaml
from colorama import Fore
-from git import Repo
+from git.repo import Repo
# Use readline if available (for clean_input)
try:
import readline
except:
pass
from autogpt.config import Config
def clean_input(prompt: str = ""):
def send_chat_message_to_user(report: str):
cfg = Config()
if not cfg.chat_messages_enabled:
return
for plugin in cfg.plugins:
if not hasattr(plugin, "can_handle_report"):
continue
if not plugin.can_handle_report():
continue
plugin.report(report)
def clean_input(prompt: str = "", talk=False):
try:
-return input(prompt)
cfg = Config()
if cfg.chat_messages_enabled:
for plugin in cfg.plugins:
if not hasattr(plugin, "can_handle_user_input"):
continue
if not plugin.can_handle_user_input(user_input=prompt):
continue
plugin_response = plugin.user_input(user_input=prompt)
if not plugin_response:
continue
if plugin_response.lower() in [
"yes",
"yeah",
"y",
"ok",
"okay",
"sure",
"alright",
]:
return cfg.authorise_key
elif plugin_response.lower() in [
"no",
"nope",
"n",
"negative",
]:
return cfg.exit_key
return plugin_response
# ask for input, default when just pressing Enter is y
print("Asking user via keyboard...")
answer = input(prompt)
return answer
except KeyboardInterrupt:
print("You interrupted Auto-GPT")
print("Quitting...")
@@ -43,15 +95,17 @@ def readable_file_size(size, decimal_places=2):
return f"{size:.{decimal_places}f} {unit}"
-def get_bulletin_from_web() -> str:
+def get_bulletin_from_web():
try:
response = requests.get(
"https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/master/BULLETIN.md"
)
if response.status_code == 200:
return response.text
-except:
-return ""
+except requests.exceptions.RequestException:
+pass
+return ""
def get_current_git_branch() -> str:


@@ -1,48 +0,0 @@
from __future__ import annotations
import os
from pathlib import Path
from autogpt.config import Config
CFG = Config()
# Set a dedicated folder for file I/O
WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
# Create the directory if it doesn't exist
if not os.path.exists(WORKSPACE_PATH):
os.makedirs(WORKSPACE_PATH)
def path_in_workspace(relative_path: str | Path) -> Path:
"""Get full path for item in workspace
Parameters:
relative_path (str | Path): Path to translate into the workspace
Returns:
Path: Absolute path for the given path in the workspace
"""
return safe_path_join(WORKSPACE_PATH, relative_path)
def safe_path_join(base: Path, *paths: str | Path) -> Path:
"""Join one or more path components, asserting the resulting path is within the workspace.
Args:
base (Path): The base path
*paths (str): The paths to join to the base path
Returns:
Path: The joined path
"""
base = base.resolve()
joined_path = base.joinpath(*paths).resolve()
if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
raise ValueError(
f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
)
return joined_path


@@ -0,0 +1,5 @@
from autogpt.workspace.workspace import Workspace
__all__ = [
"Workspace",
]


@@ -0,0 +1,137 @@
"""
=========
Workspace
=========
The workspace is a directory containing configuration and working files for an AutoGPT
agent.
"""
from __future__ import annotations
from pathlib import Path
from autogpt.logs import logger
class Workspace:
"""A class that represents a workspace for an AutoGPT agent."""
NULL_BYTES = ["\0", "\000", "\x00", r"\z", "\u0000", "%00"]
def __init__(self, workspace_root: str | Path, restrict_to_workspace: bool):
self._root = self._sanitize_path(workspace_root)
self._restrict_to_workspace = restrict_to_workspace
@property
def root(self) -> Path:
"""The root directory of the workspace."""
return self._root
@property
def restrict_to_workspace(self):
"""Whether to restrict generated paths to the workspace."""
return self._restrict_to_workspace
@classmethod
def make_workspace(cls, workspace_directory: str | Path, *args, **kwargs) -> Path:
"""Create a workspace directory and return the path to it.
Parameters
----------
workspace_directory
The path to the workspace directory.
Returns
-------
Path
The path to the workspace directory.
"""
# TODO: have this make the env file and ai settings file in the directory.
workspace_directory = cls._sanitize_path(workspace_directory)
workspace_directory.mkdir(exist_ok=True, parents=True)
return workspace_directory
def get_path(self, relative_path: str | Path) -> Path:
"""Get the full path for an item in the workspace.
Parameters
----------
relative_path
The relative path to resolve in the workspace.
Returns
-------
Path
The resolved path relative to the workspace.
"""
return self._sanitize_path(
relative_path,
root=self.root,
restrict_to_root=self.restrict_to_workspace,
)
@staticmethod
def _sanitize_path(
relative_path: str | Path,
root: str | Path = None,
restrict_to_root: bool = True,
) -> Path:
"""Resolve the relative path within the given root if possible.
Parameters
----------
relative_path
The relative path to resolve.
root
The root path to resolve the relative path within.
restrict_to_root
Whether to restrict the path to the root.
Returns
-------
Path
The resolved path.
Raises
------
ValueError
If the path is absolute and a root is provided.
ValueError
If the path is outside the root and the root is restricted.
"""
# Posix systems disallow null bytes in paths. Windows is agnostic about it.
# Do an explicit check here for all sorts of null byte representations.
for null_byte in Workspace.NULL_BYTES:
if null_byte in str(relative_path) or null_byte in str(root):
raise ValueError("embedded null byte")
if root is None:
return Path(relative_path).resolve()
logger.debug(f"Resolving path '{relative_path}' in workspace '{root}'")
root, relative_path = Path(root).resolve(), Path(relative_path)
logger.debug(f"Resolved root as '{root}'")
if relative_path.is_absolute():
raise ValueError(
f"Attempted to access absolute path '{relative_path}' in workspace '{root}'."
)
full_path = root.joinpath(relative_path).resolve()
logger.debug(f"Joined paths as '{full_path}'")
if restrict_to_root and not full_path.is_relative_to(root):
raise ValueError(
f"Attempted to access path '{full_path}' outside of workspace '{root}'."
)
return full_path
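A short sketch of the resulting behaviour (paths hypothetical):

``` python
workspace = Workspace("/tmp/agent_workspace", restrict_to_workspace=True)

print(workspace.get_path("notes/todo.txt"))  # /tmp/agent_workspace/notes/todo.txt
workspace.get_path("../escape.txt")  # ValueError: path outside the workspace
workspace.get_path("/etc/passwd")    # ValueError: absolute paths are rejected
```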


@@ -3,7 +3,7 @@ import subprocess
import sys
-def benchmark_entrepeneur_gpt_with_difficult_user():
+def benchmark_entrepreneur_gpt_with_difficult_user():
# Test case to check if the write_file command can successfully write 'Hello World' to a file
# named 'hello_world.txt'.
@@ -102,4 +102,4 @@ Not what I need."""
# Run the test case.
if __name__ == "__main__":
-benchmark_entrepeneur_gpt_with_difficult_user()
+benchmark_entrepreneur_gpt_with_difficult_user()

codecov.yml (new file)

@@ -0,0 +1,18 @@
coverage:
status:
project:
default:
target: auto
threshold: 1%
informational: true
patch:
default:
target: 80%
## Please add this section once you've separated your coverage uploads for unit and integration tests
#
# flags:
# unit-tests:
# carryforward: true
# integration-tests:
# carryforward: true


@@ -9,9 +9,11 @@ services:
build: ./
env_file:
- .env
+environment:
+MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
+REDIS_HOST: ${REDIS_HOST:-redis}
volumes:
-- "./autogpt:/app"
-- ".env:/app/.env"
+- ./:/app
profiles: ["exclude-from-up"]
redis:

docs/code-of-conduct.md (symbolic link)

@@ -0,0 +1 @@
../CODE_OF_CONDUCT.md


@@ -0,0 +1,59 @@
# 🖼 Image Generation configuration
| Config variable | Values | |
| ---------------- | ------------------------------- | -------------------- |
| `IMAGE_PROVIDER` | `dalle` `huggingface` `sdwebui` | **default: `dalle`** |
## DALL-E
In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
``` ini
# IMAGE_PROVIDER=dalle # this is the default
```
Further optional configuration:
| Config variable | Values | |
| ---------------- | ------------------ | -------------- |
| `IMAGE_SIZE` | `256` `512` `1024` | default: `256` |
## Hugging Face
To use text-to-image models from Hugging Face, you need a Hugging Face API token.
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
Once you have an API token, uncomment and adjust these variables in your `.env`:
``` ini
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN=your-huggingface-api-token
```
Further optional configuration:
| Config variable | Values | |
| ------------------------- | ---------------------- | ---------------------------------------- |
| `HUGGINGFACE_IMAGE_MODEL` | see [available models] | default: `CompVis/stable-diffusion-v1-4` |
[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
``` ini
IMAGE_PROVIDER=sdwebui
```
!!! note
Make sure you are running WebUI with `--api` enabled.
Further optional configuration:
| Config variable | Values | |
| --------------- | ----------------------- | -------------------------------- |
| `SD_WEBUI_URL` | URL to your WebUI | default: `http://127.0.0.1:7860` |
| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
## Selenium
``` shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```

Some files were not shown because too many files have changed in this diff.